Neuroradiology: AI (Artificial Intelligence) Imaging Pearls | CTisus
Imaging Pearls ❯ Neuroradiology ❯ AI (Artificial Intelligence)


  • Purpose: To assess an FDA-approved and CE-certified deep learning (DL) software application compared to the performance of human radiologists in detecting intracranial hemorrhages (ICH).
    Methods: Within a 20-week trial from January to May 2020, 2210 adult non-contrast head CT scans were performed in a single center and automatically analyzed by an artificial intelligence (AI) solution with workflow integration. After excluding 22 scans due to severe motion artifacts, images were retrospectively assessed for the presence of ICHs by a second-year resident and a certified radiologist under simulated time pressure. Disagreements were resolved by a subspecialized neuroradiologist serving as the reference standard. We calculated interrater agreement and diagnostic performance parameters, including the Breslow–Day and Cochran–Mantel–Haenszel tests.
    Results: An ICH was present in 214 out of 2188 scans. The interrater agreement between the resident and the certified radiologist was very high (κ = 0.89) and even higher (κ = 0.93) between the resident and the reference standard. The software delivered 64 false-positive and 68 false-negative results, giving an overall sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of 68.2%, 96.8%, 69.5%, 96.6%, and 94.0%, respectively. Corresponding values for the resident were 94.9%, 99.2%, 93.1%, 99.4%, and 98.8%. The accuracy of the DL application was inferior (p < 0.001) to that of both the resident and the certified neuroradiologist.
    Conclusion: A resident under time pressure outperformed an FDA-approved DL program in detecting ICH in CT scans. Our results underline the importance of thoughtful workflow integration and post-approval validation of AI applications in various clinical environments.  
    FDA‐approved deep learning software application versus radiologists with different levels of expertise: detection of intracranial hemorrhage in a retrospective single‐center study  
    Thomas Kau et al.
    Neuroradiology (2022) 64:981–990 
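The performance figures quoted in the abstract follow directly from the reported counts (2188 scans, 214 with ICH, 64 false positives, 68 false negatives). A quick sanity check in Python, using only the numbers in the abstract:

```python
# Recompute the software's diagnostic performance from the abstract's counts:
# 2188 scans, 214 with ICH, 64 false positives, 68 false negatives.
n_scans, n_pos = 2188, 214
fp, fn = 64, 68
tp = n_pos - fn            # true positives: 146
tn = n_scans - n_pos - fp  # true negatives: 1910

sensitivity = tp / (tp + fn)    # 0.682
specificity = tn / (tn + fp)    # 0.968
ppv = tp / (tp + fp)            # 0.695
npv = tn / (tn + fn)            # 0.966
accuracy = (tp + tn) / n_scans  # 0.940

print(f"Se {sensitivity:.1%}  Sp {specificity:.1%}  PPV {ppv:.1%}  "
      f"NPV {npv:.1%}  Acc {accuracy:.1%}")
```

The recomputed values match the 68.2% / 96.8% / 69.5% / 96.6% / 94.0% reported in the paper.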
  • “Individual AI algorithms have been developed to support clinicians in the identification and prioritization of cases suspected to be ICHs. So far, only a few vendors have received FDA approval for their solutions. The Aidoc software has been reported to perform with accuracy levels of up to 98%, notably with even higher specificity than sensitivity [20]. Nonetheless, the generalization of different datasets and clinical translation are known challenges restraining convolutional neural networks (CNN).”
    FDA‐approved deep learning software application versus radiologists with different levels of expertise: detection of intracranial hemorrhage in a retrospective single‐center study  
    Thomas Kau et al.
    Neuroradiology (2022) 64:981–990 
  • “The objective of our study was to assess the Aidoc software in a diverse clinical setting compared to the performance of human radiologists under simulated time pressure. Specifically, we investigated whether the diagnostic accuracy of this workflow-integrated AI solution was equivalent to that of a second-year resident.”
    FDA‐approved deep learning software application versus radiologists with different levels of expertise: detection of intracranial hemorrhage in a retrospective single‐center study  
    Thomas Kau et al.
    Neuroradiology (2022) 64:981–990 
  • “In conclusion, the radiological assessment detected ICHs in a total of 214 examinations (9.8%). Measured against the reference standard, the initial automatic DL software analysis delivered 64 false-positive and 68 false-negative results. This results in overall sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of 68.2%, 96.8%, 69.5%, 96.6%, and 94.0%, respectively, for the software-based detection of ICH. In contrast, the resident achieved respective accuracy values of 94.9%, 99.2%, 93.1%, 99.4%, and 98.8%; the certified radiologist achieved values of 95.8%, 99.7%, 97.2%, 99.5%, and 99.3%, respectively (Table 1).”
    FDA‐approved deep learning software application versus radiologists with different levels of expertise: detection of intracranial hemorrhage in a retrospective single‐center study  
    Thomas Kau et al.
    Neuroradiology (2022) 64:981–990 
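The interrater-agreement values quoted in the abstract are Cohen's κ. As a refresher, a minimal implementation is sketched below; the 2×2 agreement counts passed in are hypothetical (the paper does not publish the full agreement table) and were chosen only to land near the reported κ of 0.89:

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# po is observed agreement, pe is agreement expected by chance
# given each rater's marginal positive rate.
def cohens_kappa(both_pos, a_only, b_only, both_neg):
    n = both_pos + a_only + b_only + both_neg
    po = (both_pos + both_neg) / n
    a_pos = (both_pos + a_only) / n   # rater A's positive rate
    b_pos = (both_pos + b_only) / n   # rater B's positive rate
    pe = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 agreement counts (not reported in the paper),
# chosen to land near the published kappa of 0.89:
print(round(cohens_kappa(190, 20, 24, 1954), 2))  # 0.89
```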
  • “The software application detected five ICHs not spotted by the resident. Discrepancies between the resident and the neuroradiologist were caused in nine cases by beam hardening artifacts; in four cases due to discordant assessment of subdural bleeding; in three cases related to subtle subarachnoid hemorrhage (SAH); in three cases due to dural thickening after osteoplastic craniotomy; and relating to the following conditions in one case each: intracerebral hematoma, vascular malformation, brain tumor, dense venous sinus, and ischemic infarct with possible hemorrhagic transformation. Discrepancies between the certified radiologist and the neuroradiologist occurred in six cases of SDH; in three cases of subtle tumor hemorrhage or calcification, respectively; in two cases of dense sinus or tentorium; and in one case each related to beam hardening, tiny cortical bleeding, and calcifications of the falx or brain parenchyma.”
    FDA‐approved deep learning software application versus radiologists with different levels of expertise: detection of intracranial hemorrhage in a retrospective single‐center study  
    Thomas Kau et al.
    Neuroradiology (2022) 64:981–990 
  • “Ultimately, the combination of man and machine will most likely achieve the highest diagnostic accuracy. Our results support the expectation that well-integrated algorithms should be further improved to assist radiologists, especially in high-output situations and during on-call hours. O’Neill et al. recently reported that AI-assisted reprioritization of the reading worklist was beneficial in terms of turnaround time, especially for examinations ordered as routine. It remains to be seen which frequent tasks will be integrated into neuroradiological solutions over time, and to what extent rare differential diagnoses may move into the focus of development.”
    FDA‐approved deep learning software application versus radiologists with different levels of expertise: detection of intracranial hemorrhage in a retrospective single‐center study  
    Thomas Kau et al.
    Neuroradiology (2022) 64:981–990 
  • “Our study adds to the body of evidence required for the implementation of AI solutions in real-world scenarios. We assessed an FDA-approved DL software application for ICH detection in a routine setting compared to the performance of human radiologists under time-constrained study conditions. A second-year resident outperformed the AI tool in terms of both sensitivity and specificity. Since most erroneous alerts can be resolved by experienced radiologists, the software holds promise for prioritizing cranial CT exams. However, due to a notable rate of unflagged ICH scans, we doubt generalizability and recommend this AI solution be improved. Our results underline the need for external post-approval validation in various clinical environments. They warrant further research with a focus on the combination of human and artificial intelligence for an accurate and timely diagnosis of ICH.”
    FDA‐approved deep learning software application versus radiologists with different levels of expertise: detection of intracranial hemorrhage in a retrospective single‐center study  
    Thomas Kau et al.
    Neuroradiology (2022) 64:981–990 
  • Aidoc Model Development

    “A proprietary convolutional neural network architecture was used. The training dataset included approximately 50,000 CT studies collected from 9 different sites. In total, data was derived from 17 different scanner models. CT data slice thickness (z-axis) ranged from 0.5 to 5 mm. Data from all anatomic planes was used, when available (axial, sagittal, coronal). Only soft-tissue kernel images were used. Ground truth labeling structure varied depending on hemorrhage type and size, and included both weak and strong labeling schema. Label types included study-level classification for diffuse SAH, slice-level bounding boxes around indistinct extra and intra-axial hemorrhage foci, and pixel-level semantic segmentation of well-defined intraparenchymal hemorrhage.”
    The Utility of Deep Learning: Evaluation of a Convolutional Neural Network for Detection of Intracranial Bleeds on Non-Contrast Head Computed Tomography Studies
    Ojeda P, Zawaideh M et al.
    Medical Imaging 2019: Image Processing, edited by Elsa D. Angelini, Bennett A. Landman, Proc. of SPIE Vol. 10949, 109493J
  • “While rapid detection of intracranial hemorrhage (ICH) on computed tomography (CT) is a critical step in assessing patients with acute neurological symptoms in the emergency setting, prioritizing scans for radiologic interpretation by the acuity of imaging findings remains a challenge and can lead to delays in diagnosis at centers with heavy imaging volumes and limited staff resources. Deep learning has shown promise as a technique in aiding physicians in performing this task accurately and expeditiously and may be especially useful in a resource-constrained context.”
    The Utility of Deep Learning: Evaluation of a Convolutional Neural Network for Detection of Intracranial Bleeds on Non-Contrast Head Computed Tomography Studies
    Ojeda P, Zawaideh M et al.
    Medical Imaging 2019: Image Processing, edited by Elsa D. Angelini, Bennett A. Landman, Proc. of SPIE Vol. 10949, 109493J
  • “Our group evaluated the performance of a convolutional neural network (CNN) model developed by Aidoc (Tel Aviv, Israel). This model is one of the first artificial intelligence devices to receive FDA clearance for enabling radiologists to triage patients after scan acquisition. The algorithm was tested on 7112 non-contrast head CTs acquired during 2016–2017 from two large urban academic and trauma centers. Ground truth labels were assigned to the test data per PACS query and prior reports by expert neuroradiologists. No scans from these two hospitals had been used during the algorithm training process and Aidoc staff were at all times blinded to the ground truth labels. Model output was reviewed by three radiologists and manual error analysis performed on discordant findings. Specificity was 99%, sensitivity was 95%, and overall accuracy was 98%.”
    The Utility of Deep Learning: Evaluation of a Convolutional Neural Network for Detection of Intracranial Bleeds on Non-Contrast Head Computed Tomography Studies
    Ojeda P, Zawaideh M et al.
    Medical Imaging 2019: Image Processing, edited by Elsa D. Angelini, Bennett A. Landman, Proc. of SPIE Vol. 10949, 109493J
  • “In summary, we report promising results of a scalable and clinically pragmatic deep learning algorithm tested on a large set of real-world data from high-volume medical centers that requires no human intervention to accurately characterize the presence or absence of ICH. This model holds promise for assisting clinicians in the identification and prioritization of exams suspicious for ICH, facilitating both the diagnosis and treatment of an emergent and life-threatening condition.”
    The Utility of Deep Learning: Evaluation of a Convolutional Neural Network for Detection of Intracranial Bleeds on Non-Contrast Head Computed Tomography Studies
    Ojeda P, Zawaideh M et al.
    Medical Imaging 2019: Image Processing, edited by Elsa D. Angelini, Bennett A. Landman, Proc. of SPIE Vol. 10949, 109493J
  • Purpose: The artificial intelligence (AI) widget was designed to identify intracranial hemorrhage (ICH) and alert the radiologist to prioritize reading the critical case over others on the list. The purpose of this study was to determine if there was a significant difference in report turnaround time (TAT) of after-hours, non-contrast head CT (NCCT) performed before and after implementation of the AI widget. TAT data was stratified to include positive and negative cases from both ED and inpatient locations.
    Conclusions: TAT was reduced in the month following AI implementation among all categories of head CT. Overall, there was a 24.5% reduction in TAT and a slightly greater reduction of 37.8% in all ICH-positive cases, suggesting the positive cases were prioritized. This effect extended to ED and inpatient studies. The reduction in overall TAT may reflect the disproportionate influence positive cases had on overall TAT, or that the AI widget also flagged some cases which were not positive, resulting in the prioritization of all NCCT over other exams. There were differences in the overnight radiologists working between the two months we examined, and the study was initiated soon after our hospital began staffing 2 radiologists overnight. Further investigation will be required to understand these findings.
    Comparison of After-Hours Head CT Report Turnaround Time Before and After Implementation of an Artificial Intelligence Widget Developed to Detect Intracranial Hemorrhage.
    Brady Laughlin et al.
    American College of Radiology (2018)
  • OBJECTIVE: To develop and apply a neural network segmentation model (the HeadXNet model) capable of generating precise voxel-by-voxel predictions of intracranial aneurysms on head computed tomographic angiography (CTA) imaging to augment clinicians’ intracranial aneurysm diagnostic performance.
    CONCLUSIONS AND RELEVANCE: The deep learning model developed successfully detected clinically significant intracranial aneurysms on CTA. This suggests that integration of an artificial intelligence–assisted diagnostic model may augment clinician performance with dependable and accurate predictions and thereby optimize patient care.
    Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model
    Allison Park et al.
    JAMA Network Open. 2019;2(6):e195600. doi:10.1001
  • RESULTS: The data set contained 818 examinations from 662 unique patients, with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms. The 8 clinicians reading the test set ranged in experience from 2 to 12 years. Augmenting clinicians with artificial intelligence–produced segmentation predictions resulted in clinicians achieving statistically significant improvements in sensitivity, accuracy, and interrater agreement when compared with no augmentation. The clinicians’ mean sensitivity increased by 0.059 (95% CI, 0.028-0.091; adjusted P = .01), mean accuracy increased by 0.038 (95% CI, 0.014-0.062; adjusted P = .02), and mean interrater agreement (Fleiss κ) increased by 0.060, from 0.799 to 0.859 (adjusted P = .05). There was no statistically significant change in mean specificity (0.016; 95% CI, −0.010 to 0.041; adjusted P = .16) and time to diagnosis (5.71 seconds; 95% CI, −7.22 to 18.63 seconds; adjusted P = .19).
    Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model
    Allison Park et al.
    JAMA Network Open. 2019;2(6):e195600. doi:10.1001
  • “A deep learning model was developed to automatically detect clinically significant intracranial aneurysms on CTA. We found that the augmentation significantly improved clinicians’ sensitivity, accuracy, and interrater reliability. Future work should investigate the performance of this model prospectively and in application of data from other institutions and hospitals.”
    Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model
    Allison Park et al.
    JAMA Network Open. 2019;2(6):e195600. doi:10.1001
  • Purpose: To develop and validate a deep learning algorithm that predicts the final diagnosis of Alzheimer disease (AD), mild cognitive impairment, or neither at fluorine 18 (18F) fluorodeoxyglucose (FDG) PET of the brain and compare its performance to that of radiologic readers.
    Materials and Methods: Prospective 18F-FDG PET brain images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (2109 imaging studies from 2005 to 2017, 1002 patients) and retrospective independent test set (40 imaging studies from 2006 to 2016, 40 patients) were collected. Final clinical diagnosis at follow-up was recorded. Convolutional neural network of InceptionV3 architecture was trained on 90% of ADNI data set and tested on the remaining 10%, as well as the independent test set, with performance compared to radiologic readers. Model was analyzed with sensitivity, specificity, receiver operating characteristic (ROC), saliency map, and t-distributed stochastic neighbor embedding.
    Results: The algorithm achieved area under the ROC curve of 0.98 (95% confidence interval: 0.94, 1.00) when evaluated on predicting the final clinical diagnosis of AD in the independent test set (82% specificity at 100% sensitivity), an average of 75.8 months prior to the final diagnosis, which in ROC space outperformed reader performance (57% [four of seven] sensitivity, 91% [30 of 33] specificity; P < .05). Saliency map demonstrated attention to known areas of interest but with focus on the entire brain.
    Conclusion: By using fluorine 18 fluorodeoxyglucose PET of the brain, a deep learning algorithm developed for early prediction of Alzheimer disease achieved 82% specificity at 100% sensitivity, an average of 75.8 months prior to the final diagnosis.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9 
    https://doi.org/10.1148/radiol.2018180958
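The reported operating point (82% specificity at 100% sensitivity) corresponds to thresholding the model's output at the lowest score assigned to any true positive, then measuring how many negatives fall below that threshold. A minimal sketch; the scores and labels below are made up for illustration, not from the study:

```python
# Specificity at the operating point where sensitivity is held at 100%:
# the threshold is the lowest score among true positives, so every
# positive is flagged, and specificity is the fraction of negatives
# that still fall below the threshold.
def spec_at_full_sens(scores, labels):
    thr = min(s for s, y in zip(scores, labels) if y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < thr)
    n_neg = sum(1 for y in labels if y == 0)
    return tn / n_neg

# Hypothetical model scores (1 = AD, 0 = not AD):
scores = [0.95, 0.90, 0.92, 0.40, 0.30, 0.20, 0.10, 0.88]
labels = [1,    1,    0,    0,    0,    0,    0,    1]
print(spec_at_full_sens(scores, labels))  # 0.8
```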
  • Purpose: To develop and validate a deep learning algorithm that predicts the final diagnosis of Alzheimer disease (AD), mild cognitive impairment, or neither at fluorine 18 (18F) fluorodeoxyglucose (FDG) PET of the brain and compare its performance to that of radiologic readers.
    Conclusion: By using fluorine 18 fluorodeoxyglucose PET of the brain, a deep learning algorithm developed for early prediction of Alzheimer disease achieved 82% specificity at 100% sensitivity, an average of 75.8 months prior to the final diagnosis.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • In this study, we aimed to evaluate whether a deep learning algorithm could be trained to predict the final clinical diagnoses in patients who underwent 18F-FDG PET of the brain and, once trained, how the deep learning algorithm compares with the current standard clinical reading methods in differentiation of patients with final diagnoses of AD, MCI, or no evidence of dementia. We hypothesized that the deep learning algorithm could detect features or patterns that are not evident on standard clinical review of images and thereby improve the final diagnostic classification of individuals.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958

  • Our study had several limitations. First, our independent test data were relatively small (n = 40) and were not collected as part of a clinical trial. Most notably, this was a highly selected cohort in that all patients must have been referred to the memory clinic and the neurologist must have decided that a PET study of the brain would be useful in clinical management. This effectively excluded most non-AD neurodegenerative cases and other neurologic disorders such as stroke that could affect memory function. Arguably, such a cohort of patients would be the most relevant group to test the deep learning algorithm, but the algorithm’s performance on a more general patient population remains untested and unproven, hence the pilot nature of this study.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • Second, the deep learning algorithm’s robustness is inherently limited by the clinical distribution of the training set from ADNI. The algorithm achieved strong performance on a small independent test set, where the population substantially differed from the ADNI test set; however, its performance and robustness cannot yet be guaranteed on prospective, unselected, and real-life scenario patient cohorts. Further validation with larger and prospective external test set must be performed before actual clinical use. Furthermore, this training set from ADNI did not include non-AD neurodegenerative cases, limiting the utility of the algorithm in such patient population.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • Third, the deep learning algorithm did not yield a human interpretable imaging biomarker despite visualization with saliency map, which highlights the inherent black-box limitation of deep learning algorithms. The algorithm instead made predictions based on holistic features of the imaging study, distinct from the human expert approaches. Fourth, MCI and non-AD/MCI were inherently unstable diagnoses in that their accuracy is dependent on the length of follow-up. For example, some of the MCI patients, if followed up for long enough time, may have eventually progressed to AD.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • Overall, our study demonstrates that a deep learning algorithm can predict the final diagnosis of AD from 18F-FDG PET imaging studies of the brain with high accuracy and robustness across external test data. Furthermore, this study proposes a working deep learning approach and a set of convolutional neural network hyperparameters, validated on a public dataset, that can be the groundwork for further model improvement. With further large-scale external validation on multi-institutional data and model calibration, the algorithm may be integrated into clinical workflow and serve as an important decision support tool to aid radiology readers and clinicians with early prediction of AD from 18F-FDG PET imaging studies.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • Brain Tumors: Pathology
  • “We here demonstrate that DNA methylation-based CNS tumour classification using a comprehensive machine-learning approach is a valuable asset for clinical decision-making. In particular, the high level of standardization has great promise to reduce the substantial inter-observer variability observed in current CNS tumour diagnostics.”
    DNA methylation-based classification of central nervous system tumours
    Pfister SM et al.
    Nature 2018 (in press)
  • “We here demonstrate that DNA methylation-based CNS tumour classification using a comprehensive machine-learning approach is a valuable asset for clinical decision-making. In particular, the high level of standardization has great promise to reduce the substantial inter-observer variability observed in current CNS tumour diagnostics. Furthermore, in contrast to traditional pathology, where there is a pressure to assign all tumours to a described entity even for atypical or challenging cases, the objective measure that we provide here allows for ‘no match’ to a defined class.”
    DNA methylation-based classification of central nervous system tumours
    Pfister SM et al.
    Nature 2018 (in press)
  • “We have shown that radiological scores can be predicted to an excellent standard using only the disc-specific assessments as a reference set. The proposed method is quite general, and although we have implemented it here for sagittal T2 scans, it could easily be applied to T1 scans or axial scans, and for radiological features not studied here or indeed to any medical task where label/grading might be available only for a small region or a specific anatomy of an image. One benefit of automated reading is to produce a numerical signal score that would provide a scale of degeneration and so avoid an arbitrary categorization into artificial grades.”
    Automation of reading of radiological features from magnetic resonance images (MRIs) of the lumbar spine without human intervention is comparable with an expert radiologist
    Jamaludin A et al.
    Eur Spine J 2018; DOI 10.1007/s00586-017-4956-3
  • “Automation of radiological grading is now on par with human performance. The system can be beneficial in aiding clinical diagnoses in terms of objectivity of gradings and the speed of analysis. It can also draw the attention of a radiologist to regions of degradation. This objectivity and speed is an important stepping stone in the investigation of the relationship between MRIs and clinical diagnoses of back pain in large cohorts.”
    Automation of reading of radiological features from magnetic resonance images (MRIs) of the lumbar spine without human intervention is comparable with an expert radiologist
    Jamaludin A et al.
    Eur Spine J 2018; DOI 10.1007/s00586-017-4956-3
  • The process in a flow chart
  • Accurate identification and localization of abnormalities from radiology images play an integral part in clinical diagnosis and treatment planning. Building a highly accurate prediction model for these tasks usually requires a large number of images manually annotated with labels and finding sites of abnormalities. In reality, however, such annotated data are expensive to acquire, especially the ones with location annotations. We need methods that can work well with only a small amount of location annotations. To address this challenge, we present a unified approach that simultaneously performs disease identification and localization through the same underlying model for all images. We demonstrate that our approach can effectively leverage both class information as well as limited location annotation, and significantly outperforms the comparative reference baseline in both classification and localization tasks.
    Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al
    arXiv, March 2018 (preprint)
  • “We propose a unified model that jointly models disease identification and localization with limited localization annotation data. This is achieved through the same underlying prediction model for both tasks. Quantitative and qualitative results demonstrate that our method significantly outperforms the state-of-the-art algorithm”
    Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al
    arXiv, March 2018 (preprint)

  • FDA Approval Statement (Aidoc)
  • BriefCase is a radiological computer aided triage and notification software indicated for use in the analysis of non-enhanced head CT images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communication of suspected positive findings of pathologies in head CT images, namely Intracranial Hemorrhage (ICH).
    BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected ICH on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
    The results of BriefCase are intended to be used in conjunction with other patient information and based on professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
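The triage behavior described in the clearance statement amounts to reordering the reading worklist rather than rendering a diagnosis. A minimal sketch of that pattern follows; the field names and the tie-breaking rule (longest-waiting first within each group) are illustrative assumptions, not Aidoc's actual logic:

```python
# Sketch of AI-assisted worklist triage: flagged studies rise to the
# top of the reading list; within each group, the longest-waiting
# study comes first. The flag only reorders work; interpretation of
# the full images remains with the radiologist.
from typing import NamedTuple

class Study(NamedTuple):
    accession: str
    minutes_waiting: int
    ai_flagged: bool  # suspected ICH per the algorithm (hypothetical field)

def triage(worklist):
    # False sorts before True, so negate the flag to put flagged first;
    # negate waiting time so larger waits sort earlier.
    return sorted(worklist, key=lambda s: (not s.ai_flagged, -s.minutes_waiting))

worklist = [
    Study("CT001", 50, False),
    Study("CT002", 10, True),
    Study("CT003", 35, False),
    Study("CT004", 5, True),
]
for s in triage(worklist):
    print(s.accession, "FLAGGED" if s.ai_flagged else "routine")
```

Running this prints the two flagged studies first (CT002, then CT004), followed by the routine studies in waiting-time order.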


Copyright © 2024 The Johns Hopkins University, The Johns Hopkins Hospital, and The Johns Hopkins Health System Corporation. All rights reserved.