Deep Learning and Brain Apps

  • Summary
    The performance of an artificial intelligence clinical decision support solution for intracranial hemorrhage detection was low in a low prevalence environment; falsely flagged studies led to increased radiologist interpretation time, potentially reducing system efficiency.
    Key Points
    ■ An artificial intelligence (AI) clinical decision support solution for intracranial hemorrhage detection yielded a positive predictive value of 21.1% in a low prevalence (2.70%) environment.
    ■ Falsely flagged studies by the AI solution led to lengthened radiologist read times and system inefficiencies (median read time increased 1 minute 14 seconds [P < .001] for examinations with false-positive findings and 1 minute 5 seconds [P = .04] for examinations with false-negative findings).
    ■ Factoring in prevalence of a condition in varying clinical settings and the impact that falsely flagged studies will have on system efficiency may aid institutional decision-making for use of an AI solution and help set clearer expectations for end users.
    Deep Learning to Detect Intracranial Hemorrhage in a National Teleradiology Program and the Impact on Interpretation Time
    Andrew James Del Gaizo,  et al.
    Radiology: Artificial Intelligence 2024; 6(5):e240067
  • The diagnostic performance of an artificial intelligence (AI) clinical decision support solution for acute intracranial hemorrhage (ICH) detection was assessed in a large teleradiology practice. The impact on radiologist read times and system efficiency was also quantified. A total of 61 704 consecutive noncontrast head CT examinations were retrospectively evaluated. System performance was calculated along with mean and median read times for CT studies obtained before (baseline, pre-AI period; August 2021 to May 2022) and after (post-AI period; January 2023 to February 2024) AI implementation. The AI solution had a sensitivity of 75.6%, specificity of 92.1%, accuracy of 91.7%, prevalence of 2.70%, and positive predictive value of 21.1%. Of the 56 745 post-AI CT scans with no bleed identified by a radiologist, examinations falsely flagged as suspected ICH by the AI solution (n = 4464) took an average of 9 minutes 40 seconds (median, 8 minutes 7 seconds) to interpret as compared with 8 minutes 25 seconds (median, 6 minutes 48 seconds) for unremarkable CT scans before AI (n = 49 007) (P < .001) and 8 minutes 38 seconds (median, 6 minutes 53 seconds) after AI when ICH was not suspected by the AI solution (n = 52 281) (P < .001).  
    Deep Learning to Detect Intracranial Hemorrhage in a National Teleradiology Program and the Impact on Interpretation Time
    Andrew James Del Gaizo,  et al.
    Radiology: Artificial Intelligence 2024; 6(5):e240067
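The low positive predictive value reported above is a direct consequence of Bayes' rule at low prevalence, not of a large drop in sensitivity or specificity. A minimal Python sketch using only the figures quoted above (the function name and the 15% comparison prevalence are illustrative choices, not values from the paper):

```python
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV via Bayes' rule: P(disease present | positive AI flag)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Figures reported for the teleradiology cohort above
ppv = positive_predictive_value(sensitivity=0.756, specificity=0.921, prevalence=0.027)
print(f"PPV at 2.70% prevalence: {ppv:.1%}")   # ~21%, matching the reported 21.1% up to rounding

# Same tool in a hypothetical higher-prevalence setting, for comparison only
print(f"PPV at 15% prevalence:   {positive_predictive_value(0.756, 0.921, 0.15):.1%}")
```

At a hypothetical 15% prevalence the same sensitivity and specificity would give a PPV above 60%, which is the central caution of the article for sites deploying the tool in low-prevalence settings.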
  • The AI solution had a sensitivity of 75.6%, specificity of 92.1%, accuracy of 91.7%, prevalence of 2.70%, and positive predictive value of 21.1%. Of the 56 745 post-AI CT scans with no bleed identified by a radiologist, examinations falsely flagged as suspected ICH by the AI solution (n = 4464) took an average of 9 minutes 40 seconds (median, 8 minutes 7 seconds) to interpret as compared with 8 minutes 25 seconds (median, 6 minutes 48 seconds) for unremarkable CT scans before AI (n = 49 007) (P < .001) and 8 minutes 38 seconds (median, 6 minutes 53 seconds) after AI when ICH was not suspected by the AI solution (n = 52 281) (P < .001). CT scans with no bleed identified by the AI but reported as positive for ICH by the radiologist (n = 384) took an average of 14 minutes 23 seconds (median, 13 minutes 35 seconds) to interpret as compared with 13 minutes 34 seconds (median, 12 minutes 30 seconds) for CT scans correctly reported as a bleed by the AI (n = 1192) (P = .04). With lengthened read times for falsely flagged examinations, system inefficiencies may outweigh the potential benefits of using the tool in a high volume, low prevalence environment.
    Deep Learning to Detect Intracranial Hemorrhage in a National Teleradiology Program and the Impact on Interpretation Time
    Andrew James Del Gaizo,  et al.
    Radiology: Artificial Intelligence 2024; 6(5):e240067
  • “In conclusion, use of an AI tool for ICH detection in our teleradiology practice yielded reduced sensitivity and specificity compared with the published literature. However, a low prevalence of ICH in our patients contributed to a substantially lower positive predictive value. Noncontrast head CT examinations falsely flagged by an AI solution lengthened mean and median read times. In aggregate, this led to system inefficiencies that reduced the potential benefit of using the AI tool in our environment. A broader understanding of an AI solution’s impact on system efficiency may aid institutional decision-making and help set clearer expectations for end users.”  
    Deep Learning to Detect Intracranial Hemorrhage in a National Teleradiology Program and the Impact on Interpretation Time
    Andrew James Del Gaizo,  et al.
    Radiology: Artificial Intelligence 2024; 6(5):e240067
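To make the aggregate cost of falsely flagged studies concrete, a back-of-the-envelope estimate can be formed from two figures quoted above: 4464 false-positive examinations and a median read-time increase of 1 minute 14 seconds. This is an illustrative editorial calculation, not a result reported by the authors, and it assumes the median increase is representative of each flagged exam:

```python
# Rough aggregate of radiologist time added by false-positive AI flags.
# Inputs come from the study summary above; treating the median per-exam
# increase as typical of every flagged exam is a simplifying assumption.
false_positive_exams = 4464
median_increase_seconds = 74              # 1 minute 14 seconds

added_hours = false_positive_exams * median_increase_seconds / 3600
print(f"Approximate added interpretation time: {added_hours:.0f} hours")  # ~92 hours
```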
  • “Brain tumors are the most prominent neurologically malignant cancers with the highest injury and death rates worldwide. Glioma classification is crucial for the prognosis, assessment of prognostication and the planning of clinical guidelines before surgery. Herein, we introduce a novel stationary wavelet-based radiomics approach to classify the grade of glioma more accurately and in a non-invasive manner. The training dataset of Brain Tumor Segmentation (BraTS) Challenge 2018 is used for performance evaluation and calculation is done based on the radiomics features for three different regions of interest. The classifier, Random Forest, is trained on these features and predicted the grade of glioma. At last, the performance is validated by using five-fold cross-validation scheme. The state-of-the-art performance is achieved considering metric ⟨Acc, Sens, Spec, Score, MCC, AUC⟩ ≡ ⟨97.54%, 97.62%, 97.33%, 98.3%, 94.12%, 97.48%⟩ with machine learning predictive model Random Forest (RF) for brain tumor patients’ classification. Considering the importance of glioma classification for the assessment of prognosis, our approach could be useful in the planning of clinical guidelines prior to surgery.”
    CGHF: A Computational Decision Support System for Glioma Classification Using Hybrid Radiomics- and Stationary Wavelet-Based Features
    Kumar R et al.
    IEEE Access Digital Object Identifier 10.1109/ACCESS.2020.2989193
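The pipeline described above (stationary wavelet decomposition of each region of interest, hand-crafted features, a Random Forest classifier, five-fold cross-validation) can be sketched roughly as follows with PyWavelets and scikit-learn. The feature definitions, ROI size, and random stand-in data are placeholders, not the CGHF feature set:

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def swt_texture_features(roi_2d: np.ndarray, wavelet: str = "haar", level: int = 1) -> np.ndarray:
    """First-order statistics from a stationary (undecimated) wavelet transform of one ROI."""
    coeffs = pywt.swt2(roi_2d, wavelet, level=level)     # list of (cA, (cH, cV, cD)) per level
    feats = []
    for approx, details in coeffs:
        for band in (approx, *details):
            feats += [band.mean(), band.std(), np.abs(band).sum()]
    return np.array(feats)

# Placeholder data: in practice each ROI would come from a segmented BraTS MR volume
rng = np.random.default_rng(0)
rois = rng.normal(size=(60, 64, 64))          # 60 fake 64x64 tumour ROIs
labels = rng.integers(0, 2, size=60)          # 0 = LGG, 1 = HGG

X = np.vstack([swt_texture_features(r) for r in rois])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)   # five-fold cross-validation, as in the paper
print("Mean CV accuracy:", scores.mean())
```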
  • In the proposed work, CGHF, a computationally efficient decision support system based on the machine-learning predictive model Random Forest (RF), is proposed for glioma grading. The model predicts whether an instance belongs to the HGG or LGG category. The proposed system, CGHF, applied filters for radiomics feature extraction and several effective feature selection techniques to the publicly available BraTS 2018 dataset and trained the RF model for the classification task. LS and RFA were the best feature selection methods for RF using the R-, S-, and RS-extraction methods, and ANOVA was the second-best, stable and suitable method for the proposed system, CGHF. As a future perspective, multi-class classification of graded gliomas can be considered for prediction of brain tumors.
    CGHF: A Computational Decision Support System for Glioma Classification Using Hybrid Radiomics- and Stationary Wavelet-Based Features
    Kumar R et al.
    IEEE Access Digital Object Identifier 10.1109/ACCESS.2020.2989193
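The bullet above singles out ANOVA as a stable feature-selection step ahead of the Random Forest. A minimal scikit-learn sketch of ANOVA-based selection inside a cross-validated pipeline; the synthetic feature matrix and the choice k=20 are illustrative, not values from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a radiomics feature matrix (rows = patients, columns = features).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 120))
y = rng.integers(0, 2, size=60)          # 0 = LGG, 1 = HGG

# ANOVA F-test keeps the k most discriminative features before the Random Forest.
model = make_pipeline(
    SelectKBest(score_func=f_classif, k=20),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
print("Mean 5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```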
  • “We trained a fully convolutional neural network with 4,396 head CT scans performed at the University of California at San Francisco and affiliated hospitals and compared the algorithm’s performance to that of 4 American Board of Radiology (ABR) certified radiologists on an independent test set of 200 randomly selected head CT scans. Our algorithm demonstrated the highest accuracy to date for this clinical application, with a receiver operating characteristic (ROC) area under the curve (AUC) of 0.991 ± 0.006 for identification of examinations positive for acute intracranial hemorrhage, and also exceeded the performance of 2 of 4 radiologists.”
    Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning
    Kuo W et al.
    Proc Natl Acad Sci U S A. 2019 Nov 5;116(45):22737-22745
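The AUC quoted above summarizes the model across all operating points, whereas each radiologist contributes a single sensitivity/specificity point in ROC space; that is why the comparison is phrased as "exceeded the performance of 2 of 4 radiologists." A brief scikit-learn sketch of this kind of comparison, using synthetic scores and an invented radiologist operating point rather than the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic stand-ins: labels (1 = acute ICH present) and model probabilities.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(0.7 * y_true + rng.normal(0.15, 0.2, size=200), 0, 1)

print(f"Model AUC: {roc_auc_score(y_true, y_score):.3f}")
fpr, tpr, _ = roc_curve(y_true, y_score)      # full ROC curve for the model

# A radiologist is a single point in the same space, e.g. sensitivity 0.90
# at specificity 0.95 (illustrative values only, not from the paper).
radiologist_point = (1 - 0.95, 0.90)          # (false-positive rate, true-positive rate)
print("Radiologist operating point:", radiologist_point)
```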
  • “Our algorithm demonstrated the highest accuracy to date for this clinical application, with a receiver operating characteristic (ROC) area under the curve (AUC) of 0.991 ± 0.006 for identification of examinations positive for acute intracranial hemorrhage, and also exceeded the performance of 2 of 4 radiologists. We demonstrate an end-to-end network that performs joint classification and segmentation with examination-level classification comparable to experts, in addition to robust localization of abnormalities, including some that are missed by radiologists, both of which are critically important elements for this application.”
    Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning
    Kuo W et al.
    Proc Natl Acad Sci U S A. 2019 Nov 5;116(45):22737-22745
  • “While rapid detection of intracranial hemorrhage (ICH) on computed tomography (CT) is a critical step in assessing patients with acute neurological symptoms in the emergency setting, prioritizing scans for radiologic interpretation by the acuity of imaging findings remains a challenge and can lead to delays in diagnosis at centers with heavy imaging volumes and limited staff resources. Deep learning has shown promise as a technique in aiding physicians in performing this task accurately and expeditiously and may be especially useful in a resource-constrained context.”
    The Utility of Deep Learning: Evaluation of a Convolutional Neural Network for Detection of Intracranial Bleeds on Non-Contrast Head Computed Tomography Studies
    Ojeda P, Zawaideh M et al.
    Medical Imaging 2019: Image Processing, edited by Elsa D. Angelini, Bennett A. Landman, Proc. of SPIE Vol. 10949, 109493J
  • “Our group evaluated the performance of a convolutional neural network (CNN) model developed by Aidoc (Tel Aviv, Israel). This model is one of the first artificial intelligence devices to receive FDA clearance for enabling radiologists to triage patients after scan acquisition. The algorithm was tested on 7112 non-contrast head CTs acquired during 2016–2017 from two large urban academic and trauma centers. Ground truth labels were assigned to the test data per PACS query and prior reports by expert neuroradiologists. No scans from these two hospitals had been used during the algorithm training process and Aidoc staff were at all times blinded to the ground truth labels. Model output was reviewed by three radiologists and manual error analysis performed on discordant findings. Specificity was 99%, sensitivity was 95%, and overall accuracy was 98%.”
    The Utility of Deep Learning: Evaluation of a Convolutional Neural Network for Detection of Intracranial Bleeds on Non-Contrast Head Computed Tomography Studies
    Ojeda P, Zawaideh M et al.
    Medical Imaging 2019: Image Processing, edited by Elsa D. Angelini, Bennett A. Landman, Proc. of SPIE Vol. 10949, 109493J
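Because overall accuracy is a prevalence-weighted average of sensitivity and specificity, the three figures above also pin down the approximate ICH prevalence in this 7112-scan test set. The calculation below is an editorial inference from the rounded published numbers, not a value stated in the paper, and it is sensitive to that rounding:

```python
# accuracy = sensitivity * prevalence + specificity * (1 - prevalence)
# => prevalence = (specificity - accuracy) / (specificity - sensitivity)
sensitivity, specificity, accuracy = 0.95, 0.99, 0.98
prevalence = (specificity - accuracy) / (specificity - sensitivity)

n_scans = 7112
print(f"Implied prevalence: {prevalence:.0%}")                 # ~25%
print(f"Implied positive scans: {n_scans * prevalence:.0f}")   # roughly 1800
```

That implied prevalence of roughly 25% is an order of magnitude above the 2.70% seen in the teleradiology cohort at the top of this page, which helps explain the very different positive predictive values of the two deployments.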
  • “In summary, we report promising results of a scalable and clinically pragmatic deep learning algorithm tested on a large set of real-world data from high-volume medical centers that requires no human intervention to accurately characterize the presence or absence of ICH. This model holds promise for assisting clinicians in the identification and prioritization of exams suspicious for ICH, facilitating both the diagnosis and treatment of an emergent and life-threatening condition.”
    The Utility of Deep Learning: Evaluation of a Convolutional Neural Network for Detection of Intracranial Bleeds on Non-Contrast Head Computed Tomography Studies
    Ojeda P, Zawaideh M et al.
    Medical Imaging 2019: Image Processing, edited by Elsa D. Angelini, Bennett A. Landman, Proc. of SPIE Vol. 10949, 109493J
  • Aidoc Model Development
    “A proprietary convolutional neural network architecture was used. The training dataset included approximately 50,000 CT studies collected from 9 different sites. In total, data was derived from 17 different scanner models. CT data slice thickness (z-axis) ranged from 0.5 to 5 mm. Data from all anatomic planes was used, when available (axial, sagittal, coronal). Only soft-tissue kernel images were used. Ground truth labeling structure varied depending on hemorrhage type and size, and included both weak and strong labeling schema. Label types included study-level classification for diffuse SAH, slice-level bounding boxes around indistinct extra and intra-axial hemorrhage foci, and pixel-level semantic segmentation of well-defined intraparenchymal hemorrhage.”
    The Utility of Deep Learning: Evaluation of a Convolutional Neural Network for Detection of Intracranial Bleeds on Non-Contrast Head Computed Tomography Studies
    Ojeda P, Zawaideh M et al.
    Medical Imaging 2019: Image Processing, edited by Elsa D. Angelini, Bennett A. Landman, Proc. of SPIE Vol. 10949, 109493J
  • Purpose: The artificial intelligence (AI) widget was designed to identify intracranial hemorrhage (ICH) and alert the radiologist to prioritize reading the critical case over others on the list. The purpose of this study was to determine if there was a significant difference in report turnaround time (TAT) of after-hours, non-contrast head CT (NCCT) performed before and after implementation of the AI widget. TAT data was stratified to include positive and negative cases from both ED and inpatient locations.
    Conclusions: TAT was reduced in the month following AI implementation among all categories of head CT. Overall, there was a 24.5% reduction in TAT and a slightly greater reduction of 37.8% in all ICH-positive cases, suggesting the positive cases were prioritized. This effect extended to ED and inpatient studies. The reduction in overall TAT may reflect the disproportionate influence positive cases had on overall TAT, or that the AI widget also flagged some cases which were not positive, resulting in the prioritization of all NCCT over other exams. There were differences in the overnight radiologists working between the two months we examined, and the study was initiated soon after our hospital began staffing 2 radiologists overnight. Further investigation will be required to understand these findings.
    Comparison of After-Hours Head CT Report Turnaround Time Before and After Implementation of an Artificial Intelligence Widget Developed to Detect Intracranial Hemorrhage.
    Brady Laughlin et al.
    American College of Radiology (2018)
  • OBJECTIVE: To develop and apply a neural network segmentation model (the HeadXNet model) capable of generating precise voxel-by-voxel predictions of intracranial aneurysms on head computed tomographic angiography (CTA) imaging to augment clinicians’ intracranial aneurysm diagnostic performance.
    CONCLUSIONS AND RELEVANCE: The deep learning model developed successfully detected clinically significant intracranial aneurysms on CTA. This suggests that integration of an artificial intelligence–assisted diagnostic model may augment clinician performance with dependable and accurate predictions and thereby optimize patient care.
    Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model
    Allison Park et al.
    JAMA Network Open. 2019;2(6):e195600. doi:10.1001
  • RESULTS: The data set contained 818 examinations from 662 unique patients, with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms. The 8 clinicians reading the test set ranged in experience from 2 to 12 years. Augmenting clinicians with artificial intelligence–produced segmentation predictions resulted in clinicians achieving statistically significant improvements in sensitivity, accuracy, and interrater agreement when compared with no augmentation. The clinicians’ mean sensitivity increased by 0.059 (95% CI, 0.028-0.091; adjusted P = .01), mean accuracy increased by 0.038 (95% CI, 0.014-0.062; adjusted P = .02), and mean interrater agreement (Fleiss κ) increased by 0.060, from 0.799 to 0.859 (adjusted P = .05). There was no statistically significant change in mean specificity (0.016; 95% CI, −0.010 to 0.041; adjusted P = .16) or time to diagnosis (5.71 seconds; 95% CI, −7.22 to 18.63 seconds; adjusted P = .19).
    Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model
    Allison Park et al.
    JAMA Network Open. 2019;2(6):e195600. doi:10.1001
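Fleiss κ, the interrater statistic quoted above (0.799 rising to 0.859 with augmentation), measures agreement among a fixed number of readers beyond what chance would produce. A self-contained numpy sketch of the statistic applied to a small synthetic rating table; the helper name and the example data are ours, not the study's:

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for a (subjects x categories) matrix of rating counts.
    Every row must sum to the same number of raters."""
    n_raters = counts.sum(axis=1)[0]
    p_cat = counts.sum(axis=0) / counts.sum()                   # overall category proportions
    p_subject = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_subject.mean()                                    # observed agreement
    p_e = (p_cat ** 2).sum()                                    # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Synthetic example: 6 CTA examinations rated by 8 readers, categories = [no aneurysm, aneurysm]
ratings = np.array([
    [8, 0], [7, 1], [0, 8], [1, 7], [8, 0], [2, 6],
])
print(f"Fleiss kappa: {fleiss_kappa(ratings):.3f}")
```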
  • “A deep learning model was developed to automatically detect clinically significant intracranial aneurysms on CTA. We found that the augmentation significantly improved clinicians’ sensitivity, accuracy, and interrater reliability. Future work should investigate the performance of this model prospectively and in application of data from other institutions and hospitals.”
    Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model
    Allison Park et al.
    JAMA Network Open. 2019;2(6):e195600. doi:10.1001
  • Brain Tumors: Pathology
  • “We here demonstrate that DNA methylation-based CNS tumour classification using a comprehensive machine-learning approach is a valuable asset for clinical decision-making. In particular, the high level of standardization has great promise to reduce the substantial inter-observer variability observed in current CNS tumour diagnostics. Furthermore, in contrast to traditional pathology, where there is a pressure to assign all tumours to a described entity even for atypical or challenging cases, the objective measure that we provide here allows for ‘no match’ to a defined class.”
    DNA methylation-based classification of central nervous system tumours
    Pfister SM et al.
    Nature 2018 (in press)
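A distinctive design point in the passage above is that the classifier may return "no match" rather than force every case into a defined entity. A minimal, hypothetical sketch of that idea using a random forest and a class-probability cutoff (scikit-learn); the 0.9 threshold, the feature matrix, and the class count are illustrative and are not the published calibration pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 50))          # stand-in for methylation profiles
y_train = rng.integers(0, 5, size=300)        # stand-in for tumour methylation classes

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

def classify_with_no_match(model, x, threshold=0.9):
    """Return the predicted class only if its probability clears the threshold."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    best = probs.argmax()
    return int(best) if probs[best] >= threshold else "no match"

print(classify_with_no_match(clf, rng.normal(size=50)))
```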
  • Purpose: To develop and validate a deep learning algorithm that predicts the final diagnosis of Alzheimer disease (AD), mild cognitive impairment, or neither at fluorine 18 (18F) fluorodeoxyglucose (FDG) PET of the brain and compare its performance to that of radiologic readers.
    Materials and Methods: Prospective 18F-FDG PET brain images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (2109 imaging studies from 2005 to 2017, 1002 patients) and retrospective independent test set (40 imaging studies from 2006 to 2016, 40 patients) were collected. Final clinical diagnosis at follow-up was recorded. Convolutional neural network of InceptionV3 architecture was trained on 90% of ADNI data set and tested on the remaining 10%, as well as the independent test set, with performance compared to radiologic readers. Model was analyzed with sensitivity, specificity, receiver operating characteristic (ROC), saliency map, and t-distributed stochastic neighbor embedding.
    Results: The algorithm achieved area under the ROC curve of 0.98 (95% confidence interval: 0.94, 1.00) when evaluated on predicting the final clinical diagnosis of AD in the independent test set (82% specificity at 100% sensitivity), an average of 75.8 months prior to the final diagnosis, which in ROC space outperformed reader performance (57% [four of seven] sensitivity, 91% [30 of 33] specificity; P < .05). Saliency map demonstrated attention to known areas of interest but with focus on the entire brain.
    Conclusion: By using fluorine 18 fluorodeoxyglucose PET of the brain, a deep learning algorithm developed for early prediction of Alzheimer disease achieved 82% specificity at 100% sensitivity, an average of 75.8 months prior to the final diagnosis.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9 
    https://doi.org/10.1148/radiol.2018180958
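The abstract above names the InceptionV3 architecture trained on 90% of the ADNI studies with a three-way output (AD, MCI, neither). A hedged Keras sketch of that general setup, assuming an ImageNet-pretrained backbone and 2D RGB-like inputs derived from the PET studies; the preprocessing and data pipeline are omitted and this is not the authors' training code:

```python
import tensorflow as tf

# ImageNet-pretrained InceptionV3 backbone with a new 3-class head (AD, MCI, neither).
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3)
)
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # hypothetical datasets, not shown
```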
  • In this study, we aimed to evaluate whether a deep learning algorithm could be trained to predict the final clinical diagnoses in patients who underwent 18F-FDG PET of the brain and, once trained, how the deep learning algorithm compares with the current standard clinical reading methods in differentiation of patients with final diagnoses of AD, MCI, or no evidence of dementia. We hypothesized that the deep learning algorithm could detect features or patterns that are not evident on standard clinical review of images and thereby improve the final diagnostic classification of individuals.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958

  • Our study had several limitations. First, our independent test data were relatively small (n = 40) and were not collected as part of a clinical trial. Most notably, this was a highly selected cohort in that all patients must have been referred to the memory clinic and neurologist must have decided that a PET study of the brain would be useful in clinical management. This effectively excluded most non-AD neurodegenerative cases and other neurologic disorders such as stroke that could affect memory function. Arguably, such cohort of patients would be the most relevant group to test the deep learning algorithm, but the algorithm’s performance on a more general patient population remains untested and unproven, hence the pilot nature of this study.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • Purpose: To develop and validate a deep learning algorithm that predicts the final diagnosis of Alzheimer disease (AD), mild cognitive impairment, or neither at fluorine 18 (18F) fluorodeoxyglucose (FDG) PET of the brain and compare its performance to that of radiologic readers.
    Conclusion: By using fluorine 18 fluorodeoxyglucose PET of the brain, a deep learning algorithm developed for early prediction of Alzheimer disease achieved 82% specificity at 100% sensitivity, an average of 75.8 months prior to the final diagnosis.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • Second, the deep learning algorithm’s robustness is inherently limited by the clinical distribution of the training set from ADNI. The algorithm achieved strong performance on a small independent test set, where the population substantially differed from the ADNI test set; however, its performance and robustness cannot yet be guaranteed on prospective, unselected, and real-life scenario patient cohorts. Further validation with larger and prospective external test set must be performed before actual clinical use. Furthermore, this training set from ADNI did not include non-AD neurodegenerative cases, limiting the utility of the algorithm in such patient population.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • Third, the deep learning algorithm did not yield a human interpretable imaging biomarker despite visualization with saliency map, which highlights the inherent black-box limitation of deep learning algorithms. The algorithm instead made predictions based on holistic features of the imaging study, distinct from the human expert approaches. Fourth, MCI and non-AD/MCI were inherently unstable diagnoses in that their accuracy is dependent on the length of follow-up. For example, some of the MCI patients, if followed up for long enough time, may have eventually progressed to AD.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
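The saliency maps discussed in this limitation are commonly computed as the gradient of the top class score with respect to the input pixels; the exact variant used by the authors is not specified here, so the sketch below shows only the generic gradient-based approach (TensorFlow/Keras; `model` and `image` are assumed inputs):

```python
import tensorflow as tf

def saliency_map(model: tf.keras.Model, image: tf.Tensor) -> tf.Tensor:
    """Gradient of the top predicted class score with respect to the input pixels."""
    image = tf.convert_to_tensor(image)[tf.newaxis, ...]    # add a batch dimension
    with tf.GradientTape() as tape:
        tape.watch(image)
        preds = model(image, training=False)
        top_class_score = tf.reduce_max(preds[0])           # score of the predicted class
    grads = tape.gradient(top_class_score, image)
    return tf.reduce_max(tf.abs(grads), axis=-1)[0]         # collapse channels to a 2D heat map

# Usage (hypothetical): heat = saliency_map(model, preprocessed_pet_slice)
```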
  • Overall, our study demonstrates that a deep learning algorithm can predict the final diagnosis of AD from 18F-FDG PET imaging studies of the brain with high accuracy and robustness across external test data. Furthermore, this study proposes a working deep learning approach and a set of convolutional neural network hyperparameters, validated on a public dataset, that can be the groundwork for further model improvement. With further large-scale external validation on multi-institutional data and model calibration, the algorithm may be integrated into clinical workflow and serve as an important decision support tool to aid radiology readers and clinicians with early prediction of AD from 18F-FDG PET imaging studies.
    A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
    Yiming Ding et al.
    Radiology 2018; 00:1–9
    https://doi.org/10.1148/radiol.2018180958
  • High-grade glioma is the most aggressive and severe brain tumor that leads to death of almost 50% patients in 1-2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time consuming, laborious and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and also propose a new network architecture for using multi-channel data and learning supervised features. Along with the pivotal clinical features, we finally train a support vector machine to predict if the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods can achieve an accuracy as high as 89.9%. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting the OS time, which provides valuable insights into functional neuro-oncological applications.
    3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients
    Nie D et al.
    Med Image Comput Comput Assist Interv. 2016 Oct;9901:212-220
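The two-stage design summarized above, 3D CNNs trained to extract supervised features from each preoperative modality and an SVM on those features (plus clinical variables) for long versus short overall survival, can be sketched roughly as below. This is a schematic with a single modality, invented patch sizes, and random data standing in for the T1/fMRI/DTI channels; it is not the authors' network:

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Small 3D CNN that ends in a compact feature vector (one modality shown for brevity).
def build_3d_cnn(input_shape=(32, 32, 32, 1), n_features=64):
    inp = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv3D(16, 3, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling3D(2)(x)
    x = tf.keras.layers.Conv3D(32, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling3D()(x)
    feats = tf.keras.layers.Dense(n_features, activation="relu")(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(feats)   # long vs short OS (supervised)
    return tf.keras.Model(inp, out), tf.keras.Model(inp, feats)

cnn, feature_extractor = build_3d_cnn()
cnn.compile(optimizer="adam", loss="binary_crossentropy")

# Random stand-ins for preoperative image patches and long/short survival labels.
rng = np.random.default_rng(0)
volumes = rng.normal(size=(40, 32, 32, 32, 1)).astype("float32")
labels = rng.integers(0, 2, size=40)

cnn.fit(volumes, labels, epochs=1, verbose=0)                # learn supervised deep features
deep_feats = feature_extractor.predict(volumes, verbose=0)   # features for the second stage
svm = SVC(kernel="rbf").fit(deep_feats, labels)              # final long/short OS classifier
```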
  • “While computerised tomography (CT) may have been the first imaging tool to study human brain, it has not yet been implemented into clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, with the nature of being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact on the application of the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease.”
    Classification of CT brain images based on deep learning networks
    Gao XW, Hui R, Tian Z
    Comput Methods Programs Biomed. 2017 Jan;138:49-56
