Imaging Pearls ❯ Deep Learning ❯ Deep Learning and Cardiothoracic Apps



  • Pneumothorax Detection

  • PE Detection

  • OBJECTIVES To develop a deep learning–based algorithm that can classify normal and abnormal results from chest radiographs with major thoracic diseases including pulmonary malignant neoplasm, active tuberculosis, pneumonia, and pneumothorax and to validate the algorithm’s performance using independent data sets.
    CONCLUSIONS AND RELEVANCE The algorithm consistently outperformed physicians, including thoracic radiologists, in the discrimination of chest radiographs with major thoracic diseases, demonstrating its potential to improve the quality and efficiency of clinical practice.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • RESULTS The algorithm demonstrated a median (range) area under the curve of 0.979 (0.973-1.000) for image-wise classification and 0.972 (0.923-0.985) for lesion-wise localization; the algorithm demonstrated significantly higher performance than all 3 physician groups in both image-wise classification (0.983 vs 0.814-0.932; all P < .005) and lesion-wise localization (0.985 vs 0.781-0.907; all P < .001). Significant improvements in both image-wise classification (0.814-0.932 to 0.904-0.958; all P < .005) and lesion-wise localization (0.781-0.907 to 0.873-0.938; all P < .001) were observed in all 3 physician groups with assistance of the algorithm.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • Key Points
    Question Can a deep learning–based algorithm accurately discriminate abnormal chest radiograph results showing major thoracic diseases from normal chest radiograph results?
    Findings In this diagnostic study of 54 221 chest radiographs with normal findings and 35 613 with abnormal findings, the deep learning–based algorithm for discrimination of chest radiographs with pulmonary malignant neoplasms, active tuberculosis, pneumonia, or pneumothorax demonstrated excellent and consistent performance throughout 5 independent data sets. The algorithm outperformed physicians, including radiologists, and enhanced physician performance when used as a second reader.
    Meaning A deep learning–based algorithm may help improve diagnostic accuracy in reading chest radiographs and assist in prioritizing chest radiographs, thereby increasing workflow efficacy.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • The strengths of our study can be summarized as follows. First, the development data set underwent extensive data curation by radiologists. It has been shown that the performance of deep learning–based algorithms depends not only on the quantity of the training data set, but also on the quality of the data labels. As for CRs, several open-source data sets are currently available; however, those data sets remain suboptimal for the development of deep learning–based algorithms because they are weakly labeled by radiologic reports or lack localization information. In contrast, in the present study, we initially collected data from the radiology reports and clinical diagnosis; then experienced board-certified radiologists meticulously reviewed all of the collected CRs. Furthermore, annotation of the exact location of each abnormal finding was done in 35.6% of CRs with abnormal results, which we believe led to the excellent performance of our DLAD.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • “In contrast, in the present study, we initially collected data from the radiology reports and clinical diagnosis; then experienced board-certified radiologists meticulously reviewed all of the collected CRs. Furthermore, annotation of the exact location of each abnormal finding was done in 35.6% of CRs with abnormal results, which we believe led to the excellent performance of our DLAD.”
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • Third, we compared the performance of our DLAD with the performance of physicians with various levels of experience. The stand-alone performance of a CAD system can be influenced by the difficulty of the test data sets and can be exaggerated in easy test data sets. However, observer performance tests may provide a more objective measure of performance by comparing the performance between the CAD system and physicians. Impressively, the DLAD demonstrated significantly higher performance both in image-wise classification and lesion-wise localization than all physician groups, even the thoracic radiologist group.
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • “The high performance of the DLAD in classification of CRs with normal and abnormal findings indicative of major thoracic diseases, outperforming even thoracic radiologists, suggests its potential for stand-alone use in select clinical situations. It may also help improve the clinical workflow by prioritizing CRs with suspicious abnormal findings requiring prompt diagnosis and management. It can also improve radiologists’ work efficiency, which would partially alleviate the heavy workload burden that radiologists face today and improve patients’ turnaround time. Furthermore, the improved performance of physicians with the assistance of the DLAD indicates the potential of our DLAD as a second reader. The DLAD can contribute to reducing perceptual error of interpreting physicians by alerting them to the possibility of major thoracic diseases and visualizing the location of the abnormality. In particular, the more obvious increment of performance in less-experienced physicians suggests that our DLAD can help improve the quality of CR interpretations in situations in which expert thoracic radiologists may not be available.”
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • “The high performance of the DLAD in classification of CRs with normal and abnormal findings indicative of major thoracic diseases, outperforming even thoracic radiologists, suggests its potential for stand-alone use in select clinical situations. It may also help improve the clinical workflow by prioritizing CRs with suspicious abnormal findings requiring prompt diagnosis and management. It can also improve radiologists’ work efficiency, which would partially alleviate the heavy workload burden that radiologists face today and improve patients’ turnaround time.”
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • “We developed a DLAD algorithm that can classify CRs with normal and abnormal findings indicating major thoracic diseases with consistently high performance, outperforming even radiologists, which may improve the quality and efficiency of the current clinical workflow.”
    Development and Validation of a Deep Learning–Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs
    Eui Jin Hwang et al.
    JAMA Network Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
  • OBJECTIVE. Diagnostic imaging has traditionally relied on a limited set of qualitative imaging characteristics for the diagnosis and management of lung cancer. Radiomics—the extraction and analysis of quantitative features from imaging—can identify additional imaging characteristics that cannot be seen by the eye. These features can potentially be used to diagnose cancer, identify mutations, and predict prognosis in an accurate and noninvasive fashion. This article provides insights about trends in radiomics of lung cancer and challenges to widespread adoption.
    CONCLUSION. Radiomic studies are currently limited to a small number of cancer types. Their application across various centers is nonstandardized, leading to difficulties in comparing and generalizing results. The tools available to apply radiomics are specialized and limited in scope, blunting widespread use and clinical integration in the general population. Increasing the number of multicenter studies and consortiums and inclusion of radiomics in resident training will bring more attention and clarity to the growing field of radiomics.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Radiomics is defined as the quantification of the phenotypic features of a lesion from medical imaging (i.e., CT, PET, MRI, ultrasound). These features include lesion shape, volume, texture, attenuation, and many more that are not readily apparent or are too numerous for an individual radiologist to assess visually or qualitatively. In other words, radiomics is the process of creating a set of organized data based on the physical properties of an object of interest.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Regardless of lesion histology and location, the workflow in radiomics remains similar. Images of the lesion, typically CT images, are acquired. The images are segmented to define the outer limits of a given lesion. Specific phenotypic features are then selected, extracted from the images, and recorded. Finally, data analysis is performed on the recorded data. Image features can be extracted and analyzed in either 2D or 3D: 2D refers to segmentation and analysis of radiomic metrics on a single-slice image, whereas 3D refers to the same process across the entire volume of a tumor (many slices).
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
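The generic workflow above (acquire images, segment the lesion, extract features, analyze the data) can be sketched in a few lines. The array shapes, HU values, and feature names below are illustrative assumptions for a toy example, not the tooling used in the cited study.

```python
import numpy as np

def extract_features(volume, mask):
    """Toy radiomics pipeline: given a CT volume (Hounsfield units) and a
    binary segmentation mask, return a small quantitative feature vector."""
    lesion = volume[mask > 0]                  # voxels inside the segmented lesion
    return {
        "volume_voxels": int(mask.sum()),      # shape: lesion size
        "mean_hu": float(lesion.mean()),       # first-order: average attenuation
        "std_hu": float(lesion.std()),         # first-order: heterogeneity proxy
        "range_hu": float(lesion.max() - lesion.min()),
    }

# 3D analysis passes the whole volume and mask; 2D analysis would pass a
# single slice, e.g. extract_features(vol[4], msk[4]).
vol = np.full((10, 10, 10), -800.0)            # aerated lung (~ -800 HU)
vol[3:6, 3:6, 3:6] = 40.0                      # soft-tissue lesion (~ 40 HU)
msk = np.zeros_like(vol, dtype=np.uint8)
msk[3:6, 3:6, 3:6] = 1
print(extract_features(vol, msk)["volume_voxels"])   # 27
```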
  • Image features can be extracted and analyzed in either 2D or 3D: 2D refers to segmentation and analysis of radiomic metrics on a single-slice image, whereas 3D refers to the same process across the entire volume of a tumor (many slices). Therefore, 3D radiomics, by definition, requires analysis of the entire volume of tumor. In general, feature extraction and analysis are easier and faster in 2D than in 3D, but 3D may theoretically carry more information. Two-dimensional radiomics is used more commonly, but 3D radiomics is appealing with regard to analyzing intratumoral heterogeneity in cases in which different parts of a tumor may exhibit differing histologic subtypes.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • “Segmentation of a lesion is the act of extracting or isolating a lesion of interest (e.g., lung nodule) from the surrounding normal lung. Features are then extracted and are further analyzed directly from the segmented lesion. This can be thought of in distinction to deep learning, where an algorithm must learn to automatically extract features from an unsegmented image.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Lesion segmentation can be done either manually or in an automated fashion. Manual segmentation—that is, segmentation performed by a trained observer who manually outlines the lesion of interest—is time-consuming and is more prone to interobserver variability and subjectivity than semiautomated and fully automated segmentation. Manual segmentation is important when accuracy of the tumor outline (i.e., lesion shape and size) is needed.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
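A minimal automated alternative to manual outlining is thresholding followed by connected-component analysis; the sketch below (a simplified illustration, not any vendor's method) keeps the largest bright component as the lesion mask.

```python
import numpy as np
from scipy import ndimage

def segment_largest_component(image, threshold):
    """Minimal automated segmentation: threshold the image, then keep only
    the largest connected component as the lesion mask."""
    binary = image > threshold
    labels, n = ndimage.label(binary)              # label connected components
    if n == 0:
        return np.zeros(image.shape, dtype=bool)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)   # component IDs start at 1

img = np.zeros((20, 20))
img[5:9, 5:9] = 100.0      # 16-pixel "lesion"
img[15, 15] = 100.0        # isolated bright noise pixel
mask = segment_largest_component(img, 50.0)
print(int(mask.sum()))     # 16 — the noise pixel is excluded
```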
  • Shape is one of the feature categories understood as both semantic and agnostic. It is a category of features that includes diameter measurements (e.g., minimum, maximum) and their derivatives including volume, ratio of diameters, surface-to-volume ratio, and compactness. Diameter measurements and their derivatives are among the most commonly assessed features. Semantic descriptions such as round, oval, and spiculated are understood agnostically by a varied lexicon that attempts to determine how irregular the object is. In the shape category, tumor volume has shown the most promise in predicting treatment response.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
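The shape features named above (volume, surface-to-volume ratio, compactness) can be computed directly from a binary mask. The surface approximation and compactness normalization below are common simple formulations, assumed here for illustration.

```python
import numpy as np

def shape_features(mask):
    """Shape metrics from a binary 3D mask. Surface area is approximated by
    counting voxel faces exposed to background; compactness compares that
    surface with the sphere of equal volume (the surface-minimizing shape)."""
    volume = int(mask.sum())
    padded = np.pad(mask.astype(np.int8), 1)
    surface = sum(int(np.abs(np.diff(padded, axis=a)).sum()) for a in range(3))
    sphere_surface = (36.0 * np.pi * volume ** 2) ** (1.0 / 3.0)
    return {"volume": volume,
            "surface": surface,
            "surface_to_volume": surface / volume,
            "compactness": sphere_surface / surface}   # 1.0 for a perfect sphere

cube = np.zeros((10, 10, 10), dtype=bool)
cube[2:6, 2:6, 2:6] = True               # 4x4x4 cube
f = shape_features(cube)
print(f["volume"], f["surface"])         # 64 96
```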
  • “Texture in radiomics broadly refers to the variation in the gray-scale intensities of adjacent pixels or voxels in an image. Depending on the technique involved, texture features are categorized into first, second, and higher-order statistical measures. The first-order statistical measures are composed of features that account for variations in gray-scale intensities without accounting for their spatial location or orientation on the image. For example, a histogram of pixel or voxel intensities, which is a visual representation of the distribution of gray-scale intensity values on an image, is the most common technique to derive the first-order texture measures.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
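As a concrete illustration of the histogram technique described in the quote, the sketch below derives first-order measures from binned pixel intensities; the bin count, HU range, and toy "lesions" are assumptions, not the paper's parameters.

```python
import numpy as np

def first_order_features(pixels, bins=16, value_range=(-100.0, 180.0)):
    """First-order texture statistics from the intensity histogram: they
    describe how gray values are distributed, ignoring where they sit."""
    hist, _ = np.histogram(pixels, bins=bins, range=value_range)
    p = hist / hist.sum()                          # bin probabilities
    p_nz = p[p > 0]
    z = (pixels - pixels.mean()) / pixels.std()
    return {
        "mean": float(pixels.mean()),
        "variance": float(pixels.var()),
        "skewness": float((z ** 3).mean()),
        "entropy": float(-(p_nz * np.log2(p_nz)).sum()),  # histogram spread
        "uniformity": float((p ** 2).sum()),
    }

rng = np.random.default_rng(0)
narrow = rng.normal(40.0, 1.0, 1000)    # homogeneous lesion: tight histogram
wide = rng.normal(40.0, 30.0, 1000)     # heterogeneous lesion: spread histogram
print(first_order_features(narrow)["entropy"] <
      first_order_features(wide)["entropy"])       # True
```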
  • Second-order texture metrics encompass hundreds of features derived by evaluating the relationship of adjacent pixels in an ROI or across the entire lesion. These metrics account for both the intensity of a gray-scale value and its location or orientation in the image. CT images are formed from a 3D matrix of data that is used to determine the amount of gray-level color to display for a given image pixel. Texture or heterogeneity refers to analysis of adjacent pixels of gray color to determine the relationship between them; if there are wide variances in the amount of gray color in a given area, then a lesion is considered more heterogeneous or to have a coarse texture.
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
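The pixel-pair relationship described above is what a gray-level co-occurrence matrix captures. A minimal sketch (one offset direction, tiny toy images; real implementations average several offsets and angles):

```python
import numpy as np

def glcm(image, levels):
    """Gray-level co-occurrence matrix for horizontally adjacent pixels:
    entry (i, j) is the probability that gray level i appears immediately
    to the left of gray level j."""
    m = np.zeros((levels, levels))
    np.add.at(m, (image[:, :-1].ravel(), image[:, 1:].ravel()), 1)
    return m / m.sum()

def contrast(p):
    """GLCM contrast: large when neighboring pixels differ strongly."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

smooth = np.zeros((8, 8), dtype=int)            # uniform (fine-texture) region
checker = (np.indices((8, 8)).sum(0) % 2) * 3   # alternating 0/3 (coarse) pattern
print(contrast(glcm(smooth, 4)), contrast(glcm(checker, 4)))  # 0.0 9.0
```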
  • “Texture has shown the most promise in predicting the presence of malignancy and prognosis. Local binary patterns (LBPs) and gray-level co-occurrence matrices (GLCMs) are most often used for this purpose. However, evaluations of nodule heterogeneity or texture are not limited to LBPs or GLCMs. Numerous alternative methods that attempt to extract patterns from an image via a series of mathematic transformations or filters applied to the image, including Laws’ energy descriptors, fractal analysis, and wavelet analysis, are being increasingly applied. This latter group of texture metrics includes higher-order statistical measures. Texture analysis has practical applications; for example, Parmar and colleagues showed that texture features in lung cancer were significantly associated with tumor stage and patient survival.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
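Of the texture methods named in the quote, the local binary pattern is the simplest to illustrate: each pixel's 8 neighbors are thresholded against the center and read as one byte. The neighbor ordering below is one common convention, assumed for this sketch.

```python
import numpy as np

def lbp_code(patch):
    """Local binary pattern of a 3x3 patch: threshold the 8 neighbors
    against the center pixel and read the resulting bits as one byte."""
    center = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]    # clockwise from top-left
    bits = [1 if patch[r, c] >= center else 0 for r, c in order]
    return sum(b << k for k, b in enumerate(bits))

flat = np.full((3, 3), 5)          # uniform patch: every neighbor >= center
edge = np.array([[9, 9, 9],
                 [1, 5, 9],
                 [1, 1, 9]])       # bright region on the upper right
print(lbp_code(flat), lbp_code(edge))   # 255 31
```

A texture descriptor for a whole lesion is then the histogram of these codes over all its 3x3 patches.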
  • Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504

  • “Segmentation and feature recognition currently rely on the initial identification of a nodule by a radiologist. Thus, the near-term and medium-term role of radiomics is likely to be as a support tool in which radiomics is integrated with traditional radiologic and invasive histologic information. We should note that many prior studies achieved highest accuracy when radiomic data were viewed in light of genetic and clinical information.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • “Most importantly, the study of radiomics must be drastically expanded to account for the numerous clinical and radiologic presentations of lung cancer. Radiomics is predicated on creating tools to more accurately diagnose lung cancer and determine prognosis of patients with lung cancer in a noninvasive fashion. However, the tools available to practice radiomics are specialized and limited in scope, blunting widespread use and clinical integration in the general population. Looking forward, we believe that increasing the number of multicenter studies and consortiums and inclusion of radiomics in resident training will bring more attention to the growing field of radiomics.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • “Other challenges for radiomics include advancing interinstitutional standards for image acquisition and reconstruction parameters and the development of a unified lexicon. Radiomic data are affected by different image acquisition and reconstruction parameters (e.g., contrast timing, slice thickness, reconstruction algorithm, tube voltage, tube current, and so on) that can affect the reproducibility of radiomic features. Many radiomic studies have relied on a heterogeneous dataset of imaging using a mixture of these parameters. Standardized imaging parameters, including consistent contrast dose, timing, and radiation dose levels, will likely need to be implemented for radiomic studies.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • “Furthermore, radiomics can be performed in 2D or 3D. Two-dimensional radiomics is applied to a single image slice, and the resulting radiomic features can vary from slice to slice. Three-dimensional radiomics is applied to the entire volume of a tumor. The potential differences between these two fundamentally different approaches require further evaluation. In addition, radiomics is a multidisciplinary field with experts from different backgrounds who approach radiomics in different ways. These experts often collaborate and have to understand and incorporate the methods and rationale of sometimes unfamiliar disciplines. For example, computer science researchers may have limited knowledge and experience with medical image acquisition and reconstruction. A unified lexicon will be necessary to maintain consistency, especially for researchers who have limited experience with medical imaging.”
    Radiomics in Pulmonary Lesion Imaging
    Hassani C et al.
    AJR 2019; 212:497–504
  • Accurate identification and localization of abnormalities from radiology images play an integral part in clinical diagnosis and treatment planning. Building a highly accurate prediction model for these tasks usually requires a large number of images manually annotated with labels and finding sites of abnormalities. In reality, however, such annotated data are expensive to acquire, especially the ones with location annotations. We need methods that can work well with only a small amount of location annotations. To address this challenge, we present a unified approach that simultaneously performs disease identification and localization through the same underlying model for all images. We demonstrate that our approach can effectively leverage both class information as well as limited location annotation, and significantly outperforms the comparative reference baseline in both classification and localization tasks.
    Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al.
    arXiv, March 2018 (preprint)
  • “We propose a unified model that jointly models disease identification and localization with limited localization annotation data. This is achieved through the same underlying prediction model for both tasks. Quantitative and qualitative results demonstrate that our method significantly outperforms the state-of-the-art algorithm”
    Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al.
    arXiv, March 2018 (preprint)

  • Thoracic Disease Identification and Localization with Limited Supervision
    Zhe Li et al.
    arXiv, March 2018 (preprint)

  • “To address these two issues, the authors made use of a large data set of 103489 chest radiographs obtained between 2007 and 2016 in 46712 patients. Only 5232 patients with 7390 radiographs had a BNP test value available. This data set with BNP data was termed “labeled,” and the other data set without BNP data was termed “unlabeled.” In the labeled data set, BNP level was dichotomized at 100 ng/L, above which CHF was defined as present. The labeled data set was divided into a training data set (80% of the data) and a test data set (20% of the data).”
    Using a Deep Learning Network to Diagnose Congestive Heart Failure
    Ngo LH
    Radiology 2019; 00:1–2
    https://doi.org/10.1148/radiol.2018182341
  • Nevertheless, clearly the work of Seah et al is highly innovative and has wide applications in many different areas in medical imaging. The concept of GVR is in fact similar to the idea of counterfactuals used in causal inference studies. A GVR-generated deep learning neural network system (as nicely implemented in this study) would definitely improve over time as more labeled images, finer-resolution images, and improved machine learning algorithms become available. One can easily imagine having this system as an additional tool to assist radiologists in delivering better diagnostic information to their patients.
    Using a Deep Learning Network to Diagnose Congestive Heart Failure
    Ngo LH
    Radiology 2019; 00:1–2
    https://doi.org/10.1148/radiol.2018182341
  • BriefCase is a radiological computer-aided triage and notification software indicated for use in the analysis of non-enhanced head CT images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communication of suspected positive findings of pathologies in head CT images, namely Intracranial Hemorrhage (ICH).
    BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected ICH on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
    The results of BriefCase are intended to be used in conjunction with other patient information and based on professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
  • “BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected ICH on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.”
  • AI and Pathology in Lung Cancer
  • Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and subtype of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep convolutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most commonly mutated genes in LUAD. We found that six of them—STK11, EGFR, FAT1, SETBP1, KRAS and TP53—can be predicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations.
  • Purpose: To develop and validate a deep learning–based automatic detection algorithm (DLAD) for malignant pulmonary nodules on chest radiographs and to compare its performance with physicians including thoracic radiologists.
    Conclusion: This deep learning–based automatic detection algorithm outperformed physicians in radiograph classification and nodule detection performance for malignant pulmonary nodules on chest radiographs, and it enhanced physicians’ performances when used as a second reader.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237
  • Materials and Methods: For this retrospective study, DLAD was developed by using 43 292 chest radiographs (normal radiograph–to–nodule radiograph ratio, 34 067:9225) in 34 676 patients (healthy-to-nodule ratio, 30 784:3892; 19 230 men [mean age, 52.8 years; age range, 18–99 years]; 15 446 women [mean age, 52.3 years; age range, 18–98 years]) obtained between 2010 and 2015, which were labeled and partially annotated by 13 board-certified radiologists, in a convolutional neural network. Radiograph classification and nodule detection performances of DLAD were validated by using one internal and four external data sets from three South Korean hospitals and one U.S. hospital. For internal and external validation, radiograph classification and nodule detection performances of DLAD were evaluated by using the area under the receiver operating characteristic curve (AUROC) and jackknife alternative free-response receiver-operating characteristic (JAFROC) figure of merit (FOM), respectively. An observer performance test involving 18 physicians, including nine board-certified radiologists, was conducted by using one of the four external validation data sets. Performances of DLAD, physicians, and physicians assisted with DLAD were evaluated and compared.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 
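The AUROC used for validation above has a simple probabilistic reading: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal sketch via the Mann–Whitney statistic, with made-up toy scores:

```python
import numpy as np

def auroc(pos_scores, neg_scores):
    """AUROC via the Mann-Whitney statistic: the probability that a random
    positive (nodule) case scores higher than a random negative (normal)
    case, counting ties as half."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

nodule = [0.90, 0.80, 0.75, 0.60]   # algorithm scores on nodule radiographs
normal = [0.30, 0.40, 0.55, 0.10]   # algorithm scores on normal radiographs
print(auroc(nodule, normal))        # 1.0 — perfect separation
```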
  • Results: According to one internal and four external validation data sets, radiograph classification and nodule detection performances of DLAD were a range of 0.92–0.99 (AUROC) and 0.831–0.924 (JAFROC FOM), respectively. DLAD showed a higher AUROC and JAFROC FOM at the observer performance test than 17 of 18 and 15 of 18 physicians, respectively (P < .05), and all physicians showed improved nodule detection performances with DLAD (mean JAFROC FOM improvement, 0.043; range, 0.006–0.190; P < .05).
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237 
  • Summary: Our deep learning–based automatic detection algorithm outperformed physicians in radiograph classification and nodule detection performance for malignant pulmonary nodules on chest radiographs, and when used as a second reader, it enhanced physicians’ performances.
    Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs
    Ju Gang Nam et al.
    Radiology 2018; 00:1–11 • https://doi.org/10.1148/radiol.2018180237
  • Implications for Patient Care
    - Our deep learning–based automatic detection algorithm showed excellent detection performances on both a per-radiograph and per-nodule basis in one internal and four external validation data sets.
    - Our deep learning–based automatic detection algorithm demonstrated higher performance than the thoracic radiologist group.
    - When accompanied by our deep learning–based automatic detection algorithm, all physicians improved their nodule detection performances.

  • “The process of achieving value in terms of medical decision support does not remove the clinician or radiologist, but instead, provides easier access to information that might otherwise be inaccessible, inefficient, or difficult to integrate in real-time for the consulting physician. When this information is distilled in a way available to the radiologist, it becomes knowledge that can positively impact the clinician’s judgment in a personalized way in real-time.”


    Reinventing Radiology: Big Data and the Future of Medical Imaging
    Morris MA et al.
    J Thorac Imaging 2018;33:4–16
  • “Many tools have been developed to risk stratify patients into categories of pretest probability for CAD by generalizing patients into low-risk, medium-risk, and high- risk categories. Examples such as the Diamond and Forrester method, the Duke Clinical Score, and the Framingham Risk Score incorporate prior clinical history of cardiac events, certain characteristics of the chest pain, family history, medical history, age, sex, and results of a lipid panel. Imaging findings have been used in this type of risk stratification as well, with coronary calcium scoring.”


    Reinventing Radiology: Big Data and the Future of Medical Imaging
    Morris MA et al.
    J Thorac Imaging 2018;33:4–16
  • “Importantly for radiologists, machine learning algorithms can help address many problems in current-day radiology practices that do not involve image interpretation. Although much of the attention in the machine learning space has focused on the ability of machines to classify image findings, there are many other useful applications of machine learning that will improve efficiency and utilization of radiology practices today. Moreover, we may see a world where a symbiosis of subspecialty experts and machines lead to better care than could be provided by either one alone. Those practices that implement these technologies today are likely to better position themselves for the future.” 


    Machine Learning in Radiology: Applications Beyond Image Interpretation
    Paras Lakhani et al.
    J Am Coll Radiol (in press)
© 1999-2019 Elliot K. Fishman, MD, FACR. All rights reserved.