
Everything you need to know about Computed Tomography (CT) & CT Scanning

3D and Workflow: Deep Learning Imaging Pearls - Learning Modules | CT Scanning | CT Imaging | CT Scan Protocols - CTisus

  • “The common theme from attendees was that everyone participating in medical image evaluation with machine learning is data starved. There is an urgent need to find better ways to collect, annotate, and reuse medical imaging data. Unique domain issues with medical image datasets require further study, development, and dissemination of best practices and standards, and a coordinated effort among medical imaging domain experts, medical imaging informaticists, government and industry data scientists, and interested commercial, academic, and government entities.”


    Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session 
Marc D. Kohli & Ronald M. Summers & J. Raymond Geis
J Digit Imaging (2017) 30:392–399 

  • “The amount and quality of training data are dominant influencers on a machine learning (ML) model’s performance. The common theme from all attendees was that everyone participating in medical image evaluation with machine learning is data starved. This is a particularly pressing problem in the new era of deep learning.”


    Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session 
Marc D. Kohli & Ronald M. Summers & J. Raymond Geis
J Digit Imaging (2017) 30:392–399 

  • “Radiologists’ reports are not definitive expressions of ground truth. A retrospective 20-year literature review in 2001 suggested that the level of significant radiology error ranged between 2% and 20%. This is not limited to radiology; a Mayo Clinic study comparing clinical diagnoses with postmortem autopsies reported that a major diagnosis was missed clinically in 26% of patients.”

    
Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session 
Marc D. Kohli & Ronald M. Summers & J. Raymond Geis
J Digit Imaging (2017) 30:392–399 

  • “In contrast, most publicly available medical image datasets have tens or hundreds of cases, and datasets with more than 5000 well-annotated cases are rare. In the USA, individual healthcare institutions may have 10³ up to, rarely, 10⁷ of an exam type. These common radiology exam types, for example, chest radiographs, unenhanced brain CTs, mammograms, and abdominal CTs, are often high-dimensional data due to variations in pathology, technique, radiology interpretation, patient population, and clinical setting.”

    
Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session 
Marc D. Kohli & Ronald M. Summers & J. Raymond Geis
J Digit Imaging (2017) 30:392–399 

  • “Action items, and priority research topics, for this field of study include the following: 
    - Describe, via a whitepaper, the high-level attributes of reusable medical image datasets suitable to train, test, validate, verify, and regulate ML products 
    - Describe common categories of use cases for medical image datasets, and understand unique dataset attributes applicable to each 
    - Describe the metadata, framework, and standards needed to catalog and discover datasets of medical images appropriate for ML 
    - Understand and describe business cases and models for medical image datasets.”


    Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session 
Marc D. Kohli & Ronald M. Summers & J. Raymond Geis
J Digit Imaging (2017) 30:392–399 

  • “Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data.”

    
Toolkits and Libraries for Deep Learning 
Bradley J. Erickson et al. 
J Digit Imaging (2017) 30:400–405
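The contrast drawn above, learned features versus handcrafted ones, can be illustrated with a single convolution filter. In the toy sketch below the kernel values are fixed by hand; in a real CNN those same values would be learned from labeled images. This example is illustrative only and not from the cited paper.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and sum the elementwise products at each position."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge kernel: in a CNN such kernel values are *learned*
# from labeled data rather than designed by hand.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Toy "image": bright left half, dark right half -> one vertical edge.
img = np.zeros((5, 5))
img[:, :2] = 1.0

feature_map = conv2d(img, edge_kernel)
print(feature_map)  # responds strongly at the edge, zero elsewhere
```

Stacking many such filters, with nonlinearities between layers, is what lets a CNN learn features at multiple spatial scales directly from pixel data.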
  • “AI using deep learning demonstrates promise for detecting critical findings at noncontrast-enhanced head CT. A dedicated algorithm was required to detect SAI. Detection of SAI showed lower sensitivity in comparison to detection of HMH, but showed reasonable performance. Findings support further investigation of the algorithm in a controlled and prospective clinical setting to determine whether it can independently screen noncontrast-enhanced head CT examinations and notify the interpreting radiologist of critical findings.”


    Automated Critical Test Findings Identification and Online Notification System Using Artificial Intelligence in Imaging 
Prevedello LM et al.
Radiology (in press)

  • “To evaluate the performance of an artificial intelligence (AI) tool using a deep learning algorithm for detecting hemorrhage, mass effect, or hydrocephalus (HMH) at non-contrast material-enhanced head computed tomographic (CT) examinations and to determine algorithm performance for detection of suspected acute infarct (SAI).”


    Automated Critical Test Findings Identification and Online Notification System Using Artificial Intelligence in Imaging 
Prevedello LM et al.
Radiology (in press)
  • “Although more medical information than ever is now contained within EHRs, data continue to exist in isolated silos. Appropriate data analytic tools are imperative to handle disparate data and mine these data efficiently and usefully. Owning the clinical and imaging follow-up loops is necessary to cement our clinical relevance.”

    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
 Kruskal JB et al.
JACR (in press)
  • “Medical images have unique issues that make radiology image interpretation more complex than it may first appear to non–domain-expert data scientists. The cost to develop data sets is very high, the process is fraught with legal issues, and, even in best case scenarios, the data sets will constitute only a fraction of the number of animal photos on Facebook and Instagram. A data set of images must be classified into relevant categories, such as “disease” and “normal.” This labor-intensive process depends on humans to classify the data. “Ground truth” is the term for classification accuracy.”

    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
 Kruskal JB et al.
JACR (in press)
  • “Historically, the limiting factor to useful ML projects has been appropriate training data sets. For the moment, at least, radiologists hold great power over the imaging, clinical, and radiology-expertise data needed to develop image interpretation deep learning algorithms. This data combination is hugely valuable. When these algorithms finally do arrive, they have the potential to disrupt radiology adversely.”


    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
 Kruskal JB et al.
JACR (in press)
  • “Whether small or large, good data sets for ML are complex, expensive undertakings. Data sets must be partitioned into training, verification, and validation portions; some of these data will be part of the public domain, whereas other data would be private. Large data sets aggregated from multiple sources face issues including intellectual property (IP), business agreements, and governance, all of which need to be resolved.”


    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
 Kruskal JB et al.
JACR (in press)
  • “In summary, radiologists will not be replaced by machines. Radiologists of the future will be essential data scientists of medicine. We will leverage clinical data science and ML to diagnose and treat patients better, faster, and more efficiently. Although this new clinical data science milieu will undoubtedly alter radiology practice, if performed correctly, it will empower radiologists to continue to provide better actionable recommendations on the basis of new insights from the medical images and other relevant data.”


    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
 Kruskal JB et al.
JACR (in press)
  • “Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records.”

    
Predicting healthcare trajectories from medical records: A deep learning approach.
Pham T et al.
J Biomed Inform. 2017 May;69:218-229. 
  • “Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk.”


    Predicting healthcare trajectories from medical records: A deep learning approach.
Pham T et al.
J Biomed Inform. 2017 May;69:218-229. 
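DeepCare's handling of irregularly timed events is described above only at a high level. The toy sketch below shows the general idea of time-moderated forgetting, with memory decaying exponentially over the gap between visits; it is a schematic analogue under an assumed decay rate, not the authors' LSTM implementation.

```python
import math

def decayed_memory(events, decay=0.1):
    """Schematic time-aware forgetting: the remembered value is
    down-weighted by exp(-decay * gap) for the time elapsed since
    the previous visit, so long gaps between care episodes fade
    memory more than short ones (a rough analogue of DeepCare's
    moderated forget gate; decay rate is an assumption)."""
    memory = 0.0
    prev_t = None
    for t, value in events:
        if prev_t is not None:
            memory *= math.exp(-decay * (t - prev_t))  # forget by elapsed time
        memory += value                                # consolidate new episode
        prev_t = t
    return memory

# Two visit histories with identical values but different spacing:
dense  = decayed_memory([(0, 1.0), (1, 1.0), (2, 1.0)])
sparse = decayed_memory([(0, 1.0), (30, 1.0), (60, 1.0)])
print(dense > sparse)  # closely spaced visits retain more memory
```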
  • “Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy.”


    Predicting healthcare trajectories from medical records: A deep learning approach.
Pham T et al.
J Biomed Inform. 2017 May;69:218-229. 
  • PURPOSE: Diabetic retinopathy (DR) is one of the leading causes of preventable blindness globally. Performing retinal screening examinations on all diabetic patients is an unmet need, and there are many undiagnosed and untreated cases of DR. The objective of this study was to develop robust diagnostic technology to automate DR screening. Referral of eyes with DR to an ophthalmologist for further evaluation and treatment would aid in reducing the rate of vision loss, enabling timely and accurate diagnoses.


    CONCLUSIONS: A fully data-driven artificial intelligence-based grading algorithm can be used to screen fundus photographs obtained from diabetic patients and to identify, with high reliability, which cases should be referred to an ophthalmologist for further evaluation and treatment. The implementation of such an algorithm on a global basis could reduce drastically the rate of vision loss attributed to DR.
Automated Identification of Diabetic Retinopathy Using Deep Learning.
Gargeya R1, Leng T2.
Ophthalmology. 2017 Mar 27. pii: S0161-6420(16)31774-2

  • “In computerized detection of clustered microcalcifications (MCs) from mammograms, the traditional approach is to apply a pattern detector to locate the presence of individual MCs, which are subsequently grouped into clusters. Such an approach is often susceptible to the occurrence of false positives (FPs) caused by local image patterns that resemble MCs. We investigate the feasibility of a direct detection approach to determining whether an image region contains clustered MCs or not. Toward this goal, we develop a deep convolutional neural network (CNN) as the classifier model to which the input consists of a large image window. The multiple layers in the CNN classifier are trained to automatically extract image features relevant to MCs at different spatial scales.”


    Global detection approach for clustered microcalcifications in mammograms using a deep learning network.
Wang J, Nishikawa RM, Yang Y
J Med Imaging (Bellingham). 2017 Apr;4(2):024501
  • “This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on.”


    Deep Learning in Medical Image Analysis.
Shen D, Wu G, Suk HI
Annu Rev Biomed Eng. 2017 (in press)

  • “High-grade glioma is the most aggressive and severe brain tumor that leads to death of almost 50% patients in 1-2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time consuming, laborious and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and also propose a new network architecture for using multi-channel data and learning supervised features.”

    
3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients.
Nie D et al.
Med Image Comput Comput Assist Interv. 2016 Oct;9901:212-220
  • “While computerised tomography (CT) may have been the first imaging tool to study human brain, it has not yet been implemented into clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, with the nature of being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact on the application of the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease.”


    Classification of CT brain images based on deep learning networks.
Gao XW1, Hui R2, Tian Z
Comput Methods Programs Biomed. 2017 Jan;138:49-56
  • “The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification. We also briefly review some of the state of the art in fusion of radiology and pathology images and also combining digital pathology derived image measurements with molecular "omics" features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology.”


    Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A1, Lee G2.
Med Image Anal. 2016 Oct;33:170-5
  • “The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder.”


    Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets.
Cha KH et al.
Med Phys. 2016 Apr;43(4):1882

  • Along with the pivotal clinical features, we finally train a support vector machine to predict if the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods can achieve an accuracy as high as 89.9%. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting the OS time, which provides valuable insights into functional neuro-oncological applications.

    
3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients.
Nie D et al.
Med Image Comput Comput Assist Interv. 2016 Oct;9901:212-222
  • OBJECTIVE. The purposes of this article are to describe concepts that radiologists should understand to evaluate machine learning projects, including common algorithms, supervised as opposed to unsupervised techniques, statistical pitfalls, and data considerations for training and evaluation, and to briefly describe ethical dilemmas and legal risk. 

    CONCLUSION. Machine learning includes a broad class of computer programs that improve with experience. The complexity of creating, training, and monitoring machine learning indicates that the success of the algorithms will require radiologist involvement for years to come, leading to engagement rather than replacement. 


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “ML comprises a broad class of statistical analysis algorithms that iteratively improve in response to training data to build models for autonomous predictions. In other words, computer program performance improves automatically with experience. The goal of an ML algorithm is to develop a mathematic model that fits the data. Once this model fits known data, it can be used to predict the labels of new data. Because radiology is inherently a data interpretation profession (extracting features from images and applying a large knowledge base to interpret those features), it provides ripe opportunities to apply these tools to improve practice.”


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “Most ML relevant to radiology is supervised. In supervised ML, data are labeled before the model is trained. For example, in training a project to identify a specific brain tumor type, the label would be tumor pathologic results or genomic information. These labels, also known as ground truth, can be as specific or general as needed to answer the question. The ML algorithm is exposed to enough of these labeled data to allow them to morph into a model designed to answer the question of interest. Because of the large number of well-labeled images required to train models, curating these datasets is often laborious and expensive.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
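The supervised workflow described above, labeled data in, predictive model out, can be sketched with a deliberately simple classifier. The features, labels, and class names below are invented for illustration; they are not from the cited paper.

```python
import numpy as np

# Hypothetical toy features (e.g., two image-derived measurements)
# with ground-truth labels supplied up front, as in supervised ML.
X_train = np.array([[1.0, 1.2], [0.9, 1.0], [3.0, 3.1], [3.2, 2.9]])
y_train = np.array([0, 0, 1, 1])   # 0 = "normal", 1 = "tumor" (illustrative)

def fit_centroids(X, y):
    """Learn one centroid per class from the labeled training data."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign the label of the nearest learned centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

model = fit_centroids(X_train, y_train)
print(predict(model, np.array([2.9, 3.0])))  # -> 1
```

A deep learning model replaces the hand-chosen features and nearest-centroid rule with learned ones, but the supervised contract is the same: the labels must exist before training starts.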
  • “In unsupervised ML, unlabeled data are exposed to the algorithm with the goal of generating labels that will meaningfully organize the data. This is typically done by identifying useful clusters of data based on one or more dimensions. Compared with supervised techniques, unsupervised learning sometimes requires much larger training datasets. Unsupervised learning is useful in identifying meaningful clustering labels that can then be used in supervised training to develop a useful ML algorithm. This blend of supervised and unsupervised learning is known as semisupervised.”


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
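The unsupervised side can be sketched the same way: a minimal k-means pass that invents cluster labels from unlabeled points, labels that could then seed supervised training in the "semisupervised" blend the quote describes. The data and cluster count below are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k=2, iters=10, seed=0):
    """Minimal k-means: derive cluster labels from unlabeled data by
    alternating nearest-center assignment and center recomputation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Unlabeled toy data with two well-separated groups; the cluster ids
# the algorithm produces are machine-generated labels, not ground truth.
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
labels = kmeans(X)
print(labels)  # first two points share one label, last two the other
```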
  • “At the outset of an ML project, data are divided into three sets: training, test, and validation. The training dataset is sent through the algorithm repeatedly to establish values for each hyperparameter. After the hyperparameters stabilize, the test dataset is sent through the model, and the accuracy of the predictions or classifications is evaluated. At this point, the trainer decides whether the model is fully trained or adjusts the algorithm architecture to repeat training. After several iterative cycles of training and testing, the algorithm is fed validation data for evaluation. Application of ML to radiology involves both medical knowledge and pixel data.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
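The three-way partition described above can be sketched as a single shuffle-and-slice. The 70/15/15 proportions are an assumption for illustration, not taken from the paper.

```python
import random

def three_way_split(cases, train=0.7, test=0.15, seed=42):
    """Shuffle once, then partition into train / test / validation,
    mirroring the workflow described above (proportions illustrative)."""
    cases = list(cases)
    random.Random(seed).shuffle(cases)   # fixed seed for reproducibility
    n = len(cases)
    n_train = int(n * train)
    n_test = int(n * test)
    return (cases[:n_train],
            cases[n_train:n_train + n_test],
            cases[n_train + n_test:])    # validation = remainder

train_set, test_set, val_set = three_way_split(range(100))
print(len(train_set), len(test_set), len(val_set))  # 70 15 15
```

Keeping the validation slice untouched until the final evaluation is what makes its accuracy estimate honest; reusing it during the train/test tuning cycles would leak information into the model.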
  • “Pattern recognition for complex, high-dimensionality images are generally trained on large datasets, but such datasets, particularly with appropriate labels, are rare. To produce such sets can be expensive and time-consuming because labeling is difficult, and preprocessing of images must typically be performed to provide useful inputs to the ML algorithm. Newer deep learning and CNN techniques can help by incorporating the image-preprocessing step into the algorithm itself, saving manual labor and potentially leading to the discovery of preprocessing techniques that perform better in the subsequent neural network layers. These techniques require either tightly focused and well defined data or extremely large datasets.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “The FDA has not issued rules about test datasets, transparency, or verification procedures. It will probably evaluate models and associated test datasets on a case-by-case basis. How this will evolve is unclear at present. In addition, regulation that created the FDA was enacted before the availability of ML, and existing laws regarding devices are difficult to apply to ML algorithms.”


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “ML encompasses many powerful tools with the potential to dramatically increase the information radiologists extract from images. It is no exaggeration to suggest the tools will change radiology as dramatically as the advent of cross-sectional imaging did. We believe that owing to the narrow scope of existing applications of ML and the complexity of creating and training ML models, the possibility that radiologists will be replaced by machines is at best far in the future. Successful application of ML to the radiology domain will require that radiologists extend their knowledge of statistics and data science to supervise and correctly interpret ML-derived results.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “An automated machine learning computer system was created to detect, anatomically localize, and categorize vertebral compression fractures at high sensitivity and with a low false-positive rate, as well as to calculate vertebral bone density, on CT images.”
Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images 
Burns JE et al.
Radiology (in press)
    “Sensitivity for detection or localization of compression fractures was 95.7% (201 of 210; 95% confidence interval [CI]: 87.0%, 98.9%), with a false-positive rate of 0.29 per patient. Additionally, sensitivity was 98.7% and specificity was 77.3% at case-based receiver operating characteristic curve analysis.”


    Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images 
Burns JE et al.
Radiology (in press)
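The headline sensitivity figure follows directly from the counts quoted above (201 of 210 fractures detected):

```python
def sensitivity(true_positives, total_positives):
    """Sensitivity = detected positives / all positives present."""
    return true_positives / total_positives

# Counts quoted from the Burns et al. study: 201 of 210 fractures detected.
sens = sensitivity(201, 210)
print(round(100 * sens, 1))  # 95.7
```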
  • “This system performed with 95.7% sensitivity in fracture detection and localization to the correct vertebral level, with a low false-positive rate. There was a high level of overall agreement (95%) for compression morphology and 68% overall agreement for severity categorization relative to radiologist classification.”


    Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images 
Burns JE et al.
Radiology (in press)
  • A fully automated machine learning software system with which to detect, localize, and classify compression fractures and determine the bone density of thoracic and lumbar vertebral bodies on CT images was developed and validated. 
    - The computer system has a sensitivity of 95.7% in the detection of compression fractures and in the localization of these fractures to the correct vertebrae, with a false-positive rate of 0.29 per patient. 
    - The accuracy of this computer system in fracture classification by Genant type was 95% (weighted κ = 0.90). 


    Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images 
Burns JE et al.
Radiology (in press)
  • “By virtue of its information technology-oriented infrastructure, the specialty of radiology is uniquely positioned to be at the forefront of efforts to promote data sharing across the healthcare enterprise, including particularly image sharing. The potential benefits of image sharing for clinical, research, and educational applications in radiology are immense. In this work, our group—the Association of University Radiologists (AUR) Radiology Research Alliance Task Force on Image Sharing—reviews the benefits of implementing image sharing capability, introduces current image sharing platforms and details their unique requirements, and presents emerging platforms that may see greater adoption in the future. By understanding this complex ecosystem of image sharing solutions, radiologists can become important advocates for the successful implementation of these powerful image sharing resources.”


    Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Cloud-based image sharing platforms based on interoperability standards such as the IHE-XDS-I profile are currently the most widely used method for sharing of clinical radiological images and will likely continue to grow in the coming years. Conversely, no single image sharing platform has emerged as a clear leader for research and educational applications. Radiologists, clinicians, investigators, technologists, educators, administrators, and patients all stand to benefit from medical image sharing. With their continued support, more widespread adoption of image sharing infrastructure will assuredly improve the standard of clinical care, research, and education in modern radiology.”

    
Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Sharing of primary imaging data also makes further research more efficient and can substantially reduce the cost of subsequent studies. In research efforts utilizing extremely large data sets, such as those found in radiogenomics and radiomics research, sharing and exchange of images facilitates linking radiological data with large biological and genetic data sets, thereby enabling the use of big data analysis methods to uncover correlations between imaging phenotypes and underlying genetic and functional molecular expression profiles.”

    
Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Practical considerations must also factor into the design of implemented workflow. For example, transfer of DICOM imaging studies through portable media such as compact discs has been demonstrated as untenable and unsustainable between institutions as image sharing gains traction. Instead, decentralized upload of DICOM data into a PACS system from remote terminals helps distribute the workload between multiple departments, reduces the time required by a central physical processing center, saves the time required for physical transportation of the media, and retains the physical media at the point of care.”

    
Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Commercial PACS and standalone image sharing platforms also have not been evaluated in the scientific literature, but market analysis is being conducted through surveys by research companies such as peer60. In peer60’s report, vendors with the largest reported market share among large healthcare institutions include McKesson’s Conserus Image Repository, IBM’s Merge, Nuance PowerShare Network, GE Centricity with OneMedNet BEAM, LifeIMAGE, Philips IntelliSpace Portal, ABI Health Sectra, and Agfa Healthcare Enterprise Imaging.”


    Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Modern educational practices in radiology require facilitated learning using computer-based modules and active simulation as part of the learning experience. As part of this trend, there has been tremendous growth in the number of online, case-based learning tools. In their purest form, these tools can take the form of PACS-like, web-based teaching files, which offer almost unlimited scalability for case acquisition and distribution, and may also include the ability to pose questions to radiology trainees, track responses, and categorize cases, ideally with seamless integration with a clinical PACS system.”


    Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Cloud-based image sharing platforms based on interoperability standards such as the IHE-XDS-I profile are currently the most widely used method for sharing of clinical radiological images and will likely continue to grow in the coming years. Conversely, no single image sharing platform has emerged as a clear leader for research and educational applications. Radiologists, clinicians, investigators, technologists, educators, administrators, and patients all stand to benefit from medical image sharing. With their continued support, more widespread adoption of image sharing infrastructure will assuredly improve the standard of clinical care, research, and education in modern radiology.”


    Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Even more exciting is the finding that in some cases, computers seem to be able to “see” patterns that are beyond human perception. This discovery has led to substantial and increased interest in the field of machine learning—specifically, how it might be applied to medical images.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)

  • “These algorithms have been used for several challenging tasks, such as pulmonary embolism segmentation with computed tomographic (CT) angiography (3,4), polyp detection with virtual colonoscopy or CT in the setting of colon cancer (5,6), breast cancer detection and diagnosis with mammography (7), brain tumor segmentation with magnetic resonance (MR) imaging (8), and detection of the cognitive state of the brain with functional MR imaging to diagnose neurologic disease (eg, Alzheimer disease).”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “If the algorithm system optimizes its parameters such that its performance improves—that is, more test cases are diagnosed correctly—then it is considered to be learning that task.”



    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “Training: The phase during which the machine learning algorithm system is given labeled example data with the answers (ie, labels)—for example, the tumor type or correct boundary of a lesion. The set of weights or decision points for the model is updated until no substantial improvement in performance is achieved.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
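The training loop described above — show labeled examples, update the decision point, stop when performance no longer improves — can be sketched in pure Python for a one-parameter threshold classifier. The data values and candidate thresholds below are invented for illustration, not taken from the paper:

```python
# Minimal sketch of the training phase described above: labeled examples
# are shown to the model, and its single decision point (a threshold)
# is kept only while accuracy keeps improving. Data values are invented.

# (feature value, label) pairs, e.g. lesion intensity -> benign(0)/malignant(1)
examples = [(10, 0), (20, 0), (35, 0), (60, 1), (75, 1), (90, 1)]

def accuracy(threshold):
    correct = sum(1 for x, y in examples if (x > threshold) == bool(y))
    return correct / len(examples)

def train(thresholds):
    best, best_acc = thresholds[0], accuracy(thresholds[0])
    for t in thresholds[1:]:
        acc = accuracy(t)
        if acc > best_acc:          # update only while performance improves
            best, best_acc = t, acc
    return best, best_acc

threshold, acc = train(list(range(0, 100, 5)))
```

A real system optimizes thousands of weights with gradient descent rather than scanning one threshold, but the stopping criterion — no substantial improvement on the labeled set — is the same idea.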
  • “Deep learning, also known as deep neural network learning, is a new and popular area of research that is yielding impressive results and growing fast. Early neural networks were typically only a few (<5) layers deep, largely because the computing power was not sufficient for more layers and owing to challenges in updating the weights properly. Deep learning refers to the use of neural networks with many layers—typically more than 20.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “CNNs are similar to regular neural networks. The difference is that CNNs assume that the inputs have a geometric relationship—like the rows and columns of images. The input layer of a CNN has neurons arranged to produce a convolution of a small image (ie, kernel) with the image. This kernel is then moved across the image, and its output at each location as it moves across the input image creates an output value. Although CNNs are so named because of the convolution kernels, there are other important layer types that they share with other deep neural networks. Kernels that detect important features (eg, edges and arcs) will have large outputs that contribute to the final object to be detected.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
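The kernel operation described above can be sketched in a few lines of pure Python: a small kernel is slid across the image, and positions matching the feature the kernel encodes (here a vertical edge) produce large output values. The toy image and kernel are illustrative assumptions:

```python
# Sketch of the convolution step described above: a 2x2 kernel is moved
# across a 4x4 "image"; at each position the elementwise products are
# summed into one output value. Large outputs mark locations matching
# the feature the kernel encodes (a vertical edge). Toy data only.

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [
    [-1, 1],
    [-1, 1],
]

def convolve(img, ker):
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(ker[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

feature_map = convolve(image, kernel)
```

The peak response (18) occurs exactly where the image steps from 0 to 9 — the edge the kernel was designed to detect; a CNN learns such kernels from data rather than having them hand-specified.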
  • “Machine learning is already being applied in the practice of radiology, and these applications will probably grow at a rapid pace in the near future. The use of machine learning in radiology has important implications for the practice of medicine, and it is important that we engage this area of research to ensure that the best care is afforded to patients. Understanding the properties of machine learning tools is critical to ensuring that they are applied in the safest and most effective manner.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “The combination of big data and artificial intelligence, referred to by some as the fourth industrial revolution, will change radiology and pathology along with other medical specialties. Although reports of radiologists and pathologists being replaced by computers seem exaggerated, these specialties must plan strategically for a future in which artificial intelligence is part of the health care workforce.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Watson has a boundless capacity for learning—and now has 30 billion images to review after IBM acquired Merge. Watson may become the equivalent of a general radiologist with super-specialist skills in every domain—a radiologist’s alter ego and nemesis.”


    Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “For example, a radiologist typically views 4000 images in a CT scan of multiple body parts (“pan scan”) in patients with multiple trauma. The abundance of data has changed how radiologists interpret images; from pattern recognition, with clinical context, to searching for needles in haystacks; from inference to detection. The radiologist, once a maestro with a chest radiograph, is now often visually fatigued searching for an occult fracture in a pan scan.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Radiologists should identify cognitively simple tasks that could be addressed by artificial intelligence, such as screening for lung cancer on CT. This involves detecting, measuring, and characterizing a lung nodule, the management of which is standardized. A radiology residency or a medical degree is not needed to detect lung nodules.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Because pathology and radiology have a similar past and a common destiny, perhaps these specialties should be merged into a single entity, the “information specialist,” whose responsibility will not be so much to extract information from images and histology but to manage the information extracted by artificial intelligence in the clinical context of the patient.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “The information specialist would interpret the important data, advise on the added value of another diagnostic test, such as the need for additional imaging, anatomical pathology, or a laboratory test, and integrate information to guide clinicians. Radiologists and pathologists will still be the physician’s physician.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “If artificial intelligence becomes adept at screening for lung and breast cancer, it could screen populations faster than radiologists and at a fraction of cost. The information specialist could ensure that images are of sufficient quality and that artificial intelligence is yielding neither too many false-positive nor too many false-negative results. The efficiency from the economies of scale because of artificial intelligence could benefit not just developed countries, such as the United States, but developing countries hampered by access to specialists. A single information specialist, with the help of artificial intelligence, could potentially manage screening for an entire town in Africa.”


    Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “There may be resistance to merging 2 distinct medical specialties, each of which has unique pedagogy, tradition, accreditation, and reimbursement. However, artificial intelligence will change these diagnostic fields. The merger is a natural fusion of human talent and artificial intelligence. United, radiologists and pathologists can thrive with the rise of artificial intelligence.”


    Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Information specialists should train in the traditional sciences of pathology and radiology. The training should take no longer than it presently takes because the trainee will not spend time mastering the pattern recognition required to become a competent radiologist or pathologist. Visual interpretation will be restricted to perceptual tasks that artificial intelligence cannot perform as well as humans. The trainee need only master enough medical physics to improve suboptimal quality of medical images. Information specialists should be taught Bayesian logic, statistics, and data science and be aware of other sources of information such as genomics and biometrics, insofar as they can integrate data from disparate sources with a patient’s clinical condition.”


    Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome.”

    
Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • “Image analysis and computer assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high resolution digitized whole slide images. Additionally there has been recent substantial interest in combining and fusing radiologic imaging and proteomics and genomics based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again there is a paucity of powerful tools for combining disease specific features that manifest across multiple different length scales.”

    
Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • “It is clear that molecular changes in gene expression solicit a structural and vascular change in phenotype that is in turn observable on the imaging modality under consideration. For instance, tumor morphology in standard H&E tissue specimens reflects the sum of all molecular pathways in tumor cells. By the same token radiographic imaging modalities such as MRI and CT are ultimately capturing structural and functional attributes reflective of the biological pathways and cellular morphology characterizing the disease. Historically the concept and importance of radiology-pathology fusion has been around and recognized.”

    
Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • “For the biomedical image computing, machine learning, and bioinformatics scientists, the aforementioned challenges will present new and exciting opportunities for developing new feature analysis and machine learning opportunities. Clearly though, the image computing community will need to work closely with the pathology community and potentially whole slide imaging and microscopy vendors to be able to develop new and innovative solutions to many of the critical image analysis challenges in digital pathology.”


    Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • PURPOSE: Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans.


    CONCLUSIONS: The proposed method is fully automatic without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.
Automatic 3D liver location and segmentation via convolutional neural network and graph cut.
Lu F et al.
Int J Comput Assist Radiol Surg. 2016 Sep 7. [Epub ahead of print]
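The agreement measure cited in the conclusion — correlation between automatic and manual volume references — is typically reported as a Pearson coefficient. A minimal sketch, with invented volume values (the paper's actual data are not reproduced here):

```python
# Pearson correlation between automatic and manual liver-volume
# estimates, of the kind used to validate segmentation accuracy.
# The volume values (in ml) are invented for illustration.
from math import sqrt

manual    = [1510.0, 1320.0, 1705.0, 1450.0, 1600.0]   # reader's reference
automatic = [1495.0, 1340.0, 1690.0, 1470.0, 1585.0]   # algorithm output

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(manual, automatic)
```

An r close to 1.0 is the kind of evidence the authors invoke when arguing the automatic method could replace manual segmentation.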
  • “Precision medicine relies on an increasing amount of heterogeneous data. Advances in radiation oncology, through the use of CT Scan, dosimetry and imaging performed before each fraction, have generated a considerable flow of data that needs to be integrated. In the same time, Electronic Health Records now provide phenotypic profiles of large cohorts of patients that could be correlated to this information. In this review, we describe methods that could be used to create integrative predictive models in radiation oncology. Potential uses of machine learning methods such as support vector machine, artificial neural networks, and deep learning are also discussed.”

    
Big Data and machine learning in radiation oncology: State of the art and future prospects.
Bibault JE, Giraud P, Burgun A
Cancer Lett. 2016 May 27. pii: S0304-3835(16)30346-9. doi: 10.1016/j.canlet.2016.05.033. [Epub ahead of print]
  • “In recent years machine learning (ML) has revolutionized the fields of computer vision and medical image analysis. Yet, a number of doubts remain about the applicability of ML in clinical practice. Medical doctors may question the lack of interpretability of classifiers; or it is argued that ML methods require huge amounts of training data. Here we discuss some of these issues and show: 
1. how decision trees (a special class of ML models) can be understood as an automatically-optimized generalization of conventional algorithms, and 
2. how the issue of collecting labelled data (e.g. images) applies to both manually-designed and learning-based algorithms.”


    Machine learning for medical images analysis
Criminisi A
Medical Image Analysis
Volume 33, October 2016, Pages 91–93
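Point 1 above — a decision tree as an automatically-optimized generalization of a conventional, hand-designed algorithm — can be illustrated with a one-node tree (a decision stump) that learns its threshold from labeled data instead of having it fixed by the designer. The data and the hand-picked threshold are invented for illustration:

```python
# A hand-designed rule fixes its threshold in advance; a decision stump
# (a one-node decision tree) searches labeled data for the threshold
# that minimizes misclassifications -- an automatically-optimized
# version of the same algorithm. Data invented for illustration.

samples = [(1.0, 0), (1.5, 0), (2.0, 0), (2.2, 1), (3.0, 1), (3.5, 1)]

def hand_designed(x, threshold=3.0):      # designer's guessed cut-off
    return int(x > threshold)

def fit_stump(data):
    candidates = sorted(x for x, _ in data)
    def errors(t):
        return sum(int(x > t) != y for x, y in data)
    return min(candidates, key=errors)    # pick the best split from data

learned_t = fit_stump(samples)
errors_hand = sum(hand_designed(x) != y for x, y in samples)
errors_learned = sum(int(x > learned_t) != y for x, y in samples)
```

The learned stump makes no errors on this data while the hand-designed rule makes two; a full decision tree simply repeats this optimized split recursively.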
  • “Ultimately, these researchers argue, the complex answers given by machine learning have to be part of science’s toolkit because the real world is complex: for phenomena such as the weather or the stock market, a reductionist, synthetic description might not even exist.”

    “There are things we cannot verbalize,” says Stéphane Mallat, an applied mathematician at the École Polytechnique in Paris.
  • “When you ask a medical doctor why he diagnosed this or this, he’s going to give you some reasons,” he says. “But how come it takes 20 years to make a good doctor? Because the information is just not in books.” 


    Can we open the black box of AI?
Artificial intelligence is everywhere. But before scientists trust it, they first need to understand how machines learn.
 Davide Castelvecchi
Nature Vol 538, Issue 7623 Oct 2016
  • “Each of these tasks is amenable to automation. Organs can be located by the computer using atlas- and landmark-based methods. Organ volume and shape can be assessed by finding the edges of the organs in three dimensions, a process known as segmentation. Lesions can be detected and segmented by assessing the patterns of Hounsfield unit intensities in the organs to identify anomalies. Example patterns include variations in intensities, texture, and shape. The quantitative measurements of these patterns are known as features.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
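The “features” described above — quantitative measurements of intensity patterns — can be sketched as simple statistics over a region of interest, with an anomaly flagged when a region's statistics fall outside an expected range. The Hounsfield values and the normal-liver range below are illustrative assumptions:

```python
# Sketch of simple quantitative features over a region of interest:
# mean and variance of Hounsfield-unit intensities, with an anomaly
# flagged when the region mean leaves an expected range.
# All values are invented for illustration.
from statistics import mean, pvariance

normal_region = [55, 60, 58, 62, 57, 61]      # typical liver HU values
lesion_region = [20, 25, 18, 22, 24, 19]      # hypodense lesion HU values

def features(region):
    return {"mean": mean(region), "variance": pvariance(region)}

def is_anomalous(region, lo=45, hi=75):       # assumed normal-liver range
    return not (lo <= mean(region) <= hi)

f_normal = features(normal_region)
f_lesion = features(lesion_region)
```

Real systems use far richer texture and shape features, but the principle is the same: reduce a region of pixels to numbers a classifier can compare.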
  • “In the other, generic features are used and a machine-learning algorithm is taught to distinguish disease from nondisease sites by being trained on labeled cases, without the need for handcrafted features. The latter approach, which is made feasible by recent advances in computer science known colloquially as deep learning, is increasingly being used because it markedly increases the efficiency of image analysis development.”

    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “To perform fully automated abdominal CT image interpretation at the level of a trained radiologist, the computer must assess all the organs and detect all the abnormalities present in the images. Although this is a seemingly daunting task for the software developer, the numbers of organs and potential abnormalities are finite and can be addressed methodically.” 

Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “The pancreas is a highly deformable organ that has a shape and location that is greatly influenced by the presence of adjacent structures. This makes automated image analysis of the pancreas extremely challenging. A number of different approaches have been taken to automated pancreas analysis, including the use of anatomic atlases, the location of the splenic and portal veins, and state-of-the-art computer science methods such as deep learning.”

    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “A recent advance in computer science is the refinement of neural networks, a type of machine learning classifier used to make decisions from data. This refinement, known generically as deep learning but more specifically as convolutional neural networks, has shown dramatic improvements in automated intelligence applications. Initially drawing attention for impressive improvements in speech recognition and natural image interpretation, deep learning is now being applied to medical images, as described already in the sections on the pancreas and colitis.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “Another underutilized approach is the use of automated image analysis running in the background to triage patients with potentially life-threatening conditions, to reduce common interpretative errors, to perform large-scale epidemiologic studies, and to coordinate and interpret large volumes of clinical, genomic, and imaging data. As radiology practices consolidate into larger hospital-led groups, it will be more feasible to implement such systems.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “Similarly, fully automated abdominal CT image interpretation is likely to change the role of radiologists, but they will still be responsible for taking care of the patient and making the final diagnosis. The automated report could improve reading efficiency, but radiologists will need to be vigilant to avoid placing too much trust in the computer.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “The use of automated image interpretation by nonradiologists will need to be considered. Such users might include radiology technologists, radiologist assistants, and nonradiologist clinicians. The technology could lead to further commoditization of radiology services.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “In conclusion, advances in abdominal CT automated image interpretation are occurring at a rapid pace. In the not too distant future, these advances may enable fully automated image interpretation. Similar advances may occur in other body regions and with other imaging modalities. Risks and benefits are difficult to foresee but may include increased pressures for commoditization, better reading efficiency, fewer interpretive errors, and a more quantitative radiology report. The primary focus must ultimately be on improved patient care.”

    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “The current design and implementation of Medical Workstations has failed and must be replaced by Knowledge Stations as we move beyond image acquisition and into knowledge acquisition like deep learning.”


    Rethinking Medical Workstations and the Transition to Knowledge Stations
Horton KM, Fishman EK
JACR (in progress)
  • There are a number of ways that the field of deep learning has been characterized. Deep learning is a class of machine learning algorithms that:
    - use a cascade of many layers of nonlinear processing units for feature extraction and transformation, where each successive layer uses the output from the previous layer as input; the algorithms may be supervised or unsupervised, and applications include pattern analysis (unsupervised) and classification (supervised);
    - are based on the (unsupervised) learning of multiple levels of features or representations of the data, with higher-level features derived from lower-level features to form a hierarchical representation;
    - are part of the broader machine learning field of learning representations of data;
    - learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.

 Wikipedia
  • Deep learning algorithms are based on distributed representations. The underlying assumption behind distributed representations is that observed data are generated by the interactions of factors organized in layers. Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can be used to provide different amounts of abstraction.
 Wikipedia
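The layered-representation idea above — each layer consumes the previous layer's output, so depth corresponds to composed levels of abstraction — can be sketched as function composition. The three "layers" here are hypothetical stand-ins, not a real trained network:

```python
# Sketch of hierarchical representation: each layer transforms the
# previous layer's output, so depth is composition of transformations.
# The layers are toy hand-written functions, not a trained network.

def layer1(pixels):                 # low level: local differences (edges)
    return [b - a for a, b in zip(pixels, pixels[1:])]

def layer2(edges):                  # mid level: edge strength
    return [abs(e) for e in edges]

def layer3(strengths):              # high level: one summary score
    return sum(strengths)

def deep_model(pixels, layers=(layer1, layer2, layer3)):
    out = pixels
    for layer in layers:            # successive layers use previous output
        out = layer(out)
    return out

score = deep_model([0, 0, 9, 9, 0])
```

In an actual deep network the per-layer transformations are learned from data rather than hand-written, but the flow — raw input, to low-level features, to progressively more abstract representations — is the same.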
  • Situational Awareness
    Situation awareness involves being aware of what is happening in the vicinity to understand how information, events, and one's own actions will impact goals and objectives, both immediately and in the near future. One with an adept sense of situation awareness generally has a high degree of knowledge with respect to inputs and outputs of a system, an innate "feel" for situations, people, and events that play out because of variables the subject can control. Lacking or inadequate situation awareness has been identified as one of the primary factors in accidents attributed to human error. Thus, situation awareness is especially important in work domains where the information flow can be quite high and poor decisions may lead to serious consequences (such as piloting an airplane, functioning as a soldier, or treating critically ill or injured patients).
  • For select cancer histologies, aggressive focal therapy of oligometastatic lesions is already the clinical standard of care (i.e. colorectal cancer and sarcomas), while for other tumor types the evidence is still emerging (i.e. prostate, breast, etc.). It is increasingly important, therefore, for the radiologist interpreting oncology patients’ staging or restaging examinations to be aware of those diseases for which targeted therapy of oligometastases may be undertaken to effectively guide such management. The improved imaging resolution provided by technological advances promise to aid in the detection of subtle sites of disease to ensure the identification of patients with oligometastases amenable to targeted treatment. 


    What the Radiologist Needs to Know to Guide Patient Management 
Steven P. Rowe, MD, Hazem Hawasli, Elliot K. Fishman, MD, Pamela T. Johnson
Acad Radiol 2016; 23:326–328
  • “As such, some of the impetus for exploring aggressive and potentially curative treatment in patients with oligometastases can come from improvements in imaging technology and techniques. Thus, radiologists should not only understand the implications of the new paradigm of oligometastatic disease for how they interpret studies, but they should also actively engage in the research necessary to optimize the selection of patients for aggressive therapy of oligometastases.“

    What the Radiologist Needs to Know to Guide Patient Management 
Steven P. Rowe, MD, Hazem Hawasli, Elliot K. Fishman, MD, Pamela T. Johnson
Acad Radiol 2016; 23:326–328
© 1999-2017 Elliot K. Fishman, MD, FACR. All rights reserved.