Everything you need to know about Computed Tomography (CT) & CT Scanning

3D and Workflow: Deep Learning Imaging Pearls - Learning Modules | CT Scanning | CT Imaging | CT Scan Protocols - CTisus
  • High-grade glioma is the most aggressive and severe brain tumor, leading to death in almost 50% of patients within 1-2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time-consuming, laborious and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and also propose a new network architecture for using multi-channel data and learning supervised features. Along with the pivotal clinical features, we finally train a support vector machine to predict if the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods can achieve an accuracy as high as 89.9%. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting the OS time, which provides valuable insights into functional neuro-oncological applications.

    
3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients.
Nie D et al.
Med Image Comput Comput Assist Interv. 2016 Oct;9901:212-222
  • OBJECTIVE. The purposes of this article are to describe concepts that radiologists should understand to evaluate machine learning projects, including common algorithms, supervised as opposed to unsupervised techniques, statistical pitfalls, and data considerations for training and evaluation, and to briefly describe ethical dilemmas and legal risk.

    CONCLUSION. Machine learning includes a broad class of computer programs that improve with experience. The complexity of creating, training, and monitoring machine learning indicates that the success of the algorithms will require radiologist involvement for years to come, leading to engagement rather than replacement. 


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “ML comprises a broad class of statistical analysis algorithms that iteratively improve in response to training data to build models for autonomous predictions. In other words, computer program performance improves automatically with experience. The goal of an ML algorithm is to develop a mathematic model that fits the data. Once this model fits known data, it can be used to predict the labels of new data. Because radiology is inherently a data interpretation profession—extracting features from images and applying a large knowledge base to interpret those features—it provides ripe opportunities to apply these tools to improve practice.”


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
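The fit-then-predict pattern the quote describes — build a model from known data, then label new data — can be sketched with a toy nearest-centroid classifier. The two-feature vectors and the "benign"/"malignant" labels below are invented for illustration, not real imaging data.

```python
# Toy illustration of fit-then-predict: average each class's feature
# vectors ("fit the data"), then label a new case by its nearest centroid.

def fit_nearest_centroid(X, y):
    """Average the feature vectors of each class."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        n = len(rows)
        centroids[label] = [sum(col) / n for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Label a new case by its closest class centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

# Hypothetical 2-feature training set: [lesion size, texture score]
X_train = [[1.0, 2.0], [1.2, 1.8], [5.0, 6.0], [5.5, 6.2]]
y_train = ["benign", "benign", "malignant", "malignant"]

model = fit_nearest_centroid(X_train, y_train)
print(predict(model, [1.1, 2.1]))   # falls near the benign centroid
print(predict(model, [5.2, 6.1]))   # falls near the malignant centroid
```

A real radiology model would use far richer features and a stronger learner, but the two-step structure — fit on known data, predict on new data — is the same.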
  • “Most ML relevant to radiology is supervised. In supervised ML, data are labeled before the model is trained. For example, in training a project to identify a specific brain tumor type, the label would be tumor pathologic results or genomic information. These labels, also known as ground truth, can be as specific or general as needed to answer the question. The ML algorithm is exposed to enough of these labeled data to allow them to morph into a model designed to answer the question of interest. Because of the large number of well-labeled images required to train models, curating these datasets is often laborious and expensive.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
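The supervised setup above — ground-truth labels shaping a model — can be sketched as learning a single decision cutoff from labeled examples. The "texture score" values and labels below are invented stand-ins for labeled image features.

```python
# Minimal supervised learning sketch: try every candidate cutoff on a
# single feature and keep the one with the fewest errors against the
# ground-truth labels.

def learn_threshold(values, labels):
    """Return the cutoff that best separates 'tumor' from 'normal'."""
    best_t, best_err = None, len(values) + 1
    for t in sorted(values):
        err = sum((v >= t) != (lab == "tumor")
                  for v, lab in zip(values, labels))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

values = [10, 12, 14, 40, 44, 50]   # hypothetical per-lesion texture scores
labels = ["normal", "normal", "normal", "tumor", "tumor", "tumor"]

t = learn_threshold(values, labels)
print(t)  # the cutoff separating the two labeled groups
```

The labels do all the work here: without them, the algorithm would have no error signal to optimize, which is exactly the distinction the next passage draws with unsupervised learning.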
  • “In unsupervised ML, unlabeled data are exposed to the algorithm with the goal of generating labels that will meaningfully organize the data. This is typically done by identifying useful clusters of data based on one or more dimensions. Compared with supervised techniques, unsupervised learning sometimes requires much larger training datasets. Unsupervised learning is useful in identifying meaningful clustering labels that can then be used in supervised training to develop a useful ML algorithm. This blend of supervised and unsupervised learning is known as semisupervised.”


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
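The unsupervised clustering the quote describes — grouping unlabeled data along one or more dimensions — can be sketched with a tiny one-dimensional two-cluster k-means. The feature values are invented; no labels are supplied.

```python
# Unsupervised learning sketch: 1-D k-means with two clusters.
# The algorithm is given only raw values and finds the groups itself.

def kmeans_1d(xs, iters=20):
    """Two-cluster 1-D k-means; centers start at the data extremes."""
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        clusters = [[], []]
        for x in xs:
            i = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
            clusters[i].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]   # two obvious groups, unlabeled
centers = kmeans_1d(data)
print(centers)   # one center per discovered cluster
```

The discovered cluster assignments could then serve as labels for a supervised pass — the semisupervised blend the quote mentions.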
  • “At the outset of an ML project, data are divided into three sets: training, test, and validation. The training dataset is sent through the algorithm repeatedly to establish values for each hyperparameter. After the hyperparameters stabilize, the test dataset is sent through the model, and the accuracy of the predictions or classifications is evaluated. At this point, the trainer decides whether the model is fully trained or adjusts the algorithm architecture to repeat training. After several iterative cycles of training and testing, the algorithm is fed validation data for evaluation. Application of ML to radiology involves both medical knowledge and pixel data.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
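The three-way split described above can be sketched on a toy dataset. The 60/20/20 proportions are illustrative choices, not prescribed by the paper, and in practice the cases would be shuffled before splitting.

```python
# Dividing a dataset into training, test, and validation sets, as in the
# ML workflow quoted above. Fractions are illustrative assumptions.

def split_dataset(items, train_frac=0.6, test_frac=0.2):
    """Partition items into (train, test, validation) slices."""
    n = len(items)
    n_train = int(n * train_frac)
    n_test = int(n * test_frac)
    train = items[:n_train]
    test = items[n_train:n_train + n_test]
    validation = items[n_train + n_test:]
    return train, test, validation

cases = list(range(10))   # stand-ins for labeled imaging studies
train, test, val = split_dataset(cases)
print(len(train), len(test), len(val))
```

Keeping the validation set untouched until the final evaluation is what guards against the trainer unknowingly tuning the model to its own test data.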
  • “Pattern recognition for complex, high-dimensionality images is generally trained on large datasets, but such datasets, particularly with appropriate labels, are rare. To produce such sets can be expensive and time-consuming because labeling is difficult, and preprocessing of images must typically be performed to provide useful inputs to the ML algorithm. Newer deep learning and CNN techniques can help by incorporating the image-preprocessing step into the algorithm itself, saving manual labor and potentially leading to the discovery of preprocessing techniques that perform better in the subsequent neural network layers. These techniques require either tightly focused and well-defined data or extremely large datasets.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “The FDA has not issued rules about test datasets, transparency, or verification procedures. It will probably evaluate models and associated test datasets on a case-by-case basis. How this will evolve is unclear at present. In addition, regulation that created the FDA was enacted before the availability of ML, and existing laws regarding devices are difficult to apply to ML algorithms.”


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “ML encompasses many powerful tools with the potential to dramatically increase the information radiologists extract from images. It is no exaggeration to suggest the tools will change radiology as dramatically as the advent of cross-sectional imaging did. We believe that owing to the narrow scope of existing applications of ML and the complexity of creating and training ML models, the possibility that radiologists will be replaced by machines is at best far in the future. Successful application of ML to the radiology domain will require that radiologists extend their knowledge of statistics and data science to supervise and correctly interpret ML-derived results.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “An automated machine learning computer system was created to detect, anatomically localize, and categorize vertebral compression fractures at high sensitivity and with a low false-positive rate, as well as to calculate vertebral bone density, on CT images.”
Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images 
Burns JE et al.
Radiology (in press)
    “Sensitivity for detection or localization of compression fractures was 95.7% (201 of 210; 95% confidence interval [CI]: 87.0%, 98.9%), with a false-positive rate of 0.29 per patient. Additionally, sensitivity was 98.7% and specificity was 77.3% at case-based receiver operating characteristic curve analysis.”


    Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images 
Burns JE et al.
Radiology (in press)
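The headline sensitivity quoted above can be reproduced from the raw counts the authors give (201 of 210 fractures detected); the snippet below just checks that arithmetic.

```python
# Reproducing the reported per-lesion sensitivity from the raw counts
# quoted above: 201 of 210 compression fractures detected.

def sensitivity(true_pos, total_pos):
    """Fraction of actual positives that were detected."""
    return true_pos / total_pos

sens = sensitivity(201, 210)
print(round(100 * sens, 1))   # percent, one decimal place
```

The 0.29 false positives per patient is a separate per-patient average and cannot be derived from these two numbers alone.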
  • “This system performed with 95.7% sensitivity in fracture detection and localization to the correct vertebral level, with a low false-positive rate. There was a high level of overall agreement (95%) for compression morphology and 68% overall agreement for severity categorization relative to radiologist classification.”


    Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images 
Burns JE et al.
Radiology (in press)
  • A fully automated machine learning software system with which to detect, localize, and classify compression fractures and determine the bone density of thoracic and lumbar vertebral bodies on CT images was developed and validated.
  • The computer system has a sensitivity of 95.7% in the detection of compression fractures and in the localization of these fractures to the correct vertebrae, with a false-positive rate of 0.29 per patient.
  • The accuracy of this computer system in fracture classification by Genant type was 95% (weighted κ = 0.90).


    Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images 
Burns JE et al.
Radiology (in press)
  • “By virtue of its information technology-oriented infrastructure, the specialty of radiology is uniquely positioned to be at the forefront of efforts to promote data sharing across the healthcare enterprise, including particularly image sharing. The potential benefits of image sharing for clinical, research, and educational applications in radiology are immense. In this work, our group—the Association of University Radiologists (AUR) Radiology Research Alliance Task Force on Image Sharing—reviews the benefits of implementing image sharing capability, introduces current image sharing platforms and details their unique requirements, and presents emerging platforms that may see greater adoption in the future. By understanding this complex ecosystem of image sharing solutions, radiologists can become important advocates for the successful implementation of these powerful image sharing resources.”


    Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Cloud-based image sharing platforms based on interoperability standards such as the IHE-XDS-I profile are currently the most widely used method for sharing of clinical radiological images and will likely continue to grow in the coming years. Conversely, no single image sharing platform has emerged as a clear leader for research and educational applications. Radiologists, clinicians, investigators, technologists, educators, administrators, and patients all stand to benefit from medical image sharing. With their continued support, more widespread adoption of image sharing infrastructure will assuredly improve the standard of clinical care, research, and education in modern radiology.”

    
Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Sharing of primary imaging data also makes further research more efficient and can substantially reduce the cost of subsequent studies. In research efforts utilizing extremely large data sets, such as those found in radiogenomics and radiomics research, sharing and exchange of images facilitates linking radiological data with large biological and genetic data sets, thereby enabling the use of big data analysis methods to uncover correlations between imaging phenotypes and underlying genetic and functional molecular expression profiles.”

    
Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Practical considerations must also factor into the design of implemented workflow. For example, transfer of DICOM imaging studies through portable media such as compact discs has been demonstrated as untenable and unsustainable between institutions as image sharing gains traction. Instead, decentralized upload of DICOM data into a PACS system from remote terminals helps distribute the workload between multiple departments, reduces the time required by a central physical processing center, saves the time required for physical transportation of the media, and retains the physical media at the point of care.”

    
Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Commercial PACS and standalone image sharing platforms also have not been evaluated in the scientific literature, but market analysis is being conducted through surveys by research companies such as peer60. In peer60’s report, vendors with the largest reported market share among large healthcare institutions include McKesson’s Conserus Image Repository, IBM’s Merge, Nuance PowerShare Network, GE Centricity with OneMedNet BEAM, LifeIMAGE, Philips IntelliSpace Portal, ABI Health Sectra, and Agfa Healthcare Enterprise Imaging.”


    Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Modern educational practices in radiology require facilitated learning using computer-based modules and active simulation as part of the learning experience. As part of this trend, there has been tremendous growth in the number of online, case-based learning tools. In their purest form, these tools can take the form of PACS-like, web- based teaching files, which offer almost unlimited scalability for case acquisition and distribution, and may also include the ability to pose questions to radiology trainees, track responses, and categorize cases, ideally with seamless integration with a clinical PACS system.”


    Image Sharing in Radiology— A Primer 
Chatterjee AR et al.
Acad Radiol 2017; 24:286–294
  • “Even more exciting is the finding that in some cases, computers seem to be able to “see” patterns that are beyond human perception. This discovery has led to substantial and increased interest in the field of machine learning—specifically, how it might be applied to medical images.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)

  • “These algorithms have been used for several challenging tasks, such as pulmonary embolism segmentation with computed tomographic (CT) angiography (3,4), polyp detection with virtual colonoscopy or CT in the setting of colon cancer (5,6), breast cancer detection and diagnosis with mammography (7), brain tumor segmentation with magnetic resonance (MR) imaging (8), and detection of the cognitive state of the brain with functional MR imaging to diagnose neurologic disease (eg, Alzheimer disease).”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “If the algorithm system optimizes its parameters such that its performance improves—that is, more test cases are diagnosed correctly—then it is considered to be learning that task.”



    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “Training: The phase during which the machine learning algorithm system is given labeled example data with the answers (ie, labels)—for example, the tumor type or correct boundary of a lesion. The set of weights or decision points for the model is updated until no substantial improvement in performance is achieved.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “Deep learning, also known as deep neural network learning, is a new and popular area of research that is yielding impressive results and growing fast. Early neural networks were typically only a few (<5) layers deep, largely because the computing power was not sufficient for more layers and owing to challenges in updating the weights properly. Deep learning refers to the use of neural networks with many layers—typically more than 20.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “CNNs are similar to regular neural networks. The difference is that CNNs assume that the inputs have a geometric relationship—like the rows and columns of images. The input layer of a CNN has neurons arranged to produce a convolution of a small image (ie, kernel) with the image. This kernel is then moved across the image, and its output at each location as it moves across the input image creates an output value. Although CNNs are so named because of the convolution kernels, there are other important layer types that they share with other deep neural networks. Kernels that detect important features (eg, edges and arcs) will have large outputs that contribute to the final object to be detected.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
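The kernel-sliding operation described above — one output value per position as the kernel moves across the image — can be sketched in a few lines. The 4×4 "image" and the vertical-edge kernel below are invented for illustration.

```python
# Sliding a small kernel across an image, as in the CNN input layer
# described above: one output value per kernel position (valid mode).

def convolve2d(image, kernel):
    """Correlate kernel with image at every fully-overlapping position."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = sum(image[r + i][c + j] * kernel[i][j]
                      for i in range(kh) for j in range(kw))
            row.append(acc)
        out.append(row)
    return out

image = [[0, 0, 9, 9],     # dark left half, bright right half
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_kernel = [[-1, 1]]    # responds where intensity jumps left-to-right

result = convolve2d(image, edge_kernel)
print(result[0])   # [0, 9, 0]: large output only at the edge column
```

This is exactly the "large outputs for important features" behavior the quote describes: the kernel stays silent over flat regions and fires at the intensity edge. In a trained CNN the kernel weights are learned rather than hand-set.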
  • “Machine learning is already being applied in the practice of radiology, and these applications will probably grow at a rapid pace in the near future. The use of machine learning in radiology has important implications for the practice of medicine, and it is important that we engage this area of research to ensure that the best care is afforded to patients. Understanding the properties of machine learning tools is critical to ensuring that they are applied in the safest and most effective manner.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “The combination of big data and artificial intelligence, referred to by some as the fourth industrial revolution, will change radiology and pathology along with other medical specialties. Although reports of radiologists and pathologists being replaced by computers seem exaggerated, these specialties must plan strategically for a future in which artificial intelligence is part of the health care workforce.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Watson has a boundless capacity for learning—and now has 30 billion images to review after IBM acquired Merge. Watson may become the equivalent of a general radiologist with super-specialist skills in every domain—a radiologist’s alter ego and nemesis.”


    Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “For example, a radiologist typically views 4000 images in a CT scan of multiple body parts (“pan scan”) in patients with multiple trauma. The abundance of data has changed how radiologists interpret images; from pattern recognition, with clinical context, to searching for needles in haystacks; from inference to detection. The radiologist, once a maestro with a chest radiograph, is now often visually fatigued searching for an occult fracture in a pan scan.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Radiologists should identify cognitively simple tasks that could be addressed by artificial intelligence, such as screening for lung cancer on CT. This involves detecting, measuring, and characterizing a lung nodule, the management of which is standardized. A radiology residency or a medical degree is not needed to detect lung nodules.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Because pathology and radiology have a similar past and a common destiny, perhaps these specialties should be merged into a single entity, the “information specialist,” whose responsibility will not be so much to extract information from images and histology but to manage the information extracted by artificial intelligence in the clinical context of the patient.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “The information specialist would interpret the important data, advise on the added value of another diagnostic test, such as the need for additional imaging, anatomical pathology, or a laboratory test, and integrate information to guide clinicians. Radiologists and pathologists will still be the physician’s physician.”

    
Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “If artificial intelligence becomes adept at screening for lung and breast cancer, it could screen populations faster than radiologists and at a fraction of cost. The information specialist could ensure that images are of sufficient quality and that artificial intelligence is yielding neither too many false-positive nor too many false-negative results. The efficiency from the economies of scale because of artificial intelligence could benefit not just developed countries, such as the United States, but developing countries hampered by access to specialists. A single information specialist, with the help of artificial intelligence, could potentially manage screening for an entire town in Africa.”


    Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “There may be resistance to merging 2 distinct medical specialties, each of which has unique pedagogy, tradition, accreditation, and reimbursement. However, artificial intelligence will change these diagnostic fields. The merger is a natural fusion of human talent and artificial intelligence. United, radiologists and pathologists can thrive with the rise of artificial intelligence.”


    Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Information specialists should train in the traditional sciences of pathology and radiology. The training should take no longer than it presently takes because the trainee will not spend time mastering the pattern recognition required to become a competent radiologist or pathologist. Visual interpretation will be restricted to perceptual tasks that artificial intelligence cannot perform as well as humans. The trainee need only master enough medical physics to improve suboptimal quality of medical images. Information specialists should be taught Bayesian logic, statistics, and data science and be aware of other sources of information such as genomics and biometrics, insofar as they can integrate data from disparate sources with a patient’s clinical condition.”


    Adapting to Artificial Intelligence 
Radiologists and Pathologists as Information Specialists 
Jha S, Topol EJ
JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome.”

    
Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • “Image analysis and computer assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high resolution digitized whole slide images. Additionally there has been recent substantial interest in combining and fusing radiologic imaging and proteomics- and genomics-based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again there is a paucity of powerful tools for combining disease specific features that manifest across multiple different length scales.”

    
Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • “It is clear that molecular changes in gene expression solicit a structural and vascular change in phenotype that is in turn observable on the imaging modality under consideration. For instance, tumor morphology in standard H&E tissue specimens reflects the sum of all molecular pathways in tumor cells. By the same token radiographic imaging modalities such as MRI and CT are ultimately capturing structural and functional attributes reflective of the biological pathways and cellular morphology characterizing the disease. Historically the concept and importance of radiology-pathology fusion has been around and recognized.”

    
Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • “For the biomedical image computing, machine learning, and bioinformatics scientists, the aforementioned challenges will present new and exciting opportunities for developing new feature analysis and machine learning opportunities. Clearly though, the image computing community will need to work closely with the pathology community and potentially whole slide imaging and microscopy vendors to be able to develop new and innovative solutions to many of the critical image analysis challenges in digital pathology.”


    Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • PURPOSE: Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans.


    CONCLUSIONS: The proposed method is fully automatic without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.
Automatic 3D liver location and segmentation via convolutional neural network and graph cut.
Lu F et al.
Int J Comput Assist Radiol Surg. 2016 Sep 7. [Epub ahead of print]
  • “Precision medicine relies on an increasing amount of heterogeneous data. Advances in radiation oncology, through the use of CT scan, dosimetry and imaging performed before each fraction, have generated a considerable flow of data that needs to be integrated. At the same time, Electronic Health Records now provide phenotypic profiles of large cohorts of patients that could be correlated to this information. In this review, we describe methods that could be used to create integrative predictive models in radiation oncology. Potential uses of machine learning methods such as support vector machine, artificial neural networks, and deep learning are also discussed.”

    
Big Data and machine learning in radiation oncology: State of the art and future prospects.
Bibault JE, Giraud P, Burgun A
Cancer Lett. 2016 May 27. pii: S0304-3835(16)30346-9. doi: 10.1016/j.canlet.2016.05.033. [Epub ahead of print]
  • “In recent years machine learning (ML) has revolutionized the fields of computer vision and medical image analysis. Yet, a number of doubts remain about the applicability of ML in clinical practice. Medical doctors may question the lack of interpretability of classifiers; or it is argued that ML methods require huge amounts of training data. Here we discuss some of these issues and show: 
1. how decision trees (a special class of ML models) can be understood as an automatically-optimized generalization of conventional algorithms, and 
2. how the issue of collecting labelled data (e.g. images) applies to both manually-designed and learning-based algorithms.”


    Machine learning for medical images analysis
Criminisi A
Medical Image Analysis
Volume 33, October 2016, Pages 91–93
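The first point above — a decision tree as an automatically-optimized generalization of a hand-written diagnostic rule — can be sketched by evaluating a small tree stored as data. The feature names and thresholds below are invented; in a real ML pipeline they would be learned from labeled cases rather than hand-set.

```python
# A tiny decision tree represented as nested data: the same structure a
# clinician might write as nested if/else rules, but here the splits
# could just as well have been chosen automatically from training data.

tree = {"feature": "size_mm", "threshold": 8,
        "below": "benign",
        "above": {"feature": "density_hu", "threshold": 30,
                  "below": "benign",
                  "above": "suspicious"}}

def classify(node, case):
    """Walk the tree from the root to a leaf label."""
    if isinstance(node, str):          # leaf: a class label
        return node
    branch = "above" if case[node["feature"]] > node["threshold"] else "below"
    return classify(node[branch], case)

print(classify(tree, {"size_mm": 5, "density_hu": 80}))    # small lesion
print(classify(tree, {"size_mm": 12, "density_hu": 80}))   # large + dense
```

The second point in the quote applies equally here: whether the thresholds are hand-designed or learned, someone still has to supply labeled cases to validate them.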
  • “Ultimately, these researchers argue, the complex answers given by machine learning have to be part of science’s toolkit because the real world is complex: for phenomena such as the weather or the stock market, a reductionist, synthetic description might not even exist.”

    “There are things we cannot verbalize,” says Stéphane Mallat, an applied mathematician at the École Polytechnique in Paris.
  • “When you ask a medical doctor why he diagnosed this or this, he’s going to give you some reasons,” he says. “But how come it takes 20 years to make a good doctor? Because the information is just not in books.”


    Can we open the black box of AI?
Artificial intelligence is everywhere. But before scientists trust it, they first need to understand how machines learn.
 Davide Castelvecchi
Nature Vol 538, Issue 7623 Oct 2016
  • “Each of these tasks is amenable to automation. Organs can be located by the computer using atlas- and landmark-based methods. Organ volume and shape can be assessed by finding the edges of the organs in three dimensions, a process known as segmentation. Lesions can be detected and segmented by assessing the patterns of Hounsfield unit intensities in the organs to identify anomalies. Example patterns include variations in intensities, texture, and shape. The quantitative measurements of these patterns are known as features.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
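As a toy illustration of what such quantitative “features” look like (the Hounsfield-unit samples below are hypothetical, not from the article), simple intensity statistics can be computed over a segmented region of interest:

```python
import statistics

# Hypothetical Hounsfield-unit samples from a segmented region of interest.
roi_hu = [42, 45, 39, 51, 47, 44, 40, 49]

# Quantitative measurements of intensity patterns ("features"):
features = {
    "mean_hu": statistics.mean(roi_hu),     # average attenuation
    "std_hu": statistics.pstdev(roi_hu),    # intensity variation (crude texture proxy)
    "range_hu": max(roi_hu) - min(roi_hu),  # spread of intensities
}
print(features)
```

Real systems add texture and shape descriptors in the same spirit; the point is simply that a “feature” is a number computed from the segmented voxels.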
  • “In the other, generic features are used and a machine-learning algorithm is taught to distinguish disease from nondisease sites by being trained on labeled cases, without the need for handcrafted features. The latter approach, which is made feasible by recent advances in computer science known colloquially as deep learning, is increasingly being used because it markedly increases the efficiency of image analysis development.”

    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “To perform fully automated abdominal CT image interpretation at the level of a trained radiologist, the computer must assess all the organs and detect all the abnormalities present in the images. Although this is a seemingly daunting task for the software developer, the numbers of organs and potential abnormalities are finite and can be addressed methodically.” 

    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “The pancreas is a highly deformable organ that has a shape and location that is greatly influenced by the presence of adjacent structures. This makes automated image analysis of the pancreas extremely challenging. A number of different approaches have been taken to automated pancreas analysis, including the use of anatomic atlases, the location of the splenic and portal veins, and state-of-the-art computer science methods such as deep learning.”

    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “A recent advance in computer science is the refinement of neural networks, a type of machine learning classifier used to make decisions from data. This refinement, known generically as deep learning but more specifically as convolutional neural networks, has shown dramatic improvements in automated intelligence applications. Initially drawing attention for impressive improvements in speech recognition and natural image interpretation, deep learning is now being applied to medical images, as described already in the sections on the pancreas and colitis.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
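The “convolutional” operation at the heart of a convolutional neural network can be sketched in a few lines (a minimal single-channel 2D convolution on made-up numbers, purely illustrative): a small filter is slid across the image, producing a feature map, and deep networks stack many such filtered layers.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding), as in a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of elementwise products between the filter and the patch.
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge filter applied to a toy "image" with one bright column.
image = [[0, 0, 9, 0],
         [0, 0, 9, 0],
         [0, 0, 9, 0]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # [[0, 18, -18], [0, 18, -18]]
```

In a trained CNN the kernel values are learned from labeled images rather than hand-set, which is what makes the approach a learning-based alternative to handcrafted features.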
  • “Another underutilized approach is the use of automated image analysis running in the background to triage patients with potentially life-threatening conditions, to reduce common interpretative errors, to perform large-scale epidemiologic studies, and to coordinate and interpret large volumes of clinical, genomic, and imaging data. As radiology practices consolidate into larger hospital-led groups, it will be more feasible to implement such systems.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “Similarly, fully automated abdominal CT image interpretation is likely to change the role of radiologists, but they will still be responsible for taking care of the patient and making the final diagnosis. The automated report could improve reading efficiency, but radiologists will need to be vigilant to avoid placing too much trust in the computer.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “The use of automated image interpretation by nonradiologists will need to be considered. Such users might include radiology technologists, radiologist assistants, and nonradiologist clinicians. The technology could lead to further commoditization of radiology services.” 


    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “In conclusion, advances in abdominal CT automated image interpretation are occurring at a rapid pace. In the not too distant future, these advances may enable fully automated image interpretation. Similar advances may occur in other body regions and with other imaging modalities. Risks and benefits are difficult to foresee but may include increased pressures for commoditization, better reading efficiency, fewer interpretive errors, and a more quantitative radiology report. The primary focus must ultimately be on improved patient care.”

    Progress in Fully Automated Abdominal CT Interpretation
Summers RM
AJR 2016; 207:67–79
  • “The current design and implementation of Medical Workstations has failed and must be replaced by Knowledge Stations as we move beyond image acquisition and into knowledge acquisition like deep learning.”


    Rethinking Medical Workstations and the Transition to Knowledge Stations
Horton KM, Fishman EK
JACR (in progress)
  • There are a number of ways that the field of deep learning has been characterized. Deep learning is a class of machine learning algorithms that: 
1. use a cascade of many layers of nonlinear processing units for feature extraction and transformation, where each successive layer uses the output from the previous layer as input; the algorithms may be supervised or unsupervised, and applications include pattern analysis (unsupervised) and classification (supervised); 
2. are based on the (unsupervised) learning of multiple levels of features or representations of the data, with higher-level features derived from lower-level features to form a hierarchical representation; 
3. are part of the broader machine learning field of learning representations of data; and 
4. learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.

 Wikipedia
  • Deep learning algorithms are based on distributed representations. The underlying assumption behind distributed representations is that observed data are generated by the interactions of factors organized in layers. Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can be used to provide different amounts of abstraction.
 Wikipedia
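The “cascade of layers, each using the previous layer’s output as input” can be sketched directly (the weights below are made up for illustration; real networks learn them from data):

```python
def relu(x):
    # Common nonlinearity: pass positive values, zero out negatives.
    return max(0.0, x)

def layer(inputs, weights):
    """One layer of nonlinear processing units: weighted sums, then ReLU."""
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Each successive layer consumes the previous layer's output,
# producing progressively more abstract representations.
x = [1.0, 2.0]
h1 = layer(x, [[0.5, -1.0], [1.0, 1.0]])   # first level of features
h2 = layer(h1, [[1.0, 0.5], [-1.0, 1.0]])  # higher-level features built on h1
print(h2)  # [1.5, 3.0]
```

Stacking more such layers, and choosing their sizes, is exactly the “varying numbers of layers and layer sizes” that provides different amounts of abstraction.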
  • Situational Awareness
    Situation awareness involves being aware of what is happening in the vicinity to understand how information, events, and one's own actions will impact goals and objectives, both immediately and in the near future. One with an adept sense of situation awareness generally has a high degree of knowledge of the inputs and outputs of a system and an innate "feel" for the situations, people, and events that play out because of variables the subject can control. Lacking or inadequate situation awareness has been identified as one of the primary factors in accidents attributed to human error. Thus, situation awareness is especially important in work domains where the information flow can be quite high and poor decisions may lead to serious consequences (such as piloting an airplane, functioning as a soldier, or treating critically ill or injured patients).
  • For select cancer histologies, aggressive focal therapy of oligometastatic lesions is already the clinical standard of care (i.e. colorectal cancer and sarcomas), while for other tumor types the evidence is still emerging (i.e. prostate, breast, etc.). It is increasingly important, therefore, for the radiologist interpreting oncology patients’ staging or restaging examinations to be aware of those diseases for which targeted therapy of oligometastases may be undertaken to effectively guide such management. The improved imaging resolution provided by technological advances promise to aid in the detection of subtle sites of disease to ensure the identification of patients with oligometastases amenable to targeted treatment. 


    What the Radiologist Needs to Know to Guide Patient Management 
Steven P. Rowe, MD, Hazem Hawasli, Elliot K. Fishman, MD, Pamela T. Johnson
Acad Radiol 2016; 23:326–328
  • “As such, some of the impetus for exploring aggressive and potentially curative treatment in patients with oligometastases can come from improvements in imaging technology and techniques. Thus, radiologists should not only understand the implications of the new paradigm of oligometastatic disease for how they interpret studies, but they should also actively engage in the research necessary to optimize the selection of patients for aggressive therapy of oligometastases.”

    What the Radiologist Needs to Know to Guide Patient Management 
Steven P. Rowe, MD, Hazem Hawasli, Elliot K. Fishman, MD, Pamela T. Johnson
Acad Radiol 2016; 23:326–328