Deep Learning: Deep Learning and Data Imaging Pearls - Educational Tools | CT Scanning | CT Imaging | CT Scan Protocols - CTisus

  • “More proactively, companies are beginning to realize that, properly managed, data becomes an asset of potentially limitless potential. It deserves proper management. AI unlocks that potential. Simply stated: To get great results from high-quality AI, you need high-quality data.”
    “Ensure High-Quality Data Powers Your AI” by Thomas C. Redman
    Harvard Business Review August 12, 2024
  • “Across the spectrum of developers, users, and institutions, there is an ethical requirement to be good stewards of any technology tool. In high-stakes domains such as health care, this is arguably even more important when the effects of AI biases may dramatically influence health outcomes, even life and death. To mitigate AI bias, tremendous work must be applied to ensure the responsible use of AI. Such efforts can begin with multidisciplinary collaborations that include health care workers throughout the 4 phases of the AI life cycle: predesign, design and development, deployment, and testing and evaluation. At the predesign stage, health care workers can provide input on problem identification and the goal of the AI tool. During the design and development of the AI tool, model developers should actively engage clinicians to discuss bias identification and mitigation strategies and embed safeguards to overcome bias.”
    Identifying and Addressing Bias in Artificial Intelligence
    Byron Crowe, Jorge A. Rodriguez
    JAMA Network Open. 2024;7(8):e2425955
  • “Although impressive, AI is still just a technology—how it is designed and implemented is dependent on how it is engineered and put to use by people and organizations. In short, it still must be told what to do, and for what purpose. Whether those instructions create good or cause harm is ultimately the product of human beings and their choices. To this end, we must let our conscience be our guide, while making conscientious decisions that seek to illuminate unfairness and eliminate it. We have already seen the positive effects of such work in the form of revisions to biases in well known clinical tools, such as the estimated glomerular filtration rate. Similar efforts can, and must be done with AI as well.”  
    Identifying and Addressing Bias in Artificial Intelligence
    Byron Crowe, Jorge A. Rodriguez
    JAMA Network Open. 2024;7(8):e2425955
  • “The consequences of bias in AI range from trivial to far reaching, but these effects can only be fully mitigated when they are known. It is imperative that we pay close attention to bias and the potential for unintended consequences from AI design and implementations. Studies such as those by Lee and colleagues will continue to play a role in helping stakeholders understand the impacts of new AI technologies and guide thoughtful modifications. AI stands to greatly benefit human beings, and there will be inevitable trade-offs that may affect performance and acceptability. But by surfacing objective data on AI performance and understanding how it aligns with our goal to improve both health outcomes and broader societal aims, we can do our best to make adjustments in the AI life cycle that will guide us toward a more desirable end state. Although no system is perfect nor is perfection the goal of AI, we all bear a responsibility to ensure AI is fair, trustworthy, and beneficial. Our patients deserve it. Our conscience demands it.”
    Identifying and Addressing Bias in Artificial Intelligence
    Byron Crowe, Jorge A. Rodriguez
    JAMA Network Open. 2024;7(8):e2425955
  • “Computational systems that aid clinicians should be classified as software as a medical device and thus regulated according to the potential risk posed. To facilitate appropriate use of computational methods that interpret high-dimensional data in oncology, treating physicians need access to multidisciplinary teams with broad expertise and deep training among a subset of clinical oncology fellows in clinical informatics.”
    Clinical Application of Computational Methods in Precision Oncology A Review
    Orestis A. Panagiotou et al.
    JAMA Oncol. doi:10.1001/jamaoncol.2020.1247 Published online May 14, 2020.
  • “Regulatory pathways have been challenging, but the risk-based approach of the FDA’s Center for Devices and Radiological Health is a useful paradigm. In this regulatory framework, computational methods would be evaluated according to the potential risk they pose to individuals, including both unknown risks and known risks of implementing the technology in a clinical setting.”
    Clinical Application of Computational Methods in Precision Oncology A Review
    Orestis A. Panagiotou et al.
    JAMA Oncol. doi:10.1001/jamaoncol.2020.1247 Published online May 14, 2020.
  • “Face validity of computational algorithms (ie, the perception that an algorithm is taking appropriate computational steps in its decision-making) also needs to be clear. This transparency is necessary to facilitate shared decision-making for oncologists and their patients. Furthermore, face validity improves the odds that oncologists can justify medical necessity for selected treatments when communicating with health insurance carriers.”
    Clinical Application of Computational Methods in Precision Oncology A Review
    Orestis A. Panagiotou et al.
    JAMA Oncol. doi:10.1001/jamaoncol.2020.1247 Published online May 14, 2020.
  • “Precision oncology therapies, which target specific genetic changes in a patient’s cancer, are changing the nature of cancer treatment. The core principles for improving the translation of computational precision oncology into clinical practice include reducing biases in data collection and management, improving validation and reproducibility of computational algorithms, addressing the regulatory oversight of these algorithms, considering payer perspectives, and developing patient-centered and clinician-friendly tools.”
    Clinical Application of Computational Methods in Precision Oncology A Review
    Orestis A. Panagiotou et al.
    JAMA Oncol. doi:10.1001/jamaoncol.2020.1247 Published online May 14, 2020.
  • “The main difference between CAD and “true” AI is that CAD only makes diagnoses for which it has been specifically trained and bases its performance on a training dataset and a rigid scheme of recognition that can only be improved if more datasets are given to the CAD algorithm. True AI is characterised by the process of autonomous learning, without explicit programming of each step, based on a network of algorithms and connections, similar to what humans do.”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2
  • “AI can be an optimizing tool for assisting the technologist and radiologist in choosing a personalised patient’s protocol, in tracking the patient’s dose parameters, and in providing an estimate of the radiation risks associated with cumulative dose and the patient’s susceptibility (age and other clinical parameters).”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2 
  • “The combination of big data and artificial intelligence, referred to by some as the fourth industrial revolution, will change radiology and pathology along with other medical specialties. Although reports of radiologists and pathologists being replaced by computers seem exaggerated, these specialties must plan strategically for a future in which artificial intelligence is part of the health care workforce.”
    Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists
    Jha S, Topol EJ
    JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Watson has a boundless capacity for learning—and now has 30 billion images to review after IBM acquired Merge. Watson may become the equivalent of a general radiologist with super-specialist skills in every domain—a radiologist’s alter ego and nemesis.”
    Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists
    Jha S, Topol EJ
    JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Sharing of primary imaging data also makes further research more efficient and can substantially reduce the cost of subsequent studies. In research efforts utilizing extremely large data sets, such as those found in radiogenomics and radiomics research, sharing and exchange of images facilitates linking radiological data with large biological and genetic data sets, thereby enabling the use of big data analysis methods to uncover correlations between imaging phenotypes and underlying genetic and functional molecular expression profiles.”
    Image Sharing in Radiology—A Primer
    Chatterjee AR et al.
    Acad Radiol 2017; 24:286–294
  • “Practical considerations must also factor into the design of implemented workflow. For example, transfer of DICOM imaging studies through portable media such as compact discs has been demonstrated as untenable and unsustainable between institutions as image sharing gains traction. Instead, decentralized upload of DICOM data into a PACS system from remote terminals helps distribute the workload between multiple departments, reduces the time required by a central physical processing center, saves the time required for physical transportation of the media, and retains the physical media at the point of care.”
    Image Sharing in Radiology—A Primer
    Chatterjee AR et al.
    Acad Radiol 2017; 24:286–294
  • “Commercial PACS and standalone image sharing platforms also have not been evaluated in the scientific literature, but market analysis is being conducted through surveys by research companies such as peer60. In peer60’s report, vendors with the largest reported market share among large healthcare institutions include McKesson’s Conserus Image Repository, IBM’s Merge, Nuance PowerShare Network, GE Centricity with OneMedNet BEAM, LifeIMAGE, Philips IntelliSpace Portal, ABI Health Sectra, and Agfa Healthcare Enterprise Imaging.”
    Image Sharing in Radiology—A Primer
    Chatterjee AR et al.
    Acad Radiol 2017; 24:286–294
  • “In unsupervised ML, unlabeled data are exposed to the algorithm with the goal of generating labels that will meaningfully organize the data. This is typically done by identifying useful clusters of data based on one or more dimensions. Compared with supervised techniques, unsupervised learning sometimes requires much larger training datasets. Unsupervised learning is useful in identifying meaningful clustering labels that can then be used in supervised training to develop a useful ML algorithm. This blend of supervised and unsupervised learning is known as semisupervised.”
    Implementing Machine Learning in Radiology Practice and Research
    Kohli M et al.
    AJR 2017; 208:754–760
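The semisupervised blend described above (unsupervised clustering to generate labels, then supervised training on those labels) can be sketched as follows. This is an illustrative toy example using scikit-learn, with synthetic two-cluster data standing in for imaging features; all parameter choices here are the editor's assumptions, not from the cited paper.

```python
# Semisupervised sketch: cluster unlabeled data, use cluster
# assignments as provisional labels, then train a supervised model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy "unlabeled" data: two well-separated blobs in 2-D feature space.
unlabeled = np.vstack([
    rng.normal(0.0, 0.5, size=(100, 2)),
    rng.normal(3.0, 0.5, size=(100, 2)),
])

# Unsupervised step: discover cluster structure in the unlabeled data.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(unlabeled)
provisional_labels = kmeans.labels_

# Supervised step: train a classifier on the cluster-derived labels.
clf = LogisticRegression().fit(unlabeled, provisional_labels)
print(clf.score(unlabeled, provisional_labels))  # agreement with cluster labels
```

In practice the provisional labels would be reviewed (e.g., by a radiologist) before being trusted for supervised training.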
  • “At the outset of an ML project, data are divided into three sets: training, test, and validation. The training dataset is sent through the algorithm repeatedly to establish values for each hyperparameter. After the hyperparameters stabilize, the test dataset is sent through the model, and the accuracy of the predictions or classifications is evaluated. At this point, the trainer decides whether the model is fully trained or adjusts the algorithm architecture to repeat training. After several iterative cycles of training and testing, the algorithm is fed validation data for evaluation. Application of ML to radiology involves both medical knowledge and pixel data.”
    Implementing Machine Learning in Radiology Practice and Research
    Kohli M et al.
    AJR 2017; 208:754–760
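The three-way partitioning described in the quote can be sketched with a simple shuffled index split. The 70/15/15 proportions below are an assumption for illustration, not from the cited paper, and the quote's own terminology is followed (the "test" set is used during tuning, the "validation" set for the final evaluation).

```python
# Partition 1000 sample indices into training, test, and validation sets.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
indices = rng.permutation(n)  # shuffle before splitting to avoid ordering bias

# 70% training, 15% test (used while tuning), 15% held-out validation.
n_train, n_test = int(0.70 * n), int(0.15 * n)
train_idx = indices[:n_train]
test_idx = indices[n_train:n_train + n_test]
val_idx = indices[n_train + n_test:]
```

For patient data, splitting should be done at the patient level rather than the image level so that images from one patient never span two sets.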
  • “Pattern recognition for complex, high-dimensionality images are generally trained on large datasets, but such datasets, particularly with appropriate labels, are rare. To produce such sets can be expensive and time-consuming because labeling is difficult, and preprocessing of images must typically be performed to provide useful inputs to the ML algorithm. Newer deep learning and CNN techniques can help by incorporating the image-preprocessing step into the algorithm itself, saving manual labor and potentially leading to the discovery of preprocessing techniques that perform better in the subsequent neural network layers. These techniques require either tightly focused and well defined data or extremely large datasets.”
    Implementing Machine Learning in Radiology Practice and Research
    Kohli M et al.
    AJR 2017; 208:754–760
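The point that CNNs can absorb the preprocessing step can be illustrated with a plain NumPy 2-D convolution: the operation a hand-crafted preprocessing filter performs (here, an edge detector) is the same operation a first-layer CNN kernel computes, except that the kernel values are learned from data rather than fixed. The filter and image below are the editor's illustration, not from the cited paper.

```python
# A first-layer CNN operation written out by hand: valid-mode 2-D
# convolution (cross-correlation, as CNN frameworks implement it).
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (no padding) and return the response map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted preprocessing filter (horizontal edge detector).
# In a CNN, kernels like this are learned instead of specified.
sobel_y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

image = np.zeros((6, 6))
image[3:, :] = 1.0            # bright lower half: one horizontal edge
edges = conv2d(image, sobel_y)  # strong response at the intensity transition
```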
  • “Medical images have unique issues that make radiology image interpretation more complex than it may first appear to non–domain-expert data scientists. The cost to develop data sets is very high, the process is fraught with legal issues, and, even in best case scenarios, the data sets will constitute only a fraction of the number of animal photos on Facebook and Instagram. A data set of images must be classified into relevant categories, such as “disease” and “normal.” This labor-intensive process depends on humans to classify the data. “Ground truth” is the term for classification accuracy.”
    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
    Kruskal JB et al.
    JACR (in press)
  • “Historically, the limiting factor to useful ML projects has been appropriate training data sets. For the moment, at least, radiologists hold great power over the imaging, clinical, and radiology-expertise data needed to develop image interpretation deep learning algorithms. This data combination is hugely valuable. When these algorithms finally do arrive, they have the potential to disrupt radiology adversely.”
    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
    Kruskal JB et al.
    JACR (in press)
  • “Whether small or large, good data sets for ML are complex, expensive undertakings. Data sets must be partitioned into training, verification, and validation portions; some of these data will be part of the public domain, whereas other data would be private. Large data sets aggregated from multiple sources face issues including intellectual property (IP), business agreements, and governance, all of which need to be resolved.”
    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
    Kruskal JB et al.
    JACR (in press)
  • “Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy.”
    Predicting healthcare trajectories from medical records: A deep learning approach
    Pham T et al.
    J Biomed Inform. 2017 May;69:218-229
  • “Having small amounts of good-quality data is certainly better than having no data at all. Data augmentation can be used to artificially enlarge the size of a small dataset. The idea is to apply random transformations to the data that do not change the appropriateness of the label assignments. Possible random transformations that can be applied to images include flipping, rotation, translation, zooming, skewing, and elastic deformation. Hence, with data augmentation, image variants from an original dataset are created to enlarge the size of a training dataset of images presented to the deep learning models.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
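The label-preserving transformations listed above (flipping, rotation, translation, and so on) can be sketched for a 2-D image array with plain NumPy. This toy example covers only a subset of the transformations mentioned and is an illustration under the editor's assumptions, not the cited paper's implementation.

```python
# Data augmentation sketch: each call returns a randomly transformed
# copy of the image; the label assigned to the original still applies.
import numpy as np

def augment(image, rng):
    """Return a randomly flipped, rotated, and translated copy of `image`."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)                       # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                       # vertical flip
    out = np.rot90(out, k=rng.integers(0, 4))      # rotate 0/90/180/270 degrees
    shift = rng.integers(-2, 3, size=2)
    out = np.roll(out, shift, axis=(0, 1))         # small translation (wraps at edges)
    return out

rng = np.random.default_rng(7)
original = np.arange(64, dtype=float).reshape(8, 8)
variants = [augment(original, rng) for _ in range(5)]  # five augmented copies
```

Real pipelines would use interpolated rotations, zooming, and elastic deformation as well, and would avoid the wrap-around of `np.roll` by padding instead.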
  • “Even in computer vision, where CNNs have become a dominant method, there are important limitations for deep learning. The most prominent limitation is that deep learning is an intensely data-hungry technology; learning weights for a large network from scratch requires a very large number of labeled examples to achieve accurate classification. However, unlike traditional approaches to computer vision and machine learning, which do not scale well with dataset size, deep learning does scale well with large datasets.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
  • “As a result, building large labeled public medical image datasets is an important step for further progress in applying deep learning to radiology. Barriers to this effort include privacy concerns for clinical images, as well as the costs and difficulties of obtaining accurate ground-truth labels from multiple experts or pathology diagnoses. Nevertheless, several efforts are under way to create large datasets of labeled medical images, such as the Cancer Imaging Archive.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
  • “The role of deep learning and its application to the practice of radiology must still be defined. Deep learning systems may be conceived as a new form of diagnostic test with various clinical usage scenarios. A triage approach would run these automated image analysis systems in the background to detect life-threatening conditions or search through large amounts of clinical, genomic, or imaging data. A replacement approach would use these systems for generating figure captions or even fully automated interpretation of imaging examinations. An add-on approach would support the radiologist by performing time-consuming tasks such as lesion segmentation to assess total tumor burden.”
    Deep Learning: A Primer for Radiologists
    Chartrand G et al.
    RadioGraphics 2017; 37:2113–2131
  • “The common theme from attendees was that everyone participating in medical image evaluation with machine learning is data starved. There is an urgent need to find better ways to collect, annotate, and reuse medical imaging data. Unique domain issues with medical image datasets require further study, development, and dissemination of best practices and standards, and a coordinated effort among medical imaging domain experts, medical imaging informaticists, government and industry data scientists, and interested commercial, academic, and government entities.”
    Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session
    Kohli MD, Summers RM, Geis JR
    J Digit Imaging (2017) 30:392–399
  • “The amount and quality of training data are dominant influencers on a machine learning (ML) model’s performance. The common theme from all attendees was that everyone participating in medical image evaluation with machine learning is data starved. This is a particularly pressing problem in the new era of deep learning.”
    Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session
    Kohli MD, Summers RM, Geis JR
    J Digit Imaging (2017) 30:392–399
  • “Radiologists’ reports are not definitive expressions of ground truth. A retrospective 20-year literature review in 2001 suggested that the level of significant radiology error ranged between 2 and 20%. This is not limited to radiology; a Mayo clinic study comparing clinical diagnoses with postmortem autopsies reported that a major diagnosis was missed clinically in 26% of patients.”
    Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session
    Kohli MD, Summers RM, Geis JR
    J Digit Imaging (2017) 30:392–399
  • “In contrast, most publicly available medical image datasets have tens or hundreds of cases, and datasets with more than 5000 well-annotated cases are rare. In the USA, individual healthcare institutions may have 10³ up to, rarely, 10⁷ of an exam type. These common radiology exam types, for example, chest radiographs, unenhanced brain CTs, mammograms, and abdominal CTs, are often high-dimensional data due to variations in pathology, technique, radiology interpretation, patient population, and clinical setting.”
    Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session
    Kohli MD, Summers RM, Geis JR
    J Digit Imaging (2017) 30:392–399
  • “Action items, and priority research topics, for this field of study include the following:
    – Describe, via a whitepaper, the high-level attributes of reusable medical image datasets suitable to train, test, validate, verify, and regulate ML products
    – Describe common categories of use cases for medical image datasets, and understand unique dataset attributes applicable to each
    – Describe the metadata, framework, and standards needed to catalog and discover datasets of medical images appropriate for ML
    – Understand and describe business cases and models for medical image datasets.”
    Medical Image Data and Datasets in the Era of Machine Learning—Whitepaper from the 2016 C-MIMI Meeting Dataset Session
    Kohli MD, Summers RM, Geis JR
    J Digit Imaging (2017) 30:392–399


Copyright © 2024 The Johns Hopkins University, The Johns Hopkins Hospital, and The Johns Hopkins Health System Corporation. All rights reserved.