Imaging Pearls ❯ Deep Learning ❯ Decision Support


  • “These AI applications can be divided broadly into two categories: first, those pertaining to logistic workflows, including order scheduling, patient screening, radiologist reporting, and other operational analytics (also termed upstream AI); and second, those pertaining to the acquired imaging data themselves, such as automated detection and segmentation of findings or features, automated interpretation of findings, and image post-processing (also termed downstream AI).”
    Integrating AI Algorithms into the Clinical Workflow
    Juluru K et al.
    Radiology: Artificial Intelligence 2021; 3(6):e210013  
  •  “AI algorithms can enable radiologists to perform their jobs more accurately and efficiently, but architectures for deploying them in the clinical workflow are in the very early stages of development. The implementation of the software components we describe can ultimately be used to inform development of standards-based solutions.”  
    Integrating AI Algorithms into the Clinical Workflow
    Juluru K et al.
    Radiology: Artificial Intelligence 2021; 3(6):e210013  
  • “Artificial intelligence (AI) is a disruptive technology that involves the use of computerised algorithms to dissect complicated data. Among the most promising clinical applications of AI is diagnostic imaging, and mounting attention is being directed at establishing and fine-tuning its performance to facilitate detection and quantification of a wide array of clinical conditions. Investigations leveraging computer-aided diagnostics have shown excellent accuracy, sensitivity, and specificity for the detection of small radiographic abnormalities, with the potential to improve public health. However, outcome assessment in AI imaging studies is commonly defined by lesion detection while ignoring the type and biological aggressiveness of a lesion, which might create a skewed representation of AI’s performance.”
    Artificial intelligence in medical imaging: switching from radiographic pathological data to clinically meaningful endpoints
    Ohad Oren, Bernard J Gersh, Deepak L Bhatt
    Lancet Digital Health Vol 2 September 2020 (in press)
  • “The approach for adrenal incidentalomas could also benefit from AI-based imaging analysis. Adrenal nodules are the most frequently encountered incidental radiographic finding and can reflect malignant (ie, pheochromocytoma) or benign (ie, adenoma) conditions with overlapping imaging characteristics. Quantitative texture analysis through high-throughput extraction might differentiate radiographic adrenal lesions into discrete clinical subsets, reducing costly and invasive testing.”
    Artificial intelligence in medical imaging: switching from radiographic pathological data to clinically meaningful endpoints
    Ohad Oren, Bernard J Gersh, Deepak L Bhatt
    Lancet Digital Health Vol 2 September 2020 (in press)
  • “The rise and dissemination of AI in clinical medicine will refine our diagnostic accuracy and rule-out capabilities. However, unless AI algorithms are trained to distinguish between benign abnormalities and clinically meaningful lesions, better imaging sensitivity might come at the cost of increased false positives, as well as perplexing scenarios whereby AI findings are not associated with outcomes. To facilitate the study of AI in medical image interpretation, it is paramount to assess the effects on clinically meaningful endpoints to improve applicability and allow effective deployment into clinical practice.”
    Artificial intelligence in medical imaging: switching from radiographic pathological data to clinically meaningful endpoints
    Ohad Oren, Bernard J Gersh, Deepak L Bhatt
    Lancet Digital Health Vol 2 September 2020 (in press)
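The trade-off described above, where better imaging sensitivity can come at the cost of more false positives, is easy to see in the standard detection metrics. A minimal sketch in Python with invented counts (none of these numbers come from the studies quoted here):

```python
# Illustrative only: how lesion-detection performance is usually summarized.
# All counts below are made up for demonstration.

def detection_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity, and positive predictive value."""
    sensitivity = tp / (tp + fn)      # fraction of true lesions detected
    specificity = tn / (tn + fp)      # fraction of normals correctly cleared
    ppv = tp / (tp + fp)              # chance a flagged finding is real
    return sensitivity, specificity, ppv

# A highly sensitive screener applied to a low-prevalence population:
sens, spec, ppv = detection_metrics(tp=95, fp=450, tn=8550, fn=5)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.3f}")
```

With these hypothetical counts, sensitivity and specificity both look excellent, yet most flagged findings are false positives, which is exactly the scenario the authors caution about when detection is divorced from clinically meaningful endpoints.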
  • “Issues of generalisability are not unique to machine learning and are a dominant concern for clinical guidelines where the results of randomised controlled trials, the gold standard for evidence generation, might not generalise beyond the trial settings. If hospitals want to have useful machine learning systems at the bedside, the broader research community need to stop focusing solely on generalisability and consider the ultimate goal: will this system be useful in this specific case?”
    The myth of generalisability in clinical research and machine learning in health care
    Joseph Futoma, Morgan Simons, Trishan Panch, et al.
    Lancet Digital Health Vol 2 September 2020 (in press)
  • “Machine learning systems are not like thermometers, reliably measuring the temperature via universal rules of physics; nor are they like trained clinicians, gracefully adapting to new circumstances. Rather, these systems should be viewed as a set of rules that were trained to operate under certain contexts and rely on certain assumptions, and might work seamlessly at one centre but fail altogether somewhere else.”
    The myth of generalisability in clinical research and machine learning in health care
    Joseph Futoma, Morgan Simons, Trishan Panch, et al.
    Lancet Digital Health Vol 2 September 2020 (in press)
  • “Specifically, the authors propose that all individuals and entities with access to clinical data become data stewards, with fiduciary (or trust) responsibilities to patients to carefully safeguard patient privacy, and to the public to ensure that the data are made widely available for the development of knowledge and tools to benefit future patients. According to this framework, the authors maintain that it is unethical for providers to “sell” clinical data to other parties by granting access to clinical data, especially under exclusive arrangements, in exchange for monetary or in-kind payments that exceed costs. The authors also propose that patient consent is not required before the data are used for secondary purposes when obtaining such consent is prohibitively costly or burdensome, as long as mechanisms are in place to ensure that ethical standards are strictly followed.”
    Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework
    David B. Larson et al.
    Radiology 2020; 00:1–8
  • “The authors also propose that patient consent is not required before the data are used for secondary purposes when obtaining such consent is prohibitively costly or burdensome, as long as mechanisms are in place to ensure that ethical standards are strictly followed. Rather than debate whether patients or provider organizations “own” the data, the authors propose that clinical data are not owned at all in the traditional sense, but rather that all who interact with or control the data have an obligation to ensure that the data are used for the benefit of future patients and society.”
    Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework
    David B. Larson et al.
    Radiology 2020; 00:1–8
  • Background: Variation between radiologists when making recommendations for additional imaging and associated factors are, to the knowledge of the authors, unknown. Clear identification of factors that account for variation in follow-up recommendations might prevent unnecessary tests for incidental or ambiguous image findings.
    Purpose: To determine incidence and identify factors associated with follow-up recommendations in radiology reports from multiple modalities, patient care settings, and imaging divisions.
    Conclusion: Substantial interradiologist variation exists in the probability of recommending a follow-up examination in a radiology report, after adjusting for patient, examination, and radiologist factors.
    Variation in Follow-up Imaging Recommendations in Radiology Reports: Patient, Modality, and Radiologist Predictors
    Laila R. Cochon et al.
    Radiology 2019; 00:1–8 • https://doi.org/10.1148/radiol.2019182826
  • Materials and Methods: This retrospective study analyzed 318 366 reports obtained from diagnostic imaging examinations performed at a large urban quaternary care hospital from January 1 to December 31, 2016, excluding breast and US reports. A subset of 1000 reports were randomly selected and manually annotated to train and validate a machine learning algorithm to predict whether a report included a follow-up imaging recommendation (training-and-validation set consisted of 850 reports and test set of 150 reports). The trained algorithm was used to classify 318 366 reports. Multivariable logistic regression was used to determine the likelihood of follow-up recommendation. Additional analysis by imaging subspecialty division was performed, and intradivision and interradiologist variability was quantified.
    Variation in Follow-up Imaging Recommendations in Radiology Reports: Patient, Modality, and Radiologist Predictors
    Laila R. Cochon et al.
    Radiology 2019; 00:1–8 • https://doi.org/10.1148/radiol.2019182826
  • Results: The machine learning algorithm classified 38 745 of 318 366 (12.2%) reports as containing follow-up recommendations. Average patient age was 59 years ± 17 (standard deviation); 45.2% (143 767 of 318 366) of reports were from male patients. Among 65 radiologists, 57% (37 of 65) were men. At multivariable analysis, older patients had higher rates of follow-up recommendations (odds ratio [OR], 1.01 [95% confidence interval {CI}: 1.01, 1.01] for each additional year), male patients had lower rates of follow-up recommendations (OR, 0.9; 95% CI: 0.9, 1.0), and follow-up recommendations were most common among CT studies (OR, 4.2 [95% CI: 4.0, 4.4] compared with radiography). Radiologist sex (P = .54), presence of a trainee (P = .45), and years in practice (P = .49) were not significant predictors overall. A division-level analysis showed 2.8-fold to 6.7-fold interradiologist variation.
    Variation in Follow-up Imaging Recommendations in Radiology Reports: Patient, Modality, and Radiologist Predictors
    Laila R. Cochon et al.
    Radiology 2019; 00:1–8 • https://doi.org/10.1148/radiol.2019182826
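The study above trained a machine learning classifier to flag reports containing follow-up recommendations and expressed predictor effects as odds ratios from multivariable logistic regression. A toy sketch of both ideas, using hypothetical phrasing patterns and a hypothetical regression coefficient (this is not the authors' actual algorithm or model):

```python
import math
import re

# Hypothetical baseline: flag reports whose text contains common follow-up
# phrasing. The study itself used a trained ML classifier, not a pattern list.
FOLLOW_UP_PATTERNS = [
    r"follow[- ]up (ct|mri|imaging|examination)",
    r"recommend(ed)? (repeat|further|additional) imaging",
]

def has_follow_up_recommendation(report_text: str) -> bool:
    """Return True if the report text matches any follow-up phrasing pattern."""
    text = report_text.lower()
    return any(re.search(p, text) for p in FOLLOW_UP_PATTERNS)

def odds_ratio(beta: float) -> float:
    """Convert a logistic-regression coefficient into an odds ratio."""
    return math.exp(beta)

print(has_follow_up_recommendation(
    "Indeterminate 6 mm nodule. Recommend repeat imaging in 6 months."))
# A per-year age coefficient of about 0.01 corresponds to an OR of about 1.01,
# the per-year effect size reported in the Results:
print(round(odds_ratio(0.01), 2))
```

The odds-ratio conversion is the standard relationship OR = exp(β); the keyword patterns are purely illustrative stand-ins for the trained classifier.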
  • “In conclusion, we used machine learning to analyze variation found attributable to both radiologist and nonradiologist factors. Whereas radiologist sex, trainee involvement, and experience did not contribute to unwarranted variation in follow-up recommendations, there was substantial variation in follow-up recommendations between radiologists within the same division. Therefore, interventions to reduce unwarranted variation in follow-up recommendations may be most effective if targeted to individual radiologists. Interventions could include feedback reports that show follow-up recommendation rates for individual radiologists, educational efforts to improve awareness and acceptance of evidence-based imaging guidelines, and improved decision support tools. Future studies will be needed to assess the effect multifaceted interventions have on reducing interradiologist variation in follow-up recommendations and the effect on quality of care.”
    Variation in Follow-up Imaging Recommendations in Radiology Reports: Patient, Modality, and Radiologist Predictors
    Laila R. Cochon et al.
    Radiology 2019; 00:1–8 • https://doi.org/10.1148/radiol.2019182826

  • “An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist. Better yet, the program can do it faster and more efficiently, requiring a training data set rather than a decade of expensive and labor-intensive medical education. While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities.”
    Ethical Dimensions of Using Artificial Intelligence in Health Care
    Michael J. Rigby
    AMA J Ethics. 2019;21(2):E121-124.
  • “Nonetheless, this powerful technology creates a novel set of ethical challenges that must be identified and mitigated since AI technology has tremendous capability to threaten patient preference, safety, and privacy. However, current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the health care field. While some efforts to engage in these ethical conversations have emerged, the medical community remains ill informed of the ethical complexities that budding AI technology can introduce. Accordingly, a rich discussion awaits that would greatly benefit from physician input, as physicians will likely be interfacing with AI in their daily practice in the near future.”
    Ethical Dimensions of Using Artificial Intelligence in Health Care
    Michael J. Rigby
    AMA J Ethics. 2019;21(2):E121-124.
  • “If artificial intelligence becomes adept at screening for lung and breast cancer, it could screen populations faster than radiologists and at a fraction of cost. The information specialist could ensure that images are of sufficient quality and that artificial intelligence is yielding neither too many false-positive nor too many false-negative results. The efficiency from the economies of scale because of artificial intelligence could benefit not just developed countries, such as the United States, but developing countries hampered by access to specialists. A single information specialist, with the help of artificial intelligence, could potentially manage screening for an entire town in Africa.”
    Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists
    Jha S, Topol EJ
    JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Modern educational practices in radiology require facilitated learning using computer-based modules and active simulation as part of the learning experience. As part of this trend, there has been tremendous growth in the number of online, case-based learning tools. In their purest form, these tools can take the form of PACS-like, web-based teaching files, which offer almost unlimited scalability for case acquisition and distribution, and may also include the ability to pose questions to radiology trainees, track responses, and categorize cases, ideally with seamless integration with a clinical PACS system.”
    Image Sharing in Radiology—A Primer
    Chatterjee AR et al.
    Acad Radiol 2017; 24:286–294
  • “Cloud-based image sharing platforms based on interoperability standards such as the IHE-XDS-I profile are currently the most widely used method for sharing of clinical radiological images and will likely continue to grow in the coming years. Conversely, no single image sharing platform has emerged as a clear leader for research and educational applications. Radiologists, clinicians, investigators, technologists, educators, administrators, and patients all stand to benefit from medical image sharing. With their continued support, more widespread adoption of image sharing infrastructure will assuredly improve the standard of clinical care, research, and education in modern radiology.”
    Image Sharing in Radiology—A Primer
    Chatterjee AR et al.
    Acad Radiol 2017; 24:286–294
  • “A prudent attitude toward research on unintended consequences could help reduce the odds of negative consequences. Moreover, if such consequences occur despite these efforts, research could help manage and reduce the related effects of these consequences.”
    Unintended Consequences of Machine Learning in Medicine
    Cabitza F, Rasoini R, Gensini GF
    JAMA. 2017 Aug 8;318(6):517-518.
  • “The process of achieving value in terms of medical decision support does not remove the clinician or radiologist, but instead, provides easier access to information that might otherwise be inaccessible, inefficient, or difficult to integrate in real-time for the consulting physician. When this information is distilled in a way available to the radiologist, it becomes knowledge that can positively impact the clinician’s judgment in a personalized way in real-time.”
    Reinventing Radiology: Big Data and the Future of Medical Imaging
    Morris MA et al.
    J Thorac Imaging 2018;33:4–16
  • “Many tools have been developed to risk stratify patients into categories of pretest probability for CAD by generalizing patients into low-risk, medium-risk, and high-risk categories. Examples such as the Diamond and Forrester method, the Duke Clinical Score, and the Framingham Risk Score incorporate prior clinical history of cardiac events, certain characteristics of the chest pain, family history, medical history, age, sex, and results of a lipid panel. Imaging findings have been used in this type of risk stratification as well, with coronary calcium scoring.”
    Reinventing Radiology: Big Data and the Future of Medical Imaging
    Morris MA et al.
    J Thorac Imaging 2018;33:4–16
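The risk-stratification tools named above all reduce many clinical inputs to a coarse pretest-probability category. A schematic sketch of that final bucketing step, with illustrative cutoffs that are NOT taken from Diamond-Forrester, the Duke Clinical Score, or Framingham:

```python
# Hypothetical sketch of pretest-probability bucketing for CAD.
# The cutoffs below are invented for illustration; the real tools compute
# the probability itself from history, symptoms, age, sex, lipids, etc.

def risk_category(pretest_probability: float) -> str:
    """Map a pretest probability of CAD (0-1) to a coarse risk category."""
    if not 0.0 <= pretest_probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if pretest_probability < 0.10:
        return "low"
    if pretest_probability < 0.50:
        return "medium"
    return "high"

for p in (0.05, 0.30, 0.80):
    print(p, risk_category(p))
```

The point of the sketch is only the structure: a continuous estimate is collapsed into low/medium/high bins, which is what lets downstream pathways (testing, imaging, referral) branch on the category.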
  • “The ultimate clinical verification of diagnostic or predictive artificial intelligence tools requires a demonstration of their value through effect on patient outcomes, beyond performance metrics; this can be achieved through clinical trials or well-designed observational outcome research.”
    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction
    Seong Ho Park, Kyunghwa Han
    Radiology (in press) 2018
  • “Evaluation of the clinical performance of a diagnostic or predictive artificial intelligence model built with high-dimensional data requires use of external data from a clinical cohort that adequately represents the target patient population to avoid overestimation of the results due to overfitting and spectrum bias.”
    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction
    Seong Ho Park, Kyunghwa Han
    Radiology (in press) 2018
  • “Development of an algorithm for medical diagnosis or prediction, especially an algorithm in which deep neural networks are used, typically requires a huge dataset, often referred to as “big data.” Therefore, unlike prospective clinical trials, in which subjects are typically recruited uniformly and consecutively according to eligibility criteria explicitly defined for a particular clinical setting, the data used to develop a deep learning algorithm for medical diagnosis or prediction often must be collected from multiple heterogeneous sources in various ways.” 
    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction
    Seong Ho Park, Kyunghwa Han
    Radiology (in press) 2018
  • “Robust clinical verification of the performance of a diagnostic or predictive artificial intelligence model requires external validation (validation here means verification of a model’s performance) in a clinical cohort that adequately represents the target patient population, and the use of prospectively collected data is desirable. This procedure is crucial for avoiding overestimation of the performance as a result of overfitting in a high-dimensional or overparameterized classification model and spectrum bias.”
    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction
    Seong Ho Park, Kyunghwa Han
    Radiology (in press) 2018
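The overfitting and spectrum-bias concern in these passages can be illustrated with a toy example: a one-feature "model" whose decision threshold is tuned on an internal cohort and then applied to an external cohort drawn from a shifted distribution, as might happen across scanners or patient populations. All data here are simulated:

```python
import random

# Toy illustration (simulated data only) of why external validation matters:
# a threshold tuned to maximize accuracy on one center's data looks strong
# internally but degrades on an external cohort whose feature distribution
# is shifted.

random.seed(0)

def make_cohort(n, pos_mean, neg_mean):
    """Simulated one-feature cohort: list of (feature value, true label)."""
    data = [(random.gauss(pos_mean, 1.0), 1) for _ in range(n)]
    data += [(random.gauss(neg_mean, 1.0), 0) for _ in range(n)]
    return data

def accuracy(cohort, threshold):
    """Fraction of cases classified correctly by 'feature >= threshold'."""
    return sum((x >= threshold) == bool(y) for x, y in cohort) / len(cohort)

def best_threshold(cohort):
    """Pick the cutoff with the highest accuracy on this cohort (fits to it)."""
    return max((x for x, _ in cohort), key=lambda t: accuracy(cohort, t))

internal = make_cohort(200, pos_mean=2.0, neg_mean=0.0)
external = make_cohort(200, pos_mean=1.0, neg_mean=-1.0)  # shifted population

t = best_threshold(internal)
print(f"internal accuracy: {accuracy(internal, t):.2f}")  # optimistic estimate
print(f"external accuracy: {accuracy(external, t):.2f}")  # typically lower
```

The internal figure is optimistic because the threshold was chosen on the same data it is scored on; the external cohort exposes the drop, which is the overestimation the authors say external validation is designed to catch.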
  • “Another underutilized approach is the use of automated image analysis running in the background to triage patients with potentially life-threatening conditions, to reduce common interpretative errors, to perform large-scale epidemiologic studies, and to coordinate and interpret large volumes of clinical, genomic, and imaging data. As radiology practices consolidate into larger hospital-led groups, it will be more feasible to implement such systems.”
    Progress in Fully Automated Abdominal CT Interpretation
    Summers RM
    AJR 2016; 207:67–79
  • “Similarly, fully automated abdominal CT image interpretation is likely to change the role of radiologists, but they will still be responsible for taking care of the patient and making the final diagnosis. The automated report could improve reading efficiency, but radiologists will need to be vigilant to avoid placing too much trust in the computer.” 
    Progress in Fully Automated Abdominal CT Interpretation
    Summers RM
    AJR 2016; 207:67–79
  • “The use of automated image interpretation by nonradiologists will need to be considered. Such users might include radiology technologists, radiologist assistants, and nonradiologist clinicians. The technology could lead to further commoditization of radiology services.” 
    Progress in Fully Automated Abdominal CT Interpretation
    Summers RM
    AJR 2016; 207:67–79
  • “In conclusion, advances in abdominal CT automated image interpretation are occurring at a rapid pace. In the not too distant future, these advances may enable fully automated image interpretation. Similar advances may occur in other body regions and with other imaging modalities. Risks and benefits are difficult to foresee but may include increased pressures for commoditization, better reading efficiency, fewer interpretive errors, and a more quantitative radiology report. The primary focus must ultimately be on improved patient care.”

    Progress in Fully Automated Abdominal CT Interpretation
    Summers RM
    AJR 2016; 207:67–79
  • “The current design and implementation of Medical Workstations has failed and must be replaced by Knowledge Stations as we move beyond image acquisition and into knowledge acquisition like deep learning.”
    Rethinking Medical Workstations and the Transition to Knowledge Stations
    Horton KM, Fishman EK
    JACR (in progress)
© 1999-2021 Elliot K. Fishman, MD, FACR. All rights reserved.