Imaging Pearls ❯ Deep Learning ❯ Deep Learning and Ethics


  • “AI could help address urgent global health priorities, but the realization of this potential is contingent upon having data that represent those people, diseases, and geographies. Unfortunately, the current health data landscape does not reflect this. Evidence across health datasets (including in ophthalmology, dermatology, and radiology) has consistently highlighted that publicly accessible health data are heavily skewed toward just a few countries and exclude much of the world. In this issue of NEJM AI, we publish an article by Wu et al. demonstrating that this problem similarly exists in clinical text datasets, a topic of increasing importance as health care applications of large language models grow rapidly. In this article, we see 192 publicly available clinical text datasets originating from 14 countries and covering nine languages, yet they leave out Africa and Oceania entirely.”
    A Global Health Data Divide
    Xiaoxuan Liu et al.
    NEJM AI 2024; 1 (6)
  • Accepting that global inequality and inequitable health care access are in equal parts pernicious and persistent, what needs to change if AI is to be part of the solution? Improving the availability of health care data that appropriately represent the individuals most in need must be a core priority. Having a diverse pool of health data globally is critical for fostering a research, development, and innovation ecosystem that is able to support a range of use cases. Moreover, investment is needed in both infrastructure and digital literacy to empower countries and their citizens, ensuring that data collection and AI development are focused on the most important use cases. We cannot afford to ignore the message emerging consistently from multiple reviews of available health datasets: that there is a substantial global health data divide. Addressing this problem will be challenging, but it is essential to enabling AI health technologies that help those who need them most.
    A Global Health Data Divide
    Xiaoxuan Liu et al.
    NEJM AI 2024; 1 (6)
  • “Clinical decision support tools can improve diagnostic performance or reduce variability, but they are also subject to post-deployment underperformance. Although using AI in an assistive setting offsets many concerns with autonomous AI in medicine, systems that present all predictions equivalently fail to protect against key AI safety concerns. We design a decision pipeline that supports the diagnostic model with an ecosystem of models, integrating disagreement prediction, clinical significance categorization, and prediction quality modeling to guide prediction presentation. We characterize disagreement using data from a deployed chest X-ray interpretation aid and compare clinician burden in this proposed pipeline to the diagnostic model in isolation. The average disagreement rate is 6.5%, and the expected burden reduction is 4.8%, even if 5% of disagreements on urgent findings receive a second read. We conclude that, in our production setting, we can adequately balance risk mitigation with clinician burden if disagreement false positives are reduced.”
    AI-clinician collaboration via disagreement prediction: A decision pipeline and retrospective analysis of real-world radiologist-AI interactions.  
    Sanchez M, Alford K, Krishna V, Huynh TM, Nguyen CDT, Lungren MP, Truong SQH, Rajpurkar P  
    Cell Rep Med. 2023 Oct 17;4(10):101207. 
  • “In this work, we discuss a number of limitations and considerations for deploying AI-assisted diagnostic aids. We study disagreement in real-world production data from a chest X-ray interpretation tool and use that data to motivate the ideation of an AI assistance pipeline. We detail this pipeline, which makes use of machine learning to intelligently decide when and how to present model output in a clinically conscious manner, and again use the production data to simulate its effect on clinician burden depending on the characteristics of two models in the pipeline.”
     AI-clinician collaboration via disagreement prediction: A decision pipeline and retrospective analysis of real-world radiologist-AI interactions.
     Sanchez M, Alford K, Krishna V, Huynh TM, Nguyen CDT, Lungren MP, Truong SQH, Rajpurkar P  
    Cell Rep Med. 2023 Oct 17;4(10):101207. 
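The gating logic described above — use a disagreement predictor and a clinical-significance category to decide how a diagnostic output is surfaced — can be sketched in a few lines. This is an illustrative mock-up only: the field names, thresholds, and routing labels are hypothetical and are not taken from the paper's actual models.

```python
# Hypothetical sketch of a disagreement-gated presentation step, loosely
# modeled on the pipeline described above. Thresholds and labels are
# illustrative assumptions, not the authors' production values.
from dataclasses import dataclass

@dataclass
class Prediction:
    finding: str
    probability: float        # diagnostic model confidence
    disagreement_prob: float  # predicted clinician-AI disagreement
    urgent: bool              # clinical significance category

def present(pred: Prediction, disagree_threshold: float = 0.5) -> str:
    """Decide how to surface a model output to the reading clinician."""
    if pred.disagreement_prob >= disagree_threshold:
        # Likely disagreement: route urgent findings to a second read,
        # de-emphasize non-urgent ones instead of asserting them.
        return "second_read" if pred.urgent else "flag_low_confidence"
    return "show_normally"

print(present(Prediction("pneumothorax", 0.91, 0.72, urgent=True)))   # second_read
print(present(Prediction("cardiomegaly", 0.64, 0.10, urgent=False)))  # show_normally
```

The point of the design is that presentation, not prediction, carries the safety burden: the same diagnostic output is shown differently depending on expected disagreement and clinical stakes.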
  • “Rapid advances in automated methods for extracting large numbers of quantitative features from medical images have led to tremendous growth of publications reporting on radiomic analyses. Translation of these research studies into clinical practice can be hindered by biases introduced during the design, analysis, or reporting of the studies. Herein, the authors review biases, sources of variability, and pitfalls that frequently arise in radiomic research, with an emphasis on study design and statistical analysis considerations. Drawing on existing work in the statistical, radiologic, and machine learning literature, approaches for avoiding these pitfalls are described.”
    Radiomic Analysis: Study Design, Statistical Analysis, and Other Bias Mitigation Strategies
    Chaya S. Moskowitz et al.
    Radiology 2022; (in press)
  • Summary
    This review highlights biases and inappropriate methods used in radiomic research that can lead to erroneous conclusions; addressing these issues will accelerate translation of research to clinical practice and has the potential to positively impact patient care.
    Essentials
    • Many radiomic research studies are hindered by systematic biases.
    • In addition to ongoing initiatives for standardization, improvements in study design, data collection, rigorous statistical analysis, and thorough reporting are needed in radiomic research.
    • Insight into potential problems and suggestions for how to circumvent common pitfalls in radiomic studies are provided.
    Radiomic Analysis: Study Design, Statistical Analysis, and Other Bias Mitigation Strategies
    Chaya S. Moskowitz et al.
    Radiology 2022; (in press)
  • “It is not always possible to safeguard against all potential sources of study bias in radiomics research. Therefore, it is imperative that researchers thoroughly report on their imaging data (ie, Digital Imaging and Communications in Medicine [DICOM] header information), methodology, limitations, and any other potential sources of variability. Rigorous reporting enables researchers to build on others’ results and protects against failed attempts to replicate spurious and overstated results. For instance, Eslami et al included a detailed description of their methodology in their supplementary material.”
    Radiomic Analysis: Study Design, Statistical Analysis, and Other Bias Mitigation Strategies
    Chaya S. Moskowitz et al.
    Radiology 2022; (in press)
  • “Radiomic analyses are highly susceptible to bias arising from multiple sources. A unifying theme behind the biases and pitfalls we have outlined is that they can all lead to incorrect inference and a model that erroneously includes or excludes imaging features and, ultimately, performs poorly. While not meant to be an all-encompassing list, the issues we have highlighted arise frequently. Some, such as overfitting and lack of adjusting for multiple testing, are particularly relevant in radiomic studies. Others are issues that may arise equally as frequently in other types of studies but have been highlighted here because we have noticed a lack of awareness of these issues among investigators conducting radiomic studies. All are issues that are broadly applicable to many studies, including those where features are derived by the computer using convolutional neural network (deep) approaches. In any analysis, the challenge is to identify the most relevant sources of bias and measurement error.”
    Radiomic Analysis: Study Design, Statistical Analysis, and Other Bias Mitigation Strategies
    Chaya S. Moskowitz et al.
    Radiology 2022; (in press)
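One concrete remedy for the multiple-testing pitfall the authors flag is a false-discovery-rate correction when screening many candidate radiomic features. The sketch below implements the standard Benjamini-Hochberg procedure from scratch; the p-values are made-up illustrative numbers, not data from any study.

```python
# Illustrative only: a minimal Benjamini-Hochberg procedure, one standard
# way to adjust for the multiple testing the authors warn about when
# screening many radiomic features against an outcome at once.
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of hypotheses rejected at false-discovery rate alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value clears the BH line rank/m * alpha
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])

# Ten candidate features; naive thresholding at p < 0.05 would "find" five.
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(p))  # → [0, 1]
```

Only the two smallest p-values survive the correction, which is exactly the kind of guardrail against spurious feature selection the review calls for.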
  • "Although software packages to implement analyses are readily available and increasingly user friendly, if they are not implemented with the necessary expertise or correct guidance, there is a high risk that incorrect conclusions will be drawn from the work. The field of radiomics lies at the intersection of medicine, computer science, and statistics. We contend that to produce clinically meaningful results that positively impact patient care and minimize biases and pitfalls, radiomic analysis requires a multidisciplinary approach with a research team that includes individuals with multiple areas of expertise.”
    Radiomic Analysis: Study Design, Statistical Analysis, and Other Bias Mitigation Strategies
    Chaya S. Moskowitz et al.
    Radiology 2022; (in press)

  • “The broad application of artificial intelligence techniques in medicine is currently hindered by limited dataset availability for algorithm training and validation, due to the absence of standardized electronic medical records, and strict legal and ethical requirements to protect patient privacy. In medical imaging, harmonized data exchange formats such as Digital Imaging and Communication in Medicine and electronic data storage are the standard, partially addressing the first issue, but the requirements for privacy preservation are equally strict. To prevent patient privacy compromise while promoting scientific research on large datasets that aims to improve patient care, the implementation of technical solutions to simultaneously address the demands for data protection and utilization is mandatory. Here we present an overview of current and next-generation methods for federated, secure and privacy-preserving artificial intelligence with a focus on medical imaging applications, alongside potential attack vectors and future prospects in medical imaging and beyond.”
    Secure, privacy-preserving and federated machine learning in medical imaging
    Georgios A. Kaissis et al.
    Nat Mach Intell 2020; 2:305–311
  • "We believe that the widespread adoption of secure and private AI will require targeted multi-disciplinary research and investment in the following areas. Decentralized data storage and federated learning systems, replacing the current paradigm of data sharing and centralized storage, have the greatest potential to enable privacy-preserving cross-institutional research in a breadth of biomedical disciplines in the near future, with results in medical imaging and genomics recently demonstrated.”
    Secure, privacy-preserving and federated machine learning in medical imaging
    Georgios A. Kaissis et al.
    Nat Mach Intell 2020; 2:305–311
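The federated learning paradigm the authors describe keeps patient data at each institution and shares only model parameters with an aggregating server. The toy sketch below shows the core federated averaging (FedAvg) loop with plain Python lists; the "hospitals", gradients, and dataset sizes are invented placeholders, not a real imaging model.

```python
# A toy sketch of federated averaging (FedAvg): each site takes a gradient
# step on local data and shares only model weights, never patient records.
# Site names, gradients, and sizes are hypothetical placeholders.
def local_update(weights, site_gradient, lr=0.1):
    """One local gradient step performed at a hospital site."""
    return [w - lr * g for w, g in zip(weights, site_gradient)]

def federated_average(site_weights, site_sizes):
    """Server aggregates site models, weighted by local dataset size."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[j] * n for w, n in zip(site_weights, site_sizes)) / total
        for j in range(dim)
    ]

global_w = [0.0, 0.0]
site_grads = {"hospital_a": [1.0, -2.0], "hospital_b": [3.0, 0.0]}
sizes = [100, 300]  # local dataset sizes
updated = [local_update(global_w, g) for g in site_grads.values()]
global_w = federated_average(updated, sizes)
print(global_w)  # ≈ [-0.25, 0.05]
```

In a real deployment each `local_update` would be many epochs of training on private images, and the update itself would typically be protected further (eg, secure aggregation or differential privacy), which is where the attack vectors discussed in the paper come in.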
  • “Specifically, the authors propose that all individuals and entities with access to clinical data become data stewards, with fiduciary (or trust) responsibilities to patients to carefully safeguard patient privacy, and to the public to ensure that the data are made widely available for the development of knowledge and tools to benefit future patients. According to this framework, the authors maintain that it is unethical for providers to “sell” clinical data to other parties by granting access to clinical data, especially under exclusive arrangements, in exchange for monetary or in-kind payments that exceed costs. The authors also propose that patient consent is not required before the data are used for secondary purposes when obtaining such consent is prohibitively costly or burdensome, as long as mechanisms are in place to ensure that ethical standards are strictly followed.”
    Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework
    David B. Larson et al.
    Radiology 2020; 00:1–8
  • "The authors also propose that patient consent is not required before the data are used for secondary purposes when obtaining such consent is prohibitively costly or burdensome, as long as mechanisms are in place to ensure that ethical standards are strictly followed. Rather than debate whether patients or provider organizations “own” the data, the authors propose that clinical data are not owned at all in the traditional sense, but rather that all who interact with or control the data have an obligation to ensure that the data are used for the benefit of future patients and society.”
    Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework
    David B. Larson et al.
    Radiology 2020; 00:1–8
  • “Medical artificial intelligence (AI) can perform with expert-level accuracy and deliver cost-effective care at scale. IBM’s Watson diagnoses heart disease better than cardiologists do. Chatbots dispense medical advice for the United Kingdom’s National Health Service in lieu of nurses. Smartphone apps now detect skin cancer with expert accuracy. Algorithms identify eye diseases just as well as specialized physicians. Some forecast that medical AI will pervade 90% of hospitals and replace as much as 80% of what doctors currently do. But for that to come about, the health care system will have to overcome patients’ distrust of AI.”
    AI Can Outperform Doctors. So Why Don’t Patients Trust It?
    Chiara Longoni and Carey K. Morewedge
    Harvard Business Review Oct 30, 2019
  • “The reason, we found, is not the belief that AI provides inferior care. Nor is it that patients think that AI is more costly, less convenient, or less informative. Rather, resistance to medical AI seems to stem from a belief that AI does not take into account one’s idiosyncratic characteristics and circumstances. People view themselves as unique, and we find that this belief includes their health. Other people experience a cold; “my” cold, however, is a unique illness that afflicts “me” in a distinct way. By contrast, people see medical care delivered by AI providers as inflexible and standardized — suited to treat an average patient but inadequate to account for the unique circumstances that apply to an individual.”
    AI Can Outperform Doctors. So Why Don’t Patients Trust It?
    Chiara Longoni and Carey K. Morewedge
    Harvard Business Review Oct 30, 2019
  • "There are a number of steps that care providers can take to overcome patients’ resistance to medical AI. For example, providers can assuage concerns about being treated as an average or a statistic by taking actions that increase the perceived personalization of the care delivered by AI. When we explicitly described an AI provider as capable of tailoring its recommendation for whether to undergo coronary bypass surgery to each patient’s unique characteristics and medical history, study participants reported that they would be as likely to follow the treatment recommendations of the AI provider as they would be to follow the treatment recommendations of a human physician.”
    AI Can Outperform Doctors. So Why Don’t Patients Trust It?
    Chiara Longoni and Carey K. Morewedge
    Harvard Business Review Oct 30, 2019
  • “AI-based health care technologies are being developed and deployed at an impressive rate. AI- assisted surgery could guide a surgeon’s instrument during an operation and use data from past operations to inform new surgical techniques. AI-based telemedicine could provide primary care support to remote areas without easy access to health care. Virtual nursing assistants could interact with patients 24/7, offer round-the-clock monitoring, and answer questions. But harnessing the full potential of these and other consumer-facing medical AI services will require that we first overcome patients’ skepticism of having an algorithm, rather than a person, making decisions about their care.”
    AI Can Outperform Doctors. So Why Don’t Patients Trust It?
    Chiara Longoni and Carey K. Morewedge
    Harvard Business Review Oct 30, 2019
  • “One element of AI’s uniqueness is actually its vulnerability. Algorithms are sensitive to the ground truth, formerly called “gold standard.” Thus, labeled data have value in the market, however that marketplace emerges. To take this concept further, properly labeling datasets with ground truth, an onerous task requiring labor, is valuable. Labeled data is a currency of sorts. The corollary is that publicly available datasets, which are extensively used by researchers, should be taken with a pinch of salt.”
    Artificial Intelligence in Radiology–– The State of the Future
    Jha S, Cook T
    Acad Radiol 2020; 27:1–2
  • “It could be argued that algorithms trained on vast amounts of individual-level data are unwieldy or even superfluous. Who needs an algorithm to suggest the same decisions people would make themselves? Such a function might become critical, however, when choices have to be made, for instance, regarding continued life support for someone who can no longer make decisions."
    Algorithm-Aided Prediction of Patient Preferences — An Ethics Sneak Peek
    Nikola Biller‐Andorno, Armin Biller
    N Engl J Med 2019; 381(15)
  • "Conceiving of AI as a substitute for human decision making is challenging from a technical point of view. Examining the relationship between AI and decision making, Jean-Charles Pomerol has delineated two major aspects of decision making: diagnosis and “look ahead.” Diagnosis involves pattern matching and is therefore perfectly amenable to AI. Look ahead requires both the ability to combine many actions and events and the ability to anticipate all possible reactions.”
    Algorithm-Aided Prediction of Patient Preferences — An Ethics Sneak Peek
    Nikola Biller‐Andorno, Armin Biller
    N Engl J Med 2019; 381(15)
  • "The prospect that algorithms may compound the effects of evidence-based medicine, guidelines, and budget targets in limiting the scope available for individual clinical judgment is disconcerting to clinicians who believe that their professionalism is under threat. The American Medical Association addresses this point by conceiving of AI not as artificial intelligence but as “augmented intelligence” that enhances rather than replaces physicians’ expertise.”
    Algorithm-Aided Prediction of Patient Preferences — An Ethics Sneak Peek
    Nikola Biller‐Andorno, Armin Biller
    N Engl J Med 2019; 381(15)
  • “Algorithms may prompt us to revisit some questions that ethicists have long puzzled over, such as how we can know what a good ethical decision is. They also raise new ones: Will algorithms end up making better, more reliable, and more consistent moral choices than humans do? What can we learn from algorithms to improve our ethical reasoning and decision-making skills?"
    Algorithm-Aided Prediction of Patient Preferences — An Ethics Sneak Peek
    Nikola Biller‐Andorno, Armin Biller
    N Engl J Med 2019; 381(15)
  • “Radiologists will remain ultimately responsible for patient care and will need to acquire new skills to do their best for patients in the new AI ecosystem.”
    Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement
    J. Raymond Geis et al.
    Insights into Imaging (2019) 10:101


Copyright © 2024 The Johns Hopkins University, The Johns Hopkins Hospital, and The Johns Hopkins Health System Corporation. All rights reserved.