AI Guidelines
- To date, the development of machine learning (ML) and artificial intelligence (AI) in medicine has been characterized by steady progress with times of rapid growth—each with promise to improve patient outcomes and clinical practice and, at times, reduce costs. With the expansion from traditional AI (ML, computer vision, and natural language processing) to generative AI using generative pretrained transformers (GPTs), staggering new opportunities exist to both develop insights and present them to clinicians and patients. However, to maximize the positive impact of these innovations, a framework is needed for clinicians and patients to understand AI in the context of clinical practice, including the evidence of efficacy, safety, and monitoring in real-world clinical use. We believe that the progress and adoption of ML and AI tools in medicine will be accelerated by a clinical framework for AI development and testing that links evidence generation to indication and benefit and risk and allows clinicians to immediately understand it in the context of existing practice guidelines.
Translating AI for the Clinician
Manesh R. Patel, Suresh Balu, Michael J. Pencina
JAMA Published online October 15, 2024 - “To realize their full potential, current development of health AI technologies needs to focus on the clinical use case or indication that the technologies aim to improve. Specifically, developers should prioritize aligning the technologies with clinical indication and use cases to maximize impact. We believe this first step is a conceptual sea change from the current development pathway, which focuses on the advanced computational techniques and available health data sources being used, with emphasis on variety, amount, and breadth. Although this is necessary for AI algorithm and model formation, it is not sufficient. For successful adoption of AI technologies in the clinic, we must first articulate the specific problems or use cases that would benefit from the incorporation of AI.”
Translating AI for the Clinician
Manesh R. Patel, Suresh Balu, Michael J. Pencina
JAMA Published online October 15, 2024 - “Although regulatory agencies have provided some guidance,2 risk or indication-based testing and monitoring are key to rapid development and implementation of high-quality health AI technologies. Specifically, for low-risk AI technologies (those that might improve health behaviors, such as encouraging more movement or more sleep), one can imagine study designs incorporating these tools in prospective observational studies. These study designs still need representative patients and broad uptake and change measures for meaningful use. In contrast, high-risk AI technologies, such as tools to improve clinical performance during either a therapeutic (eg, percutaneous angioplasty) or diagnostic (eg, AI algorithm for detection of a dangerous cardiac rhythm) procedure, would require a randomized trial with a control group to ensure clinical evidence for use. In this way, the indication and risk of the AI technology would match the methodology for clinical testing and adoption.”
Translating AI for the Clinician
Manesh R. Patel, Suresh Balu, Michael J. Pencina
JAMA Published online October 15, 2024 - “The next decades of health care innovation will undoubtedly be dependent on the volume of health data generated in the daily conduct of health delivery. Coupled with the technological breakthroughs afforded by the rapid growth of AI capabilities, these health care innovations could truly revolutionize the practice of medicine as we know it. However, this potential will not be realized without a refocusing of AI technology development toward a closer alignment with the health goals that clinicians and patients understand are required to ensure widespread adoption and maximal impact to improve human health. We need clearly articulated clinical indications, well-defined risk-based clinical testing processes and evidence generation, and continuous monitoring linked to these indications. Without this type of paradigm shift, we fear that the use of AI technologies will struggle to gain sufficient trust among clinicians and patients, which in turn will limit its adoption and impact on health.”
Translating AI for the Clinician
Manesh R. Patel, Suresh Balu, Michael J. Pencina
JAMA Published online October 15, 2024
- “It is not always possible for radiology practices to engage in system-level decision-making regarding the implementation of AI algorithms, even when they leverage medical imaging data. Individuals with informatics expertise who work for the radiology practice can integrate the algorithm in the practice with vendor support; for simple applications, the process can be analogous to other software installations. However, more complex AI tools might require not only domain expertise but also computing resources, monitoring processes, and methods for data access. The governance team will have to expand to include diverse experts who can review evidence, perform utility analysis, estimate risk, assess technical and clinical readiness, and predict economic effects. Ethics and fairness review should be incorporated into the algorithm assessment process.”
Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How?
Dania Daye et al.
Radiology 2022; 000:1–9 - “Many consumers of AI believe they can rely on FDA clearance and vendor marketing to ensure AI applications will work as expected in clinical practice. But even those tools cleared by the FDA may not perform as expected outside of the environments where they are trained. Local evaluation of AI tools by radiology practices will be equally important in community practices as in academic practices. Multiple components are typically included in the scoring rubrics established by the AI governance committee when evaluating an application for implementation.”
Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How?
Dania Daye et al.
Radiology 2022; 000:1–9 - “Once an algorithm has been approved by the governance group, responsible resources must work with vendors or internal developers for robustness and integration testing, ideally with staged “shadow” and pilot implementation, respectively. In shadow deployment, clinical data are fed to the algorithm in real time and results are gathered to assess performance and safety, but the generated results are not provided to clinical users. In a pilot deployment, chosen clinical users test the model in a limited part of the practice and provide production feedback before full clinical deployment. After these staged deployments, prechosen metrics are reviewed to determine if the application warrants a fuller implementation, further assessment, or rejection.”
Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How?
Dania Daye et al.
Radiology 2022; 000:1–9 - “Explainable AI exploits the ability of some AI models to provide feedback on how input features drive generated output. This form of explanation can help identify the cause of observed problems, where exploration of the importance of each parameter—and in some cases, decision tree structure—can more readily provide a human-interpretable explanation of flawed model decision-making. However, whereas the ability to explain can be a valuable trait in an AI tool, it does not equate to trustworthiness (34). Unfortunately, unlike classic machine learning methods applied to electronic health record data, the ability to interpret and explain is typically more challenging for imaging AI tools, which often use deep learning methods with millions of parameters.”
Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How?
Dania Daye et al.
Radiology 2022; 000:1–9 - “As more AI tools are considered for implementation, standard strategies should be adapted to the life cycle of each tool. The governing body may become aware of tools at different development stages, leveraging different technology platforms, and requiring different points of clinical integration. A set of useful tools that support a variety of scenarios will allow for more effective and seamless clinical implementation of AI applications. Governing bodies should establish processes to stratify risk and determine the appropriate path for initial implementation and monitoring. The stage of tool development, FDA clearance status, and method of clinical implementation may inform risk and guide the implementation strategy. Implementation of early-stage tools may favor governance decentralization and shadow implementation, whereas validated applications in low-risk settings may immediately undergo clinical implementation after expedited approval from central governance structures. The governance process we propose herein may require financial and resource investment at the institutional level. However, this investment is essential to ensure the quality and safety of AI implementation in clinical practice.”
Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How?
Dania Daye et al.
Radiology 2022; 000:1–9 - “From its inception, radiology has led many technological revolutions. Despite the challenges that are brought by major changes, transformation is not foreign to our specialty. We have always adapted. The advent of artificial intelligence (AI) marks yet another such watershed moment in the history of our specialty. Appropriate governance and management structures will empower us to adapt to this change and fully embrace it, even though it may be years before mature AI tools are routinely integrated into daily practice. Although the infrastructure and investment required for clinical implementation of AI is daunting and imminent changes will bring many challenges—uncertainty and fear not least among them—we believe this transformation will immensely benefit our specialty and presents us with an opportunity to lead as AI augments the practice of medicine.”
Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How?
Dania Daye et al.
Radiology 2022; 000:1–9
- “Much fear has been generated among radiologists by the statements in public media from researchers engaged in AI development, predicting the imminent extinction of our specialty. For example, Andrew Ng (Stanford) stated that “[a] highly-trained and specialised radiologist may now be in greater danger of being replaced by a machine than his own executive assistant”, whereas Geoffrey Hinton (Toronto) said “if you work as a radiologist, you’re like the coyote that’s already over the edge of the cliff, but hasn’t yet looked down so doesn’t realise there’s no ground underneath him. People should stop training radiologists now. It’s just completely obvious that within 5 years, deep learning is going to do better than radiologists. We’ve got plenty of radiologists already”.”
What the radiologist should know about artificial intelligence – an ESR white paper
Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2
- The aim of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavor to align them.
ETHICS GUIDELINES FOR TRUSTWORTHY AI
High-Level Expert Group on Artificial Intelligence
European Commission 2019 - Artificial intelligence (AI) software that analyzes medical images is becoming increasingly prevalent. Unlike earlier generations of AI software, which relied on expert knowledge to identify imaging features, machine learning approaches automatically learn to recognize these features. However, the promise of accurate personalized medicine can only be fulfilled with access to large quantities of medical data from patients. This data could be used for purposes such as predicting disease, diagnosis, treatment optimization, and prognostication. Radiology is positioned to lead development and implementation of AI algorithms and to manage the associated ethical and legal challenges. This white paper from the Canadian Association of Radiologists provides a framework for study of the legal and ethical issues related to AI in medical imaging, related to patient data (privacy, confidentiality, ownership, and sharing); algorithms (levels of autonomy, liability, and jurisprudence); practice (best practices and current legal framework); and finally, opportunities in AI from the perspective of a universal health care system.
Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology
Jacob L. Jaremko
Can Assoc Radiol J. 2019 May;70(2):107-118
Can Assoc Radiol J. 2019 May;70(2):107-118 - “Sharing medical data for research purposes is a complex issue balancing individual privacy rights versus potential collective societal benefits. This is particularly important for radiology AI data analysis, which uniquely requires large quantities of sensitive image data for algorithm training. A paradigm shift from a patient’s right to near-absolute data privacy, to the sharing of anonymized data becoming regarded as one of the duties or responsibilities of a citizen is underway. This requires a move from “informed consent” for traditional research projects, toward other forms of consent (“broad consent,” “opt-out consent,” “presumed consent”) for AI data analyses.”
Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology
Jacob L. Jaremko
Can Assoc Radiol J. 2019 May;70(2):107-118 - The institution implementing AI could potentially be held liable for AI-related medical error in several ways. It could be held responsible for malpractice under “vicarious liability” in the following circumstances: (1) if the AI system is deemed equivalent to an employee, or as a “learned intermediary,” or (2) if the AI system is deemed a technological device that the institution has a duty to deploy appropriately. The AI technology manufacturer/developer could theoretically be held liable under “products liability,” though this type of liability is notoriously difficult to demonstrate for computer software.
Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology
Jacob L. Jaremko
Can Assoc Radiol J. 2019 May;70(2):107-118 - “The issue of who owns this personal health data is characterized by a complex tension between health care provider proprietary interests, patient privacy, copyright issues, AI developer intellectual property, and an overarching public interest in open access to data that can improve medical care. In Canada this is complicated by the constitutional division of powers, under which copyright law is in federal jurisdiction, governed by the Copyright Act R.S.C. 1985, c. C-42, while health care is in the jurisdiction of provinces and territories. To the question, “who owns patient medical data in Canada,” the answer is nuanced and depends on how and by whom the data is being used.”
Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology
Jacob L. Jaremko
Can Assoc Radiol J. 2019 May;70(2):107-118
Value of Data in the AI Era - Patient Data and AI: Who has access to what?
- Who owns the patient data? (medical records, imaging studies, pathology reports)
- Who controls access to the patient data?
- What do hospitals need to do to control the use of patient data?
- What are a patient's rights to their own data?