Deep Learning: AI and Legal Issues


  • IMPORTANCE Advances in artificial intelligence (AI) must be matched by efforts to better understand and evaluate how AI performs across health care and biomedicine as well as develop appropriate regulatory frameworks. This Special Communication reviews the history of the US Food and Drug Administration’s (FDA) regulation of AI; presents potential uses of AI in medical product development, clinical research, and clinical care; and presents concepts that merit consideration as the regulatory system adapts to AI’s unique challenges.
    CONCLUSIONS AND RELEVANCE Strong oversight by the FDA protects the long-term success of industries by focusing on evaluation to advance regulated technologies that improve health. The FDA will continue to play a central role in ensuring safe, effective, and trustworthy AI tools to improve the lives of patients and clinicians alike. However, all involved entities will need to attend to AI with the rigor this transformative technology merits.  
    FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine
    Haider J. Warraich, Troy Tazbaz, Robert M. Califf
    JAMA. Published online October 15, 2024. doi:10.1001/jama.2024.21451
  • OBSERVATIONS The FDA has authorized almost 1000 AI-enabled medical devices and has received hundreds of regulatory submissions for drugs that used AI in their discovery and development. Health AI regulation needs to be coordinated across all regulated industries, the US government, and with international organizations. Regulators will need to advance flexible mechanisms to keep up with the pace of change in AI across biomedicine and health care. Sponsors need to be transparent about and regulators need proficiency in evaluating the use of AI in premarket development. A life cycle management approach incorporating recurrent local postmarket performance monitoring should be central to health AI development. Special mechanisms to evaluate large language models and their uses are needed. Approaches are necessary to balance the needs of the entire spectrum of health ecosystem interests, from large firms to start-ups. The evaluation and regulatory system will need to focus on patient health outcomes to balance the use of AI for financial optimization for developers, payers, and health systems.
    FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine
    Haider J. Warraich, Troy Tazbaz, Robert M. Califf
    JAMA. Published online October 15, 2024. doi:10.1001/jama.2024.21451

  • The FDA’s first approval of a partially AI-enabled medical device took place in 1995, when the FDA approved PAPNET, software that used neural networks to prevent misdiagnosis of cervical cancer in women undergoing Papanicolaou tests. Although PAPNET was shown to be more accurate than human pathologists, it was not adopted in clinical practice due to inadequate cost-effectiveness. Since then, the FDA has authorized approximately 1000 AI-enabled medical devices, with their most common use being in radiology, followed by cardiology.
    FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine
    Haider J. Warraich, Troy Tazbaz, Robert M. Califf
    JAMA. Published online October 15, 2024. doi:10.1001/jama.2024.21451

  • Although the FDA does not regulate the practice of medicine, it has a strong mission to advance both public health and biomedical innovation. Therefore, there is concern that a disproportionate focus of AI applications on financial return on investment could harm patient outcomes and reduce acceptance and trust in this technology. Many AI innovations that could benefit patients may come at the price of traditional jobs, capital structures, and revenue streams in health care. Yet too many US residents live in health care deserts, with primary care shortages even in many physician-dense areas, and AI algorithms could point to more preventive services that currently are not profitable. Furthermore, AI could significantly improve the efficiency of clinical services, thereby freeing clinicians to do the one thing that ultimately no machine can: forge a human connection with the patient.
    FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine
    Haider J. Warraich, Troy Tazbaz, Robert M. Califf
    JAMA. Published online October 15, 2024. doi:10.1001/jama.2024.21451
  • Historic advances in AI applied to biomedicine and health care must be matched by continuous complementary efforts to better understand how AI performs in the settings in which it is deployed. This will entail a comprehensive approach reaching far beyond the FDA, spanning the consumer and health care ecosystems to keep pace with accelerating technical progress. If not, there is a risk that AI could disappoint, similar to other general-purpose technologies deployed in health care settings, or even create significant harm if untended models’ performance deteriorates or focuses on financial return without adequate attention to impact on clinical outcomes.
    FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine
    Haider J. Warraich, Troy Tazbaz, Robert M. Califf
    JAMA. Published online October 15, 2024. doi:10.1001/jama.2024.21451
  • “Strong oversight by the FDA and other agencies aims to protect the long-term success of regulated products by maintaining a high grade of public trust in the regulated space. It is in the interest of the biomedical, digital, and health care industries to identify and deal with irresponsible actors and to avoid misleading hyperbole. Regulated industries, academia, and the FDA will need to develop and optimize the tools needed to assess the ongoing safety and effectiveness of AI in health care and biomedicine. The FDA will continue to play a central role with a focus on health outcomes, but all involved sectors will need to attend to AI with the care and rigor this potentially transformative technology merits.”  
    FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine
    Haider J. Warraich, Troy Tazbaz, Robert M. Califf
    JAMA. Published online October 15, 2024. doi:10.1001/jama.2024.21451
  • “Although the FDA is the most critical player in regulating regenerative (and likely generative) clinical AI in the United States, other agencies will also participate. The Office of the National Coordinator for Health Information Technology has already used its authority over EHR certification to require that any AI incorporated into certified EHRs meet various transparency requirements. These will be most valuable for assuring users that the data used to train an ML application are appropriate to its intended clinical use. Transparency about the methods with which the ML reaches its findings remains problematic because the process is often inexplicable and because of proprietary concerns about the underlying software. The Federal Trade Commission also has the authority to sue vendors that misrepresent the performance of their AI products.”
    The Regulation of Clinical Artificial Intelligence
    David Blumenthal, Bakul Patel
    NEJM AI 2024; 1 (8)
  • “Current systems for regulating human clinical intelligence are imperfect, as is human intelligence. Some prospective clinicians perform poorly in their undergraduate or graduate coursework. Most do not correctly answer every question on their licensing or board certification examinations. Virtually every clinician, no matter how distinguished or experienced, makes errors in diagnosis and treatment during their careers. We also understand that clinical intelligence is dynamic. Good clinicians are lifelong learners who rely on their experience and new scientific information to improve their care over time.”
    The Regulation of Clinical Artificial Intelligence
    David Blumenthal, Bakul Patel
    NEJM AI 2024; 1 (8)
  • “The regulation of clinical AI poses novel challenges for society. We may never be able to provide the type of assurances concerning the safety and efficacy of GAI that regulators have achieved for other medical devices and pharmaceuticals. However, those challenges are worth embracing. These technologies have enormous potential to improve health and lives. Wise regulation can help to ensure that clinical AI reaches its full potential to improve patient care.”
    The Regulation of Clinical Artificial Intelligence
    David Blumenthal, Bakul Patel
    NEJM AI 2024; 1 (8)
  • “How different is this situation from the developments in medicine where physicians are giving away their knowledge to artificial intelligence (AI) on a voluntary basis and spend hours of valuable research time sharing expert knowledge with AI systems. AI has entered the medical field so rapidly and unobtrusively that it seems as if its interactions with the profession have been accepted without due diligence or in-depth consideration. It is clear that AI applications are being developed with the speed of lightning, and from recent publications it becomes frightfully apparent what we are heading for and not all of this is good. AI may be capable of amazing performance in terms of speed, consistency, and accuracy, but all of its operations are built on knowledge derived from experts in the field. We here follow the example of the kidney pathology field to illustrate the developments, emphasizing that this field is only exemplary of other fields in medicine.”
    AI's Threat to the Medical Profession.  
    Fogo AB, Kronbichler A, Bajema IM.  
    JAMA. 2024 Feb 13;331(6):471-472. 
  • “This era will show a decrease in intellectual debates among colleagues, a sign of the time that computer scientists have already warned us about. While authors of literature are fighting for regulations to control the usage of AI in art, physicians should contemplate how to take advantage of the potential benefits from AI in medicine without losing control over their profession. With the issue of a landmark Executive Order in the US to ensure that America leads the way in managing the risks of AI and the EU becoming the first continent to set clear rules for the use of AI, physicians should realize that keeping AI within boundaries is essential for the survival of their profession and for meaningful progress in diagnosis and understanding of disease mechanisms.”
    AI's Threat to the Medical Profession.  
    Fogo AB, Kronbichler A, Bajema IM.  
    JAMA. 2024 Feb 13;331(6):471-472. 
  • “Even among industry leaders, there is a wide variety in maturity levels, as health systems approach AI from different angles and baselines. This roadmap will be a valuable tool for guiding best practices, investments, and conversations, and aligning progress to help health systems take advantage of sector-wide advancement. The AI Collaborative continues to meet, discuss, and refine the AI Maturity Roadmap, with plans to expand the Business Implementation, Value, Maintenance and Operations, and Information Architecture sections.”
    The AI Maturity Roadmap: A Framework for Effective and Sustainable AI in Health Care
    Peter Durlach et al.
    NEJM AI DOI: 10.1056/AI-S2400177 
  • “Researchers expect the deficit of primary care physicians to reach as much as 55,200 by 2033, and a shortage of non-primary specialty physicians of up to 86,700. This is due to a combination of the factors above, and intensified by the fact that more than two-fifths of currently active physicians will reach the standard retirement age within the next decade. With so many experiencing burnout, it’s likely that these physicians will seek to accelerate their retirement, rather than extend their careers.”
    The AI Maturity Roadmap: A Framework for Effective and Sustainable AI in Health Care
    Peter Durlach et al.
    NEJM AI DOI: 10.1056/AI-S2400177 

  • “Deployed strategically, AI has a key role to play in the future of health care, supporting everything from individual diagnoses to broad organizational strategy. As it continues to evolve, the AI Maturity Roadmap can function as a framework for all health systems on this journey.”
    The AI Maturity Roadmap: A Framework for Effective and Sustainable AI in Health Care
    Peter Durlach et al.
    NEJM AI DOI: 10.1056/AI-S2400177 
  • “In summary, the ethical and effective deployment of AI in healthcare is substantially enhanced by rigorous QA protocols, transparent vendor practices, and a commitment to ongoing monitoring and adaptation. Through continuous monitoring and rigorous testing, QA ensures that medical AI tools remain reliable and effective across varied patient demographics and clinical scenarios. Rigorous testing procedures enhance their trustworthiness among clinicians and patients and support the broader goal of ensuring that AI tools can be effectively generalized to different settings. Integrating robust QA programs creates a more resilient healthcare system equipped to harness the benefits of AI while minimizing risks. These elements collectively contribute to making AI a more reliable, safe, and equitable tool in medicine, enabling healthcare providers to build trust and prevent harm while adapting to the evolving landscape of AI.”
    Artificial intelligence in medicine: mitigating risks and maximizing benefits via quality assurance, quality control, and acceptance testing  
    Usman Mahmood
    BJR|Artificial Intelligence, 2024, 1(1), ubae003 
  • “Implementing AI tools into clinical practice is a shared responsibility between manufacturers and end-users that should mirror the QA programs required to install medical imaging devices. The programs should include comprehensive acceptance testing (AT) and continued, periodic quality control (QC) procedures. End-user training and a proper trial period with the local patient population should be required to ensure an understanding of the intended use and limitations of the AI tools before the AI recommendation may influence clinical decisions.”
    Artificial intelligence in medicine: mitigating risks and maximizing benefits via quality assurance, quality control, and acceptance testing  
    Usman Mahmood
    BJR|Artificial Intelligence, 2024, 1(1), ubae003 
  • “Given the diverse applications of AI in medical imaging, each AI tool will require its own specific QA program. However, the general principle should be to assess each tool's functionality locally using well-curated, reference test sets with sufficient annotated cases for each subgroup in the local patient population. This approach involves evaluating the AI tool's performance across diverse patient subgroups that cover the local real-world patient population of interest, including subgroups that might be underrepresented in the initial training or pre-release test data. A carefully designed testing regime goes beyond mere accuracy metrics; it critically examines potential biases, sensitivity to specific anatomical variations, and the tool's adaptability to different clinical contexts. The increased scrutiny ensures that the AI tool operates equitably across a broader spectrum of patients, thereby building trust in its ability to generalize to the unique components of the local context. Additionally, the QA process should be tailored to the specific application, associated risks, and clinical environment in which it will be used.”
    Artificial intelligence in medicine: mitigating risks and maximizing benefits via quality assurance, quality control, and acceptance testing  
    Usman Mahmood
    BJR|Artificial Intelligence, 2024, 1(1), ubae003 
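
A note on practice: the subgroup-level testing Mahmood describes maps onto a simple computational pattern. Below is a minimal sketch in Python, assuming a locally curated reference set in which each case carries a ground-truth label, the AI tool's binary output, and a site-defined subgroup tag (age band, sex, scanner model, protocol); the field names and minimum-count threshold are illustrative assumptions, not from the article.

```python
from collections import defaultdict

def subgroup_performance(cases, min_n=30):
    """Summarize an AI tool's sensitivity/specificity per local subgroup.

    `cases` is an iterable of dicts with keys:
      'subgroup' - site-defined stratum, e.g. age band or scanner model
      'label'    - ground-truth finding present? (bool)
      'pred'     - AI tool's binary output (bool)
    Subgroups smaller than `min_n` are flagged as inconclusive rather
    than reported, since small strata give unstable estimates.
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for c in cases:
        k = c["subgroup"]
        if c["label"] and c["pred"]:
            counts[k]["tp"] += 1
        elif c["label"]:
            counts[k]["fn"] += 1
        elif c["pred"]:
            counts[k]["fp"] += 1
        else:
            counts[k]["tn"] += 1

    report = {}
    for k, n in counts.items():
        total = sum(n.values())
        if total < min_n:
            report[k] = {"n": total, "note": "insufficient cases for estimate"}
            continue
        sens = n["tp"] / (n["tp"] + n["fn"]) if (n["tp"] + n["fn"]) else None
        spec = n["tn"] / (n["tn"] + n["fp"]) if (n["tn"] + n["fp"]) else None
        report[k] = {"n": total, "sensitivity": sens, "specificity": spec}
    return report
```

In practice the subgroup key would cover the local real-world population of interest, including strata that may have been underrepresented in the vendor's training or pre-release test data.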

  • “Building on the essential roles of AT and QC, user training is a critical element for successfully integrating AI tools into healthcare. To encourage adoption and minimize risks, the end-users must understand the tool’s intended use, capabilities, limitations, and ethical implications. Such training should be both comprehensive and tailored to meet the unique requirements and protocols of each clinical site. In addition to application-specific instructions, training modules should include information on the correct usage of the AI tool, underlying assumptions, legal framework, and case studies illustrating both successful and unsuccessful applications. This multifaceted approach aids in understanding the tool's strengths and limitations. Crucially, user training should commence before the AI tool starts influencing clinical decisions and should be periodically updated throughout the AI tool's operational life. Continuous education should include peer-reviewed audits and equip clinicians to effectively communicate the role and impact of AI tools in patient care. Furthermore, settings where AI outputs guide downstream decisions warrant additional discipline-specific training.”
    Artificial intelligence in medicine: mitigating risks and maximizing benefits via quality assurance, quality control, and acceptance testing  
    Usman Mahmood
    BJR|Artificial Intelligence, 2024, 1(1), ubae003  
  • “In summary, AI poses challenges for applying tort principles. Because it is primarily plaintiffs who will struggle, liability worries may be outsized during this period of adolescence for software-related tort doctrine. However, we believe that this situation cannot hold. Tort doctrine will evolve to address needs arising from technological changes, as it has historically.”
    Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
    Michelle M. Mello, J.D., Ph.D., and Neel Guha, M.S.
    N Engl J Med 2024;390(3). January 18, 2024.
  • “Third, across all case clusters, the reluctance by courts to distinguish “AI” from “traditional” software suggests that rules or approaches that courts create in AI-related cases may have spillover effects on non-AI software (and vice versa), although technical differences may make them ill-suited to another type of model. For example, courts might relax requirements for proving design defects, although not all software models present opacity problems.”
    Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
    Michelle M. Mello, J.D., Ph.D., and Neel Guha, M.S.
    N Engl J Med 2024;390(3). January 18, 2024.
  • “A framework to support health care organizations and clinicians in assessing AI liability risk is provided in Figure 1. The framework incorporates our findings regarding how courts evaluate claims related to software errors and broadens the lens to include assessment of the likelihood that claims will be brought. Drawing on previous conceptual work in safety science and malpractice claiming dynamics, we conceptualized risk as a function of the following four factors: the likelihood and nature of model errors, the likelihood that humans or another system will detect the errors and prevent harm, the potential harm if errors are not caught, and the likelihood that injuries would garner compensation in the tort system.”
    Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
    Michelle M. Mello, J.D., Ph.D., and Neel Guha, M.S.
    N Engl J Med 2024;390(3). January 18, 2024.
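
Mello and Guha's four-factor framework is qualitative, but its structure can be made concrete for local triage of AI tools. The toy rubric below is our illustration only: the 1-to-3 scales, the multiplicative combination, and the tier cutoffs are assumptions, not the authors' instrument.

```python
from dataclasses import dataclass

@dataclass
class AiLiabilityRisk:
    """Toy rubric over the four factors named by Mello & Guha.

    Each factor is scored 1 (low) to 3 (high) by a local review
    committee; scales and combination rule are illustrative only.
    """
    error_likelihood: int   # likelihood and nature of model errors
    escape_likelihood: int  # chance errors slip past humans or other systems
    harm_if_uncaught: int   # potential patient harm if errors are not caught
    compensability: int     # likelihood an injury garners tort compensation

    def score(self) -> int:
        return (self.error_likelihood * self.escape_likelihood
                * self.harm_if_uncaught * self.compensability)

    def tier(self) -> str:
        s = self.score()
        return "high" if s >= 27 else "moderate" if s >= 9 else "low"

# Example: an autonomous triage tool with little opportunity for human review
tool = AiLiabilityRisk(error_likelihood=2, escape_likelihood=3,
                       harm_if_uncaught=3, compensability=2)
print(tool.tier())  # -> "high": allocate intensive safety monitoring
```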
  • “While awaiting clarification of how tort doctrine will evolve to address AI, health care organizations and clinicians can take several steps to manage liability uncertainty. One such step is to resist the temptation to lump all applications of AI together. Adoption decisions and postdeployment monitoring should reflect the fact that some tools are riskier than others. When tools have the hallmarks of high liability risk that we have identified (e.g., low opportunity to catch the error, high potential for patient harm, and unrealistic assumptions about clinician behavior), organizations should expect to allocate substantial time and resources to safety monitoring and gather considerable information from model developers and implementation teams. In contrast, for lower-risk tools, organizations may be able to apply more generalized, lower-touch monitoring.”
    Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
    Michelle M. Mello, J.D., Ph.D., and Neel Guha, M.S.
    N Engl J Med 2024;390(3). January 18, 2024.
  • “When models are developed in house, there is no external developer to assume legal obligations; having adequate insurance is therefore critical. Professional liability insurers may impose coverage exclusions for AI-related injuries, and cyber policies may cover only economic losses, not physical injuries. Organizations should ensure that their coverage is not limited in these ways and is deep enough to cover worst-case scenarios in which a systematic error affects many patients.”
    Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
    Michelle M. Mello, J.D., Ph.D., and Neel Guha, M.S.
    N Engl J Med 2024;390(3). January 18, 2024.
  • “Health care organizations should also anticipate the evidentiary problems that may arise in AI litigation. AI models may be frequently updated in order to account for distribution shift, yet litigation will require that parties be able to reproduce past predictions. Our reviewed cases included instances in which failure to appropriately track software versions or types prolonged litigation. Model inputs, outputs, and versions should be documented at the time of care, along with the reasons that clinicians followed or departed from model recommendations.”
    Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
    Michelle M. Mello, J.D., Ph.D., and Neel Guha, M.S.
    N Engl J Med 2024;390(3). January 18, 2024.
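
The documentation practice described above (model inputs, outputs, and versions captured at the time of care, plus the clinician's reasons for following or departing from the recommendation) maps naturally onto a structured audit record. A minimal sketch follows; the schema and field names are invented for illustration, not a published standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AiUseRecord:
    """One audit entry per AI-assisted decision, written at the time of care.

    Captures what Mello & Guha note litigation will demand: the exact
    model version, its inputs and outputs, and whether (and why) the
    clinician followed or departed from the recommendation.
    """
    model_name: str
    model_version: str   # exact deployed version, never "latest"
    inputs_ref: str      # pointer to an immutable snapshot of the inputs
    output: str          # the recommendation as shown to the clinician
    clinician_action: str  # "followed" or "departed"
    rationale: str       # documented reason for the action
    timestamp: str = ""

    def to_json(self) -> str:
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

# Hypothetical usage
record = AiUseRecord(
    model_name="pe-triage", model_version="2.4.1",
    inputs_ref="pacs-study-reference", output="PE suspected, priority read",
    clinician_action="followed", rationale="finding confirmed on review",
)
print(record.to_json())
```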
  • “It is also useful for health care organizations to recognize that the defense of AI cases may require different expertise than what malpractice defense counsel are accustomed to needing. Our case review suggests the question of who qualifies as a health care AI expert is far from settled. In addition to cultivating relationships with expert witnesses in computer science, counsel will need to develop sufficient familiarity with AI methods to be able to quarterback a legal defense.”
    Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
    Michelle M. Mello, J.D., Ph.D., and Neel Guha, M.S.
    N Engl J Med 2024;390(3). January 18, 2024.
  • “It also may be prudent to inform patients when AI models are used in diagnostic or treatment decisions. In evaluating claims alleging breach of informed consent, many jurisdictions apply a patient-centered standard to decide what constitutes material information that should have been disclosed, and unlike with other software, surveys indicate a majority of U.S. residents feel uncomfortable about AI being used in their care. If use of AI is documented in the medical record, it will come to light during litigation; disclosure to patients reduces the risk that plaintiffs will add informed-consent claims in response. A reasonable disclosure might include what function the model serves, what shortcomings are known, how the team uses output in light of shortcomings, and why they believe that its use improves care.”
    Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
    Michelle M. Mello, J.D., Ph.D., and Neel Guha, M.S.
    N Engl J Med 2024;390(3). January 18, 2024.
  • “There are lessons to be learned from the past that are relevant to the future of AI. First, the history of human-subjects regulation shows that a core decision to be made relates to the role of professions in guiding or replacing government regulations. It will be important to focus on discussions of who, specifically, should have authority to establish and enforce rules for AI, with public values in mind. Second, attention to data ethics, including questions of how strenuously to regulate data collection and ownership, will be key to robust AI regulation. Third, the history of human-subjects regulation shows that for any fast-moving area of science, anticipating and planning for rule revision is necessary. AI’s emerging properties and new use cases warrant clear, built-in mechanisms to allow speedy regulatory updates made with meaningful public input to support science, medicine, and social justice.”
    Medicine’s Lessons for AI Regulation
    Laura Stark
    N Engl J Med 2023;389(24).
  • “The capacity of AI is rapidly evolving — as are public concerns about norms of use, corporate accountability, and effects on global security, labor, climate, and other areas. The history of human subjects research suggests that it will be important to keep rules for AI as nimble as the science they regulate. Federal agencies, rather than Congress, typically lead the way in updating regulations using a process, known as retrospective review, that is conducted when demanded by stakeholders. Yet regulation of AI is best envisioned as an ongoing project, to ensure that new rules emerge alongside new scientific possibilities and political contexts.”
    Medicine’s Lessons for AI Regulation
    Laura Stark
    N Engl J Med 2023;389(24).
  • “There are clear opportunities ahead, but there is a need for a path forward to guide researchers. If these efforts are successful, every publicly funded project will have two equally important goals: first, to accomplish its research aims of collecting and analyzing data and reporting results to advance science, and second, to produce data that other investigators can use to replicate findings and produce new insights, thereby accelerating and maximizing the impact of U.S. government funding of science.”
    Data Sharing — A New Era for Research Funded by the U.S. Government
    Joseph S. Ross, M.D., M.H.S., Joanne Waldstreicher, M.D., and Harlan M. Krumholz, M.D.
    N Engl J Med 2023 (in press)
  • Data-sharing efforts haven’t been entirely successful, however. Many journals now require data-sharing statements in published articles, but studies have shown that authors rarely honor these statements and data generally aren’t made available upon request. Moreover, although data from thousands of clinical trials are now available on data-sharing platforms, approval rates for data-use requests and rates of approved requests leading to published studies have both varied. In addition, not all platforms have transparent policies regarding data use. The variety of data-sharing approaches offers opportunities for the NIH and other federal agencies to learn from previous experience and be successful in their efforts to promote data sharing; ensure transparency, ease of use of data-sharing tools, and efficient resource allocation; and maximize the likelihood of successful completion and dissemination of research using shared data.
    Data Sharing — A New Era for Research Funded by the U.S. Government
    Joseph S. Ross, M.D., M.H.S., Joanne Waldstreicher, M.D., and Harlan M. Krumholz, M.D.
    N Engl J Med 2023 (in press)
  • “Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between developer and clinician mental models of XAI, including opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets. Our study highlights the importance of considering the perspectives of both developers and clinicians in the design of XAI systems and provides practical recommendations for improving the effectiveness and usability of XAI in healthcare.”
    Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals
    Nadine Bienefeld, Jens Michael Boss, Rahel Lüthy  
    npj Digital Medicine (2023) 6:94 ; https://doi.org/10.1038/s41746-023-00837-4
  • “Publications on artificial intelligence (AI) and machine learning (ML) in medicine have quintupled in the last decade. However, implementation of these systems into clinical practice lags behind due to a lack of trust and system explainability. Solving the explainability conundrum in AI/ML (XAI) is considered the number one requirement for enabling trustful human-AI teaming in medicine; yet, current efforts consisting of complex mathematical methodologies (e.g., ante-hoc or post-hoc procedures) are unlikely to increase clinicians’ trust and practical understanding. Furthermore, it is still unclear if and how clinicians and system developers interpret XAI (differently), and whether designing such systems in healthcare is achievable or even desirable.”
    Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals
    Nadine Bienefeld, Jens Michael Boss, Rahel Lüthy  
    npj Digital Medicine (2023) 6:94 ; https://doi.org/10.1038/s41746-023-00837-4 
  • “Most private and community radiology practices have a good working relationship with their hospitals but are financially independent. This dichotomy makes a hybrid model between the health system and the radiologists most likely to be effective. Well-defined governance structures for AI development, purchase, and implementation in private and community practice are less prevalent than in academic practices. However, as adoption of AI in the community becomes more widespread, structured AI oversight within these radiology practices will be equally important. Results of the American College of Radiology 2019 radiologist workforce survey demonstrated less than 17% of radiology group practices are part of academic university practices, with the majority of the remaining practices falling into the categories of private practice (47%), multispecialty clinic (12%), and hospital-based practice and corporate practice (4%).”
    Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How?
    Dania Daye et al.
    Radiology 2022; 000:1–9 
  • “Public databases are an important resource for machine learning research, but their growing availability sometimes leads to “off-label” usage, where data published for one task are used for another. This work reveals that such off-label usage could lead to biased, overly optimistic results of machine-learning algorithms. The underlying cause is that public data are processed with hidden processing pipelines that alter the data features. Here we study three well-known algorithms developed for image reconstruction from magnetic resonance imaging measurements and show they could produce biased results with up to 48% artificial improvement when applied to public databases. We relate to the publication of such results as implicit “data crimes” to raise community awareness of this growing big data problem.”
    Implicit data crimes: Machine learning bias arising from misuse of public data
    Efrat Shimron et al.
    PNAS 2022 Vol. 119 No. 13 e2117203119 
  • “In summary, this research aims to raise a red flag regarding naive off-label usage of open-access data in the development of machine-learning algorithms. We showed that such usage may lead to biased results of inverse problem solvers. Furthermore, we demonstrated that training MRI reconstruction algorithms using such data could yield an overly optimistic evaluation of their abil- ity to reconstruct small, clinically relevant details and pathology. This increases the risk of translation of biased algorithms into clinical practice. Therefore, we call for attention of researchers and reviewers: Data usage and pipeline adequacy should be consid- ered carefully, reproducible research should be encouraged, and research transparency should be required. Through this work, we hope to raise community awareness, stimulate discussions, and set the ground for future studies of data usage.”
    Implicit data crimes: Machine learning bias arising from misuse of public data
    Efrat Shimron et al.
    PNAS 2022 Vol. 119 No. 13 e2117203119 
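
The mechanism Shimron et al. describe can be reproduced in miniature. In the sketch below (our toy, not the paper's MRI experiments), a "public" 1-D signal was zero-padded in k-space before release, a hidden cosmetic step; the same naive low-frequency reconstruction then scores better on the public version than on genuinely raw data, because a fixed sampling fraction of the padded spectrum captures twice the real information.

```python
import numpy as np

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean(np.abs(ref - est) ** 2)
    return 10 * np.log10(np.abs(ref).max() ** 2 / mse)

def lowpass_recon(x, keep_frac=0.25):
    """Naive reconstruction from a low-frequency subset of the spectrum:
    drop the highest frequencies, inverse-transform (zero-filling)."""
    k = np.fft.fftshift(np.fft.fft(x))
    n = x.size
    kept = np.zeros_like(k)
    lo = int(n / 2 - keep_frac * n / 2)
    hi = int(n / 2 + keep_frac * n / 2)
    kept[lo:hi] = k[lo:hi]
    return np.fft.ifft(np.fft.ifftshift(kept)).real

# "Raw" signal with sharp structure (a small lesion-like blip)
raw = np.zeros(128)
raw[40:80] = 1.0
raw[60:63] = 2.0

# Hidden pipeline: the public dataset was zero-padded in k-space
# (bandlimited upsampling) before release.
k = np.fft.fftshift(np.fft.fft(raw))
k_padded = np.pad(k, 64)                                  # 128 -> 256 bins
public = np.fft.ifft(np.fft.ifftshift(k_padded)).real * 2  # rescale amplitude

print("PSNR, raw data:    %.1f dB" % psnr(raw, lowpass_recon(raw)))
print("PSNR, public data: %.1f dB" % psnr(public, lowpass_recon(public)))
# The same 25% sampling budget looks better on the public data: its upper
# half of k-space is all zeros, so "25% of frequencies" actually captures
# 50% of the real information - an inflated score that would not survive
# contact with genuinely raw data.
```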
  • AI and Liability
    - Who is responsible for the accuracy of an AI system when it makes an error?
    - What is the liability of the radiologist when using AI?
    - What is the liability of the health system that purchases an AI product?
  • “Developers of health care AI products face the risk of product liability lawsuits when their products injure patients, whether injuries arise from defective manufacturing, defective design, or failure to warn users about mitigable dangers. Physicians may also face risks from patient injuries stemming from the use of AI, including faulty recommendations or inadequate monitoring. Similarly, hospitals or health systems may face liability as coordinating providers of health care or on the basis of inadequate care in supplying AI tools — an analogy to familiar forms of medical liability for providing inadequate facilities or negligently credentialing a physician practicing at the hospital. Such risks may reduce incentives to adopt AI tools.”
    AI Insurance: How Liability Insurance Can Drive the Responsible Adoption of Artificial Intelligence in Health Care  
    Ariel Dora Stern et al.  
    NEJM Catalyst Vol. 3 No. 4 | April 2022  DOI: 10.1056/CAT.21.0242 
  • "AI liability insurance would reduce the liability risk to developers, physicians, and hospitals. Insurance is a tool for managing risk, allowing the insurance policy holders to benefit from pooling risk with others. Insurance providers are intermediaries that play an organizing role in creating these pools and performing actuarial assessment of associated risks. While many types of insurance exist in the health care context, our focus in this article is entirely on AI liability insurance rather than coverage for health care services.”
    AI Insurance: How Liability Insurance Can Drive the Responsible Adoption of Artificial Intelligence in Health Care  
    Ariel Dora Stern et al.  
    NEJM Catalyst Vol. 3 No. 4 | April 2022  DOI: 10.1056/CAT.21.0242 
  • "The credentialing function of insurance will thus reinforce the patient-centered incentives of AI developers Consequently, this insurance may alleviate health care provider concerns, at least to the point at which they are willing to adopt the AI technology. Indeed, this should be the case regardless of whether the AI manufacturer or the health care provider is the holder of the insurance policy, as long as such a policy can be purchased. However, the price and implicit value of insurance are likely to be passed through. For example, a manufacturer selling an AI tool that comes with liability insurance will be able to command a higher price than for the same tool without such insurance. Insurers may also require ongoing performance data from AI developers, whether they are in house or commercial; such data could be well beyond those needed to meet the requirements of regulatory premarket review.28 While insurers do not provide the same level of centralized review that regulators do, they may well serve a more context-sensitive, hands-on evaluative role focused on both quantifying and reducing risk — a role that may be especially important given the questionable generalizability of many current-generation AI systems.”
    AI Insurance: How Liability Insurance Can Drive the Responsible Adoption of Artificial Intelligence in Health Care  
    Ariel Dora Stern et al.  
    NEJM Catalyst Vol. 3 No. 4 | April 2022  DOI: 10.1056/CAT.21.0242 
  • “Proponents of artificial intelligence (“AI”) technology have suggested that in the near future, AI software may replace human radiologists. While AI’s assimilation into the specialty has occurred more slowly than predicted, developments in machine learning, deep learning, and neural networks suggest that technological hurdles and costs will eventually be overcome. However, beyond these technological hurdles, formidable legal hurdles threaten AI’s impact on the specialty. Legal liability for errors committed by AI will influence AI’s ultimate role within radiology and whether AI remains a simple decision support tool or develops into an autonomous member of the healthcare team.”
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224
  • “Additional areas of uncertainty include the potential application of products liability law to AI, and the approach taken by the U.S. FDA in potentially classifying autonomous AI as a medical device. The current ambiguity of the legal treatment of AI will profoundly impact autonomous AI development given that vendors, radiologists, and hospitals will be unable to reliably assess their liability from implementing such tools. Advocates of AI in radiology and health care in general should lobby for legislative action to better clarify the liability risks of AI in a way that does not deter technological development.”
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224 
  • "Duplicating radiologists’ abilities through technology has proven more of a challenge than originally posited, with resultant skepticism regarding AI’s ultimate impact on the field, at least for the near term. Technological hurdles and costs will fall, and it is only a matter of time until machines can offer a reasonable facsimile of the radiologist report. However, even beyond these technological hurdles, formidable legal obstacles, often not given enough attention in the literature, threaten AI’s impact on the specialty and, if unchanged, have the potential to preclude the future success of this emerging industry.”
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224 
  • “Fundamentally, the legal handling of AI will hinge on the degree of autonomy exercised by the AI software. If the primary use of AI is simply as a decision support tool to highlight findings for the radiologist, who thereafter makes the final determinations and issues a report, the issues are quite simple. The radiologist who makes the final determination bears the liability risk.”
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224 
  • "A radiologist breaches this duty when the expected standard of care is not met. The standard of care is the degree of care that a “reasonably prudent radiologist” would be expected to exercise under the same or similar circumstances. The issue of liability is one of reasonableness: what would a reasonably prudent radiologist do in this situation? This standard of care will largely be established in the context of the courtroom using expert witness testimony, whereby other radiologists opine as to what, in their professional opinion, would be a reasonable action in this situation.”  
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224 
  • "But how does medical malpractice even work in the setting of an autonomous algorithm? Is there a similar physician-patient relationship when the “physician” is an algorithm? How is an AI algorithm held to the “reasonably prudent radiologist” (or perhaps “reasonably prudent algorithm”) standard, and who could serve as expert witness to determine this standard? Is there a different standard of care or expectation for an algorithm, and does the expectation change if the algorithm is performing tasks that go beyond the capabilities of the typical human radiologist (e.g., predicting optimum therapy options or responses based on imaging or genomic lesion characterization)? Ultimately, the facility hosting the AI likely would bear liability, and malpractice principles would no longer be applicable or even defensible; the circumstance would essentially become a form of “enterprise” liability.”
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224 
  • "An injured patient tends to be a sympathetic witness in the eyes of a jury, whereas an AI algorithm would be unsympathetic; faceless emotionless robots make for very bad defendants. A skilled plaintiff’s attorney would elicit a mental image of machines running amok, including cold passionless robots making life and death judgments; jurors, inclined to fear technology from a lifetime of science fiction dystopias, would likely “throw the book” at the defendant. The idea that a medical center would replace a caring and compassionate doctor with a robot such as HAL 9000 to maximize revenue would not play well to a jury.”
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224 
  • “Debate is ongoing regarding the appropriate integration of AI tools with human decision-makers (including non-radiologists), the risks of ignoring AI outputs as AI use becomes the standard of care, and potential issues in overreliance on AI tools that may be relevant to liability. AI law remains in its early stages, and ongoing uncertainty is present regarding the manner in which courts will allocate liability for AI mistakes in radiology and the impact that such costs may have on AI development. Proponents of AI should recognize the legal system’s complexities and hurdles.”
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224 
  • "AI is undergoing rapid integration into radiology practice, driven by the appeal of improvements in diagnostic accuracy, cost effectiveness, and savings. While the legal implications of simple applications of AI as a radiology tool are overall straightforward, the legal ramifications of greater AI autonomy are thus far incompletely delineated. Current technological hurdles to the integration of advanced AI solutions into radiology practice will gradually be overcome. However the accompanying legal hurdles and complexities are substantial and, depending on how they are handled, could lead to untapped technological potential.”
    Is Artificial Intelligence (AI) a Pipe Dream? Why Legal Issues Present Significant Hurdles to AI Autonomy  
    Jonathan L. Mezrich, MD, JD, LLM, MBA  
    AJR 2022 Feb 9 [published online]. Accepted manuscript. doi:10.2214/AJR.21.27224 
  • "The FDA has recently approved software by AIDoc Medical (Tel Aviv, Israel) as well as Zebra Medical Vision (Shefayim, Israel) that automatically detects pulmonary embolisms in chest CTs. As described by Weikert et al., the work by AIDoc was based on a compiled dataset of 1499 CT pulmonary angiograms with corresponding reports that was then tested on four trained prototype algorithms. The algorithm that achieved optimal results was shown to have a sensitivity of 93% and a specificity of 95%, with a positive predictive value of 77%.”
    The first use of artificial intelligence (AI) in the ER: triage not diagnosis
    Edmund M. Weisberg, Linda C. Chu, Elliot K. Fishman
    Emergency Radiology (2020) 27:361–366
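
The operating point quoted above (sensitivity 93%, specificity 95%, PPV 77%) implicitly reveals the test-set disease prevalence, since PPV is a function of all three. A back-of-envelope check (ours, not the paper's):

```python
def ppv(sens, spec, prev):
    """Positive predictive value from sensitivity, specificity, prevalence."""
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    return tp / (tp + fp)

# Solve for the prevalence implied by the reported numbers
# (sens 93%, spec 95%, PPV 77%) by bisection; PPV rises with prevalence.
lo, hi = 1e-6, 1 - 1e-6
for _ in range(60):
    mid = (lo + hi) / 2
    if ppv(0.93, 0.95, mid) < 0.77:
        lo = mid
    else:
        hi = mid
print("implied PE prevalence: %.1f%%" % (100 * mid))        # ~15%

# At a lower, hypothetical screening prevalence of 5%, the same
# sensitivity and specificity yield a much weaker PPV:
print("PPV at 5%% prevalence: %.0f%%" % (100 * ppv(0.93, 0.95, 0.05)))  # ~49%
```

The implied ~15% prevalence suggests an enriched validation set; at lower real-world prevalence the same algorithm produces many more false positives per true positive, one reason triage rather than autonomous diagnosis is the natural first use case.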
  • "Cerebral hemorrhage is a key category of emergent diagnoses in which AI is making inroads. The FDA has approved AI software applications by AIDoc, Zebra Medical, and MaxQ designed to detect intracranial bleeds. The initial goal of the software is to improve workflow for radiologists (and our patients), and facilitate the triage process to improve the chances that cases with bleeds are read earlier in radiologic review. Supporting the phrase “time is brain,” this is an ideal use of AI and deep learning.”
    The first use of artificial intelligence (AI) in the ER: triage not diagnosis
    Edmund M. Weisberg, Linda C. Chu, Elliot K. Fishman
    Emergency Radiology (2020) 27:361–366
  • "Acknowledging the effects of high imaging volumes on wait times for radiograph reviews, Taylor et al. conducted a large retrospective study to annotate a substantial dataset of pneumothorax-containing chest X-rays (ultimately, 13,292 frontal chest X-rays, 3107 of which included pneumothorax) to use to train deep CNNs to evaluate for possible emergent pneumothorax upon acquisition of the image. The investigators succeeded in developing models that can yield high- specificity screening of moderate or large pneumothoraces in cases where human review may be affected by scheduling, but the algorithm notably fails to detect small and some larger pneumothoraces.”
    The first use of artificial intelligence (AI) in the ER: triage not diagnosis
    Edmund M. Weisberg, Linda C. Chu, Elliot K. Fishman
    Emergency Radiology (2020) 27:361–366
  • "The ability to triage patients and take care of acute processes such as intracranial bleed, pneumothorax, and pulmonary embolism will largely benefit the health system, improving patient care and reducing costs. In the end, our mission is the care of our patients, and if AI can improve it, we will need to adopt it with open arms.”
    The first use of artificial intelligence (AI) in the ER: triage not diagnosis
    Edmund M. Weisberg, Linda C. Chu, Elliot K. Fishman
    Emergency Radiology (2020) 27:361–366
  • “Rationale and Objectives: Generative adversarial networks (GANs) are deep learning models aimed at generating fake, realistic-looking images. These novel models made a great impact on the computer vision field. Our study aims to review the literature on GANs applications in radiology.
    Conclusion: GANs are increasingly studied for various radiology applications. They enable the creation of new data, which can be used to improve clinical care, education and research.”
    Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
    Vera Sorin et al.
    Acad Radiol 2020 (in press)
  • “Generative adversarial networks (GANs) are a more recent deep learning development, invented by Ian Goodfellow and colleagues. GAN is a type of deep learning model that is aimed at generating new images. GANs are now at the center of public attention due to “deepfake” digital media manipulations. This technique uses GANs to generate artificial images of humans. As an example, this webpage uses GAN to create random fake pictures of non-existent people.”
    Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
    Vera Sorin et al.
    Acad Radiol 2020 (in press)
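
For readers new to GANs, the adversarial game Goodfellow introduced fits in a few lines. The PyTorch toy below learns a 1-D Gaussian rather than images; the architecture and hyperparameters are illustrative choices, not from the review.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(3, 0.5) while a
# discriminator learns to tell real samples from generated ones.
real_dist = lambda n: 3.0 + 0.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = real_dist(64)
    fake = G(torch.randn(64, 8))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator into calling fakes real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print("generated mean %.2f, std %.2f (target 3.00, 0.50)"
      % (samples.mean(), samples.std()))
```

The same two-network game, scaled up to convolutional architectures, underlies the radiology applications the review surveys, including the data-augmentation results described in the next excerpt.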
  • "Deep learning can improve diagnostic imaging tasks in radiology enabling segmentation of images, improvement of image quality, classification of images, detection of findings, and prioritization of examinations according to urgent diagnoses. Successful training of deep learning algorithms requires large-scale data sets. However, the difficulty of obtaining sufficient data limits the development and implementation of deep learning algorithms in radiology. GANs can help to overcome this obstacle. As dem- onstrated in this review, several studies have successfully trained deep learning algorithms using augmented data generated by GANs. Data augmentation with generated images significantly improved the performance of CNN algorithms. Furthermore, using GANs can reduce the amount of clinical data needed for training. The increasing research focus on GANs can therefore impact successful automatic image analysis in radiology.”
    Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
    Vera Sorin et al.
    Acad Radiol 2020 (in press)
  • "Some risks are involved with the development of GANs. In a recent publication Mirski et al. warn against hacking of imaging examinations, artificially adding or removing medical conditions from patient scans. Also, using generated images in clinical practice should be done with caution, as the algorithms are not without limitations. For example, in image reconstruction details can get lost at translation, while fake inexistent details can suddenly appear.”
    Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
    Vera Sorin et al.
    Acad Radiol 2020 (in press)
  • “The medico-legal issue that then arises is the question of “who is responsible for the diagnosis,” especially if it is wrong. Whether data scientists or manufacturers involved in development, marketing, and installation of AI systems will carry the ultimate legal responsibility for adverse outcomes arising from AI algorithm use is a difficult legal question; if doctors are no longer the primary agents of interpretation of radiological studies, will they still be held accountable?”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2 
  • “If radiologists monitor AI system outputs and still have a role in validating AI interpretations, do they still carry the ultimate responsibility, even though they do not understand, and cannot interrogate, the precise means by which a diagnosis was determined? This “black box” element of AI poses many challenges, not least to the basic human need to understand how and why important decisions were made.”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2 
  • “Furthermore, if patient data are used to build AI products which go on to generate profit, consideration needs to be given to the issue of intellectual property rights. Do the involved patients and the collecting organizations have a right to share in the profits that derive from their data?”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2 
  • “Fundamentally, each patient whose data is used by a third party should provide consent for that use, and that consent may need to be obtained afresh if the data is re-used in a different context (e.g., to train an updated software version). Moreover, ownership of imaging datasets varies from one jurisdiction to another. In many countries, the ultimate ownership of such personal data resides with the patient, although the data may be stored, with consent, in a hospital or imaging centre repository.”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2 
  • The real challenge is not to oppose the incorporation of AI into professional lives (a futile effort) but to embrace the inevitable change of radiological practice, incorporating AI in the radiological workflow. The most likely danger is that “[w]e’ll do what computers tell us to do, because we’re awestruck by them and trust them to make important decisions”
    What the radiologist should know about artificial intelligence – an ESR white paper
    Insights into Imaging (2019) 10:44 https://doi.org/10.1186/s13244-019-0738-2 
