Imaging Pearls ❯ Deep Learning ❯ AI and Patient Care (CTisus)


  • “We found that experience-based radiologist characteristics, including years of experience, subspecialty in thoracic radiology and experience with AI tools, did not serve as reliable predictors of treatment effect, in terms of both calibration performance and discrimination performance. These findings challenge the associations between experience-based radiologist characteristics and the treatment effect of AI assistance reported in previous research. The observed variability could be attributed to our larger and more diverse sample size, encompassing 140 radiologists with varying skill levels, experiences and preferences. Additionally, our study’s inclusion of a wide range of diagnostic tasks enables a robust examination of the complex factors influencing the treatment effect. Furthermore, the performance characteristics and quality of the specific AI system may play an important role, highlighting the need for developers to consider these factors when deploying AI assistance. To optimize the implementation of AI assistance, a comprehensive assessment of multiple factors, including the clinical task, patient population and AI system, is essential.”
    Heterogeneity and predictors of the effects of AI assistance on radiologists
    Feiyang Yu et al.
    Nature Medicine | Volume 30 | March 2024 | 837–849
  • “In conclusion, our study underscores the need for individualized approaches that are aware of clinician heterogeneity, high-quality AI models and comprehensive assessments of multiple factors to optimize the implementation of AI assistance in clinical medicine. Collaboration between clinicians and AI developers, focusing on personalized strategies and continuous improvement of AI models, will be essential for achieving the full potential of clinician–AI collaboration in healthcare.”
    Heterogeneity and predictors of the effects of AI assistance on radiologists
    Feiyang Yu et al.
    Nature Medicine | Volume 30 | March 2024 | 837–849
  • “Artificial Intelligence (AI) has emerged as a transformative force within medical imaging, making significant strides within emergency radiology. Presently, there is a strong reliance on radiologists to accurately diagnose and characterize foreign bodies in a timely fashion, a task that can be readily augmented with AI tools. This article will first explore the most common clinical scenarios involving foreign bodies, such as retained surgical instruments, open and penetrating injuries, catheter and tube malposition, and foreign body ingestion and aspiration. By initially exploring the existing imaging techniques employed for diagnosing these conditions, the potential role of AI in detecting non-biological materials can be better elucidated. Yet, the heterogeneous nature of foreign bodies and limited data availability complicates the development of computer-aided detection models. Despite these challenges, integrating AI can potentially decrease radiologist workload, enhance diagnostic accuracy, and improve patient outcomes.”
    Artificial intelligence in the detection of non‑biological materials
    Liesl Eibschutz et al.
    Emergency Radiology (2024) 31:391–403
  • “Many authors note that the risk of this complication decreases if institutions follow the recommended perioperative and postoperative checklists and guidelines. Yet over 80% of operations noted to have RSB reported correct counts at the end of the case. As most RSB have standardized shapes and sizes, computer-aided detection (CAD) systems can be highly effective for identification.”
    Artificial intelligence in the detection of non‑biological materials
    Liesl Eibschutz et al.
    Emergency Radiology (2024) 31:391–403
  • “Despite AI’s enormous potential in foreign body detection, current applications have thus far been in research settings, often training and validating models on devised images such as those with cadavers or fusion images. Before the widespread deployment of AI systems, these models must be trialed on natural datasets to ensure real-world clinical utility and performance. Though significant legal hurdles surrounding liability and tort law remain that may limit AI’s potential use, the ongoing advancements in the field augment its clinical utility and potential.”
    Artificial intelligence in the detection of non‑biological materials
    Liesl Eibschutz et al.
    Emergency Radiology (2024) 31:391–403
  • “Despite these challenges, the advancements in AI technology, coupled with collective efforts to obtain diverse and comprehensive datasets, offer a promising trajectory for the future of medical imaging in foreign body analysis. Further, the integration of AI in clinical practice has the potential to alleviate radiologist workload, enhance their efficiency, and reduce diagnostic errors. As the field of medical imaging continues to progress, the collaboration between AI and radiology may ultimately enhance diagnostic precision and patient care.”  
    Artificial intelligence in the detection of non‑biological materials
    Liesl Eibschutz et al.
    Emergency Radiology (2024) 31:391–403
  • “Finally, clinical AI implementation in radiology practices is especially poised for high-value impact in resource-limited settings, such as in rural communities across the globe with few radiologists. Firsthand experiences from Elahi et al and Ciecierski-Holmes et al demonstrate that the major challenges to clinical AI implementation unique to these environments are more related to establishing appropriate digital infrastructure and support networks to enable AI-assisted clinical medicine. In the settings where human experts are limited, AI diagnostic tools have crucial roles in interpreting patient data that is either read by machines or not read at all.”
    Strategies for Implementing Machine Learning Algorithms in the Clinical Practice of Radiology
    Allison Chae, et al.
    Radiology 2024; 310(1):e223170
  • “We envision that a regulatory strategy focused on patient outcomes is only the first step and that regulators will learn much about how AI tools can be used to address health equity, access, safety, and other regulatory benchmarks from early outcome-centric evaluations. In response to the White House executive order, we urge rule makers to curtail process-centric regulations that could impede AI progress and, instead, champion outcomes to advance AI for the betterment of people’s lives.”
    Regulate Artificial Intelligence in Health Care by Prioritizing Patient Outcomes.  
    Ayers JW, Desai N, Smith DM.  
    JAMA. 2024 Jan 29. doi: 10.1001/jama.2024.0549. 
  • “We recognize that our proposed outcome-centric regulatory pathway can be a significant impediment for AI industry partners and regulators alike. Perhaps a dedicated agency will be needed to facilitate the evaluation of proposed clinical AI tools. This agency’s mission would be to shepherd AI developers through a process of rigorously demonstrating clinical value for patients by creating new rules that guide digital health trial registry, trial standards, and approval mechanisms. Compared with drug trials, AI tools, because of their digital delivery, can be evaluated relatively swiftly within multiple health care centers that reflect diverse patient populations, forgoing the necessity of an iterative study phase design typical of drug trials.”
    Regulate Artificial Intelligence in Health Care by Prioritizing Patient Outcomes.  
    Ayers JW, Desai N, Smith DM.  
    JAMA. 2024 Jan 29. doi: 10.1001/jama.2024.0549. 
  • “Previous theoretical work and empirical studies of AI in primary care have focused on 3 main use cases. First, experts have suggested that AI can improve clinical care by suggesting diagnoses or treatment based on available patient data. Second, AI can provide automated interpretation of complex data, such as images and skin lesions, and these suggestions may increasingly influence the practice of primary care. Third, AI has been applied to process EHR data to suggest the addition of diagnostic codes for billing, although with variable accuracy. With advances in technology, experts suggest that these types of applications will become seamless and widespread. However, we believe that these use cases are only a small part of AI application.”
    Using Artificial Intelligence to Improve Primary Care for Patients and Clinicians.  
    Sarkar U, Bates DW.  
    JAMA Intern Med. 2024 Feb 12. doi: 10.1001/jamainternmed.2023.7965. Epub ahead of print. PMID: 38345801.
  • “If applied broadly to the challenges of primary care, generative AI has the potential to not only incrementally improve primary care but transform it. We believe AI can reduce many of the everyday challenges experienced by primary care clinicians as they strive to deliver high-quality care. We see 4 specific types of primary care work that are all pain points and could benefit from AI: (1) inbox management, (2) documentation, (3) between-visit panel management, and (4) decision support for diagnosis and treatment.”
    Using Artificial Intelligence to Improve Primary Care for Patients and Clinicians.  
    Sarkar U, Bates DW.  
    JAMA Intern Med. 2024 Feb 12. doi: 10.1001/jamainternmed.2023.7965. Epub ahead of print. PMID: 38345801.
  • “Similarly, widespread EHR implementation has increased the time that clinicians spend on documentation, which has driven clinician dissatisfaction. Artificial intelligence can use “ambient listening” to capture the content of encounters and generate preliminary notes for visits, which can then be edited by clinicians. Based on their relationships with patients, clinicians can then add relevant information from patient comments, body language, and speech tone as well as their own clinical reasoning, increasing both the efficiency and the clinical value of documentation.”  
    Using Artificial Intelligence to Improve Primary Care for Patients and Clinicians.  
    Sarkar U, Bates DW.  
    JAMA Intern Med. 2024 Feb 12. doi: 10.1001/jamainternmed.2023.7965. Epub ahead of print. PMID: 38345801.
  • “Another challenge of primary care is delivering care between visits, including timely ordering and follow-up of needed screening and ongoing monitoring of chronic conditions. Artificial intelligence can relieve some of this burden from physicians, incorporate diagnosis-specific feedback from patients, suggest certain monitoring tests, and make the whole process feel more continuous for patients. The AI tools can also summarize a patient’s clinical course between one visit and the next, permitting primary care clinicians to efficiently keep up with evolving disease processes and life circumstances.”
    Using Artificial Intelligence to Improve Primary Care for Patients and Clinicians.  
    Sarkar U, Bates DW.  
    JAMA Intern Med. 2024 Feb 12. doi: 10.1001/jamainternmed.2023.7965. Epub ahead of print. PMID: 38345801.
  • “Of course, AI is not a panacea. Some suggestions will be wrong, and we advocate using AI as a tool to support primary care medicine rather than as a substitute for human clinicians’ acumen. Internal data security practices also must be sufficiently robust to safeguard patient privacy. Some have questioned whether the use of generative AI, in particular its use in responding to patient messages, will further attenuate the humanistic patient-clinician relationship. In response, we contend that automating selected aspects of primary care work frees time for the relationship building and tending that are essential to primary care. We expect that implementation of cutting-edge AI tools will require rigorous monitoring for safety and usefulness, with iterative adjustments that include input from frontline care teams, patients, and families.”
    Using Artificial Intelligence to Improve Primary Care for Patients and Clinicians.  
    Sarkar U, Bates DW.  
    JAMA Intern Med. 2024 Feb 12. doi: 10.1001/jamainternmed.2023.7965. Epub ahead of print. PMID: 38345801.
  • “Despite these challenges, we contend that primary care patients and clinicians need AI. The current practice of primary care is suboptimal for patients and no longer feasible for clinicians, and AI will improve its outcomes, efficiency, and hopefully, sustainability as a career path.”  
    Using Artificial Intelligence to Improve Primary Care for Patients and Clinicians.  
    Sarkar U, Bates DW.  
    JAMA Intern Med. 2024 Feb 12. doi: 10.1001/jamainternmed.2023.7965. Epub ahead of print. PMID: 38345801.
  • “These AI solutions offer great potential benefit but only if thoughtfully developed with the needs of frontline clinicians in mind. We all use tools daily that leverage AI, such as map apps for route finding and websites for purchasing items. Now is the time for primary care clinicians to partner with informaticists and technology developers to build AI solutions that will make primary care attractive again by improving efficiency for both clinicians and patients and enabling the lower cost, more efficient, and safer care that is so badly needed.”
    Using Artificial Intelligence to Improve Primary Care for Patients and Clinicians.  
    Sarkar U, Bates DW.  
    JAMA Intern Med. 2024 Feb 12. doi: 10.1001/jamainternmed.2023.7965. Epub ahead of print. PMID: 38345801.
  • “Google entered a research partnership with the University of Chicago, including its medical center (UC), to develop an AI model that could predict significant medical events and reduce hospital readmissions. UC shared with Google “de-identified” EHR data from adult patients encountered between January 2010 and June 2016. These data still contained “dates of service” and “de-identified, free-text medical notes.” The data use agreement (DUA) prohibited Google from reidentifying patients. The DUA also granted UC “a nonexclusive, perpetual license to use the (…)Trained Models and Predictions” developed by Google “for internal noncommercial research purposes.”
    Health Care AI and Patient Privacy—Dinerstein v Google
    Mindy Nunez Duffourc, Sara Gerke
    JAMA Feb 2024 (in press)
  • “Hospitals that plan to share EHR data with private parties for AI development can take away the following lessons from the federal courts’ decisions in MD’s case. First, disclosure and privacy practices should accurately reflect any data-sharing activities that involve patients’ EHR data. Second, EHR data should be sufficiently deidentified according to HIPAA before sharing them with third parties for research purposes—ie, either through an expert determination that the reidentification risk is “very small” or by removing 18 specific identifiers (45 CFR §164.514[b]). They should consider using an independent committee that includes experts in ethics, statistics, computer science, and patients to assess certain uses and disclosures of deidentified datasets, including reidentification risks. Third, if hospitals decide to share PHI, including limited datasets, they should get prior written authorization from patients, institutional review board approval, and/or sign a DUA that adequately protects the data and prohibits reidentification. They should also carefully verify whether they really share—vs actually sell—the PHI under HIPAA.”
    Health Care AI and Patient Privacy—Dinerstein v Google
    Mindy Nunez Duffourc, Sara Gerke
    JAMA Feb 2024 (in press)
  • “Last, hospitals and physicians should be aware that the legal landscape surrounding privacy, particularly regarding sensitive personal information like health data, is in flux as concerns about the rapid growth of new technologies and big data mount. Google’s increasingly dominant control and influence over information on the internet only adds to these concerns. Additionally, because Google already obtains massive amounts of personal data, including EHR data stemming from research agreements with large medical centers, its status as “the principal purveyor of online health information” remains unchecked for now, which may have serious implications for patient privacy.”
    Health Care AI and Patient Privacy—Dinerstein v Google
    Mindy Nunez Duffourc, Sara Gerke
    JAMA Feb 2024 (in press)
  • “The promise of AI is alluring but it must be integrated into the clinical landscape with great intention to avoid harming the clinician-patient relationship, further overwhelming these clinicians, and causing undue harm to patients. This may be accomplished in a number of different ways. A few worth considering when developing AI for primary care include learning from the failures of electronic medical record incorporation into practice, optimizing the 5 key functions of primary care identified by the World Health Organization, and spending at least as much on proven improvements as on the unproven ones offered by AI.”
    The Perils of Artificial Intelligence in a Clinical Landscape
    Isabel Ostrer, MD; Louise Aronson, MD, MFA
    JAMA Internal Medicine Published online February 12, 2024 
  • “The ideal purveyors of AI for primary care would take a different approach to electronic medical record developers, prioritizing patients and clinicians, rather than billing, profits, and the priorities of health care systems with established patterns of primary care neglect. These purveyors would also consider how AI would function to address the World Health Organization’s 5 core functions: access to primary care, development of long-term personal clinician-patient relationships, augmentation of comprehensive care from prevention through palliation, better coordination across practitioners, systems, and time, and support of patient education and decision-making.”
    The Perils of Artificial Intelligence in a Clinical Landscape
    Isabel Ostrer, MD; Louise Aronson, MD, MFA
    JAMA Internal Medicine Published online February 12, 2024
  • “Uncertainty regarding the future of radiologists is largely driven by the emergence of artificial intelligence (AI). If AI succeeds, will radiologists continue to monopolize imaging services? As AI accuracy progresses with alacrity, radiology reads will be excellent. Some articles show that AI can make non-radiologists experts. However, eminent figures within AI development have expressed concerns over its possible adverse uses. Bad actors, not bad AI, may account for a future in which AI is not as successful as we might hope and, as some fear, even pernicious. More relevant to current predictions over the course of AI in medicine, and radiology in particular, is how the evolution of AI is often seen in a vacuum. We cannot predict the future with certainty. But as we contemplate the potential impact of AI in radiology, we should remember that radiology does not exist in a vacuum; while AI is changing, so is everything else.”
    The future of radiology and radiologists: AI is pivotal but not the only change afoot
    E.M. Weisberg and E.K. Fishman
    Journal of Medical Imaging and Radiation Sciences, https://doi.org/10.1016/j.jmir.2024.02.002
  • “The medical system, not to mention the world’s population, has been severely impacted by the global COVID-19 pandemic and numerous experts expect future worldwide pandemics. We cannot predict the condition of the healthcare system in two decades but may assume that radiology will likely remain critical in any future medical practice. For now, we should responsibly use all tools at our disposal (including AI) to make ourselves as indispensable as possible. Our best chances of remaining relevant and instrumental to patient care will likely hinge on our ability to lead the changes rather than be passively impacted by them.”
    The future of radiology and radiologists: AI is pivotal but not the only change afoot
    E.M. Weisberg and E.K. Fishman
    Journal of Medical Imaging and Radiation Sciences, https://doi.org/10.1016/j.jmir.2024.02.002 
  • “For the foreseeable future, radiology will continue to be an integral part of patient care. With newer scanners and novel technology, including AI, the impact of radiology on patient care, especially early diagnosis, will likely be stronger than ever. However, where the radiologist fits in is perhaps less clear. As radiologists push working from home in hybrid work environments one has to consider how this will affect the near future. For now, it is a good model, profitable, and successful. In the long term, one has to wonder if it is little more than a trojan horse and that it, too, will lead to less than a Disney happy ending, not even accounting for concerns related to the broader risks of AI.”  
    The future of radiology and radiologists: AI is pivotal but not the only change afoot
    E.M. Weisberg and E.K. Fishman
    Journal of Medical Imaging and Radiation Sciences, https://doi.org/10.1016/j.jmir.2024.02.002
  • “It might be an act of hubris to predict the status of radiology in 2035 given the weighty issues plaguing the world and, of course, the health of the public and healthcare systems. We can hope, though, that humanity manages to resolve these issues and improves the convoluted healthcare system in which radiology is, and should remain, an integral part. The future of radiologists? Who knows. For the time being, we should responsibly use all of the tools at our disposal (including AI) to make ourselves as indispensable as possible. Our best chances of remaining relevant and instrumental to patient care will likely hinge on our ability to lead the changes rather than be passively impacted by them. Linton’s advice should be followed today as we move forward.”  
    The future of radiology and radiologists: AI is pivotal but not the only change afoot
    E.M. Weisberg and E.K. Fishman
    Journal of Medical Imaging and Radiation Sciences, https://doi.org/10.1016/j.jmir.2024.02.002
  • “Several years ago, eminent radiology historian Otha Linton expressed optimism for the future of radiology but cautioned that the future of radiologists was less clear and would depend on radiologists keeping up with advances in technology and in providing care directly to the patient. Today, uncertainty regarding the future of radiologists is largely driven by the emergence of artificial intelligence (AI). AI, particularly ChatGPT, is evolving so quickly that a paper reporting on one version of the program is virtually rendered obsolete by the time of publication due to the release of an updated version, or two, of the software. This new reality calls for more emphatic questions about radiologists, namely, “What is the future of radiologists if AI succeeds?” and “Will radiologists continue to monopolize imaging services?”
    The future of radiology and radiologists: AI is pivotal but not the only change afoot
    E.M. Weisberg and E.K. Fishman
    Journal of Medical Imaging and Radiation Sciences, https://doi.org/10.1016/j.jmir.2024.02.002 
  • “Generative artificial intelligence (AI), specifically the large language models (LLMs) that underlie impressive new applications such as ChatGPT, are already fundamentally changing medicine. Unlike more traditional AI systems that produce simple outputs such as a number (say, the predicted length of stay for a patient in the hospital) or a category (say, “malignant” or “benign” for a radiologic system), “generative AI” refers broadly to systems whose outputs take the form of more unstructured media objects, such as images and documents. Under the hood, many of these systems are actually built by executing models that serve a more classical purpose. Generative text models, for example, generate whole documents by iteratively predicting “what word comes next.” But the ability to produce a whole document with desired properties unlocks a host of exciting applications.”
    Improving Efficiencies While Also Delivering Better Health Care Outcomes: A Role for Large Language Models.
    Rao SK, Fishman EK, Rizk RC, Chu LC, Rowe SP.  
    J Am Coll Radiol. 2024 Jan 12:S1546-1440(24)00005-X. doi: 10.1016/j.jacr.2024.01.003. Epub ahead of print
  • “More interesting now, after the heights of the pandemic, we are beginning to see a market need for technology-driven efficiencies to help health care systems deal with historic losses over the past couple of years. For example, staffing shortages are top cost drivers and a concern for hospital executives. Strategies that may have worked in the past, raising prices via market mergers and consolidation or asking clinicians to see more patients and boost volumes, are unlikely to work this time given market dynamics. Instead, the new name of the game for health systems is productivity increase but with a twist. They need to improve care delivery experiences and outcomes while also improving efficiencies. Most important, they need to meet the health care demand without increasing the exodus of frontline health care workers.”
    Improving Efficiencies While Also Delivering Better Health Care Outcomes: A Role for Large Language Models.
    Rao SK, Fishman EK, Rizk RC, Chu LC, Rowe SP.  
    J Am Coll Radiol. 2024 Jan 12:S1546-1440(24)00005-X. doi: 10.1016/j.jacr.2024.01.003. Epub ahead of print
  • “As clinicians, we are tasked to serve three constituents for patients we see. Those constituents are (1) our care team members, who benefit from clinical notes that convey our thought process; (2) ourselves, as the physicians who also need to place orders, report diagnostic codes, and handle procedure codes for billing and revenue cycle; and (3) our patients, the most important constituents, who benefit from visit summaries and access to their OpenNotes in their portals.”  
    Improving Efficiencies While Also Delivering Better Health Care Outcomes: A Role for Large Language Models.
    Rao SK, Fishman EK, Rizk RC, Chu LC, Rowe SP.  
    J Am Coll Radiol. 2024 Jan 12:S1546-1440(24)00005-X. doi: 10.1016/j.jacr.2024.01.003. Epub ahead of print
  • “There is broad recognition that the burnout-inducing amount of paperwork associated with modern medical practice needs to be addressed to limit the early departure of clinicians from the field. A recent report in the Journal of General Internal Medicine suggested that doctors need 27 hours a day to complete all their work. An AMA study from 2021 suggested that 63% of physicians surveyed reported burnout. Expenses of $4.6 billion annually were related to physician turnover and reduced clinical hours in 2019. All of these statistics have only worsened on the other side of the (peak) pandemic. Our priority is to assist and integrate for the depth of the workflow in an enterprise way, spanning the work that happens before, during, and after a patient encounter from notes to orders and coding.”
    Improving Efficiencies While Also Delivering Better Health Care Outcomes: A Role for Large Language Models.
    Rao SK, Fishman EK, Rizk RC, Chu LC, Rowe SP.  
    J Am Coll Radiol. 2024 Jan 12:S1546-1440(24)00005-X. doi: 10.1016/j.jacr.2024.01.003. Epub ahead of print
  • “The arrival of LLMs that can provide real-time assistance to physicians may allow a remarkable increase in their bandwidth, regardless of specialty. In radiology, leveraging these emerging technologies will potentially allow more scans to be read without adding burden or stress to the interpreting radiologist. We would be in the “high consequences for factual inaccuracies and high volume of decisions” quadrant of Figure 1, where the assistance of an LLM would be its key feature. The importance of that added bandwidth would be its potential to ameliorate disparities by democratizing the expertise of radiologists who are already able to handle large volumes or who may have special skill sets in less common examinations. Such improvements in access to care would hopefully have downstream effects of improved outcomes in marginalized populations.”
    Improving Efficiencies While Also Delivering Better Health Care Outcomes: A Role for Large Language Models.
    Rao SK, Fishman EK, Rizk RC, Chu LC, Rowe SP.  
    J Am Coll Radiol. 2024 Jan 12:S1546-1440(24)00005-X. doi: 10.1016/j.jacr.2024.01.003. Epub ahead of print
  • “Perhaps the most important lesson for radiologists is that we need to have a seat at the table as LLMs are adopted more broadly as assistants and augmenters. We can help drive the maximum value for ourselves and, most important, our patients from those emerging technologies.”  
    Improving Efficiencies While Also Delivering Better Health Care Outcomes: A Role for Large Language Models.
    Rao SK, Fishman EK, Rizk RC, Chu LC, Rowe SP.  
    J Am Coll Radiol. 2024 Jan 12:S1546-1440(24)00005-X. doi: 10.1016/j.jacr.2024.01.003. Epub ahead of print
  • “AI has entered the medical field so rapidly and unobtrusively that it seems as if its interactions with the profession have been accepted without due diligence or in-depth consideration. It is clear that AI applications are being developed with the speed of lightning, and from recent publications it becomes frightfully apparent what we are heading for and not all of this is good. AI may be capable of amazing performance in terms of speed, consistency, and accuracy, but all of its operations are built on knowledge derived from experts in the field.”
    AI's Threat to the Medical Profession.  
    Fogo AB, Kronbichler A, Bajema IM.  
    JAMA. 2024 Jan 19. doi: 10.1001/jama.2024.0018. Epub ahead of print. 
  • “This era will show a decrease in intellectual debates among colleagues, a sign of the time that computer scientists have already warned us about. While authors of literature are fighting for regulations to control the usage of AI in art, physicians should contemplate how to take advantage of the potential benefits from AI in medicine without losing control over their profession. With the issue of a landmark Executive Order in the US to ensure that America leads the way in managing the risks of AI and the EU becoming the first continent to set clear rules for the use of AI, physicians should realize that keeping AI within boundaries is essential for the survival of their profession and for meaningful progress in diagnosis and understanding of disease mechanisms.”
    AI's Threat to the Medical Profession.  
    Fogo AB, Kronbichler A, Bajema IM.  
    JAMA. 2024 Jan 19. doi: 10.1001/jama.2024.0018. Epub ahead of print. 
  • “The human radiologist will no longer depend on dictating text to generate reports. This will result in substantial increases in efficiency, reduced variability, and increased quality, not only for the radiologist but the entire health care enterprise. Replaced by real-time collaborative human-machine image interpretation workflow orchestration tools, pertinent observations will be automatically identified, normalized, and stored as a reusable schematized interpretation object. This object, with the help of generative intelligent agents (eg, large language models), will enable the automatic creation of communication and collaboration tools, such as the following.”
    Imaging Informatics: Maturing Beyond Adolescence to Enable the Return of the Doctor's Doctor.  
    Chang PJ.  
    Radiology. 2023 Oct;309(1):e230936. 
  • “First, with the help of AI agents, enhanced radiology reports will provide consistent structure, vocabulary, and relevant actionable patient management guidelines and recommendations. These intelligent IT agents will also provide enhanced patient-specific differential diagnoses and correlative/comparative analysis incorporating all imaging and nonimaging phenotypic evidence. Second, with the help of AI, health consumer reports will use more accessible prose (automatically augmented by value added links to additional useful resources). Third, there will be workflow automation to deliver relevant data and alerts to downstream AI agents for various enhancements throughout the health care enterprise. This will include feeding and supporting oncology databases; decision support and analytic agents; research, teaching, quality, and operational systems; urgent result notification engines; actionable and incidental finding workflow orchestration; and AI lifecycle management infrastructure.”
    Imaging Informatics: Maturing Beyond Adolescence to Enable the Return of the Doctor's Doctor.  
    Chang PJ.  
    Radiology. 2023 Oct;309(1):e230936. 
  • “While these predictions/wishes would seem fanciful, I sincerely believe the technical risk can and will eventually be mitigated. The real issue is whether we have the will to change our legacy models to embrace these disruptive but transformative approaches. I believe we have no choice; our existing informatics solutions are barely able to handle today’s demands and will certainly fail to support even near-future requirements. As always, a critical enabling resource in this journey will be the dissatisfied but engaged radiologist. Do not let us informaticists get away with weak excuses. Our laggard position relative to other industries gives practicing radiologists ample shared metaphors (“why can’t my PACS work as well as my dating app?”). As stated earlier, we must have patience but also provide firm guidance to facilitate the growth of IT and imaging informatics beyond adolescence to allow us to once again be considered the doctor’s doctor for the next generation.”
    Imaging Informatics: Maturing Beyond Adolescence to Enable the Return of the Doctor's Doctor.  
    Chang PJ.  
    Radiology. 2023 Oct;309(1):e230936. 

  • The Future of AI and Informatics in Radiology: 10 Predictions.
    Langlotz CP
    Radiology. 2023 Oct;309(1):e231114. 
  • “The value of AI is viewed differently by different stakeholders. Artificial intelligence vendors appealing to hospital executives make the business case for AI: a return on investment, which is invariably revenue generation. Industry sells AI as a productivity-enhancing tool, which is singularly unappealing to radiologists, AI’s end users, who already are at their wits’ end with imaging volumes. However, because of the reimbursement structure, productivity is the lifeblood of radiology. Radiologists are judged and rewarded for how many studies they read. They are in a productivity quagmire: productivity sustains them and productivity ails them.”  
    Algorithms at the Gate-Radiology's AI Adoption Dilemma.
    Jha S.  
    JAMA. 2023 Oct 6. doi: 10.1001/jama.2023.16049. Epub ahead of print. PMID: 37801311
  • With efficiency gains from AI, radiologists can read either more studies in the same time or the same number of studies in less time. With the latter, AI could theoretically enable a more pleasant work experience. In practice this is unlikely. Although the link between productivity pressure and burnout seems clear, it is far from certain that marginal gains, such as finishing work an hour earlier, will spiritually rejuvenate radiologists. Instead, efficiency gains might extend the dominance of a smaller labor force: fewer radiologists working more efficiently but just as intensely, and similarly predisposed to burnout.  
    Algorithms at the Gate-Radiology's AI Adoption Dilemma.
    Jha S.  
    JAMA. 2023 Oct 6. doi: 10.1001/jama.2023.16049. Epub ahead of print. PMID: 37801311
  • Whether AI augments radiologists’ performance or radiologists regress to AI’s level remains speculative. What is not speculative is that an extra pair of eyes, with both AI and radiologists checking each other’s work, may not reduce the net labor conscripted to extract meaningful clinical information from images. Over time, radiologists may reflexively note the possibility that AI will detect something that they will not, which may make radiologists, who infamously hedge, hedge even more. Known as the Solow paradox, development in information technology has historically often slowed productivity.
    Algorithms at the Gate-Radiology's AI Adoption Dilemma.
    Jha S.  
    JAMA. 2023 Oct 6. doi: 10.1001/jama.2023.16049. Epub ahead of print. PMID: 37801311
  • Another use for AI is reading normal study results, decanting the abnormal ones for radiologists. Because many study results are normal, AI could reduce radiologists’ workload. This sounds appealing but only superficially. For one, “normal” must be contextualized with symptoms because a normal study result does not mean the patient does not have an illness. To detect “abnormal,” radiologists must know normal. In fact, “normal” is arguably the most difficult diagnosis to make. On imaging, “normal” has a broad and variegated coastline. The heaviest radiology book I own is Atlas of Normal Roentgen Variants Which May Simulate Disease, which I still have not finished reading.
    Algorithms at the Gate-Radiology's AI Adoption Dilemma.
    Jha S.  
    JAMA. 2023 Oct 6. doi: 10.1001/jama.2023.16049. Epub ahead of print. PMID: 37801311
  • Radiologists have focused excessively on radiology reports, considering them the sole distillates of their expertise. Nothing is more sacrosanct for the profession than the turnaround time of reports. The report is only 1 part of imaging’s role in care systems. For example, positive imaging findings, such as coronary atherosclerosis, should induce other steps, such as ensuring the patient is taking an optimal dose of statins and directing the patient to the appropriate physician. By interrogating the electronic health record, AI can theoretically do this seamlessly. If the profession changes from being charged with just report generation to system management as well, in which the radiologist’s job is as much activating the next step in the diagnostic pathways as it is deciding whether a study finding is positive or negative, radiologists may find AI more useful.
    Algorithms at the Gate-Radiology's AI Adoption Dilemma.
    Jha S.  
    JAMA. 2023 Oct 6. doi: 10.1001/jama.2023.16049. Epub ahead of print. PMID: 37801311
  • “It took more than a decade for magnetic resonance imaging to travel from research to clinical practice; AI, which is still young, is already being used in the radiology value chain, from image reconstruction to report generation. Eventually, AI will allow radiologists to perform at the top of their license.”  
    Algorithms at the Gate-Radiology's AI Adoption Dilemma.
    Jha S.  
    JAMA. 2023 Oct 6. doi: 10.1001/jama.2023.16049. Epub ahead of print. PMID: 37801311
  • “In the 21st century, artificial intelligence (AI) has emerged as a valuable approach in data science and a growing influence in medical research, with an accelerating pace of innovation. This development is driven, in part, by the enormous expansion in computer power and data availability. However, the very features that make AI such a valuable additional tool for data analysis are the same ones that make it vulnerable from a statistical perspective. This paradox is particularly pertinent for medical science. Techniques that are adequate for targeted advertising to voters and consumers or that enhance weather prediction may not meet the rigorous demands of risk prediction or diagnosis in medicine. In this review article, we discuss the statistical challenges in applying AI to biomedical data analysis and the delicate balance that researchers face in wishing to learn as much as possible from data while ensuring that data-driven conclusions are accurate, robust, and reproducible.”
    Where Medical Statistics Meets Artificial Intelligence
    David J. Hunter, M.B., B.S., and Christopher Holmes, Ph.D.
    N Engl J Med 2023;389:1211-9.
  • “For some world problems, relying on AI is the only option we have. Diabetic retinopathy (DR) is the leading cause of vision loss globally, and out of the 451 million people worldwide with diabetes, a third of these people will develop DR. If treated, blindness due to DR is completely preventable, but the problem is that for many individuals, a proper diagnosis of DR is impossible given the fact that there are only 200,000 ophthalmologists in the world. Thankfully, AI models have >97% accuracy (on par with ophthalmologists) in detecting DR, compensating for the lack of physicians capable of making this diagnosis. However, despite AI’s promising potential, it comes with challenges and limitations of its own.”
    Artificial Intelligence as a Public Service.  
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.  
    J Am Coll Radiol. 2023 Sep;20(9):919-921
  • “AI expertise alone cannot solve these problems; we need to collaborate with subject matter experts. Machine learning excels at prediction and correlation, but not at identifying causation. It does not know the direction of the causality.”
    Artificial Intelligence as a Public Service.  
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.
     J Am Coll Radiol. 2023 Sep;20(9):919-921
  • “We can be fooled by bias. In 1991, a study published in the New England Journal of Medicine found that left-handed people died 9 years younger than right-handed people [8]. However, the study failed to consider that there used to be bias against left-handed people and many of them were forced to become right-handed. Researchers had assumed that the percentage of left-handed people is stable over time; the population, although random, is biased against left-handed people. Most data we collect have biases, and if we do not understand them and take them into account, our data models will not be correct.”
    Artificial Intelligence as a Public Service.  
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.  
    J Am Coll Radiol. 2023 Sep;20(9):919-921
  • “We forget that correlation or predictive power does not imply causation; the fact that two variables are correlated does not imply that one causes the other. In a Gallup poll several years ago, surveyors asked participants if correlation implied causation, and 64% of Americans answered yes. This occurs because humans learn from correlation, but we cannot observe causality. We have to understand that most people do not know the difference.”
    Artificial Intelligence as a Public Service.  
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.  
    J Am Coll Radiol. 2023 Sep;20(9):919-921
  • “Models are very good at cheating; if there is anything the model can use to cheat, it will learn it. An AI model was trained to distinguish between skin cancer and benign lesions and was thought to achieve dermatologist-level performance. However, many of the positive cases had a ruler in the picture but the negative cases did not. The model learned that if there was a ruler present in the image, there was a much higher chance of the patient having cancer.”
    Artificial Intelligence as a Public Service.  
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.  
    J Am Coll Radiol. 2023 Sep;20(9):919-921
  • “Access to data is one of the biggest challenges we face. There is a significant amount of data that cannot be open to researchers because of privacy issues, especially in the medical world. In most scenarios, data anonymization just does not work because even if we anonymize the data, there is always the risk of someone attempting to deanonymize it. As a result, Microsoft has invested in the Differential Privacy Platform, which provides a way for researchers to ascertain insights from the data without violating the privacy of individuals. Privacy-preserving synthetic images can also generate realistic synthetic data, including synthetic medical images, after training on a real data set, without affecting the privacy of individuals.”
    Artificial Intelligence as a Public Service.  
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.  
    J Am Coll Radiol. 2023 Sep;20(9):919-921
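The quote above names the Laplace-noise idea behind differential privacy: a query answer is perturbed just enough that no single individual's record can be inferred. A minimal sketch of that mechanism follows; the query, data, and function names are illustrative assumptions, not Microsoft's actual Differential Privacy Platform API.

```python
import numpy as np

def dp_count(values, threshold, epsilon, rng):
    """Release a count (how many values exceed threshold) with epsilon-DP.

    A counting query has sensitivity 1: adding or removing one person's
    record changes the true count by at most 1, so Laplace noise with
    scale 1/epsilon is sufficient for epsilon-differential privacy.
    """
    true_count = sum(v > threshold for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
ages = [34, 51, 67, 72, 45, 80, 29]   # toy "patient" records
noisy = dp_count(ages, threshold=65, epsilon=1.0, rng=rng)
# noisy is close to the true count of 3, but the noise masks whether any
# single record was present in the data set.
```

Smaller epsilon means more noise and stronger privacy; the researcher trades accuracy of the released statistic for protection of the individuals behind it.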
  • “Other technology companies such as Google Health, NVIDIA, and Amazon have also invested in strategic partnerships with health care organizations to combine their expertise in AI with our health care domain knowledge to solve impactful clinical problems. These collaborations leverage the power of big data analytics and have immense potential to improve cancer detection, predict patient outcomes, and reduce health inequity by improving patient access.”
    Artificial Intelligence as a Public Service.  
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.  
    J Am Coll Radiol. 2023 Sep;20(9):919-921
  • “Contrary to popular media speculation, AI alone is not enough to overcome the problems that society seeks to resolve; rather, machine learning depends on subject-matter experts to find solutions. AI provides us with numerous opportunities for advancement in the field of radiology: improved diagnostic certainty, suspicious case identification for early review, better patient prognosis, and a quicker turnaround. Machine learning depends on radiologists and our expertise, and the convergence of radiologists and AI will bring forth the best outcomes for patients.”
    Artificial Intelligence as a Public Service.  
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.  
    J Am Coll Radiol. 2023 Sep;20(9):919-921
  • “Cross-institution collaborations are constrained by data-sharing challenges. These challenges hamper innovation, particularly in artificial intelligence, where models require diverse data to ensure strong performance. Federated learning (FL) solves data-sharing challenges. In typical collaborations, data is sent to a central repository where models are trained. With FL, models are sent to participating sites, trained locally, and model weights aggregated to create a master model with improved performance. At the 2021 Radiological Society of North America’s (RSNA) conference, a panel was conducted titled “Accelerating AI: How Federated Learning Can Protect Privacy, Facilitate Collaboration and Improve Outcomes.” Two groups shared insights: researchers from the EXAM study (EMR CXR AI Model) and members of the National Cancer Institute’s Early Detection Research Network’s (EDRN) pancreatic cancer working group. EXAM brought together 20 institutions to create a model to predict oxygen requirements of patients seen in the emergency department with COVID-19 symptoms. The EDRN collaboration is focused on improving outcomes for pancreatic cancer patients through earlier detection. This paper describes major insights from the panel, including direct quotes. The panelists described the impetus for FL, the long-term potential vision of FL, challenges faced in FL, and the immediate path forward for FL.”
    Accelerating artificial intelligence: How federated learning can protect privacy, facilitate collaboration, and improve outcomes.  
    Patel M, Dayan I, Fishman EK, et al.
    Health Informatics J. 2023 Oct-Dec;29(4):14604582231207744.
  • Every institution has different rules and complexities regarding data sharing, and sharing is often the exception rather than the rule. Thus, the traditional approach to create a data lake to centralize all data for training creates enormous administrative, cost, and regulatory hurdles, especially when doing so with more than a few thousand cases. Federation is the next generation, where rather than trying to centralize everything and going through the process of signing material transfer agreements and manual de-identification, every single case can contain all the protected health information and just use the federated linkage to connect key analytic components, with no protected health information leakage. This approach will change cross-institutional research collaborations - where currently the first few years are spent working out administrative details before doing science - into projects where administrative barriers only take a few months.
    Accelerating artificial intelligence: How federated learning can protect privacy, facilitate collaboration, and improve outcomes.  
    Patel M, Dayan I, Fishman EK, et al.
    Health Informatics J. 2023 Oct-Dec;29(4):14604582231207744.
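The workflow the quotes describe, where models travel to the sites, train on local data, and only weights are aggregated centrally, is federated averaging. Below is a minimal sketch of that loop under stated assumptions: a toy least-squares model and synthetic per-site data stand in for the EXAM study's actual architecture and clinical data, which are not described at this level in the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=20):
    """One site's local update: plain gradient descent on squared error.
    Raw data (X, y) never leaves the site; only weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three "institutions", each holding private data behind its own firewall.
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(5):                 # communication rounds
    # Send the current global model to every site, train locally...
    local_ws = [local_train(global_w, X, y) for X, y in sites]
    # ...then aggregate: average the returned weights (equal-size sites).
    global_w = np.mean(local_ws, axis=0)
# global_w now approximates true_w without any raw data being pooled.
```

Real deployments weight the average by site sample counts and add secure aggregation so the server never sees an individual site's update, but the round-trip structure is the same.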
  • “AI for Good, a multimillion dollar philanthropic initiative from Microsoft, highlights the potential of AI and aims to help and empower those working around the world to solve issues related to five pillars: Earth, Accessibility, Humanitarian Action, Cultural Heritage, and Health. AI for Health is a $40 million investment made over 5 years with the goal of empowering researchers and organizations to use AI to advance the health of people and communities around the world. We also dedicated $20 million to help those on the front lines of research of COVID-19 through the AI for Health program.”
    Artificial Intelligence as a Public Service.
    Lavista Ferres JM, Fishman EK, Rowe SP, Chu LC, Lugo-Fagundo E.  
    J Am Coll Radiol. 2023 Mar 30:S1546-1440(23)00265-X. doi: 10.1016/j.jacr.2023.01.013. Epub ahead of print. 
  • “The importance of training for the successful implementation of AI systems was stressed in several studies. In one study referring to a continuous, predictive monitoring system and two addressing machine learning systems, participants reported a lack of experience with the systems, which resulted in feeling overwhelmed. Alumran et al. observed that about half (53.49%) of nurses (N = 71) who did not use an AI system also did not participate in prior training. Half of those receiving one training course used the system, whereas taking two training courses resulted in use of the system by 83% of trained nurses. When taking three training courses this percentage increased to 100%.”
    An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals
    Sophie Isabelle Lambert et al.
    npj Digital Medicine (2023) 6:111 ; https://doi.org/10.1038/s41746-023-00852-5
  • “In a study about the implementation of AI in radiology, the effect of organizational culture on the acceptance of the system versus the resistance to change was discussed. Several participants mentioned structuring the adoption of the system by selecting champions and expert groups. However, in another study reporting on a wound-related CDSS, some nurses preferred to base their behaviour on their own decision-making process and feared that their organization was forcing them to do otherwise.”
    An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals
    Sophie Isabelle Lambert et al.
    npj Digital Medicine (2023) 6:111 ; https://doi.org/10.1038/s41746-023-00852-5
  • “Human factors such as personality and experience were found to affect the perception of an AI system. Depending on the healthcare professional, their needs and the work environment, the acceptance of an AI system might differ. The same AI system might be perceived as helpful by a person and would therefore be accepted while another professional might find that the system could hold up their work and would therefore deem it as unacceptable. Moreover, as found in our review and supported by the literature, more experienced healthcare professionals tend to trust their knowledge and experience more than an AI system. Consequently, they might override the system’s recommendations and make their own decisions based on their personal judgement. This might be related to their fear of losing autonomy in a situation where the AI system is recommending something that is not in line with their critical thinking process.”
    An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals
    Sophie Isabelle Lambert et al.
    npj Digital Medicine (2023) 6:111 ; https://doi.org/10.1038/s41746-023-00852-5
  • “In this integrative review, various perspectives of healthcare professionals in hospital settings regarding the acceptance of AI were revealed. Many facilitating factors to the acceptance of AI as well as limiting factors were discussed. Factors related to acceptance or limited acceptance were discussed in association with the characteristics of the UTAUT model. After reviewing 42 studies and discussing them in rapport with studies from the literature, we conclude that hesitation to accept AI in the healthcare setting has to be acknowledged by those in charge of implementing AI technologies in hospital settings. Once the causes of hesitation are known and personal fears and concerns are recognized, appropriate interventions such as training, reliability of AI systems and their ease of use may aid in overcoming the indecisiveness to accept AI in order to allow users to be keen, satisfied and enthusiastic about the technologies.”
    An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals
    Sophie Isabelle Lambert et al.
    npj Digital Medicine (2023) 6:111 ; https://doi.org/10.1038/s41746-023-00852-5
  • “This paper reviews the current state of patient safety and the application of artificial intelligence (AI) techniques to patient safety. This paper defines patient safety broadly, not just inpatient care but across the continuum of care, including diagnostic errors, misdiagnosis, adverse events, injuries, and measurement issues. It outlines the major current uses of AI in patient safety and the relative adoption of these techniques in hospitals and health systems. It also outlines some of the limitations of these AI systems and the challenges with evaluation of these systems. Finally, it outlines the importance of developing a proactive agenda for AI in healthcare that includes marked increased funding of research and evaluation in this area.”
    Bending the patient safety curve: how much can AI help?
    David C. Classen , Christopher Longhurst  and Eric J. Thomas
    npj Digital Medicine (2023) 6:2 ; https://doi.org/10.1038/s41746-022-00731-5
  • “There are few rigorous assessments of actual AI deployments in health care delivery systems, and while there is some limited evidence for improved safety processes or outcomes when these AI tools are deployed, there is also evidence that these systems can increase risk if the algorithms are tuned to give overly confident results. For example, within AI risk prediction models, the sizeable literature on model development and validation is in stark contrast to the scant data describing successful clinical deployment and impact of those models in health care settings. One study revealed significant problems with one vendor’s EHR sepsis prediction algorithm, which has been very widely deployed among many health systems without any rigorous evaluation.”
    Bending the patient safety curve: how much can AI help?
    David C. Classen , Christopher Longhurst  and Eric J. Thomas
    npj Digital Medicine (2023) 6:2 ; https://doi.org/10.1038/s41746-022-00731-5
  • “The prediction of sepsis for inpatients, a common condition with a high mortality rate, is an area of intense AI focus in health care. Many studies have shown early detection and treatment of patients with sepsis can markedly reduce mortality. Indeed, a recent review found over 1800 published studies of AI programs developed to predict sepsis in patients hospitalized or in the emergency room. However, none of these models have been widely adopted. The resulting vacuum has been filled by a large commercial EHR vendor that developed its own proprietary model which it deployed to hundreds of US hospitals without any published critical evaluation.”
    Bending the patient safety curve: how much can AI help?
    David C. Classen , Christopher Longhurst  and Eric J. Thomas
    npj Digital Medicine (2023) 6:2 ; https://doi.org/10.1038/s41746-022-00731-5
  • “One of the health systems that uses this commercial EHR sepsis prediction program performed an evaluation of this program in its own health system. The results were unexpected: the EHR vendor predictive program only picked up 7% of 2552 patients with sepsis who were not treated with antibiotics in a timely fashion and failed to identify 1709 patients with sepsis that the hospital did identify. Obviously, this AI sepsis prediction algorithm was not subjected to rigorous external evaluation but nevertheless was broadly adopted because the EHR vendor implemented it in its EHR package and thus made it conveniently available for its large install base of hospitals. No published evaluation on the impact of this proprietary EHR AI program on patients beyond this hospital has emerged, and the impacts, both positive and negative, that it may have caused in its broad hospital use are unknown.”
    Bending the patient safety curve: how much can AI help?
    David C. Classen , Christopher Longhurst  and Eric J. Thomas
    npj Digital Medicine (2023) 6:2 ; https://doi.org/10.1038/s41746-022-00731-5

  • “Clinical applications of artificial intelligence (AI) in healthcare, including in the field of oncology, have the potential to advance diagnosis and treatment. The literature suggests that patient values should be considered in decision making when using AI in clinical care; however, there is a lack of practical guidance for clinicians on how to approach these conversations and incorporate patient values into clinical decision making. We provide a practical, values-based guide for clinicians to assist in critical reflection and the incorporation of patient values into shared decision making when deciding to use AI in clinical care. Values that are relevant to patients, identified in the literature, include trust, privacy and confidentiality, non-maleficence, safety, accountability, beneficence, autonomy, transparency, compassion, equity, justice, and fairness. The guide offers questions for clinicians to consider when adopting the potential use of AI in their practice; explores illness understanding between the patient and clinician; encourages open dialogue of patient values; reviews all clinically appropriate options; and makes a shared decision of what option best meets the patient’s values. The guide can be used for diverse clinical applications of AI.”
    The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
    Rosanna Macri and Shannon L. Roberts
    Curr. Oncol. 2023, 30, 2178–2186
  • “Shared decision making is an important part of patient-centered care that contributes to a positive therapeutic relationship by respecting patient autonomy and dignity through empowering patients to actively engage in treatment decisions. The goal is for a clinician to partner with a patient to identify the best option based on the patient’s values. During a shared decision-making conversation, the clinician provides the patient with information to build an accurate illness understanding. The patient is then asked to consider what is most important to them in relation to their health and share their values, beliefs, and overall life goals, why they are important, and how they apply to quality of life. Taking this into consideration, the clinician then offers the patient different options and informs them about the risks and benefits based on the best available evidence.”
    The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
    Rosanna Macri and Shannon L. Roberts
    Curr. Oncol. 2023, 30, 2178–2186
  • “AI has the potential to enhance and/or replace certain processes, e.g., diagnosis, treatment planning, and treatment delivery. Patients should be made aware of the use or potential use of AI in their clinical care. Shared decision-making conversations with patients regarding the use of AI in their clinical care require clinicians to determine what patients need in order to be comfortable with the use of AI in their clinical care. During a shared decision-making conversation, the clinician can explain the benefits of using AI, including how AI can provide further options or evidence for diagnosis, treatment planning, or treatment delivery. Patient concerns should be addressed as openly and honestly as possible.”
    The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
    Rosanna Macri and Shannon L. Roberts
    Curr. Oncol. 2023, 30, 2178–2186
  • “As is the case for all treatment options, the clinician has an obligation to help the patient understand the risks and benefits, alongside alternatives. For example, if AI is used to replace pathologists or radiologists for diagnosis by analyzing images, patients want reassurance that the pathologist or radiologist will review the image and confirm the AI diagnosis. Alternatively, an AI system may present different treatment options with possible outcomes. It is important to have shared decision-making conversations with patients to adhere to informed decision making (i.e., the legal consent process) and to help them decide on the best treatment based on their unique values.”
    The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
    Rosanna Macri and Shannon L. Roberts
    Curr. Oncol. 2023, 30, 2178–2186
  • “The guide that we are suggesting will use a similar format and ask clinicians, as well as potentially even further upstream AI developers, to consider certain questions to ensure that predominant patient values associated with the use of AI in clinical care are respected prior to and throughout the shared decision-making process. This will help clinicians to carry out the following: 1. Ensure that they have considered the information that the patient may identify as important or relevant to them in the use of a particular technology in their clinical care. 2. Have an opportunity to explore patient-specific values associated with the implementation of AI in their care. 3. Work with the patient to apply their values to their clinical decision making.”
    The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making
    Rosanna Macri and Shannon L. Roberts
    Curr. Oncol. 2023, 30, 2178–2186
  • “The exceptionally rapid development of highly flexible, reusable artificial intelligence (AI) models is likely to usher in newfound capabilities in medicine. We propose a new paradigm for medical AI, which we refer to as generalist medical AI (GMAI). GMAI models will be capable of carrying out a diverse set of tasks using very little or no task-specific labelled data. Built through self-supervision on large, diverse datasets, GMAI will flexibly interpret different combinations of medical modalities, including data from imaging, electronic health records, laboratory results, genomics, graphs or medical text. Models will in turn produce expressive outputs such as free-text explanations, spoken recommendations or image annotations that demonstrate advanced medical reasoning abilities. Here we identify a set of high-impact potential applications for GMAI and lay out specific technical capabilities and training datasets necessary to enable them. We expect that GMAI-enabled applications will challenge current strategies for regulating and validating AI devices for medicine and will shift practices associated with the collection of large medical datasets.”
    Foundation models for generalist medical artificial intelligence.  
    Moor M, et al.
    Nature. 2023 Apr;616(7956):259-265.
  • “Instead, medical AI models are largely still developed with a task-specific approach to model development. For instance, a chest X-ray interpretation model may be trained on a dataset in which every image has been explicitly labelled as positive or negative for pneumonia, probably requiring substantial annotation effort. This model would only detect pneumonia and would not be able to carry out the complete diagnostic exercise of writing a comprehensive radiology report. This narrow, task-specific approach produces inflexible models, limited to carrying out tasks predefined by the training dataset and its labels. In current practice, such models typically cannot adapt to other tasks (or even to different data distributions for the same task) without being retrained on another dataset. Of the more than 500 AI models for clinical medicine that have received approval by the Food and Drug Administration, most have been approved for only 1 or 2 narrow tasks.”
    Foundation models for generalist medical artificial intelligence.  
    Moor M, et al.
    Nature. 2023 Apr;616(7956):259-265.
  • “A solution needs to integrate vision, language and audio modalities, using a vision–audio–language model to accept spoken queries and carry out tasks using the visual feed. Vision–language models have already gained traction, and the development of models that incorporate further modalities is merely a question of time [24]. Approaches may build on previous work that combines language models and knowledge graphs [25,26] to reason step-by-step about surgical tasks. Additionally, GMAI deployed in surgical settings will probably face unusual clinical phenomena that cannot be included during model development, owing to their rarity, a challenge known as the long tail of unseen conditions. Medical reasoning abilities will be crucial for both detecting previously unseen outliers and explaining them, as exemplified in Fig. 2.”
    Foundation models for generalist medical artificial intelligence.  
    Moor M, et al.
    Nature. 2023 Apr;616(7956):259-265.
  • “Foundation models have the potential to transform healthcare. The class of advanced foundation models that we have described, GMAI, will interchangeably parse multiple data modalities, learn new tasks on the fly and leverage domain knowledge, offering opportunities across a nearly unlimited range of medical tasks. GMAI’s flexibility allows models to stay relevant in new settings and keep pace with emerging diseases and technologies without needing to be constantly retrained from scratch. GMAI-based applications will be deployed both in traditional clinical settings and on remote devices such as smartphones, and we predict that they will be useful to diverse audiences, enabling both clinician-facing and patient-facing applications.”
    Foundation models for generalist medical artificial intelligence.  
    Moor M, et al.
    Nature. 2023 Apr;616(7956):259-265.
  • “Despite their promise, GMAI models present unique challenges. Their extreme versatility makes them difficult to comprehensively validate, and their size can bring increased computational costs. There will be particular difficulties associated with data collection and access, as GMAI’s training datasets must be not only large but also diverse, with adequate privacy protections. We implore the AI community and clinical stakeholders to carefully consider these challenges early on, to ensure that GMAI consistently delivers clinical value. Ultimately, GMAI promises unprecedented possibilities for healthcare, supporting clinicians amid a range of essential tasks, overcoming communication barriers, making high-quality care more widely accessible, and reducing the administrative burden on clinicians to allow them to spend more time with patients.”
    Foundation models for generalist medical artificial intelligence.  
    Moor M, et al.
    Nature. 2023 Apr;616(7956):259-265.
  • “Artificial Intelligence (AI) has been increasingly used in radiology to improve diagnostic procedures over the past decades. The application of AI at the time of cancer diagnosis also creates challenges in the way doctors should communicate the use of AI to patients. The present systematic review deals with the patient’s psycho-cognitive perspective on AI and the interpersonal skills between patients and physicians when AI is implemented in cancer diagnosis communication. Evidence from the retrieved studies pointed out that the use of AI in radiology is negatively associated with patient trust in AI and patient-centered communication in cancer disease.”
    The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis?  
    Derevianko A et al.  
    Cancers (Basel). 2023 Jan 12;15(2):470.
  • “Communication can be seen as a pivotal ingredient in medical care, and XAI might provide a patient-friendly explanation of biomedical decisions based on ML. Particularly, XAI would be highly valuable in the oncology field, where it is essential to consider not only the purely medical aspects but also the patient’s psychological and emotional dimensions. Technological aspects of AI systems are largely described by the current literature in different health sectors. However, the patient’s standpoint of AI to make decisions on their health is often neglected. Scarce communication between patients and clinicians about the potential benefits of AI is likely to cause patients’ mistrust of such a promising tool.”
    The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis?  
    Derevianko A et al.  
    Cancers (Basel). 2023 Jan 12;15(2):470.
  • “The use of AI in healthcare involves not only technical issues but also ethical, psychocognitive, and social-demographic considerations of presenting patients with cancer with the presence of AI at the time of the diagnosis. Trust, Accountability, Personal interaction, Efficiency, and General attitude toward AI were identified as five core areas by Ongena et al. The variables that merge such aspects of patients’ attitudes to using and communicating diagnosis with AI are education and knowledge. Accordingly, the authors showed that participants who have lower education are less supportive of AI, and those who have thought AI to be less efficient have a more negative attitude toward AI. Therefore, it is possible to consider that those who do not have a good understanding of the way AI works tend to have a negative attitude toward its effectiveness and less trust in its potential. Moreover, those who mistrust the diagnostic accuracy of AI as well as are not well educated tend to seek interpersonal interaction with doctors much more than those who were neutral about the efficacy of AI.”
    The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis?  
    Derevianko A et al.  
    Cancers (Basel). 2023 Jan 12;15(2):470.
  • “Future research may consider some useful steps in applying AI bearing in mind patients’ psycho-cognitive perspectives. We propose the acronym AIR-IUT to highlight the three main steps to be considered in the application of AI in the field of radiology and future studies dealing with the patient’s experience of the application of AI. The acronym stands for the fact that in the field of Artificial Intelligence in Radiology, the process is to Inform patients to Understand and Trust the use of AI. Future interventions should consider implementing the use of digital platforms with illustrative videos to inform patients, offering reliable educative means that might be delivered in the waiting rooms. Indeed, involving patients with digital interaction could increase compliance, reduce the fear of the unknown about health technology and psychological feelings, and improve patients’ decision-making at the time of treatment, since they are actively involved and informed at the screening time. Concurrently, a training course to enhance doctor–patient communication skills at the time of diagnosis may be developed. Such a course should help clinicians to adopt patient-friendly language (i.e., jargon words must be explained or replaced by simpler words) and an empathetic approach, entailing particular attention to the patient’s psychological well-being.”
    The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis?  
    Derevianko A et al.  
    Cancers (Basel). 2023 Jan 12;15(2):470.
  • “In conclusion, doctors should sharpen their communication skills when AI is involved in diagnosis, and patients should be engaged in the process mainly by being informed on the functioning of medical tools used to formulate their diagnosis. One of the most evident elements from the retrieved studies is that patients do not know what AI is and this lack of knowledge affects trust and doctor–patient communication. Since patients should be empowered and tailor informed at all phases of their clinical journey, they should ideally know which diagnostic tools are used by their clinicians and the way they work. Given the outstanding AI’s potential, we believe that informing patients about its progress in our field will help them to be more trusting towards it.”
    The Use of Artificial Intelligence (AI) in the Radiology Field: What Is the State of Doctor-Patient Communication in Cancer Diagnosis?  
    Derevianko A et al.  
    Cancers (Basel). 2023 Jan 12;15(2):470.
  • “Second, expertise in the field of AI and machine learning is closely linked to commercial applications. The underlying technology is rapidly changing and, in many cases, is being produced by companies and academic investigators with financial interests in their products. For a growing class of large-scale AI models, companies that have the necessary resources may be the only ones able to push the frontier of AI systems. Since many such models are not widely available yet, hands-on experience and a detailed understanding of a model’s operating characteristics often rest with only a small handful of model developers. Despite the potential for financial incentives that could create conflicts of interest, a deep understanding of AI and machine learning and their uses in medicine requires the participation of people involved in their development. Thus, in the series of AI articles we are publishing in the Journal and in NEJM AI, we will not restrict authorship and editorial control to persons without relevant financial ties but will follow a policy of transparency and disclosure.”
    Artificial Intelligence in Medicine
    Andrew L. Beam, et al.  
    N Engl J Med 388;13, March 30, 2023
  • “As noted above, the use of AI and machine learning has already become accepted medical practice in the interpretation of some types of medical images, such as ECGs, plain radiographs, computed tomographic (CT) and magnetic resonance imaging (MRI) scans, skin images, and retinal photographs. For these applications, AI and machine learning have been shown to help the health care provider by flagging aspects of images that deviate from the norm.”
    Artificial Intelligence and Machine Learning in Clinical Medicine, 2023  
    Charlotte J. Haug, Jeffrey M. Drazen  
    N Engl J Med 2023;388:1201-8.  
  • “Pitfalls aside, there is much promise. If AI and machine-learning algorithms can be reduced to clinically useful “apps,” will they be able to weed their way through mountains of clinical, genomic, metabolomic, and environmental data to aid in precision diagnosis? Can AI and machine-learning–driven apps become your personal scribe and free up your time spent on documentation so that you can spend more time with patients? Can the apps prompt you to ask a key question that could help in the differential diagnosis? Can they outwit the AI and machine-learning algorithms, used by insurance companies, that make it difficult for you to order a positron-emission tomographic–CT scan or collect reimbursement for the time you spent with a patient and the patient’s family? In each area, progress has been made. Is it good enough?”
    Artificial Intelligence and Machine Learning in Clinical Medicine, 2023  
    Charlotte J. Haug, Jeffrey M. Drazen  
    N Engl J Med 2023;388:1201-8. 
  • “when the results of the research are applied in such a way as to influence practice, the outcome must be beneficial for all patients under consideration, not just those who are similar to the ones with characteristics and findings on which the algorithm was trained. This raises the question of whether such algorithms should include consideration of public health (i.e., the use of scarce resources) when diagnostic or treatment recommendations are being made and the extent to which such considerations are part of the decision-making process of the algorithm. Such ethical considerations have engaged health professionals and the public for centuries.”
    Artificial Intelligence and Machine Learning in Clinical Medicine, 2023  
    Charlotte J. Haug, Jeffrey M. Drazen  
    N Engl J Med 2023;388:1201-8.  
  • “It is important to understand that this is a fast-moving field, so to some extent, what we publish may have the resolution of a snapshot of the landscape taken from a bullet train. Specifically, things happening in close temporal proximity to publication may be blurred because they are changing quickly, but the distant background will be in reasonably good focus.”
    Artificial Intelligence and Machine Learning in Clinical Medicine, 2023  
    Charlotte J. Haug, Jeffrey M. Drazen  
    N Engl J Med 2023;388:1201-8. 
  • AI in Healthcare: The Patient
    - While public opinion on AI is still evolving, knowledge about the technology also determined patients’ hesitance levels.
    - Patients who said they have heard little or nothing about AI were more likely to be uncomfortable with their provider using them than those who said they had heard about them, the survey found.
    - Ultimately, 75% of respondents said they are worried their providers are moving too fast in implementing the tools without fully knowing the risks, compared with just 23% who said they are moving too slowly.

  • “Comfort with AI varied by clinical application. For example, 12.3% of respondents were very comfortable and 42.7% were somewhat comfortable with AI reading chest radiographs, but only 6.0% were very comfortable and 25.2% were somewhat comfortable about AI making cancer diagnoses. Most respondents were very concerned or somewhat concerned about AI’s unintended consequences, including misdiagnosis (91.5%), privacy breaches (70.8%), less time with clinicians (69.6%), and higher health care costs (68.4%). A higher proportion of respondents who self-identified as being members of racial and ethnic minority groups indicated being very concerned about these issues, compared with White respondents.”
    Perspectives of Patients About Artificial Intelligence in Health Care
    Dhruv Khullar et al.
    JAMA Network Open. 2022;5(5):e2210309.
  • “Clinicians, policy makers, and developers should be aware of patients’ views regarding AI. Patients may benefit from education on how AI is being incorporated into care and the extent to which clinicians rely on AI to assist with decision-making. Future work should examine how views evolve as patients become more familiar with AI.”
    Perspectives of Patients About Artificial Intelligence in Health Care
    Dhruv Khullar et al.
    JAMA Network Open. 2022;5(5):e2210309.
  • “AI technologies have been widely applied to medicine and healthcare. However, with the increasing complexity of AI technologies, the related applications become more and more difficult to explain and communicate. To solve this problem, the concept of XAI came into being. XAI not only can make an AI application more transparent, but also can assist in the improvement of the AI application. However, at present, common XAI tools and technologies require background knowledge from various fields and are not tailored to a specific AI application. As a result, the explainability of the AI application may be far from satisfactory.”
    An improved explainable artificial intelligence tool in healthcare for hospital recommendation
    Yu-Cheng Wang, Tin-Chih Toly Chen, Min-Chi Chiu
    Healthcare Analytics 3 (2023) 100147
  • “XAI is already a trend in artificial intelligence (AI). At present, the development of XAI can be divided into two categories. In the first category, XAI is to enhance the practicality of an existing AI technology by explaining its execution process and result. In the second category, XAI is to improve existing AI technologies by incorporating easy-to-interpret tools such as heatmaps, decision rules, decision trees, and scatter plots to highlight deficiencies in the application process. In addition, a feedback mechanism must also be designed to adjust the application process or result to make improvements. This study falls within the first category.”
    An improved explainable artificial intelligence tool in healthcare for hospital recommendation
    Yu-Cheng Wang, Tin-Chih Toly Chen, Min-Chi Chiu
    Healthcare Analytics 3 (2023) 100147
  • AI and our Patients
  • “Most respondents had positive views about AI’s ability to improve care but had concerns about its potential for misdiagnosis, privacy breaches, reducing time with clinicians, and increasing costs, with racial and ethnic minority groups expressing greater concern. Respondents were more comfortable with AI in specific clinical settings, and most wanted to know when AI was used in their care. One limitation of this study was that it involved a panel that had agreed to participate in surveys, which may limit generalizability. In addition, compared with nonrespondents, respondents were younger, but no significant differences by sex or race and ethnicity were found. Clinicians, policy makers, and developers should be aware of patients’ views regarding AI. Patients may benefit from education on how AI is being incorporated into care and the extent to which clinicians rely on AI to assist with decision-making. Future work should examine how views evolve as patients become more familiar with AI.”
    Perspectives of Patients About Artificial Intelligence in Health Care.
    Khullar D et al.
    JAMA Netw Open. 2022 May 2;5(5):e2210309
  • “The growing use of artificial intelligence (AI) in health care has raised questions about who should be held liable for medical errors that result from care delivered jointly by physicians and algorithms. In this survey study comparing views of physicians and the U.S. public, we find that the public is significantly more likely to believe that physicians should be held responsible when an error occurs during care delivered with medical AI, though the majority of both physicians and the public hold this view (66.0% vs 57.3%; P = .020). Physicians are more likely than the public to believe that vendors (43.8% vs 32.9%; P = .004) and healthcare organizations should be liable for AI-related medical errors (29.2% vs 22.6%; P = .05). Views of medical liability did not differ by clinical specialty. Among the general public, younger people are more likely to hold nearly all parties liable.”
    Public vs physician views of liability for artificial intelligence in health care.  
    Khullar D et al.  
    J Am Med Inform Assoc. 2021 Jul 14;28(7):1574-1577
  • “Moreover, patients must be properly informed about the relevant concepts. Many patients are unfamiliar with the concept of over-diagnosis and therefore may be unable to weigh the relative risk of unnecessary diagnosis and treatment against the risk of failing to discover a cancer. Moreover, patients may not always have preferences about such outcomes. There must still be a sensible default decision threshold that can be used in cases in which patients choose to withhold their attitudes or simply have no preferences.”
    Clinical decisions using AI must consider patient values  
    Jonathan Birch, Kathleen A. Creel, Abhinav K.  
    Nature Medicine | VOL 28 | Feb 2022 | 226–235 
  • “A risk-profiling questionnaire suitable for cancer screening would probe the patient’s attitudes about the risk of over-diagnosis, false-positive and false-negative results, and over-treatment versus under-treatment, and the expected value to the patient of additional years of life of varying quality levels. The questionnaire might also ask patients to respond to statements such as ‘I would rather risk surgical complications to treat a benign tumor than risk missing a cancerous tumor’.”
    Clinical decisions using AI must consider patient values  
    Jonathan Birch, Kathleen A. Creel, Abhinav K.  
    Nature Medicine | VOL 28 | Feb 2022 | 226–235
  • “Despite relatively weak evidence supporting the use of AI in routine clinical practice and health care settings, AI models continue to be marketed and deployed. A recent example is the Epic Sepsis Model. While this model was widely implemented in hundreds of US hospitals, a recent study showed that it performed significantly worse in correctly identifying patients with early sepsis and improving patient outcomes in a clinical setting compared with performance observed during development of the model.”
    Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence  
    Cornelius A. James,  et al.
    JAMA Published online March 21, 2022
  • “AI will soon become ubiquitous in health care. Building on lessons learned as implementation strategies continue to be devised, it will be essential to consider the key role of clinicians as end users of AI-developed algorithms, processes, and risk predictors. It is imperative that clinicians have the knowledge and skills to assess and determine the appropriate application of AI outputs, for their own clinical practice and for their patients. Rather than being replaced by AI, these new technologies will create new roles and responsibilities for clinicians.”
    Preparing Clinicians for a Clinical World Influenced by Artificial Intelligence  
    Cornelius A. James,  et al.
    JAMA Published online March 21, 2022
  • “In the hospital context, Alexa could become an integral feature of patient rooms because it would allow patients to change the channel on the television, listen to prerecorded physician instructions, find out when their next dose of medication is due, receive daily briefs on what to expect, and actively engage or respond to other aspects of their hospital stay.”
    Learning to Talk Again in a Voice-First World
    David Isbitski, Elliot K. Fishman, MD, Karen M. Horton, MD, Steven P. Rowe, MD, PhD
    J Am Coll Radiol. 2019 Aug;16(8):1123-1124
  • “After discharge, Alexa can aid communication between the patient and the medical team (eg, “Ask my doctor when I should change my dressing”). Through machine learning, it may be possible for Alexa to figure out what patients want to know but are reluctant to ask and can then provide that information up front.”
    Learning to Talk Again in a Voice-First World
    David Isbitski, Elliot K. Fishman, MD, Karen M. Horton, MD, Steven P. Rowe, MD, PhD
    J Am Coll Radiol. 2019 Aug;16(8):1123-1124
  • “Specifically for radiology, home artificial intelligence devices could prove to be a valuable tool for offering instructions to patients on how to prepare for imaging examinations and what to expect when they arrive at an imaging center or hospital.”
    Learning to Talk Again in a Voice-First World
    David Isbitski, Elliot K. Fishman, MD, Karen M. Horton, MD, Steven P. Rowe, MD, PhD
    J Am Coll Radiol. 2019 Aug;16(8):1123-1124
  • “Integrating Alexa, or similar platforms, into everyday workflow may free the radiologist from many otherwise time-consuming tasks. For example, integrating artificial intelligence with voice technology into the phone network of the hospital would greatly speed the process of contacting ordering clinicians to report critical findings.”
    Learning to Talk Again in a Voice-First World
    David Isbitski, Elliot K. Fishman, MD, Karen M. Horton, MD, Steven P. Rowe, MD, PhD
    J Am Coll Radiol. 2019 Aug;16(8):1123-1124
  • “He understands that every guest is always the most important person in the room, and we have instilled that ethos in all of our staff members. Interestingly, this is one area where your industry, in my experience, fails: several years ago, my father underwent major surgery, and I felt that the health care staff did not consider him to be the most important person in the room and were simply not listening to what he and my family had to say to them.”
    Stories From the Kitchen: Lessons for Radiology From the Restaurant Business
    Cindy Wolf, Elliot K. Fishman, MD, Karen M. Horton, MD, Siva P. Raman
    J Am Coll Radiol. 2015 Mar;12(3):307-8
  • “I realize that managing the customer experience will undoubtedly be harder in a big organization like yours. Nevertheless, that is no excuse not to try. Hire people who care about and believe in what your organization is doing, and keep paying attention to every aspect of the customer experience.”
    Stories From the Kitchen: Lessons for Radiology From the Restaurant Business
    Cindy Wolf, Elliot K. Fishman, MD, Karen M. Horton, MD, Siva P. Raman
    J Am Coll Radiol. 2015 Mar;12(3):307-8
  • “At our institution, likely reflective of practices across the country, radiologists pay little attention to this group of employees, virtually never interact with them, and are often blind to the importance of these staff members in driving patients’ perception of a practice and the ultimate economic success of a radiology group.”
    Stories From the Kitchen: Lessons for Radiology From the Restaurant Business
    Cindy Wolf, Elliot K. Fishman, MD, Karen M. Horton, MD, Siva P. Raman
    J Am Coll Radiol. 2015 Mar;12(3):307-8
  • Even patients with substantial expertise in science or particular medical problems still rely on physicians during times of stress and uncertainty, and need them to perform procedures, interpret diagnostic tests, and prescribe medications. In these situations, reciprocal trust is central to the functioning of a health system and leads to higher treatment adherence, improvements in self-reported health, and better patient experience. So the question is: as technology continues to change relationships between patients and physicians, how can patient-physician trust be maintained or even improved?
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Prior work has examined the accuracy of AI, potential for biases, and lack of explainability (“black box”), all of which may affect physicians’ and patients’ trust in health care AI, as well as the potential for AI to replace physicians. However, in settings for which care will still be provided by a physician, whether and how AI will affect trust between physicians and patients has yet to be addressed. The potential effects of AI on trust between physicians and patients should be explicitly designed and planned for.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “When considering the implications of health care AI on trust, a broad range of health care AI applications need to be considered, including (1) use of health care AI by physicians and systems, such as for clinical decision support and system strengthening, physician assessment and training, quality improvement, clinical documentation, and nonclinical tasks, such as scheduling and notifications; (2) use of health care AI by patients including triage, diagnosis, and self-management; and (3) data for health care AI involving the routine use of patient data to develop, validate, and fine-tune health care AI as well as to personalize the output of health care AI.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Each of these applications has the potential to enable and disable the 3 components of trust: competency, motive, and transparency.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • Competency, Motive, Transparency
  • Competency reflects both the extent to which physicians are perceived to have clinical mastery and patients’ knowledge and self-efficacy of their own health. Because much of AI is and will be used to augment the abilities of physicians, there is potential to increase physician competency and enable patient-physician trust. This includes not only AI-assisted clinical decision support (eg, by suggesting possible diagnoses to consider) but also the use of AI for physician training and quality improvement (eg, by providing automated feedback to physicians about their diagnostic performance). AI can also serve an important role in empowering patients to better understand their health and self-manage their conditions.
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “On the other hand, trust will be compromised by AI that is inaccurate, biased, or reflective of poor-quality practices as well as AI that lacks explainability and inappropriately conflicts with physician judgment and patient autonomy.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Motive refers to a patient’s trust that the physician is acting solely in the interests of the patient. Patients are likely to perceive motive through the lens of the extent of the open dialogue they have with their physicians. Through greater automation of low-value tasks, such as clinical documentation, it is possible that AI will free up physicians to identify patients’ goals, barriers, and beliefs, and counsel them about their decisions and choices, thereby increasing trust. Conversely, AI could automate more of the physician’s workflow, but then fill freed-up time with more patients with clinical issues that are more cognitively or emotionally complex. AI could also enable greater distribution of care across a care team (both human agents and computer agents).”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Whether this would enhance or harm trust would depend on the degree of collaboration among team members and the information flow, and could compromise trust if robust, longitudinal relationships were impeded.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Well-designed AI that allows patients to appreciate and understand that clinical decisions are based on evidence and expert consensus should enhance trust. It can also process patient data (including health care and consumer data) to provide physicians’ insight on patients’ behaviors and preferences.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Moreover, if patient data are routinely shared with external entities for AI development, patients may become less transparent about divulging their information to physicians, and physicians may be more reluctant to acknowledge their own uncertainties. AI that does not explain the source or nature of its recommendations (“black box”) may also erode trust.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Where health care AI is implemented by health systems, it should be directed toward automating the transactional, business, and documentation aspects of care; doing so may provide time to physicians to engage with their patients more deeply. If AI is effective in relieving physicians from the burdens of data entry and other clerical tasks, much of the reclaimed time should be made available for patient care, shared decision-making, and counseling, which are the cornerstones of effective health care that are often compromised today.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “When health care AI is developed by health systems and third-party organizations using patient data, physicians should be mindful of the effect on patient-physician trust. It will be important to develop ethical approaches that allow for patient input into decisions by health systems to share data for the purposes of developing AI through some combination of individual patient consent and the involvement of patient advocacy groups.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563


Copyright © 2024 The Johns Hopkins University, The Johns Hopkins Hospital, and The Johns Hopkins Health System Corporation. All rights reserved.