Imaging Pearls ❯ Deep Learning ❯ Man vs AI

  • Medical Artificial Intelligence and Human Values.
    Yu KH, Healey E, Leong TY, Kohane IS, Manrai AK.
    N Engl J Med. 2024 May 30;390(20):1895-1904
  • “Such human values pertain broadly to the principles, standards, and preferences that reflect human goals and guide human behaviors. As we review here, LLMs and new foundation models, as technically impressive as they are, are only the latest incarnation in a long line of probabilistic models that have been integrated into medical decision making, which have all required that their creators and implementers make value judgments.”
    Medical Artificial Intelligence and Human Values.
    Yu KH, Healey E, Leong TY, Kohane IS, Manrai AK.
    N Engl J Med. 2024 May 30;390(20):1895-1904
  • Medical Artificial Intelligence and Human Values
    - As large language models and other artificial intelligence models are used more in medicine, ethical dilemmas can arise depending on how the model was trained. A user must understand how human decisions and values can shape model outputs. Medical decision analysis offers lessons on measuring human values.
    - A large language model will respond differently depending on the exact way a query is worded and how the model was directed by its makers and users. Caution is advised when considering the use of model output in decision making.
  • “Although we do not foresee physicians dramatically altering diagnostic practice using decision analysis in the era of LLMs, the core principle of utility elicitation offers lessons on aligning AI models for medicine. These lessons include the fundamental incompatibility of utilities from competing parties, the importance of how information is presented, and the benefits of enumerating and measuring both probabilities and utilities even when uncertainty remains in both.”
    Medical Artificial Intelligence and Human Values.
    Yu KH, Healey E, Leong TY, Kohane IS, Manrai AK.
    N Engl J Med. 2024 May 30;390(20):1895-1904
  • “AI governance teams can also help provide oversight, and agencies worldwide are grappling with how to regulate AI models, a challenge that will become more complex with foundation models and models that can reason over multiple data types. Finally, considerations of the values of individual patients may cause physicians to ignore or override AI recommendations; the liability implications remain an active focus by legal scholars. As medical AI becomes more integrated into care, recognizing and mitigating the risks associated with dataset shift will be paramount in aligning AI outputs with human values.”
    Medical Artificial Intelligence and Human Values.
    Yu KH, Healey E, Leong TY, Kohane IS, Manrai AK.
    N Engl J Med. 2024 May 30;390(20):1895-1904
  • “At every stage of training and deploying an AI model, human values enter. AI models are far from immune to the shifts and discrepancies of values across individual patients and societies. Past utilities may no longer be relevant or even reflect pernicious societal biases. Our shared responsibility is to ensure that the AI models we deploy accurately and explicitly reflect patient values and goals. As noted by Pauker and Kassirer in the Journal more than three decades ago in reviewing progress in medical decision analysis,  “the threat to physicians of a mathematical approach to medical decision making simply has not materialized.” Similarly, rather than replacing physicians, AI has made the consideration of values, as reflected by the guidance of a thoughtful physician, more essential than ever.”
    Medical Artificial Intelligence and Human Values.
    Yu KH, Healey E, Leong TY, Kohane IS, Manrai AK.
    N Engl J Med. 2024 May 30;390(20):1895-1904
  • “How different is this situation from the developments in medicine where physicians are giving away their knowledge to artificial intelligence (AI) on a voluntary basis and spend hours of valuable research time sharing expert knowledge with AI systems. AI has entered the medical field so rapidly and unobtrusively that it seems as if its interactions with the profession have been accepted without due diligence or in-depth consideration. It is clear that AI applications are being developed with the speed of lightning, and from recent publications it becomes frightfully apparent what we are heading for and not all of this is good. AI may be capable of amazing performance in terms of speed, consistency, and accuracy, but all of its operations are built on knowledge derived from experts in the field. We here follow the example of the kidney pathology field to illustrate the developments, emphasizing that this field is only exemplary of other fields in medicine.”
    AI's Threat to the Medical Profession.  
    Fogo AB, Kronbichler A, Bajema IM.  
    JAMA. 2024 Feb 13;331(6):471-472. 
  • “This era will show a decrease in intellectual debates among colleagues, a sign of the time that computer scientists have already warned us about. While authors of literature are fighting for regulations to control the usage of AI in art, physicians should contemplate how to take advantage of the potential benefits from AI in medicine without losing control over their profession. With the issue of a landmark Executive Order in the US to ensure that America leads the way in managing the risks of AI and the EU becoming the first continent to set clear rules for the use of AI, physicians should realize that keeping AI within boundaries is essential for the survival of their profession and for meaningful progress in diagnosis and understanding of disease mechanisms.”
    AI's Threat to the Medical Profession.  
    Fogo AB, Kronbichler A, Bajema IM.  
    JAMA. 2024 Feb 13;331(6):471-472. 
  • “AI could generate more time with patients in various ways. Large language models (LLMs) of clinic visit conversations could offload many data clerk functions, such as health insurance pre-authorisations, scheduling of follow-up visits, tests, and prescriptions. Meanwhile, the emergence of deep learning apps to screen for certain medical conditions, including heart arrhythmias, skin lesions, and urinary tract infections, could also free up clinician time. Additionally, patient autonomy has been extended via chatbots that answer medical questions, decompressing the need for direct contact with doctors and nurses (figure C). Other time-saving AI functions in development include the rapid synthesis and data visualisation for the complete medical records of a patient, the up-to-date review of the corpus of medical literature pertinent to an individual, and provision of a differential diagnosis.”
    Machines and empathy in medicine  
    Topol EJ
    www.thelancet.com Vol 402 October 21, 2023  
  • “Yet the challenge is that the machine promotion of empathy is actually pseudo-empathy. It is empathetic in appearance, but the AI cannot truly connect with the patient or share the experience. The LLM performs a task without understanding its meaning, parroting the text from the vast inputs of its training, and is one-dimensional because of its focus on words. By contrast, a physician can exude empathy in many non-verbal ways, including eye contact and holding hands. Another shortcoming is that current LLMs such as Bard, LLaMA, PaLM 2, and GPT-4, have not had specialised fine-tuning for medical inputs. Even though the ChatGPT compared with Reddit doctor study showed high quality and accurate responses, the questions informing the model were limited.”
    Machines and empathy in medicine  
    Topol EJ
    www.thelancet.com Vol 402 October 21, 2023 
  • “Thus, there is a danger of mistaken answers and advice from these models, as has been seen with erroneous chatbot responses to diet questions from people with eating disorders. As LLMs improve and are trained with high-quality medical inputs, this problem may improve but is unlikely to be eradicated. Nevertheless, the potential of AI coaching clinicians to be more compassionate and sensitive by review of their patient interactions could emerge as a vital educational tool in the future, not only for medical students but for all health professionals.”  
    Machines and empathy in medicine  
    Topol EJ
    www.thelancet.com Vol 402 October 21, 2023 
  • Background: Existing (artificial intelligence [AI]) tools in radiology are modeled without necessarily considering the expectations and experience of the end user—the radiologist. The literature is scarce on the tangible parameters that AI capabilities need to meet for radiologists to consider them useful tools.
    Objective: The purpose of this study is to explore radiologists' attitudes toward AI tools in pancreatic cancer imaging and to quantitatively assess their expectations of these tools.
    Conclusion: Radiologists are open to the idea of integrating AI-based tools and have high expectations regarding the performance of these tools. Consideration of radiologists' input is important to contextualize expectations and optimize clinical adoption of existing and future AI tools.
    Radiologists' Expectations of Artificial Intelligence in Pancreatic Cancer Imaging: How Good Is Good Enough?
    Chu LC, Ahmed T, Blanco A, Javed A, Weisberg EM, Kawamoto S, Hruban RH, Kinzler KW, Vogelstein B, Fishman EK.
    J Comput Assist Tomogr. 2023 Jul 28. doi: 10.1097/RCT.0000000000001503. Epub ahead of print.
  • Results: A total of 161 respondents completed the survey, yielding a response rate of 46.3% of the total 348 clicks on the survey link. The minimum acceptable sensitivity of an AI program for the detection of pancreatic cancer chosen by most respondents was either 90% or 95% at a specificity of 95%. The minimum size of pancreatic cancer that most respondents would find an AI useful at detecting was 5 mm. Respondents preferred AI tools that demonstrated greater sensitivity over those with greater specificity. Over half of respondents anticipated incorporating AI tools into their clinical practice within the next 5 years.
    Radiologists' Expectations of Artificial Intelligence in Pancreatic Cancer Imaging: How Good Is Good Enough?
    Chu LC, Ahmed T, Blanco A, Javed A, Weisberg EM, Kawamoto S, Hruban RH, Kinzler KW, Vogelstein B, Fishman EK.
    J Comput Assist Tomogr. 2023 Jul 28. doi: 10.1097/RCT.0000000000001503. Epub ahead of print.
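To make the survey's preferred operating point concrete, the short calculation below works out the expected confusion-matrix counts for a detector running at 90% sensitivity and 95% specificity; the cohort size and cancer prevalence are hypothetical illustration values, not figures from the study.

```python
# Worked example: expected case counts at the survey's minimum acceptable
# operating point (90% sensitivity, 95% specificity). Cohort size and
# prevalence are hypothetical, chosen only to illustrate the arithmetic.
sensitivity, specificity = 0.90, 0.95
n_patients, prevalence = 10_000, 0.01          # assume 1% of scans harbor a cancer

n_cancer = round(n_patients * prevalence)      # 100 true cancers
n_healthy = n_patients - n_cancer              # 9,900 cancer-free scans

true_pos = sensitivity * n_cancer              # 90 cancers flagged
false_neg = n_cancer - true_pos                # 10 cancers missed
false_pos = (1 - specificity) * n_healthy      # 495 false alarms
ppv = true_pos / (true_pos + false_pos)        # positive predictive value

print(f"TP={true_pos:.0f}  FN={false_neg:.0f}  FP={false_pos:.0f}  PPV={ppv:.1%}")
```

At low prevalence, even a 95% specific tool generates far more false positives than true positives, which is worth keeping in mind when interpreting the thresholds reported above.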
  • “In conclusion, this study demonstrates that radiologists are open to the idea of integrating AI-based tools as long as they meet high though probably attainable performance criteria. We believe that, based on continuing progress in the technical capabilities of AI as well as instrumentation, and the results of this survey, the clinical implementation of AI technology for the detection of pancreatic cancer is a worthy and feasible goal. Future studies should transition toward investigating the preliminary experiences of current radiology AI users to guide further development of AI and to encourage AI adoption among practices not currently using AI tools.”
    Radiologists' Expectations of Artificial Intelligence in Pancreatic Cancer Imaging: How Good Is Good Enough?
    Chu LC, Ahmed T, Blanco A, Javed A, Weisberg EM, Kawamoto S, Hruban RH, Kinzler KW, Vogelstein B, Fishman EK.
    J Comput Assist Tomogr. 2023 Jul 28. doi: 10.1097/RCT.0000000000001503. Epub ahead of print.
  • “The study highlights changes in job profiles of physicians and outlines demands for new categories of medical professionals considering AI-induced changes of work. Physicians should redefine their self-image and assume more responsibility in the age of AI-supported medicine. There is a need for the development of scenarios and concepts for future job profiles in the health professions as well as their education and training.”
    The medical profession transformed by artificial intelligence: Qualitative study
    Lina Mosch et al.
    Digital Health 2023  Volume 8: 1–13
  • “Specialized tasks currently performed by physicians in all areas of medicine would likely be taken over by AI, including bureaucratic tasks, clinical decision support, and research. However, the concern that physicians will be replaced by an AI system is unfounded, according to experts; AI systems today would be designed only for a specific use case and could not replace the human factor in the patient–physician relationship. Nevertheless, the job profile and professional role of physicians would be transformed as a result of new forms of human–AI collaboration and shifts to higher-value activities.”
    The medical profession transformed by artificial intelligence: Qualitative study
    Lina Mosch et al.
    Digital Health 2023  Volume 8: 1–13
  • “Due to the rapid progress in AI research and development, some observers have asked whether AI might replace physicians in the future. According to the World Health Organization (WHO), however, AI may not fully replace clinical decision-making, but it could improve decisions made by clinicians. Which medical tasks could be automated by AI systems, and which tasks should be performed by a physician—possibly assisted by AI?”
    The medical profession transformed by artificial intelligence: Qualitative study
    Lina Mosch et al.
    Digital Health 2023  Volume 8: 1–13
  • “AI can only be applied and trained if it has access to large representative data sets. Many important data formats are not yet accessible to AI systems, and there is a lack of representative, freely accessible, and annotated data sets that can be used as a basis for training AI systems. Experts call for the establishment of and adherence to quality standards in data collection and data documentation. Quality standards would also be needed for reporting the results of studies using medical AI systems to ensure transparent disclosure of their benefits and performance.”
    The medical profession transformed by artificial intelligence: Qualitative study
    Lina Mosch et al.
    Digital Health 2023  Volume 8: 1–13
  • “Although AI would perform specific tasks in predefined contexts, physicians would increasingly take on the role of generalists, according to the majority of experts (63%, 15/24); the experts believe the physicians would synthesize and contextualize information from multiple sources, which would lead to a more holistic approach to a disease or patient case. AI systems could be considered as a consultant—a further external opinion the physician should integrate into the decision-making process. Therefore, risk analysis will become more important. A physician would still weigh various possible hypotheses and make the decision in the end, being the “filter behind the AI.””
    The medical profession transformed by artificial intelligence: Qualitative study
    Lina Mosch et al.
    Digital Health 2023  Volume 8: 1–13
  • “Digitization and AI would challenge the medical profession and the skills it requires in entirely new ways. More and more knowledge and evidence is being accumulated in less and less time, which requires an adaptation of medical training concepts as well as new solutions for knowledge translation from research to practice and teaching, many experts stated (54%, 13/24). Moving away from the one-size-fits-all paradigm and toward precision medicine, physicians should learn to better deal with uncertainties in medical practice.”
    The medical profession transformed by artificial intelligence: Qualitative study
    Lina Mosch et al.
    Digital Health 2023  Volume 8: 1–13
  • “Some experts interviewed in this study (8%, 2/24) foresee a transformation in medical education. Much of the knowledge taught in medical school today could become worthless in the future, as AI can make knowledge available to everyone in a consolidated and processed form. Physicians and medical students will spend less time acquiring factual knowledge, because AI can make this available much faster, more comprehensively, and in a more evidence-based way.”
    The medical profession transformed by artificial intelligence: Qualitative study
    Lina Mosch et al.
    Digital Health 2023  Volume 8: 1–13
  • "Yet, the opportunities AI brings must be seized and managed by professionals trained in both AI and medicine. In addition, the medical profession should redefine its self-image, acquire new competencies related to AI, and take on more responsibility in the age of AI-enabled medicine. Ultimately, professional education will need to include AI in the core repertoire of skills for physicians. This study highlights the need to develop scenarios for the use of AI in clinical and research practice, as well as new workflow models and concepts for working in a team that not only includes an unprecedented diversity of professions and disciplines but also combines human and artificial intelligence.”
    The medical profession transformed by artificial intelligence: Qualitative study
    Lina Mosch et al.
    Digital Health 2023  Volume 8: 1–13
  • Rationale and Objectives: This study aimed to investigate radiologists' and radiographers' knowledge, perception, readiness, and challenges regarding Artificial Intelligence (AI) integration into radiology practice.
    Results: There was a significant lack of knowledge and appreciation of the integration of AI into radiology practice. Organizations are stepping toward building AI implementation strategies. The availability of appropriate training courses is the main challenge for both radiographers and radiologists.  
    Conclusion: The excitement of AI implementation into radiology practice was accompanied by a lack of knowledge and effort required to improve the user's appreciation of AI. The knowledge gap requires collaboration between educational institutes and professional bodies to develop structured training programs for radiologists and radiographers.  
    Assessment of the Willingness of Radiologists and Radiographers to Accept the Integration of Artificial Intelligence Into Radiology Practice  
    Mohamed M. Abuzaid et al.
    Acad Radiol 2022; 29:87–94 
  • “This study indicated that radiographers and radiologists lacked an understanding of AI fundamentals and the knowledge of some aspects of AI integration into radiology. One reason for this may be the absence of local education resources. This could be addressed through the design of local education resources by universities and continuous education centre in UAE as well as other international professional societies.”  
    Assessment of the Willingness of Radiologists and Radiographers to Accept the Integration of Artificial Intelligence Into Radiology Practice  
    Mohamed M. Abuzaid et al.
    Acad Radiol 2022; 29:87–94 
  • “I observe that the expectations from AI and radiologists are fundamentally different. The expectations of AI are based on a strong and justified mistrust about the way that AI makes decisions. Because AI decisions are not well understood, it is difficult to know how the algorithms will behave in new, unexpected situations. However, this mistrust is not mirrored in our expectations of human readers. Despite well-proven idiosyncrasies and biases in human decision-making, we take comfort from the assumption that others make decisions in a way as we do, and we trust our own decision-making. Despite poor ability to explain decision-making processes in humans, we accept explanations of decisions given by other humans. Because the goal of radiology is the most accurate radiologic interpretation, our expectations of radiologists and AI should be similar, and both should reflect a healthy mistrust of complicated and partially opaque decision processes undergoing in computer algorithms and human brains.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • “Because AI decisions are not well understood, it is difficult to know how the algorithms will behave in new, unexpected situations. However, this mistrust is not mirrored in our expectations of human readers. Despite well-proven idiosyncrasies and biases in human decision-making, we take comfort from the assumption that others make decisions in a way as we do, and we trust our own decision-making. Despite poor ability to explain decision-making processes in humans, we accept explanations of decisions given by other humans. Because the goal of radiology is the most accurate radiologic interpretation, our expectations of radiologists and AI should be similar, and both should reflect a healthy mistrust of complicated and partially opaque decision processes undergoing in computer algorithms and human brains.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • “I observe that the expectations from AI and radiologists are fundamentally different. The expectations of AI are based on a strong and justified mistrust about the way that AI makes decisions. Because AI decisions are not well understood, it is difficult to know how the algorithms will behave in new, unexpected situations. However, this mistrust is not mirrored in our expectations of human readers. Despite well-proven idiosyncrasies and biases in human decision-making, we take comfort from the assumption that others make decisions in a way as we do, and we trust our own decision-making. Despite poor ability to explain decision-making processes in humans, we accept explanations of decisions given by other humans.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • "There also may be a more important reason, beyond the time needed for development and validation, why AI algorithms are not yet a significant part of the clinical practice: Whether an algorithm will be implemented in a clinical practice depends on whether it serves the in- terests (often financial) of those who influence the decision of its implementation. This alignment of interests also affects the upstream development of the AI models. Those interests may be misaligned with the interests of patients. It is well-recognized that the incentives of many decision-makers in a health care system, such as insurers, hospitals, drug companies, or legislators, are not fully aligned or even may be highly misaligned with the interests of patients.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • Summary  
    The expectations of radiology artificial intelligence do not match expectations of radiologists in terms of performance and explainability.
    Key Points  
    • Expectations of radiology artificial intelligence (AI) will guide its implementation in clinical settings.  
    • The expectations of AI are based on a strong and justified mistrust about the way that AI makes decisions, but this mistrust is not mirrored in our expectations of human readers.  
    • Expectations of radiologists differ from those of AI, particularly in terms of performance and explainability.  
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • “Can a decision made by a radiologist be explained? The short answer is: not well. As in artificial neural networks, the decisions in a human brain are made through processing of input signals by a complicated network of interconnected neurons. We cannot perceive which neurons or systems of neurons fire at different times and, therefore, we have no mechanistic understanding of how individual decisions are made. Very little of this processing, at any level, rises to human consciousness. A complete mechanistic explanation would have to involve understanding the specific processes that took place when the individual decision was made, and such an explanation is not currently possible.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • “Explanations of visual interpretations offered by radiologists resort to the assumption that we share common concepts such as shape or brightness, or more complicated concepts used in radiology such as nodule echogenicity in US or mass margin in mammography. While this can be helpful, it may not reflect how a decision was actually made. Furthermore, significant experimental data on interreader variability demonstrate that the assumption of these concepts being shared across radiologists is only partially true at best.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • "Despite these significant limitations, explanations of ra- diology decisions offered by radiologists who make them can still have two important functions: (a) they can help educate radiologists in training; and (b) they can improve confidence in individual decisions by allowing others to come to the same conclusion. However, such explanations cannot guarantee that a radiologist will not make an error in unseen and unusual scenarios. Despite the lack of such guarantee for radiologists, explanations of such quality are often expected from AI. It appears that radiologists and AI systems, for slightly different reasons, are in the same boat. Neither can offer a high level of reassurance through explanations of their decisions.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • "While on the surface very important, their significance fades away when investigated closely. Judgment calls appear to be decisions that are made with insufficient data on the potential outcomes of our decisions. Intuition appears to be a justification for decisions/opinions where we struggle to access good arguments for it. Common sense is an intuition that is hoped to be shared by many. Decisions can be made by algorithmic processing of data and in the presence of poor information and some of such processing could be called “judgment calls,” or “common sense.” There is no need for “judgment calls,” “common sense,” or “intuition” in a sense that goes beyond such processing.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • "There is a fundamental disparity between the expectations for performance, established through thorough testing, for AI and for radiologists. Radiologists are rarely tested in clinical scenarios, with large numbers of cases, and an established reference standard. Moreover, some radiologists perform below the level expected from AI (ie, the level of an average radiologist), and this is widely accepted as interreader variability status quo. This is not to say that we should not test AI since we rarely test radiologists. We should test AI and we should do it rigorously. However, we should take a step back and stress that the question that needs to be answered is what confidence do we have that introducing AI to a radiologic practice will improve the lives of the patients? And this question should be determined based on the rigorous assessment of both AI and radiologists and not on the trust that we place in our fellow human beings.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • “On the one hand, we should, to a reasonable extent, mistrust AI to make correct decisions regardless of the setting. Therefore, it is crucial that we test the algorithms in real-world scenarios and on datasets that are diverse in terms of patient populations, scanner parameters, imaging protocols, technologists who acquire the images, and any other parameters that may reasonably affect the decision.
    On the other hand, putting an undue burden on the algorithms because we trust ourselves as human beings may not be a correct policy. The goal should be to implement a radiologic practice that provides the most benefit to the patients. This needs to be based on a reasonable mistrust toward the algorithm, but it should not be based on an unjustified trust toward our own human ability to perceive and make decisions.”
    Do We Expect More from Radiology AI than from Radiologists?  
    Mazurowski, MA
    Radiology: Artificial Intelligence 2021; 3(4):e200221 
  • “Corporate or other managed environments will reason that if AI can perform 50% of a radiologist’s tasks, then they do not need as many radiologists, perhaps more than half as many but certainly fewer than before AI. A rational firm, whether for-profit or not-for-profit, minimizes costs. Radiologists are expensive, inconsistent, and fallible. Reading too fast, reading at the end of a long shift, or reading under other adverse circumstances such as sleep deprivation increases errors. AI is unencumbered by these deficiencies. Thus, both revenue generation and cost reduction motives could reduce the employed radiologist workforce.”  
    Artificial Intelligence for Image Interpretation: Counterpoint— The Radiologist’s Incremental Foe
    Frank J. Lexa, Saurabh Jha
    AJR:217, September 2021 
  • “For the foreseeable future, artificial intelligence (AI) will be deployed in radiology and change how we work. Initially, radiologists will reap benefits from these emerging technologies. AI will improve efficiency by prioritizing cases and improving schedules and protocols. With further improvement, AI will emerge as a second reader, providing a safety net that reduces misses. Radiologists will be pleased with their new faceless apprentices. However, AI is not a static technology. Its very nature is to keep improving as it is fed more data and as the ground truth becomes better refined.”
    Artificial Intelligence for Image Interpretation: Counterpoint— The Radiologist’s Incremental Foe
    Frank J. Lexa, Saurabh Jha
    AJR:217, September 2021 
  • "As AI continues “helping” radiologists, its adoption will also increase. Adoption is iterative with positive reinforcement to the industry, making the technology better and cheaper, increasing its adoption, in turn further improving the technology, increasing adoption yet again, and so on. At some point, the threshold for economies of scale will be crossed, and the production of AI technologies will increase markedly, and their adoption will become near universal.”
    Artificial Intelligence for Image Interpretation: Counterpoint— The Radiologist’s Incremental Foe
    Frank J. Lexa, Saurabh Jha
    AJR:217, September 2021 
  • "Most stakeholders will embrace AI. The first adopters will set the stage for radiologic AI. The rising-practice models, including corporate radiology groups, hospital-employed radiologists, and teleradiology users, will likely be the earliest adopters. How they view AI will in large part depend on how they view radiologists, as faceless commodities or as individuals with unique skills. Independent radiology groups, which are declining in number, may strive to maintain a radiologist’s individuality, but this is not guaranteed, and they too may commoditize radiologists to maximize revenue.”
    Artificial Intelligence for Image Interpretation: Counterpoint— The Radiologist’s Incremental Foe
    Frank J. Lexa, Saurabh Jha
    AJR:217, September 2021 
  • "The most apparent benefit of using AI tools for image interpre- tation is to address the challenge of identifying true disease, while minimizing harms associated with false-positive results. Diagnostic errors often result from search error (not focusing on a finding with high-resolution foveal vision), recognition error (not focusing on a finding for a sufficient amount of time to recognize lesion characteristics), or decision-making error (actively dismissing a finding by incorrectly interpreting lesion characteristics). However, combining AI and radiologist assessment improves predictive accuracy compared with human interpretation alone and may represent a feasible solution in directly addressing these errors of interpretation. Improved diagnostic accuracy resulting from AI support may improve lesion detection, resulting in earlier diagnosis and increased treatment options.”
    Artificial Intelligence for Image Interpretation: Point—The Radiologist’s Potential Friend
    Randy C. Miles, Constance D. Lehman
    AJR:217, September 2021 
  • "Rapid growth of artificial intelligence (AI) in medical imaging has generated substantial interest in the medical imaging community over recent years. The application of AI tools for image interpretation enables precision medicine, which offers promise in improving patient outcomes and decreasing health care costs. Expanded use of this technology, while exciting and perhaps frightening, will undoubtedly redefine our value and future role as radiologists.”
    Artificial Intelligence for Image Interpretation: Point—The Radiologist’s Potential Friend
    Randy C. Miles, Constance D. Lehman
    AJR:217, September 2021 
  • “Nonetheless, as the field evolves to incorporate AI into clinical practice, the day-to-day working lives of radiologists will change. Although the initial questions centered around whether radiologists would be replaced by AI, the clear question now is how radiologists will implement AI in their practices to enhance and expand their impact in health care. To that end, it is critical that radiologists be intimately involved in guiding the rapid yet careful application of AI for image interpretation in support of access to high-quality affordable diagnostic services for the diverse range of patients in need of radiologists’ care.”
    Artificial Intelligence for Image Interpretation: Point—The Radiologist’s Potential Friend
    Randy C. Miles, Constance D. Lehman
    AJR:217, September 2021 
  • "Access to diagnostic imaging services remains a challenge worldwide. Two-thirds of the global population lack access to quality diagnostic imaging services, often because of a lack of medical equipment or shortages in physicians trained to interpret imaging examinations. The increasing dependence of the practice of medicine on medical imaging will inevitably lead to increased gaps in care between areas with and those without access to these services. Providing high-quality autonomous AI interpretation could benefit large numbers of patients in underserved regions, thereby reducing health disparities.”
    Artificial Intelligence for Image Interpretation: Point—The Radiologist’s Potential Friend
    Randy C. Miles, Constance D. Lehman
    AJR:217, September 2021 
  • "The consistent interpretations provided by AI, compared with subjective human interpretations, can decrease variability in care and thus potentially improve patient safety. AI-based detection can also support worklist management by identifying abnormalities such as free air, pulmonary embolus, or intracranial hemorrhage, which may decrease turnaround time for reporting critical findings. Finally, AI may aid a broader range of interpretive tasks beyond diagnosis, including lesion characterization (e.g., benign vs malignant, histologic subtype), prognostication, and prediction of treatment response or other future outcomes.”
    Artificial Intelligence for Image Interpretation: Point—The Radiologist’s Potential Friend
    Randy C. Miles, Constance D. Lehman
    AJR:217, September 2021 
  • "teleradiology, as well as Artificial Intelligence, are already involved and will increasingly intervene in the mana- gement of abdominal emergencies. Indeed, from before the patient’s arrival in the ER through his follow-up after discharge, all stages of treatment including CT diagnosis and surgical planning are, or will be, impacted by these technologies: triage by severity of abdominal pain based on data from emergency ambulance calls, assistance for ER physicians in therapeutic decision-making, assistance for radiologists in the interpretation of images, assistance for surgeons in planning and monitoring the intervention, assistance in the post-operative diagnosis of recurrence.”
    Management of abdominal emergencies in adults using telemedicine and artificial intelligence  
    G. Gorincour et al.
    Journal of Visceral Surgery 2021 (in press) 
  • "The key to physicians’ understanding and appropriation of the vast array of new and evolving tools for the benefit of patients is their reasoned and medically-guided use. AI can individually and collectively augment physician performance. But physicians must continue to favor direct human interactions with their patients and their colleagues in order to achieve collegial and col- lective implementation of the preventive, personalized, predictive medicine, etc. that many have foreseen and promised. Whether it is right or wrong, valuable or futile, only future will tell. However, it is obvious today that the doctor/surgeon/radiologist, ‘‘augmented’’ by mod- ern technological tools, has a bright future ahead and that optimization of patient care will continue.”
    Management of abdominal emergencies in adults using telemedicine and artificial intelligence  
    G. Gorincour et al.
    Journal of Visceral Surgery 2021 (in press) 
  • Purpose: The purpose of this study is to evaluate diagnostic performance of a commercially available radiomics research prototype vs. an in-house radiomics software in the binary classification of CT images from patients with pancreatic ductal adenocarcinoma (PDAC) vs. healthy controls.
    Results: When 40 radiomics features were used in the random forest classification, in-house software achieved superior sensitivity (1.00) and accuracy (0.992) compared to the commercially available research prototype (sensitivity = 0.950, accuracy = 0.968). When the number of features was reduced to five features, diagnostic performance of the in-house software decreased to sensitivity (0.950), specificity (0.923), and accuracy (0.936). Diagnostic performance of the commercially available research prototype was unchanged.
    Conclusion: Commercially available and in-house radiomics software achieve similar diagnostic performance, which may lower the barrier of entry for radiomics research and allow more clinician-scientists to perform radiomics research.
    Diagnostic performance of commercially available vs. in‐house radiomics software in classification of CT images from patients with pancreatic ductal adenocarcinoma vs. healthy controls
    Linda C. Chu · Berkan Solmaz · Seyoun Park · Satomi Kawamoto · Alan L. Yuille · Ralph H. Hruban · Elliot K. Fishman
    Abdominal Radiology (2020) 45:2469–2475
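The 40-feature versus 5-feature comparison reported above follows a common radiomics workflow: extract features, optionally reduce them, and classify with a random forest. The sketch below illustrates that kind of experiment under stated assumptions: scikit-learn is used, the features are synthetic stand-ins rather than real radiomics values, and nothing here reproduces the paper's actual pipeline or results.

```python
# Sketch of a radiomics-style binary classification (e.g., PDAC vs. control):
# a random forest trained on a full 40-feature set vs. a 5-feature subset
# chosen by univariate selection. All data here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for 40 radiomics features per case (label 1 = tumor)
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)

full_model = RandomForestClassifier(n_estimators=200, random_state=0)

# Feature selection runs inside the pipeline so each CV fold selects its own
# 5 features, avoiding information leakage from the held-out folds.
reduced_model = make_pipeline(SelectKBest(f_classif, k=5),
                              RandomForestClassifier(n_estimators=200, random_state=0))

acc_full = cross_val_score(full_model, X, y, cv=5, scoring="accuracy").mean()
acc_reduced = cross_val_score(reduced_model, X, y, cv=5, scoring="accuracy").mean()
print(f"40 features: {acc_full:.3f} accuracy; 5 features: {acc_reduced:.3f} accuracy")
```

Whether the reduced model loses accuracy, as the in-house software did in the study, depends on how much of the discriminative signal is concentrated in the selected features.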
  • “This study showed that a commercially available radiomics software may be able to achieve similar diagnostic performance as an in-house radiomics software. The results obtained from one radiomics software may be transferrable to another system. Availability of commercial radiomics software may lower the barrier of entry for radiomics research and allow more researchers to engage in this exciting area of research.”
    Diagnostic performance of commercially available vs. in‐house radiomics software in classification of CT images from patients with pancreatic ductal adenocarcinoma vs. healthy controls
    Linda C. Chu · Berkan Solmaz · Seyoun Park · Satomi Kawamoto · Alan L. Yuille · Ralph H. Hruban · Elliot K. Fishman
    Abdominal Radiology (2020) 45:2469–2475
  • “However, there is also the potential for harm if these artificial images infiltrate our health care system by hackers with malicious intent. As proof of principle, Mirsky et al [3] showed that they were able to tamper with CT scans and artificially inject or remove lung cancers on the images. When the radiologists were blinded to the attack, this hack had a 99.2% success rate for cancer injection and a 95.8% success rate for cancer removal. Even when the radiologists were warned about the attack, the success of cancer injection decreased to 70%, but the cancer removal success rate remained high at 90%. This illustrates the sophistication and realistic appearance of such artificial images. These hacks can be targeted against specific patients or can be used as a more general attack on our radiologic data.”
    The Potential Dangers of Artificial Intelligence for Radiology and Radiologists
    Linda C. Chu, MD, Anima Anandkumar, PhD, Hoo Chang Shin, PhD, Elliot K. Fishman, MD
    JACR (in press)
  • “A generative adversarial network (GAN) is a recently developed deep-learning model aimed at creating new images. It simultaneously trains a generator and a discriminator network, which serves to generate artificial images and to discriminate real from artificial images, respectively. We have recently described how GANs can produce artificial images of people and audio content that fool the recipient into believing that they are authentic. As applied to medical imaging, GANs can generate synthetic images that can alter lesion size, location, and transpose abnormalities onto normal examinations. GANs have the potential to improve image quality, reduce radiation dose, augment data for training algorithms, and perform automated image segmentation.”
    The Potential Dangers of Artificial Intelligence for Radiology and Radiologists
    Linda C. Chu, MD, Anima Anandkumar, PhD, Hoo Chang Shin, PhD, Elliot K. Fishman, MD
    JACR (in press)
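The generator/discriminator training loop described above can be illustrated with a minimal sketch. PyTorch is assumed, the networks are tiny fully connected stand-ins rather than the convolutional models used in practice, and the dimensions and hyperparameters are placeholders, not those of any published medical GAN.

```python
# Minimal GAN sketch: a generator maps random noise to a synthetic image
# patch while a discriminator learns to score patches as real vs. generated.
# Sizes, learning rates, and data are illustrative placeholders only.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 100, 64 * 64   # hypothetical noise and patch sizes

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),        # synthetic patch in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability the patch is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    """One adversarial update: discriminator first, then generator."""
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator: push real patches toward 1 and generated patches toward 0
    fakes = generator(torch.randn(n, LATENT_DIM)).detach()
    loss_d = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: produce patches the discriminator scores as real
    fakes = generator(torch.randn(n, LATENT_DIM))
    loss_g = bce(discriminator(fakes), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

It is this adversarial pressure, with each network improving against the other, that lets GANs produce synthetic findings realistic enough to raise the security concerns described in the adjacent excerpts.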
  • "However, there are several ways to mitigate potential AI-based hacks and attacks. These include clear security guide- lines and protocols that are uniform across the globe. As deep-fake technology gets more sophisticated, there is emerging research on AI-driven defense strategies. One example features the training of an AI to detect artificial images by image artifacts induced by GAN. However, AI-driven defense mechanisms have a long way to catch up, as seen in the related problem of defense against adversarial attacks. Recognizing these challenges, the Defense Advanced Research Projects Agency has launched the Media Forensics program to research against deep fakes. Hence, for now, the best defense against deep fakes is based on traditional cybersecurity best practices: secure all stages in the pipeline and enable strong encryption and monitoring tools.”
    The Potential Dangers of Artificial Intelligence for Radiology and Radiologists
    Linda C. Chu, MD, Anima Anandkumar, PhD, Hoo Chang Shin, PhD, Elliot K. Fishman, MD
    JACR (in press)
  • “We find that good quality AI-based support of clinical decision-making improves diagnostic accuracy over that of either AI or physicians alone, and that the least experienced clinicians gain the most from AI-based support. We further find that AI-based multiclass probabilities outperformed content-based image retrieval (CBIR) representations of AI in the mobile technology environment, and AI-based support had utility in simulations of second opinions and of telemedicine triage.”
    Human–computer collaboration for skin cancer recognition
    Philipp Tschandl et al.
    Nat Med (2020). https://doi.org/10.1038/s41591-020-0942-0
  • "We find that good quality AI-based support of clinical decision-making improves diagnostic accuracy over that of either AI or physicians alone, and that the least experienced clinicians gain the most from AI-based support."
    Human–computer collaboration for skin cancer recognition
    Philipp Tschandl et al.
    Nat Med (2020). https://doi.org/10.1038/s41591-020-0942-0
  • “In addition to demonstrating the potential benefits associated with good quality AI in the hands of non-expert clinicians, we find that faulty AI can mislead the entire spectrum of clinicians, including experts. Lastly, we show that insights derived from AI class-activation maps can inform improvements in human diagnosis. Together, our approach and findings offer a framework for future studies across the spectrum of image-based diagnostics to improve human–computer collaboration in clinical practice.”
    Human–computer collaboration for skin cancer recognition
    Philipp Tschandl et al.
    Nat Med (2020). https://doi.org/10.1038/s41591-020-0942-0
  • "This study examines human–computer collaboration from multiple angles and under varying conditions. We used the domain of skin cancer recognition for simplicity, but our study could serve as a framework for similar research in image-based diagnostic medicine. In contrast to the current narrative, our findings sug- gest that the primary focus should shift from human–computer competition to human–computer collaboration. From a regulatory perspective, the performance of AI-based systems should be tested under real-world conditions in the hands of the intended users and not as stand-alone devices. Only then can we expect to rationally adopt and improve AI-based decision support and to accelerate its evolution.”
    Human–computer collaboration for skin cancer recognition
    Philipp Tschandl et al.
    Nat Med (2020). https://doi.org/10.1038/s41591-020-0942-0

  • Background: IBM Watson for Oncology (WFO) is a cognitive computing system helping physicians quickly identify key information in a patient’s medical record, surface relevant evidence, and explore treatment options. This study assessed the possibility of using WFO for clinical treatment in lung cancer patients.
    Methods: We evaluated the level of agreement between WFO and multidisciplinary team (MDT) for lung cancer. From January to December 2018, newly diagnosed lung cancer cases in Chonnam National University Hwasun Hospital were retrospectively examined using WFO version 18.4 according to four treatment categories (surgery, radiotherapy, chemoradiotherapy, and palliative care). Treatment recommendations were considered concordant if the MDT recommendations were designated ‘recommended’ by WFO. Concordance between MDT and WFO was analyzed by Cohen’s kappa value.
    Artificial intelligence and lung cancer treatment decision: agreement with recommendation of multidisciplinary tumor board
    Min-Seok Kim et al.
    Transl Lung Cancer Res 2020;9(3):507-514
  • Results: In total, 405 (male 340, female 65) cases with different histology (adenocarcinoma 157, squamous cell carcinoma 132, small cell carcinoma 94, others 22 cases) were enrolled. Concordance between MDT and WFO occurred in 92.4% (k=0.881, P<0.001) of all cases, and concordance differed according to clinical stages. The strength of agreement was very good in stage IV non-small cell lung carcinoma (NSCLC) (100%, k=1.000) and extensive disease small cell lung carcinoma (SCLC) (100%, k=1.000). In stage I NSCLC, the agreement strength was good (92.4%, k=0.855). The concordance was moderate in stage III NSCLC (80.8%, k=0.622) and relatively low in stage II NSCLC (83.3%, k=0.556) and limited disease SCLC (84.6%, k=0.435). There were discordant cases in surgery (7/57, 12.3%), radiotherapy (2/12, 16.7%), and chemoradiotherapy (15/129, 11.6%), but no discordance in metastatic disease patients.
    Conclusions: Treatment recommendations made by WFO and MDT were highly concordant for lung cancer cases especially in metastatic stage. However, WFO was just an assisting tool in stage I–III NSCLC and limited disease SCLC; so, patient-doctor relationship and shared decision making may be more important in this stage.
    Artificial intelligence and lung cancer treatment decision: agreement with recommendation of multidisciplinary tumor board
    Min-Seok Kim et al.
    Transl Lung Cancer Res 2020;9(3):507-514
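The concordance analysis summarized above rests on two quantities: raw percent agreement and Cohen's kappa, which discounts agreement expected by chance. The sketch below computes both for two raters over the four treatment categories; the case counts are invented for illustration and are not the study's data.

```python
# Percent agreement and Cohen's kappa for two raters labeling the same cases.
# The MDT/WFO label lists below are hypothetical, not the published results.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    chance = sum((freq_a[c] / n) * (freq_b[c] / n)
                 for c in set(rater_a) | set(rater_b))
    return (observed - chance) / (1 - chance)

# Hypothetical recommendations for 380 cases across the four categories
mdt = (["surgery"] * 50 + ["chemoradiotherapy"] * 120 +
       ["palliative care"] * 200 + ["radiotherapy"] * 10)
wfo = (["surgery"] * 45 + ["chemoradiotherapy"] * 125 +
       ["palliative care"] * 200 + ["radiotherapy"] * 10)

agreement = sum(a == b for a, b in zip(mdt, wfo)) / len(mdt)
print(f"agreement = {agreement:.3f}, kappa = {cohens_kappa(mdt, wfo):.3f}")
```

Because kappa corrects for the agreement two raters would reach by chance alone, it can sit noticeably below percent agreement when one category dominates, which is why studies of this kind report both numbers.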
  • Methods: We evaluated the level of agreement between WFO and multidisciplinary team (MDT) for lung cancer. From January to December 2018, newly diagnosed lung cancer cases in Chonnam National University Hwasun Hospital were retrospectively examined using WFO version 18.4 according to four treatment categories (surgery, radiotherapy, chemoradiotherapy, and palliative care). Treatment recommendations were considered concordant if the MDT recommendations were designated ‘recommended’ by WFO. Concordance between MDT and WFO was analyzed by Cohen’s kappa value.
    Conclusions: Treatment recommendations made by WFO and MDT were highly concordant for lung cancer cases especially in metastatic stage. However, WFO was just an assisting tool in stage I–III NSCLC and limited disease SCLC; so, patient-doctor relationship and shared decision making may be more important in this stage.
    Artificial intelligence and lung cancer treatment decision: agreement with recommendation of multidisciplinary tumor board
    Min-Seok Kim et al.
    Transl Lung Cancer Res 2020;9(3):507-514
  • “In conclusion, treatment decisions made by WFO exhibited a high degree of agreement with those of the MDT tumor board, and the concordance varied by stage. AI-based CDSS is expected to play an assistive role, particularly in the metastatic lung cancer stage with less complex treatment options. However, patient-doctor relationships and shared decision making may be more important in non-metastatic lung cancer because of the complexity to reach at an appropriate decision. Further study is warranted to overcome this gray area for current machine learning algorithms.”
    Artificial intelligence and lung cancer treatment decision: agreement with recommendation of multidisciplinary tumor board
    Min-Seok Kim et al.
    Transl Lung Cancer Res 2020;9(3):507-514
  • Objective — To systematically examine the design, reporting standards, risk of bias, and claims of studies comparing the performance of diagnostic deep learning algorithms for medical imaging with that of expert clinicians.
    Conclusions — Few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised trials are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small. Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions.
    Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies
    Myura Nagendran et al.
    BMJ 2020;368:m689 doi: 10.1136/bmj.m689 (Published 25 March 2020)
  • “Deep learning AI is an innovative and fast moving field with the potential to improve clinical outcomes. Financial investment is pouring in, global media coverage is widespread, and in some cases algorithms are already at marketing and public adoption stage. However, at present, many arguably exaggerated claims exist about equivalence with or superiority over clinicians, which presents a risk for patient safety and population health at the societal level, with AI algorithms applied in some cases to millions of patients.”
    Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies
    Myura Nagendran et al.
    BMJ 2020;368:m689 doi: 10.1136/bmj.m689 (Published 25 March 2020)
  • "Overpromising language could mean that some studies might inadvertently mislead the media and the public, and potentially lead to the provision of inappropriate care that does not align with patients’ best interests. The development of a higher quality and more transparently reported evidence base moving forward will help to avoid hype, diminish research waste, and protect patients.”
    Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies
    Myura Nagendran et al.
    BMJ 2020;368:m689 doi: 10.1136/bmj.m689 (Published 25 March 2020)
  • “Radiologists show a rather positive attitude towards AI to become more efficient and precise, but it does not seem to make them extremely confident about their own future. Medical students also advocate the use of AI in radiology but seem to be far more pessimistic regarding danger AI represents to the profession of the diagnostic radiologist.”
    A survey on the future of radiology among radiologists, medical students and surgeons: Students and surgeons tend to be more skeptical about artificial intelligence and radiologists may fear that other disciplines take over
    Jasper van Hoek et al.
    European Journal of Radiology 121 (2019) 108742
  • “This is also reflected in the fact that a large proportion of students answered that AI is a reason not to choose radiology as a specialty. This supposed fear might originate from a lack of information and knowledge. Following the assessment of most radiological publications – in our review of them – AI will not be a threat but rather a welcome addition to the radiological workflow. One must say that the results from our study might be worrisome. Students, and especially the best students, might not choose to go into radiology.”
    A survey on the future of radiology among radiologists, medical students and surgeons: Students and surgeons tend to be more skeptical about artificial intelligence and radiologists may fear that other disciplines take over
    Jasper van Hoek et al.
    European Journal of Radiology 121 (2019) 108742
  • AI and Surgical Decision Making
  • Observations  Surgical decision-making is dominated by hypothetical-deductive reasoning, individual judgment, and heuristics. These factors can lead to bias, error, and preventable harm. Traditional predictive analytics and clinical decision-support systems are intended to augment surgical decision-making, but their clinical utility is compromised by time-consuming manual data management and suboptimal accuracy. These challenges can be overcome by automated artificial intelligence models fed by livestreaming electronic health record data with mobile device outputs. This approach would require data standardization, advances in model interpretability, careful implementation and monitoring, attention to ethical challenges involving algorithm bias and accountability for errors, and preservation of bedside assessment and human intuition in the decision-making process.
    Conclusions and Relevance  Integration of artificial intelligence with surgical decision-making has the potential to transform care by augmenting the decision to operate, informed consent process, identification and mitigation of modifiable risk factors, decisions regarding postoperative management, and shared decisions regarding resource use.
    Artificial Intelligence and Surgical Decision-Making
    Tyler J. Loftus, MD; Patrick J. Tighe, MD, MS; Amanda C. Filiberto, MD; et al
    JAMA Surg. Published online December 11, 2019. doi:https://doi.org/10.1001/jamasurg.2019.4917
  • Conclusions and Relevance  Integration of artificial intelligence with surgical decision-making has the potential to transform care by augmenting the decision to operate, informed consent process, identification and mitigation of modifiable risk factors, decisions regarding postoperative management, and shared decisions regarding resource use.
    Artificial Intelligence and Surgical Decision-Making
    Tyler J. Loftus, MD; Patrick J. Tighe, MD, MS; Amanda C. Filiberto, MD; et al
    JAMA Surg. Published online December 11, 2019. doi:https://doi.org/10.1001/jamasurg.2019.4917
  • Even patients with substantial expertise in science or particular medical problems still rely on physicians during times of stress and uncertainty, and need them to perform procedures, interpret diagnostic tests, and prescribe medications. In these situations, reciprocal trust is central to the functioning of a health system and leads to higher treatment adherence, improvements in self-reported health, and better patient experience. So the question is: as technology continues to change relationships between patients and physicians, how can patient-physician trust be maintained or even improved?
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Prior work has examined the accuracy of AI, potential for biases, and lack of explainability (“black box”), all of which may affect physicians’ and patients’ trust in health care AI, as well as the potential for AI to replace physicians. However, in settings for which care will still be provided by a physician, whether and how AI will affect trust between physicians and patients has yet to be addressed. The potential effects of AI on trust between physicians and patients should be explicitly designed and planned for.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “When considering the implications of health care AI on trust, a broad range of health care AI applications need to be considered, including (1) use of health care AI by physicians and systems, such as for clinical decision support and system strengthening, physician assessment and training, quality improvement, clinical documentation, and nonclinical tasks, such as scheduling and notifications; (2) use of health care AI by patients including triage, diagnosis, and self-management; and (3) data for health care AI involving the routine use of patient data to develop, validate, and fine-tune health care AI as well as to personalize the output of health care AI.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “Each of these applications has the potential to enable and disable the 3 components of trust: competency, motive, and transparency.”
    Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. 
    Nundy S, Montgomery T, Wachter RM.
    JAMA. Published online July 15, 2019. doi:10.1001/jama.2018.20563
  • “To be clear, I welcome general artificial intelligence with open arms, because it will generate unprecedented prosperity for the human race just as automation has for centuries. However, it is counterproductive to prematurely announce its arrival. As radiologists, it behooves us to educate ourselves so that we can cut through the hype and harness the very real power of deep learning as it exists today, even with its substantial limitations. To channel Mark Twain, the reports of radiology’s demise are greatly exaggerated.”
    Why Radiologists Have Nothing to Fear From Deep Learning
    Alex Bratt
    JACR 2019 (in press)
  • “Even when sufficient progress is made to overcome the aforementioned shortcomings, there is no reason to think that radiologists are any more likely to be displaced than artists, journalists, or CEOs, because breaking the barriers of long-term dependencies and abstract reasoning is likely to enable sweeping automation in these fields as well.”
    Why Radiologists Have Nothing to Fear From Deep Learning
    Alex Bratt
    JACR 2019 (in press)
  • Through the application of AI, information-intensive domains such as marketing, health care, financial services, education, and professional services could become simultaneously more valuable and less expensive to society. Business drudgery in every industry and function—overseeing routine transactions, repeatedly answering the same questions, and extracting data from endless documents—could become the province of machines, freeing up human workers to be more productive and creative. Cognitive technologies are also a catalyst for making other data-intensive technologies succeed, including autonomous vehicles, the Internet of Things, and mobile and multichannel consumer technologies.
  • Cognitive insight.
    The second most common type of project in our study (38% of the total) used algorithms to detect patterns in vast volumes of data and interpret their meaning. Think of it as “analytics on steroids.” These machine-learning applications are being used to:
    - predict what a particular customer is likely to buy;
    - identify credit fraud in real time and detect insurance claims fraud;
    - analyze warranty data to identify safety or quality problems in automobiles and other manufactured products;
    - automate personalized targeting of digital ads; and
    - provide insurers with more-accurate and detailed actuarial modeling.
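As a rough illustration of the “analytics on steroids” idea above (detecting unusual patterns in large volumes of transactions, as in fraud flagging), the following is a minimal sketch only; the synthetic transaction features, the contamination rate, and the use of scikit-learn are illustrative assumptions, not any vendor's production system.

```python
# Minimal sketch of "cognitive insight"-style pattern detection: flagging
# anomalous transactions with an isolation forest. Synthetic data and the
# contamination setting are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 1], scale=[20, 0.5], size=(1000, 2))   # [amount, txns/hour]
fraud = rng.normal(loc=[900, 8], scale=[100, 2], size=(10, 2))      # rare, extreme behavior
transactions = np.vstack([normal, fraud])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)                                 # -1 = anomalous, 1 = normal
print("flagged:", int((flags == -1).sum()), "of", len(transactions))
```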
  • AI in Radiology: The Bottom Line
    - AI will put Radiologists out of business
    - AI is all hype and will soon fade like many fads
    - The reality is that AI will change all aspects of Radiology but may be our savior rather than the grim reaper
  • Reality: AI is already in our patients’ homes (and in yours)
    - Voice-enabled assistants that use AI have entered the homes of many patients (Amazon Alexa, Google Home)
    -- Connectivity to our patients with pre-study or post-study information
    -- Can help reduce readmissions or unnecessary ER visits by answering patients’ questions
  • Reality: AI can eliminate needless costs
    - Eliminate positions in customer service, billing and administration
    - Eliminate significant numbers of staff in scheduling or call centers while improving the patient experience. Think Uber, dinner reservations, or even airline reservations
  • Reality: Machine Learning can decrease medical error
    - Can AI be the ultimate second reader?
    - Clinical applications
    -- CT
    -- MR
    -- Plain Radiographs
    -- Ultrasound
    -- Pathology
  • “Second, machine learning will displace much of the work of radiologists and anatomical pathologists. These physicians focus largely on interpreting digitized images, which can easily be fed directly to algorithms instead. Massive imaging data sets, combined with recent advances in computer vision, will drive rapid improvements in performance, and machine accuracy will soon exceed that of humans. Indeed, radiology is already partway there: algorithms can replace a second radiologist reading mammograms and will soon exceed human accuracy.”
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med. 2016 Sep 29;375(13)
  • “The patient-safety movement will increasingly advocate the use of algorithms over humans — after all, algorithms need no sleep, and their vigilance is the same at 2 a.m. as at 9 a.m. Algorithms will also monitor and interpret streaming physiological data, replacing aspects of anesthesiology and critical care. The time scale for these disruptions is years, not decades.”
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med. 2016 Sep 29;375(13)
  • “Machine learning will become an indispensable tool for clinicians seeking to truly understand their patients. As patients’ conditions and medical technologies become more complex, the role of machine learning will grow, and clinical medicine will be challenged to grow with it.”
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med. 2016 Sep 29;375(13)
  • “As in other industries, this challenge will create winners and losers in medicine. But we are optimistic that patients, whose lives and medical histories shape the algorithms, will emerge as the biggest winners as machine learning transforms clinical medicine.”
    Predicting the Future — Big Data, Machine Learning, and Clinical Medicine
    Obermeyer Z, Emanuel EJ
    N Engl J Med. 2016 Sep 29;375(13)
  • “For example, a radiologist typically views 4000 images in a CT scan of multiple body parts (“pan scan”) in patients with multiple trauma. The abundance of data has changed how radiologists interpret images; from pattern recognition, with clinical context, to searching for needles in haystacks; from inference to detection. The radiologist, once a maestro with a chest radiograph, is now often visually fatigued searching for an occult fracture in a pan scan.”
    Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists
    Jha S, Topol EJ
    JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Radiologists should identify cognitively simple tasks that could be addressed by artificial intelligence, such as screening for lung cancer on CT. This involves detecting, measuring, and characterizing a lung nodule, the management of which is standardized. A radiology residency or a medical degree is not needed to detect lung nodules.”
    Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists
    Jha S, Topol EJ
    JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “Because pathology and radiology have a similar past and a common destiny, perhaps these specialties should be merged into a single entity, the “information specialist,” whose responsibility will not be so much to extract information from images and histology but to manage the information extracted by artificial intelligence in the clinical context of the patient.”
    Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists
    Jha S, Topol EJ
    JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “The information specialist would interpret the important data, advise on the added value of another diagnostic test, such as the need for additional imaging, anatomical pathology, or a laboratory test, and integrate information to guide clinicians. Radiologists and pathologists will still be the physician’s physician.”
    Adapting to Artificial Intelligence: Radiologists and Pathologists as Information Specialists
    Jha S, Topol EJ
    JAMA. Published online November 29, 2016. doi:10.1001/jama.2016.17438
  • “By virtue of its information technology-oriented infrastructure, the specialty of radiology is uniquely positioned to be at the forefront of efforts to promote data sharing across the healthcare enterprise, including particularly image sharing. The potential benefits of image sharing for clinical, research, and educational applications in radiology are immense. In this work, our group—the Association of University Radiologists (AUR) Radiology Research Alliance Task Force on Image Sharing—reviews the benefits of implementing image sharing capability, introduces current image sharing platforms and details their unique requirements, and presents emerging platforms that may see greater adoption in the future. By understanding this complex ecosystem of image sharing solutions, radiologists can become important advocates for the successful implementation of these powerful image sharing resources.”
    Image Sharing in Radiology—A Primer
    Chatterjee AR et al.
    Acad Radiol 2017; 24:286–294
  • “Cloud-based image sharing platforms based on interoperability standards such as the IHE-XDS-I profile are currently the most widely used method for sharing of clinical radiological images and will likely continue to grow in the coming years. Conversely, no single image sharing platform has emerged as a clear leader for research and educational applications. Radiologists, clinicians, investigators, technologists, educators, administrators, and patients all stand to benefit from medical image sharing. With their continued support, more widespread adoption of image sharing infrastructure will assuredly improve the standard of clinical care, research, and education in modern radiology.”
    Image Sharing in Radiology—A Primer
    Chatterjee AR et al.
    Acad Radiol 2017; 24:286–294
  • “In summary, radiologists will not be replaced by machines. Radiologists of the future will be essential data scientists of medicine. We will leverage clinical data science and ML to diagnose and treat patients better, faster, and more efficiently. Although this new clinical data science milieu will undoubtedly alter radiology practice, if performed correctly, it will empower radiologists to continue to provide better actionable recommendations on the basis of new insights from the medical images and other relevant data.”
    Big Data and Machine Learning—Strategies for Driving This Bus: A Summary of the 2016 Intersociety Summer Conference
    Kruskal JB et al.
    JACR (in press)
  • “Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records.”
    Predicting healthcare trajectories from medical records: A deep learning approach.
    Pham T et al.
    J Biomed Inform. 2017 May;69:218-229.
  • “Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk.”
    Predicting healthcare trajectories from medical records: A deep learning approach.
    Pham T et al.
    J Biomed Inform. 2017 May;69:218-229.
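To make the quoted design concrete at a very high level: an LSTM can read a patient's sequence of embedded care episodes together with the elapsed time between visits and output a risk estimate. The sketch below is a generic, hypothetical PyTorch illustration of that idea, not the authors' DeepCare implementation; the class name, the single concatenated time feature, and the toy inputs are all assumptions.

```python
# Minimal sketch (not the authors' DeepCare code): an LSTM reads embedded care
# episodes plus the elapsed time between visits and predicts a future outcome
# (e.g., readmission risk). Names and the time feature are illustrative.
import torch
import torch.nn as nn

class EpisodeLSTM(nn.Module):
    def __init__(self, n_codes, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_codes, embed_dim)                     # code ids -> vectors
        self.lstm = nn.LSTM(embed_dim + 1, hidden_dim, batch_first=True)  # +1 for elapsed-time feature
        self.head = nn.Linear(hidden_dim, 1)                              # risk score for the outcome

    def forward(self, codes, delta_t):
        # codes:   (batch, seq_len) integer code ids, one representative code per episode
        # delta_t: (batch, seq_len) days since the previous episode (0 for the first)
        x = self.embed(codes)
        x = torch.cat([x, delta_t.unsqueeze(-1)], dim=-1)  # expose visit irregularity to the model
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))             # probability of the outcome

model = EpisodeLSTM(n_codes=5000)
codes = torch.randint(0, 5000, (2, 10))   # two toy patients, ten episodes each
delta_t = torch.rand(2, 10) * 90          # toy inter-visit gaps in days
risk = model(codes, delta_t)              # shape (2, 1)
```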
  • “The missing piece in the dialectic around artificial intelligence and machine learning in health care is understanding the key step of separating prediction from action and recommendation. Such separation of prediction from action and recommendation requires a change in how clinicians think about using models developed using machine learning. In 2001, the statistician Breiman suggested the need to move away from the culture of assuming that models that are not causal and cannot explain the underlying process are useless. Instead, clinicians should seek a partnership in which the machine predicts (at a demonstrably higher accuracy), and the human explains and decides on action.”
    What This Computer Needs Is a Physician: Humanism and Artificial Intelligence
    Abraham Verghese, MD; Nigam H. Shah, MBBS, PhD; Robert A. Harrington, MD
    JAMA (in press) doi:10.1001/jama.2017.19198
  • “The 2 cultures—computer and the physician—must work together. For example, clinicians are biased toward optimistic prediction, often overestimating life expectancy by a factor of 5, while predictive models trained from vast amounts of data do better; using these well-calibrated probability estimates of an outcome, clinicians can then act appropriately for patients at the highest risk. The lead time a predictive model can offer to allow for an alternative action matters a great deal. Well-calibrated levels of risk for each outcome, and the timely execution of an alternative action, are needed for a model to be useful.”
    What This Computer Needs Is a Physician: Humanism and Artificial Intelligence
    Abraham Verghese, MD; Nigam H. Shah, MBBS, PhD; Robert A. Harrington, MD
    JAMA (in press) doi:10.1001/jama.2017.19198
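The stress on “well-calibrated” risk estimates can be illustrated directly: predicted probabilities should track observed outcome frequencies, and calibration can be checked and improved with standard tools. A minimal, hypothetical scikit-learn sketch with synthetic data standing in for any real clinical dataset:

```python
# Minimal sketch of probability calibration with scikit-learn; the synthetic
# data and model choice are illustrative assumptions, not a clinical pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV, calibration_curve

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

raw = RandomForestClassifier(random_state=0).fit(X_train, y_train)
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(random_state=0), method="isotonic", cv=5
).fit(X_train, y_train)

# Compare mean predicted probability with observed outcome frequency per bin.
for name, clf in [("raw", raw), ("calibrated", calibrated)]:
    frac_pos, mean_pred = calibration_curve(
        y_test, clf.predict_proba(X_test)[:, 1], n_bins=10
    )
    print(name, list(zip(mean_pred.round(2), frac_pos.round(2))))
```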
  • “Better diagnosis, and diagnostic algorithms providing more accurate differential diagnoses, might reshape the traditional CPC (clinical problem solving) exercise, just as the development of imaging modalities and sophisticated laboratory testing made the autopsy less relevant.”
    What This Computer Needs Is a Physician: Humanism and Artificial Intelligence
    Abraham Verghese, MD; Nigam H. Shah, MBBS, PhD; Robert A. Harrington, MD
    JAMA (in press) doi:10.1001/jama.2017.19198
  • “Human experts and machines have different strengths. Accordingly, there are tasks that are better suited for machines and others for humans. Some advantages of machines are that they can work 24 hours per day and contemporaneously. Also, machines may be designed to provide consistent analysis for a given input or series of input parameters. This allows for precision and potential for quantification in results reporting. Machines can analyze large volumes of data and find complex associations hidden within these data that may be otherwise difficult for a human to do.”
    Machine Learning in Radiology: Applications Beyond Image Interpretation
    Paras Lakhani et al.
    J Am Coll Radiol (in press)
  • “There are a number of ways in which machine learning can help radiology practices today, including many tasks that are frequently performed by radiologists and ordering clinicians, such as imaging appropriateness assessment, creating study protocols, and standardization of radiology reporting, that could benefit from automation. Although many of these examples could be implemented using conventional procedural programming methodologies, the machine learning approach holds the promise to perform these tasks with a higher level of proficiency that can improve over time as the system “learns” new data.”
    Machine Learning in Radiology: Applications Beyond Image Interpretation
    Paras Lakhani et al.
    J Am Coll Radiol (in press)
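As one hypothetical illustration of the “creating study protocols” example above, routing free-text imaging orders to a protocol can be framed as text classification. The toy orders, labels, and pipeline below are assumptions for demonstration only; a real system would need far more data and clinical validation.

```python
# Minimal sketch of protocol assignment as text classification; the example
# orders, labels, and pipeline are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

orders = [
    "r/o pulmonary embolism, pleuritic chest pain",
    "follow-up 6 mm lung nodule seen on prior CT",
    "abdominal pain RLQ, suspect appendicitis",
    "staging scan for newly diagnosed pancreatic mass",
]
protocols = ["CT PE", "CT chest low-dose", "CT abdomen/pelvis with contrast", "CT pancreas protocol"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(orders, protocols)

print(clf.predict(["chest pain, rule out PE"]))  # toy prediction; likely 'CT PE' given word overlap
```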
