Deep Learning: Artificial Intelligence (AI) Imaging Pearls - Educational Tools | CT Scanning | CT Imaging | CT Scan Protocols - CTisus
  • “The experience reported by Del Gaizo et al has important implications and lessons for anyone planning to introduce AI solutions into clinical radiology practice or undertake similar research. Prospective users should assess whether their patient population is a close enough match to the population on which an AI program was developed for it to be used: They should assess the potential impact of prevalence on accuracy and predictive value. For a given combination of sensitivity and specificity, lower prevalence will result in lower estimates of PPV. Other important issues to assess are the impact on radiologists’ interpretation times and, for many clinical scenarios, impact on time to therapy. Conservatively, radiology practices introducing AI applications into their clinical operations should always undertake an assessment after implementation to determine how well the program is functioning in their respective unique environments.”
    Challenges of Implementing Artificial Intelligence–enabled Programs in the Clinical Practice of Radiology
    James H. Thrall
    Radiology: Artificial Intelligence 2024; 6(5):e240411
  • “A striking finding in the study reported by Del Gaizo et al was a positive predictive value (PPV) of only 21.1%. PPV is a function of sensitivity, specificity, and prevalence: It is the probability that a patient with a positive (abnormal) test result actually has the disease. The authors observe that the low prevalence of 2.7% in their study is the likely reason for the low PPV. Of note, McLouth et al reported a prevalence of ICH of 31% (255 of 814), indicating a different patient population than the current study. The corresponding PPV in the McLouth et al study was 91.4%. McLouth et al modeled different levels of prevalence, holding sensitivity and specificity constant, which showed PPV ranged from 80.2% at 10% prevalence to 97.3% at 50% prevalence.”
    Challenges of Implementing Artificial Intelligence–enabled Programs in the Clinical Practice of Radiology
    James H. Thrall
    Radiology: Artificial Intelligence 2024; 6(5):e240411
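The dependence of PPV on sensitivity, specificity, and prevalence described above follows directly from Bayes' rule and is easy to sketch. The sensitivity and specificity below are hypothetical, chosen only to illustrate the prevalence effect; they are not the values from Del Gaizo et al or McLouth et al:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule:
    PPV = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Holding a hypothetical sensitivity (93%) and specificity (95%) fixed,
# PPV falls sharply as prevalence drops, the effect the editorial describes.
for prev in (0.027, 0.10, 0.31, 0.50):
    print(f"prevalence {prev:>5.1%} -> PPV {ppv(0.93, 0.95, prev):.1%}")
```

With these illustrative inputs, PPV climbs from roughly a third at 2.7% prevalence to well over 90% at 50% prevalence, mirroring the direction (though not the exact numbers) of the two studies.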
  • “The primary interest here is a company’s first several AI projects. Importantly, data quality is an enormous organization-wide issue (and opportunity), yet most companies have neglected it. One consequence is that they, and their senior leaders, don’t understand the issues, how to address them, and the organizational changes they must make. They are in a tough spot — they must protect themselves in the short-term, build needed longer-term capabilities, and simultaneously educate themselves so they can do so effectively.”  
    Ensure High-Quality Data Powers Your AI by Thomas C. Redman
    Harvard Business Review August 12, 2024
  • “Garbage in, garbage out” might be a useful rule of thumb, but I find it convenient to frame the idea of good data through two requirements: 1) whether it’s the “right data” to address the problem and 2) whether that “data is right,” or correct. The criteria for the latter are more familiar to most people: accuracy, absence of duplicates, and so forth. Having the “right data” is less familiar, more subtle and complex — but it’s also essential.  
    Ensure High-Quality Data Powers Your AI by Thomas C. Redman
    Harvard Business Review August 12, 2024
  • Most important “data is right” considerations:
    - Accuracy: Accuracy is probably the best-known feature of data quality. The essential idea is that data values must be “correct,” that is, they must reflect reality. Though subtleties, such as “how closely must the data values represent reality?” are sometimes important, most structured data sets are riddled with errors. And it stands to reason that documents (unstructured data), which are used to train large language models, are in worse shape.
    - Absence of duplicates: It is easy for duplicate entries to slip into databases, and they can skew results. Thus, they must be kept to a minimum.
    - Consistent identifiers: When pulling loan data together, is this “John Smith,” with a checking account, and that “J. E. Smith,” with a home equity line of credit, the same person? You have to know so you can integrate the data.
    - Correct labeling: Good data labels (e.g., “this is a cat,” “this loan is performing”) improve AI models.
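The duplicate and identifier considerations above can be illustrated with a toy sketch. The records and the name-normalization rule are invented for illustration; real record linkage uses far more robust matching:

```python
# Hypothetical records; the normalization rule (strip punctuation, lowercase,
# reduce to last name + first initial) is an illustrative choice, not a standard.

def normalize_id(name: str) -> str:
    """Collapse a customer name to a crude matching key: last name + first initial."""
    parts = name.replace(".", "").replace(",", "").lower().split()
    return f"{parts[-1]}:{parts[0][0]}"

records = [
    {"name": "John Smith",  "product": "checking account"},
    {"name": "J. E. Smith", "product": "home equity line"},
    {"name": "John Smith",  "product": "checking account"},  # exact duplicate
]

# Absence of duplicates: drop rows that repeat an earlier row exactly.
seen, deduped = set(), []
for rec in records:
    key = (rec["name"], rec["product"])
    if key not in seen:
        seen.add(key)
        deduped.append(rec)

# Consistent identifiers: the two remaining rows map to the same key,
# flagging them for human review before the loan data are integrated.
keys = [normalize_id(r["name"]) for r in deduped]
print(deduped, keys)  # keys: ['smith:j', 'smith:j']
```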
  • “At the project level, overall responsibility for data quality must reside with the highest-level person leading the effort. They must pull together teams of people who can develop the full suite of requirements and dive into details to ensure they are met. They should consider acquiring external talent, if as is often the case, their companies lack the breadth and depth of skill needed to do so.”
    Ensure High-Quality Data Powers Your AI by Thomas C. Redman
    Harvard Business Review August 12, 2024 
  • “Fortunately, there is a better way: Eliminate root causes of bad data upstream and those downstream don’t have to deal with them! This approach builds on what worked in manufacturing and has proven itself time and again at companies such as AT&T, Gulf Bank, Chevron, and others. As a practical matter, this works best when downstream groups take on roles as data customers; upstream groups take on roles as data creators; and the two work together to sort through quality requirements, make some basic measurements, then find and eliminate root causes of error, one at a time.”
    Ensure High-Quality Data Powers Your AI by Thomas C. Redman
    Harvard Business Review August 12, 2024
  • “In practice, however, clinicians are challenged by how to best interpret the information they receive from AI tools. Novel AI technologies are “black boxes” and clinicians may be unsure of whether or when to make a decision that runs counter to a recommendation based on the AI algorithm providing assistance. To address this, model developers have begun adding a layer of explainability so that clinicians can better interpret the model predictions and understand when models are relying on heuristics rather than clinically relevant data elements. These heuristics can bias AI model predictions and may be the result of development in selective, nonrepresentative populations, inadequate adherence to development best practices, and limited validation. The US Food and Drug Administration (FDA) has called for explainability of model outputs in its draft guidance addressing AI technologies for clinical decision support.”
    Automation Bias and Assistive AI Risk of Harm From AI-Driven Clinical Decision Support
    Rohan Khera, MD, MS; Melissa A. Simon, MD, MPH; Joseph S. Ross, MD
    JAMA December 19, 2023 Volume 330, Number 23
  • “The results from Jabbour et al suggest that a more careful approach to evaluating AI tools is warranted before their rapid adoption, even when AI is used as assistive technology. For vignettes with AI support using the standard model, clinicians’ diagnostic accuracy increased only modestly, from 73% without AI support to 76%. For vignettes with AI support using the standard model paired with explainability heatmaps highlighting the predictive areas on chest radiographs, diagnostic accuracy improved slightly more to 78%. However, for vignettes with AI support using the systematically biased model, clinicians’ diagnostic accuracy dropped substantially to 62%. This large drop in performance was not remedied by explainability heatmaps that demonstrated inappropriate clinical sources for the predictions (ie, information from bones and soft tissues on the radiographic image, instead of lungs or heart). Even with this layer of explainability, clinician diagnostic accuracy only improved slightly (64%) and remained much lower than accuracy without any AI support.”
    Automation Bias and Assistive AI Risk of Harm From AI-Driven Clinical Decision Support
    Rohan Khera, MD, MS; Melissa A. Simon, MD, MPH; Joseph S. Ross, MD
    JAMA December 19, 2023 Volume 330, Number 23
  • “These findings are concerning. Although the study highlights the potential value of explainability metrics to accompany assistive AI-based diagnostic tools, it also clearly illustrates the major challenge of clinicians’ relying on assistive technologies, often referred to as automation bias. Even in controlled settings, without the usual pressures on time, clinicians favored automated decision making systems, relying on the AI-based tool, despite the presence of contradictory or clinically nonsensical information. If a model performs well for certain patients or in certain care scenarios, such automation bias may result in patient benefit in those settings. However, in other settings where the model is inaccurate—either systematically biased or due to imperfect performance—patients may be harmed as clinicians defer to the AI model over their own judgment. Worryingly, errors resulting from automation bias are likely to be further compounded by the usual time pressures faced by many clinicians.”
    Automation Bias and Assistive AI Risk of Harm From AI-Driven Clinical Decision Support
    Rohan Khera, MD, MS; Melissa A. Simon, MD, MPH; Joseph S. Ross, MD
    JAMA December 19, 2023 Volume 330, Number 23
  • The study demonstrates that offering explainability metrics for predictions to clinicians, expecting they will then weigh that information before making a decision, may be ineffective. The limited value of explainability metrics in this study may reflect both the nature of explainability strategies used (ie, heatmaps) and that clinicians do not have the requisite training in evaluating these measures. As AI-based assistive technology is embedded in care systems, clinicians will need training on the interpretation of technology outputs, how to evaluate the quality of the provided information using available measures of explainability, and how to infer the common sources of bias, including derivation of data from nonrepresentative populations.
    Automation Bias and Assistive AI Risk of Harm From AI-Driven Clinical Decision Support
    Rohan Khera, MD, MS; Melissa A. Simon, MD, MPH; Joseph S. Ross, MD
    JAMA December 19, 2023 Volume 330, Number 23
  • “Clinical decision support tools based on imperfect AI assistive technologies have the potential to result in patient harm because clinicians may trust the output of AI tools over their own judgment. The bar for AI developers and regulatory agencies to put a product into clinical use must, therefore, be high. The task of interpreting the outputs of AI models cannot be off-loaded to clinicians, especially during a deluge of AI-driven tools that lack adequate controls, and better strategies are needed to go beyond explainability and to enable true interpretability. The future of AI-supported care is rapidly approaching, but the primary goal of implementing these tools—to improve patient care—must not be forgotten in the excitement over the technology.”
    Automation Bias and Assistive AI Risk of Harm From AI-Driven Clinical Decision Support
    Rohan Khera, MD, MS; Melissa A. Simon, MD, MPH; Joseph S. Ross, MD
    JAMA December 19, 2023 Volume 330, Number 23
  • “AI is a prime instance of a technological breakthrough that has widespread current and future possibilities in the field of medical imaging. Radiology has witnessed the adoption of these tools in everyday clinical practice, albeit with a modest impact thus far. The discrepancy between the anticipated and actual impact can be attributed to various factors, such as the absence of data from prospective real-world studies, limited generalizability, and the scarcity of comprehensive AI solutions for image interpretation. As health care professionals increasingly use radiologic AI and as large language models continue to evolve, the future of AI in medical imaging appears bright. However, it remains uncertain whether the traditional practice of radiology, in its current form, will share this promising outlook.”  
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023 
  • “We identify the central challenge of generalization in the use of AI algorithms in radiology and the need for validation safeguards that encompass clinician–AI collaboration, transparency, and postdeployment monitoring. Finally, we discuss the rapid progress in developing multimodal large language models in AI; this progress represents a major opportunity for the development of generalist medical AI models that can tackle the full spectrum of image-interpretation tasks and more. To aid readers who are unfamiliar with terms or ideas used for AI in general or AI in image interpretation, a Glossary is included with this article.”
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023  
  • “There is promise in exploring radiologic AI models that can expand interpretive capabilities beyond those of human experts. For instance, AI algorithms can accurately predict clinical outcomes on the basis of CT data in cases of traumatic brain injury and cancer. In addition, AI-derived imaging biomarkers can help to quickly and objectively assess structures and pathological processes related to body composition, such as bone mineral density, visceral fat, and liver fat, which can be used to screen for various health conditions. When applied to routine CT imaging, these AI-derived biomarkers are proving useful in predicting future adverse events.”
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023  
  • “Radiologists who use AI in their practices are generally satisfied with their experience and find that AI provides value to them and their patients. However, radiologists have expressed concerns about lack of knowledge, lack of trust, and changes in professional identity and autonomy. Local champions of AI, education, training, and support can help overcome these concerns. The majority of radiologists and residents expect substantial changes in the radiology profession within the next decade and believe that AI should have a role as a “co-pilot,” acting as a second reader and improving workflow tasks.”
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023 
  • “Although many current radiologic AI applications are designed for radiologists, there is a small but emerging trend globally toward the use of medical-imaging AI for nonradiologist clinicians and other stakeholders (i.e., health care providers and patients). This trend presents an opportunity for improving access to medical imaging and reducing common diagnostic errors in low-resource settings and emergency departments, where there is often a lack of around-the-clock radiology coverage. For instance, one study showed that an AI system for chest radiograph interpretation, when combined with input from a nonradiology resident, had performance values that were similar to those for board-certified radiologists.”
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023 
  • “A popular AI application that is targeted for use by nonradiologist clinicians for detecting large-vessel occlusions in the central nervous system has resulted in a significant reduction in time to intervention and improved patient outcomes. Moreover, AI has been shown to accelerate medical-imaging acquisition outside traditional referral workflows with new, clinician-focused mobile applications for notifications of AI results. This trend, although not well established, has been cited as a potential long-term threat to radiology as a specialty because advanced AI models may reduce the complexity of technical interpretation so that a nonradiologist clinician could use imaging without relying on a radiologist.”  
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023 
  • “Despite some evidence that clinicians receiving AI assistance can achieve better performance than unassisted clinicians, the body of research on human–AI collaboration for image interpretation offers mixed evidence regarding the value of such a collaboration. Results vary according to particular metrics, tasks, and the study cohorts in question, with studies showing that although AI can improve the performance of radiologists, sometimes AI alone performs better than a radiologist using AI.”  
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023 
  •  “Many AI methods are “black boxes,” meaning that their decision-making processes are not easily interpretable by humans; this can pose challenges for clinicians trying to understand and trust the recommendations of AI. Studies of the potential for explainable AI methods to build trust in clinicians have shown mixed results. Therefore, there is a need to move from evaluations centered on the stand-alone performance of models to evaluations centered on the outcomes when these algorithms are used as assistive tools in real-world clinical workflows. This approach will enable us to better understand the effectiveness and limitations of AI in clinical practice and establish safeguards for effective clinician–AI collaboration.”  
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023 
  • “However, there is a trend toward a more comprehensive approach to the development of radiologic AI, with the aim of providing more value than simply automating individual interpretation tasks. Recently developed models can identify dozens or even hundreds of findings on chest radiographs and brain CT scans obtained without contrast material, and they can provide radiologists with specific details about each finding. More and more companies are offering AI solutions that address the entire diagnostic and clinical workflow for conditions such as stroke and cancer, from screening to direct clinical referrals and follow-up. Although these comprehensive AI solutions may make it easier for medical professionals to implement and use the technology, the issues of validation and transparency remain a concern.”
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023 
  • “Given the capabilities of large language models, training new multimodal large language models with large quantities of real-world medical imaging and clinical text data, although challenging, holds promise in ushering in transformative capabilities of radiologic AI. However, the extent to which such models can exacerbate the extant problems with widespread validation remains unknown and is an important area for study and concern. Overall, the potential for generalist medical AI models to provide comprehensive solutions to the task of interpretation of radiologic images and beyond is likely to transform not only the field of radiology but also health care more broadly.”  
    The Current and Future State of AI Interpretation of Medical Images  
    Pranav Rajpurkar, Matthew P. Lungren  
    n engl j med 388;21 nejm.org May 25, 2023 
  • Through the haze of uncertainty in the interplay of these forces, 5 trends that may shape the future of diagnosis can be discerned:
    • Movement from symptom-prompted testing to continuous monitoring and assessment, and from within health settings to everyday living.
    • Shift in reliance on individual test results to interpretation of data streams and data patterns.
    • Change in the meaning of an “abnormal” test result from a deviation against a population norm to an aberrancy in an individual’s pattern of results over time.
    • Increasingly refined and specific diagnostic categories in step with the advent of increasingly differentiated treatment.
    • Augmentation of the goals of diagnostic excellence from the detection of disease to the preservation of wellness, and from indicative of the present disease to predictive of future state of health.
    The Future of Diagnostic Excellence
    Fineberg HV, Song S, Wang T
    JAMA September 20, 2022 Volume 328, Number 11 
  • Key Points for Diagnostic Excellence
    1. Many technologic initiatives to improve future diagnostic capabilities are already underway.
    2. The future of diagnosis will be marked by massive, continuously acquired data, automated interpretation of data streams and data patterns, and personal reference over time of what constitutes a normal result.
    3. Increasingly precise diagnoses will allow clinical comparisons across more nearly alike patients and ultimately provide a unique health profile for each individual.
    4. The future of diagnosis will emphasize prediction of future health state rather than identification of current disease.
    5. Diagnostic excellence begins and ends with the patient.
    The Future of Diagnostic Excellence
    Fineberg HV, Song S, Wang T
    JAMA September 20, 2022 Volume 328, Number 11
  • “Machine learning techniques to analyze large amounts of data in real time are becoming more sophisticated as guides both to an optimal diagnostic process and to more accurate, specific, and complete diagnostic assessments. For the foreseeable future, a combination of machine learning and human judgment may be optimal in reaching accurate, timely diagnoses. Over time, machine learning algorithms are more likely to improve in diagnostic acumen than are unaided human diagnosticians.”
    The Future of Diagnostic Excellence
    Fineberg HV, Song S, Wang T
    JAMA September 20, 2022 Volume 328, Number 11
  • “As more health-related data are collected in a more continuous manner, individual results at a moment in time will have a comparison baseline of that individual’s previous results. A more individualized definition of abnormal—that is, a pattern of results that warrants investigation—may follow a small deviation from an individual’s own previous levels, even if still “within normal limits” of a population comparison. Self-referenced norms may prove to be both more sensitive (eg, when a small increase should trigger investigation, even if the higher result is within the population norm) and more specific (when a consistent result over time just outside the population norm is not a cause for concern).”
    The Future of Diagnostic Excellence
    Fineberg HV, Song S, Wang T
    JAMA September 20, 2022 Volume 328, Number 11
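The idea of a self-referenced norm quoted above can be sketched numerically. The glucose values, the population reference range, and the z-score threshold below are all invented for illustration:

```python
# Sketch of a self-referenced norm, with made-up numbers: a fasting glucose
# that stays inside a hypothetical population reference range (say 70-99 mg/dL)
# yet drifts well outside the patient's own baseline.

from statistics import mean, stdev

def flag_against_baseline(history, new_value, z_threshold=3.0):
    """Flag a result that deviates from the individual's own history,
    regardless of whether it is inside the population reference range."""
    mu, sigma = mean(history), stdev(history)
    z = (new_value - mu) / sigma
    return abs(z) > z_threshold, z

history = [82, 84, 83, 81, 83, 82, 84]   # patient's own prior results
flagged, z = flag_against_baseline(history, 96)
print(flagged, round(z, 1))  # flagged, even though 96 is "within normal limits"
```

The same machinery yields the specificity benefit the authors mention: a result consistently just outside the population norm produces a small z against the personal baseline and is not flagged.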
  • “Today, clinicians think of diagnosis mainly in terms of the detection and classification of disease. Over time, increased understanding of the genomic, proteomic, metabolomic, and microbiomic underpinnings of human biology will produce greater understanding of the etiology and progression of biologic function from the state of health to the state of disease. As understanding of the precursors of disease grow more detailed and revealing, the art and science of diagnosis enlarge from the detection of present disease to the prediction of future disease. Put in equivalent, positive terms, medical diagnosis moves from characterizing the current state of health to predicting the future state of health. Then interventions may be designed to enhance, maintain, and as needed, restore health.”
    The Future of Diagnostic Excellence
    Fineberg HV, Song S, Wang T
    JAMA September 20, 2022 Volume 328, Number 11
  • “Whether diagnostic excellence comes to depict a state and future course of health or to describe a current category of disease, physicians and other clinicians will always do well to focus on the lived experience. The aims of diagnostic excellence begin and end with the patient.”
    The Future of Diagnostic Excellence
    Fineberg HV, Song S, Wang T
    JAMA September 20, 2022 Volume 328, Number 11
  • “Following Kotelnikov’s creation of the sampling theorem, computer scientist Alan Turing invented stored program computers and, soon after, his colleague, John von Neumann, produced the architecture to convert Turing’s idea into hardware to make it perform rapidly. By 1948, Baby, the first computer, was developed in Manchester, England. This was followed by the promulgation, in 1965, of Moore’s law, which stated that everything good about computers improves by an order of magnitude every 5 years.”
    More From Moore's Law: The Journey to Toy Story and Implications for Radiology.  
    Smith AR, Lugo-Fagundo E, Fishman EK, Rowe SP, Chu LC.  
    J Am Coll Radiol. 2022 Feb 15:S1546-1440(22)00119-3. doi: 10.1016/j.jacr.2022.01.009. Epub ahead of print.  
  • “Although there is much optimism that AI will improve our diagnostic accuracy and efficiency, there is also concern that AI may potentially replace radiologists. This uncertainty has unfortunately dissuaded some medical students from pursuing radiology. Although we do not yet know how AI will shape the future of radiology, our specialty can look back on our legacy as innovators and remain confident in our ability to navigate through this technological wave.”
    More From Moore's Law: The Journey to Toy Story and Implications for Radiology.  
    Smith AR, Lugo-Fagundo E, Fishman EK, Rowe SP, Chu LC.  
    J Am Coll Radiol. 2022 Feb 15:S1546-1440(22)00119-3. doi: 10.1016/j.jacr.2022.01.009. Epub ahead of print. 
  • “In medicine and radiology, we are overly focused on the short-term: to care for an individual patient, to get through a clinical day, or to survive the challenges of the fiscal year. We need to “dream big” and set long-term, 5-year or 10-year plans to pursue projects that we feel passionate about and that we have the commitment to follow through.”
    More From Moore's Law: The Journey to Toy Story and Implications for Radiology.  
    Smith AR, Lugo-Fagundo E, Fishman EK, Rowe SP, Chu LC.  
    J Am Coll Radiol. 2022 Feb 15:S1546-1440(22)00119-3. doi: 10.1016/j.jacr.2022.01.009. Epub ahead of print. 
  • “A number of years ago, we were approached by a Japanese company to make the first digital movie at Lucasfilm based on the story of the monkey character in The Journey to the West. After running the numbers, however, I knew that given Moore’s law (now broadly understood to mean that technological progress results in the doubling of computer speed every 2 years), the technology was just not ready. We needed 5 more years before we could make their request a reality. To understand Moore’s law and computer graphics, we first need to understand the pixel, and to do that, we need to travel back to 19th-century France.”
    More From Moore's Law: The Journey to Toy Story and Implications for Radiology.  
    Smith AR, Lugo-Fagundo E, Fishman EK, Rowe SP, Chu LC.  
    J Am Coll Radiol. 2022 Feb 15:S1546-1440(22)00119-3. doi: 10.1016/j.jacr.2022.01.009. Epub ahead of print. 
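The article is quoted with two framings of Moore's law: "an order of magnitude every 5 years" and "doubling every 2 years". A line of arithmetic shows how far apart these really are:

```python
import math

# Doubling every 2 years compounds to this factor over 5 years:
factor_5yr = 2 ** (5 / 2)  # about 5.66x, short of a full 10x

# A true 10x-every-5-years pace would instead imply this doubling time,
# from solving 2**(5 / T) = 10 for T:
doubling_time = 5 * math.log(2) / math.log(10)  # about 1.5 years

print(round(factor_5yr, 2), round(doubling_time, 2))
```

So the two rules of thumb differ by nearly a factor of two in their implied pace; both appear in the literature as loose restatements of Moore's original observation about transistor counts.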
  • “Texture analysis represents some specific features of radiomics, a term borrowed from material science which defines the measure of the variation of a surface. In medical imaging, texture analysis defines the measure of variation of pixel intensities on a given image, region-of-interest, or volume. A rough-textured image would have a high rate of change in the high and low pixel intensity, compared with a smooth-textured material. Texture analysis as such is a subfield used in the radiomic setting. A typical example of radiomics performed using texture analysis is correlating molecular and histological features of diffuse high-grade gliomas.”
    Texture analysis imaging “what a clinical radiologist needs to know”  
    Giuseppe Corrias et al.
    European Journal of Radiology 146 (2022) 110055 
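The definition above, texture as the rate of change of pixel intensities, can be made concrete with a toy measure. The two "images" and the adjacent-pixel statistic are illustrative only; real texture analysis uses richer descriptors (GLCM matrices, wavelets, etc.):

```python
# Minimal illustration of texture as intensity rate of change:
# mean absolute difference between horizontally adjacent pixels.

def roughness(image):
    """Average absolute intensity step between horizontal neighbors."""
    diffs = [abs(row[i + 1] - row[i])
             for row in image
             for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

smooth = [[10, 11, 12, 13]] * 4   # gentle gradient: low rate of change
rough  = [[10, 90, 15, 85]] * 4   # large swings in intensity

print(roughness(smooth), roughness(rough))  # rough scores far higher
```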
  • "Radiomics offers a nearly unlimited source of imaging biomarkers that could support cancer detection, diagnosis, assessment of prognosis, prediction of response to treatment, and monitoring of disease status. For a clinical radiologist, radiomics has the prospective to help with the diagnosis of both common and rare tumors. Visualization of tumor heterogeneity may be crucial in the assessment of tumor aggressiveness and prognosis.”
    Texture analysis imaging “what a clinical radiologist needs to know”  
    Giuseppe Corrias et al.
    European Journal of Radiology 146 (2022) 110055 
  • "However, it should be noted that radiomic and radiogenomic analyses can be used to identify correlations, but not causes; thus, they are not expected to enable definitive assessment of genetic or other bio- markers through imaging alone. However, correlation of radiomic data with genomic or otheromic data could inform not only the decision about whether to test for certain gene alterations in biopsy samples but also the choice of biopsy sites. It also could provide information to support histopathologic findings. This is important, as it is estimated the error rate of cancer histopathology can be as high as 23%.”
    Texture analysis imaging “what a clinical radiologist needs to know”  
    Giuseppe Corrias et al.
    European Journal of Radiology 146 (2022) 110055 
  • "A typical radiomic analysis workflow consists of five main steps: 1) image acquisition; 2) segmentation; 3) computation of radiomic features within the segmented region; 4) feature selection, model building, and classification; and 5) statistical analysis. Radiomic methods are not only designed to predict early overall survival or to identify predictive pathological characteristics such as microvascular invasion; they may also predict liver tumor response to treatment. For example, there is early evidence that pre-treatment CT-derived signatures can predict survival in many types of disease, for example in patients with hepatocellular carcinoma (HCC) or advanced HCC treated with sorafenib.”
    Texture analysis imaging “what a clinical radiologist needs to know”  
    Giuseppe Corrias et al.
    European Journal of Radiology 146 (2022) 110055 
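The five-step workflow quoted above lends itself to a miniature sketch. The snippet below is purely illustrative: the tiny "image," threshold, features, and decision rule are invented placeholders, not any published pipeline or library API.

```python
# Illustrative sketch of the five-step radiomic workflow.
# All function names and values are hypothetical placeholders.

def acquire_image():
    # Step 1: image acquisition (a tiny synthetic 4x4 "CT slice")
    return [[10, 12, 11, 50],
            [11, 13, 12, 55],
            [10, 11, 60, 58],
            [12, 12, 57, 59]]

def segment(image, threshold=40):
    # Step 2: segmentation -- keep voxels above an intensity threshold
    return [v for row in image for v in row if v > threshold]

def compute_features(roi):
    # Step 3: feature computation within the segmented region
    mean = sum(roi) / len(roi)
    var = sum((v - mean) ** 2 for v in roi) / len(roi)
    return {"mean": mean, "variance": var, "volume": len(roi)}

def select_and_classify(features):
    # Step 4: feature selection and a trivial rule-based "classifier"
    return "suspicious" if features["variance"] > 5 else "benign"

# Step 5: statistical analysis would compare predictions against outcomes
features = compute_features(segment(acquire_image()))
print(select_and_classify(features), features["volume"])
```

In a real study, step 5 would compare extracted features or model outputs against clinical endpoints across a cohort, rather than classifying a single toy lesion.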
  • "Radiomic features can be divided into two important groups (Table 1): “semantic” and “agnostic” features (the texture analysis itself). Semantic features are those that are commonly used in the radiology lexicon to describe regions of interest (i.e., shape, location, vascularity, necrosis, etc.). Agnostic features are those that attempt to capture lesion heterogeneity through quantitative mathematical descriptors (i.e., histograms, wavelets, textures extracted from filtered images, and fractal features).”
    Texture analysis imaging “what a clinical radiologist needs to know”  
    Giuseppe Corrias et al.
    European Journal of Radiology 146 (2022) 110055 
  • "Semantic features are commonly and qualitatively used by radiologists to analyze lesions, but in radiomics a computer quantitatively analyzes them. This is a crucial step in the “historical” development of radiomics: one of the first articles to do so comes from Segal et al., probably the first example of radiogenomics; they used a finite series of radiologist-scored quantitative features to predict gene expression patterns in hepatocellular carcinoma.”
    Texture analysis imaging “what a clinical radiologist needs to know”  
    Giuseppe Corrias et al.
    European Journal of Radiology 146 (2022) 110055 
  • "Agnostic or texture features can be divided into three groups: statistical (first-order and second-order features), transform-based, and structural-based. The mathematical definitions of these features are independent of imaging modality. The best-known texture descriptors are: kurtosis, skewness, the intensity histogram, descriptors of the relationships between image voxels (e.g., the gray-level co-occurrence matrix (GLCM), run length matrix (RLM), size zone matrix (SZM), and neighborhood gray tone difference matrix (NGTDM)), derived textures, textures extracted from filtered images, and fractal features.”
    Texture analysis imaging “what a clinical radiologist needs to know”  
    Giuseppe Corrias et al.
    European Journal of Radiology 146 (2022) 110055 
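Two of the first-order descriptors named above, skewness and kurtosis, can be computed directly from a list of pixel intensities. The pure-Python sketch below uses invented pixel values; production radiomic toolkits additionally implement the matrix-based descriptors (GLCM, RLM, SZM, NGTDM).

```python
# First-order texture descriptors from raw pixel intensities.
# Pixel values are synthetic, chosen only for illustration.

def first_order_features(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    m2 = sum((p - mean) ** 2 for p in pixels) / n  # variance
    m3 = sum((p - mean) ** 3 for p in pixels) / n
    m4 = sum((p - mean) ** 4 for p in pixels) / n
    sd = m2 ** 0.5
    return {
        "mean": mean,
        "skewness": m3 / sd ** 3,  # asymmetry of the intensity histogram
        "kurtosis": m4 / m2 ** 2,  # "peakedness"; 3.0 for a Gaussian
    }

smooth = [100, 101, 99, 100, 100, 101, 99, 100]  # low pixel variation
rough  = [20, 180, 25, 175, 30, 170, 15, 185]    # high pixel variation
print(first_order_features(smooth))
print(first_order_features(rough))
```

A "rough-textured" region, as the quote above puts it, shows far larger intensity variation around the same mean than a "smooth-textured" one, which is exactly what these moment-based descriptors quantify.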

  • "Radiomics and texture analysis are innovative techniques in the field of radiology. Their development as objective, quantifiable, and reproducible techniques is crucial, and this has been the most criticized aspect in recent years. However, the studies conducted to date, even when lacking reproducibility, show that these techniques have many benefits: they can provide an objective, non-invasive assessment of different aspects of disease using routine imaging data, as opposed to current subjective visual analysis; the most studied and accepted of these is heterogeneity (of a lesion, of an organ). Oncological applications have been the most extensively studied, since tumor heterogeneity has been shown to be one of the most important aspects of aggressiveness in almost every type of cancer. Further development of the technique, with more reproducible approaches, will help translate its use into both clinical and research trials as well as into the development of clinical decision support systems.”
    Texture analysis imaging “what a clinical radiologist needs to know”  
    Giuseppe Corrias et al.
    European Journal of Radiology 146 (2022) 110055 
  • “Currently, one of the biggest challenges facing AI in general is that it is data hungry. The acquisition of sufficiently large, public, well-annotated cancer datasets is an ongoing need for AI, although the inclusion of images, genomic data, and clinical outcomes in some open databases has had a significant impact on enhancing computational clinical research. The scale, quality, and diversity of data types, such as patient history from prior reports, are potentially relevant to the risk and progression of cancer but are time-consuming to collect. Data-sharing agreements can play an important role in addressing this challenge. Sharing of large datasets with the community, enabled by cloud computing, will advance the development of the next generation of predictive cancer models.”
    Artificial intelligence in clinical research of cancers  
    Dan Shao et al.
    Briefings in Bioinformatics, 2021, 1–11
  • "Despite AI regularly achieving high performance in medical research, the adoption of AI in real cases is limited by the opacity of the models: the machine cannot explain how it knows what it knows or why it produced a given result. This is often referred to as the ‘black box’ problem [108]. It is difficult to determine which features of the input data contribute to the output. For example, AI can predict the optimal treatment for a patient but not provide the reasoning it used to make that prediction. Interpretable DL is a trend aimed at alleviating this limitation.”
    Artificial intelligence in clinical research of cancers  
    Dan Shao et al.
    Briefings in Bioinformatics, 2021, 1–11 
  • "In addition, the knowledge gap between clinical and data science experts still presents significant challenges. Physicians have deep experience with oncologic workup and management, whereas data scientists have high-level expertise in data science and an understanding of AI mechanisms. Further collaboration should be pursued between clinical and data science experts to bridge the gap between them.”
    Artificial intelligence in clinical research of cancers  
    Dan Shao et al.
    Briefings in Bioinformatics, 2021, 1–11 

  • How FDA Regulates Artificial Intelligence in  Medical Products
    Pew Charitable Trusts July 2021 
  • Glossary for AI Terms
    - Explainability: The ability for developers to explain in plain language how their data will be used. 
    - Generalizability: The accuracy with which results or findings can be transferred to other situations or people outside of those originally studied.
    - Good Machine Learning Practices (GMLP): AI/ML best practices (such as those for data management or evaluation), analogous to good software engineering practices or quality system practices. 
    - Machine learning (ML): An AI technique that can be used to design and train software algorithms to learn from and act on data. These algorithms can be “locked,” so that their function does not change, or “adaptive,” meaning that their behavior can change over time. 
    - Software as a Medical Device (SaMD): Defined by the International Medical Device Regulators Forum as “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.” 
    - Pew Charitable Trusts July 2021
  • Introduction: Concerns about radiologists being replaced by artificial intelligence (AI) from the lay media could have a negative impact on medical students’ perceptions of radiology as a viable specialty. The purpose of this study was to evaluate United States of America medical students’ perceptions about radiology and other medical specialties in relation to AI.
    Conclusions: US medical students believe that AI will play a significant role in medicine, particularly in radiology. However, nearly half are less enthusiastic about the field of radiology due to AI. As the majority receive information about AI from online articles, which may carry negative sentiments toward AI’s impact on radiology, formal AI education and medical student outreach may help combat misinformation and avoid dissuading medical students who might otherwise consider the specialty.
    Medical Student Perspectives on the Impact of Artificial Intelligence on the Practice of Medicine  
    Christian J. Park, Paul H. Yi, Eliot L. Siegel, MD
    Current Problems in Diagnostic Radiology, Volume 50, Issue 5, 2021, Pages 614-619

  • “Interestingly, of the respondents who chose radiology as the specialty most significantly impacted, 44% said that AI would reduce their enthusiasm for it. This is a similar proportion to a recent survey performed in Europe, which found that less than half (44%) of respondents felt AI would reduce enthusiasm for the field of radiology, a sentiment that was echoed in a recent survey of Canadian medical students, in which 48.6% stated that AI caused them to feel anxious regarding a career in radiology. The significance of this is not to be understated, in that half of potential candidates to the specialty feel as though there is limited opportunity due to an emerging technology such as AI. These sentiments have the potential to create downstream effects, such as reduced recruitment to the field of radiology or even medicine as a whole.”
    Medical Student Perspectives on the Impact of Artificial Intelligence on the Practice of Medicine  
    Christian J. Park, Paul H. Yi, Eliot L. Siegel, MD
    Current Problems in Diagnostic Radiology, Volume 50, Issue 5, 2021, Pages 614-619
  • “Standards of care do not usually change overnight. But today, as medical practice evolves rapidly, standards of care could shift faster than tradition suggests. Media focus on medical advances occurs well before any changes are implemented and become available as diagnostic tests, vaccines, devices, new drugs, or novel procedures. Accordingly, patients may have heightened expectations as to the innovations in care that they might receive. As the medical community incorporates deep learning and artificial intelligence (AI) into some specialties, particularly emergency radiology, it may be prudent to ponder if we are on the cusp of unrealistic public expectations regarding the use of AI in routine radiologic diagnosis. After all, patients are reading and hearing about AI. When will AI be ready for prime time, and what should we be telling our patients?”
    Using Artificial Intelligence to Interpret CT Scans: Getting Closer to Standard of Care.
    Weisberg EM, Chu LC, Fishman EK.  
    J Am Coll Radiol. 2021 Jun 17:S1546-1440(21)00461-0. doi: 10.1016/j.jacr.2021.05.008.
  • “Deep understanding of domain subject matter is necessary to ensure that artificial intelligence models will succeed when applied in real-world scenarios. Data scientists and clinicians without surgical practice experience should not be expected to truly understand the clinical nuances of surgical care, many of which are learned from experience. Historically, artificial intelligence applications in surgery have emerged from collaborations between data scientists and surgeons. These collaborations might be more fruitful if interested surgeons took the additional step of becoming data scientists by gaining computer science skills.”
    Building an Artificial Intelligence– Competent Surgical Workforce  
    Loftus TJ et al.
    JAMA Surgery 2021(in press)
  • “Some surgeons should take the additional step of becoming data scientists and steer clinical implementation processes. To become data scientists, surgeons need only to reinforce their foundational knowledge in mathematics and statistics and apply their unique domain knowledge to computer science applications, which are readily learned through established training pathways.”
    Building an Artificial Intelligence– Competent Surgical Workforce  
    Loftus TJ et al.
    JAMA Surgery 2021(in press)
  • “Radiologists/radiation oncologists and dermatologists were twice as likely as ophthalmologists to consider that AI screening systems should have error levels superior to the best performing specialist (21.7%, 20.6%, and 10.5%, respectively; p = 0.005). Expectations for system performance were even higher when used for diagnostic decision support by specialists. Accordingly, radiologists/radiation oncologists and dermatologists were each more likely than ophthalmologists to expect AI systems to be superior to the best performing specialist when used for decision support (30.9%, 23.9%, and 19.0%, respectively; p = 0.035).”
    A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology  
    Jane Scheetz et al.
    Nature Scientific Reports (in press)
  • "The top three potential concerns about the use of AI were (1) concerns over the divestment of healthcare to large technology and data companies, (2) concerns over medical liability due to machine error, and (3) decreasing reliance on medical specialists for diagnosis and treatment advice. The top ranked concern for ophthalmologists and radiologists/radiation oncologists was ‘concerns over the divestment of healthcare to large technology and data companies.’ The top ranked concern for dermatologists was ‘concerns over medical liability due to machine error’.”
    A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology  
    Jane Scheetz et al.
    Nature Scientific Reports (in press)
  • “An additional concern of respondents was a reduced reliance on medical specialists as a consequence of AI adoption. This concern is consistent with the impact of the technology on future workforce needs reported in this and other studies. It is interesting that the primary concern of respondents was the divestment of healthcare to large technology companies. General mistrust in large technology companies has been documented recently and specifically in relation to healthcare.”
    A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology  
    Jane Scheetz et al.
    Nature Scientific Reports (in press)
  • “As we envision future possibilities, one immediate step forward is to combine the image with additional clinical context, from patient record to additional clinical descriptors (such as blood tests, genomics, medications, vital signs, and nonimaging data, such as ECG). This step will provide a transition from image space to patient-level information. Collecting cohorts will enable the population-level statistical analysis to learn about disease manifestations, treatment responses, adverse reactions from and interactions between medications, and more. This step requires building complex infrastructure, along with the generation of new privacy and security regulations—between hospitals and academic research institutes, across hospitals, and in multinational consortia. As more and more data become available, DL and AI will enable unsupervised explorations within the data, thus providing for new discoveries of drugs and treatments toward the advancement and augmentation of healthcare as we know it.”
    A Review of Deep Learning in Medical Imaging: Imaging Traits, Technology Trends, Case Studies With Progress Highlights, and Future Promises  
    S. Kevin Zhou et al.
    Proceedings of the IEEE (2021), in press
  • “Therefore, it is not surprising that articles have been published in recent years concerning the potential contributions of telemedicine (and teleradiology) to the diagnostic management of these patients, and also concerning the contribution of AI (albeit still in its infancy) to aid in diagnosis and treatment, including surgery. This review article presents the existing data and proposes a collaborative vision of an optimized patient pathway, giving medical meaning to the use of these tools.”
    Management of abdominal emergencies in adults using telemedicine and artificial intelligence  
    G. Gorincour et al.
    Journal of Visceral Surgery 2021 (in press) 
  • Once the decision has been made to perform abdominopelvic CT, new AI tools already exist at various stages:
    • optimization of scanner acquisition protocols according to the patient’s body habitus;  
    • immediate verification of the quality of the images acquired;  
    • immediate improvement in the quality of reconstructed images;  
    • automatic detection of urolithiasis and differentiation from phleboliths without the need for intravenous contrast;  
    • automatic characterization of these lithiases;
    • contouring, segmentation, and differentiation of the different intra-abdominal organs. To this end, more and more annotated datasets are available to help refine algorithms.  
    Management of abdominal emergencies in adults using telemedicine and artificial intelligence  
    G. Gorincour et al.
    Journal of Visceral Surgery 2021 (in press) 
  • Panel: Overview of potential threats to generalisability in clinical research and machine learning in health care, along with hypothetical examples of what they might look like in practice
    The myth of generalisability in clinical research and machine learning in health care
    Joseph Futoma, Morgan Simons, Trishan Panch et al.
    The Lancet Digital Health, Vol 2, September 2020 (in press)

  • Changes in practice pattern over time
    - Improved patient outcomes through adoption of low-tidal-volume ventilation in the intensive care unit (ICU) will affect the performance of models that were developed when higher tidal volumes were standard.
    - Leucodepletion of blood for transfusion became standard of care in most countries. Models related to blood transfusion and outcomes require recalibration if validated before the practice change.
  • Differences in practice between health systems
    - Mortality predictions for patients admitted to the ICU with COVID-19 are highly sensitive to criteria for ICU admission across hospitals, which in turn vary depending on ICU demand and capacity.
  • Patient demographic variation
    - Models to predict the risk of hospitalisation from COVID-19 that are trained on data from Italy where there is a high proportion of older individuals in the population will not do well in countries with a different age distribution—eg, low-income and middle-income countries that typically have a younger population.
  • Patient genotypic and phenotypic variation
    - Model performance is linked to the composition of the training cohort with regard to disease genotypes or phenotypes, or both. These models will not translate well to populations in which the genotypic or phenotypic make-up is different. Some phenotypes of sepsis and acute respiratory distress syndrome, for example, might be over-represented or under-represented in different settings.
  • Hardware and software variation for data capture
    - Bedside monitors that have different sampling rates for the capture of physiological signals and that are measured continuously will have different susceptibilities to artifacts and will affect models that have time-series data as an input.
    - Computer-vision models for automated interpretation of CT scans are sensitive to the machines used to obtain the images.
  • Variation in other determinants of health and disease (eg, environmental, social, political, and cultural)
    - A model developed in the USA to predict neurological outcomes of premature babies will not do well in a low-income country because of resource availability.
    - The relationship of patient and disease factors with clinical events, such as hospital-acquired infection, will change when a health-care system is strained (eg, during a pandemic).
  • “As an interim step, the Radiology editorial board has developed a list of nine key considerations that help us evaluate AI research (Table). The goal of these considerations is to improve the soundness and applicability of AI research in diagnostic imaging. These considerations are enumerated for the authors, but manuscript reviewers and readers may also find these points to be helpful.”
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
  • “1. Carefully define all three image sets (training, validation, and test sets of images) of the AI experiment. As summarized by Park and Han, the AI algorithm is trained on an initial set of images according to a standard of reference. The training algorithm is tuned and validated on a separate set of images. Finally, an independent “test” set of images is used to report final statistical results of the AI. Ideally, each of the three sets of images should be independent, without overlap. Also, the inclusion and exclusion criteria for the dataset, in addition to the justification for removing any outlier, should be explained.”
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
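Point 1 can be made concrete with a short sketch: the hypothetical study IDs below are split once into three disjoint sets, so no case used for training or tuning ever reaches final reporting. The split sizes and seed are arbitrary choices for illustration.

```python
# Independent train / validation / test sets with no overlap.
# Patient IDs are hypothetical; splitting at the patient level
# prevents the same patient's images from leaking across sets.

import random

patient_ids = [f"patient_{i:03d}" for i in range(100)]
rng = random.Random(42)           # fixed seed for reproducibility
rng.shuffle(patient_ids)

train = set(patient_ids[:70])     # tune model weights
val   = set(patient_ids[70:85])   # tune hyperparameters, select the model
test  = set(patient_ids[85:])     # touched once, for final reporting

# Verify the three sets are disjoint (no leakage between phases)
assert not (train & val) and not (train & test) and not (val & test)
print(len(train), len(val), len(test))
```

Shuffling before slicing, rather than slicing sequentially, avoids splits that mirror accrual order (for example, all early-enrolled patients landing in the training set).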
  • “ 2. Use an external test set for final statistical reporting. ML/AI models are very prone to overfitting, meaning that they work well only for images on which they were trained. Ideally, an outside set of images (eg, from another institution, the external test set) is used for final assessment to determine if the ML/AI model will generalize.”
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
  • “3. Use multivendor images, preferably for each phase of the AI evaluation (training, validation, test sets). Radiologists are aware that MRI scans from one vendor do not look like those from another vendor. Such differences are detected by radiomics and AI algorithms. Vendor-specific algorithms are of much less interest than multivendor AI algorithms.”
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
  • 4. Justify the size of the training, validation, and test sets. The number of images required to train an AI algorithm depends on the application. For example, an AI model may learn image segmentation after only a few hundred images, while thousands of chest radiographs may be needed to detect lung nodules or multiple abnormalities simultaneously. In their work classifying chest radiographs as normal or abnormal, Dunnmon et al began with 200,000 chest images; however, their AI algorithm showed little improvement in performance after the first 20,000 chest radiographs. For many applications, the “correct” number of images may be unknown at the start of the research. The research team should evaluate the relationship between the number of training images and model performance. For the test set, traditional sample size statistical considerations can be applied to determine the minimum number of images needed.
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
  • 5. Train the AI algorithm using a standard of reference that is widely accepted in our field. For chest radiographs, a panel of expert radiologists interpreting the chest radiograph is an inferior standard of reference compared with the chest CT. Similarly, the radiology report is considered an inferior standard of reference relative to dedicated “research readings” of the chest CT scans. Although surprising to nonradiologists, this journal and other high-impact journals in our field do not consider the clinical report to be a high-quality standard of reference for any research study in our field, including AI. Clinical reports often have nuanced conclusions and are generated for patient care and not for research purposes. For instance, degenerative spine disease may have little significance at 80 years old but could be critical at age 15.
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
  • 6. Describe any preparation of images for the AI algorithm. For coronary artery disease on CT angiograms, did the AI interpret all 300 source images? Or did the authors manually select relevant images or crop images to a small field of view around the heart? Such preparation and annotation of images greatly affects radiologist understanding of the AI model. Manual cropping of tumor features is standard in radiomics studies; such studies should always evaluate the relationship of the size and reproducibility of the cropped volume to the final statistical result.
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
  • 7. Benchmark the AI performance to radiology experts. For computer scientists working on AI, competitions and leader boards for the “best” AI are common. Results frequently compare one AI to another based on the area under the receiver operating characteristic curve (AUC). However, to treat a patient, physicians are much more interested in the comparison of the AI algorithm to expert readers, and not just any readers. Experienced radiologist readers are preferred to benchmark an algorithm designed to detect radiologic abnormalities. For example, when evaluating an AI algorithm to detect stroke on CT scans, expert neuroradiologists (rather than generalists or neurologists) are known to have the highest performance.
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
  • 8. Demonstrate how the AI algorithm makes decisions. As indicated above, computer scientists conducting imaging research often summarize their results as a single AUC value. That AUC is compared with the competitor, the prior best algorithm. Unfortunately, the AUC value alone has little relationship to clinical medicine. Even a high AUC value of 0.95 may include an operating mode where 99 of 100 abnormalities are missed. To help clinicians understand the AI performance, many research teams overlay colored probability maps from the AI on the source images.
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
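The point about AUC versus operating mode can be demonstrated numerically. The scores below are entirely synthetic (an arbitrary 0-1000 confidence scale), chosen only to show that strong overall discrimination does not guarantee sensitivity at a particular threshold.

```python
# A high AUC can coexist with an operating threshold that misses
# most abnormalities. Case counts and scores are synthetic;
# AUC is computed as the Mann-Whitney rank statistic.

normals   = list(range(300, 700, 4))   # 100 normal cases, scores 300-696
abnormals = list(range(600, 1000, 4))  # 100 abnormal cases, scores 600-996

def auc(neg, pos):
    # Probability a random abnormal case outscores a random normal one
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity(pos, threshold):
    # Fraction of abnormal cases flagged at a given operating threshold
    return sum(p >= threshold for p in pos) / len(pos)

print(auc(normals, abnormals))      # high discrimination overall
print(sensitivity(abnormals, 950))  # yet this threshold flags few cases
```

Here the overall AUC is about 0.97, yet an operating threshold of 950 detects only 12% of the abnormal cases, which is why reporting an operating point (sensitivity and specificity at the deployed threshold) matters more clinically than AUC alone.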
  • 9. The AI algorithm should be publicly available so that claims of performance can be verified. Just like MRI or CT scanners, AI algorithms need independent validation. Commercial AI products may work in the computer laboratory but have poor function in the reading room. “Trust but verify” is essential for AI that may ultimately be used to help prescribe therapy for our patients. All AI algorithms should be made publicly available via a website such as GitHub. Commercially available algorithms are considered publicly available.
    Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board
    David A.Bluemke et al.
    Radiology 2019; (in press) https://doi.org/10.1148/radiol.2019192515
  • ”Once an algorithm is deployed into clinical practice, legal and ethical challenges must be considered. When errors are made using AI algorithms, the question arises of who is responsible for the mistakes made by a computer: is the radiologist, the AI application itself, or the company that made the AI application responsible? This question is especially important if the algorithm has not explained its inferences in terms that can be understood by humans, such as bounding boxes or saliency maps. At times radiologists may not truly understand how AI algorithms arrive at certain conclusions. If we don't understand the process behind how AI algorithms work, how can we be held solely accountable for mistakes? This “black box” problem has led many groups, including the American Medical Association, to develop policies that insist developers provide transparency and explicability in algorithm development.”
    Artificial intelligence in radiology: the ecosystem essential to improving patient care
    Julie Sogani, Bibb Allen Jr, Keith Dreyer, Geraldine McGinty
    Clinical Imaging (in press)
  • ” As AI continues to evolve, healthcare as we know it will dramatically change. Radiologists have always served at the forefront in adapting new technologies in medicine, and it should be no different with the advent of the AI revolution. AI will not replace radiologists; instead those radiologists who take advantage of AI may ultimately replace those who refuse to accept it. It is crucial we build an ecosystem of key players in technology, research, radiology, and the regulatory bodies who will work together to effectively and safely integrate AI into clinical practice. As a result, adoption of this technology will expand our efficiency and decision-making capabilities, leading to earlier and better detection of disease and improved outcomes for our patients.”
    Artificial intelligence in radiology: the ecosystem essential to improving patient care
    Julie Sogani, Bibb Allen Jr, Keith Dreyer, Geraldine McGinty
    Clinical Imaging (in press)
  • “The AI-based noise reduction could improve the IQ of aorta CTA with low kV and reduced CM, demonstrating the potential for radiation dose and contrast medium reduction compared with the conventional aorta CTA protocol.”
    Application of Artificial Intelligence–based Image Optimization for Computed Tomography Angiography of the Aorta With Low Tube Voltage and Reduced Contrast Medium Volume
    Wang, Y et al.
    Journal of Thoracic Imaging (in press)
  • Purpose: The purpose of this study was to evaluate the impact of artificial intelligence (AI)-based noise reduction on aorta computed tomography angiography (CTA) image quality (IQ) at 80 kVp tube voltage and 40 mL contrast medium (CM).
    Results: The image noise significantly decreased while signal-to-noise ratio and contrast-to-noise ratio significantly increased in the order of group A1, B, and A2 (all P<0.05). Compared with group B, the subjective IQ score of group A1 was significantly lower (P<0.05), while that of group A2 showed no significant difference (P>0.05). The effective dose and CM volume of group A were reduced by 79.18% and 50%, respectively, compared with group B.
    Application of Artificial Intelligence–based Image Optimization for Computed Tomography Angiography of the Aorta With Low Tube Voltage and Reduced Contrast Medium Volume
    Wang, Y et al.
    Journal of Thoracic Imaging (in press)
  • “This article focuses on the role of radiologists in imaging AI and suggests specific ways they can be engaged by (1) considering the clinical need for AI tools in specific clinical use cases, (2) undertaking formal evaluation of AI tools they are considering adopting in their practices, and (3) maintaining their expertise and guarding against the pitfalls of overreliance on technology.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • “Failure of AI algorithms to generalize well to new data arises because everything that AI algorithms that are trained solely on data (deep learning) “know” is based on the data that were used to train them. If the training data do not include certain types of cases that a radiology practice may encounter (eg, different diseases, different image types, artifacts), then the algorithm may provide unexpected results. Bias in training data is a common cause of AI algorithms to fail to generalize, for example, because of differences in patient populations, types of equipment, and imaging parameters used and lack of representation of rare diseases.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.

  • “There are some dangers, however, of unexpected negative consequences of AI on radiology practice, even if these algorithms perform well according to metrics on local practice data as described earlier. The first negative consequence is blind acceptance of the AI output. The AI algorithms are generally expected to be used to supplement, not replace, radiologists, who are presumed to have formulated an independent judgement before considering the output from the AI algorithm.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • “In some cases, especially high-volume and time-pressured practices, there may be a temptation to simply accept the AI reading and not formulate an independent judgement. In that case, radiologist performance will be no better than that of the AI algorithm (of course, the same applies to showing a case to a colleague). The danger in the case of the AI algorithm, however, is that if it does not generalize well to unusual cases, it may lead radiologists astray.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • “Patients have concerns that AI tools could produce restricted views with wrong diagnoses, and they believe that such automated systems should remain secondary to the opinions of radiologists. It will thus be beneficial for radiologists to keep these patient perspectives in mind as well as the pitfalls of assistive technologies as AI algorithms enter the market. Finally, overreliance on technology and temptation to blindly accept AI outputs could adversely affect the training of future radiologists, who may not learn the critical observation and interpretative skills that make radiology a unique discipline.”
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • TAKE-HOME POINTS
    - The pace of AI development is exploding, and the number of AI tools being marketed to radiologists is accelerating, posing challenges for radiologists to decide which tools to adopt.
    - The role of radiologists in imaging AI is to identify important clinical use cases for which these tools are needed and to evaluate their effectiveness in their clinical practice.
    - AI tools are expected to improve radiologist practice, but radiologists must guard against overreliance on these technologies and the potential accompanying loss of clinical expertise.
    Artificial Intelligence in Imaging: The Radiologist’s Role
    Daniel L. Rubin
    J Am Coll Radiol 2019;16:1309-1317.
  • AI 2019 Reality
  • Radiology on Top But!
  • Objective: To evaluate the design characteristics of studies that evaluated the performance of artificial intelligence (AI) algorithms for the diagnostic analysis of medical images.
    Materials and Methods: PubMed MEDLINE and Embase databases were searched to identify original research articles published between January 1, 2018 and August 17, 2018 that investigated the performance of AI algorithms that analyze medical images to provide diagnostic decisions. Eligible articles were evaluated to determine 1) whether the study used external validation rather than internal validation, and in case of external validation, whether the data for validation were collected, 2) with diagnostic cohort design instead of diagnostic case-control design, 3) from multiple institutions, and 4) in a prospective manner. These are fundamental methodologic features recommended for clinical validation of AI performance in real-world practice. The studies that fulfilled the above criteria were identified. We classified the publishing journals into medical vs. non-medical journal groups. Then, the results were compared between medical and non-medical journals.
    Design Characteristics of Studies Reporting the Performance of Artificial Intelligence Algorithms for Diagnostic Analysis of Medical Images: Results from Recently Published Papers
    Dong Wook Kim et al.
    Korean J Radiol 2019;20(3):405-410
  • Results: Of 516 eligible published studies, only 6% (31 studies) performed external validation. None of the 31 studies adopted all three design features: diagnostic cohort design, the inclusion of multiple institutions, and prospective data collection for external validation. No significant difference was found between medical and non-medical journals.
    Conclusion: Nearly all of the studies published in the study period that evaluated the performance of AI algorithms for diagnostic analysis of medical images were designed as proof-of-concept technical feasibility studies and did not have the design features that are recommended for robust validation of the real-world clinical performance of AI algorithms.
    Design Characteristics of Studies Reporting the Performance of Artificial Intelligence Algorithms for Diagnostic Analysis of Medical Images: Results from Recently Published Papers
    Dong Wook Kim et al.
    Korean J Radiol 2019;20(3):405-410
  • What if AI is Dutch Tulips? Or worse?
  • Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures to diseases that have confounded doctors and make health care more efficient, personalized, and accessible. But what if it turns out to be poison? Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston Tuesday that examined the use of AI to accelerate the delivery of precision medicine to the masses. He used an alarming metaphor to explain his concerns: “I think of machine learning kind of as asbestos,” he said. “It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.”
  • "If computers continue to obey Moore's Law, doubling their speed and memory capacity every eighteen months, the result is that computers are likely to over​take humans in intelligence at some point in the next hundred years. When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours. It's tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.
    Brief Answers to the Big Questions
    Stephen Hawking
  • "For the last twenty years or so, AI has been focused on the problems surrounding the construction of intelligent agents, systems that perceive and act in a particular environment. In this context, intelligence is related to statistical and economic notions of rationality -- that is, colloquially, the ability to make good decisions, plans or inferences. As a result of this recent work, there has been a large degree of integration and cross-fertilisation among Al, machine- learning, statis​tics, control theory, neuroscience and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine transla​tion, legged locomotion and question-answering systems.
    Brief Answers to the Big Questions
    Stephen Hawking
  • “AI can augment our existing intelligence to open up advances in every area of science and society. However, it will also bring dangers. While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. The concern is that AI would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours. Others believe that humans can command the rate of technology for a decently long time, and that the potential of AI to solve many of the world's problems will be realised. Although I am well known as an optimist regarding the human race, I am not so sure."
    Brief Answers to the Big Questions
    Stephen Hawking
  • OBJECTIVE. Artificial intelligence (AI) neural networks rapidly convert disparate facts and data into highly predictive analytic models. Machine learning maps image-patient phenotype correlations opaque to standard statistics. Deep learning performs accurate image-derived tissue characterization and can generate virtual CT images from MRI datasets. Natural language processing reads medical literature and efficiently reconfigures years of PACS and electronic medical record information.
    CONCLUSION. AI logistics solve radiology informatics workflow pain points. Imaging professionals and companies will drive health care AI technology insertion. Data science and computer science will jointly potentiate the impact of AI applications for medical imaging.
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • “AI is not a mindless black-box technology passively fixing the world’s data explosion problems; however, under varying degrees of human supervision, superfast computers can process massive datasets through convolutional neural networks (CNNs) of layered algorithms to produce predictive models that would defy standard statistical analyses.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • "Generative adversarial networks (GANs), first described in 2014, are a computing framework for explaining how deep CNNs can make mistakes in correctly predicting images of objects, speech patterns, and natural language symbols from rich datasets. Successful deep CNNs apply discriminative models that back-propagate derivatives and apply dropout algorithms to estimate the probability that an output sample has been derived from training data.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • “A multilayered deep CNN can discriminate pixel depths of 32 bits, far exceeding the typical human visual resolution capacity of 8 bits. This allows AI scientists to apply GANs to attack deep CNN layers by modifying the 32-bit pixel information to the point where a computer erroneously perceives a picture of a panda as a gibbon, while humans still clearly see a panda. This CNN vulnerability can be exploited for medical applications: GANs can create medical records of patient characteristics to determine new drug efficacy in an uncommon disease phenotype or to derive virtual images from another entirely different digital imaging modality.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
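  • The vulnerability described above can be illustrated with a toy sketch. This is not an actual GAN or CNN attack; it is a linear-model analogue (in plain NumPy, an assumption of this sketch) of the gradient-sign idea: a perturbation too small to notice per pixel is enough to flip a classifier's output.

```python
import numpy as np

# Toy illustration (not an actual GAN attack on a deep CNN): a tiny,
# per-pixel perturbation aligned against a linear classifier's weight
# vector flips its prediction, while the "image" is barely changed.
rng = np.random.default_rng(0)

n = 4096                            # stand-in for a 64x64 flattened image
w = rng.normal(size=n)              # weights of a toy linear classifier
b = 0.0
image = rng.normal(size=n)

def predict(x):
    return 1 if x @ w + b > 0 else 0

original = predict(image)

# For a linear model, the gradient of the score w.r.t. the input is w,
# so nudging each pixel by +/- eps (the sign trick) moves the score by
# eps * sum(|w|); eps is chosen just large enough to cross the boundary.
score = image @ w + b
eps = abs(score) / np.sum(np.abs(w)) * 1.01
direction = -np.sign(w) if original == 1 else np.sign(w)
adversarial = image + eps * direction

assert predict(adversarial) != original            # prediction flipped
assert np.max(np.abs(adversarial - image)) < 0.2   # per-pixel change tiny
```

The same principle, applied through back-propagated gradients rather than a fixed weight vector, is what makes deep networks susceptible to the panda-to-gibbon misclassification the quote mentions.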
  • "Global imaging system and software companies have access to diverse imaging data repositories. They are actively entering the cognitive marketplace, either alone (e.g., Philips with Illumeo) or in partnership with AI industry leaders (e.g., Agfa with IBM Watson) . Public-private partnerships in the United Kingdom (National Health Service, Cancer Research UK Imperial Centre and OPTIMAM, DeepMind Health, Google) and the United States (University of California San Francisco, Western Digital, NVIDIA) are compiling big digital mammography databases to train AI machines for accurate breast cancer screening.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brow EW
    AJR 2019; 212:9–14
  • “The potential applications of AI to the field of medical imaging remain to be fully elucidated because the underlying computing technology continues to rapidly improve and to be tested in the clinical environment. One feature that is unique to this field of computer science and to AI in particular is the propensity for researchers from the public and private sectors to orally present and discuss their findings at scientific sessions well in advance of or in lieu of publishing full manuscripts in the peer-reviewed literature. Much of what is typically done to create a solid scientific evidence basis for the use of (and reimbursement for) a new medical technology is missing from this AI orthopraxy.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14
  • “Soon, powerful third-wave AI technologies will seamlessly link NLP skills with vision tasks, greatly enhancing human understanding of information and images. Humans informed by intelligent machines will compute novel insights from diverse digital images in big data repositories. At some future uncertain time, data science and AI applications will enhance human understanding of the veracity of all things digital. Although this augmented future approaches, imperfect humans and machines remain purposefully and necessarily juxtaposed.”
    How Cognitive Machines Can Augment Medical Imaging
    Miller DD, Brown EW
    AJR 2019; 212:9–14

  • Artificial Intelligence- The Next Digital Frontier, McKinsey Global Institute (2017)

  • Hospitals also could improve their capacity utilization by employing AI solutions to optimize many ordinary business tasks. Virtual agents could automate routine patient interactions. Speech recognition software has been used in client services, where it has reduced the expense of processing patients by handling routine tasks such as scheduling appointments and registering people when they enter a hospital. Natural language processing can analyze journal articles and other documents and digest their contents for quick access by doctors. These kinds of applications can have a significant impact without needing to pass a regulatory review.
    Artificial Intelligence- The Next Digital Frontier
    McKinsey Global Institute(2017)
  • We have found that if a sector was slow to adopt digital technologies, it tends to trail the pack in putting AI to use, too. Our report Digital America found that almost one-quarter of the nation’s hospitals and more than 40 percent of its office-based physicians have not yet adopted electronic health record systems. Even those that do have electronic record systems may not be sharing data seamlessly with the patient or with other providers; tests are repeated needlessly and patients are required to recount their medical histories over and over because these systems are not interoperable. Another MGI report, The age of analytics, found that the US health-care sector has realized only 10 to 20 percent of its opportunities to use advanced analytics and machine learning.
    Artificial Intelligence- The Next Digital Frontier
    McKinsey Global Institute(2017)
  • Patients also can benefit directly from the rise of AI in health care. Standardized treatments do not work for every patient, given the complexity of each person’s history and genetic makeup, so researchers are using advanced analytics to personalize regimens. Decisions can be based on data analysis and patient monitoring with use of remote diagnostic devices. A startup called Turbine uses AI to design personalized cancer-treatment regimens. The technology models cell biology on the molecular level and seeks to identify the best drug to use for specific tumors. It can also identify complex biomarkers and search for combination therapies by performing millions of simulated experiments each day.
    Artificial Intelligence- The Next Digital Frontier
    McKinsey Global Institute(2017)
  • Medical practices have taken small steps toward incorporating AI into patient management, introducing speech recognition and other language AI technologies to automate steps in the process. In the future, virtual assistants equipped with speech recognition, image recognition, and machine learning tools will be able to conduct consultations, make diagnoses, and even prescribe drugs. If these systems lack enough information to reach a conclusion, a virtual agent could order additional tests and schedule them with the patient. In rural areas, virtual agents will be able to conduct remote consultations. However, this scenario would require patients, providers, and regulators to become comfortable with fully automated diagnosis and prescriptions.
    Artificial Intelligence- The Next Digital Frontier
    McKinsey Global Institute(2017)
  • Prediction Machines The Simple Economics of Artificial Intelligence
    Agrawal A, Gans J, Goldfarb A
    Harvard Business Review Press 2018
  • AI and its developments and impact are not always obvious
    - How many saw that Steve Jobs’ introduction of the iPhone in 2007 meant the beginning of the end for the “Yellow Cab” industry? Uber and Lyft rely on the iPhone.
    - Do you realize that Google is only 20 years old?
  • AI in Medicine: Diagnosis vs Prediction
    - If I read a CT and find a mass in the body of the pancreas that looks like a PDAC, am I making a prediction or a diagnosis?
    - This may help reduce the burden of proof for the FDA if it is a prediction system and not a diagnosis machine
  • Should we stop training Radiologists?
    “whether Radiologists have a future depends on whether they are best positioned to undertake these roles, if other specialists will replace them, or if new job classes will develop, such as a combined radiologist/pathologist (i.e., a role where the radiologist also analyzes biopsies, perhaps performed immediately after imaging).”
  • Should we stop training Radiologists?
    “Therefore, five clear roles for humans in the use of medical imaging will remain, at least in the short and medium term: choosing the image, using real-time images in medical procedures, interpreting machine output, training machines on new technologies, and employing judgement that may lead to overriding the prediction machine’s recommendation, perhaps on information unavailable to the machine.”
  • “Consider Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” At the present, we overestimate the degree to which imaging diagnosis will be affected by machine learning in the present moment and we underestimate the role that radiologists have to play in the development and deployment of these technologies. However, given the inevitable, it is essential for radiologists to stay abreast of developments in the machine learning field.”
    Machine Learning in Radiology: Resistance Is Futile
    Larvie M et al.
    Radiology 2019; 00:1-2
    https://doi.org/10.1148/radiol.2018182312
  • "Machine learning technologies are now deeply embedded in our medical information systems. These methods will ultimately be pervasive in the digital realm of radiology. Resistance really is futile. But that’s okay: The best applications will address pressing clinical needs and improve radiology care. Radiologists are well situated both to contribute to this technological progress, as well as to benefit from machine learning applications in their work. Done well, this will lead to improved patient outcomes and large advances for radiology practice”.
    Machine Learning in Radiology: Resistance Is Futile
    Larvie M et al.
    Radiology 2019; 00:1-2
    https://doi.org/10.1148/radiol.2018182312
  • This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration into spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy.
    Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
    Qihang Yu, Lingxi Xie, Yan Wang, Yuyin Zhou, Elliot K. Fishman, Alan L. Yuille
    AMVIX (in Press)
  • We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach, which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage.
  • We present the Recurrent Saliency Transformation Network, which enjoys three advantages.
    (i) Benefiting from a (recurrent) global energy function, it is easier to generalize our models from training data to testing data.
    (ii) With joint optimization over two networks, both of them get improved individually.
    (iii) By incorporating multi-stage visual cues, more accurate segmentation results are obtained. As the fine stage is less likely to be confused by the lack of contexts, we also observe better convergence during iterations.
  • Despite its effectiveness, this algorithm dealt with the two stages individually, lacking a global energy function to optimize, which limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfying convergence in iterations, and the fine stage sometimes produced even lower segmentation accuracy than the coarse stage.
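  • The saliency-weighting idea can be sketched schematically. This is not the authors' implementation: the "segmenter" here is a toy intensity threshold standing in for a deep network, and the weight floor of 0.1 is a hypothetical choice for illustration, not a value from the paper.

```python
import numpy as np

# Schematic sketch of the saliency transformation idea: the previous
# iteration's segmentation probability map becomes multiplicative
# spatial weights that re-focus the next iteration's input on the
# region where the small organ is likely to be.
def saliency_transform(prob_map, floor=0.1):
    # `floor` keeps some signal outside the predicted region so later
    # iterations can still correct earlier mistakes (illustrative value).
    return floor + (1.0 - floor) * prob_map

def iterate(image, prob_map, segment):
    """One refinement iteration: weight the input by the previous
    probability map, then re-segment the weighted image."""
    weighted = image * saliency_transform(prob_map)
    return segment(weighted)

def segment(x):
    # Toy "segmenter": an intensity threshold standing in for a CNN.
    return (x > 0.5).astype(float)

image = np.clip(np.random.default_rng(1).normal(0.4, 0.2, (8, 8)), 0, 1)
prob = np.ones((8, 8))              # first pass: uniform saliency
for _ in range(3):                  # repeated coarse-to-fine refinement
    prob = iterate(image, prob, segment)
```

In the actual network the threshold is replaced by a trained segmentation model and the weighting is differentiable, which is what allows the joint optimization across scales described in the quote.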
  • “In this paper, we adopt 3D Convolutional Neural Networks to segment volumetric medical images. Although deep neural networks have been proven to be very effective on many 2D vision tasks, it is still challenging to apply them to 3D tasks due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to effectively and efficiently tackle these challenges.”
    A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation
    Zhu Z, Xia Y, Shen W, Fishman EK, Yuille A
    2018 International Conference on 3D Vision (3DV)
    Page(s):682–690
    DOI: 10.1109/3DV.2018.00083
  • “ The proposed 3D-based framework outperforms the 2D counterpart to a large margin since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets which include healthy and pathological pancreases respectively, and achieve the current state-of-the-art in terms of Dice-Sørensen Coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications.”
    A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation
    Zhu Z, Xia Y, Shen W, Fishman EK, Yuille A
    2018 International Conference on 3D Vision (3DV)
    Page(s):682–690
    DOI: 10.1109/3DV.2018.00083
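  • The Dice-Sørensen Coefficient the paper reports is straightforward to compute for binary masks: DSC = 2|A ∩ B| / (|A| + |B|), where A is the predicted mask and B the ground truth; 1.0 means perfect overlap. A minimal sketch (the masks and shapes are illustrative, not from the paper):

```python
import numpy as np

# Dice-Sørensen Coefficient (DSC) for binary segmentation masks.
def dice(pred, truth, eps=1e-7):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True              # 36-pixel "organ"
pred = np.zeros((10, 10), dtype=bool)
pred[3:8, 3:8] = True               # 25-pixel prediction, fully inside truth

# 2*25 / (25 + 36) = 50/61 ≈ 0.8197
print(round(dice(pred, truth), 4))
```

The same formula extends unchanged to 3D volumes, which is how the worst-case "almost 70%" figure in the quote is measured.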
  • "Machine learning is a method of data science that provides computers with the ability to learn without being programmed with explicit rules. Machine learning enables the creation of algorithms that can learn and make predictions. In contrast to rules-based algorithms, machine learning takes advantage of increased exposure to large and new data sets and has the ability to improve and learn with experience.” 


    Current Applications and Future Impact of Machine Learning in Radiology 
Garry Choy et al.
 Radiology 2018; 00:1–11
  • “Machine learning tasks are typically classified into three broad categories, depending on the type of task: supervised, unsupervised, and reinforcement learning.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “In supervised learning, data labels are provided to the algorithm in the training phase (there is supervision in training). The expected outputs are usually labeled by human experts and serve as ground truth for the algorithm. The goal of the algorithm is usually to learn a general rule that maps inputs to outputs. In machine learning, ground truth refers to the data assumed to be true. In unsupervised learning, no data labels are given to the learning algorithm.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “In unsupervised learning, no data labels are given to the learning algorithm. The goal of the machine learning task is to find the hidden structure in the data and to separate data into clusters or groups. In reinforcement learning, a computer program performs a certain task in a dynamic environment in which it receives feedback in terms of positive and negative reinforcement (such as playing a game against an opponent). Reinforcement learning is learning from the consequences of interactions with an environment without being explicitly taught.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
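  • The supervised/unsupervised distinction in the quoted taxonomy can be made concrete with a toy sketch (synthetic 2D data, not a radiology model): a nearest-centroid classifier fit with human-provided labels versus a k-means clustering that never sees the labels and must discover the groups itself.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.5, (50, 2))       # cluster around (0, 0)
b = rng.normal(3.0, 0.5, (50, 2))       # cluster around (3, 3)
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)       # ground-truth labels (supervision)

# Supervised: centroids are fit USING the labels y.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
def classify(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Unsupervised: 2-means clustering, which never sees y; it only looks
# for hidden structure (two groups) in the data itself.
def kmeans(X, k=2, iters=10):
    centers = X[[0, -1]].copy()         # init from two data points
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels

assert classify(np.array([2.9, 3.1])) == 1   # supervised prediction
clusters = kmeans(X)                         # groups found without labels
assert clusters[0] != clusters[-1]           # the two clusters separate
```

Reinforcement learning, the third category, differs from both: there is no fixed dataset at all, only feedback from interacting with an environment.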
  • “Artificial neural networks are statistical and mathematical methods that are a subset of machine learning. These networks are inspired by the way biologic nervous systems process information with a large number of highly interconnected processing elements, which are called neurons, nodes, or cells. An artificial neural network is structured as one input layer of neurons, one or more “hidden layers,” and one output layer. Each hidden layer is made up of a set of neurons, where each neuron is fully connected to all neurons in the previous layer.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
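  • The architecture described in the quote (input layer, fully connected hidden layer, output layer) can be sketched as a minimal forward pass. The layer sizes, random weights, and choice of ReLU/softmax here are illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 4, 8, 2
W1 = rng.normal(size=(n_in, n_hidden))   # every input neuron connects
b1 = np.zeros(n_hidden)                  # to every hidden neuron ("fully
W2 = rng.normal(size=(n_hidden, n_out))  # connected")
b2 = np.zeros(n_out)

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)    # hidden layer (ReLU)
    logits = hidden @ W2 + b2                # output layer
    exp = np.exp(logits - logits.max())      # softmax: class probabilities
    return exp / exp.sum()

probs = forward(rng.normal(size=n_in))
assert probs.shape == (n_out,) and abs(probs.sum() - 1.0) < 1e-9
```

Training would adjust W1, b1, W2, b2 by back-propagation; stacking more hidden layers between input and output is what turns this into the "deep" networks discussed later in the section.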
  • “For the foreseeable future, widespread application of machine learning algorithms in diagnostic radiology is not expected to reduce the need for radiologists. Instead, these techniques are expected to improve radiology workflow, increase radiologist productivity, and enhance patient care and satisfaction.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “Collection of high-quality ground truth data, development of generalizable and diagnostically accurate techniques, and workflow integration are key challenges for the creation and adoption of machine learning models in radiology practice.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • “In general, machine learning techniques are developed by using a train-test system. Three primary sets of data for training, testing, and validation are ideally needed. The training data set is used to train the model. During training, the algorithm learns from examples. The validation set is used to evaluate different model fits on separate data and to tune the model parameters. Most training approaches tend to overfit the training data, meaning that they find relationships that fit the training data set well but do not hold in general. Therefore, successive iterations of training and validation may be performed to optimize the algorithm and avoid overfitting. In the testing set, after a machine learning algorithm is initially developed, the final model fit may then be applied to an independent testing data set to assess the performance, accuracy, and generalizability of the algorithm.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
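  • The three-way split described in the quote can be sketched as follows: shuffle the dataset, carve off training, validation, and test subsets, and keep the test set untouched until the final evaluation. The 70/15/15 proportions are an illustrative choice, not from the article.

```python
import numpy as np

def split(n, train=0.70, val=0.15, seed=0):
    """Return disjoint index sets for training, validation, and testing."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train = int(n * train)
    n_val = int(n * val)
    return (idx[:n_train],                  # fit model parameters here
            idx[n_train:n_train + n_val],   # tune hyperparameters here
            idx[n_train + n_val:])          # final, one-shot evaluation

train_idx, val_idx, test_idx = split(1000)
assert len(train_idx) == 700 and len(val_idx) == 150 and len(test_idx) == 150
# No case appears in more than one subset:
assert len(set(train_idx) | set(val_idx) | set(test_idx)) == 1000
```

Tuning against the validation set and reporting only the test-set result is precisely the discipline that guards against the overfitting the quote warns about; reusing the test set for tuning silently turns it into a second validation set.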
  • “Fundamentally, machine learning is powerful because it is not “brittle.” A rules-based approach may break when exposed to the real world, because the real world often offers examples that are not captured within the rules that a programmer uses to define an algorithm. With machine learning, the system simply uses statistical approximation to respond most appropriately based on its training set, which means that it is flexible. Additionally, machine learning is a powerful tool because it is generic, that is, the same concepts are used for self-driving cars as are used for medical imaging interpretation. Generalizability of machine learning allows for rapid expansion in different fields, including medicine.”
    Current Applications and Future Impact of Machine Learning in Radiology
    Garry Choy et al.
    Radiology 2018; 00:1–11
  • There are a number of ways that the field of deep learning has been characterized. Deep learning is a class of machine learning algorithms that use a cascade of many layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. The algorithms may be supervised or unsupervised, and applications include pattern analysis (unsupervised) and classification (supervised). They are based on the (unsupervised) learning of multiple levels of features or representations of the data: higher-level features are derived from lower-level features to form a hierarchical representation. They are part of the broader machine learning field of learning representations of data, and they learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.

 Wikipedia
  • Deep learning algorithms are based on distributed representations. The underlying assumption behind distributed representations is that observed data are generated by the interactions of factors organized in layers. Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can be used to provide different amounts of abstraction.
 Wikipedia
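The idea of stacked layers of nonlinear processing units, each consuming the previous layer's output to build progressively more abstract representations, can be illustrated with a toy forward pass. All weights and inputs below are arbitrary placeholder values, not a trained model.

```python
import math

def dense_layer(inputs, weights, bias):
    """One layer of nonlinear processing units: weighted sum + tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, bias)]

# Three stacked layers: each uses the output of the previous layer as
# its input, forming a hierarchy of representations.
x = [0.5, -1.0]
layer1 = dense_layer(x, [[0.1, 0.2], [0.3, -0.1]], [0.0, 0.1])
layer2 = dense_layer(layer1, [[0.5, 0.5], [-0.2, 0.4]], [0.0, 0.0])
output = dense_layer(layer2, [[1.0, -1.0]], [0.0])
print(output)
```

The only structural point the sketch makes is the cascade itself: no layer sees the raw input except the first, and each level re-represents what the level below produced.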
  • Situational Awareness
    Situation awareness involves being aware of what is happening in the vicinity to understand how information, events, and one's own actions will impact goals and objectives, both immediately and in the near future. One with an adept sense of situation awareness generally has a high degree of knowledge with respect to inputs and outputs of a system, an innate "feel" for situations, people, and events that play out because of variables the subject can control. Lacking or inadequate situation awareness has been identified as one of the primary factors in accidents attributed to human error.[1] Thus, situation awareness is especially important in work domains where the information flow can be quite high and poor decisions may lead to serious consequences (such as piloting an airplane, functioning as a soldier, or treating critically ill or injured patients).
  • “For the biomedical image computing, machine learning, and bioinformatics scientists, the aforementioned challenges will present new and exciting opportunities for developing new feature analysis and machine learning opportunities. Clearly though, the image computing community will need to work closely with the pathology community and potentially whole slide imaging and microscopy vendors to be able to develop new and innovative solutions to many of the critical image analysis challenges in digital pathology.”


    Image analysis and machine learning in digital pathology: Challenges and opportunities.
Madabhushi A, Lee G
Med Image Anal. 2016 Oct;33:170-5.
  • OBJECTIVE. The purposes of this article are to describe concepts that radiologists should understand to evaluate machine learning projects, including common algorithms, supervised as opposed to unsupervised techniques, statistical pitfalls, and data considerations for training and evaluation, and to briefly describe ethical dilemmas and legal risk. 

    CONCLUSION. Machine learning includes a broad class of computer programs that improve with experience. The complexity of creating, training, and monitoring machine learning indicates that the success of the algorithms will require radiologist involvement for years to come, leading to engagement rather than replacement. 


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “ML comprises a broad class of statistical analysis algorithms that iteratively improve in response to training data to build models for autonomous predictions. In other words, computer program performance improves automatically with experience. The goal of an ML algorithm is to develop a mathematic model that fits the data. Once this model fits known data, it can be used to predict the labels of new data. Because radiology is inherently a data interpretation profession, extracting features from images and applying a large knowledge base to interpret those features, it provides ripe opportunities to apply these tools to improve practice.”


    Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “Most ML relevant to radiology is supervised. In supervised ML, data are labeled before the model is trained. For example, in training a project to identify a specific brain tumor type, the label would be tumor pathologic results or genomic information. These labels, also known as ground truth, can be as specific or general as needed to answer the question. The ML algorithm is exposed to enough of these labeled data to allow them to morph into a model designed to answer the question of interest. Because of the large number of well-labeled images required to train models, curating these datasets is often laborious and expensive.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
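As a toy illustration of supervised learning from labeled ground truth, the sketch below fits a nearest-centroid classifier to hypothetical (features, label) pairs. The feature values and the "benign"/"tumor" labels are invented for demonstration only.

```python
def train_nearest_centroid(labeled_data):
    """Fit per-class feature means from (features, label) ground-truth pairs."""
    sums, counts = {}, {}
    for features, label in labeled_data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label of the closest class centroid."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Toy labeled ground truth: two hypothetical imaging features per case.
data = [([1.0, 1.0], "benign"), ([1.2, 0.9], "benign"),
        ([4.0, 4.2], "tumor"), ([3.8, 4.0], "tumor")]
model = train_nearest_centroid(data)
print(predict(model, [4.1, 3.9]))  # tumor
```

The point of the sketch is the supervised pattern itself: the model is shaped entirely by labeled examples, so its quality is bounded by the quality and quantity of the curated ground truth.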
  • “ML encompasses many powerful tools with the potential to dramatically increase the information radiologists extract from images. It is no exaggeration to suggest the tools will change radiology as dramatically as the advent of cross-sectional imaging did. We believe that owing to the narrow scope of existing applications of ML and the complexity of creating and training ML models, the possibility that radiologists will be replaced by machines is at best far in the future. Successful application of ML to the radiology domain will require that radiologists extend their knowledge of statistics and data science to supervise and correctly interpret ML-derived results.”

    
Implementing Machine Learning in Radiology Practice and Research 
Kohli M et al.
AJR 2017; 208:754–760
  • “This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on.”


    Deep Learning in Medical Image Analysis.
Shen D, Wu G, Suk HI
Annu Rev Biomed Eng. 2017 (in press)

  • Unlike in the fields of medicine and health, in the field of artificial intelligence and machine learning, the term validation often refers to the fine-tuning stage of model development, and another term, test, is used instead to mean the process of verifying model performance. 


    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction 
Park SH, Han K
Radiology 2018; 286:800–809
  • “Evaluation of the clinical performance of a diagnostic or predictive artificial intelligence model built with high-dimensional data requires use of external data from a clinical cohort that adequately represents the target patient population to avoid overestimation of the results due to overfitting and spectrum bias.” 


    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction 
Park SH, Han K
Radiology 2018; 286:800–809
  • “The ultimate clinical verification of diagnostic or predictive artificial intelligence tools requires a demonstration of their value through effect on patient outcomes, beyond performance metrics; this can be achieved through clinical trials or well-designed observational outcome research.”


    Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction 
Park SH, Han K
Radiology 2018; 286:800–809
  • “Artificial intelligence is the branch of computer science devoted to creating systems to perform tasks that ordinarily require human intelligence. This is a broad umbrella term encompassing a wide variety of subfields and techniques; in this article, we focus on deep learning as a type of machine learning.”

    
Deep Learning: A Primer for Radiologists
Chartrand G et al.
RadioGraphics 2017; 37:2113–2131
  • “Machine learning is the subfield of artificial intelligence in which algorithms are trained to perform tasks by learning patterns from data rather than by explicit programming. In classic machine learning, expert humans discern and encode features that appear distinctive in the data, and statistical techniques are used to organize or segregate the data on the basis of these features.”


    Deep Learning: A Primer for Radiologists
Chartrand G et al.
RadioGraphics 2017; 37:2113–2131
  • “AI using deep learning demonstrates promise for detecting critical findings at noncontrast-enhanced head CT. A dedicated algorithm was required to detect SAI. Detection of SAI showed lower sensitivity in comparison to detection of HMH, but showed reasonable performance. Findings support further investigation of the algorithm in a controlled and prospective clinical setting to determine whether it can independently screen noncontrast-enhanced head CT examinations and notify the interpreting radiologist of critical findings.”


    Automated Critical Test Findings identification and Online notification system Using artificial intelligence in imaging 
Prevedello LM et al.
Radiology (in press)

  • “To evaluate the performance of an artificial intelligence (AI) tool using a deep learning algorithm for detecting hemorrhage, mass effect, or hydrocephalus (HMH) at non–contrast material–enhanced head computed tomographic (CT) examinations and to determine algorithm performance for detection of suspected acute infarct (SAI).”


    Automated Critical Test Findings identification and Online notification system Using artificial intelligence in imaging 
Prevedello LM et al.
Radiology (in press)
  • “Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data.”

    
Toolkits and Libraries for Deep Learning 
Bradley J. Erickson et al. 
J Digit Imaging (2017) 30:400–405
  • “Even more exciting is the finding that in some cases, computers seem to be able to “see” patterns that are beyond human perception. This discovery has led to substantial and increased interest in the field of machine learning—specifically, how it might be applied to medical images.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)

  • “These algorithms have been used for several challenging tasks, such as pulmonary embolism segmentation with computed tomographic (CT) angiography (3,4), polyp detection with virtual colonoscopy or CT in the setting of colon cancer (5,6), breast cancer detection and diagnosis with mammography (7), brain tumor segmentation with magnetic resonance (MR) imaging (8), and detection of the cognitive state of the brain with functional MR imaging to diagnose neurologic disease (eg, Alzheimer disease).”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “If the algorithm system optimizes its parameters such that its performance improves—that is, more test cases are diagnosed correctly—then it is considered to be learning that task.”



    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “Training: The phase during which the machine learning algorithm system is given labeled example data with the answers (ie, labels)—for example, the tumor type or correct boundary of a lesion. The set of weights or decision points for the model is updated until no substantial improvement in performance is achieved.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
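The training loop described above, updating the model's weights until no substantial improvement in performance is achieved, can be sketched with one-parameter gradient descent. The learning rate, stopping tolerance, and toy labeled data below are assumptions for illustration.

```python
def train(examples, lr=0.1, tol=1e-6, max_epochs=10000):
    """Fit y = w*x to labeled (x, y) examples by gradient descent,
    stopping when the loss no longer substantially improves."""
    w, prev_loss = 0.0, float("inf")
    for _ in range(max_epochs):
        # Gradient of mean squared error with respect to the weight w.
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad
        loss = sum((w * x - y) ** 2 for x, y in examples) / len(examples)
        if prev_loss - loss < tol:  # no substantial improvement: stop
            break
        prev_loss = loss
    return w

# Labeled data generated from y = 3x; training should recover w close to 3.
w = train([(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)])
print(round(w, 3))  # 3.0
```

Real networks update millions of weights rather than one, but the loop is the same shape: measure error on labeled data, nudge the weights downhill, and stop when performance plateaus.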
  • “Deep learning, also known as deep neural network learning, is a new and popular area of research that is yielding impressive results and growing fast. Early neural networks were typically only a few (<5) layers deep, largely because the computing power was not sufficient for more layers and owing to challenges in updating the weights properly. Deep learning refers to the use of neural networks with many layers—typically more than 20.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
  • “CNNs are similar to regular neural networks. The difference is that CNNs assume that the inputs have a geometric relationship—like the rows and columns of images. The input layer of a CNN has neurons arranged to produce a convolution of a small image (ie, kernel) with the image. This kernel is then moved across the image, and its output at each location as it moves across the input image creates an output value. Although CNNs are so named because of the convolution kernels, there are other important layer types that they share with other deep neural networks. Kernels that detect important features (eg, edges and arcs) will have large outputs that contribute to the final object to be detected.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)
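The kernel-sliding operation described above can be sketched as a plain 2D "valid" convolution (no padding). The image and the edge-detecting kernel below are toy values chosen so the edge response is easy to see.

```python
def convolve2d(image, kernel):
    """Slide a small kernel across the image; each placement yields
    one output value (valid convolution, no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

# A vertical-edge kernel produces large outputs only where intensity
# changes sharply from left to right.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_kernel = [[-1, 1],
               [-1, 1]]
print(convolve2d(image, edge_kernel))  # [[0, 18, 0], [0, 18, 0]]
```

The large values in the middle column mark the vertical edge, matching the quote's point that feature-detecting kernels give large outputs where their feature is present.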
  • “Machine learning is already being applied in the practice of radiology, and these applications will probably grow at a rapid pace in the near future. The use of machine learning in radiology has important implications for the practice of medicine, and it is important that we engage this area of research to ensure that the best care is afforded to patients. Understanding the properties of machine learning tools is critical to ensuring that they are applied in the safest and most effective manner.”


    Machine Learning for Medical Imaging 
 Bradley J. Erickson et al.
 RadioGraphics 2017 (in press)

Privacy Policy

Copyright © 2024 The Johns Hopkins University, The Johns Hopkins Hospital, and The Johns Hopkins Health System Corporation. All rights reserved.