Deep Learning: Pathology and Deep Learning Imaging Pearls - Educational Tools | CT Scanning | CT Imaging | CT Scan Protocols - CTisus

  • “Histopathology image evaluation is indispensable for cancer diagnoses and subtype classification. Standard artificial intelligence methods for histopathology image analyses have focused on optimizing specialized models for each diagnostic task. Although such methods have achieved some success, they often have limited generalizability to images generated by different digitization protocols or samples collected from different populations. Here, to address this challenge, we devised the Clinical Histopathology Imaging Evaluation Foundation (CHIEF) model, a general purpose weakly supervised machine learning framework to extract pathology imaging features for systematic cancer evaluation. CHIEF leverages two complementary pretraining methods to extract diverse pathology representations: unsupervised pretraining for tile-level feature identification and weakly supervised pretraining for whole-slide pattern recognition.”
    A pathology foundation model for cancer diagnosis and prognosis prediction.  
    Wang X, Zhao J, Marostica E, et al.
    Nature. 2024 Sep 4. doi: 10.1038/s41586-024-07894-z. Epub ahead of print. PMID: 39232164.
  • “We developed CHIEF using 60,530 whole-slide images spanning 19 anatomical sites. Through pretraining on 44 terabytes of high resolution pathology imaging datasets, CHIEF extracted microscopic representations useful for cancer cell detection, tumour origin identification, molecular profile characterization and prognostic prediction. We successfully validated CHIEF using whole-slide images from 32 independent slide sets collected from 24 hospitals and cohorts internationally. Overall, CHIEF outperformed the state-of-the-art deep learning methods by up to 36.1%, showing its ability to address domain shifts observed in samples from diverse populations and processed by different slide preparation methods. CHIEF provides a generalizable foundation for efficient digital pathology evaluation for patients with cancer.”
    A pathology foundation model for cancer diagnosis and prognosis prediction.  
    Wang X, Zhao J, Marostica E, et al.
    Nature. 2024 Sep 4. doi: 10.1038/s41586-024-07894-z. Epub ahead of print. PMID: 39232164.
  • “We established the CHIEF model, a general-purpose machine learning framework for weakly supervised histopathological image analyses. Unlike commonly used self-supervised feature extractors, CHIEF leveraged two types of pretraining procedure: unsupervised pretraining on 15 million unlabelled tile images and weakly supervised pretraining on more than 60,000 WSIs. Tile-level unsupervised pretraining established a general feature extractor for haematoxylin–eosin-stained histopathological images collected from heterogeneous publicly available databases, which captured diverse manifestations of microscopic cellular morphologies. Subsequent WSI-level weakly supervised pretraining constructed a general-purpose model by characterizing the similarities and differences between cancer types.”
    A pathology foundation model for cancer diagnosis and prognosis prediction.  
    Wang X, Zhao J, Marostica E, et al.
    Nature. 2024 Sep 4. doi: 10.1038/s41586-024-07894-z. Epub ahead of print. PMID: 39232164.
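The two-stage design described above ends in a slide-level model that must pool thousands of tile features into one whole-slide representation. A minimal numpy sketch of attention-based pooling, a common weakly supervised aggregation step in this setting (illustrative only; the weights, dimensions, and gating here are assumptions, not CHIEF's actual architecture):

```python
import numpy as np

def attention_pool(tile_features, w, v):
    """Aggregate tile-level features into one slide-level embedding
    using attention weights learned per tile (a standard weakly
    supervised multiple-instance-learning pooling scheme)."""
    scores = np.tanh(tile_features @ w) @ v      # one scalar score per tile
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over tiles
    slide_embedding = weights @ tile_features    # weighted mean of tile features
    return slide_embedding, weights

rng = np.random.default_rng(0)
tiles = rng.normal(size=(100, 64))   # 100 tiles, 64-dim features each
w = rng.normal(size=(64, 16))
v = rng.normal(size=(16,))
emb, attn = attention_pool(tiles, w, v)
print(emb.shape, round(attn.sum(), 6))   # (64,) 1.0
```

In training, the attention parameters are fit end-to-end from slide-level labels only, which is what makes the supervision "weak": no tile is ever individually annotated.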
  • CHIEF consistently attained superior performance in a variety of cancer identification tasks using either biopsy or surgical resection slides. CHIEF achieved a macro-average area under the receiver operating characteristic curve (AUROC) of 0.9397 across 15 datasets representing 11 cancer types, which is approximately 10% higher than that attained by DSMIL (a macro-average AUROC of 0.8409), 12% higher than that of ABMIL (a macro-average AUROC of 0.8233) and 14% higher than that of CLAM (a macro-average AUROC of 0.8016). In all five biopsy datasets collected from independent cohorts, CHIEF possessed AUROCs of greater than 0.96 across several cancer types, including oesophagus (CUCH-Eso), stomach (CUCH-Sto), colon (CUCH-Colon) and prostate (Diagset-B and CUCH-Pros). On independent validation with seven surgical resection slide sets spanning five cancer types (that is, colon (Dataset-PT), breast (DROID-Breast), endometrium (SMCH-Endo and CPTAC-uterine corpus endometrial carcinoma (UCEC)), lung (CPTAC-lung squamous cell carcinoma (LUSC)) and cervix (SMCH-Cervix and TissueNet)), CHIEF attained AUROCs greater than 0.90.
    A pathology foundation model for cancer diagnosis and prognosis prediction.  
    Wang X, Zhao J, Marostica E, et al.
    Nature. 2024 Sep 4. doi: 10.1038/s41586-024-07894-z. Epub ahead of print. PMID: 39232164.
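The macro-average AUROC quoted above is the unweighted mean of per-dataset AUROCs, so small cohorts count as much as large ones. A short sketch of that computation using the rank-based (Mann-Whitney) definition of AUROC, with toy data rather than the paper's:

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive scores higher than a random negative."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Macro-average: AUROC per dataset, then the unweighted mean (toy data).
datasets = [
    ([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]),   # per-dataset AUROC 0.75
    ([0, 1, 1, 0], [0.2, 0.9, 0.7, 0.6]),    # per-dataset AUROC 1.00
]
macro = np.mean([auroc(y, s) for y, s in datasets])
print(round(macro, 3))   # → 0.875
```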
  • “The CHIEF framework successfully characterized tumour origins, predicted clinically important genomic profiles, and stratified patients into longer-term survival and shorter-term survival groups. Furthermore, our approach established a general pathology feature extractor capable of a wide range of prediction tasks even with small sample sizes. Our results showed that CHIEF is highly adaptable to diverse pathology samples obtained from several centres, digitized by various scanners, and obtained from different clinical procedures (that is, biopsy and surgical resection). This new framework substantially enhanced model generalizability, a critical barrier to the clinical penetrance of conventional computational pathology models.”
    A pathology foundation model for cancer diagnosis and prognosis prediction.  
    Wang X, Zhao J, Marostica E, et al.
    Nature. 2024 Sep 4. doi: 10.1038/s41586-024-07894-z. Epub ahead of print. PMID: 39232164.
  • “In conclusion, CHIEF is a foundation model useful for a wide range of pathology evaluation tasks across several cancer types. We have demonstrated the generalizability of this foundation model across several clinical applications using samples collected from 24 hospitals and patient cohorts worldwide. CHIEF required minimal image annotations and extracted detailed quantitative features from WSIs, which enabled systematic analyses of the relationships among morphological patterns, molecular aberrations and important clinical outcomes. Accurate, robust and rapid pathology sample assessment provided by CHIEF will contribute to the development of personalized cancer management.”
    A pathology foundation model for cancer diagnosis and prognosis prediction.  
    Wang X, Zhao J, Marostica E, et al.
    Nature. 2024 Sep 4. doi: 10.1038/s41586-024-07894-z. Epub ahead of print. PMID: 39232164.
  • “Artificial intelligence (AI) is an area of enormous interest that is transforming health care and biomedical research. AI systems have shown the potential to support patients, clinicians, and health-care infrastructure. AI systems could provide rapid and accurate image interpretation, disease diagnosis and prognosis, improved workflow, reduced medical errors, and lead to more efficient and accessible care. Incorporation of patient-reported outcome measures (PROMs), could advance AI systems by helping to incorporate the patient voice alongside clinical data.”   
    Embedding patient-reported outcomes at the heart of artificial intelligence health-care technologies  
    Samantha Cruz Rivera et al.  
    Lancet Digit Health 2023; 5: e168–73
  • “Pancreatic ductal adenocarcinoma (PDAC) has been left behind in the evolution of personalized medicine. Predictive markers of response to therapy are lacking in PDAC despite various histological and transcriptional classification schemes. We report an artificial intelligence (AI) approach to histologic feature examination that extracts a signature predictive of disease-specific survival (DSS) in patients with PDAC receiving adjuvant gemcitabine. We demonstrate that this AI-generated histologic signature is associated with outcomes following adjuvant gemcitabine, while three previously developed transcriptomic classification systems are not (n = 47). We externally validate this signature in an independent cohort of patients treated with adjuvant gemcitabine (n = 46). Finally, we demonstrate that the signature does not stratify survival outcomes in a third cohort of untreated patients (n = 161), suggesting that the signature is specifically predictive of treatment-related outcomes but is not generally prognostic. This imaging analysis pipeline has promise in the development of actionable markers in other clinical settings where few biomarkers currently exist.”
    Development of an artificial intelligence-derived histologic signature associated with adjuvant gemcitabine treatment outcomes in pancreatic cancer
    Vivek Nimgaonkar et al.
    Cell Reports Medicine 4, 101013, April 18, 2023
  • “In summary, this study identifies an AI-based histologic signature that stratifies disease-related outcomes among patients who have received adjuvant gemcitabine after resection of PDAC, where transcriptional profiling-based sub-typing fails to do so. This signature, if validated in prospective cohorts, has the potential to become one of the first clinically applicable predictive biomarkers in PDAC. Finally, if validated in PDAC, the imaging analysis platform underlying this signature may be generalized to other clinical settings, thereby facilitating the emergence of biomarkers to predict treatment response in diseases for which few actionable biomarkers currently exist.”
    Development of an artificial intelligence-derived histologic signature associated with adjuvant gemcitabine treatment outcomes in pancreatic cancer
    Vivek Nimgaonkar et al.
    Cell Reports Medicine 4, 101013, April 18, 2023
  • Objective: In this study we evaluate the accuracy of the newest version of a smartphone application (SA) for risk assessment of skin lesions.
    Methods: This SA uses a machine learning algorithm to compute a risk rating. The algorithm is trained on 131,873 images taken by 31,449 users in multiple countries between January 2016 and August 2018 and rated for risk by dermatologists. To evaluate the sensitivity of the algorithm we use 285 histopathologically validated skin cancer cases (including 138 malignant melanomas), from two previously published clinical studies (195 cases) and from the SA user database (90 cases). We calculate the specificity on a separate set from the SA user database containing 6000 clinically validated benign cases.
    Results: The algorithm scored a 95.1% (95% CI, 91.9% - 97.3%) sensitivity in detecting (pre)malignant conditions (93% for malignant melanoma and 97% for keratinocyte carcinomas and precursors). This level of sensitivity was achieved with a 78.3% (95% CI, 77.2%-79.3%) specificity.
    Conclusions: This SA provides a high sensitivity to detect skin cancer; however, there is still room for improvement in terms of specificity. Future studies are needed to assess the impact of this SA on the health systems and its users.
    Accuracy of a smartphone application for triage of skin lesions based on machine learning algorithms
    Udrea A et al.
    J European Academy of Dermatology (in press 2019)
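Confidence intervals like the 95.1% (91.9%–97.3%) sensitivity above come from a binomial proportion. A sketch using the Wilson score interval; note two assumptions: the case count (271 of 285) is inferred from the reported percentage, and the paper does not state which interval method it actually used, so exact bounds may differ slightly:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion — one common
    choice for sensitivity/specificity CIs (method here is an assumption)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Sensitivity: roughly 271 of 285 (pre)malignant cases flagged ≈ 95.1%
lo, hi = wilson_ci(271, 285)
print(f"{lo:.3f}-{hi:.3f}")
```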
  • The traditional solution is for doctors to ask colleagues, or to laboriously browse reference textbooks or online resources, hoping to find an image with similar visual characteristics. The general computer vision solution to problems like this is termed content-based image retrieval (CBIR), one example of which is the “reverse image search” feature in Google Images, in which users can search for similar images by using another image as input.
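At its core, CBIR ranks a database of images by embedding similarity to the query image. A bare-bones sketch using cosine similarity, with random vectors standing in for learned image embeddings (production systems such as reverse image search add learned encoders and approximate nearest-neighbour indexing on top of this idea):

```python
import numpy as np

def retrieve(query_vec, gallery, k=3):
    """Return the indices and similarities of the k gallery embeddings
    closest to the query embedding under cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity to every item
    top = np.argsort(-sims)[:k]        # best k matches, highest first
    return top, sims[top]

rng = np.random.default_rng(1)
gallery = rng.normal(size=(500, 128))              # 500 stored embeddings
query = gallery[42] + 0.05 * rng.normal(size=128)  # near-duplicate of item 42
idx, sims = retrieve(query, gallery)
print(idx[0])   # item 42 should rank first
```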
  • “The tool allows a user to select a region of interest, and obtain visually-similar matches. We tested SMILY’s ability to retrieve images along a pre-specified axis of similarity (e.g. histologic feature or tumor grade), using images of tissue from the breast, colon, and prostate (3 of the most common cancer sites). We found that SMILY demonstrated promising results despite not being trained specifically on pathology images or using any labeled examples of histologic features or tumor grades.”
    Building SMILY, a Human-Centric, Similar-Image Search Tool for Pathology
    July 19, 2019
    Narayan Hegde, Software Engineer, Google Health and Carrie J. Cai, Research Scientist, Google Research
  • However, a problem emerged when we observed how pathologists interacted with SMILY. Specifically, users were trying to answer the nebulous question of “What looks similar to this image?” so that they could learn from past cases containing similar images. Yet, there was no way for the tool to understand the intent of the search: Was the user trying to find images that have a similar histologic feature, glandular morphology, overall architecture, or something else? In other words, users needed the ability to guide and refine the search results on a case-by-case basis in order to actually find what they were looking for. Furthermore, we observed that this need for iterative search refinement was rooted in how doctors often perform “iterative diagnosis”—by generating hypotheses, collecting data to test these hypotheses, exploring alternative hypotheses, and revisiting or retesting previous hypotheses in an iterative fashion. It became clear that, for SMILY to meet real user needs, it would need to support a different approach to user interaction.
    Building SMILY, a Human-Centric, Similar-Image Search Tool for Pathology
    July 19, 2019
    Narayan Hegde, Software Engineer, Google Health and Carrie J. Cai, Research Scientist, Google Research
  • Through careful human-centered research described in our second paper, we designed and augmented SMILY with a suite of interactive refinement tools that enable end-users to express what similarity means on-the-fly: 1) refine-by-region allows pathologists to crop a region of interest within the image, limiting the search to just that region; 2) refine-by-example gives users the ability to pick a subset of the search results and retrieve more results like those; and 3) refine-by-concept sliders can be used to specify that more or less of a clinical concept be present in the search results (e.g., fused glands). Rather than requiring that these concepts be built into the machine learning model, we instead developed a method that enables end-users to create new concepts post-hoc, customizing the search algorithm towards concepts they find important for each specific use case. This enables new explorations via post-hoc tools after a machine learning model has already been trained, without needing to re-train the original model for each concept or application of interest.
    Building SMILY, a Human-Centric, Similar-Image Search Tool for Pathology
    July 19, 2019
    Narayan Hegde, Software Engineer, Google Health and Carrie J. Cai, Research Scientist, Google Research
  • “Interestingly, these refinement tools appeared to have supported pathologists’ decision-making process in ways beyond simply performing better on similarity searches. For example, pathologists used the observed changes to their results from iterative searches as a means of progressively tracking the likelihood of a hypothesis. When search results were surprising, many re-purposed the tools to test and understand the underlying algorithm, for example, by cropping out regions they thought were interfering with the search or by adjusting the concept sliders to increase the presence of concepts they suspected were being ignored. Beyond being passive recipients of ML results, doctors were empowered with the agency to actively test hypotheses and apply their expert domain knowledge, while simultaneously leveraging the benefits of automation.”
    Building SMILY, a Human-Centric, Similar-Image Search Tool for Pathology
    July 19, 2019
    Narayan Hegde, Software Engineer, Google Health and Carrie J. Cai, Research Scientist, Google Research
  • With these interactive tools enabling users to tailor each search experience to their desired intent, we are excited for SMILY’s potential to assist with searching large databases of digitized pathology images. One potential application of this technology is to index textbooks of pathology images with descriptive captions, and enable medical students or pathologists in training to search these textbooks using visual search, speeding up the educational process. Another application is for cancer researchers interested in studying the correlation of tumor morphologies with patient outcomes, to accelerate the search for similar cases. Finally, pathologists may be able to leverage tools like SMILY to locate all occurrences of a feature (e.g. signs of active cell division, or mitosis) in the same patient’s tissue sample to better understand the severity of the disease to inform cancer therapy decisions. Importantly, our findings add to the body of evidence that sophisticated machine learning algorithms need to be paired with human-centered design and interactive tooling in order to be most useful.
    Building SMILY, a Human-Centric, Similar-Image Search Tool for Pathology
    July 19, 2019
    Narayan Hegde, Software Engineer, Google Health and Carrie J. Cai, Research Scientist, Google Research
  • “However, no algorithm can perfectly capture an expert's ideal notion of similarity for every case: an image that is algorithmically determined to be similar may not be medically relevant to a doctor's specific diagnostic needs. In this paper, we identified the needs of pathologists when searching for similar images retrieved using a deep learning algorithm, and developed tools that empower users to cope with the search algorithm on-the-fly, communicating what types of similarity are most important at different moments in time. In two evaluations with pathologists, we found that these refinement tools increased the diagnostic utility of images found and increased user trust in the algorithm.”
    Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision Making
    Cai CJ et al.
    ACM ISBN 978-1-4503-5970-2/ 19/ 05. https://doi.org/10.1145/3290605.3300234
  • In this paper, we found that refinement tools not only increased trust and utility, but were also used for critical decision-making purposes beyond guiding an algorithm. Our work brings to light the dual challenges and opportunities of ML: although black-box ML algorithms can be difficult to understand, off-the-shelf image embeddings from DNNs could enable new, lightweight ways of creating interactive refinement and exploration mechanisms. Ultimately, refinement tools gave doctors the agency to hypothesis-test and apply their domain knowledge, while simultaneously leveraging the benefits of automation. Taken together, this work provides implications for how ML-based systems can augment, rather than replace, expert intelligence during critical decision-making, an area that will likely continue to rise in importance in the coming years.
    Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision Making
    Cai CJ et al.
    ACM ISBN 978-1-4503-5970-2/ 19/ 05. https://doi.org/10.1145/3290605.3300234
  • “The diagnosis of most cancers is made by a board-certified pathologist based on a tissue biopsy under the microscope. Recent research reveals a high discordance between individual pathologists. For melanoma, the literature reports on 25-26% of discordance for classifying a benign nevus versus malignant melanoma. A recent study indicated the potential of deep learning to lower these discordances. However, the performance of deep learning in classifying histopathologic melanoma images was never compared directly to human experts. The aim of this study is to perform such a first direct comparison.”
    Deep learning outperformed 11 pathologists in the classification of histopathological melanoma images
    Achim Hekler et al
    European Journal of Cancer 118 (2019) 91–96
  • Findings: The CNN achieved a mean sensitivity/specificity/accuracy of 76%/60%/68% over 11 test runs. In comparison, the 11 pathologists achieved a mean sensitivity/specificity/accuracy of 51.8%/66.5%/59.2%. Thus, the CNN was significantly (p = 0.016) superior in classifying the cropped images.
    Interpretation: With limited image information available, a CNN was able to outperform 11 histopathologists in the classification of histopathological melanoma images and thus shows promise to assist human melanoma diagnoses.
    Deep learning outperformed 11 pathologists in the classification of histopathological melanoma images
    Achim Hekler et al
    European Journal of Cancer 118 (2019) 91–96
  • “With limited image information available, a CNN was able to systematically outperform 11 histopathologists in the classification of histopathological melanoma images and thus shows great potential to assist human melanoma diagnoses. Prospective studies that use whole slides for testing are necessary to confirm this preliminary finding.”
    Deep learning outperformed 11 pathologists in the classification of histopathological melanoma images
    Achim Hekler et al
    European Journal of Cancer 118 (2019) 91–96
  • “It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome.”
    Image analysis and machine learning in digital pathology: Challenges and opportunities.
    Madabhushi A, Lee G
    Med Image Anal. 2016 Oct;33:170-5.
  • “Image analysis and computer assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high resolution digitized whole slide images. Additionally there has been recent substantial interest in combining and fusing radiologic imaging and proteomics and genomics based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again there is a paucity of powerful tools for combining disease specific features that manifest across multiple different length scales.”
    Image analysis and machine learning in digital pathology: Challenges and opportunities.
    Madabhushi A, Lee G
    Med Image Anal. 2016 Oct;33:170-5.
  • “It is clear that molecular changes in gene expression solicit a structural and vascular change in phenotype that is in turn observable on the imaging modality under consideration. For instance, tumor morphology in standard H&E tissue specimens reflects the sum of all molecular pathways in tumor cells. By the same token radiographic imaging modalities such as MRI and CT are ultimately capturing structural and functional attributes reflective of the biological pathways and cellular morphology characterizing the disease. Historically the concept and importance of radiology-pathology fusion has been around and recognized.”
    Image analysis and machine learning in digital pathology: Challenges and opportunities.
    Madabhushi A, Lee G
    Med Image Anal. 2016 Oct;33:170-5.
  • “In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.”
    Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer
    Bejnordi BE et al.
    JAMA. 2017;318(22):2199–2210
  • Question: What is the discriminative accuracy of deep learning algorithms compared with the diagnoses of pathologists in detecting lymph node metastases in tissue sections of women with breast cancer?
    Finding: In cross-sectional analyses that evaluated 32 algorithms submitted as part of a challenge competition, 7 deep learning algorithms showed greater discrimination than a panel of 11 pathologists in a simulated time-constrained diagnostic setting, with an area under the curve of 0.994 (best algorithm) vs 0.884 (best pathologist).
    Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer
    Bejnordi BE et al.
    JAMA. 2017;318(22):2199–2210
  • “Radiology, having converted to digital images more than 25 years ago, is well-positioned to deploy AI for diagnostics. Several studies have shown considerable opportunity to support radiologists in evaluating a variety of scan types including mammography for breast lesions, computed tomographic scans for pulmonary nodules and infections, and magnetic resonance images for brain tumors including the molecular classification of brain tumors.”
    Deep Learning Algorithms for Detection of Lymph Node Metastases From Breast Cancer: Helping Artificial Intelligence Be Seen.
    Golden JA
    JAMA. 2017;318(22):2184–2186
  • “Another challenge to deploying digital pathology was recently addressed. In April 2017, Philips received US Food and Drug Administration clearance for its Philips IntelliSite Pathology Solution to be used for primary pathology diagnostics. This device is used for scanning glass pathology slides and for reviewing these slides on computer monitors. Furthermore, the Philips IntelliSite Pathology Solution has been established as a predicate device that could pave the way for a host of other whole-slide scanners available today to use a 510(k) process for approval rather than a premarket analysis. Many new Food and Drug Administration–approved scanners for primary diagnosis are expected to become available in the coming years.”
    Deep Learning Algorithms for Detection of Lymph Node Metastases From Breast Cancer: Helping Artificial Intelligence Be Seen.
    Golden JA
    JAMA. 2017;318(22):2184–2186
  • “Even though some reimbursement codes exist for computational analyses, they are not widely used and often are rejected. With national health care reimbursement trends moving to quality and safety metrics for value-based care rather than fee for service, the recognition of AI as part of reimbursement strategies that reward value-based care would provide important incentives to develop and implement validated algorithms.”

    
Deep Learning Algorithms for Detection of Lymph Node Metastases From Breast Cancer: Helping Artificial Intelligence Be Seen. 
Golden JA
JAMA. 2017;318(22):2184–2186
  • “AI may be just what pathology has been waiting for. While still requiring evaluation within a normal surgical pathology workflow, deep learning has the opportunity to assist pathologists by improving the efficiency of their work, standardizing quality, and providing better prognostication. Like electron microscopy, immunohistochemistry, and molecular diagnostics ahead of AI, there is little risk of pathologists being replaced. Although their workflow is likely to change, the contributions of pathologists to patient care will continue to be critically important.”


    Deep Learning Algorithms for Detection of Lymph Node Metastases From Breast Cancer: Helping Artificial Intelligence Be Seen. 
Golden JA
JAMA. 2017;318(22):2184–2186
  • Purpose: To develop a machine learning model that allows high-risk breast lesions (HRLs) diagnosed with image-guided needle biopsy that require surgical excision to be distinguished from HRLs that are at low risk for upgrade to cancer at surgery and thus could be surveilled.

    Conclusion: This study provides proof of concept that a machine learning model can be applied to predict the risk of upgrade of HRLs to cancer. Use of this model could decrease unnecessary surgery by nearly one-third and could help guide clinical decision making with regard to surveillance versus surgical excision of HRLs.

    
High-Risk Breast Lesions: A Machine Learning Model to Predict Pathologic Upgrade and Reduce Unnecessary Surgical Excision
Bahl M et al.
Radiology (in press)
  • “Instead of surgical excision of all HRLs, if HRLs categorized with our model to be at low risk for upgrade to cancer were surveilled and the remainder were excised, then 97.4% (37 of 38) of malignancies would be diagnosed at surgery, and 30.6% (91 of 297) of surgeries of benign lesions could be avoided.”


    High-Risk Breast Lesions: A Machine Learning Model to Predict Pathologic Upgrade and Reduce Unnecessary Surgical Excision
Bahl M et al.
Radiology (in press)
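As a quick arithmetic check, the triage figures quoted above can be reproduced directly from the reported counts. The counts come straight from the quotation; the variable names and threshold framing are illustrative, not from the paper.

```python
# Hedged arithmetic check of the model-guided triage figures
# quoted from Bahl et al.; counts are taken from the quotation.

malignant_total = 38    # HRLs that were upgraded to cancer at surgery
malignant_caught = 37   # upgrades still sent to excision under the model
benign_surgeries = 297  # benign HRLs currently excised
benign_avoided = 91     # benign HRLs the model would route to surveillance

sensitivity = malignant_caught / malignant_total
surgery_reduction = benign_avoided / benign_surgeries

print(f"{sensitivity:.1%}")        # 97.4% of malignancies still diagnosed
print(f"{surgery_reduction:.1%}")  # 30.6% of benign surgeries avoided
```

The 30.6% figure is the "nearly one-third" reduction in unnecessary surgery cited in the conclusion, achieved while missing only 1 of 38 malignancies.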
  • “Machine learning could inform shared decision making by the patient and the provider regarding surveillance versus surgical excision of HRLs and thus could support more targeted, personalized approaches to patient care.”

    
High-Risk Breast Lesions: A Machine Learning Model to Predict Pathologic Upgrade and Reduce Unnecessary Surgical Excision
Bahl M et al.
Radiology (in press)
  • “In conclusion, machine learning can be applied as a risk prediction method to identify patients with biopsy-proven HRLs that have the potential for follow-up rather than surgical excision. Future work includes incorporation of mammographic images and histopathologic slides into the machine learning model. Use of our model based on traditional structural features with an additional feature of biopsy pathologic report text has the potential to decrease unnecessary surgery by nearly one-third in women with HRLs and supports shared decision making regarding surveillance versus surgical excision of HRLs.”


    High-Risk Breast Lesions: A Machine Learning Model to Predict Pathologic Upgrade and Reduce Unnecessary Surgical Excision
Bahl M et al.
Radiology (in press)

Copyright © 2024 The Johns Hopkins University, The Johns Hopkins Hospital, and The Johns Hopkins Health System Corporation. All rights reserved.