J Am Coll Radiol. 2020 Nov;17(11):1371-1381. doi: 10.1016/j.jacr.2020.08.018.
Amara Tariq, Saptarshi Purkayastha, Geetha Priya Padmanaban, Elizabeth Krupinski, Hari Trivedi, Imon Banerjee, Judy Wawira Gichoya
Purpose: Despite tremendous gains from deep learning and the promise of artificial intelligence (AI) in medicine to improve diagnosis and save costs, a large translational gap remains in implementing and using AI products in real-world clinical settings. Adoption of reporting standards such as Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD), Consolidated Standards of Reporting Trials (CONSORT), and the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) is increasing to improve the peer-review process and reporting of AI tools. However, no such standards exist for product-level review.
Methods: A review of clinical trials showed a paucity of evidence for radiology AI products; thus, the authors developed a 10-question assessment tool for reviewing AI products, with an emphasis on validation and result dissemination. The assessment tool was applied to commercial and open-source diagnostic algorithms to extract evidence on their clinical utility.
Results: Technical information on methodologies is limited for FDA-approved algorithms compared with open-source products, likely because of intellectual property concerns. Furthermore, FDA-approved products use much smaller data sets than open-source AI tools, because the terms of use of public data sets restrict them to academic and noncommercial entities, precluding their use in commercial products.
Conclusions: Overall, this study reveals a broad spectrum of maturity and clinical use among AI products, but a large gap remains in evaluating the actual performance of AI tools in clinical practice.