Andrew James Del Gaizo, Thomas F Osborne, Troy Shahoumian, Robert Sherrier
Radiol Artif Intell. 2024 Sep;6(5):e240067. doi: 10.1148/ryai.240067.
The diagnostic performance of an artificial intelligence (AI) clinical decision support solution for acute intracranial hemorrhage (ICH) detection was assessed in a large teleradiology practice, and its impact on radiologist read times and system efficiency was quantified. A total of 61 704 consecutive noncontrast head CT examinations were retrospectively evaluated. System performance was calculated along with mean and median read times for CT studies obtained before (baseline, pre-AI period; August 2021 to May 2022) and after (post-AI period; January 2023 to February 2024) AI implementation. The AI solution demonstrated a sensitivity of 75.6%, specificity of 92.1%, and accuracy of 91.7%; ICH prevalence was 2.70%, and the positive predictive value was 21.1%. Of the 56 745 post-AI CT scans with no bleed identified by a radiologist, examinations falsely flagged as suspected ICH by the AI solution (n = 4464) took an average of 9 minutes 40 seconds (median, 8 minutes 7 seconds) to interpret, compared with 8 minutes 25 seconds (median, 6 minutes 48 seconds) for unremarkable CT scans before AI (n = 49 007) (P < .001) and 8 minutes 38 seconds (median, 6 minutes 53 seconds) after AI implementation when ICH was not suspected by the AI solution (n = 52 281) (P < .001). CT scans with no bleed identified by the AI but reported as positive for ICH by the radiologist (n = 384) took an average of 14 minutes 23 seconds (median, 13 minutes 35 seconds) to interpret, compared with 13 minutes 34 seconds (median, 12 minutes 30 seconds) for CT scans correctly reported as a bleed by the AI (n = 1192) (P = .04). Given the lengthened read times for falsely flagged examinations, system inefficiencies may outweigh the potential benefits of using the tool in a high-volume, low-prevalence environment.
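The reported metrics follow directly from the four agreement counts in the abstract (AI-positive/radiologist-positive = 1192, AI-negative/radiologist-positive = 384, AI-positive/radiologist-negative = 4464, AI-negative/radiologist-negative = 52 281). The sketch below is a minimal check, assuming these four counts fully define the post-AI confusion matrix for the scans whose AI and radiologist classifications are reported; it reproduces the stated sensitivity, specificity, accuracy, prevalence, and positive predictive value.

```python
# Sketch: recompute the abstract's performance metrics from its reported counts.
# Assumption: these four counts define the post-AI confusion matrix (radiologist
# report taken as the reference standard), covering 58 321 examinations.

tp = 1_192   # AI flagged ICH, radiologist reported ICH
fn = 384     # AI did not flag ICH, radiologist reported ICH
fp = 4_464   # AI flagged ICH, radiologist reported no bleed
tn = 52_281  # AI did not flag ICH, radiologist reported no bleed

total = tp + fn + fp + tn              # 58 321 examinations
sensitivity = tp / (tp + fn)           # ~0.756
specificity = tn / (tn + fp)           # ~0.921
accuracy = (tp + tn) / total           # ~0.917
prevalence = (tp + fn) / total         # ~0.027
ppv = tp / (tp + fp)                   # ~0.211

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("accuracy", accuracy), ("prevalence", prevalence), ("PPV", ppv)]:
    print(f"{name}: {value:.1%}")
```

At a prevalence of 2.70%, even 92.1% specificity yields roughly four false-positive flags for every true positive (4464 vs 1192), which is why the positive predictive value falls to about 21% and why the false-positive read-time penalty weighs on overall system efficiency.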