Randomized Study of the Impact of AI on Perceived Legal Liability for Radiologists

    Michael H. Bernstein, Ph.D., Brian Sheppard, S.J.D., Michael A. Bruno, M.D., Parker S. Lay, B.Sc., and Grayson L. Baird, Ph.D.

    Abstract

    Background: Artificial intelligence (AI) will have unintended consequences for radiology as its applications in patient care continue to expand. When a radiologist misses an abnormality on an image, the radiologist's perceived liability may differ according to whether AI also missed the abnormality.

    Methods: Adults in the United States viewed a vignette describing a radiologist being sued for missing a brain bleed (n=652) or cancer (n=682). We randomly assigned participants to one of five conditions. In four conditions, participants were told an AI system was used: either the AI agreed with the radiologist and also failed to find the pathology (AI agree), or the AI did find the pathology (AI disagree). Some participants were given additional context about the performance of AI: a 1% AI false omission rate (FOR) was presented when the AI agreed with the radiologist's finding (AI agree + FOR), and a 50% AI false discovery rate (FDR) was presented when the AI disagreed with the radiologist's finding (AI disagree + FDR). There was also a "no AI" control condition. Otherwise, vignettes were identical. Participants indicated whether the radiologist met their duty of care as a proxy for whether they would side with the defense (radiologist) or the plaintiff.
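
    The abstract does not spell out how FOR and FDR are computed; assuming the standard contingency-table definitions (with FN, TN, FP, and TP denoting false negatives, true negatives, false positives, and true positives), they are:

    \[
        \mathrm{FOR} = \frac{FN}{FN + TN}, \qquad \mathrm{FDR} = \frac{FP}{FP + TP}
    \]

    Under these definitions, a 1% FOR means that when the AI reports no abnormality, it is wrong about 1% of the time, and a 50% FDR means that when the AI flags an abnormality, roughly half of those flags are false positives.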

    Results: Participants were more likely to find the radiologist legally liable in the AI disagree condition than in the AI agree condition (brain bleed: 72.9% vs. 50.0%, P=0.001; cancer: 78.7% vs. 63.5%, P=0.01) and in the AI disagree condition than in the no AI condition (brain bleed: 72.9% vs. 56.3%, P=0.01; cancer: 78.7% vs. 65.2%, P=0.04). Participants were less likely to side with the plaintiff when the additional context of FDR or FOR was provided for a brain bleed: AI disagree versus AI disagree + FDR (brain bleed: 72.9% vs. 48.8%, P=0.001; cancer: 78.7% vs. 73.1%, P=0.20), and AI agree versus AI agree + FOR (brain bleed: 50.0% vs. 34.0%, P=0.01; cancer: 63.5% vs. 56.4%, P=0.19).

    Conclusions: Radiologists who fail to find an abnormality are viewed as more culpable when the AI system they used detected that abnormality. Presenting participants with AI error data decreased perceived liability. These findings have relevance for courtroom proceedings.