Can a Chatbot Be a Medical Surrogate? The Use of Large Language Models in Medical Ethics Decision-Making

    Isha Harshe, B.S., B.A., Kenneth W. Goodman, Ph.D., and Gauri Agarwal, M.D.

    Abstract

    The use of AI in health care has raised numerous ethical challenges. Issues concerning data privacy, accountability, bias perpetuation, and the identification of appropriate uses and users have prompted scholars and scientists to tackle them. The application of AI itself to practical ethical issues in clinical settings, however, has not been thoroughly explored. We investigated the capacity of five publicly available large language models (Chat Generative Pretrained Transformer 4o mini, Claude 3.5 Sonnet, Copilot for Microsoft 365, Meta AI Llama 3, and Gemini 1.5 Flash) to respond to medical ethics scenarios that may arise when AI is implemented in health care. We assessed these responses and compared them with those of a human expert in medical ethics to analyze the extent to which AI can replicate human ethical decision-making, to outline the distinctions between AI and human cognition, and to evaluate the effectiveness of AI in medical ethics decision-making. Our findings indicate that although AI systems may assist in identifying considerations and guidelines for ethical decision-making, they do not consistently demonstrate the flexibility of thought that humans exhibit when addressing novel ethical cases. AI can support ethical decision-making, but it is not currently capable of the autonomous ethical reasoning required for consultation on patient care.