Artificial Intelligence-Based Copilots to Generate Causal Evidence

    Maya Petersen, M.D., Ph.D., Ahmed Alaa, Ph.D., Emre Kıcıman, Ph.D., Chris Holmes, Ph.D., and Mark van der Laan, Ph.D.

    Abstract

    While there is growing consensus that real-world data should play a larger role in generating causal evidence for health care, it is less clear whether and how AI can help. Current approaches to AI-driven analysis of health data are ill-equipped to account for the many threats to causal validity. However, the current human-reliant pipeline for causal analysis also falls short: analyses are complex, require multidisciplinary expertise, and are slow, labor-intensive, and error-prone. Here, we speculate about how a “human-in-the-loop” AI-based system could help relieve bottlenecks to high-quality causal analyses. We describe how an AI-based causal copilot, leveraging the formal inferential structure of the causal roadmap, could guide and support researchers through a structured process: translating a causal question into a hypothetical experiment; translating contextual knowledge into transparent and well-justified assumptions; designing, testing, and benchmarking a corresponding statistical analysis plan and code (including integration of machine learning on multimodal data); and supporting causal interpretation of results. Such a system could improve the speed and quality with which researchers conduct causal analyses of real-world data, increase the transparency and verifiability of analyses and assumptions, and ultimately serve as a basis for point-of-care personalized decision support.
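
    To make the roadmap steps concrete, the following is a minimal, hypothetical sketch (in Python, not drawn from the article or any existing copilot system) of the kind of analysis plan such a copilot might scaffold: a causal question stated as a hypothetical experiment, explicitly listed identification assumptions, and a doubly robust (AIPW) estimator combining machine-learning fits of the propensity score and outcome regression. All variable names and the simulated data are illustrative assumptions, not the authors' method.

# Hypothetical sketch of the analytic steps a causal copilot might scaffold.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Step 1: causal question as a hypothetical experiment.
# Estimand: average treatment effect E[Y(1)] - E[Y(0)] of treatment A on outcome Y.

# Simulated stand-in for an observational cohort (illustrative only).
n = 5000
W = rng.normal(size=(n, 3))                          # measured confounders
p_A = 1 / (1 + np.exp(-(0.5 * W[:, 0] - 0.3 * W[:, 1])))
A = rng.binomial(1, p_A)                             # non-randomized treatment
Y = 1.0 * A + W @ np.array([0.8, -0.5, 0.2]) + rng.normal(size=n)

# Step 2: identification assumptions, stated explicitly rather than verified by
# the data alone: no unmeasured confounding given W, positivity, consistency.

# Step 3: statistical analysis plan -- doubly robust (AIPW) estimator with
# machine-learning nuisance fits.
g = LogisticRegression().fit(W, A)                   # propensity score P(A=1 | W)
g1 = np.clip(g.predict_proba(W)[:, 1], 0.01, 0.99)

Q = GradientBoostingRegressor().fit(np.column_stack([A, W]), Y)  # outcome regression
Q1 = Q.predict(np.column_stack([np.ones(n), W]))
Q0 = Q.predict(np.column_stack([np.zeros(n), W]))

aipw = (Q1 - Q0
        + A / g1 * (Y - Q1)
        - (1 - A) / (1 - g1) * (Y - Q0))
ate = aipw.mean()
se = aipw.std(ddof=1) / np.sqrt(n)

# Step 4: interpretation -- a point estimate with a Wald-type 95% CI, whose causal
# reading depends on the assumptions in Step 2 holding.
print(f"ATE estimate: {ate:.2f} (95% CI {ate - 1.96 * se:.2f}, {ate + 1.96 * se:.2f})")

    In practice the copilot described in the abstract would also need to handle sample splitting, sensitivity analyses, and benchmarking of the analysis plan; this sketch only illustrates how the roadmap's question, assumptions, estimator, and interpretation could be made explicit in code.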