Lessons from the Failure of Canada's Artificial Intelligence and Data Act

    Abdullah H. Ishaque, M.D., Ph.D., Abdi Aidid, J.D., LL.M., and Gelareh Zadeh, M.D., Ph.D.

    Abstract

    Canada’s initial attempt at AI governance, the Artificial Intelligence and Data Act (AIDA), was introduced as part of Bill C-27 but died with the prorogation of Parliament in January 2025. AIDA sought to establish a risk-based regulatory framework; however, it was criticized for its lack of specificity, its underinclusiveness, and its absence of sector-specific oversight, shortcomings that are particularly consequential for health care applications of AI. AIDA's broad, generalized approach left regulatory gaps concerning safety, bias, transparency, and patient privacy in AI-driven medical decision-making. In this article, we analyze AIDA's shortcomings in governing health care AI and propose key reforms to guide future legislative efforts. A targeted, sector-specific approach is essential to ensure the safe and effective integration of AI into health care while fostering responsible innovation. The Canadian experience offers important lessons for global AI regulation, particularly in balancing technological progress with patient safety and ethical considerations.