Using Artificial Intelligence in Canadian Healthcare: Legal Reasoning and Governance Challenges in Privacy, Consent, and Transparency
Date
Authors
Abstract
The growing use of artificial intelligence (AI) in Canadian healthcare raises significant governance challenges for legal systems that were largely developed before data-intensive and automated technologies became widespread. This thesis examines how Canadian law currently responds to issues created by health-related AI, including transparency, informed consent, data governance, and professional accountability.
Because few Canadian court cases deal directly with AI in clinical settings, this research analyzes court and tribunal decisions that address closely related issues involving digital technologies, complex information systems, and sensitive health data. These decisions are treated as practical sites where governance problems relevant to health AI already appear, even when artificial intelligence is not explicitly named.
Using a qualitative legal case study approach, the thesis analyzes a set of Canadian decisions to examine how courts reason about transparency, patient understanding, data flows, consent, and responsibility in technologically complex contexts. The analysis shows that Canadian courts frequently encounter difficulties when applying existing legal principles to systems that obscure how decisions are produced, rely on extensive data reuse, or embed expertise within technical infrastructures rather than individual professionals.
Across these cases, courts struggle to balance demands for transparency against confidentiality obligations, to assess whether consent remains meaningful in complex information environments, and to allocate responsibility when automated or system-level tools influence decision-making. The thesis argues that these recurring legal tensions foreshadow deeper governance challenges as AI becomes more integrated into healthcare delivery. Rather than offering a statutory critique, the research highlights how judicial reasoning reveals structural pressures on concepts such as transparency, consent, and accountability in AI-adjacent contexts.
The thesis concludes by identifying areas where clearer governance approaches will be necessary, including improved consent practices, more robust expectations around transparency and explainability, clearer understandings of professional and institutional responsibility, and oversight mechanisms capable of adapting to rapid technological change. The findings demonstrate that while Canadian law provides important guiding principles, their application to health-related AI will require careful development to protect patient rights, maintain public trust, and support ethical healthcare innovation.