Using Artificial Intelligence in Canadian Healthcare: Legal Reasoning and Governance Challenges in Privacy, Consent, and Transparency

dc.contributor.advisor: Stark, Charles
dc.contributor.author: Thiyagaratnam, Earlel
dc.date.accessioned: 2026-04-01T15:02:28Z
dc.date.issued: 2025-11-28
dc.description.abstract: The growing use of artificial intelligence (AI) in Canadian healthcare raises significant governance challenges for legal systems that were largely developed before data-intensive and automated technologies became widespread. This thesis examines how Canadian law currently responds to issues created by health-related AI, including transparency, informed consent, data governance, and professional accountability. Because there are very few Canadian court cases that deal directly with AI in clinical settings, this research analyzes court and tribunal decisions that address closely related issues involving digital technologies, complex information systems, and sensitive health data. These decisions are treated as practical sites where governance problems relevant to health AI already appear, even when artificial intelligence is not explicitly named. Using a qualitative legal case study approach, the thesis analyzes a set of Canadian decisions to examine how courts reason about transparency, patient understanding, data flows, consent, and responsibility in technologically complex contexts. The analysis shows that Canadian courts frequently encounter difficulties when applying existing legal principles to systems that obscure how decisions are produced, rely on extensive data reuse, or embed expertise within technical infrastructures rather than individual professionals. Across cases, courts struggle to balance demands for transparency with confidentiality obligations, to assess whether consent remains meaningful in complex information environments, and to allocate responsibility when automated or system-level tools influence decision-making. The thesis argues that these recurring legal tensions foreshadow deeper governance challenges as AI becomes more integrated into healthcare delivery.
Rather than offering a statutory critique, the research highlights how judicial reasoning reveals structural pressures on concepts such as transparency, consent, and accountability in AI-adjacent contexts. The thesis concludes by identifying areas where clearer governance approaches will be necessary, including improved consent practices, more robust expectations around transparency and explainability, clearer understandings of professional and institutional responsibility, and oversight mechanisms capable of adapting to rapid technological change. The findings demonstrate that while Canadian law provides important guiding principles, their application to health-related AI will require careful development to protect patient rights, maintain public trust, and support ethical healthcare innovation.
dc.identifier.uri: https://hdl.handle.net/20.500.14721/39508
dc.language.iso: en
dc.publisher: The University of Western Ontario
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.subject: Artificial Intelligence (AI)
dc.subject: Healthcare Law
dc.subject: Informed Consent
dc.subject: Privacy and Data Governance
dc.subject: Algorithmic Transparency
dc.subject: Professional Regulation
dc.subject: Accountability
dc.subject: Judicial Reasoning
dc.title: Using Artificial Intelligence in Canadian Healthcare: Legal Reasoning and Governance Challenges in Privacy, Consent, and Transparency
dc.type: thesis
oaire.license.condition: http://creativecommons.org/licenses/by-nc-nd/4.0/
thesis.degree.discipline: Health Information Science
thesis.degree.grantor: The University of Western Ontario
thesis.degree.name: MHIS
uwo.description.laySummary: Artificial intelligence, often called AI, is increasingly used in Canadian healthcare to analyze medical information, support diagnoses, manage health data, and guide decision making. While these tools can be useful, they raise important questions about patient rights, privacy, consent, and responsibility when decisions are influenced by complex technologies. This thesis examines how Canadian courts and tribunals engage with issues that are closely connected to the use of AI in healthcare. Because there are very few Canadian legal cases that directly involve AI in clinical care, the research analyzes related cases involving digital health tools, complex data systems, and the handling of sensitive health information. These cases show how legal challenges relevant to AI already arise in practice, even when artificial intelligence is not explicitly named. The analysis focuses on judicial reasoning about transparency (how clear and understandable systems are), informed consent (whether individuals can meaningfully understand and agree to data use), and accountability (how responsibility is assessed when decisions rely on technical systems). The findings show that courts often struggle to apply existing legal principles in situations where information flows are complex and expertise is embedded in technology rather than solely in human decision makers. These judicial tensions highlight governance challenges that are likely to intensify as AI becomes more integrated into healthcare. Rather than offering a direct evaluation of specific statutes, the thesis uses these patterns in legal reasoning to anticipate areas where future governance approaches may require greater clarity, adaptability, and institutional support.
Overall, the research demonstrates that Canadian law provides important starting principles, but that ongoing attention to transparency, consent, and responsibility will be essential as healthcare technologies continue to evolve.

Files

Original bundle

Name: Thiyagaratnam_Earlel_MHIS_2025_thesis.pdf
Size: 916.93 KB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 3.05 KB
Format: Item-specific license agreed to upon submission