Using Artificial Intelligence in Canadian Healthcare: Legal Reasoning and Governance Challenges in Privacy, Consent, and Transparency
| dc.contributor.advisor | Stark, Charles | |
| dc.contributor.author | Thiyagaratnam, Earlel | |
| dc.date.accessioned | 2026-04-01T15:02:28Z | |
| dc.date.issued | 2025-11-28 | |
| dc.description.abstract | The growing use of artificial intelligence (AI) in Canadian healthcare raises significant governance challenges for legal systems that were largely developed before data-intensive and automated technologies became widespread. This thesis examines how Canadian law currently responds to issues created by health-related AI, including transparency, informed consent, data governance, and professional accountability. Because there are very few Canadian court cases that deal directly with AI in clinical settings, this research analyzes court and tribunal decisions that address closely related issues involving digital technologies, complex information systems, and sensitive health data. These decisions are treated as practical sites where governance problems relevant to health AI already appear, even when artificial intelligence is not explicitly named. Using a qualitative legal case study approach, the thesis analyzes a set of Canadian decisions to examine how courts reason about transparency, patient understanding, data flows, consent, and responsibility in technologically complex contexts. The analysis shows that Canadian courts frequently encounter difficulties when applying existing legal principles to systems that obscure how decisions are produced, rely on extensive data reuse, or embed expertise within technical infrastructures rather than individual professionals. Across cases, courts struggle to balance demands for transparency with confidentiality obligations, to assess whether consent remains meaningful in complex information environments, and to allocate responsibility when automated or system-level tools influence decision-making. The thesis argues that these recurring legal tensions foreshadow deeper governance challenges as AI becomes more integrated into healthcare delivery. Rather than offering a statutory critique, the research highlights how judicial reasoning reveals structural pressures on concepts such as transparency, consent, and accountability in AI-adjacent contexts. The thesis concludes by identifying areas where clearer governance approaches will be necessary, including improved consent practices, more robust expectations around transparency and explainability, clearer understandings of professional and institutional responsibility, and oversight mechanisms capable of adapting to rapid technological change. The findings demonstrate that while Canadian law provides important guiding principles, their application to health-related AI will require careful development to protect patient rights, maintain public trust, and support ethical healthcare innovation. | |
| dc.identifier.uri | https://hdl.handle.net/20.500.14721/39508 | |
| dc.language.iso | en | |
| dc.publisher | The University of Western Ontario | |
| dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | en |
| dc.subject | Artificial Intelligence (AI) | |
| dc.subject | Healthcare Law | |
| dc.subject | Informed Consent | |
| dc.subject | Privacy and Data Governance | |
| dc.subject | Algorithmic Transparency | |
| dc.subject | Professional Regulation | |
| dc.subject | Accountability | |
| dc.subject | Judicial Reasoning | |
| dc.title | Using Artificial Intelligence in Canadian Healthcare: Legal Reasoning and Governance Challenges in Privacy, Consent, and Transparency | |
| dc.type | thesis | |
| oaire.license.condition | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| thesis.degree.discipline | Health Information Science | |
| thesis.degree.grantor | The University of Western Ontario | |
| thesis.degree.name | MHIS | |
| uwo.description.laySummary | Artificial intelligence, often called AI, is increasingly used in Canadian healthcare to analyze medical information, support diagnoses, manage health data, and guide decision making. While these tools can be useful, they raise important questions about patient rights, privacy, consent, and responsibility when decisions are influenced by complex technologies. This thesis examines how Canadian courts and tribunals engage with issues that are closely connected to the use of AI in healthcare. Because there are very few Canadian legal cases that directly involve AI in clinical care, the research analyzes related cases involving digital health tools, complex data systems, and the handling of sensitive health information. These cases show how legal challenges relevant to AI already arise in practice, even when artificial intelligence is not explicitly named. The analysis focuses on judicial reasoning about transparency (how clear and understandable systems are), informed consent (whether individuals can meaningfully understand and agree to data use), and accountability (how responsibility is assessed when decisions rely on technical systems). The findings show that courts often struggle to apply existing legal principles in situations where information flows are complex and expertise is embedded in technology rather than solely in human decision makers. These judicial tensions highlight governance challenges that are likely to intensify as AI becomes more integrated into healthcare. Rather than offering a direct evaluation of specific statutes, the thesis uses these patterns in legal reasoning to anticipate areas where future governance approaches may require greater clarity, adaptability, and institutional support. Overall, the research demonstrates that Canadian law provides important starting principles, but that ongoing attention to transparency, consent, and responsibility will be essential as healthcare technologies continue to evolve. |