Abstract

Excerpted From: Amber Bolden, From Biased Data Inputs to Your Discriminatory Diagnosis Outputs: A Review of Legal Liability for Artificial Intelligence in Healthcare, 30 Michigan Technology Law Review 174 (Fall 2024).

 

While health disparities in America arise from non-medical circumstances, members of certain protected classes experience healthcare disparities because of the biases of medical professionals. The history of biased diagnoses, whether intentional or unintentional, is as long as the history of the medical profession itself. That those biases are becoming training data for algorithms raises concerns as the medical field increasingly incorporates and standardizes artificial and augmented intelligence in patient diagnosis and treatment. Currently unregulated but with life-determining potential, artificial intelligence (AI) used in patient treatment poses important questions: should the doctor, the provider, or the AI developers be liable, and for what?

When a patient believes their doctor's bias resulted in an unfair or harmful treatment or diagnosis, they can sue through one of three nonexclusive means: (1) a general medical malpractice claim, which hinges on a hired expert's definition of the customary standard of care; (2) a negligence claim for failure to obtain informed consent (inadequate physician disclosure), which does not depend on a standard of care but requires the patient to show that an alternative approach would not have resulted in harm; or (3) a Title VI claim under the 1964 Civil Rights Act, which is limited to race-based discrimination and applies only to professionals receiving federal funding (excluding physicians who solely receive federal funding for treating patients covered under Medicare Part B). Proof of bias alone may be insufficient to substantiate a claim--there must also be a harm. When the same bias occurs, but the harm is due to the use of AI, it is unclear which of these paths, if any, would be best for a patient to pursue. One proposal theorizes suing the doctor directly under civil rights law. Another advocates negligence--of the AI itself, the user of the AI (health practitioner), or the enterprise (such that all involved “in the use and implementation of the AI systems should justly bear some responsibility”)--coupled with a different “duty of care.”

However, determining liability for discriminatory predictions presents greater complexity. For example, doctors and other medical professionals may not hold the same biases as those from whom the machine learned, nor may they be aware that their reliance on AI to augment their own capabilities now includes discriminatory suggestions. Algorithm creators may possess limited insight into the data's underlying meaning, merely constructing a framework for the AI to process data and generate results. Even those who purchase the AI, and potentially the AI itself, may be unaware of its potential to yield or distinguish discriminatory outputs. Nevertheless, all of these actors could, and perhaps should, be held liable when someone suffers as a consequence of a discriminatory diagnosis or treatment.

This paper proceeds as follows. Section II will examine how bias manifests in healthcare, followed by an exploration of how non-AI biased diagnoses and treatments are currently addressed within the legal framework. Section III will explore how AI outputs can perpetuate discriminatory care. Section IV then catalogues and analyzes potential legal approaches to biased AI diagnoses and treatments moving forward. A brief conclusion follows.

 

[. . .]

 

The healthcare sector is dynamic and fluid and, with the introduction of predictive AI, will continue to evolve. While AI can significantly improve the field of medicine, the technology also holds the potential to exacerbate healthcare disparities for members of protected classes. The current legal landscape makes it difficult to prove discrimination-related harm, especially when the discrimination is carried out by AI. Using a modified standard of care built around the use of AI and applying a nuanced “Hand Formula” could help patients navigate state malpractice claims. At the federal level, creating national guidance and amending Section 1557 could provide more uniform protection for vulnerable groups and, under a common enterprise theory of liability, extend liability to AI developers. Forward-looking solutions, however, are as important as after-the-fact consequences in minimizing individual discrimination and disparate impact in the healthcare sector. Healthcare institutions and AI developers should proactively collaborate to establish methods for mitigating bias in their data, their algorithms, and the humans who develop and use them; conform to industry and federal principles for transparency and fairness in AI; continue to disclose directly to patients how the system works and the faults that accompany it; and invest in insurance.
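For reference, the classic Hand Formula, drawn from United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947), treats a defendant as negligent when the burden of adequate precautions is less than the expected harm:

$$B < P \cdot L$$

where $B$ is the burden of taking precautions, $P$ is the probability of the harm occurring, and $L$ is the magnitude of the resulting loss. A nuanced version applied to AI-assisted care might, for example, weigh the burden of auditing or overriding an algorithmic recommendation against the expected harm of relying on a biased output.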


J.D. Candidate, University of Notre Dame Law School, 2024; B.S. in Chemistry, B.A. in Foreign Affairs, University of Virginia, 2018.