

Artificial Intelligence in Healthcare: Preparing for a new legal liability landscape

Alicia Bromfield and Kristin McMahon1

Artificial intelligence (AI) has captured the world’s headlines, perhaps in no field more so than healthcare. From surgical robots to diagnostic machines that can detect disease processes such as diabetic retinopathy earlier and more accurately by studying images of the eye, AI has the potential to transform healthcare delivery and improve patient outcomes. While AI is designed to curb human error through automated systems, machine learning, and neural networks, it introduces a new set of risks to the healthcare liability landscape. To the extent a healthcare provider utilizes AI to treat a patient who has a less-than-desired outcome, we anticipate liability suits against both those healthcare providers and AI software companies.

In this article, we will explore anticipated liability theories that the plaintiffs’ bar may advance against defendants in cases involving new healthcare AI applications and the liability insurance needs created by such new risks.

What is artificial intelligence?

The term “artificial intelligence” (AI) is not universally defined.2 When people talk about AI, they usually mean “machine learning,” a subset of AI that uses algorithms to detect patterns in data.3 AI’s transformative potential stems from its ability to integrate, parse, and synthesize large quantities of data.4 AI systems are able to identify patterns and links at a much faster rate than human healthcare providers.5 Thus, AI will play a critical role in diagnostics, customizing individual treatment plans, and keeping providers current on the latest medical research.6 As highlighted in a recent New York Times article, many organizations, including Google, “are developing and testing systems that analyze electronic health records in an effort to flag medical conditions such as osteoporosis, diabetes, hypertension and heart failure.”7 Researchers are also developing technology to automatically detect signs of disease and illness in M.R.I.s and X-rays.8

Because of the nature of machine learning, a health AI application is only as good as the training data that it works with.9 If the data programmed into the algorithm is flawed, limited, or biased, the outcome will be imperfect (“garbage in, garbage out”). Potentially problematic areas include: (1) the clinical data fed into the algorithm, (2) the algorithm itself, and (3) the population within which the algorithms are used.
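“Garbage in, garbage out” can be made concrete with a short sketch. The Python example below (a toy scikit-learn model trained on entirely synthetic data, not any real clinical system) fits a classifier on records drawn from one population and then scores it on a second population in which the biomarker relates to the outcome differently; accuracy collapses for the group missing from the training data.

```python
# Illustrative sketch only: synthetic data and a toy model, not a real
# clinical system. It shows how a model trained on one subpopulation
# ("garbage in") can perform poorly on another ("garbage out").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, effect):
    """Synthetic cohort: one biomarker whose relationship to the
    outcome differs across subpopulations (the 'effect' parameter)."""
    x = rng.normal(size=(n, 1))
    y = (effect * x[:, 0] + rng.normal(scale=0.5, size=n)) > 0
    return x, y.astype(int)

# Training data drawn entirely from population A.
x_a, y_a = make_cohort(5000, effect=1.0)    # population A
x_b, y_b = make_cohort(5000, effect=-1.0)   # population B: reversed relationship

model = LogisticRegression().fit(x_a, y_a)

print("accuracy on population A:", model.score(x_a, y_a))  # high
print("accuracy on population B:", model.score(x_b, y_b))  # near chance or worse
```

Notably, the flaw is invisible from the training metrics alone, which is why the population used to develop and validate a health AI application matters as much as the algorithm itself.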


1 We acknowledge and thank Sara Gerke, Rina Spence, and Carmel Shachar of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School for their valuable insight and discussions.

2 For more information, see Sara Gerke and Joshua Feldman, “The Tricky Task of Defining AI in the Law,” Bill of Health, November 30, 2018, accessed July 8, 2019, http://blog.petrieflom.law.harvard.edu/2018/11/30/the-tricky-task-of-defining-ai-in-the-law/.

3 Kun-Hsing Yu, Andrew L. Beam, and Isaac S. Kohane, “Artificial Intelligence in Healthcare,” Nature Biomedical Engineering 2, no. 10 (2018): 720, doi:10.1038/s41551-018-0305-z.

4 Robert David Hart, “If You’re Not a White Male, Artificial Intelligence’s Use in Healthcare Could Be Dangerous,” Quartz, August 01, 2018, accessed July 22, 2019, https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/.

5 Ibid.

6 Ibid.

7 Cade Metz, “A.I. Shows Promise Assisting Physicians,” The New York Times, February 11, 2019, accessed July 22, 2019.

8 Ibid.

9 Sara Gerke, Daniel B. Kramer, and I. Glenn Cohen, “Ethical and Legal Challenges of Artificial Intelligence in Cardiology,” AIMed Magazine, 2019, 15, accessed July 08, 2019, https://ai-med.io/magazine/.


10 E. Olthof, D. Nio, and W.A. Bemelman, “The Learning Curve of Robot-Assisted Laparoscopic Surgery,” in Vanja Bozovic (ed.), Medical Robotics, available from http://cdn.intechopen.com/pdfs/633/InTech-The_learning_curve_of_robot_assisted_laparoscopic_surgery.pdf.

11 “What Is Product Liability?” FindLaw, accessed July 03, 2019, https://injury.findlaw.com/product-liability/what-is-product-liability.html; W. Nicholson Price II, “Artificial Intelligence in Health Care: Applications and Legal Implications,” The SciTech Lawyer 14, no. 1 (2017): 11, 12.

12 Kun-Hsing Yu, Andrew L. Beam, and Isaac S. Kohane, “Artificial Intelligence in Healthcare,” Nature Biomedical Engineering 2, no. 10 (2018): 727, doi:10.1038/s41551-018-0305-z.

13 Although “duty to train” cases have not gained much traction in the medical device field due to federal pre-emption, it is unclear whether the theory might be successfully advanced in an AI context.

14 Regardless of the ultimate success of such allegations, companies will still need to defend against them. See, e.g., Glennen v. Allergan, Inc., Cal. Rptr. 3d, 2016 WL 1732243 (Cal. Ct. App. Apr. 29, 2016); Taylor v. Intuitive Surgical, Inc., 187 Wash. 2d 743, 754, 389 P.3d 517, 523 (2017) (“While Taylor argued that ISI had a duty to train to the trial court, Taylor does not raise that claim to this court.”).

New AI legal liability landscape

Assume a healthcare provider is utilizing AI-enabled clinical decision support software to treat an African American patient for heart disease. Now suppose the AI software recommends a high blood pressure/cardiovascular disease treatment plan that proves ineffective, the patient’s disease process progresses, and he dies from his condition. Upon review, the healthcare provider learns that the data feeding the algorithm contained within the AI software originated from clinical trials conducted on Caucasian subjects only. While the healthcare provider followed the recommendation of the software, the data underpinning the algorithm was arguably flawed, as it was derived from a distinct clinical population that responds more favorably to a different treatment regimen than what was prescribed for this patient. Let’s assume the patient’s family files a wrongful death lawsuit for alleged negligence in the provision of care and seeks to recover monetary damages. Who will be the target defendants in this litigation?

The most obvious target would be the treating physician through a theory of medical malpractice liability. While the standard of care in the AI context is evolving, a central issue for the case against the physician will likely be whether relying on AI clinical decision software output was a breach of the standard of care. As with any new technology, medical providers will need to understand the benefits and limitations of each health AI application. Other critical factors the court or jury might consider include whether the provider substituted the AI clinical software output for his own medical judgment, the provider’s familiarity with the data feeding the algorithm, and the confidence level in its application to his patient population. We anticipate healthcare professionals will experience a learning curve similar to that of surgeons with the advent of laparoscopic/robotic surgery as they acclimate to incorporating AI effectively into their medical practice.10

In addition to the individual physician, the physician’s employer (a group practice or hospital, if applicable) could face vicarious liability for the acts of its employed physician, or liability independent of the physician for failing to sufficiently vet or investigate the AI company that relied on flawed data and for endorsing the use of the AI software by its providers before it had been put through a rigorous credentialing process. Liability will depend on many factors, including whether the hospital directed the physician to utilize the software or whether the physician independently decided to adopt the program and abide by its outputs.

The AI software company itself could face a litany of claims, including product liability, false advertising, and negligent training and supervision. Liability against the software company will often require a determination as to whether the AI software and its “algorithmic bias” constitute a “product,” and therefore fall under product liability law, or a “service,” which would require analysis under a tort theory of liability. A plaintiff in a product liability case typically needs to prove two things: (1) the product that caused injury was defective; and (2) the defect made the product unreasonably dangerous.11 Since healthcare software has generally been regarded as a support tool to assist providers in making treatment decisions, courts have so far been reluctant to apply product liability law to software developers.12 But this might change in the future when it comes to “black box” algorithms, where even the physician has difficulty interpreting the results reached by the AI.13 In addition, the competition among AI companies to be first to market with product capabilities is intense, setting the stage for plaintiffs’ attorneys to argue that the AI technology has not been adequately vetted and that AI companies put profits over people in prematurely releasing a defective product to the healthcare community.
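The interpretability problem has partial workarounds, though none fully opens the box. As a minimal sketch of one widely used probing technique, permutation importance, the Python example below (synthetic data, hypothetical feature names, and a generic scikit-learn model, not any particular vendor’s software) shuffles each input feature in turn and measures how much the model’s accuracy drops, giving a rough view of which inputs actually drive its recommendations.

```python
# Illustrative sketch: permutation importance as a rough probe of a
# "black box" model. Feature names are hypothetical; data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["systolic_bp", "ldl", "age", "hba1c"]  # hypothetical inputs

# Synthetic records in which only systolic_bp and age drive the outcome.
X = rng.normal(size=(2000, len(features)))
y = ((X[:, 0] + 0.5 * X[:, 2]) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature and measure how much accuracy degrades; large
# drops mark the inputs the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Both plaintiffs and defendants may lean on post-hoc explanations of this kind, though they approximate, rather than reveal, the model’s actual reasoning.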

AI software companies will also likely be defending against claims of negligent misrepresentation and/or false advertising. AI company websites may include statements advertising their products as providing “new levels of diagnostic certainty” or being “proven to be effective.” Such promises will be cited by plaintiffs as treatment/outcome guarantees in support of their false advertising allegations. Because AI works through machine “learning,” the program is continually evolving. Thus, there may be a duty to continuously test the algorithm to ensure that its results remain sound as it learns from additional data. Plaintiffs will likely argue that the software companies failed to timely test their algorithms and make necessary updates.
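What such continuous testing might look like in practice is sketched below, with hypothetical names, thresholds, and synthetic data rather than any actual vendor’s process: each retrained candidate model is scored against a frozen holdout set and blocked from release if a key clinical metric regresses.

```python
# Illustrative sketch of a release gate for a continuously retrained
# model. Thresholds and names are hypothetical, not from any vendor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

MIN_SENSITIVITY = 0.90  # hypothetical floor agreed with clinical users

rng = np.random.default_rng(2)
X = rng.normal(size=(3000, 3))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

# Freeze a holdout set once; every retrained candidate is scored on it.
X_train, y_train = X[:2000], y[:2000]
X_holdout, y_holdout = X[2000:], y[2000:]

def validate_release(candidate) -> bool:
    """Block deployment if the retrained model's sensitivity regresses."""
    sensitivity = recall_score(y_holdout, candidate.predict(X_holdout))
    return sensitivity >= MIN_SENSITIVITY

candidate = LogisticRegression().fit(X_train, y_train)
print("release approved:", validate_release(candidate))
```

A documented gate of this kind would also give an AI company contemporaneous evidence that it tested its algorithm as the model continued to learn.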

Plaintiffs may also pursue the AI software companies for failing to ensure that the providers leasing their software are trained on how to incorporate the technology into their medical practice. How will AI companies ensure healthcare professionals undergo formal training on the diagnostic capabilities of the AI software applications and are educated on their limitations? Is there a legal duty to do so? AI software companies should consider investing in healthcare training, both to ensure patient safety and to avoid allegations similar to those levied against other manufacturers of new healthcare technology.14




Insurance considerations

In light of the evolving AI liability landscape, healthcare providers and the companies specializing in AI healthcare diagnostics and predictive analytics should evaluate coverage under their current insurance programs.

Since it is unclear how the courts will analyze healthcare AI liability, AI companies may want to ensure their current insurance programs respond to patient bodily injury claims pursued under either product or tort law. Typically, a product liability policy will cover damages due to a claim alleging bodily injury or property damage caused by an occurrence. Although the language across policies varies by company, an occurrence is generally defined as “an accident, including continuous or repeated exposure to substantially the same general harmful conditions.”15 In addition, an occurrence cannot result in injury or damage expected or intended by the Insured.16 Does a claim against an AI company constitute an “occurrence”? If so, when is the occurrence deemed to have occurred: (1) the date the patient suffers the bodily injury/adverse event; (2) the date on which the software/algorithm is run and provides the erroneous treatment recommendation; or (3) the date the AI software incorporated the “biased” clinical data into the algorithm-based software? The occurrence date is key to determining which policy period is potentially implicated and how any policy retroactive dates apply.

Medical professionals, too, will want to consider the impact of AI when assessing the adequacy of their insurance program. Medical professional liability insurance for physicians and other healthcare entities generally covers claims for “wrongful acts” in the rendering or failure to render professional services. Is the physician’s use of AI-enabled software considered a “professional service”? And, similar to the occurrence issue, when will the wrongful act be deemed to have occurred?

Allocation issues arise as well. Will the AI software companies require providers to indemnify and/or hold them harmless if they are named as a co-defendant with the provider in any patient bodily injury suit? How will a court and/or jury apportion liability between the co-defendant AI company and the treating provider if they conclude that both the software was defective and the provider negligent?

By targeting AI companies instead of healthcare providers, plaintiffs may be able to circumvent medical malpractice damages caps. Thus, the AI software company may become the deep pocket in any litigation with no limit on recoverable damages, thereby prompting it to increase the liability limits of coverage purchased to protect itself from anticipated litigation.

15 “Occurrence,” IRMI Insurance Glossary, accessed July 03, 2019, https://www.irmi.com/term/insurance-definitions/occurrence.

16 Ibid.

AI companies should ensure their current insurance programs respond to both product liability and errors and omissions claims in the event of a bodily injury claim allegedly caused by AI.

Conclusion

While the use of AI in healthcare will ultimately improve patient care through more timely and accurate diagnosis, treatment, and even prevention of disease processes, it will create new areas of liability for clinicians, provider systems, and AI companies. As the lines between “machine learning” and a provider’s individual clinical judgment blur, who will be held liable by the courts in the event of a missed diagnosis or adverse outcome? Which types of insurance policies will respond to claims arising out of an adverse health outcome caused by the AI?

It is an opportune time for regulators to weigh in on these AI healthcare issues before the first spate of patient injury claims is filed. New liability regulations tailored to health AI applications would bring more transparency and security for stakeholders in the field. Insurers could then customize their policies and offer coverage solutions as appropriate. In the interim, to the extent AI companies and/or healthcare providers are implementing new diagnostic or predictive software, they should consult with their brokers and insurance partners about how their insurance programs would respond in the event of claim activity.