
Applying a Common Enterprise Theory of Liability to Clinical AI Systems

Published online by Cambridge University Press: 17 March 2022

Benny Chan*
Affiliation:
Department of Justice Canada (Health Legal Services Unit), Ottawa, Canada

Abstract

The advent of artificial intelligence (“AI”) holds great potential to improve clinical diagnostics. At the same time, there are important questions of liability for harms arising from the use of this technology. Due to their complexity, opacity, and lack of foreseeability, AI systems are not easily accommodated by traditional liability frameworks. This difficulty is compounded in the health care space where various actors, namely physicians and health care organizations, are subject to distinct but interrelated legal duties regarding the use of health technology. Without a principled way to apportion responsibility among these actors, patients may find it difficult to recover for injuries. In this Article, I propose that physicians, manufacturers of clinical AI systems, and hospitals be considered a common enterprise for the purposes of liability. This proposed framework helps facilitate the apportioning of responsibility among disparate actors under a single legal theory. Such an approach responds to concerns about the responsibility gap engendered by clinical AI technology as it shifts away from individualistic notions of responsibility, embodied by negligence and products liability, toward a more distributed conception. In addition to favoring plaintiff recovery, a common enterprise strict liability approach would create strong incentives for the relevant actors to take care.

Type
Articles
Copyright
© 2022 The Author(s)


Footnotes

Counsel, Department of Justice Canada (Health Legal Services Unit). The author would like to thank Gregory Scopino, Jennifer Anderson, Nicholson Price, and two anonymous reviewers for their enormously helpful comments. Special thanks also go to the AJLM editors for their assistance throughout the publication process. The views expressed herein should not be taken as reflecting the position of the Department of Justice. The author can be contacted at bc742@georgetown.edu.

References

1 Mindful of the intense and technical debate over the definition of AI, I will simply adopt Ryan Calo’s definition of AI as a “set of techniques aimed at approximating some aspect of human or animal cognition using machines” for the purposes of this Article. Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. Davis L. Rev. 399, 404 (2017). For a more detailed discussion, see also Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 354, 359-362 (2016).

2 See, e.g., Joachim Roski et al., How Artificial Intelligence is Changing Health and Health Care, in Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril 64 (Michael Matheny et al. eds., 2019) [hereinafter Artificial Intelligence in Health Care].

3 Casey Ross & Ike Swetlitz, IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close, Stat (Sept. 5, 2017), https://www.statnews.com/2017/09/05/watson-ibm-cancer/ [https://perma.cc/P6CP-HUY6].

4 See Eur. Comm'n, Expert Group on Liability and New Technologies, Liability for Artificial Intelligence and Other Emerging Digital Technologies 11 (2019); Ryan Abbott, The Reasonable Robot 53 (2020) (noting that tort law “has far-reaching and sometimes complex impact on behavior,” including on the introduction and use of new technologies).

5 David Vladeck, Machines Without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117, 129 n.39 (2014).

6 See Inst. Med., To Err is Human: Building a Safer Health System 58-59 (2000) (noting that complex and tightly coupled systems are more prone to accidents. Complex systems have many specialized and interdependent parts, such that if one part that serves multiple functions fails, all of the dependent functions fail as well. In systems that are tightly coupled, processes are more time-dependent and sequences are more fixed. As such, things can unravel quickly, which makes it difficult to intercept errors and prevents speedy recovery from events. Activities in emergency rooms, surgical suites, and intensive care units are examples of complex and tightly coupled systems).

7 See Darling v. Charleston Cmty. Mem’l Hosp., 211 N.E.2d 253, 257 (Ill. 1965).

8 See Timothy Craig Allen, Regulating Artificial Intelligence for a Successful Pathology Future, 143 Archives Pathology & Lab'y Med. 1175, 1177 (2019).

9 See, e.g., Fei Jiang et al., Artificial Intelligence in Healthcare: Past, Present and Future, 2 Stroke & Vascular Neurology 230, 230 (2017).

10 See, e.g., Kee Yuan Ngiam & Ing Wei Khor, Big Data and Machine Learning Algorithms for Health-Care Delivery, 20 Lancet Oncology e262, e266 (2019).

11 See, e.g., Abbott, supra note 4, at 132 (noting that products liability law may require that AI be a commercial product and not a service); Nicolas Terry, Of Regulating Healthcare AI and Robots, 18 Yale J. Health Pol'y, L. & Ethics 133, 162 (2019).

12 See infra notes 72-79 and accompanying text.

13 See, e.g., Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant, Of, for, and by the People: The Legal Lacuna of Synthetic Persons, 25 A.I. & L. 273, 275 (2017) (opposing the extension of rights to algorithms partly due to the implications for human rights); citations infra, note 150.

14 See, e.g., W. Nicholson Price II, Artificial Intelligence in the Medical System: Four Roles for Potential Transformation, 18 Yale J. Health Pol'y, L. & Ethics 124, 124-25 (2019); Abbott, supra note 4, at 56 (“Why should AI not be able to outperform a person when the AI can access the entire wealth of medical literature with perfect recall, benefit from the experience of directly having treated millions of patients, and be immune to fatigue?”).

15 Artificial Intelligence in Health Care, supra note 2, at 64.

16 Canadian Med. Ass'n, The Future of Technology in Health and Health Care: A Primer 10 (2018), https://www.cma.ca/sites/default/files/pdf/health-advocacy/activity/2018-08-15-future-technology-health-care-e.pdf [https://perma.cc/UU2A-32U2].

17 Stephan Fihn et al., Deploying AI in Clinical Settings, in Artificial Intelligence in Health Care, supra note 2, at 154.

18 Allen, supra note 8, at 1177. While Allen is writing specifically for pathologists, his insights are readily generalizable to other areas of medicine.

19 Id.

20 Jiang et al., supra note 9, at 230.

21 Ngiam & Khor, supra note 10, at e266.

22 For tasks such as image recognition or language processing, a feature selector will need to first process the variables by “picking out identifiable characteristics from the dataset which then can be represented in a numerical matrix and understood by the algorithm.” Jenni A. M. Sidey-Gibbons & Chris J. Sidey-Gibbons, Machine Learning in Medicine: A Practical Introduction, 19 BMC Med. Rsch. Methodology 1, 2 (2019).
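A minimal, purely illustrative sketch of the feature-selection step described in note 22, in Python: a handful of hypothetical clinical notes and a made-up vocabulary (both invented for illustration, not drawn from the cited study) are converted into the kind of numerical matrix an algorithm can process.

```python
# Illustrative sketch only: a toy "feature selector" that turns free-text
# clinical notes into a numerical matrix. The notes and vocabulary below are
# hypothetical examples, not data from the cited article.
from collections import Counter

notes = [
    "chest pain and shortness of breath",
    "no chest pain, mild cough",
    "persistent cough and fever",
]

# Identifiable characteristics (features) picked out of the dataset.
vocabulary = ["chest", "pain", "cough", "fever", "breath"]

def to_feature_vector(text: str) -> list[int]:
    """Count how often each vocabulary term appears in one note."""
    counts = Counter(text.lower().replace(",", "").split())
    return [counts[term] for term in vocabulary]

# Each row is one note; each column is one feature the algorithm can "understand."
feature_matrix = [to_feature_vector(n) for n in notes]
for row in feature_matrix:
    print(row)
```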

23 See id. at 3.

24 These algorithms “consist of layers of nodes that each use simple mathematical operations to perform a specific operation on the activation of the layer before, leading to the emergence of increasingly abstract representations of the input image.” Thomas Grote & Philipp Berens, On the Ethics of Algorithmic Decision-Making in Healthcare, 46 J. Med. Ethics 205, 206 (2020).
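For readers unfamiliar with the quoted description in note 24, the following is a minimal sketch, assuming made-up weights and inputs, of how each layer applies simple mathematical operations to the activations of the layer before it; the `layer` function and all numbers are illustrative inventions, not the architecture of any system discussed in the article.

```python
# Illustrative sketch only: a tiny feed-forward network in which each layer
# performs simple arithmetic on the previous layer's activations, yielding
# progressively more abstract representations of the input.
def layer(activations, weights, biases):
    """One layer: weighted sum of the previous layer's activations, then ReLU."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * a for w, a in zip(neuron_weights, activations)) + bias
        outputs.append(max(0.0, total))  # ReLU non-linearity
    return outputs

pixels = [0.2, 0.8, 0.5]                      # raw input (e.g., image intensities)
hidden = layer(pixels, [[0.4, -0.1, 0.9],     # first layer: low-level features
                        [0.7, 0.3, -0.5]], [0.0, 0.1])
score = layer(hidden, [[1.2, -0.6]], [0.05])  # second layer: more abstract representation
print(hidden, score)
```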

25 See Thomas Davenport & Ravi Kalakota, The Potential for Artificial Intelligence in Healthcare, 6 Future Healthcare J. 94, 94 (2019). Indeed, it is predicted that AI’s capacity to interpret digitized images (e.g. x-rays) will eventually outstrip that of human radiologists and pathologists. See, e.g., Canadian Med. Assn, supra note 16, at 10.

26 See, e.g., Ngiam & Khor, supra note 10, at e266.

27 See, e.g., Christopher J. Kelly et al., Key Challenges for Delivering Clinical Impact with Artificial Intelligence, 17 BMC Med. 1, 4 (2019).

28 Id.

29 Michael P. Recht et al., Integrating Artificial Intelligence into the Clinical Practice of Radiology: Challenges and Recommendations, 30 Eur. Radiology 3576, 3579 (2020).

30 Fihn et al., supra note 17, at 166.

31 Ross & Swetlitz, supra note 3.

32 Casey Ross & Ike Swetlitz, IBM’s Watson Supercomputer Recommended ‘Unsafe and Incorrect’ Cancer Treatments, Internal Documents Show, Stat (July 25, 2018), https://www.statnews.com/wp-content/uploads/2018/09/IBMs-Watson-recommended-unsafe-and-incorrect-cancer-treatments-STAT.pdf [https://perma.cc/WD77-YBGA].

33 Id.

34 Julia Amann et al., Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective, 20 BMC Med. Informatics & Decision Making 1, 2 (2020). The authors note that explainability has many facets and is not clearly defined. Other authors distinguish explainability from interpretability. See, e.g., Boris Babic et al., Beware Explanations from AI in Health Care, 373 Science 284, 284 (2021).

35 Grote & Berens, supra note 24, at 207.

36 Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 Harv. J.L. & Tech. 889, 905 (2018). Bathaee further divides the black box problem into the categories of Strong Black Boxes and Weak Black Boxes. The former refers to AI decision-making processes that are entirely opaque to human beings in that there is no way to “determine (a) how the AI arrived at a decision or prediction, (b) what information is outcome determinative to the AI, or (c) to obtain a ranking of the variables processed by the AI in the order of their importance.” Id. at 906. The latter refers to AI decision-making processes that are opaque but susceptible to reverse engineering or probing to “determine a loose ranking of the importance of the variables the AI takes into account.” Id.
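The following Python sketch illustrates, under invented assumptions, the kind of probing Bathaee associates with a Weak Black Box: perturbing inputs one at a time to obtain a loose ranking of variable importance. The `black_box` function, the feature names, and all numbers are hypothetical stand-ins, not any model discussed in the article.

```python
# Illustrative sketch only: probing an opaque model to obtain a loose ranking of
# variable importance. The model below is a stand-in whose internals would, in
# practice, be inaccessible to the person doing the probing.
def black_box(features):
    # Hypothetical opaque scoring function.
    age, blood_pressure, cholesterol = features
    return 0.05 * age + 0.4 * blood_pressure + 0.1 * cholesterol

baseline = [60.0, 1.4, 2.0]  # made-up patient features
base_score = black_box(baseline)

# Perturb one variable at a time and see how much the output moves.
importance = {}
for i, name in enumerate(["age", "blood_pressure", "cholesterol"]):
    perturbed = list(baseline)
    perturbed[i] *= 1.10  # nudge the variable by 10%
    importance[name] = abs(black_box(perturbed) - base_score)

# A loose ranking of which inputs the model appears most sensitive to.
for name, delta in sorted(importance.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {delta:.3f}")
```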

37 Geoffrey Hinton, Deep Learning – A Technology with the Potential to Transform Health Care, 320 JAMA 1101, 1102 (2018).

38 See W. Nicholson Price II, Regulating Black-Box Medicine, 116 Mich. L. Rev. 421, 443 (2017) (noting that machine-learning methods are difficult to evaluate due to algorithmic opacity and complexity).

39 See Roger A. Ford & W. Nicholson Price II, Privacy and Accountability in Black-Box Medicine, 23 Mich. Telecomm. & Tech. L. Rev. 1, 16 (2016) (noting that black-box algorithms identify interventions that are specific to an individual, not a cohort of patients with similar indications. The inability to randomize across a sample population and observe different outcomes makes it impossible to predict the responses of individual patients through a clinical trial); Thomas M. Maddox et al., Questions for Artificial Intelligence in Health Care, 321 JAMA 31 (2019) (suggesting that the “use of deep learning and other analytic approaches in AI adds an additional challenge. Because these techniques, by definition, generate insights via unobservable methods, clinicians cannot apply the face validity available in more traditional clinical decision tools”); Derek C. Angus, Randomized Clinical Trials of Artificial Intelligence, 323 JAMA 1043 (2020) (commenting on the complications and uncertainties involved in conducting randomized clinical trials on AI-enabled decision support tools that are continually learning).

40 Thomas Quinn et al., The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?, A.I. Med. (forthcoming 2021) (manuscript at 3).

41 Id. For instance, an adaptive algorithm can learn from racial, ethnic, and socioeconomic disparities in care and outcomes that pervade a healthcare system. The predictions or recommendations generated by the algorithm then reinforce these biases, creating a negative feedback loop. Clinicians may inadvertently contribute to this feedback loop if, owing to time pressure or fear of liability, they treat the algorithm’s recommendations as infallible and thereby fail to notice or correct for the biased outputs. Matthew DeCamp & Charlotta Lindvall, Latent Bias and the Implementation of Artificial Intelligence in Medicine, 27 J. Am. Med. Informatics Ass'n 2020, 2021 (2020).
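A toy Python simulation, offered only as an illustration of the feedback loop described in note 41: all rates, group labels, and the update rule are hypothetical assumptions, chosen to show how retraining on uncorrected outputs can amplify an initial disparity over successive cycles.

```python
# Illustrative sketch only: a made-up simulation of a bias-reinforcing feedback
# loop, in which an adaptive algorithm retrained on its own uncorrected outputs
# widens an initial disparity. All numbers are hypothetical.
referral_rate = {"group_a": 0.60, "group_b": 0.50}  # initial disparity in referrals

for cycle in range(5):
    # Clinicians defer to the algorithm, so observed outcomes mirror its outputs.
    observed = dict(referral_rate)
    # The algorithm "learns" from those outcomes, nudging its rates toward them
    # and slightly widening the gap at each retraining cycle.
    gap = observed["group_a"] - observed["group_b"]
    referral_rate["group_a"] = min(1.0, observed["group_a"] + 0.05 * gap)
    referral_rate["group_b"] = max(0.0, observed["group_b"] - 0.05 * gap)
    print(cycle, {k: round(v, 3) for k, v in referral_rate.items()})
```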

42 Grote & Berens, supra note 24, at 207.

43 See, e.g., Shinjini Kundu, AI in Medicine Must be Explainable, 27 Nature Med. 1328, 1328 (2021).

44 Id.; see also Amann et al., supra note 34, at 6 (arguing that “[s]ince clinicians are no longer able to fully comprehend the inner workings and calculations of the decision aid they are not able to explain to the patient how certain outcomes or recommendations are derived”).

45 See, e.g., Kelly et al., supra note 27, at 5 (holding that explainability would improve “experts’ ability to recognize system errors, detect rules based upon inappropriate reasoning, and identify the work required to remove bias”). Outside of the clinical context, there are important purposes served by AI explainability, such as helping affected parties understand why a decision was made, providing grounds to contest adverse decisions, and indicating how to achieve a desired result in the future. See Sandra Wachter, Brent Mittelstadt & Chris Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 31 Harv. J.L. & Tech. 841, 843 (2018).

46 See, e.g., Daniel Schönberger, Artificial Intelligence in Healthcare: A Critical Analysis of the Legal and Ethical Implications, 27 Int'l J.L. & Info. Tech. 171, 195 (2019); Babic et al., supra note 34, at 285-86 (arguing that explainable AI will produce few benefits and incur additional costs such as being misleading in the hands of imperfect users and underperforming in some tasks); Abbott, supra note 4, at 33 (“Even if theoretically possible to explain an AI outcome, it may be impracticable given the complexity of AI, the possible resource-intensive nature of such inquiries, and the need to maintain earlier versions of AI and specific data”).

47 See, e.g., Jeremy Petch et al., Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology, Canadian J. Cardiology (forthcoming) (arguing that “the nature of explanations as approximations may omit important information about how black box models work and why they make certain predictions”).

48 W. Nicholson Price II, Sara Gerke & I. Glenn Cohen, Potential Liability for Physicians Using Artificial Intelligence, 322 JAMA 1765, 1765 (2019).

49 See, e.g., Allen, supra note 8, at 1177.

50 Eur. Comm'n, supra note 4, at 11.

51 Matthew Scherer notes that from “anecdotal discussions with other lawyers, the most commonly held view is that the traditional rules of products liability will apply to A.I. systems that cause harm.” Matthew U. Scherer, Of Wild Beasts and Digital Analogues: The Legal Status of Autonomous Systems, 19 Nev. L.J. 259, 280 (2019); see also Xavier Frank, Note, Is Watson for Oncology Per Se Unreasonably Dangerous?: Making a Case for How to Prove Products Liability Based on a Flawed Artificial Intelligence Design, 45 Am. J.L. & Med. 273, 281, 284 (2019) (arguing that products liability would be the only viable option for a plaintiff injured by Watson Oncology to bring a suit against IBM).

52 See Restatement (Third) of Torts: Prod. Liab. § 1 (Am. L. Inst. 1998); cf. Restatement (Second) of Torts § 402A(2)(a) (Am. L. Inst. 1965) (holding manufacturers liable for unreasonably dangerous defects in their products regardless of whether the manufacturer “exercised all possible care in the preparation and sale of the product”).

53 Omri Rachum-Twaig, Whose Robot Is It Anyway?: Liability for Artificial-Intelligence Based Robots, 2020 U. Ill. L. Rev. 1141, 1154 (2020).

54 See Restatement (Third) of Torts: Prod. Liab. § 2 (Am. L. Inst. 1998).

55 See, e.g., Willis L. M. Reese, Products Liability and Choice of Law: The United States Proposals to the Hague Conference, 25 Vand. L. Rev. 29, 35 (1972) (noting that “the trend in the law of products liability in nearly all nations of the world has been to favor the plaintiff by imposing increasingly strict standards of liability upon the supplier”).

56 See, e.g., John Villasenor, Products Liability Law as a Way to Address AI Harms, Brookings (Oct. 31, 2019), https://www.brookings.edu/research/products-liability-law-as-a-way-to-address-ai-harms/ [https://perma.cc/2NM6-ZZW9].

57 See Restatement (Third) of Torts: Prod. Liab. § 19(a) (Am. L. Inst. 1998).

58 See, e.g., Karni A. Chagal-Feferkorn, Am I an Algorithm or a Product?: When Products Liability Should Apply to Algorithmic Decision-Makers, 30 Stan. L. & Pol'y Rev. 61, 83-84 (2019) (noting that courts “treat[ed] information as a product and applied products liability laws when errors in the information caused damage, especially when the information was integrated with a physical object”).

59 See, e.g., Terry, supra note 11, at 162.

60 See, e.g., Scherer, supra note 1, at 390 (describing this as a “thorny” issue); Iria Giuffrida, Liability for AI Decision-Making: Some Legal and Ethical Considerations, 88 Fordham L. Rev. 439, 444-45, 445 n.34 (2019). But see Vladeck, supra note 5, at 132-33 n. 52 (noting products liability cases that involve allegations of software defects).

61 See Eur. Comm'n, supra note 4, at 28.

62 See, e.g., Vladeck, supra note 5, at 135-36; Abbott, supra note 4, at 132.

63 Samir Chopra & Laurence F. White, A Legal Theory for Autonomous Artificial Agents 144 (Univ. of Mich. Press 2011).

64 Restatement (Third) of Torts: Prod. Liab. § 3 (Am. L. Inst. 1998) (“It may be inferred that the harm sustained by the plaintiff was caused by a product defect existing at the time of sale or distribution, without proof of a specific defect when the incident that harmed the plaintiff: (a) was of a kind that ordinarily occurs as a result of product defect; and (b) was not, in the particular case, solely the result of causes other than product defect existing at the time of sale or distribution”); see also Vladeck, supra note 5, at 128 n.36 (noting that “[a]n otherwise inexplicable failure, which is not fairly described as ‘ordinary,’ would likely not qualify under this standard.”).

65 See Woodrow Barfield, Liability for Autonomous and Artificially Intelligent Robots, 9 Paladyn, J. Behav. Robotics 193, 196 (2018) (noting that “the law as currently established may be useful for determining liability for mechanical defects, but not for errors resulting from the autonomous robot’s ‘thinking’; this is a major flaw in the current legal approach to autonomous robots”).

66 Id. (“Additionally, with intelligent and autonomous robots controlled by algorithms, there may be no design or manufacturing flaw that served as a causative factor in an accident, instead the robot involved in an accident could have been properly designed, but based on the structure of the computing architecture, or the learning taking place in deep neural networks, an unexpected error or reasoning flaw could have occurred”).

67 610 F. Supp. 2d 401 (E.D. Pa. 2009), aff’d, 363 Fed. Appx. 925, 927 (3d Cir. 2010).

68 The da Vinci system consists of a control console unit and four slave manipulators, three for telemanipulation of surgical instruments and one for the endoscopic camera. Functionally speaking, the system allows a surgeon to visualize the surgical field using the endoscope connected to a 3D display and transforms the surgeon’s hand movement to that of the surgical instruments. C. Freschi et al., Technical Review of the da Vinci Surgical Telemanipulator, 9 Int. J. Med. Robotics & Computer Assisted Surgery 396, 397 (2012).

69 363 Fed. Appx. at 926-27.

70 The trial court did not allow a physician who had experience with robotic surgery to testify on the basis of insufficient technical knowledge of the da Vinci system. See Margo Goldberg, Note, The Robotic Army Went Crazy! The Problem of Establishing Liability in a Monopolized Field, 38 Rutgers Computer & Tech. L.J. 225, 248 (2012).

71 Dan B. Dobbs, The Law of Torts § 466 (2001).

72 Timothy S. Hall, Reimagining the Learned Intermediary Rule for the New Pharmaceutical Marketplace, 35 Seton Hall L. Rev. 193, 217 (2004).

73 Id. at 216-17.

74 See, e.g., Jessica S. Allain, From Jeopardy! to Jaundice: The Medical Liability Implications of Dr. Watson and Other Artificial Intelligence Systems, 73 La. L. Rev. 1049, 1069 (2013) (positing that the doctrine would “eliminat[e] any duty the manufacturer may have had directly to the patient”); W. Nicholson Price II, Artificial Intelligence in Health Care: Applications and Legal Issues, 14 SciTech Law. 10, 13 (2017) (querying whether the LI doctrine should “bow to the recognition that doctors cannot fully understand all the technologies they use or the choices such technologies help them make when they are not provided the needed and/or necessary information”).

75 See MacDonald v. Ortho Pharm. Corp., 475 N.E.2d 65, 69, 71 (Mass. 1985). In this case, the Supreme Judicial Court of Massachusetts ruled that the physician who prescribed oral contraception to a patient was “relegated to a relatively passive role” and therefore the drug manufacturer could not invoke the LI doctrine as a defense. Id. at 69. Specifically, the pharmaceutical company failed to mention the risk of strokes associated with use of its contraceptive pill in the booklet distributed to patients per FDA requirements. The underlying idea here seems to be that the law will refuse to recognize the physician as an intermediary between the manufacturer and a patient in situations where the physician is not bringing her expertise and discretion to bear in consultations with the patient.

76 See Zach Harned, Matthew P. Lungren & Pranav Rajpurkar, Comment, Machine Vision, Medical AI, and Malpractice, Harv. J.L. & Tech. Dig. 1, 9 (2019), https://jolt.law.harvard.edu/digest/machine-vision-medical-ai-and-malpractice [https://perma.cc/ZHV8-U5QX] (“… if such a diagnostic system is designed to take the scan, read it, make the diagnosis, and then present it to the physician who acts merely as a messenger between the system and the patient, then it would seem that the physician is playing a relatively passive role in this provision of treatment”).

77 Curtis E. A. Karnow, The Application of Traditional Tort Theory to Embodied Machine Intelligence, in Robot Law 69 (Ryan Calo, A. Michael Froomkin & Ian Kerr eds., 2016).

78 See, e.g., A. Michael Froomkin, Ian Kerr & Joelle Pineau, When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 Ariz. L. Rev. 33, 51 (2019) (recommending changes to medical malpractice law to reflect AI’s superior performance over human physicians).

79 74 Am. Jur. 2d Torts § 7.

80 Price, Gerke & Cohen, supra note 48, at 1765.

81 See Phillip G. Peters, Jr., The Quiet Demise of Deference to Custom: Malpractice Law at the Millennium, 57 Wash. & Lee L. Rev. 163, 165 (2000).

82 Michael D. Greenberg, Medical Malpractice and New Devices: Defining an Elusive Standard of Care, 19 Health Matrix 423, 428-29 (2009) (noting that half of the states have adopted the reasonable physician standard).

83 Id. at 430 (“… all versions of the malpractice standard are ultimately based on an evaluation of the appropriateness of a physician’s conduct, by comparison to what reasonable physicians either do, or should do, in similar circumstances. The latter is usually determined by reference to the customary practices of other physicians, as established through expert testimony”).

84 Id. at 434. See also Eur. Comm'n, supra note 4, at 23 (“Emerging digital technologies make it difficult to apply fault-based liability rules, due to the lack of well-established models of proper functioning of these technologies and the possibility of their developing as a result of learning without direct human control”).

85 See Schönberger, supra note 46, at 197 (“…in light of the opacity inherent in AI systems, it might indeed be an insurmountable burden for a patient to prove not only causation but the breach of a duty of care in the first place”). Admittedly, explainability is not the only epistemic warrant for following an AI recommendation. See I. Glenn Cohen, Informed Consent and Medical Artificial Intelligence: What to Tell the Patient, 108 Geo. L.J. 1425, 1443 (2020) (“The epistemic warrant for that proposition [that AI is likely to lead to better decisions] need not be firsthand knowledge – we might think of medical AI/ML as more like a credence good, where the epistemic warrant is trust in someone else”). The challenge lies in identifying the right epistemic warrant or indicia of reliability in the absence of explainability. On this point, some have suggested the need to validate system performance in prospective trials. See, e.g., Alex John London, Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability, 49 Hastings Ctr. Rep. 15, 20 (2019). However, owing to variance in organizational factors, a clinical AI system that is found safe and effective in one setting may prove to be significantly less so in another. See Sara Gerke et al., The Need for a System View to Regulate Artificial Intelligence/Machine Learning-Based Software as Medical Device, 3 npj Digit. Med. 1, 2 (2020) (“due to their systemic aspects, AI/ML-based [software as medical device] will present more variance between performance in the artificial testing environment and in actual practice settings, and thus potentially more risks and less certainty over their benefits”). Nicholson Price has suggested an approach whereby providers would require some validation prior to relying on a black-box algorithm for riskier interventions. This validation would likely come in the form of procedural checks or independent computations by third parties, as opposed to clinical trials. For the riskiest and most counter-intuitive interventions, Price suggests that no black-box verification would be able to overcome the presumption of harm under a reasonable standard of care. He acknowledges, however, that there may be challenges of implementation, overcaution, and under-compensation associated with this risk-based approach. W. Nicholson Price II, Medical Malpractice and Black-Box Medicine, in Big Data, Health Law, and Bioethics 301-02 (Cohen et al. eds., 2018).
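A minimal Python sketch of the kind of risk-tiered gating rule that note 85 describes, offered only as an illustration: the risk tiers, the `has_third_party_validation` flag, and the thresholds are hypothetical assumptions, not a rule proposed in the cited work.

```python
# Illustrative sketch only: a toy gating rule for when a provider might rely on
# a black-box recommendation, keyed to the riskiness of the intervention.
# Tier names and the validation flag are hypothetical.
def may_rely_on_black_box(risk_tier: str, has_third_party_validation: bool) -> bool:
    """Decide whether reliance on a black-box recommendation is permitted."""
    if risk_tier == "low":
        return True                        # low-risk uses need no extra validation
    if risk_tier == "moderate":
        return has_third_party_validation  # riskier uses need independent checks
    return False                           # riskiest, counter-intuitive uses: never

print(may_rely_on_black_box("moderate", True))   # True
print(may_rely_on_black_box("high", True))       # False
```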

86 Admittedly, this is less of a problem in jurisdictions that have kept the customary medical practice standard, as the plaintiff would only have to demonstrate that the physician did not do what is customarily done. However, there will likely be a high risk of liability during the transition phase that precedes the emergence of a prevailing customary practice (or one that courts consider dispositive for the purposes of liability). This may also negatively impact technological innovation as “the baseline grounding of the standard of care in customary practice privileges hewing to tradition.” Price II, supra note 85, at 304.

87 Id. at 299 n.15.

88 See Andrew D. Selbst, Negligence and AI’s Human Users, 100 B.U. L. Rev. 1315, 1338 (2020).

89 This uncertainty is reflected in the scholarship. For instance, Hacker et al. posit that if a physician overrides an ML judgment based on their professional judgment, they can be shielded from liability as negligence “attaches liability only to actions failing the standard of care.” Philipp Hacker et al., Explainable AI under Contract and Tort Law: Legal Incentives and Technical Challenges, 28 A.I. & L. 415, 424 (2020). In contrast, Cohen et al. think that, owing to tort law being inherently conservative, “reliance on medical AI to deviate from the otherwise known standard of care will likely be a defense to liability.” Price, Gerke & Cohen, supra note 48, at 1766. However, they concede that this may change quickly. See id. A recent empirical study has shed some light on how these situations might play out in court. Relying on the results of an online survey of potential jurors, Tobia et al. found that following the standard of care and following the recommendation of AI tools were both effective in reducing lay judgment of liability. Kevin Tobia, Aileen Nielsen & Alexander Stremitzer, When Does Physician Use of AI Increase Liability?, 62 J. Nuclear Med. 17, 20 (2021). In a response to the study, Price et al. acknowledged that, at least with respect to potential jurors and lay knowledge, the use of AI might be close to the standard of care. However, they also note factors that complicate translating the results of the study into real life, including the ability of the judge to resolve cases against patients without trial and the fact that jurors are instructed in the law by the judge and engage in deliberative decision making. As a result, it may be difficult to predict collective juror verdicts from observing only individual juror decisions. W. Nicholson Price II, Sara Gerke & I. Glenn Cohen, How Much Can Potential Jurors Tell Us About Liability for Medical Artificial Intelligence?, 62 J. Nuclear Med. 15, 19-20 (2021).

90 See Sharona Hoffman & Andy Podgurski, E-Health Hazards: Provider Liability and Electronic Health Record Systems, 24 Berkeley Tech. L.J. 1523, 1535 (2009).

91 See id.; infra Part IV.C.

92 Greenberg, supra note 82, at 439.

93 Id.

94 See e.g., Iria Giuffrida & Taylor Treece, Keeping AI Under Observation: Anticipated Impacts on Physicians’ Standard of Care, 22 Tul. J. Tech. & Intell. Prop. 111, 117-118 (2020).

95 See Efthimios Parasidis, Clinical Decision Support: Elements of a Sensible Legal Framework, 20 J. Health Care L. & Pol'y 183, 214 (2018).

96 See Hoffman & Podgurski, supra note 90, at 1536 (“The doctrine of ‘respondeat superior,’ which literally means ‘let the superior answer,’ establishes that employers are responsible for the acts of their employees in the course of their employment”).

97 Id.

98 Greenberg, supra note 82, at 439.

99 Parasidis, supra note 95, at 214.

100 Id. at 215.

101 See Michael D. Scott, Tort Liability for Vendors of Insecure Software: Has the Time Finally Come?, 67 Md. L. Rev. 425, 446 (2008).

102 Id.

103 See Parasidis, supra note 95, at 215.

104 Id.

105 See Eur. Comm'n, supra note 4, at 24 (“The more complex the circumstances leading to the victim’s harm are, the harder it is to identify relevant evidence. For example, it can be difficult and costly to identify a bug in a long and complicated software code. In the case of AI, examining the process leading to a specific result (how the input data led to the output data) may be difficult, very time-consuming and expensive”).

106 Selbst points out that while interpretability can render some AI errors predictable and thus resolve the foreseeability problem, this only works some of the time. Selbst, supra note 88, at 1341.

107 Quinn et al., supra note 40.

108 Selbst, supra note 88, at 1360-61. Selbst notes that it “is a fundamental tenet of negligence law that one cannot be liable for circumstances beyond what the reasonable person can account for.” Id. at 1360.

109 Id. at 1361.

110 Id. at 1362-63.

111 See Price II, Gerke & Cohen, supra note 89, at 15-16; George Maliha et al., Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation, 99 Milbank Q. 629, 630 (2021) (noting that “physicians exist as part of an ecosystem that also includes health systems and AI/ML device manufacturers. Physician liability over use of AI/ML is inextricably linked to the liability of these other actors”).

112 See Rachum-Twaig, supra note 53, at 1164.

113 See Scherer, supra note 51, at 286.

114 The requirements for legal agency are set out in the Restatement (Second) of Agency § 1 (Am. L. Inst. 1958):

(1) Agency is the fiduciary relation which results from the manifestation of consent by one person to another that the other shall act on his behalf and subject to his control, and consent by the other so to act.

(2) The one for whom action is to be taken is the principal.

(3) The one who is to act is the agent.

115 Scherer, supra note 51, at 287.

116 Id. at 286.

117 Id. at 289.

118 Id. at 287.

119 Id. at 287-88.

120 Id. at 288.

121 Id.

122 Restatement (Second) of Agency § 1 cmt. d (Am. L. Inst. 1958).

123 “The person represented has a right to control the actions of the agent.” Restatement (Third) of Agency § 1.01 cmt. c (Am. L. Inst. 2006). One exception to this rule is the doctrine of apparent agency (i.e., ostensible agency), which holds that a hospital could be liable for an independent contractor’s negligence if it represented the contractor as its employee and the patient justifiably relied on the representation. In this case, the hospital is vicariously liable despite having no right of control over the contractor. See Arthur F. Southwick, Hospital Liability: Two Theories Have Been Merged, 4 J. Legal Med. 1, 9-13 (1983).

124 Nava v. Truly Nolen Exterminating, 140 Ariz. 497, 683 P.2d 296, 299-300. Cf. S. Pac. Transp. v. Cont’l Shippers, 642 F.2d 236, 238-39 (8th Cir. 1981) (holding that shipper-members of a shippers’ association were agents of the association because the association had actual authority to act as the agent for the member defendants and the association was controlled by its members).

125 Gary E. Marchant & Lucille M. Tournas, AI Health Care Liability: From Research Trials to Court Trials, J. Health & Life Sci. L. 23, 37 (2019).

126 Southwick, supra note 123, at 4.

127 On this point, one might note a parallel with the LI doctrine under products liability law whereby the manufacturer is shielded from liability precisely because the product (e.g., a prescription drug) interacts with the plaintiff through a professional intermediary (i.e., the physician).

128 See Allen, supra note 8.

129 Anat Lior, AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy, 46 Mitchell Hamline L. Rev. 1043, 1092 (2020).

130 Id. at 1092-93.

131 Taylor v. Intuitive Surgical, Inc., 389 P.3d 517, 520 (Wash. 2017).

132 Id. at 526 (“While doctors are recognized as the gatekeepers between the physician and patient, the hospital is the gatekeeper between the physician and the use of the da Vinci System since the hospital clears surgeons to use it. Thus, the hospital must have warnings about its risks and no tort doctrine should excuse the manufacturer from providing them”).

133 Id. at 531 (“ISI manufactured the product, ISI sold the product to Harrison, Harrison credentialed the doctor, and the doctor ultimately operated on Taylor’s Husband using the product”).

134 See Terry, supra note 11, at 161-62 (“Taylor puts several future issues on display. For example, which members of the distribution chain will face liability and under what legal theory and what are the relative responsibilities of hospitals and developers in training physicians and developing or enforcing protocols for the implementation of AI generally or its use in a particular case?”).

135 See Mihailis E. Diamantis, Algorithms Acting Badly: A Solution from Corporate Law, 89 Geo. Wash. L. Rev. 801, 820 (2021) (“The algorithmic misbehavior may result from an unexpected interaction between the algorithm (programmed by one company), the way it is used (by a second company), and the hardware running it (owned by a third company)”).

136 See, e.g., Mark Coeckelbergh, Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability, 26 Sci. & Eng'g Ethics 2051, 2056 (2020).

137 Gerke et al., supra note 85, at 2 (arguing that AI-based medical devices are systems composed of interacting elements whose performance is dependent on “organizational factors such as resources, staffing, skills, training, culture, workflow and processes”).

138 Another indication that vicarious liability might not be a good fit for clinical AI systems can be found in the decline of the Captain of the Ship doctrine. The idea is that the chief surgeon, as the captain of the ship during surgery, is vicariously liable for the negligence of any person serving on the surgical team. The underlying justification for the doctrine was the right of control one had over the negligent activities of others. However, this justification became increasingly untenable as the size of medical teams grew and as medical professionals such as anesthesiologists, nurses, and surgical assistants became recognized as performing independent functions. Arthur Southwick draws the following lesson from the decline of this doctrine: “[w]hen medical care is provided by a highly specialized, sophisticated team of professional individuals all working within an institutional setting, it is frequently difficult to determine at any given point in time who is exercising direct control over whom.” Southwick, supra note 123, at 14-16.

139 Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231, 1252-53 (1992).

140 Comm. on Legal Affairs, Eur. Union Parliament, Rep. with Recommendations to the Comm'n on Civ. L. Rules on Robotics, at 18 (2017).

141 Id. at 17-18.

142 A.I. and Robotics Experts, Robotics Open Letter to the European Commission 1 (Apr. 5, 2018), https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf [https://perma.cc/45G8-YB7L].

143 Eur. Comm'n, supra note 4, at 38 (“Harm caused by even fully autonomous technologies is generally reducible to risks attributable to natural persons or existing categories of legal persons, and where this is not the case, new laws directed at individuals are a better response than creating a new category of legal person”).

144 See, e.g., Nadia Banteka, Artificially Intelligent Persons, 58 Hous. L. Rev. 537, 552 (2021) (“Even amongst established legal persons such as human beings, legal systems have created categories of humans with more or less rights and different sets of obligations. Consider, for instance, the rights enjoyed by an adult human to those enjoyed by a child. By analogy, artificial entities also fall on this spectrum and have often been conferred legal personhood with more or less restricted bundles of rights and obligations”) (emphases added); Russ Pearlman, Recognizing Artificial Intelligence as Authors and Inventors Under U.S. Intellectual Property Law, 24 Rich. J.L. & Tech. 1, 29 (2018) (suggesting that AI should be granted analogous legal personhood, such as that granted to corporations and government entities); Ryan Abbott & Alex Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction, 53 U.C. Davis L. Rev. 323, 356 (2019) (suggesting ways in which AI criminality should be considered analogously to natural persons’ criminality).

145 Allain, supra note 74, at 1062-63.

146 Jason Chung & Amanda Zink, Hey Watson – Can I Sue You for Malpractice? Examining the Liability of Artificial Intelligence in Medicine, 11 Asia Pac. J. Health L. & Ethics 51, 53 (2018).

147 See, e.g., Allen, supra note 8, at 1177-78.

148 Chopra & White, supra note 63, at 158 (“to ascribe legal personhood to an entity is to do no more than to make arrangements that facilitate a particular set of social, economic and legal relationships”).

149 See, e.g., Jacob Turner, Robot Rules: Regulating Artificial Intelligence 184-88 (2019) (arguing that “[g]ranting AI legal personality could be a valuable firewall between existing legal persons and the harm which AI could cause. Individual AI engineers and designers might be indemnified by their employers, but eventually creators of AI systems – even at the level of major corporates – may become increasingly hesitant in releasing innovative products to the market if the programmers are unsure as to what their liability will be for unforeseeable harm”). Cf. Vikram R. Bhargava & Manuel Velasquez, Is Corporate Responsibility Relevant to Artificial Intelligence Responsibility?, 17 Geo. J.L. & Pub. Pol'y 829, 829, 833 (arguing that the reasons for holding corporations responsible are inapplicable to AI agents since corporations are made up of and act through agents, which is not the case for AI).

150 See, e.g., Ugo Pagallo, Vital, Sophia, and Co. – The Quest for the Legal Personhood of Robots, 9 Information 230, 236-37 (2018) (noting that artificial agents lack self-consciousness, human-like intentions, and the ability to suffer – the requisites associated with granting someone, or something, legal personhood); Ryan Abbott & Alex Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction, 53 U.C. Davis L. Rev. 103, 154 (2019) (holding that “[f]ull-fledged legal personality for AI, equivalent to that afforded to natural persons, with all the legal rights that natural persons enjoy, would clearly be inappropriate”); John-Stewart Gordon, Artificial Moral and Legal Personhood, 36 A.I. & Soc'y 457, 470 (2021) (arguing that artificially intelligent robots currently fail to meet the criteria of rationality, autonomy, understanding, and social relations necessary for moral personhood).

151 See, e.g., Bert-Jaap Koops, Mireille Hildebrandt & David-Oliver Jaquet-Chiffelle, Bridging the Accountability Gap: Rights for New Entities in the Information Society?, 11 Minn. J.L. Sci. & Tech. 497, 560 (2010) (proposing that we consider “whether the attribution of a restricted legal personhood, involving certain civil rights and duties, has added value in comparison with other legal solutions”).

152 See Eur. Comm'n, supra note 4, at 38.

153 See infra, Part III.E.

154 Howard C. Klemme, The Enterprise Liability Theory of Torts, 47 U. Colo. L. Rev. 153, 158 (1975).

155 Gregory C. Keating, The Theory of Enterprise Liability and Common Law Strict Liability, 54 Vand. L. Rev. 1285, 1286 (2001). This theory of liability originated in worker compensation schemes enacted in England and the United States in the early 20th century and has since exerted an influence in various areas of tort law, including products liability law.

156 Id. at 1287-88.

157 See George L. Priest, The Invention of Enterprise Liability: A Critical History of the Intellectual Foundations of Modern Tort Law, 14 J. Legal Stud. 461 (1985) for one of the first extended scholarly treatments of the topic.

158 Am. L. Inst., Medical Malpractice, Reporters’ Study II: Enterprise Responsibility for Personal Injury 111, 113 (1991).

159 Id. at 114-15.

160 Id. at 118.

161 Id. at 123 (“The collective wisdom of the hospital team can be pooled to devise feasible procedures and technologies for guarding against the ever-present risk of occasional human failure by even the best doctors”).

162 See Kenneth S. Abraham & Paul C. Weiler, Enterprise Medical Liability and the Evolution of the American Health Care System, 108 Harv. L. Rev. 381, 381 (1994).

163 See Phillip G. Peters, Jr., Resuscitating Hospital Enterprise Liability, 73 Mo. L. Rev. 369, 376 (2008); Robert A. Berenson & Randall R. Bovbjerg, Enterprise Liability in the Twenty-First Century, in Medical Malpractice and the U.S. Health Care System 230-32 (William M. Sage & Rogan Kersh eds., 2006). It should be noted that EL does not require no-fault liability. See, e.g., Vladeck, supra note 5, at 147 n.91 (distinguishing a no-fault liability system of EL that imposes mandatory insurance and eliminates access to the judicial system from a strict liability version implemented by the courts).

164 Some managed care organizations and government organizations such as the VA voluntarily assume liability for the negligent acts of staff. See Daniel P. Kessler, Evaluating the Medical Malpractice System and Options for Reform, J. Econ. Persps. 93, 102 (2011).

165 William M. Sage, Enterprise Liability and the Emerging Managed Health Care System, 60 L. & Contemp. Probs. 159, 159 (1997).

166 Id. at 166-69.

167 Id. at 167.

168 Id. at 169.

169 Id. at 195.

170 Jack K. Kilcullen, Groping for the Reins: ERISA, HMO Malpractice, and Enterprise Liability, 22 Am. J.L. & Med. 7, 10 (1996). Kilcullen’s proposal was made at a time when HMOs exercised greater control over patients’ health care utilization, such as the choice of providers and hospitals. This control began to loosen in the second half of the 1990s due to consumer and provider backlash. See Ronald Lagoe et al., Current and Future Developments in Managed Care in the United States and Implications for Europe, 3 Health Rsch. Pol'y & Sys. 3-4 (2005), https://health-policy-systems.biomedcentral.com/articles/10.1186/1478-4505-3-4 [https://perma.cc/C75X-HSET].

171 See Laura D. Hermer, Aligning Incentives in Accountable Care Organizations: The Role of Medical Malpractice Reform, 17 J. Health Care L. & Pol'y 271, 273 (2014).

172 Thomas R. McLean, Cybersurgery – An Argument for Enterprise Liability, 23 J. Legal Med. 167, 207 (2002). The medical service payor could be “the federal government or Fortune 500 insurance company doing business as a managed care organization.” Id. at 208.

173 Id. at 207.

174 Id.

175 See, e.g., Allen, supra note 8, at 1177 (noting that EL’s removal of the need to prove negligence may help manage the risk of patient harm from AI). Jessica Allain has proposed a statutory scheme whereby an action against an AI system like Watson could proceed under EL, with the enterprise consisting of the AI as a legal person, the AI’s owner, and the physicians involved. However, EL would only be triggered once a panel of experts has determined to the court’s satisfaction that there was no hardware failure; otherwise, the action would proceed under products liability. See Allain, supra note 74, at 1076-77. The concern here is that this preliminary step of assessing hardware failure risks being time- and resource-intensive, which would inject an additional layer of uncertainty into the recovery process.

176 Vladeck, supra note 5, at 149.

177 Id.

178 Fed. Trade Comm’n v. Tax Club, Inc., 994 F. Supp. 2d 461, 469 (S.D.N.Y. 2014); see also Consumer Fin. Prot. Bureau v. NDG Fin. Corp., No. 15-cv-5211, 2016 WL 7188792, at *16 (S.D.N.Y. Dec. 2, 2016); Fed. Trade Comm’n v. 4 Star Resol., LLC, No. 15-CV-112S, 2015 WL 7431404, at *1 (W.D.N.Y. Nov. 23, 2015); Fed. Trade Comm’n v. Vantage Point Servs., LLC, No. 15-CV-006S, 2015 WL 2354473, at *3 (W.D.N.Y. May 15, 2015) (specifying that “a common enterprise analysis is neither an alter ego inquiry nor an issue of corporate veil piercing; instead, the entities within the enterprise may be separate and distinct corporations”); Fed. Trade Comm’n v. Nudge, LLC, 430 F. Supp. 3d 1230, 1234 (D. Utah 2019); Fed. Trade Comm’n v. Fed. Check Processing, Inc., No. 12-CV-122-WMS-MJR, 2016 WL 5956073, at *2 (W.D.N.Y. Apr. 13, 2016).

179 E.g., Fed. Trade Comm’n v. Pointbreak Media, LLC, 376 F. Supp. 3d 1257, 1287 (S.D. Fla. 2019).

180 Delaware Watch Co. v. Fed. Trade Comm’n, 332 F.2d 745, 746 (2d Cir. 1964) (holding that the individuals were “transacting an integrated business through a maze of interrelated companies”).

181 Vladeck, supra note 5, at 149.

182 Vladeck, supra note 5, at 149.

183 Id. at 148.

184 Id. at 129 n.39.

185 If this does not turn out to be the case, Vladeck concedes that enterprise liability may be more appropriate. Id. (“Of course, if the number of driver-less vehicles was relatively small and there were issues of identifying the manufacturer of a vehicle that caused significant harm, enterprise theory of liability might be viable in that situation as well”).

186 Id. at 149.

187 See Andrea Bertolini, Comm. on Legal Affairs, Artificial Intelligence and Civil Liability, at 111 (2020), https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf [https://perma.cc/G4H6-M9XH] (noting the importance of apportioning liability between the medical practitioner, the AI manufacturer, and the hospital/structure that operates the AI system or employs the practitioner).

188 Mariarosaria Taddeo & Luciano Floridi, How AI Can be a Force for Good, 361 Science 751, 751 (2018). Curtis Karnow, in an article published almost 25 years ago, recognized the difficulty of assigning liability to a single actor in situations involving AI harms given the “distributed computing environment in which [artificial intelligence] programs operate.” Curtis E. A. Karnow, Liability for Distributed Artificial Intelligences, 11 Berkeley Tech. L. J. 147, 155 (1996).

189 See Grote & Berens, supra note 24, at 209.

190 See Andreas Matthias, The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata, 6 Ethics & Info. Tech. 175, 177 (2004) (arguing that “there is an increasing class of machine actions, where the traditional ways of responsibility ascription are not compatible with our sense of justice and the moral framework of society because nobody has enough control over the machine’s actions to be able to assume the responsibility for them. These cases constitute what we will call the responsibility gap”).

191 See Grote & Berens, supra note 24, at 209.

192 Coeckelbergh, supra note 136, at 2057.

193 See supra notes 122-123 and accompanying text.

194 See supra notes 131-138 and accompanying text.

195 See, e.g., IBM, Watson Health: Get the Facts (last visited June 29, 2020), https://www.ibm.com/watson-health/about/get-the-facts [https://perma.cc/LX3W-DBFQ] (“By combining human experts with augmented intelligence, IBM Watson Health helps health professionals and researchers around the world translate data and knowledge into insights to make more-informed decisions about care in hundreds of hospitals and health organizations”).

196 Bertalan Mesko, Gergely Hetényi & Zsuzsanna Győrffy, Will Artificial Intelligence Solve the Human Resource Crisis in Healthcare?, 18 BMC Health Servs. Rsch. 545, 545 (2018).

197 Allain, supra note 74, at 1062.

198 Mesko, Hetényi & Győrffy, supra note 196.

199 Am. Med. Ass'n, Augmented Intelligence in Health Care 2 (2018), https://www.ama-assn.org/system/files/2019-01/augmented-intelligence-policy-report.pdf [https://perma.cc/CC4S-AD9C].

200 Id.

201 See Gerke et al., supra note 85.

202 Eric J. Topol, High-Performance Medicine: The Convergence of Human and Artificial Intelligence, 25 Nature Med. 44, 44 (2019).

203 Id.

204 See Parasidis, supra note 95, at 186 (noting that “[t]he goals underlying use of [clinical decision support] mirror those of clinical practice guidelines”).

205 Topol, supra note 202, at 51.

206 See Maliha et al., supra note 111.

207 Id. at 632.

208 Id. (“Physicians have a duty to independently apply the standard of care for their field, regardless of an AI/ML algorithm output”).

209 See Giuffrida & Treece, supra note 94, at 120.

210 Moreover, keeping physicians in the enterprise may be justified on the basis that, like the manufacturer and hospital, physicians benefit from the use of AI systems. As discussed below, the internal morality of strict liability and EL would hold that it is fair to hold the physician at least prima facie liable.

211 See Rachum-Twaig, supra note 53, at 1172.

212 Eur. Comm'n, supra note 4, at 40.

213 See Kenneth S. Abraham & Robert L. Rabin, Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era, 105 Va. L. Rev. 127, 153-54 (2019) (writing in the context of automated vehicles).

214 Abraham & Weiler, supra note 162, at 415 (emphasis added).

215 Am. L. Inst., supra note 158, at 123.

216 Inst. Med., supra note 6, at 169.

217 Id. at 175.

218 Id.

219 Abraham & Weiler, supra note 162, at 416 (noting that “[h]ospitals are typically responsible for selecting and providing the supplies, facilities, and equipment used in treatment, as well as for hiring and firing the employees who play an important role on the patient care team. Hospitals can also grant admitting privileges to physicians, and can restrict, suspend, or terminate the privileges of doctors whose poor quality of treatment has come to the hospital’s attention”).

220 Id.

221 On this point, Allen suggests that physician medical societies, such as the College of American Pathologists, could inform standards and strategies for managing patient risk in AI implementation. See Allen, supra note 8, at 1177.

222 For instance, the hospital is in a unique position to establish nonpunitive systems for reporting and analyzing AI errors, anticipate errors by double checking for vulnerabilities, train novice practitioners through simulations, promote the free flow of information, and implement mechanisms of feedback and learning. See Inst. Med., supra note 6, at 165-182.

223 33 Ill.2d 326, 330, 211 N.E.2d 253, 256 (1965).

224 Id. at 257, 332.

225 Torin A. Dorros & T. Howard Stone, Implications of Negligent Selection and Retention of Physicians in the Age of ERISA, 21 Am. J.L. & Med. 383, 396 (1995). Darling’s progeny has further defined the scope of a hospital’s duty to ensure patient safety and well-being. See, e.g., Thompson v. Nason Hosp., 591 A.2d 703, 707 (Pa. 1991), aff’d by Brodowski v. Ryave, 885 A.2d 1045 (Pa. 2005) (holding that a hospital has duties: 1) to use reasonable care in the maintenance of safe and adequate facilities and equipment; 2) to select and retain only competent physicians; 3) to oversee all persons who practice medicine on hospital premises; and 4) to formulate, adopt, and enforce adequate rules and policies to ensure quality care for the patients).

226 See Price II, supra note 85, at 304.

227 Abraham & Weiler, supra note 162, at 391.

228 Id. at 391-92.

229 See Mark E. Milsop, Corporate Negligence: Defining the Duty Owed by Hospitals to Their Patients, 30 Duq. L. Rev. 639, 660 (1991).

230 See id. at 642-643.

231 Abraham & Weiler, supra note 162, at 393.

232 See Bryan Casey, Robot Ipsa Loquitur, 108 Geo. L.J. 225, 266 (2019) (“Products liability tests in force in the majority of states turn on principles of reasonableness, foreseeability, and causation that are congruent with findings of fault in negligence.”). This fault-infused liability standard leads Casey to call strict liability a “zombie” regime that continues to cause analytic confusion.

233 (1868) 3 LRE & I App. 330 (HL). Rylands held that a person can be liable for damage to a neighbor’s property flowing from the “non-natural” use of one’s own property. The idea is that while a property owner is free to store objects that have the propensity to escape and cause mischief to neighboring lands, this is done at the property owner’s own (legal) peril.

234 See Restatement (Second) of Torts § 520 (Am. L. Inst. 1977); Restatement (Third) of Torts: Liability for Physical & Emotional Harm §20 (Am. L. Inst. 2010).

235 See John C. P. Goldberg & Benjamin C. Zipursky, The Strict Liability in Fault and the Fault in Strict Liability, 85 Fordham L. Rev. 743, 761 (2016).

236 See Vladeck, supra note 5, at 146.

237 Id.

238 Kilcullen, supra note 170, at 15.

239 See Greenman v. Yuba Power Prod., Inc., 377 P.2d 897, 901 (Cal. 1963) (holding that the very purpose of products liability law is to countervail the power imbalance that exists between manufacturers and injured persons who are powerless to protect themselves). Despite the revolutionary potential of the Greenman decision, the promise of strict liability went unfulfilled due to doctrinal confusion over the meaning of “defect” and the tendency of courts to look for fault even when applying a strict-liability standard. See Andrzej Rapaczynski, Driverless Cars and the Much Delayed Tort Law Revolution 20 (Columbia L. & Econs. Working Paper No. 540, 2016), https://scholarship.law.columbia.edu/faculty_scholarship/1962/ [https://perma.cc/NGQ5-CDSA] (arguing that the best way to operationalize strict liability would have been to ask “who was most likely to be able to bring about safety improvements in the future, even if such improvements were not yet possible and even if we could not as yet specify them with any degree of precision. In other words, the relevant question is: Do we expect technical improvements in the design and/or the manufacturing process to be the best way of lowering the future cost of accidents of the type at issue, or do we expect some improvements to come from a more skillful or better calibrated use by the consumers, from medical advances in predicting or treating the injuries, or perhaps from some other inventions or behavior modifications?”).

240 Luciano Floridi, Distributed Morality in an Information Society, 19 Sci. & Eng'g Ethics 727, 729 (2012) [hereinafter Distributed Morality]. His only requirement for an agent to be included in such a system is that the agent exercise some degree of autonomy, interact with other agents and their environment, and be capable of learning from these interactions.

241 Luciano Floridi, Faultless Responsibility: on the Nature and Allocation of Moral Responsibility for Distributed Moral Actions, 374 Phil. Transactions Royal Soc’y A 2-3 (2016) [hereinafter Faultless Responsibility]. Floridi notes that reaching a satisfactory output in a social network is “achieved through hard and soft legislation, rules and codes of conducts, nudging, incentives and disincentives; in other words, through social pushes and pulls.” Id. at 7. This resonates strongly with the deterrence and distributive functions of tort law.

242 Restatement (Third) of Torts: Liability for Physical & Emotional Harm § 20 (Am. L. Inst. 2010). This formulation more or less encapsulates the six factors set out in the Second Restatement for what counts as an abnormal activity: “(a) Existence of a high degree of risk of some harm to the person, land or chattels of others; (b) Likelihood that the harm that results from it will be great; (c) Inability to eliminate the risk by the exercise of reasonable care; (d) Extent to which the activity is not a matter of common usage; (e) Inappropriateness of the activity to the place where it is carried on; and (f) Extent to which its value to the community is outweighed by its dangerous attributes.” Restatement (Second) of Torts § 520 (Am. L. Inst. 1977). The term “abnormally dangerous activities” replaced the First Restatement’s language of “ultrahazardous activities,” though the substance of this latter category remains in the law. See Goldberg & Zipursky, supra note 235, at 760.

243 We may even reach a point where every hospital and insurance company requires the use of AI systems, with the failure to do so being legally actionable in the event of a bad outcome. See Froomkin et al., supra note 78, at 49-50.

244 To borrow Floridi’s language, these are parties who “output, as a whole, a distributed action that is morally-loaded, by activating themselves and by interacting with other agents according to some specific inputs and thresholds, in ways that are assumed to be morally neutral.” Faultless Responsibility, supra note 241, at 7. CEL coupled with strict liability can be interpreted as a legal expression of this line of moral reasoning.

245 See, e.g., Escola v. Coca Cola Bottling Co. of Fresno, 150 P.2d 436, 440, 443-44 (Cal. 1944) (Traynor, J., concurring) (proposing a shift from negligence to a strict liability standard for defective products on the public-policy ground that manufacturers are best situated to anticipate product hazards, and that manufacturing processes are often secretive, leaving consumers without the means to investigate a product’s soundness on their own); see also Vladeck, supra note 5, at 146 (characterizing strict liability as a “court-compelled insurance regime to address the inadequacy of tort law to resolve questions of liability that may push beyond the frontiers of science and technology”).

246 Vladeck, supra note 5, at 147.

247 Liability could, for instance, incentivize manufacturers to make their code “crashworthy” by incorporating “state-of-the-art techniques in software fault tolerance.” See Bryan H. Choi, Crashworthy Code, 94 Wash. L. Rev. 39, 47 (2019).

248 Am. L. Inst., Perspectives on the Tort System and the Liability Crisis, Reporters’ Study V: The Institutional Framework 3, 25 (1991).

249 See, e.g., Bathaee, supra note 36, at 931.

250 Under Keating’s taxonomy, harm-based strict liability – such as products liability law, abnormally dangerous activity law, and nuisance law – addresses justifiable conduct causing physical harm. In contrast, autonomy-based strict liability – such as trespass and battery – addresses innocent or morally blameless conduct that infringes on autonomy rights (over persons and property). In both instances, “[t]he object of the law’s criticism is not the defendant’s primary conduct in inflicting injury, but his secondary conduct in failing to repair the harm justifiably inflicted.” Gregory Keating, Is There Really No Liability Without Fault? A Critique of Goldberg & Zipursky, 85 Fordham L. Rev. Res Gestae 24, 30 (2016-2017).

251 Gregory C. Keating, Products Liability as Enterprise Liability, 10 J. Tort L. 41, 66 (2017).

252 Cf. Goldberg & Zipursky, supra note 235, at 766-767. Goldberg and Zipursky argue that Keating’s idea of “conditional wrongs” is untenable on the basis that a plaintiff does not have to prove that the strictly liable defendant failed to offer to pay for the damage caused. Their position is that the “predicate of liability is the doing of the harm, not the doing of the harm plus the failure to step forward to offer to pay.” Any pre-emptive payment from the defendant would be a matter of restitution, which presupposes the existence of a tort. Keating’s response is simple but, I think, effective: “[Goldberg and Zipursky] are right that no such proof is needed. Plaintiff need only prove that the defendant harmed her. The duty to repair the harm arises when harm is inflicted. If plaintiff and defendant cannot agree on what such reparation requires, the matter is for a court to determine simply because no one can unilaterally determine that they have discharged their legal obligations.” Gregory Keating, Liability Without Regard to Fault: A Comment on Goldberg & Zipursky 8 n.41 (Univ. of S. Cal. L. Sch., Working Paper No. 232, 2016), https://law.bepress.com/cgi/viewcontent.cgi?article=1367&context=usclwps-lss [https://perma.cc/JQ5A-LMJ3].

253 Keating, supra note 251, at 70.

254 Id. at 72-74.

255 Id. at 71.

256 See Froomkin et al., supra note 78, at 64 (“[W]e presume that [Machine Learning] diagnostics will follow the path of many other digital technologies and exhibit high fixed costs but relatively low marginal costs”).

257 Keating, supra note 251, at 71.

258 Proponents of EL have long advocated for these sorts of experiments. See, e.g., Paul C. Weiler, Reforming Medical Malpractice in a Radically Moderate – and Ethical – Fashion, 54 DePaul L. Rev. 205, 231 (2005) (proposing that professional athletes’ associations, such as the National Hockey League Players’ Association, experiment with an EL-style, no-fault regime for their players).

259 See, e.g., Kessler, supra note 164, at 102.

260 See Maliha et al., supra note 111.

261 Id.

262 See Hermer, supra note 171, at 297.

263 Technology companies typically carry technology errors and omissions insurance, which is designed to cover financial loss rather than bodily injury or property damage. General liability policies likewise exclude professional liability and therefore liability for bodily injury. Insurers have started to offer coverage for contingent bodily injury under technology errors and omissions policies, though at the moment only a limited number of insurers are willing to add this coverage. See Thompson Mackey, Artificial Intelligence and Professional Liability, Risk Mgmt. Mag. (June 11, 2018), http://www.rmmagazine.com/articles/article/2018/06/11/-Artificial-Intelligence-and-Professional-Liability [https://perma.cc/VER6-Z8H9].

264 George Maliha et al., To Spur Growth in AI, We Need a New Approach to Legal Liability, Harv. Bus. Rev. (July 13, 2021), https://hbr.org/2021/07/to-spur-growth-in-ai-we-need-a-new-approach-to-legal-liability [https://perma.cc/PH3H-YHGQ].

265 See, e.g., Helen Smith & Kit Fotheringham, Artificial Intelligence in Clinical Decision-Making: Rethinking Liability, 20 Med. L. Int’l 131, 148 (2020) (“… it is debatably not enterprise liability if the economic impact of a claim is assigned to an insurer rather than directly impacting the actors who caused the harm”).

266 See Tom Baker & Charles Silver, How Liability Insurers Protect Patients and Improve Safety, 68 DePaul L. Rev. 209, 237 (2019).

267 See Vladeck, supra note 5 (“Conferring ‘personhood’ on those machines would resolve the agency question; the machines would become principals in their own right and along with new legal status would come new legal burdens, including the burden of self-insurance. This is a different form of cost-spreading than focusing on the vehicle’s creators, and it may have the virtue of necessitating that a broader audience – including the vehicle’s owner – participate in funding the insurance pool, and that too may be more fair”).

268 Gerhard Wagner, Robot, Inc.: Personhood for Autonomous Systems?, 88 Fordham L. Rev. 591, 608 (2019).

269 See id. at 610; Eur. Comm’n, supra note 4, at 38 (holding that “[a]ny additional personality should go hand-in-hand with funds assigned to such electronic persons, so that claims can be effectively brought against them. This would amount to putting a cap on liability and – as experience with corporations has shown – subsequent attempts to circumvent such restrictions by pursuing claims against natural or legal persons to whom electronic persons can be attributed, effectively ‘piercing the electronic veil’”).

270 Karnow, supra note 189, at 193-196.

271 Comm. on Legal Affairs, supra note 140, at ¶¶ 58-59. Unlike Karnow’s proposal, the EU’s insurance scheme does not include risk certification.

272 See Wagner, supra note 268, at 610 (“If the manufacturers have to front the costs of insurance, they will pass these costs on to the buyers or operators of the robot. In one form or another, they would end up with the users. The same outcome occurs if users contribute directly to the asset cushion or become liable for insurance premiums. In the end, therefore, the robot’s producers and users must pay for the harm the robot causes. The ePerson is only a conduit to channel the costs of coverage to the manufacturers and users.”); see also Karnow, The Application of Traditional Tort Theory to Embodied Machine Intelligence, supra note 77, at 51 (“In an age of mass markets and long distribution chains, costs could be allocated across a large number of sales, and manufacturers were in a position accordingly to spread costs including by purchasing insurance. Why not similarly spread the costs of injury?”).