Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care
Hastings Center Report (IF 2.3), Pub Date: 2021-04-06, DOI: 10.1002/hast.1248
Ryan Marshall Felder

The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity—because the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify implementation of these AI systems in health care, or is it merely one of the necessary criteria? I argue that accountability, which holds an important role in preserving the patient-physician trust that allows the institution of medicine to function, contributes further to an account of AI system justification. Hence, I endorse the vanishing accountability principle: accountability in medicine, in addition to statistical validation, must be preserved. AI systems that introduce problematic gaps in accountability should not be implemented.
