A Misdirected Principle with a Catch: Explicability for AI
Minds and Machines (IF 7.4) Pub Date: 2019-10-15, DOI: 10.1007/s11023-019-09509-3
Scott Robbins

There is widespread agreement that there should be a principle requiring that artificial intelligence (AI) be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds Mach 28(4):689–707, 2018). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.

Updated: 2019-10-15