Statistically responsible artificial intelligences
Ethics and Information Technology (IF 3.633) Pub Date: 2021-04-09, DOI: 10.1007/s10676-021-09591-1
Nicholas Smith, Darby Vickers

As artificial intelligence (AI) becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Understanding what it means for a machine to be morally responsible is therefore important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that our Strawsonian approach is the only one worthy of consideration, or the obviously correct one, but we think it is preferable to trying to marry fundamentally different conceptions of moral responsibility (one for AI, one for humans) into a single cohesive account. On a Strawsonian framework, people are morally responsible when they are appropriately subject to a particular set of attitudes, the reactive attitudes; we therefore ask under what conditions it might be appropriate to subject machines to this same set of attitudes. Although the Strawsonian account traditionally applies to individual humans, it is plausible that entities that are not individual humans but possess these attitudes are also candidates for moral responsibility under a Strawsonian framework. We conclude that weak AI is never morally responsible, while a strong AI with the right emotional capacities may be morally responsible.



Updated: 2021-04-09