An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence
arXiv - CS - Artificial Intelligence. Pub Date: 2021-07-29. arXiv:2107.13734
Desmond C. Ong

The recent rapid advancements in artificial intelligence research and deployment have sparked more discussion about the potential ramifications of socially- and emotionally-intelligent AI. The question is not if research can produce such affectively-aware AI, but when it will. What will it mean for society when machines -- and the corporations and governments they serve -- can "read" people's minds and emotions? What should developers and operators of such AI do, and what should they not do? The goal of this article is to pre-empt some of the potential implications of these developments, and propose a set of guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI, in order to guide researchers, industry professionals, and policy-makers. We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-à-vis the entities that deploy such AI -- which we term Operators. Our analysis produces two pillars that clarify the responsibilities of each of these stakeholders: Provable Beneficence, which rests on proving the effectiveness of the AI, and Responsible Stewardship, which governs responsible collection, use, and storage of data and the decisions made from such data. We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.

Updated: 2021-07-30