Embedding Values in Artificial Intelligence (AI) Systems
Minds and Machines ( IF 4.2 ) Pub Date : 2020-09-01 , DOI: 10.1007/s11023-020-09537-4
Ibo van de Poel

Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody certain values. This account understands embodied values as the result of design activities intended to embed those values in such systems. AI systems are here understood as a special kind of sociotechnical system that, like traditional sociotechnical systems, are composed of technical artifacts, human agents, and institutions but—in addition—contain artificial agents and certain technical norms that regulate interactions between artificial agents and other elements of the system. The specific challenges and opportunities of embedding values in AI systems are discussed, and some lessons for better embedding values in AI systems are drawn.

Updated: 2020-09-01