Hard choices in artificial intelligence
Artificial Intelligence (IF 14.4) Pub Date: 2021-07-14, DOI: 10.1016/j.artint.2021.103555
Roel Dobbe, Thomas Krendl Gilbert, Yonatan Mintz

As AI systems are integrated into high-stakes social domains, researchers now examine how to design and operate them in a safe and ethical manner. However, the criteria for identifying and diagnosing safety risks in complex social contexts remain unclear and contested. In this paper, we examine the vagueness in debates about the safety and ethical behavior of AI systems. We show that this vagueness cannot be resolved through mathematical formalism alone; instead, it requires deliberation about the politics of development as well as the context of deployment. Drawing from a new sociotechnical lexicon, we redefine vagueness in terms of distinct design challenges at key stages in AI system development. The resulting framework of Hard Choices in Artificial Intelligence (HCAI) empowers developers by 1) identifying points of overlap between design decisions and major sociotechnical challenges, and 2) motivating the creation of stakeholder feedback channels so that safety issues can be exhaustively addressed. As such, HCAI contributes to a timely debate about the status of AI development in democratic societies, arguing that deliberation should be the goal of AI Safety, not just the procedure by which it is ensured.



Updated: 2021-07-30