Actionable Principles for Artificial Intelligence Policy: Three Pathways
Science and Engineering Ethics (IF 2.7), Pub Date: 2021-02-19, DOI: 10.1007/s11948-020-00277-3
Charlotte Stix
In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on "AI Ethics Principles". However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of "Actionable Principles for AI". The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission's High-Level Expert Group on AI. Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of "Actionable Principles for AI". The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalizability.




Updated: 2021-02-19