Actionable Principles for Artificial Intelligence Policy: Three Pathways
Science and Engineering Ethics ( IF 3.7 ) Pub Date : 2021-02-19 , DOI: 10.1007/s11948-020-00277-3
Charlotte Stix 1
In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on "AI Ethics Principles". However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of "Actionable Principles for AI". The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission's "High-Level Expert Group on AI". Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of "Actionable Principles for AI". The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalizability.




Updated: 2021-02-19