Actionable Principles for Artificial Intelligence Policy: Three Pathways
arXiv - CS - Computers and Society Pub Date : 2021-02-24 , DOI: arxiv-2102.12406 Charlotte Stix
In the development of governmental policy for artificial intelligence (AI)
that is informed by ethics, one avenue currently pursued is that of drawing on
AI Ethics Principles. However, these AI Ethics Principles often fail to be
actioned in governmental policy. This paper proposes a novel framework for the
development of Actionable Principles for AI. The approach acknowledges the
relevance of AI Ethics Principles and homes in on methodological elements to
increase their practical implementability in policy processes. As a case study,
elements are extracted from the development process of the Ethics Guidelines
for Trustworthy AI of the European Commission's High-Level Expert Group on AI.
Subsequently, these elements are expanded on and evaluated in light of their
ability to contribute to a prototype framework for the development of
Actionable Principles for AI. The paper proposes the following three
propositions for the formation of such a prototype framework: (1) preliminary
landscape assessments; (2) multi-stakeholder participation and cross-sectoral
feedback; and, (3) mechanisms to support implementation and
operationalizability.
Updated: 2021-02-25