Ethics as a service: a pragmatic operationalisation of AI Ethics
arXiv - CS - Computers and Society. Pub Date: 2021-02-11, DOI: arxiv-2102.09364
Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mokander, Luciano Floridi

As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines, and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the what and the how of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this approach to closing the gap is currently ineffective, as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed Ethics as a Service.

Updated: 2021-02-19