A Wide Human-Rights Approach to Artificial Intelligence Regulation in Europe
IEEE Technology and Society Magazine (IF 2.1), Pub Date: 2021-06-03, DOI: 10.1109/mts.2021.3056284
Jesus Salgado-Criado, Celia Fernandez-Aller

Editor's note: This article was written before the EU Commission published its proposal for an artificial intelligence (AI) regulation [29]. In a first, provisional analysis of the proposal, we observe that it incorporates some of the basic principles laid down in our article: it prioritizes fundamental rights, incorporates human rights principles such as accountability, and establishes governance through supervisory authorities to implement and enforce the regulation. Nevertheless, many of the suggestions in our article that would help operationalize the regulation remain unaddressed. One example is the narrowing of the regulation's scope to a list of “high risk applications,” which leaves all other AI applications without a legal framework; we believe the principles that inspire the regulation should also apply to lower risk applications. Likewise, defining only the compliance process for AI developers while leaving open the specific technical requirements that high risk applications must meet leaves untouched the existing gap between legal language and engineering practice. Nor does the proposal describe mechanisms by which stakeholders other than developers and implementers can influence AI development, monitor the performance of AI systems, or seek redress if harmed. These shortcomings, and other issues presented in our article, leave open loopholes that we hope the European Parliament can fix during the legislative process.

Updated: 2021-06-03