Regulating for trust: Can law establish trust in artificial intelligence?
Regulation & Governance (IF 3.203) Pub Date: 2023-11-30, DOI: 10.1111/rego.12568
Aurelia Tamò‐Larrieux 1, Clement Guitton 2, Simon Mayer 2, Christoph Lutz 3

The current political and regulatory discourse frequently references the term "trustworthy artificial intelligence (AI)." In Europe, efforts to ensure trustworthy AI began with the High-Level Expert Group's Ethics Guidelines for Trustworthy AI and have since merged into the regulatory discourse on the EU AI Act. Around the globe, policymakers are actively pursuing initiatives, as the US Executive Order on Safe, Secure, and Trustworthy AI and the Bletchley Declaration on AI showcase, based on the premise that the right regulatory strategy can shape trust in AI. To analyze the validity of this premise, we draw on the broader literature on trust in automation. On this basis, we construct a framework to analyze 16 factors that impact trust in AI and automation more broadly. We analyze the interplay between these factors and disentangle them to determine the impact regulation can have on each. The article thus provides policymakers and legal scholars with a foundation for gauging different regulatory strategies, notably by differentiating between those strategies where regulation is more likely to influence trust in AI (e.g., regulating the types of tasks that AI may fulfill) and those where its influence on trust is more limited (e.g., measures that increase awareness of complacency and automation biases). Our analysis underscores the critical role of nuanced regulation in shaping the human-automation relationship and offers policymakers a targeted approach for debating how to streamline regulatory efforts in future AI governance.

Updated: 2023-12-01