European artificial intelligence “trusted throughout the world”: Risk-based regulation and the fashioning of a competitive common AI market
Regulation & Governance (IF 3.203), Pub Date: 2023-12-11, DOI: 10.1111/rego.12563
Regine Paul

The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal to ban some applications altogether on moral grounds. Core to its regulatory strategy is a nominally “risk-based” approach with interventions that are proportionate to risk levels. Yet neither standard accounts of risk-based regulation as a rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in Regulation & Governance, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enriches risk-based regulation scholarship—beyond AI—with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. Second, it conceptualizes the role of risk analysis within a Cultural Political Economy framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration) which the Commission wields in its pursuit of a future common European AI market. Third, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. The analysis finds that the Commission's use of risk analysis—outlawing some AI uses as matters of deep value conflicts and tightly controlling (at least discursively) so-called high-risk AI systems—enables Brussels to fashion its desired trademark of European “cutting-edge AI … trusted throughout the world” in the first place.

Updated: 2023-12-11