Governing AI safety through independent audits
Nature Machine Intelligence (IF 23.8), Pub Date: 2021-07-20, DOI: 10.1038/s42256-021-00370-7
Gregory Falco, Anton Dahbura, Cara LaPointe, Ben Shneiderman, Julia Badger, Ryan Carrier, David Danks, Martin Eling, Alwyn Goodloe, Jerry Gupta, Christopher Hart, Marina Jirotka, Henric Johnson, Ashley J. Llorens, Alan K. Mackworth, Carsten Maple, Sigurður Emil Pálsson, Frank Pasquale, Alan Winfield, Zee Kin Yeong

Highly automated systems are becoming omnipresent. They range in function from self-driving vehicles to advanced medical diagnostics and afford many benefits. However, there are assurance challenges that have become increasingly visible in high-profile crashes and incidents. Governance of such systems is critical to garner widespread public trust. Governance principles have been previously proposed offering aspirational guidance to automated system developers; however, their implementation is often impractical given the excessive costs and processes required to enact and then enforce the principles. This Perspective, authored by an international and multidisciplinary team across government organizations, industry and academia, proposes a mechanism to drive widespread assurance of highly automated systems: independent audit. As proposed, independent audit of AI systems would embody three ‘AAA’ governance principles of prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements. Independent audit of AI systems serves as a pragmatic approach to an otherwise burdensome and unenforceable assurance challenge.
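The Perspective does not prescribe a technical implementation of the 'AAA' principles. As one purely illustrative sketch (all class and field names here are hypothetical, not from the paper), an operation audit trail could be realized as an append-only, hash-chained log, so that an independent auditor can later verify that no recorded operation was altered or deleted:

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, tamper-evident log: each record commits to the
    hash of the previous record, so altering any entry breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash for the first record

    def __init__(self):
        self._records = []
        self._prev_hash = self.GENESIS

    def append(self, event: dict) -> str:
        """Record one system operation; returns the record's hash."""
        record = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._records.append(record)
        self._prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the whole chain; False if any record was tampered with."""
        prev = self.GENESIS
        for rec in self._records:
            if rec["prev_hash"] != prev:
                return False
            body = {k: rec[k] for k in ("timestamp", "event", "prev_hash")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != rec["hash"]:
                return False
            prev = digest
        return True
```

For example, an autonomous system could append a record for each inference or operator override; an auditor holding only the final hash can detect any retroactive edit, since `verify()` fails the moment a stored record no longer matches its recomputed digest.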




Updated: 2021-07-21