The risks associated with Artificial General Intelligence: A systematic review
Journal of Experimental & Theoretical Artificial Intelligence ( IF 2.2 ) Pub Date : 2021-08-13 , DOI: 10.1080/0952813x.2021.1964003
Scott McLean 1, Gemma J. M. Read 1, Jason Thompson 1,2, Chris Baber 3, Neville A. Stanton 1, Paul M. Salmon 1

ABSTRACT

Artificial General Intelligence (AGI) offers enormous benefits for humanity, yet it also poses great risk. The aim of this systematic review was to summarise the peer-reviewed literature on the risks associated with AGI. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Sixteen articles were deemed eligible for inclusion. The article types included in the review were classified as philosophical discussions, applications of modelling techniques, and assessments of current frameworks and processes in relation to AGI. The review identified a range of risks associated with AGI, including AGI removing itself from the control of human owners/managers, AGI being given or developing unsafe goals, the development of unsafe AGI, AGIs with poor ethics, morals, and values, inadequate management of AGI, and existential risks. Several limitations of the AGI literature base were also identified, including a limited number of peer-reviewed articles and modelling techniques focused on AGI risk, a lack of specific risk research for the domains in which AGI may be implemented, a lack of specific definitions of AGI functionality, and a lack of standardised AGI terminology. Recommendations addressing the identified issues in AGI risk research are needed to guide AGI design, implementation, and management.




Updated: 2021-08-13