Understanding and Avoiding AI Failures: A Practical Guide
Philosophies Pub Date: 2021-06-28, DOI: 10.3390/philosophies6030053
Robert Williams, Roman Yampolskiy

As AI technologies grow in capability and ubiquity, AI accidents are becoming more common. Drawing on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. The framework is designed to direct attention to pertinent system properties without demanding impractical levels of precision. In addition, we use AI safety principles to quantify the unique risks posed by increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents, rather than seeking a single root cause, we identify where safety attention should be directed for current-generation AI systems.

Updated: 2021-06-28