Existential risk from AI and orthogonality: Can we have it both ways?
Ratio (IF 0.6) Pub Date: 2021-07-15, DOI: 10.1111/rati.12320
Vincent C. Müller, Michael Cannon

The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, the two premises cannot be joined, and the argument for the existential risk of AI turns out to be invalid. If the interpretation is incorrect and both premises use the same notion of intelligence, then at least one of the premises is false, and the orthogonality thesis remains itself orthogonal to the argument for existential risk from AI. In either case, the standard argument for existential risk from AI is not sound. That said, instrumental AI still carries a risk of very significant damage if it is designed or used badly, though this is not due to superintelligence or a singularity.
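To make the structure of the objection explicit, here is a schematic first-order rendering of the standard argument (illustrative notation of our own, not the authors'): write $I_g(x)$ for ‘x has general intelligence’ and $I_i(x)$ for ‘x has instrumental intelligence’.

\begin{align*}
\text{(P1)}\quad & \exists x\,\bigl(I_g(x) \wedge \mathrm{Superintelligent}(x)\bigr) \rightarrow \mathrm{LossOfControl}\\
\text{(P2)}\quad & \forall x\,\forall g\,\bigl(I_i(x) \rightarrow \mathrm{CompatibleWithGoal}(x,g)\bigr)\\
\text{(C)}\quad & \mathrm{ExistentialRisk}
\end{align*}

On this rendering, (C) follows only if the same intelligence predicate figures in both premises, i.e. only if $I_g = I_i$. If the paper's interpretation is right and the predicates differ, the argument trades on an equivocation and is formally invalid.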
