Entropic boundary conditions towards safe artificial superintelligence
Journal of Experimental & Theoretical Artificial Intelligence (IF 2.2), Pub Date: 2021-07-19, DOI: 10.1080/0952813X.2021.1952653
Santiago Núñez-Corrales, Eric Jakobsson

ABSTRACT

Artificially superintelligent (ASI) agents that will not cause harm to humans or other organisms are central to mitigating a growing contemporary global safety concern as artificially intelligent agents become more sophisticated. We argue that it is not necessary to resort to implementing an explicit theory of ethics, and that doing so may entail intractable difficulties and unacceptable risks. We attempt to provide some insight into the matter by defining a minimal set of boundary conditions that may decrease the probability of conflict with synthetic intellects and thereby prevent aggression towards organisms. Our argument starts from causal entropic forces as good general predictors of future action in ASI agents. We reason that maximising future freedom of action implies reducing the amount of repeated computation needed to find good solutions to a large number of problems, for which living systems are good exemplars: a safe ASI should find living organisms intrinsically valuable. We describe empirically bounded ASI agents whose actions are constrained by the character of physical laws and by their own evolutionary history as emerging from H. sapiens, conceptually and memetically, if not genetically. Plausible consequences and practical concerns for experimentation are characterised, and implications for life in the universe are discussed.
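
For reference, the causal entropic force formalism the abstract invokes is that of Wissner-Gross and Freer (2013); the following is a sketch of its standard statement, not a quotation from this paper's body. An agent at macrostate X_0 is pushed along the gradient of its causal path entropy S_c over a time horizon \tau,

    F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \big|_{X = X_0},

where T_c is a constant setting the strength of the force, and S_c integrates over all feasible future paths x(t) of duration \tau starting from X:

    S_c(X, \tau) = -k_B \int P\big(x(t) \mid x(0) = X\big) \ln P\big(x(t) \mid x(0) = X\big) \, \mathcal{D}x(t).

Maximising S_c keeps the greatest diversity of future paths open, which is the sense in which the abstract speaks of maximising future freedom of action.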


