Applying Deutsch’s concept of good explanations to artificial intelligence and neuroscience - an initial exploration
Cognitive Systems Research (IF 2.1) Pub Date: 2021-06-01, DOI: 10.1016/j.cogsys.2020.12.002
Daniel C. Elton

Artificial intelligence has made great strides since the deep learning revolution, but AI systems still struggle to extrapolate outside of their training data and adapt to new situations. For inspiration we look to the domain of science, where scientists have been able to develop theories which show remarkable ability to extrapolate and sometimes predict the existence of phenomena which have never been observed before. According to David Deutsch, this type of extrapolation, which he calls "reach", is due to scientific theories being hard to vary. In this work we investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning such as the bias-variance trade-off and Occam's razor. We distinguish internal variability, how much a model/theory can be varied internally while still yielding the same predictions, from external variability, which is how much a model must be varied to accurately predict new, out-of-distribution data. We discuss how to measure internal variability using the size of the Rashomon set and how to measure external variability using Kolmogorov complexity. We explore what role hard-to-vary explanations play in intelligence by looking at the human brain and distinguish two learning systems in the brain. The first system operates similarly to deep learning and likely underlies most of perception and motor control, while the second is a more creative system capable of generating hard-to-vary explanations of the world. We argue that figuring out how to replicate this second system, which generates hard-to-vary explanations, is a key challenge which needs to be solved in order to realize artificial general intelligence. We make contact with the framework of Popperian epistemology, which rejects induction and asserts that knowledge generation is an evolutionary process which proceeds through conjecture and refutation.
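(Editor's note: the abstract proposes measuring internal variability by the size of the Rashomon set, i.e. the collection of models that fit the data roughly as well as the best model found. The following is a minimal, hypothetical Python sketch, not taken from the paper, illustrating one common way to estimate that size: sample candidate models and count the fraction whose loss falls within a tolerance ε of the best loss. All names, the toy data, and the tolerance value are assumptions made for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise (assumed for illustration only)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + rng.normal(scale=0.1, size=200)

def loss(w):
    """Mean squared error of the linear model y_hat = w * x."""
    return np.mean((y - w * x) ** 2)

# Sample candidate models (slopes) and evaluate each one's loss.
candidates = rng.uniform(-5, 5, size=10_000)
losses = np.array([loss(w) for w in candidates])

best = losses.min()
epsilon = 0.01  # tolerance defining "performs about as well as the best"

# Rashomon set: candidates whose loss is within epsilon of the best loss.
rashomon = candidates[losses <= best + epsilon]
rashomon_ratio = len(rashomon) / len(candidates)

print(f"best loss = {best:.4f}")
print(f"Rashomon set size = {len(rashomon)} of {len(candidates)} "
      f"(ratio {rashomon_ratio:.3f})")
```

A large Rashomon ratio would indicate high internal variability: many distinct parameter settings yield essentially the same predictions. External variability, by contrast, is tied in the abstract to Kolmogorov complexity, which is uncomputable and is typically only approximated (e.g. via compressed description length), so no analogous sketch is given here.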

Updated: 2021-06-01