A theory of learning to infer.
Psychological Review (IF 5.1), Pub Date: 2020-04-01, DOI: 10.1037/rev0000178
Ishita Dasgupta, Eric Schulz, Joshua B. Tenenbaum, Samuel J. Gershman

Bayesian theories of cognition assume that people can integrate probabilities rationally. However, several empirical findings contradict this proposition: human probabilistic inferences are prone to systematic deviations from optimality. Puzzlingly, these deviations sometimes go in opposite directions. Whereas some studies suggest that people underreact to prior probabilities (base rate neglect), other studies find that people underreact to the likelihood of the data (conservatism). We argue that these deviations arise because the human brain does not rely solely on a general-purpose mechanism for approximating Bayesian inference that is invariant across queries. Instead, the brain is equipped with a recognition model that maps queries to probability distributions. The parameters of this recognition model are optimized to get the output as close as possible, on average, to the true posterior. Because of our limited computational resources, the recognition model will allocate its resources so as to be more accurate for high probability queries than for low probability queries. By adapting to the query distribution, the recognition model learns to infer. We show that this theory can explain why and when people underreact to the data or the prior, and a new experiment demonstrates that these two forms of underreaction can be systematically controlled by manipulating the query distribution. The theory also explains a range of related phenomena: memory effects, belief bias, and the structure of response variability in probabilistic reasoning. We also discuss how the theory can be integrated with previously proposed sampling-based accounts of approximate inference. (PsycInfo Database Record (c) 2020 APA, all rights reserved.)
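The core mechanism described in the abstract — a resource-limited recognition model that adapts to the query distribution, underweighting whichever cue varies little across queries — can be illustrated with a toy sketch. This is not the paper's actual model: it stands in for limited capacity with an L2 (ridge) penalty on a linear regression in log-odds space, where the exact Bayesian answer is log posterior odds = log prior odds + log likelihood ratio (coefficients of exactly 1 on each cue). Because ridge shrinkage is stronger for low-variance features, a query distribution in which priors rarely vary produces underreaction to the prior (base rate neglect), while one in which likelihoods rarely vary produces underreaction to the data (conservatism):

```python
import numpy as np

def fit_recognition_model(sd_prior, sd_lr, lam=500.0, n=10_000, seed=0):
    """Fit a capacity-limited (ridge-penalized) linear map from query
    features (log prior odds, log likelihood ratio) to the exact
    Bayesian log posterior odds. Returns the two learned weights;
    the unconstrained Bayesian optimum is (1.0, 1.0)."""
    rng = np.random.default_rng(seed)
    log_prior_odds = rng.normal(0.0, sd_prior, n)
    log_lr = rng.normal(0.0, sd_lr, n)
    # Exact Bayes for a binary hypothesis, in log-odds space.
    log_post_odds = log_prior_odds + log_lr
    X = np.column_stack([log_prior_odds, log_lr])
    # Closed-form ridge solution; lam models the resource limit.
    return np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ log_post_odds)

# Priors rarely vary across queries -> the prior cue is underweighted.
w_neglect = fit_recognition_model(sd_prior=0.3, sd_lr=2.0)
# Likelihoods rarely vary across queries -> the data cue is underweighted.
w_conserve = fit_recognition_model(sd_prior=2.0, sd_lr=0.3)

print("base rate neglect regime:", w_neglect)   # weight on prior < weight on data
print("conservatism regime:     ", w_conserve)  # weight on data < weight on prior
```

The direction of underreaction flips purely as a function of the query distribution, with the model and penalty held fixed, which is the qualitative pattern the abstract's new experiment reports.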
