Controversial stimuli: Pitting neural networks against each other as models of human cognition [Colloquium Papers (free online)]
Proceedings of the National Academy of Sciences of the United States of America (IF 9.4). Pub Date: 2020-11-24, DOI: 10.1073/pnas.1912334117
Tal Golan 1, Prashant C. Raju 2, Nikolaus Kriegeskorte 1,3,4,5

Distinct scientific theories can make similar predictions. To adjudicate between theories, we must design experiments for which the theories make distinct predictions. Here we consider the problem of comparing deep neural networks as models of human visual recognition. To efficiently compare models’ ability to predict human responses, we synthesize controversial stimuli: images for which different models produce distinct responses. We applied this approach to two visual recognition tasks, handwritten digits (MNIST) and objects in small natural images (CIFAR-10). For each task, we synthesized controversial stimuli to maximize the disagreement among models which employed different architectures and recognition algorithms. Human subjects viewed hundreds of these stimuli, as well as natural examples, and judged the probability of presence of each digit/object category in each image. We quantified how accurately each model predicted the human judgments. The best-performing models were a generative analysis-by-synthesis model (based on variational autoencoders) for MNIST and a hybrid discriminative–generative joint energy model for CIFAR-10. These deep neural networks (DNNs), which model the distribution of images, performed better than purely discriminative DNNs, which learn only to map images to labels. None of the candidate models fully explained the human responses. Controversial stimuli generalize the concept of adversarial examples, obviating the need to assume a ground-truth model. Unlike natural images, controversial stimuli are not constrained to the stimulus distribution models are trained on, thus providing severe out-of-distribution tests that reveal the models’ inductive biases. Controversial stimuli therefore provide powerful probes of discrepancies between models and human perception.
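The core idea of synthesizing controversial stimuli is to optimize an image so that two candidate models give maximally different responses. The sketch below illustrates this with two hypothetical linear softmax classifiers standing in for the paper's deep networks, and finite-difference gradient ascent standing in for backpropagation; the specific objective (the product of model A's probability of one class and model B's probability of another) is an illustrative assumption, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Two hypothetical linear classifiers with different weights -- toy
# stand-ins for the DNN models compared in the paper.
rng = np.random.default_rng(0)
W_a = rng.normal(size=(2, 4))   # 2 classes, 4-pixel "image"
W_b = rng.normal(size=(2, 4))

def p_a(x):
    return softmax(W_a @ x)

def p_b(x):
    return softmax(W_b @ x)

def disagreement(x):
    # High when model A asserts class 0 AND model B asserts class 1,
    # i.e., when the image is "controversial" between the two models.
    return p_a(x)[0] * p_b(x)[1]

def synthesize(x, steps=200, lr=0.5, eps=1e-4):
    # Gradient ascent on the disagreement objective, using central
    # finite differences in place of backprop for self-containment.
    x = x.copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (disagreement(x + d) - disagreement(x - d)) / (2 * eps)
        x += lr * g
    return x

x0 = rng.normal(size=4)
x_controversial = synthesize(x0)
print(p_a(x_controversial), p_b(x_controversial))
```

After optimization the two models assign high probability to different classes for the same input; showing such images to human subjects then reveals which model's response better matches human perception.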




Updated: 2020-11-25