Causal importance of low-level feature selectivity for generalization in image recognition.
Neural Networks (IF 6.0) Pub Date: 2020-02-24, DOI: 10.1016/j.neunet.2020.02.009
Jumpei Ukita

Although our brain and deep neural networks (DNNs) can perform high-level sensory-perception tasks, such as image or speech recognition, the inner mechanisms of these hierarchical information-processing systems are poorly understood in both neuroscience and machine learning. Recently, Morcos et al. (2018) examined the effect of class-selective units in DNNs, i.e., units with high-level selectivity, on network generalization, concluding that hidden units selectively activated by specific input patterns may harm the network's performance. In this study, we revisited their hypothesis, considering units selective for lower-level features, and argued that selective units are not always harmful to network performance. Specifically, using DNNs trained for image classification, we analyzed the orientation selectivity of individual units, a low-level selectivity widely studied in visual neuroscience. We found that orientation-selective units exist in both lower and higher layers of these DNNs, as in our brain. In particular, units in lower layers became more orientation-selective as the generalization performance improved during training. Consistently, networks that generalized better were more orientation-selective in their lower layers. We finally revealed that ablating these selective units in the lower layers substantially degraded the generalization performance of the networks, at least in part by disrupting the shift invariance of the higher layers. These results suggest that orientation selectivity can play a causally important role in object recognition and that, in contrast to the apparent dispensability of units with high-level selectivity, lower-layer units selective for low-level features may be indispensable for generalization, at least for several network architectures.
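A minimal sketch of the two measurements described above, under stated assumptions: the paper's exact stimuli, architecture, and selectivity index are not reproduced here, so this sketch uses a hypothetical toy ConvNet (ToyNet), sinusoidal gratings, and 1 - circular variance as the orientation selectivity index (OSI); the forward-hook ablation stands in for the unit-ablation analysis.

import math
import torch
import torch.nn as nn

def make_grating(theta, size=32, freq=4.0, phase=0.0):
    # Sinusoidal grating of orientation theta (radians), shape (1, 1, size, size).
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    proj = xs * math.cos(theta) + ys * math.sin(theta)
    return torch.sin(2 * math.pi * freq * proj + phase).view(1, 1, size, size)

class ToyNet(nn.Module):
    # Small stand-in for an image classifier; conv1 is the "lower layer" probed below.
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 5, padding=2)
        self.conv2 = nn.Conv2d(16, 32, 5, padding=2)
        self.head = nn.Linear(32, n_classes)
    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.head(x.mean(dim=(2, 3)))  # global average pooling

@torch.no_grad()
def orientation_selectivity(model, n_thetas=16):
    # OSI per conv1 channel: |sum_t r(t) exp(2i*t)| / sum_t r(t), using the
    # doubled angle because orientation has period pi. A fuller protocol would
    # also average responses over grating phases and spatial frequencies.
    thetas = torch.linspace(0, math.pi, n_thetas + 1)[:-1]
    resp = torch.stack([
        torch.relu(model.conv1(make_grating(t.item())))[0, :, 16, 16]
        for t in thetas
    ])  # (n_thetas, n_channels): central unit's rectified response per orientation
    num = (resp * torch.exp(2j * thetas)[:, None]).sum(0).abs()
    return num / resp.sum(0).clamp(min=1e-8)

model = ToyNet()  # in practice, a network trained for image classification
osi = orientation_selectivity(model)
top = osi.argsort(descending=True)[:4]  # most orientation-selective channels
print("OSI per conv1 channel:", osi)

# Ablation: silence the most selective channels at test time via a forward hook.
def ablate(module, inputs, output):
    out = output.clone()
    out[:, top] = 0
    return out

handle = model.conv1.register_forward_hook(ablate)
logits = model(make_grating(0.0))  # stand-in for a batch of test images
handle.remove()

With a trained network, one would re-run the test set while the hook is active and compare accuracy against the intact model; the same setup could also compare the higher layers' responses to shifted inputs with and without the ablation, probing the shift-invariance effect described above.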



Updated: 2020-02-24