Hierarchical Sparse Coding of Objects in Deep Convolutional Neural Networks
Frontiers in Computational Neuroscience (IF 2.1), Pub Date: 2020-12-09, DOI: 10.3389/fncom.2020.578158
Xingyu Liu, Zonglei Zhen, Jia Liu

Recently, deep convolutional neural networks (DCNNs) have attained human-level performance on challenging object recognition tasks owing to their complex internal representations. However, it remains unclear how objects are represented in DCNNs, given their overwhelming number of features and non-linear operations. In parallel, the same question has been extensively studied in the primate brain, and three types of coding schemes have been found: an object is coded by the entire neuronal population (distributed coding), by a single neuron (local coding), or by a subset of the neuronal population (sparse coding). Here we asked whether DCNNs adopt any of these coding schemes to represent objects. Specifically, we used the population sparseness index, which is widely used in neurophysiological studies of the primate brain, to characterize the degree of sparseness at each layer in representative DCNNs pretrained for object categorization. We found that the sparse coding scheme was adopted at all layers of the DCNNs, and that the degree of sparseness increased along the hierarchy. That is, the coding scheme shifted from distributed-like coding at lower layers to local-like coding at higher layers. Further, the degree of sparseness was positively correlated with the DCNNs' performance in object categorization, suggesting that the coding scheme was related to behavioral performance. Finally, with the lesion approach, we demonstrated that both external learning experiences and built-in gating operations were necessary to construct such a hierarchical coding scheme. In sum, our study provides direct evidence that DCNNs adopt a hierarchically evolved sparse coding scheme, as the biological brain does, suggesting the possibility of an implementation-independent principle underlying object recognition.
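To make the measurement concrete, below is a minimal sketch (not the authors' code) of how a population sparseness index can be computed layer by layer in a pretrained DCNN. It assumes the commonly used Treves-Rolls / Vinje-Gallant definition of sparseness and uses AlexNet from torchvision with a random input tensor purely for illustration; whether these match the paper's exact index, models, and stimuli is an assumption.

```python
"""Sketch: per-layer population sparseness in a pretrained DCNN (illustrative only)."""
import torch
import torchvision.models as models


def population_sparseness(r: torch.Tensor) -> float:
    """Treves-Rolls sparseness of a non-negative response vector r over n units:
    S = (1 - (mean r)^2 / mean(r^2)) / (1 - 1/n).
    S -> 0 for a fully distributed code, S -> 1 for a local (single-unit) code."""
    r = r.flatten().clamp(min=0)          # post-ReLU responses are non-negative
    n = r.numel()
    mean_r = r.mean()
    mean_r2 = (r ** 2).mean()
    if mean_r2 == 0:
        return 0.0
    return float((1 - mean_r.pow(2) / mean_r2) / (1 - 1 / n))


# Pretrained model as a stand-in for the "representative DCNNs" in the abstract.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

# Hook every ReLU so each recorded "layer" is one post-activation response pattern.
activations = {}


def make_hook(name):
    def hook(_module, _inp, out):
        activations[name] = out.detach()
    return hook


for name, module in model.named_modules():
    if isinstance(module, torch.nn.ReLU):
        module.register_forward_hook(make_hook(name))

x = torch.rand(1, 3, 224, 224)            # stand-in for one object image
with torch.no_grad():
    model(x)

for name, act in activations.items():
    print(f"{name}: population sparseness = {population_sparseness(act):.3f}")
```

With real object images, averaging this index over stimuli at each layer would show whether sparseness rises along the hierarchy; running the same sketch on an untrained model (random weights) or with the ReLU gating removed would loosely mirror the abstract's lesion comparison, though the paper's actual lesion procedure may differ.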

Updated: 2020-12-09