Neurally plausible mechanisms for learning selective and invariant representations.
The Journal of Mathematical Neuroscience (IF 2.3), Pub Date: 2020-08-18, DOI: 10.1186/s13408-020-00088-7
Fabio Anselmi 1, 2, 3 , Ankit Patel 1, 4 , Lorenzo Rosasco 2

Coding for visual stimuli in the ventral stream is known to be invariant to object-identity-preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance-invariant representations. Recently, artificial convolutional networks have succeeded in both learning such invariant properties and, surprisingly, predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable such success—supervised learning and the backpropagation algorithm—are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning, and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of a simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.
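
The abstract's core claim—that a simple-cell/complex-cell architecture with templates learned by an unsupervised (e.g., Hebbian) rule yields signatures invariant to a finite orthogonal group—can be illustrated with a minimal numerical sketch. The code below assumes a toy setting of our own choosing (cyclic shifts of a 16-dimensional vector as the finite orthogonal group, Oja's rule as the Hebbian learner, absolute-value rectification with mean pooling); it illustrates the general mechanism, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Finite orthogonal group: all cyclic shifts of R^d (permutation matrices).
group = [np.roll(np.eye(d), k, axis=0) for k in range(d)]

# --- Unsupervised (Hebbian/Oja) learning of a template from orbit data ---
# Training stimuli are presented under every group transformation.
X = rng.standard_normal((200, d))
orbit_data = np.concatenate([X @ g.T for g in group])

t = rng.standard_normal(d)
t /= np.linalg.norm(t)
eta = 1e-3
for x in rng.permutation(orbit_data):
    y = t @ x
    t += eta * y * (x - y * t)   # Oja's rule: Hebbian term plus normalization
t /= np.linalg.norm(t)

# --- Simple/complex cell signature ---
def signature(x, t, group, nonlin=np.abs):
    """Simple cells: <x, g t> for every g in the group.
    Complex cell: pool (average) the rectified responses over the group."""
    simple = np.array([x @ (g @ t) for g in group])
    return nonlin(simple).mean()

# --- Invariance check: transforming the stimulus leaves the signature unchanged ---
x = rng.standard_normal(d)
vals = [signature(g @ x, t, group) for g in group]
print(np.ptp(vals))   # ~0 up to floating-point error
```

Because the complex cell pools simple-cell responses over the entire group orbit of the template, transforming the stimulus merely permutes the pooled terms, so the signature is exactly invariant up to floating-point error; selectivity comes from the template itself, which here is learned by Oja's rule but could be learned by any rule in the broad class the paper considers.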

Updated: 2020-08-19