Contextual Integration in Cortical and Convolutional Neural Networks
Frontiers in Computational Neuroscience (IF 2.1) Pub Date: 2020-04-23, DOI: 10.3389/fncom.2020.00031
Ramakrishnan Iyer, Brian Hu, Stefan Mihalas

It has been suggested that neurons can represent sensory input using probability distributions and that neural circuits can perform probabilistic inference. Lateral connections between neurons have been shown to have non-random connectivity and to modulate responses to stimuli within the classical receptive field. Large-scale efforts mapping local cortical connectivity describe cell type specific connections from inhibitory neurons and like-to-like connectivity between excitatory neurons. To relate the observed connectivity to computations, we propose a neuronal network model that approximates Bayesian inference of the probability of different features being present at different image locations. We show that the lateral connections between excitatory neurons in a circuit implementing contextual integration in this framework should depend on correlations between unit activities, minus a global inhibitory drive. The model naturally suggests the need for two types of inhibitory gates (normalization, surround inhibition). First, using natural scene statistics and classical receptive fields corresponding to simple cells parameterized with data from mouse primary visual cortex, we show that the predicted connectivity qualitatively matches that measured in mouse cortex: neurons with similar orientation tuning have stronger connectivity, and both excitatory and inhibitory connectivity have a modest spatial extent, comparable to that observed in mouse visual cortex. We then incorporate lateral connections learned using this model into convolutional neural networks. Features are defined by supervised learning on the task, and the lateral connections provide an unsupervised learning of feature context in multiple layers. Since the lateral connections provide contextual information when the feedforward input is locally corrupted, we show that incorporating such lateral connections into convolutional neural networks makes them more robust to noise and leads to better performance on noisy versions of the MNIST dataset. Decomposing the predicted lateral connectivity matrices into low-rank and sparse components introduces additional cell types into these networks. We explore the effects of cell-type specific perturbations on network computation. Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding the computations implemented by feedforward connections when the input is unreliable, demonstrating the potential usefulness of combining supervised and unsupervised learning techniques in real-world vision tasks.
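To make the core mechanism concrete, the following is a minimal sketch (not the authors' code) of lateral weights derived from unit co-activation statistics minus a global inhibitory drive, then used to contextually restore locally corrupted feedforward responses. The function names (learn_lateral_weights, contextual_update) and parameters (alpha, steps) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def learn_lateral_weights(activity, global_inhibition=None):
    """Estimate lateral weights from unit activity correlations.

    activity: (n_samples, n_units) array of feedforward responses,
              e.g. simple-cell-like filter outputs on natural images.
    Returns an (n_units, n_units) lateral weight matrix with a mean
    co-activation term subtracted as a global inhibitory drive.
    """
    a = activity - activity.mean(axis=0, keepdims=True)
    cov = (a.T @ a) / len(a)                    # unit-unit covariance
    if global_inhibition is None:
        global_inhibition = cov.mean()          # crude global inhibitory drive
    W = cov - global_inhibition
    np.fill_diagonal(W, 0.0)                    # no self-connections
    return W

def contextual_update(response, W, alpha=0.1, steps=5):
    """Relax a noisy response toward values consistent with lateral context."""
    r = response.copy()
    for _ in range(steps):
        r = (1 - alpha) * r + alpha * np.maximum(W @ r, 0.0)
    return r

# Usage: learn W from clean activity, then apply it to a corrupted response.
rng = np.random.default_rng(0)
clean = rng.standard_normal((1000, 64))         # stand-in for filter responses
W = learn_lateral_weights(clean)
noisy = clean[0] + rng.standard_normal(64)      # locally corrupted input
restored = contextual_update(noisy, W)
```

In the paper the features themselves come from supervised training of the convolutional layers, while a correlation-based rule of this kind supplies the lateral context unsupervised; the sketch above only shows the unsupervised half on synthetic activity.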

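The abstract also mentions splitting the predicted lateral connectivity into low-rank and sparse components, which is interpreted as introducing additional cell types. Below is a hypothetical illustration of such a split using a simple truncated-SVD-plus-threshold heuristic rather than a full robust-PCA solver; the rank and threshold values are arbitrary assumptions.

```python
import numpy as np

def low_rank_plus_sparse(W, rank=2, threshold=0.05):
    """Split W into an approximate low-rank component L and a sparse residual S."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # rank-limited part
    R = W - L                                        # residual
    S = np.where(np.abs(R) > threshold, R, 0.0)      # keep only large residuals
    return L, S

# Example: W could be the lateral weight matrix learned in the previous sketch.
W = np.random.default_rng(1).standard_normal((64, 64)) * 0.1
L, S = low_rank_plus_sparse(W)
print("sparse fraction:", np.mean(S != 0))
```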
Updated: 2020-04-23