Learning probabilistic neural representations with randomly connected circuits.
Proceedings of the National Academy of Sciences of the United States of America ( IF 9.4 ) Pub Date : 2020-10-06 , DOI: 10.1073/pnas.1912804117
Ori Maoz 1, 2 , Gašper Tkačik 3 , Mohamad Saleh Esteki 4 , Roozbeh Kiani 5, 6, 7 , Elad Schneidman 8
The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model’s performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation.
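The abstract describes a model that scores spike patterns via sparse, random projections followed by a threshold nonlinearity, with likelihoods learned by a simple moment-matching rule. The following is a minimal illustrative sketch of that idea, not the paper's actual implementation: the in-degree `K`, the Metropolis sampler, the learning rate, and all variable names are assumptions made for illustration. It fits an energy-based model `P(x) ∝ exp(-λ·h(x))`, where each feature `h_i(x)` is a thresholded sparse random projection of the binary spike pattern `x`.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20   # number of neurons (binary spike pattern length)
M = 50   # number of random projections
K = 5    # in-degree of each projection (sparse connectivity)

# Sparse random projection weights: each row connects to K random neurons
A = np.zeros((M, N))
for i in range(M):
    idx = rng.choice(N, size=K, replace=False)
    A[i, idx] = rng.normal(size=K)
theta = rng.normal(size=M)  # random thresholds

def features(x):
    # h_i(x) = step(A_i . x - theta_i): threshold-nonlinear random projection
    return (A @ x > theta).astype(float)

def energy(x, lam):
    # E(x) = lambda . h(x); log-likelihood of x is -E(x) - log Z
    return lam @ features(x)

def sample_feature_means(lam, n_samples=300, burn=100):
    # Metropolis sampler over binary patterns to estimate <h>_model
    x = np.zeros(N)
    feats = np.zeros(M)
    for t in range(burn + n_samples):
        j = rng.integers(N)
        x_new = x.copy()
        x_new[j] = 1.0 - x_new[j]          # propose a single bit flip
        dE = energy(x_new, lam) - energy(x, lam)
        if rng.random() < np.exp(-dE):
            x = x_new
        if t >= burn:
            feats += features(x)
    return feats / n_samples

# Toy "data": i.i.d. sparse binary patterns standing in for recorded spikes
data = (rng.random((500, N)) < 0.2).astype(float)
emp_means = np.mean([features(x) for x in data], axis=0)

# Moment matching: ascend the log-likelihood gradient
#   d logL / d lambda_i = <h_i>_model - <h_i>_data
lam = np.zeros(M)
for step in range(10):
    model_means = sample_feature_means(lam)
    lam += 0.5 * (model_means - emp_means)
```

Note that the update for each `lam[i]` depends only on the statistics of its own feature unit, which is what makes a local, biologically plausible learning rule conceivable; the Metropolis sampler here stands in for the intrinsic circuit noise the abstract mentions.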




Updated: 2020-10-07