Digital computing through randomness and order in neural networks
Proceedings of the National Academy of Sciences of the United States of America (IF 9.4). Pub Date: 2022-08-10. DOI: 10.1073/pnas.2115335119
Alexandre Pitti, Claudio Weidmann, Mathias Quoy

We propose that coding and decoding in the brain are achieved through digital computation using three principles: relative ordinal coding of inputs, random connections between neurons, and belief voting. Owing to randomization, and despite the coarseness of the relative codes, we show that these principles are sufficient for coding and decoding sequences with error-free reconstruction. In particular, the number of neurons needed grows only linearly even as the size of the input repertoire grows exponentially. We illustrate our model by reconstructing sequences with repertoires on the order of a billion items. From this, we derive Shannon equations for the capacity limit on learning and transferring information in the neural population, which we then generalize to any type of neural network. Following the maximum-entropy principle of efficient coding, we show that random connections serve to decorrelate redundant information in incoming signals, creating more compact codes for neurons and therefore conveying a larger amount of information. Hence, despite the unreliability of the relative codes, only a few neurons are needed to discriminate the original signal without error. Finally, we discuss the significance of this digital computation model with respect to neurobiological findings in the brain and, more generally, to artificial intelligence algorithms, with a view toward a neural information theory and the design of digital neural networks.
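The scaling claim follows from elementary information theory: discriminating one of M items requires log2(M) bits, and a neuron that reports only the rank order of k inputs can convey at most log2(k!) bits, so roughly log2(M) / log2(k!) neurons suffice, which is linear in log(M) even as M grows exponentially. The sketch below is a minimal, hypothetical illustration of the three named principles, not the authors' implementation; the feature map, fan-in, and all symbol names (features, ordinal_code, FAN_IN, etc.) are assumptions made for this toy example.

```python
# Hypothetical toy sketch of the three principles in the abstract:
# random connections, relative ordinal coding, and belief voting.
# Not the paper's implementation; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 64   # coding neurons; scales ~linearly in log2(repertoire size)
FAN_IN = 8       # random inputs per neuron; each code has 8! possible rankings

# Random connections: each neuron observes FAN_IN randomly weighted,
# randomly offset features of the input item.
weights = rng.standard_normal((N_NEURONS, FAN_IN))
offsets = rng.integers(0, 2**16, size=(N_NEURONS, FAN_IN))

def features(item):
    # Assumed deterministic feature of an item, one value per synapse.
    return np.sin((item + offsets) * 0.001)        # shape (N_NEURONS, FAN_IN)

def ordinal_code(item):
    # Relative ordinal coding: each neuron reports only the rank order
    # (argsort) of its weighted inputs, a coarse, relative code.
    drive = weights * features(item)
    return np.argsort(drive, axis=1)               # shape (N_NEURONS, FAN_IN)

def votes_for(code, candidates):
    # Belief voting: each neuron votes for every candidate whose ordinal
    # code matches its own observation; the majority wins.
    v = np.zeros(len(candidates), dtype=int)
    for i, c in enumerate(candidates):
        v[i] = np.sum(np.all(ordinal_code(c) == code, axis=1))
    return v

item = 123_456
code = ordinal_code(item)
candidates = np.array([42, 123_456, 654_321, 999_999])
v = votes_for(code, candidates)
print("votes:", dict(zip(candidates.tolist(), v.tolist())))
print("recovered:", candidates[int(np.argmax(v))])   # -> 123456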
