Sparse coding with a somato-dendritic rule.
Neural Networks (IF 7.8), Pub Date: 2020-06-26, DOI: 10.1016/j.neunet.2020.06.007
Damien Drix, Verena V. Hafner, Michael Schmuker

Cortical neurons are silent most of the time: sparse activity enables low-energy computation in the brain, and promises to do the same in neuromorphic hardware. Beyond power efficiency, sparse codes have favourable properties for associative learning, as they can store more information than local codes but are easier to read out than dense codes. Auto-encoders with a sparse constraint can learn sparse codes, and so can single-layer networks that combine recurrent inhibition with unsupervised Hebbian learning. But the latter usually require fast homeostatic plasticity, which could lead to catastrophic forgetting in embodied agents that learn continuously. Here we set out to explore whether plasticity at recurrent inhibitory synapses could take up that role instead, regulating both the population sparseness and the firing rates of individual neurons. We put the idea to the test in a network that employs compartmentalised inputs to solve the task: rate-based dendritic compartments integrate the feedforward input, while spiking integrate-and-fire somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic homeostatic plasticity is not strictly required for regulating sparseness: inhibitory synaptic plasticity can have the same effect. Our work illustrates the usefulness of compartmentalised inputs, and makes the case for moving beyond point neuron models in artificial spiking neural networks.
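The abstract describes the architecture only at a high level; the full model is specified in the paper. As a rough illustration, the sketch below renders the described ingredients in plain NumPy: rate-based dendritic compartments, leaky integrate-and-fire somas competing through recurrent inhibition, an inhibitory plasticity rule that steers each neuron toward a target rate (a Vogels-style stand-in), and a somatic-spike-gated Hebbian update in the dendrites (an Oja-style stand-in). Every name, constant, and rule variant here (n_in, n_out, tau_v, rho, present, and both learning rules) is an assumption made for illustration, not the authors' published formulation.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 256, 64            # input size and number of coding neurons (arbitrary)
dt, tau_v = 1.0, 20.0            # integration step and somatic time constant (ms)
v_thresh, v_reset = 1.0, 0.0     # integrate-and-fire threshold and reset
eta_ff, eta_inh = 1e-3, 1e-2     # dendritic / inhibitory learning rates (assumed)
rho = 0.05                       # target activity per neuron, sets the sparseness

W = rng.normal(0.0, 0.1, (n_out, n_in))   # feedforward weights onto the dendrites
M = np.zeros((n_out, n_out))              # recurrent inhibitory weights, M[i, j] >= 0

def present(x, T=50):
    """Present one input pattern for T steps; plasticity runs online."""
    global W, M
    v = np.zeros(n_out)                    # somatic membrane potentials
    rate = np.zeros(n_out)                 # low-pass filtered somatic spike trains
    g = np.maximum(W @ x, 0.0)             # rate-based dendritic compartments
    for _ in range(T):
        # Somatic LIF dynamics: dendritic drive minus recurrent inhibition.
        v += (dt / tau_v) * (-v + g - M @ rate)
        spikes = (v >= v_thresh).astype(float)
        v = np.where(spikes > 0, v_reset, v)
        rate += 0.1 * (spikes - rate)
        # Inhibitory synaptic plasticity (Vogels-style stand-in): strengthen
        # inhibition onto neurons firing above the target rate rho, weaken it
        # below, regulating firing rates and population sparseness without
        # intrinsic homeostatic plasticity.
        M = np.maximum(M + eta_inh * np.outer(rate - rho, rate), 0.0)
        np.fill_diagonal(M, 0.0)
        # Somato-dendritic rule (Oja-style stand-in): somatic spiking, already
        # shaped by inhibition, gates a bounded Hebbian update in the dendrite.
        W += eta_ff * spikes[:, None] * (x[None, :] - W)
    return rate

# Toy usage: nonnegative random patterns stand in for MNIST digits or patches.
for _ in range(1000):
    present(np.maximum(rng.normal(0.0, 1.0, n_in), 0.0))

Note the division of labour this sketch tries to capture: the gating signal for the dendritic update is the inhibited somatic spike train rather than the raw dendritic drive, so recurrent inhibition shapes what each dendrite learns. This is the mechanism the abstract credits for regulating sparseness without fast intrinsic homeostatic plasticity.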



Updated: 2020-08-01