Learning in Deep Neural Networks Using a Biologically Inspired Optimizer
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2021-04-23 , DOI: arxiv-2104.11604
Giorgia Dellaferrera, Stanislaw Wozniak, Giacomo Indiveri, Angeliki Pantazi, Evangelos Eleftheriou

Plasticity circuits in the brain are known to be influenced by the distribution of the synaptic weights through the mechanisms of synaptic integration and local regulation of synaptic strength. However, the complex interplay of stimulation-dependent plasticity with local learning signals is disregarded by most of the artificial neural network training algorithms devised so far. Here, we propose a novel biologically inspired optimizer for artificial neural networks (ANNs) and spiking neural networks (SNNs) that incorporates key principles of synaptic integration observed in dendrites of cortical neurons: GRAPES (Group Responsibility for Adjusting the Propagation of Error Signals). GRAPES implements a weight-distribution-dependent modulation of the error signal at each node of the neural network. We show that this biologically inspired mechanism leads to a systematic improvement of the convergence rate of the network, and substantially improves the classification accuracy of ANNs and SNNs with both feedforward and recurrent architectures. Furthermore, we demonstrate that GRAPES supports performance scalability for models of increasing complexity and mitigates catastrophic forgetting by enabling networks to generalize to unseen tasks based on previously acquired knowledge. The local characteristics of GRAPES minimize the required memory resources, making it optimally suited for dedicated hardware implementations. Overall, our work indicates that reconciling neurophysiology insights with machine intelligence is key to boosting the performance of neural networks.
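The abstract states that GRAPES modulates the error signal at each node according to the distribution of the synaptic weights, but it does not give the exact rule. The sketch below is a minimal illustration of that idea, assuming (hypothetically) that each hidden node's error is scaled by its share of total incoming synaptic strength, normalized so the average modulation is 1; the actual GRAPES formulation in the paper may differ.

```python
import numpy as np

def node_modulation(W):
    """Hypothetical per-node modulation factor from the incoming weight
    distribution: nodes with larger total incoming synaptic strength
    receive a proportionally larger share of the error signal.
    Normalized so the factors average to 1 across the layer."""
    strength = np.abs(W).sum(axis=0)   # total |incoming weight| per node
    return strength / strength.mean()

def modulated_backprop_step(W1, W2, x, y, lr=0.05):
    """One backprop step on a tiny 2-layer network (tanh hidden layer,
    linear output), with the node-wise modulation applied to the
    hidden-layer error signal before the weight update."""
    h = np.tanh(x @ W1)                       # hidden activations
    out = h @ W2                              # linear output
    err_out = out - y                         # output-layer error
    # Standard backpropagated hidden error ...
    err_h = (err_out @ W2.T) * (1.0 - h**2)
    # ... scaled node-by-node according to the weight distribution.
    err_h *= node_modulation(W1)
    W2 -= lr * np.outer(h, err_out)
    W1 -= lr * np.outer(x, err_h)
    return W1, W2
```

Since the modulation factors are positive, this is a per-node rescaling of the gradient, which preserves a descent direction for small enough learning rates while redistributing how strongly the error signal acts on each node.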

Updated: 2021-04-26