Deep Spike Learning With Local Classifiers
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2022-07-22, DOI: 10.1109/tcyb.2022.3188015
Chenxiang Ma, Rui Yan, Zhaofei Yu, Qiang Yu

Backpropagation has been successfully generalized to optimize deep spiking neural networks (SNNs); nevertheless, gradients must be propagated back through all layers, which consumes substantial computing resources and hinders the parallelization of training. Biologically motivated local learning offers an alternative for efficiently training deep networks but often suffers from low accuracy on practical tasks. How to train deep SNNs with a local learning scheme that is both efficient and accurate therefore remains an important challenge. In this study, we focus on a supervised local learning scheme in which each layer is independently optimized with an auxiliary classifier. Accordingly, we first propose an efficient spike-based local learning rule that considers only the direct dependencies at the current time step. We then propose two variants that additionally incorporate temporal dependencies through a backward and a forward process, respectively. The effectiveness and performance of our proposed methods are extensively evaluated on six mainstream datasets. Experimental results show that our methods successfully scale up to large networks and substantially outperform spike-based local learning baselines on all studied benchmarks. Our results also reveal that gradients with temporal dependencies are essential for high performance on temporal tasks, while they have a negligible effect on rate-based tasks. Our work is significant in that it brings the performance of spike-based local learning to a new level while retaining its computational benefits.
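To make the local learning scheme concrete, the sketch below shows one way such per-layer training with auxiliary classifiers could look in Python/PyTorch. It is not the authors' implementation: the names SpikeFn and LocalLIFLayer, the rectangular surrogate gradient, the LIF dynamics, and the rate-coded auxiliary logits are all illustrative assumptions. Detaching each layer's input keeps gradients local to that layer, and detaching the membrane potential between time steps loosely mirrors a "current-time-only" rule, whereas keeping it attached would be analogous to also propagating temporal dependencies.

```python
# Minimal sketch (assumed, not the paper's code) of supervised local learning
# for spiking layers: each layer has its own auxiliary classifier and loss,
# and no gradient crosses layer boundaries.
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (assumed)."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradient only near the assumed firing threshold of 1.0.
        return grad_out * ((v - 1.0).abs() < 0.5).float()


class LocalLIFLayer(nn.Module):
    """A LIF layer trained locally through its own auxiliary classifier."""

    def __init__(self, n_in, n_hidden, n_classes, tau=2.0):
        super().__init__()
        self.fc = nn.Linear(n_in, n_hidden)
        self.aux = nn.Linear(n_hidden, n_classes)  # auxiliary classifier
        self.decay = 1.0 - 1.0 / tau

    def forward(self, x_seq, detach_temporal=True):
        # x_seq: (T, batch, n_in) spike trains. Detach the input so that no
        # gradient crosses the layer boundary (the "local" part of the scheme).
        x_seq = x_seq.detach()
        v = torch.zeros(x_seq.size(1), self.fc.out_features)
        spikes, logits_sum = [], 0.0
        for x_t in x_seq:
            if detach_temporal:
                v = v.detach()  # drop temporal dependencies ("current time" rule)
            v = self.decay * v + self.fc(x_t)  # leaky integration
            s = SpikeFn.apply(v)
            v = v - s  # soft reset by the threshold (=1.0)
            spikes.append(s)
            logits_sum = logits_sum + self.aux(s)  # rate-coded auxiliary logits
        return torch.stack(spikes), logits_sum / x_seq.size(0)


if __name__ == "__main__":
    T, batch, n_in, n_classes = 20, 8, 100, 10
    layer1 = LocalLIFLayer(n_in, 64, n_classes)
    layer2 = LocalLIFLayer(64, 64, n_classes)
    opt = torch.optim.Adam(
        list(layer1.parameters()) + list(layer2.parameters()), lr=1e-3
    )
    loss_fn = nn.CrossEntropyLoss()

    x = (torch.rand(T, batch, n_in) < 0.3).float()  # toy Poisson-like input spikes
    y = torch.randint(0, n_classes, (batch,))

    s1, logits1 = layer1(x)
    s2, logits2 = layer2(s1)
    loss = loss_fn(logits1, y) + loss_fn(logits2, y)  # sum of per-layer local losses
    opt.zero_grad()
    loss.backward()  # gradients never cross layer boundaries
    opt.step()
```

Because each layer's input is detached, one layer's backward pass never depends on another layer's gradients, which is what allows layer-wise training to be parallelized in this kind of scheme.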

Updated: 2024-08-26