Few-Shot Learning in Spiking Neural Networks by Multi-Timescale Optimization
Neural Computation (IF 2.7). Pub Date: 2021-08-19. DOI: 10.1162/neco_a_01423
Runhao Jiang 1, Jie Zhang 1, Rui Yan 2, Huajin Tang 3

Learning new concepts rapidly from a few examples is an open issue in spike-based machine learning. Such few-shot learning poses substantial challenges for current learning methods of spiking neural networks (SNNs) because of the lack of task-related prior knowledge. The recent learning-to-learn (L2L) approach allows SNNs to acquire prior knowledge through example-level learning and task-level optimization. However, existing L2L-based frameworks do not target neural dynamics (i.e., neuronal and synaptic parameter changes) on different timescales. This diversity of temporal dynamics is an important attribute of spike-based learning: it enables networks to acquire knowledge rapidly from very few examples and to integrate this knowledge gradually. In this work, we consider neural dynamics on various timescales and propose a multi-timescale optimization (MTSO) framework for SNNs. The framework introduces an adaptively gated LSTM to accommodate two different timescales of neural dynamics: short-term learning and long-term evolution. Short-term learning is a fast knowledge-acquisition process realized by a novel surrogate gradient online learning (SGOL) algorithm, in which the LSTM guides the gradient updates of the SNN on a short timescale through an adaptive learning rate and weight-decay gating. Long-term evolution aims to slowly integrate the acquired knowledge and form a prior, which is achieved by optimizing the LSTM guidance process so that it tunes the SNN parameters on a long timescale. Experimental results demonstrate that this collaborative optimization of multi-timescale neural dynamics enables SNNs to achieve promising performance on few-shot learning tasks.
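The short-term update described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the surrogate derivative, the gating function, and all variable names (`eta_t`, `lambda_t`, `gated_update`) are assumptions standing in for the paper's adaptively gated LSTM, which would emit the per-step learning rate and weight-decay gate.

```python
import numpy as np

def surrogate_spike_grad(v, theta=1.0, alpha=2.0):
    """Surrogate derivative of the non-differentiable spike function.
    A triangular window around the firing threshold theta is one common
    choice in surrogate-gradient SNN training (illustrative here)."""
    return alpha * np.maximum(0.0, 1.0 - np.abs(v - theta))

def gated_update(w, grad, eta_t, lambda_t):
    """One short-term learning step: gradient descent with an adaptive
    learning rate eta_t and a gated weight-decay term lambda_t. In MTSO
    both gates would be produced by the meta-learner LSTM each step."""
    return w - eta_t * grad - lambda_t * w

# Toy example: three synapses onto one spiking neuron.
v = np.array([0.4, 0.9, 1.3])          # membrane potentials
err = np.array([0.1, -0.2, 0.05])      # upstream error signal
grad = err * surrogate_spike_grad(v)   # surrogate gradient

w = np.array([0.5, 0.5, 0.5])
eta_t, lambda_t = 0.1, 0.01            # stand-ins for LSTM-emitted gates
w_new = gated_update(w, grad, eta_t, lambda_t)
print(w_new)                           # updated weights after one step
```

Long-term evolution would then correspond to optimizing the parameters of the gating network itself across many such tasks, so that the emitted `eta_t` and `lambda_t` encode the task-level prior.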




Updated: 2021-09-12