P-CRITICAL: A Reservoir Autoregulation Plasticity Rule for Neuromorphic Hardware
arXiv - CS - Neural and Evolutionary Computing | Pub Date: 2020-09-11 | arXiv: 2009.05593
Ismael Balafrej and Jean Rouat

Backpropagation algorithms on recurrent artificial neural networks require an unfolding of accumulated states over time. These states must be kept in memory for an undefined, task-dependent period of time. This paper uses the reservoir computing paradigm, in which an untrained recurrent neural network layer serves as a preprocessor stage for learning from temporal and limited data. These so-called reservoirs require either extensive fine-tuning or neuroplasticity with unsupervised learning rules. We propose a new local plasticity rule named P-CRITICAL, designed for automatic reservoir tuning, that translates well to Intel's Loihi research chip, a recent neuromorphic processor. We compare our approach on well-known datasets from the machine learning community while using a spiking neuronal architecture. We observe improved performance on tasks from various modalities without the need to tune parameters. Such algorithms could be a key to end-to-end, energy-efficient, neuromorphic-based machine learning on edge devices.
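To make the reservoir-computing setup described above concrete, the sketch below shows a toy spiking reservoir: a fixed random recurrent layer of leaky integrate-and-fire neurons driven by input spikes, with a simple per-neuron homeostatic scaling of incoming recurrent weights standing in for an unsupervised local plasticity rule. This is an illustrative assumption, not the P-CRITICAL rule from the paper; all names, constants, and the scaling update are hypothetical choices made for the example.

```python
# Minimal sketch of the reservoir-computing paradigm (NOT the P-CRITICAL rule).
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, T = 16, 100, 200          # input size, reservoir size, time steps (hypothetical)
V_TH, LEAK = 1.0, 0.9                  # LIF threshold and membrane leak

W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))    # fixed, untrained input weights
W_rec = rng.normal(0.0, 0.1, (N_RES, N_RES))  # fixed random recurrent weights
np.fill_diagonal(W_rec, 0.0)

def run_reservoir(spikes_in, w_rec, plasticity=True, target_rate=0.1, eta=0.01):
    """Drive the LIF reservoir with an input spike train of shape (T, N_IN).

    If `plasticity` is True, the incoming recurrent weights of each neuron are
    scaled toward a target firing rate -- a generic local stand-in for the kind
    of unsupervised reservoir tuning the paper addresses.
    """
    v = np.zeros(N_RES)                       # membrane potentials
    s = np.zeros(N_RES)                       # spikes from the previous step
    states = np.zeros((spikes_in.shape[0], N_RES))
    w = w_rec.copy()
    for t, x in enumerate(spikes_in):
        v = LEAK * v + W_in @ x + w @ s       # leaky integration of input + recurrence
        s = (v >= V_TH).astype(float)         # spike where threshold is crossed
        v = np.where(s > 0, 0.0, v)           # reset spiking neurons
        states[t] = s
        if plasticity:                        # local, per-neuron scaling of incoming weights
            w *= (1.0 + eta * (target_rate - s))[:, None]
    return states, w

# Random spike-train input; in practice the reservoir states would feed a trained linear readout.
x_in = (rng.random((T, N_IN)) < 0.05).astype(float)
states, w_tuned = run_reservoir(x_in, W_rec)
print("mean reservoir firing rate:", states.mean())
```

Only the readout trained on the reservoir states would be learned; the recurrent layer itself stays untrained apart from the local plasticity update, which is what makes such schemes attractive for neuromorphic hardware like Loihi.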

Updated: 2020-09-15