Necessary conditions for STDP-based pattern recognition learning in a memristive spiking neural network
Neural Networks (IF 7.8), Pub Date: 2020-11-27, DOI: 10.1016/j.neunet.2020.11.005
V.A. Demin, D.V. Nekhaev, I.A. Surazhevsky, K.E. Nikiruy, A.V. Emelyanov, S.N. Nikolaev, V.V. Rylkov, M.V. Kovalchuk

This work studies experimental and theoretical approaches to finding effective local training rules for unsupervised pattern recognition in high-performance memristor-based Spiking Neural Networks (SNNs). First, the possibility of weight change via Spike-Timing-Dependent Plasticity (STDP) is demonstrated with a pair of hardware analog neurons connected through a (CoFeB)x(LiNbO3)1−x nanocomposite memristor. Next, learning convergence to a solution of a binary clusterization task is analyzed over a wide range of memristive STDP parameters for a single-layer, fully connected feedforward SNN. The memristive STDP behavior that yields convergence in this simple task is shown to also provide it in the handwritten digit recognition domain for a more complex SNN architecture with Winner-Take-All competition between neurons. To investigate the basic conditions necessary for training convergence, an original probabilistic generative model of a rate-based single-layer network with independent or competing neurons is built and thoroughly analyzed. The main result is a statement of the "correlation growth-anticorrelation decay" principle, which prompts a near-optimal policy for configuring model parameters. This principle is consistent with requiring binary clusterization convergence, which can be taken as a necessary condition for optimal learning and used as a simple benchmark for tuning the parameters of various neural network realizations with population-rate information coding. Finally, a heuristic algorithm is described for experimentally finding the convergence conditions in a memristive SNN, including robustness to device variability. Owing to the generality of the proposed approach, it can be applied to a wide range of memristors and neurons in software- or hardware-based rate-coding single-layer SNNs when searching for local rules that ensure unsupervised learning convergence in a pattern recognition task domain.
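To make the two central ingredients of the abstract concrete, the following minimal Python sketch shows (a) a standard pairwise STDP window and (b) a toy rate-coded single-layer network with Winner-Take-All competition trained on a binary clusterization task. All parameter values, function names, and the soft-bound update used here are illustrative assumptions for demonstration; they are not the paper's memristive device model or the authors' exact algorithm.

import numpy as np

# Assumed, illustrative parameters (not taken from the paper).
A_PLUS, A_MINUS = 0.05, 0.055     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # STDP time constants (ms)
W_MIN, W_MAX = 0.0, 1.0           # hard weight bounds, mimicking memristor limits

def stdp_dw(dt):
    """Pairwise STDP window; dt = t_post - t_pre in ms.
    Pre-before-post (dt >= 0) potentiates, post-before-pre depresses."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

def train_wta(patterns, n_out, epochs=10, seed=0):
    """Toy single-layer rate-coded network: only the winning output neuron
    updates its weights. Active inputs grow toward W_MAX ("correlation
    growth"); silent inputs decay toward W_MIN ("anticorrelation decay")."""
    rng = np.random.default_rng(seed)
    n_in = patterns.shape[1]
    w = rng.uniform(0.3, 0.7, size=(n_out, n_in))
    for _ in range(epochs):
        for x in patterns:                  # x: input rates in [0, 1]
            winner = int(np.argmax(w @ x))  # Winner-Take-All competition
            dw = np.where(x > 0,
                          A_PLUS * (W_MAX - w[winner]),  # potentiate active inputs
                          -A_MINUS * w[winner])          # depress inactive inputs
            w[winner] = np.clip(w[winner] + dw, W_MIN, W_MAX)
    return w

# Binary clusterization toy task: two non-overlapping 4-bit prototypes.
patterns = np.array([[1, 1, 0, 0],
                     [0, 0, 1, 1]], dtype=float)
w = train_wta(patterns, n_out=2, epochs=50)
print(np.round(w, 2))  # each output row specializes toward one prototype

Under this sketch, the balance between the potentiation amplitude acting on active inputs and the depression amplitude acting on inactive ones is what decides whether the two output neurons separate the prototypes, which is the kind of parameter-dependence the paper's convergence analysis examines.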



Updated: 2020-12-05