An Interclass Margin Maximization Learning Algorithm for Evolving Spiking Neural Network
IEEE Transactions on Cybernetics (IF 9.4) Pub Date: 2018-01-23, DOI: 10.1109/tcyb.2018.2791282
Shirin Dora , Suresh Sundaram , Narasimhan Sundararajan

This paper presents a new learning algorithm developed for a three-layered spiking neural network for pattern classification problems. The learning algorithm maximizes the interclass margin and is referred to as the two-stage margin maximization spiking neural network (TMM-SNN). In the structure learning stage, the learning algorithm completely evolves the hidden layer neurons in the first epoch. Further, TMM-SNN updates the weights of the hidden neurons for multiple epochs using the newly developed normalized membrane potential learning rule such that the interclass margins (based on the response of hidden neurons) are maximized. The normalized membrane potential learning rule considers both the local information in the spike train generated by a presynaptic neuron and the existing knowledge (synaptic weights) stored in the network to update the synaptic weights. After the first stage, the number of hidden neurons and their parameters are not updated. In the output weights learning stage, TMM-SNN updates the weights of the output layer neurons for multiple epochs to maximize the interclass margins (based on the response of output neurons). Performance of TMM-SNN is evaluated using ten benchmark data sets from the UCI machine learning repository. Statistical performance comparison of TMM-SNN with other existing learning algorithms for SNNs is conducted using the nonparametric Friedman test followed by a pairwise comparison using Fisher's least significant difference method. The results clearly indicate that TMM-SNN achieves better generalization performance than the other algorithms.
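The evaluation protocol described above — a nonparametric Friedman test over multiple data sets, followed by pairwise comparisons — can be sketched with SciPy. The accuracy figures below are invented placeholders, not results reported in the paper; the sketch only illustrates how the Friedman statistic and the average ranks used by follow-up tests such as Fisher's LSD are computed.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical test accuracies (%) of four SNN learning algorithms on six
# benchmark data sets (rows = data sets, columns = algorithms).
# These numbers are illustrative only, not results from the paper.
acc = np.array([
    [96.1, 94.2, 93.5, 95.0],
    [88.7, 86.9, 85.1, 87.8],
    [91.3, 90.5, 89.8, 90.9],
    [97.4, 95.8, 94.9, 96.6],
    [84.2, 83.0, 81.7, 83.9],
    [92.8, 91.1, 90.4, 92.0],
])

# Friedman test: do the algorithms differ across data sets?
stat, p = friedmanchisquare(*acc.T)

# Average rank of each algorithm (rank 1 = best accuracy on a data set).
# Pairwise follow-up tests such as Fisher's LSD then compare differences
# between these average ranks against a critical difference.
ranks = rankdata(-acc, axis=1)   # higher accuracy -> lower (better) rank
avg_ranks = ranks.mean(axis=0)
```

With the placeholder data, the first algorithm ranks best on every data set, so its average rank is 1.0 and the Friedman test rejects the hypothesis that all algorithms perform alike.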

Updated: 2024-08-22