Lattice map spiking neural networks (LM-SNNs) for clustering and classifying image data
Annals of Mathematics and Artificial Intelligence ( IF 1.2 ) Pub Date : 2019-09-21 , DOI: 10.1007/s10472-019-09665-3
Hananel Hazan , Daniel J. Saunders , Darpan T. Sanghavi , Hava Siegelmann , Robert Kozma

Spiking neural networks (SNNs) with a lattice architecture are introduced in this work, combining several desirable properties of SNNs and self-organizing maps (SOMs). Networks are trained with biologically motivated, unsupervised learning rules to obtain a self-organized grid of filters via cooperative and competitive excitatory-inhibitory interactions. Several inhibition strategies are developed and tested, such as (i) incrementally increasing the inhibition level over the course of network training, and (ii) switching the inhibition level from low to high (two-level) after an initial training segment. During the labeling phase, the spiking activity generated by data with known labels is used to assign neurons to categories of data, which are then used to evaluate the network's classification ability on a held-out set of test data. Several biologically plausible evaluation rules are proposed and compared, including a population-level confidence rating and an n-gram-inspired method. The effectiveness of the proposed self-organized learning mechanism is tested on the MNIST benchmark dataset, as well as on images produced by playing the Atari Breakout game.
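Two of the ideas in the abstract can be illustrated with a minimal sketch: a two-level inhibition schedule that switches from weak to strong inhibition after an initial training segment, and the labeling phase, in which each excitatory neuron is assigned the class that drives it to spike most, followed by a population vote at test time. All names, thresholds, and the synthetic spike counts below are our own illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def inhibition_level(step, switch_at, low=-15.0, high=-120.0):
    """Two-level schedule (illustrative values): weak inhibition during
    the initial training segment, strong inhibition afterwards."""
    return low if step < switch_at else high

def label_neurons(spike_counts, labels, n_classes):
    """Assign each neuron the class for which it spiked most in total.

    spike_counts: (n_samples, n_neurons) spikes per neuron per example.
    labels: (n_samples,) known class of each example.
    """
    n_neurons = spike_counts.shape[1]
    per_class = np.zeros((n_classes, n_neurons))
    for c in range(n_classes):
        per_class[c] = spike_counts[labels == c].sum(axis=0)
    return per_class.argmax(axis=0)

def classify(spike_counts, neuron_labels, n_classes):
    """Population vote: sum spikes over neurons assigned to each class,
    then predict the class with the largest total."""
    votes = np.zeros((spike_counts.shape[0], n_classes))
    for c in range(n_classes):
        votes[:, c] = spike_counts[:, neuron_labels == c].sum(axis=1)
    return votes.argmax(axis=1)

# Toy demonstration with synthetic spike counts for 3 classes: each
# neuron is made to prefer one class, so the vote is recoverable.
n_classes, n_neurons, n_samples = 3, 12, 60
labels = rng.integers(0, n_classes, n_samples)
pref = np.arange(n_neurons) % n_classes          # 4 neurons per class
counts = rng.poisson(1.0, (n_samples, n_neurons))
counts += 5 * (labels[:, None] == pref[None, :])  # boost preferred class

neuron_labels = label_neurons(counts, labels, n_classes)
pred = classify(counts, neuron_labels, n_classes)
print("train accuracy:", (pred == labels).mean())
```

In the paper's actual pipeline the spike counts come from the trained lattice network rather than a synthetic generator, and the two-level schedule is one of several inhibition strategies compared; the confidence-rating and n-gram evaluation rules replace the simple argmax vote shown here.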
