An Adaptive Locally Connected Neuron Model: Focusing Neuron
Neurocomputing (IF 5.5), Pub Date: 2021-01-01, DOI: 10.1016/j.neucom.2020.08.008
F. Boray Tek

This paper presents a new artificial neuron model capable of learning its receptive field in the topological domain of its inputs. The model provides adaptive and differentiable local connectivity (plasticity) applicable to any domain. It requires no tool other than the backpropagation algorithm to learn its parameters, which control the receptive field locations and apertures. This research explores whether this ability makes the neuron focus on informative inputs and yields any advantage over fully connected neurons. The experiments include tests of focusing-neuron networks with one or two hidden layers on synthetic and well-known image recognition data sets. The results demonstrate that focusing neurons can move their receptive fields towards more informative inputs. In simple two-hidden-layer networks, the focusing layers outperformed dense layers in the classification of the 2D spatial data sets. Moreover, the focusing networks performed better than the dense networks even when 70% of the weights were pruned. Tests on convolutional networks revealed that using focusing layers instead of dense layers for the classification of convolutional features can work better on some data sets.
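The abstract describes neurons whose receptive-field centers and apertures are differentiable parameters learned by backpropagation. A minimal sketch of that idea is below, using a Gaussian window over a 1-D input topology as the focus function; the window's center (`mu`) and aperture (`sigma`) are illustrative assumptions, and the paper's actual focus function, initialization, and normalization may differ.

```python
import numpy as np

class FocusingNeuron:
    """Sketch of a neuron with a learnable receptive-field focus over a
    1-D input topology. A Gaussian window gates the inputs before the
    weighted sum; because the window is differentiable in its center
    (mu) and aperture (sigma), both can be trained by backpropagation
    alongside the weights. This is an illustration, not the paper's
    exact formulation."""

    def __init__(self, n_inputs, mu=0.5, sigma=0.25, seed=0):
        rng = np.random.default_rng(seed)
        # Normalized positions of the inputs in the topological domain.
        self.pos = np.linspace(0.0, 1.0, n_inputs)
        self.mu = mu        # receptive-field center (learnable)
        self.sigma = sigma  # receptive-field aperture (learnable)
        self.w = rng.standard_normal(n_inputs) * 0.1  # connection weights

    def focus(self):
        # Differentiable Gaussian window over the input positions;
        # peaks at mu, width controlled by sigma.
        return np.exp(-0.5 * ((self.pos - self.mu) / self.sigma) ** 2)

    def forward(self, x):
        # Inputs are gated by the focus window before the weighted sum,
        # so gradients w.r.t. mu and sigma can shift and rescale the
        # effective receptive field during training.
        return float(np.dot(self.w * self.focus(), x))
```

In a full model, gradient descent on `mu` and `sigma` is what lets each neuron drift its window towards the informative inputs, which is the behavior the experiments in the paper examine.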
