Hebbian semi-supervised learning in a sample efficiency setting
Neural Networks (IF 7.8), Pub Date: 2021-08-13, DOI: 10.1016/j.neunet.2021.08.003
Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato

We propose to address the issue of sample efficiency in Deep Convolutional Neural Networks (DCNNs) with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning, while the last fully connected layer (the classification layer) is trained using Stochastic Gradient Descent (SGD). Since Hebbian learning is an unsupervised method, its potential lies in the possibility of training the internal layers of a DCNN without labels; only the final fully connected layer has to be trained on labeled examples.
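To make the two-stage strategy concrete, below is a minimal Python/PyTorch sketch: a single hidden layer stands in for the internal layers and is pre-trained with an Oja-style Hebbian update (one classic, stability-preserving form of Hebbian learning; the paper's specific Hebbian variant may differ), after which only the final classification layer is trained with SGD on the labeled subset. All shapes, learning rates, and data are illustrative assumptions, not values from the paper.

import torch

torch.manual_seed(0)

# Toy stand-ins: unlabeled data for the Hebbian stage,
# a small labeled set for the SGD stage (all shapes are illustrative).
x_unlabeled = torch.randn(1000, 64)
x_labeled = torch.randn(100, 64)
y_labeled = torch.randint(0, 10, (100,))

# A single "internal" layer stands in for the pre-trained internal layers.
W = torch.randn(32, 64) * 0.1

# Stage 1: unsupervised Hebbian pre-training. Oja's rule,
# delta_w_i = eta * (y_i * x - y_i^2 * w_i), is one classic Hebbian
# update that keeps the weights bounded; no labels are used here.
eta = 0.01
for epoch in range(5):
    for x in x_unlabeled:
        y = W @ x  # post-synaptic activations
        W += eta * (torch.outer(y, x) - (y ** 2).unsqueeze(1) * W)

# Stage 2: supervised training of the final classification layer with
# SGD, using only the small labeled set; the Hebbian features are frozen.
classifier = torch.nn.Linear(32, 10)
opt = torch.optim.SGD(classifier.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
for epoch in range(20):
    feats = (x_labeled @ W.t()).relu()  # features from the Hebbian layer
    loss = loss_fn(classifier(feats), y_labeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

Note that only Stage 2 consumes labels, which is what allows the approach to exploit large unlabeled pools in low-label regimes.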

We performed experiments on various object recognition datasets, in different sample efficiency regimes, comparing our semi-supervised approach (Hebbian for the internal layers + SGD for the final fully connected layer) with end-to-end supervised backprop training and with semi-supervised learning based on a Variational Auto-Encoder (VAE). The results show that, in regimes where the number of available labeled samples is low, our semi-supervised approach outperforms the others in almost all cases.



