Learning image features with fewer labels using a semi-supervised deep convolutional network.
Neural Networks (IF 6.0) Pub Date: 2020-08-25, DOI: 10.1016/j.neunet.2020.08.016
Fernando P. Dos Santos, Cemre Zor, Josef Kittler, Moacir A. Ponti

Learning feature embeddings for pattern recognition is a relevant task for many applications. Deep learning methods such as convolutional neural networks can be employed for this task with different training strategies: leveraging pre-trained models as baselines; training from scratch on the target dataset; or fine-tuning from a pre-trained model. Although there are separate systems for learning features from labelled and from unlabelled data, few models combine all available information. Therefore, in this paper, we present a novel semi-supervised deep network training strategy that comprises a convolutional network and an autoencoder trained with a joint classification and reconstruction loss function. We show that our network improves the learned feature embedding when the unlabelled data are included in the training process. The feature embedding obtained by our network achieves better classification accuracy than competing methods, as well as offering good generalisation in the context of transfer learning. Furthermore, the proposed network ensemble and loss function are highly extensible and applicable to many recognition tasks.
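The core idea of the joint loss can be sketched as follows: cross-entropy is applied only to the labelled samples, while the reconstruction error is applied to every sample, so unlabelled data still shape the embedding. This is a minimal NumPy sketch, not the authors' implementation; the function name `joint_loss`, the weight `lam`, and the convention of marking unlabelled samples with label -1 are all illustrative assumptions.

```python
import numpy as np

def joint_loss(logits, labels, reconstructions, inputs, lam=1.0):
    """Joint classification + reconstruction loss (illustrative sketch).

    Cross-entropy is computed only over labelled samples (label >= 0);
    mean-squared reconstruction error uses every sample, so unlabelled
    data still contribute to learning the feature embedding.
    """
    labelled = labels >= 0
    ce = 0.0
    if labelled.any():
        # Numerically stable log-softmax over the labelled subset.
        z = logits[labelled] - logits[labelled].max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        ce = -log_probs[np.arange(labelled.sum()), labels[labelled]].mean()
    # Reconstruction MSE over all samples, labelled or not.
    mse = np.mean((reconstructions - inputs) ** 2)
    return ce + lam * mse
```

With a fully unlabelled batch the classification term vanishes and the loss reduces to the weighted reconstruction error, which is exactly what lets the autoencoder branch exploit the extra data.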




Updated: 2020-08-30