Classification of High Spatial Resolution Remote Sensing Scenes Method using Transfer Learning and Deep Convolutional Neural Network
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (IF 5.5) Pub Date: 2020-01-01, DOI: 10.1109/jstars.2020.2988477
Wenmei Li, Ziteng Wang, Yu Wang, Jiaqi Wu, Juan Wang, Yan Jia, Guan Gui

The deep convolutional neural network (DeCNN) is considered one of the most promising techniques for classifying high-spatial-resolution remote sensing (HSRRS) scenes, owing to its powerful feature extraction capability. It is well known that large, high-quality labeled datasets are required during DeCNN training to achieve good classification performance and prevent overfitting. However, the scarcity of such datasets limits the applications of DeCNN. To address this problem, this article proposes an HSRRS image scene classification method based on transfer learning and a DeCNN (TL-DeCNN) for few-shot HSRRS scene samples. Specifically, the convolutional-layer weights of three typical DeCNNs (VGG19, ResNet50, and InceptionV3) pre-trained on ImageNet2015 are transferred to the corresponding TL-DeCNN models. TL-DeCNN then only needs to fine-tune its classification module on the few-shot HSRRS scene samples for a few epochs. Experimental results indicate that the proposed TL-DeCNN method clearly outperforms VGG19, ResNet50, and InceptionV3 trained directly on the few-shot samples, without overfitting.
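The following is a minimal sketch of this kind of transfer-learning setup in tf.keras, shown here for the VGG19 backbone only. The dataset directory layout, image size, class count, head architecture, and training hyperparameters are illustrative assumptions, not values reported in the paper.

```python
# Sketch: transfer ImageNet-pre-trained convolutional weights, then fine-tune
# only a new classification module on a small HSRRS scene dataset.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10         # assumed number of HSRRS scene categories
IMAGE_SIZE = (224, 224)  # VGG19's default ImageNet input size

# 1. Load VGG19 pre-trained on ImageNet, drop its original classifier,
#    and freeze the transferred convolutional weights.
backbone = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=IMAGE_SIZE + (3,)
)
backbone.trainable = False

# 2. Attach a new classification module to be fine-tuned on the few-shot samples.
inputs = tf.keras.Input(shape=IMAGE_SIZE + (3,))
x = tf.keras.applications.vgg19.preprocess_input(inputs)
x = backbone(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# 3. Fine-tune the classification head for a few epochs on the small labeled
#    dataset ("hsrrs_few_shot/..." is a hypothetical directory layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "hsrrs_few_shot/train", image_size=IMAGE_SIZE, batch_size=16
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "hsrrs_few_shot/val", image_size=IMAGE_SIZE, batch_size=16
)
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

The same pattern applies to ResNet50 and InceptionV3 by swapping the backbone and its matching preprocess_input function; only the small classification head is trained, which is what keeps the few-shot fine-tuning fast and resistant to overfitting.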

Last updated: 2020-01-01