Sample generation based on a supervised Wasserstein Generative Adversarial Network for high-resolution remote-sensing scene classification
Information Sciences (IF 8.1), Pub Date: 2020-06-17, DOI: 10.1016/j.ins.2020.06.018
Wei Han, Lizhe Wang, Ruyi Feng, Lang Gao, Xiaodao Chen, Ze Deng, Jia Chen, Peng Liu

As high-resolution remote-sensing (HRRS) images have become increasingly widely available, scene classification, which focuses on the intelligent classification of land cover and land use, has attracted growing attention. However, mainstream methods face a severe problem: a large number of annotated samples is required to obtain an ideal scene-classification model. In the remote-sensing community, there is no dataset of a scale comparable to ImageNet (which contains over 14 million images) to meet the sample requirements of convolutional neural network (CNN)-based methods, and labeling new images is both labor-intensive and time-consuming. To address these problems, we present a new generative adversarial network (GAN)-based remote-sensing image generation method (GAN-RSIGM) that can create high-resolution annotated samples for scene classification. In GAN-RSIGM, the Wasserstein distance is used to measure the difference between the generator distribution and the real data distribution; this alleviates vanishing gradients during sample generation and drives the generator distribution toward the real data distribution. An auxiliary classifier is added to the discriminator to guide the generator toward producing consistent and distinct images. Regarding the network structure, both the discriminator and the generator are built by stacking residual blocks, which further stabilizes the training of GAN-RSIGM. Extensive experiments were conducted on two public HRRS datasets; the results demonstrate that the proposed method achieves satisfactory performance for high-quality annotated-sample generation, scene classification, and data augmentation.
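To make the training signals described above concrete, the sketch below shows how a Wasserstein critic with an auxiliary classification head and stacked residual blocks might be wired up in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: the ResBlock/Critic layout, layer sizes, the aux_weight coefficient, and the exact loss composition are all assumptions of this sketch, and the Lipschitz constraint on the critic (e.g., weight clipping or a gradient penalty) required by a Wasserstein formulation is omitted because the abstract does not specify which variant the authors use.

# Minimal sketch (assumptions noted above): critic returns a scalar Wasserstein
# score plus auxiliary class logits; losses combine the two signals.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Plain residual block; the paper stacks such blocks in both networks."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))

class Critic(nn.Module):
    """Critic with two heads: a realness score (no sigmoid) and class logits."""
    def __init__(self, num_classes, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            ResBlock(channels),
            ResBlock(channels),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.score = nn.Linear(channels, 1)                # Wasserstein score head
        self.classify = nn.Linear(channels, num_classes)   # auxiliary classifier head

    def forward(self, x):
        h = self.features(x)
        return self.score(h).squeeze(1), self.classify(h)

def critic_loss(critic, real, fake, labels, aux_weight=1.0):
    """Wasserstein critic loss plus auxiliary classification on real samples."""
    real_score, real_logits = critic(real)
    fake_score, _ = critic(fake.detach())
    w_loss = fake_score.mean() - real_score.mean()   # negative Wasserstein estimate
    aux_loss = F.cross_entropy(real_logits, labels)
    return w_loss + aux_weight * aux_loss

def generator_loss(critic, fake, labels, aux_weight=1.0):
    """Generator tries to raise the critic score and match the target class."""
    fake_score, fake_logits = critic(fake)
    return -fake_score.mean() + aux_weight * F.cross_entropy(fake_logits, labels)

In a full training loop, these two losses would alternate as in standard WGAN training, with the generator conditioned on the target class label so that the auxiliary classification term can steer it toward class-consistent samples.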




Updated: 2020-06-17