A novel unsupervised adversarial domain adaptation network for remotely sensed scene classification
International Journal of Remote Sensing (IF 3.0), Pub Date: 2020-05-31, DOI: 10.1080/01431161.2020.1736727
Wei Liu, Finlin Su

ABSTRACT High-resolution remote sensing scene classification is a widely applicable task. Owing to the diversity of natural scenes and acquisition methods, scenes of the same class often vary in texture, background, illumination and spatial resolution across different satellite images. It is therefore difficult for any remote sensing scene database to contain enough representative images, and the generalization error of traditional supervised classification methods may be too large to produce satisfactory results. Domain adaptation (DA) has been applied to image classification by reducing the feature distribution discrepancy between the source domain (where labels are available) and the target domain (where images need to be classified). In this paper, we propose an unsupervised adversarial domain adaptation method boosted by a domain confusion network (ADA-BDC), which aims to adapt images from different domains so that they appear to be drawn from the same domain and to improve the transferability of the classifier. To this end, the feature extractor in ADA-BDC brings the source and target distributions closer by training a Generative Adversarial Nets (GAN) model. A transferred classifier trained on the transferred source-domain features then achieves better classification accuracy on the target domain than a non-transferred classifier. Experiments are conducted on four remote sensing scene benchmark datasets that differ in spatial scale, resolution, land-cover pattern, etc. The results show that the proposed method improves transferability across datasets and raises the overall classification accuracy by 17.18% on average. Comparative experiments demonstrate that the proposed DA network outperforms state-of-the-art domain adaptation methods on remote sensing scene classification.
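The abstract describes a GAN-style game in which a shared feature extractor is trained to confuse a domain discriminator while a classifier is trained on labelled source features. The following PyTorch sketch illustrates that general adversarial domain-adaptation recipe only; the network sizes, optimisers and synthetic batches are placeholder assumptions, not the authors' ADA-BDC architecture, whose details are not given in the abstract.

```python
# Minimal sketch of adversarial domain adaptation with domain confusion:
# feature extractor F, scene classifier C, domain discriminator D.
# All sizes and data below are illustrative assumptions.
import torch
import torch.nn as nn

feat_dim, num_classes = 128, 10

extractor = nn.Sequential(nn.Linear(256, feat_dim), nn.ReLU())   # F
classifier = nn.Linear(feat_dim, num_classes)                    # C
discriminator = nn.Linear(feat_dim, 1)                           # D (source vs. target)

opt_fc = torch.optim.Adam(
    list(extractor.parameters()) + list(classifier.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()

for step in range(100):
    # Placeholder batches: 256-d image descriptors for both domains.
    xs = torch.randn(32, 256)                                    # labelled source images
    ys = torch.randint(0, num_classes, (32,))                    # source labels
    xt = torch.randn(32, 256)                                    # unlabelled target images

    # 1) Update D: distinguish source features (label 1) from target features (label 0).
    fs, ft = extractor(xs).detach(), extractor(xt).detach()
    d_loss = bce(discriminator(fs), torch.ones(32, 1)) + \
             bce(discriminator(ft), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Update F and C: classify source correctly while making target
    #    features look like source features to D (domain confusion).
    fs, ft = extractor(xs), extractor(xt)
    fc_loss = ce(classifier(fs), ys) + bce(discriminator(ft), torch.ones(32, 1))
    opt_fc.zero_grad()
    fc_loss.backward()
    opt_fc.step()
```

After such alternating training, classifier(extractor(xt)) plays the role of the transferred classifier that is evaluated on the target domain.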
