Evaluating generative adversarial networks based image-level domain transfer for multi-source remote sensing image segmentation and object detection
International Journal of Remote Sensing ( IF 3.4 ) Pub Date : 2020-07-07 , DOI: 10.1080/01431161.2020.1757782
Xue Li 1 , Muying Luo 2 , Shunping Ji 2 , Li Zhang 1 , Meng Lu 3

ABSTRACT The appearance and quality of remote sensing images are affected by atmospheric conditions, sensor quality, and radiometric calibration. This heavily challenges the generalization ability of deep learning and other machine learning models: the performance of a model pretrained on a source remote sensing data set can decrease significantly when it is applied to a different target data set. Generative adversarial networks (GANs) can realize style or appearance transfer between source and target data sets, which may boost the performance of a deep learning model by generating new target images that resemble the source samples. In this study, we comprehensively evaluate GAN-based image-level transfer methods on convolutional neural network (CNN) based image processing models that are trained on one data set and tested on another. First, we designed a framework for the evaluation process. The framework consists of two main parts: GAN-based image-level domain adaptation, which transfers a target image to a new image whose probability distribution resembles that of the source image space, and CNN-based image processing tasks, which are used to test the effect of the GAN-based domain adaptation. Second, the domain adaptation is implemented with two mainstream GAN methods for style transfer, CycleGAN and AgGAN. The image processing comprises two major tasks, segmentation and object detection, built on the widely applied U-Net and Faster R-CNN, respectively. Finally, three experiments, each associated with a data set, are designed to cover different application cases: a change detection case, where multi-temporal images are collected from the same scene; a two-city case, where images are collected from different regions; and a two-sensor case, where images are obtained from aerial and satellite platforms, respectively.
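The two-part evaluation framework described above can be illustrated with a minimal sketch: a task model trained only on the source domain is scored on target-domain data, optionally after an image-level domain-transfer step. The function and metric names below are hypothetical illustrations, not code from the paper:

```python
def evaluate_with_domain_transfer(task_model, transfer, target_images, target_labels, metric):
    """Score a source-trained task model on target-domain data.

    `transfer` is the image-level domain adaptation step (e.g. a trained
    CycleGAN generator that restyles target images toward the source domain);
    passing an identity function gives the no-adaptation baseline.
    """
    adapted = [transfer(image) for image in target_images]   # image-level domain transfer
    predictions = [task_model(image) for image in adapted]   # source-trained segmentation/detection model
    return metric(predictions, target_labels)
```

Comparing the score obtained with an identity `transfer` against the score obtained with a GAN generator is exactly the comparison the study carries out for each task and data-set pair.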
Results revealed that GAN-based image transfer can significantly boost the performance of the segmentation model in the change detection case, although it did not surpass conventional methods; in the other two cases, the GAN-based methods produced worse results. In object detection, almost all of the methods failed to boost the performance of the Faster R-CNN, and the GAN-based methods performed the worst.
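Conventional image-level normalization baselines of the kind the GAN-based transfer failed to surpass often include histogram matching, which remaps target pixel values so that their cumulative distribution follows a source reference. A minimal single-band NumPy sketch (an illustration of such a baseline, not the paper's implementation):

```python
import numpy as np

def match_histogram(target, reference):
    """Remap `target` pixel values so their CDF matches that of `reference`."""
    # Unique target values, the inverse index to rebuild the image, and counts.
    _, t_idx, t_counts = np.unique(target.ravel(), return_inverse=True, return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical cumulative distribution functions of both images.
    t_cdf = np.cumsum(t_counts).astype(np.float64) / target.size
    r_cdf = np.cumsum(r_counts).astype(np.float64) / reference.size
    # Map each target quantile to the reference value at the same quantile.
    matched = np.interp(t_cdf, r_cdf, r_values)
    return matched[t_idx].reshape(target.shape)
```

For multi-band imagery this is typically applied band by band; unlike a GAN generator, it requires no training and cannot hallucinate new spatial structure.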
