A deep translation (GAN) based change detection network for optical and SAR remote sensing images
ISPRS Journal of Photogrammetry and Remote Sensing (IF 10.6). Pub Date: 2021-07-23. DOI: 10.1016/j.isprsjprs.2021.07.007
Xinghua Li, Zhengshun Du, Yanyuan Huang, Zhenyu Tan

With the development of space-based imaging technology, an ever-growing number of images with different modalities and resolutions has become available. Optical images capture the rich spectral information and geometric shape of ground objects, but their quality degrades easily under poor atmospheric conditions. Although synthetic aperture radar (SAR) images cannot provide the spectral features of the region of interest (ROI), they capture polarization information in all weather and at all times. Optical and SAR images therefore carry a great deal of complementary information, which is of great value for change detection (CD) under poor weather conditions. However, because the imaging mechanisms of optical and SAR images differ, their CD is difficult to conduct directly with traditional difference or ratio algorithms. Most recent CD methods introduce image translation to reduce this difference, but the final results are still obtained by ordinary algebraic operations and threshold segmentation, with limited accuracy. To this end, this work proposes a deep translation based change detection network (DTCDN) for optical and SAR images. The deep translation first maps images from one domain (e.g., optical) to the other (e.g., SAR) through a cyclic structure, placing both in the same feature space. Sharing similar characteristics after deep translation, the images become comparable. Unlike most previous research, the translation results are then fed into a supervised CD network that uses deep context features to separate unchanged pixels from changed pixels. In the experiments, the proposed DTCDN was tested on four representative data sets from Gloucester, California, and Shuguang Village. Comparisons with state-of-the-art methods confirm the effectiveness and robustness of the proposed method.
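
The abstract describes a two-stage pipeline: a cyclic (CycleGAN-style) generator first translates the optical image into the SAR domain so that both epochs become comparable, and a supervised change-detection network then predicts a per-pixel changed/unchanged map from the translated and real SAR images. The PyTorch sketch below illustrates this idea only in outline; the layer configuration, channel widths, and module names (Generator, ChangeDetector, ResBlock) are illustrative assumptions of this summary, not the authors' published DTCDN architecture, and the adversarial and cycle-consistency training losses are omitted.

```python
# Minimal sketch of the two-stage idea: (1) deep translation optical -> SAR via a
# cyclic generator, (2) supervised change detection on the translated/real SAR pair.
# Sizes and names are illustrative assumptions, not the authors' exact DTCDN.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block used inside the translation generator."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """CycleGAN-style generator G: optical -> SAR (a twin F: SAR -> optical closes the cycle)."""
    def __init__(self, in_ch=3, out_ch=1, base=32, n_blocks=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 7, padding=3), nn.InstanceNorm2d(base), nn.ReLU(inplace=True),
            *[ResBlock(base) for _ in range(n_blocks)],
            nn.Conv2d(base, out_ch, 7, padding=3), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class ChangeDetector(nn.Module):
    """Supervised CD head: concatenated (translated SAR, real SAR) -> change logits."""
    def __init__(self, in_ch=2, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 1),  # sigmoid + threshold yields the binary change map
        )
    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    g_opt2sar = Generator()                  # stage 1: deep translation
    cd_net = ChangeDetector()                # stage 2: supervised change detection

    optical_t1 = torch.rand(1, 3, 256, 256)  # pre-event optical image
    sar_t2 = torch.rand(1, 1, 256, 256)      # post-event SAR image

    fake_sar_t1 = g_opt2sar(optical_t1)      # map the optical image into the SAR domain
    logits = cd_net(torch.cat([fake_sar_t1, sar_t2], dim=1))
    change_map = (torch.sigmoid(logits) > 0.5).float()
    print(change_map.shape)                  # torch.Size([1, 1, 256, 256])
```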



Updated: 2021-07-23