Super-resolution method using generative adversarial network for Gaofen wide-field-view images
Journal of Applied Remote Sensing (IF 1.4), Pub Date: 2021-06-01, DOI: 10.1117/1.jrs.15.028506
Ziyun Zhang 1, Chengming Zhang 1, Menxin Wu 2, Yingjuan Han 3, Hao Yin 1, Ailing Kong 1, Fangfang Chen 1

Accurate information on the spatial distribution of crops is of great significance for scientific research and production practices. Such information can be extracted from high-spatial-resolution optical remote sensing images; however, acquiring these images with wide coverage is difficult. We established a model named multispectral super-resolution generative adversarial network (MS_SRGAN) for generating high-resolution 4-m images from Gaofen 1 wide-field-view (WFV) 16-m images. The MS_SRGAN model contains a generator and a discriminator. The generator network is composed of feature extraction units and feature fusion units arranged in a symmetric structure, and an attention mechanism is introduced to constrain the spectral values of the feature maps during feature extraction. The generator loss incorporates a feature loss that describes feature-level differences between images; this loss is computed using pre-trained discriminator parameters and a partial discriminator network. In addition to supplying the feature loss, the discriminator network, a simple convolutional neural network, also provides the adversarial loss. The adversarial loss supplies the generator with synthesized high-frequency details, yielding sharper images. In the Gaofen 1 WFV image test, the performance of MS_SRGAN was compared with that of Bicubic, EDSR, SRGAN, and ESRGAN. The results show that the spectral angle mapper (3.387) and structural similarity index measure (0.998) of MS_SRGAN are higher than those of the other models. In addition, the image obtained by MS_SRGAN is more realistic: its texture details and color distribution are closer to those of the reference image.
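To make the loss design described in the abstract concrete, the following PyTorch sketch shows how a feature loss computed from a pre-trained, partial discriminator can be combined with an adversarial loss to train the generator. It is a minimal illustration, not the authors' implementation: the module names, layer sizes, band count, pixel-loss term, and loss weights are all assumptions.

```python
# Minimal sketch of the MS_SRGAN-style loss composition (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Simple convolutional discriminator; its early convolutional layers
    double as the 'partial discriminator network' used for the feature loss."""
    def __init__(self, in_channels=4):  # e.g., 4 multispectral bands (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1)  # real/fake logit
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def generator_loss(disc, sr, hr, w_feat=1.0, w_adv=1e-3):
    """Combine a pixel loss, a feature loss from the (pre-trained, partial)
    discriminator, and an adversarial loss. Weights are illustrative."""
    pixel_loss = F.l1_loss(sr, hr)              # base content term (assumption)
    feat_sr = disc.features(sr)                 # features of the generated image
    with torch.no_grad():
        feat_hr = disc.features(hr)             # features of the reference image
    feature_loss = F.mse_loss(feat_sr, feat_hr)
    real_labels = torch.ones(sr.size(0), 1, device=sr.device)
    adv_loss = F.binary_cross_entropy_with_logits(disc(sr), real_labels)
    return pixel_loss + w_feat * feature_loss + w_adv * adv_loss
```

One plausible reason for drawing the feature loss from the discriminator's own layers rather than from an RGB-pretrained VGG network (as in SRGAN and ESRGAN) is that it keeps the perceptual term compatible with multispectral inputs; the abstract does not state this rationale explicitly, so it should be read as an inference.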

Updated: 2021-06-25