Spatiotemporal Fusion of Remote Sensing Images using a Convolutional Neural Network with Attention and Multiscale Mechanisms
International Journal of Remote Sensing (IF 3.4) Pub Date: 2020-12-29, DOI: 10.1080/01431161.2020.1809742
Weisheng Li, Xiayan Zhang, Yidong Peng, Meilin Dong
ABSTRACT In this paper, we propose a new spatiotemporal fusion method based on a convolutional neural network with added attention and multiscale mechanisms (AMNet). Unlike previous spatiotemporal fusion methods, the residual image obtained by subtracting moderate resolution imaging spectroradiometer (MODIS) images acquired at two times is used directly to train the network, and two special structures, a multiscale mechanism and an attention mechanism, are used to increase the fusion accuracy. Our method requires only one pair of images to achieve spatiotemporal fusion. The work is divided into three steps. The first step extracts feature maps of the two types of images at different scales and fuses them separately. The second step applies the attention mechanism to focus on the important information in the feature maps. The third step reconstructs the image. We evaluated the method on two classical datasets and compared the results with three other state-of-the-art spatiotemporal fusion methods. The results of our method show richer spatial details and more accurate prediction of temporal changes.
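The abstract outlines a three-step pipeline: multiscale feature extraction and fusion, attention-based reweighting, and image reconstruction, driven by the residual between MODIS images at the reference and prediction dates. The following is a minimal sketch of how such a pipeline could be organized in PyTorch; the module layout, channel sizes, kernel sizes, and the squeeze-and-excitation style channel attention are illustrative assumptions and are not taken from the paper or its released code.

```python
# Hypothetical sketch of an AMNet-style spatiotemporal fusion network.
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Extract features at several receptive-field sizes and merge them."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.merge = nn.Conv2d(3 * out_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        return self.act(self.merge(feats))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention to reweight feature maps."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class AMNetSketch(nn.Module):
    """Fuse the fine reference image with the coarse (MODIS) residual."""
    def __init__(self, bands=6, ch=32):
        super().__init__()
        self.fine_branch = MultiScaleBlock(bands, ch)      # fine image at t1
        self.residual_branch = MultiScaleBlock(bands, ch)  # MODIS(t2) - MODIS(t1)
        self.attention = ChannelAttention(2 * ch)
        self.reconstruct = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, bands, 3, padding=1),
        )

    def forward(self, fine_t1, coarse_t1, coarse_t2):
        residual = coarse_t2 - coarse_t1                    # temporal change signal
        f = self.fine_branch(fine_t1)                       # step 1: multiscale features
        r = self.residual_branch(residual)
        fused = self.attention(torch.cat([f, r], dim=1))    # step 2: attention
        return fine_t1 + self.reconstruct(fused)            # step 3: reconstruction


if __name__ == "__main__":
    # Illustrative shapes only; all inputs resampled to the fine-resolution grid.
    net = AMNetSketch(bands=6)
    fine_t1 = torch.randn(1, 6, 128, 128)
    coarse_t1 = torch.randn(1, 6, 128, 128)
    coarse_t2 = torch.randn(1, 6, 128, 128)
    pred_t2 = net(fine_t1, coarse_t1, coarse_t2)
    print(pred_t2.shape)  # torch.Size([1, 6, 128, 128])
```

In this sketch the coarse images are assumed to have been resampled to the fine grid beforehand, and the network predicts the fine image at the target date by adding a learned change, derived from the coarse residual, to the fine reference image.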

Updated: 2020-12-29