Two-stream spatiotemporal image fusion network based on difference transformation
Journal of Applied Remote Sensing (IF 1.7), Pub Date: 2022-09-01, DOI: 10.1117/1.jrs.16.038506
Shuai Fang, Siyuan Meng, Jing Zhang, Yang Cao
For satellite imaging instruments, the tradeoff between spatial and temporal resolution leads to a spatial–temporal contradiction in image sequences. Spatiotemporal image fusion (STIF) provides a way to generate images with both high spatial and high temporal resolution, thereby expanding the applications of existing satellite imagery. Most deep learning-based STIF methods hand the task to the network as a whole, constructing an end-to-end model without accounting for the intermediate physical process. This leads to high complexity, low interpretability, and low accuracy in the fusion model. To address this problem, we propose a two-stream difference transformation spatiotemporal fusion (TSDTSF) network, which comprises a transformation stream and a fusion stream. In the transformation stream, an image difference transformation module reduces the pixel-distribution difference between images from different sensors at the same spatial resolution, and a feature difference transformation module improves the feature quality of low-resolution images. The fusion stream focuses on feature fusion and image reconstruction. TSDTSF shows superior performance in accuracy, visual quality, and robustness. The experimental results show that TSDTSF achieves an average coefficient of determination of R2 = 0.7847 and a root mean square error of RMSE = 0.0266, outperforming the next-best method's averages of R2 = 0.7519 and RMSE = 0.0289. Quantitative and qualitative results on various datasets demonstrate its superiority over state-of-the-art methods.
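The two evaluation metrics reported above have standard definitions: R2 = 1 − SS_res/SS_tot and RMSE = sqrt(mean((y − ŷ)²)), computed per band over corresponding pixels of the reference and fused images. A minimal sketch in NumPy, using hypothetical toy reflectance values rather than the paper's data:

```python
import numpy as np

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot,
    # where SS_tot is taken around the mean of the reference values.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    # Root mean square error over corresponding pixels.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Toy example: flattened reflectance values of a reference image
# and a fused prediction (illustrative values only).
ref = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
fused = np.array([0.12, 0.19, 0.33, 0.38, 0.52])

print(f"R2 = {r2_score(ref, fused):.4f}, RMSE = {rmse(ref, fused):.4f}")
```

In practice these would be evaluated band-by-band on the test scenes and averaged, which is presumably how the reported averages (R2 = 0.7847, RMSE = 0.0266) were obtained.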

Updated: 2022-09-01