Virtual image pair-based spatio-temporal fusion
Remote Sensing of Environment ( IF 13.5 ) Pub Date : 2020-11-01 , DOI: 10.1016/j.rse.2020.112009
Qunming Wang , Yijie Tang , Xiaohua Tong , Peter M. Atkinson

Abstract Spatio-temporal fusion is a technique used to produce images with both fine spatial and fine temporal resolution. Generally, the principle of existing spatio-temporal fusion methods can be characterized by a unified prediction framework based on two parts: (i) the known fine spatial resolution images (e.g., Landsat images), and (ii) the fine spatial resolution increment predicted from the available coarse spatial resolution increment (i.e., a downscaling process), that is, the difference between the coarse spatial resolution images (e.g., MODIS images) acquired at the known and prediction times. Owing to seasonal changes and land cover changes, large differences always exist between images acquired at different times, resulting in a large increment and, in turn, great uncertainty in downscaling. In this paper, a virtual image pair-based spatio-temporal fusion (VIPSTF) approach is proposed to deal with this problem. VIPSTF is based on the concept of a virtual image pair (VIP), which is produced from the available, known MODIS-Landsat image pairs. We demonstrate theoretically that, compared to the known image pairs, the VIP is closer to the data at the prediction time. The VIP can capture more fine spatial resolution information directly from the known images and reduce the challenge of downscaling. VIPSTF is a flexible framework suitable for existing spatial weighting- and spatial unmixing-based methods, and two versions, VIPSTF-SW and VIPSTF-SU, are thus developed. Experimental results on a heterogeneous site and a site experiencing land cover type changes show that both spatial weighting- and spatial unmixing-based methods can be enhanced by VIPSTF, and the advantage is particularly noticeable when the observed image pairs are temporally far from the prediction time. Moreover, VIPSTF is free of the need for image pair selection and is robust to the use of multiple image pairs; it is also computationally faster than the original methods when multiple image pairs are used. The concept of the VIP provides new insight into enhancing spatio-temporal fusion by making fuller use of the observed image pairs and reducing the uncertainty of estimating the fine spatial resolution increment.
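The unified prediction framework described in the abstract, in which the fine image at the prediction time equals a known fine image plus a downscaled coarse increment, can be sketched as follows. This is a minimal illustrative sketch, not the authors' VIPSTF implementation: the function name `stf_predict` and the use of nearest-neighbour replication as a stand-in for a real downscaling model are assumptions for illustration only.

```python
import numpy as np

def stf_predict(fine_known, coarse_known, coarse_pred, scale):
    """Illustrative sketch of the unified spatio-temporal fusion framework:
    predicted fine image = known fine image + downscaled coarse increment.

    fine_known   : fine-resolution image at the known time (e.g., Landsat-like)
    coarse_known : coarse-resolution image at the known time (e.g., MODIS-like)
    coarse_pred  : coarse-resolution image at the prediction time
    scale        : ratio of fine to coarse pixel size
    """
    # Coarse spatial resolution increment between prediction and known times
    increment = coarse_pred - coarse_known
    # Placeholder downscaling: replicate each coarse pixel into a scale x scale
    # block (real methods use spatial weighting or spatial unmixing instead)
    fine_increment = np.kron(increment, np.ones((scale, scale)))
    # Prediction = known fine image + fine-resolution increment
    return fine_known + fine_increment
```

The larger the temporal gap between the known and prediction times, the larger `increment` becomes, and the more uncertain the downscaling step is; the VIP idea addresses this by synthesizing a "virtual" known pair closer to the prediction time before this step is applied.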

Updated: 2020-11-01