Deep multisensor learning for missing-modality all-weather mapping
ISPRS Journal of Photogrammetry and Remote Sensing (IF 12.7), Pub Date: 2021-03-04, DOI: 10.1016/j.isprsjprs.2020.12.009
Zhuo Zheng, Ailong Ma, Liangpei Zhang, Yanfei Zhong

Multisensor Earth observation has significantly accelerated the development of multisensor collaborative remote sensing applications, such as all-weather mapping using synthetic aperture radar (SAR) images and optical images. However, in real-world application scenarios, not all data sources may be available, i.e., the missing-modality problem: for example, poor imaging conditions obstruct the optical sensors, and only SAR images are available for mapping. This real-world scenario raises the challenge of how to leverage historical multisensor data to improve the representation ability of the available model. Knowledge transfer and knowledge distillation approaches are feasible solutions that transfer knowledge from other sensor models to the available model. However, these approaches suffer from knowledge forgetting and the multi-modality co-registration problem, which make the use of historical multisensor data inefficient. The root cause is that these approaches follow a single-sensor data-driven design. In this paper, a registration-free multisensor data-driven learning method, namely deep multisensor learning, is proposed from a new perspective of knowledge retention to overcome the above problems by learning a meta-sensory representation. To explore the existence of the meta-sensory representation, the meta-sensory representation hypothesis is first proposed, which posits that the essential difference between deep models trained on data from different sensors lies in the parameter distribution of the sensor-invariant and sensor-specific operations. Based on this hypothesis, a prototype network is proposed to learn the meta-sensory representation by modeling the knowledge retention mechanism with the proposed difference alignment operation (DiffAlignOp). DiffAlignOp enables the prototype network to dynamically generate sensor-specific networks that gather supervised signals from registration-free multisensor data. Because this dynamic network generation is differentiable, multisensor gradients can be obtained to learn the meta-sensory representation. To demonstrate the flexibility and practicality of deep multisensor learning, it was applied to all-weather mapping in a missing-modality scenario. The experiments were conducted on a large public multisensor all-weather mapping dataset consisting of high-resolution optical and SAR imagery with a spatial resolution of 0.5 m. The experimental results suggest that deep multisensor learning is superior to the other learning approaches in performance and stability, and they reveal the importance of the meta-sensory representation in multisensor remote sensing applications.
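The abstract does not spell out the mathematics of DiffAlignOp, but its description (a shared prototype plus dynamically generated, differentiable sensor-specific networks) suggests the following minimal PyTorch sketch. Everything here, including the additive weight-difference form, the names DiffAlignConv and PrototypeNet, and the toy tensor shapes, is an illustrative assumption rather than the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffAlignConv(nn.Module):
    """Conv layer whose effective weights are a shared (sensor-invariant)
    prototype plus a learnable per-sensor difference (assumed additive)."""
    def __init__(self, in_ch, out_ch, num_sensors, k=3):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.diffs = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_ch, in_ch, k, k))
             for _ in range(num_sensors)]
        )

    def forward(self, x, sensor_id):
        # Dynamic, differentiable generation of the sensor-specific kernel:
        # gradients flow into both the shared prototype and the difference.
        w = self.shared + self.diffs[sensor_id]
        return F.conv2d(x, w, padding=1)

class PrototypeNet(nn.Module):
    """Tiny two-layer prototype network for per-pixel classification."""
    def __init__(self, num_classes=2, num_sensors=2):
        super().__init__()
        self.conv1 = DiffAlignConv(3, 16, num_sensors)
        self.conv2 = DiffAlignConv(16, num_classes, num_sensors)

    def forward(self, x, sensor_id):
        return self.conv2(F.relu(self.conv1(x, sensor_id)), sensor_id)

# Registration-free training: each sensor contributes its own labeled
# patches; no pixel-aligned optical/SAR pairs are required.
net = PrototypeNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
optical = torch.randn(2, 3, 64, 64)            # toy optical patches
sar = torch.randn(2, 3, 64, 64)                # toy SAR patches (unaligned)
labels_opt = torch.randint(0, 2, (2, 64, 64))  # labels for the optical set
labels_sar = torch.randint(0, 2, (2, 64, 64))  # labels for the SAR set

loss = (F.cross_entropy(net(optical, 0), labels_opt)
        + F.cross_entropy(net(sar, 1), labels_sar))
loss.backward()  # multisensor gradients reach the shared prototype
opt.step()

# Missing-modality inference: only SAR is available, so the SAR-specific
# network is generated on the fly from the shared representation.
pred = net(sar, 1).argmax(dim=1)
```

The key property mirrored here is that a single shared parameter set receives gradients from every sensor's loss, which is the knowledge-retention mechanism the abstract describes; the actual DiffAlignOp may generate sensor-specific networks in a different way.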



