Wildfire Detection From Multisensor Satellite Imagery Using Deep Semantic Segmentation
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (IF 5.5) Pub Date: 2021-06-30, DOI: 10.1109/jstars.2021.3093625
Dmitry Rashkovetsky, Florian Mauracher, Martin Langer, Michael Schmitt

Deriving the extent of areas affected by wildfires is critical to fire management, protection of the population, damage assessment, and a better understanding of the consequences of fires. In the past two decades, several algorithms utilizing data from Earth observation satellites have been developed to detect fire-affected areas. However, most of these methods require the establishment of complex functional relationships between numerous remote sensing data parameters. More recently, deep learning has found its way into this application, with the advantage of detecting patterns in complex data by learning from examples automatically. In this article, a workflow for the detection of fire-affected areas from satellite imagery acquired in the visible, infrared, and microwave domains is described. Using this workflow, the fire detection potential of four sources of freely available satellite imagery was investigated: the C-SAR instrument on board Sentinel-1, the multispectral instrument on board Sentinel-2, the sea and land surface temperature instrument on board Sentinel-3, and the MODIS instrument on board Terra and Aqua. For each of them, a single-input convolutional neural network based on the well-known U-Net architecture was trained on a newly created dataset. The performance of the resulting four single-instrument models was evaluated in the presence of clouds and in clear conditions. In addition, the potential of combining predictions from pairs of single-instrument models was investigated. The results show that the fusion of Sentinel-2 and Sentinel-3 data provides the best detection rate in clear conditions, whereas the fusion of Sentinel-1 and Sentinel-2 data shows a significant benefit in cloudy weather.
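To make the described setup concrete, the sketch below illustrates the two core ingredients named in the abstract: a compact U-Net-style segmentation network instantiated once per instrument, and a late, decision-level fusion of the per-pixel predictions from a pair of single-instrument models. The code is illustrative only: the input channel counts (e.g., 2 for Sentinel-1 VV/VH backscatter, 13 for Sentinel-2 bands), the network depth, and the probability-averaging fusion rule are assumptions for the sketch, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Compact U-Net for binary burned-area segmentation.

    `in_channels` is set per sensor; the channel counts used below
    are illustrative assumptions, not the paper's exact band choice.
    """
    def __init__(self, in_channels, base=32):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # per-pixel fire logit

    def forward(self, x):
        e1 = self.enc1(x)                  # encoder with skip features
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

def fuse_predictions(logits_a, logits_b, threshold=0.5):
    # Late decision fusion of two single-instrument models: average the
    # per-pixel fire probabilities, then threshold. Averaging is one
    # plausible combination rule assumed here for illustration.
    p = 0.5 * (torch.sigmoid(logits_a) + torch.sigmoid(logits_b))
    return (p > threshold).float()

if __name__ == "__main__":
    s1_model = SmallUNet(in_channels=2)    # e.g., Sentinel-1 VV/VH
    s2_model = SmallUNet(in_channels=13)   # e.g., Sentinel-2 bands
    s1_patch = torch.randn(1, 2, 128, 128)
    s2_patch = torch.randn(1, 13, 128, 128)
    mask = fuse_predictions(s1_model(s1_patch), s2_model(s2_patch))
    print(mask.shape)  # torch.Size([1, 1, 128, 128])
```

Averaging sigmoid probabilities before thresholding is only one simple way to combine a pair of single-instrument models; a logical OR of binary masks or a learned fusion layer would be equally plausible alternatives.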

Last updated: 2021-07-27