A Hierarchical Deep-Learning Approach for Rapid Windthrow Detection on PlanetScope and High-Resolution Aerial Image Data
Remote Sensing (IF 4.2), Pub Date: 2020-07-02, DOI: 10.3390/rs12132121
Wolfgang Deigele, Melanie Brandmeier, Christoph Straub

Forest damage due to storms causes economic loss and requires a fast response to prevent further damage such as bark beetle infestations. By using Convolutional Neural Networks (CNNs) in conjunction with a GIS, we aim to completely streamline the detection and mapping process for forest agencies. We developed and tested different CNNs for rapid windthrow detection based on PlanetScope satellite data and high-resolution aerial image data. Depending on the meteorological situation after the storm, PlanetScope data might be rapidly available due to its high temporal resolution, while the acquisition of high-resolution airborne data often takes weeks to a month and is, therefore, used in a second step for more detailed mapping. The study area is located in Bavaria, Germany (ca. 165 km²), and labels for damaged areas were provided by the Bavarian State Institute of Forestry (LWF). Modifications of a U-Net architecture were compared to other approaches using transfer learning (e.g., VGG19) to find the most efficient architecture for the task on both datasets while keeping the computational time low. A custom implementation of U-Net proved to be more accurate than transfer learning, especially on medium-resolution (3 m) PlanetScope imagery (intersection over union (IoU) score of 0.55), where transfer learning failed completely. Results for transfer learning based on VGG19 on high-resolution aerial image data are comparable to results from the custom U-Net architecture (IoU 0.76 vs. 0.73). When using both architectures on a dataset from a different area (located in Hesse, Germany), however, we find that the custom implementations have problems generalizing on aerial image data, while VGG19 still detects most damage in these images. For PlanetScope data, VGG19 again fails, while U-Net achieves reasonable mappings. Results highlight the potential of deep learning algorithms to detect damaged areas, with an IoU of 0.73 on airborne data and 0.55 on Planet Dove data. The proposed workflow, with complete integration into ArcGIS, is well suited for rapid first assessments after a storm event, allowing for better planning of the flight campaign and detailed mapping in a second stage.
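For reference, the IoU scores reported above compare a predicted damage mask against the reference labels. Below is a minimal sketch of the metric for binary masks; the function name and the toy arrays are illustrative, not taken from the paper:

```python
import numpy as np

def iou_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union between two binary damage masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy 2x2 example: the prediction covers the one damaged pixel
# plus one false positive, so IoU = 1 / 2 = 0.5.
pred  = np.array([[1, 1],
                  [0, 0]])
truth = np.array([[1, 0],
                  [0, 0]])
print(iou_score(pred, truth))  # 0.5
```

Read this way, the IoU of 0.55 on PlanetScope data means that just over half of the combined predicted-plus-labeled damage area is shared by both masks.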

Updated: 2020-07-02