Using object-based image analysis to detect laughing gull nests
GIScience & Remote Sensing (IF 6.7), Pub Date: 2021-11-11, DOI: 10.1080/15481603.2021.1999376
Benjamin F. Martini, Douglas A. Miller

ABSTRACT

Remote sensing has long been used to study wildlife; however, manual methods of detecting wildlife in aerial imagery are often time-consuming and prone to human error, and newer computer vision techniques have not yet been extensively applied to wildlife surveys. We used the object-based image analysis (OBIA) software eCognition to detect laughing gull (Leucophaeus atricilla) nests in Jamaica Bay as part of an ongoing monitoring effort at John F. Kennedy International Airport. Our technique combines high-resolution 4-band aerial imagery, captured from a manned aircraft with a multispectral UltraCam Falcon M2 camera, with LiDAR point cloud data and land cover data derived from a bathymetric LiDAR point cloud to classify and extract laughing gull nests. Our ruleset uses the site (topographic position of nest objects), tone (spectral characteristics of nest objects), shape, size, and association (nearby objects commonly found with the objects of interest that help identify them) elements of image interpretation, as well as NDVI and a sublevel object examination, to classify and extract nests. The ruleset achieves a producer's accuracy of 98%, a user's accuracy of 65%, and a kappa of 0.696, indicating that it extracts the majority of the nests in the imagery while reducing errors of commission to only 35% of the final results. The remaining errors of commission are difficult for the software to differentiate without also reducing the number of nests successfully extracted; they are best addressed by manual verification of the output as part of a semi-automated workflow in which the OBIA completes the initial search of the imagery and the user then systematically verifies the results to remove errors. This eliminates the need to manually search entire sets of imagery for nests, resulting in a much more efficient and less error-prone methodology than previous unassisted image interpretation techniques. Because of the extensibility of OBIA software and the increasing availability of imagery from small unmanned aircraft systems (sUAS), our methodology and its benefits have great potential for adaptation to other species surveyed using aerial imagery, enhancing wildlife population monitoring.
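For readers less familiar with the accuracy figures quoted above, the short Python sketch below shows how producer's accuracy, user's accuracy, and Cohen's kappa are computed from a confusion matrix of classified versus reference objects. The metric definitions are the standard ones used in remote sensing accuracy assessment; the example confusion matrix is hypothetical and does not reproduce the paper's counts.

import numpy as np

def accuracy_metrics(cm):
    """Producer's/user's accuracy and Cohen's kappa from a square
    confusion matrix (rows = reference classes, columns = classified)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    ref_totals = cm.sum(axis=1)        # reference (ground-truth) totals
    map_totals = cm.sum(axis=0)        # classified (map) totals

    producers = diag / ref_totals      # 1 - omission error per class
    users = diag / map_totals          # 1 - commission error per class
    overall = diag.sum() / total
    expected = (ref_totals * map_totals).sum() / total ** 2
    kappa = (overall - expected) / (1.0 - expected)
    return producers, users, overall, kappa

# Hypothetical two-class example (nest, non-nest); counts are illustrative only.
cm = [[45,   5],     # reference nests: 45 correctly extracted, 5 missed
      [20, 430]]     # reference non-nests: 20 false nest detections
producers, users, overall, kappa = accuracy_metrics(cm)
print(f"producer's (nest): {producers[0]:.1%}, "
      f"user's (nest): {users[0]:.1%}, kappa: {kappa:.3f}")

With these definitions, the reported user's accuracy of 65% corresponds directly to the stated 35% rate of commission errors among the extracted nest objects.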



Updated: 2021-12-14