Land cover mapping with ultra‐high‐resolution aerial imagery
Remote Sensing in Ecology and Conservation (IF 5.5) Pub Date: 2020-12-29, DOI: 10.1002/rse2.189
Ned Horning

There has been an increase in the use of UAVs (the acronym has several expansions; the most common is unmanned aerial vehicle, but some public and private organizations replace “unmanned” with gender-neutral terms such as unoccupied, uncrewed, unpiloted and uninhabited) and other low-altitude aerial imaging platforms for mapping land cover. Creating accurate land cover maps from these ultra-high-resolution (sub-decimeter) images is difficult due to many factors, including high intra-object variability and variable illumination and reflection geometry within objects. Unfortunately, traditional image processing methods are not well suited to this classification task. On top of this already complicated situation is the reality that, in many cases, the imagery is acquired under less-than-ideal weather conditions using consumer-grade UAVs and cameras.

Along with an increase in the use of consumer drones for mapping, there has also been an increase in the performance of machine-learning-based image processing software and of the hardware required to run many of these algorithms. Machine learning algorithms, including those categorized as deep learning, are designed to identify patterns in data without being programmed to recognize specific patterns using preconceived rules or statistical relationships. Much work is going into advancing the state of the art and the state of practice to improve our ability to map land cover using ultra-high-resolution imagery.

In this special issue of Remote Sensing in Ecology and Conservation, five papers present research to map terrestrial and marine environments using imagery acquired from UAVs and other platforms. A range of image processing techniques are presented, each tailored to respond to specific mapping challenges. Working in the marine realm, Ridge et al. (2020) developed a novel deep learning tool called OysterNet that uses a convolutional neural network (CNN) to automate the detection and delineation of oyster reefs. Their approach has the potential to greatly reduce the level of effort required to perform this time-consuming manual task, thereby opening the possibility of mapping large areas, which is impracticable using traditional methods. The work presented provides a foundation for future research to monitor oyster reefs and other biogenically structured coastal objects.
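The core operation such CNN-based detection builds on is the convolution: a small learned kernel is slid across the image and responds strongly wherever a matching spatial pattern occurs. The minimal NumPy sketch below illustrates that building block only; the kernel, toy image and sizes are illustrative stand-ins, not OysterNet's actual architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the basic operation a CNN layer
    applies when scanning imagery for spatial patterns."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 8x8 single-band scene with a bright 3x3 patch (a stand-in for reef texture)
img = np.zeros((8, 8))
img[2:5, 2:5] = 1.0

# An edge-like filter; in a trained CNN such kernels are learned, not hand-set
kernel = np.array([[1.0, 0.0, -1.0]] * 3)

fmap = conv2d(img, kernel)
print(fmap.shape)  # (6, 6)
```

A trained network stacks many such layers, learning kernels that respond to reef-like texture rather than hand-coded edges, and combines their feature maps into a per-pixel delineation.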

Moving to terrestrial landscapes, Havrilla et al. (2020) present research on mapping and monitoring biological soil crust (biocrust) condition and extent. Mapping biocrusts is not new, but this research extends previous capabilities by developing methods to map fine-scale patterns using sub-centimeter resolution aerial imagery acquired using a UAV. Their approach uses object-based image analysis (OBIA), with an image segmentation step followed by classification into biocrust categories using a support vector machine algorithm. The resulting classification was then used to calculate various landscape metrics. Their results include lessons learned and guidance for researchers and practitioners. These fine-scale mapping methods are important tools for supporting land management and for improving our ability to reliably scale biocrust mapping to satellite imagery.
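The two-stage OBIA pattern described above — segment first, then classify each segment — can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the "segmentation" is a fixed pixel grid standing in for a content-driven segmenter, and the image, labels and class names are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy 3-band 16x16 image: left half darker ("biocrust-like"),
# right half brighter ("bare soil"). Reflectance values are hypothetical.
img = np.concatenate(
    [rng.normal(0.2, 0.02, (16, 8, 3)), rng.normal(0.7, 0.02, (16, 8, 3))],
    axis=1,
)

def block_features(image, size=4):
    """Stand-in for the segmentation step: split the image into fixed
    4x4-pixel blocks and summarize each as its mean value per band.
    Real OBIA uses content-driven segments (e.g. superpixels), not a grid."""
    h, w, _ = image.shape
    feats, col_centers = [], []
    for i in range(0, h, size):
        for j in range(0, w, size):
            block = image[i:i + size, j:j + size]
            feats.append(block.mean(axis=(0, 1)))
            col_centers.append(j + size // 2)
    return np.array(feats), np.array(col_centers)

X, cx = block_features(img)
y = (cx >= 8).astype(int)  # hypothetical labels: 0 = biocrust, 1 = bare soil

clf = SVC(kernel="rbf").fit(X, y)  # classify each segment, not each pixel
pred = clf.predict(X)
print((pred == y).mean())
```

Classifying segments instead of pixels is what lets OBIA suppress the high within-object variability of ultra-high-resolution imagery: each object contributes one averaged feature vector rather than thousands of noisy pixels.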

The remaining three papers in this special issue focus on mapping different vegetation types. Räsänen et al. (2020) studied peatland vegetation patterns in the northern boreal vegetation zone of Finland. Their research project incorporated a variety of image products ranging from 0.02 m resolution UAV imagery to 4 m resolution satellite imagery as well as airborne lidar. They show how these different data sets complement each other in this compositionally and spatially complex environment. Instead of classifying vegetation into discrete categories, they used random forest regression to create fractional cover maps of plant functional type and other metrics. Their results highlight that different data sets and processing methods need to be considered and tested when mapping any unique cover type to maximize the quality of the output product. This paper provides general guidance that will be of benefit to people working in peatlands and other complex environments.
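The fractional-cover idea — predicting a continuous proportion per pixel rather than a discrete class — maps naturally onto random forest regression. The sketch below uses entirely synthetic data (the band count, the lidar-height predictor and the cover relationship are assumptions for illustration, not the study's inputs).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical per-pixel predictors: 4 spectral bands plus a lidar-derived
# height, loosely mimicking a multi-source UAV/satellite/lidar stack.
n = 500
X = rng.uniform(0, 1, (n, 5))

# Synthetic fractional cover of one plant functional type, in [0, 1]
y = np.clip(0.6 * X[:, 0] - 0.3 * X[:, 4] + rng.normal(0, 0.05, n), 0, 1)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
frac = model.predict(X)

# Because a random forest averages training targets, its predictions
# stay within the observed [0, 1] cover range without extra clamping.
print(frac.min() >= 0 and frac.max() <= 1)  # True
```

The bounded-output property noted in the final comment is one practical reason tree-based regression is attractive for cover fractions.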

The study by Kattenborn et al. (2020) uses UAV imagery to predict fractional cover for three landscapes with varied cover types such as trees, other woody vegetation, grasses and forbs, and moss. This research project developed a novel workflow for mapping species and vegetation type using CNN‐based regression to output continuous cover maps instead of classified images typically output from CNNs. Their work provides insight into dealing with the trade‐off between spatial detail and predictive accuracy that must always be weighed when deciding on an appropriate land cover classification workflow. This work incorporated 3D information derived using structure‐from‐motion processing of the UAV imagery. Surprisingly, they found that this 3D information did little to improve their results. The paper concludes with lessons learned and suggestions for future research.
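The distinction between a classifying CNN and a regression-style CNN comes down to how the network's final scores are used: taking the argmax yields one discrete label per tile, while keeping the continuous scores yields fractional cover. The toy example below shows that output-head difference only; the logits, class names and normalization are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(z):
    """Normalize raw scores into non-negative values that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical raw scores from a network's final layer for one image tile,
# over three cover types (logits and class names are illustrative)
logits = np.array([1.2, 0.4, -0.3])
classes = ["tree", "grass", "moss"]

# Classification head: collapse to one discrete label per tile
label = classes[int(np.argmax(logits))]

# Regression-style head: keep the continuous per-class scores, normalized
# here so they can be read as fractional cover summing to 1
# (one simple formulation; the paper's exact design may differ)
cover = softmax(logits)

print(label, dict(zip(classes, cover.round(2))))
```

Keeping the continuous vector preserves mixed-pixel information that the argmax discards, which is exactly what continuous cover maps require.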

The fifth paper, by Horning et al. (2020), focused on using open source software products to classify cheatgrass (Bromus tectorum) and other cover types in the Great Basin of the USA. UAV imagery acquired from three flying heights ranging from 10 m to 90 m above ground level was used to compare four different machine-learning-based classification methods. The classification methods included pixel-based and OBIA approaches and the following classification algorithms: random forest, CNNs and fully connected neural networks. The different workflows were compared for each flying height using classification accuracy as well as a visual assessment to convey the classification results.
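A comparison of that kind — several algorithms run on the same training and test split, scored by classification accuracy — follows a standard scikit-learn pattern. The sketch below compares a random forest against a small fully connected network on synthetic two-class data; the feature values, class balance and model settings are placeholders, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Synthetic stand-in for per-pixel features from UAV imagery: 3 bands,
# two cover classes (e.g. cheatgrass vs other) with shifted band means
X = np.vstack([rng.normal(0.3, 0.1, (100, 3)), rng.normal(0.6, 0.1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

results = {}
for name, clf in [
    ("random forest", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("fully connected NN", MLPClassifier(hidden_layer_sizes=(16,),
                                         max_iter=1000, random_state=0)),
]:
    clf.fit(Xtr, ytr)
    results[name] = accuracy_score(yte, clf.predict(Xte))

for name, acc in results.items():
    print(f"{name}: {acc:.2f}")
```

Holding the train/test split fixed across algorithms, as here, is what makes the resulting accuracies directly comparable; in the study the same logic is applied per flying height.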

The papers in this special issue show that much progress has been made in recent years with regard to mapping land cover using ultra-high-resolution imagery, and that there is still much to do. The workflows presented aim to automate, to the extent practical, classification tasks that previously relied primarily on time-consuming manual methods. That said, it should be kept in mind that the initial goal of these improved methods is to augment, not replace, a human analyst. The overall intent is to match or exceed the accuracy with which a trained human can identify and label land cover objects, and to do so faster and more objectively. One common theme in these papers is that there is no one-size-fits-all solution for classifying ultra-high-resolution aerial imagery. With increased spatial resolution often comes increased complexity within an image, and new processing methods are needed. These papers make a significant contribution to that need. These are exciting times, and I look forward to upcoming novel methods for extracting information from the ever-growing archive of UAV and other ultra-high-resolution aerial imagery.


