Fusion neural networks for plant classification: learning to combine RGB, hyperspectral, and lidar data
PeerJ ( IF 2.3 ) Pub Date : 2021-07-29 , DOI: 10.7717/peerj.11790
Victoria M Scholl 1, 2 , Joseph McGlinchy 1 , Teo Price-Broncucia 3 , Jennifer K Balch 1, 2 , Maxwell B Joseph 1
Airborne remote sensing offers unprecedented opportunities to efficiently monitor vegetation, but methods to delineate and classify individual plant species from the collected data are still being actively developed and improved. The Integrating Data science with Trees and Remote Sensing (IDTReeS) plant identification competition openly invited scientists to create and compare individual tree mapping methods. Participants were tasked with training taxon identification algorithms on data from two sites and then transferring their methods to a third, unseen site, using field-based plant observations in combination with airborne remote sensing image data products from the National Ecological Observatory Network (NEON). These data were captured by a high-resolution digital camera sensitive to red, green, and blue (RGB) light; a hyperspectral imaging spectrometer spanning visible to shortwave-infrared wavelengths; and lidar systems that capture the spectral and structural properties of vegetation. As participants in the IDTReeS competition, we developed a two-stage deep learning approach to integrate NEON remote sensing data from all three sensors and classify individual plant species and genera. The first stage is a convolutional neural network that generates taxon probabilities from RGB images, and the second stage is a fusion neural network that "learns" how to combine these probabilities with hyperspectral and lidar data. Our two-stage approach leverages the ability of neural networks to flexibly and automatically extract descriptive features from complex, high-dimensional image data. Our method achieved an overall classification accuracy of 0.51 on the training set and 0.32 on the test set, which contained data from an unseen site with unknown taxon classes.
Although transferability of classification algorithms to unseen sites with unknown species and genus classes proved to be a challenging task, developing methods with openly available NEON data that will be collected in a standardized format for 30 years allows for continual improvements and major gains for members of the computational ecology community. We outline promising directions related to data preparation and processing techniques for further investigation, and provide our code to contribute to open reproducible science efforts.
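The late-fusion idea described in the abstract can be sketched as a forward pass: a first-stage CNN emits a taxon probability vector per tree crown, which is concatenated with per-crown hyperspectral and lidar features and fed to a small second-stage network that outputs fused taxon probabilities. This is only an illustrative sketch, not the authors' exact architecture — the feature counts (`N_TAXA`, `N_HSI`, `N_LIDAR`), the hidden-layer size, and the choice of ReLU/softmax layers are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TAXA = 10   # hypothetical number of taxon classes
N_HSI = 369   # hypothetical number of hyperspectral band features per crown
N_LIDAR = 4   # hypothetical structural features (e.g. canopy height statistics)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fusion_forward(rgb_probs, hsi_feat, lidar_feat, params):
    """Second-stage fusion network: concatenate the first-stage CNN's
    taxon probabilities with hyperspectral and lidar features, then
    apply one hidden layer and a softmax over taxa."""
    x = np.concatenate([rgb_probs, hsi_feat, lidar_feat], axis=-1)
    h = np.maximum(0.0, x @ params["W1"] + params["b1"])  # ReLU hidden layer
    return softmax(h @ params["W2"] + params["b2"])       # fused probabilities

# Randomly initialized parameters stand in for trained weights.
hidden = 32
d_in = N_TAXA + N_HSI + N_LIDAR
params = {
    "W1": rng.normal(0, 0.1, (d_in, hidden)), "b1": np.zeros(hidden),
    "W2": rng.normal(0, 0.1, (hidden, N_TAXA)), "b2": np.zeros(N_TAXA),
}

rgb_probs = softmax(rng.normal(size=(1, N_TAXA)))  # stage-one CNN output
hsi_feat = rng.normal(size=(1, N_HSI))             # e.g. mean crown reflectance per band
lidar_feat = rng.normal(size=(1, N_LIDAR))         # e.g. height percentiles
fused = fusion_forward(rgb_probs, hsi_feat, lidar_feat, params)
print(fused.shape)  # (1, N_TAXA): one fused probability per taxon
```

Because the first stage's probabilities enter the fusion network as ordinary input features, the second stage can learn when to trust the RGB classifier and when the hyperspectral or structural evidence should override it.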

Updated: 2021-07-29