Soybean yield prediction from UAV using multimodal data fusion and deep learning
Remote Sensing of Environment (IF 13.5), Pub Date: 2020-02-01, DOI: 10.1016/j.rse.2019.111599
Maitiniyazi Maimaitijiang, Vasit Sagan, Paheding Sidike, Sean Hartling, Flavio Esposito, Felix B. Fritschi

Abstract: Preharvest crop yield prediction is critical for grain policy making and food security. Early estimation of yield at the field or plot scale also contributes to high-throughput plant phenotyping and precision agriculture. New developments in Unmanned Aerial Vehicle (UAV) platforms and sensor technology enable cost-effective, simultaneous multi-sensor/multimodal data collection at very high spatial and spectral resolutions. The objective of this study is to evaluate the power of UAV-based multimodal data fusion using RGB, multispectral, and thermal sensors to estimate soybean (Glycine max) grain yield within a Deep Neural Network (DNN) framework. RGB, multispectral, and thermal images were collected with a low-cost multi-sensor UAV from a test site in Columbia, Missouri, USA. Multimodal information, including canopy spectral, structural, thermal, and texture features, was extracted and combined to predict grain yield using Partial Least Squares Regression (PLSR), Random Forest Regression (RFR), Support Vector Regression (SVR), an input-level feature fusion DNN (DNN-F1), and an intermediate-level feature fusion DNN (DNN-F2). The results can be summarized in three messages: (1) multimodal data fusion improves yield prediction accuracy and adapts better to spatial variation; (2) DNN-based models improve prediction accuracy, with the highest accuracy obtained by DNN-F2 (R² of 0.720 and a relative root mean square error (RMSE%) of 15.9%); (3) DNN-based models were less prone to saturation effects and performed more adaptively in predicting grain yield across the Dwight, Pana, and AG3432 soybean genotypes in this study. Furthermore, DNN-based models demonstrated consistent performance over space, with less spatial dependency and variation. This study indicates that multimodal data fusion using a low-cost UAV within a DNN framework can provide a relatively accurate and robust estimate of crop yield, and deliver valuable insight for high-throughput phenotyping and crop field management with high spatial precision.
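To make the fusion terminology concrete: in DNN-F1 the spectral, structural, thermal, and texture features are concatenated into one vector before entering a single network, whereas in DNN-F2 each modality first passes through its own subnetwork and the learned hidden representations are fused before regression. Below is a minimal PyTorch sketch of the two architectures and the RMSE% metric; the feature dimensions, layer widths, and metric normalization are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of the two fusion strategies named
# in the abstract: DNN-F1 (input-level fusion) and DNN-F2 (intermediate-level
# fusion). Feature dimensions and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

# Hypothetical per-modality feature dimensions
# (canopy spectral, structure, thermal, texture).
DIMS = {"spectral": 20, "structure": 5, "thermal": 3, "texture": 16}

def mlp(in_dim: int, hidden: int, out_dim: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class DNNF1(nn.Module):
    """Input-level fusion: concatenate all modality features first,
    then pass them through a single network."""
    def __init__(self):
        super().__init__()
        self.net = mlp(sum(DIMS.values()), 64, 1)

    def forward(self, feats: dict) -> torch.Tensor:
        # feats maps modality name -> (batch, dim) tensor
        x = torch.cat([feats[k] for k in DIMS], dim=1)
        return self.net(x).squeeze(1)

class DNNF2(nn.Module):
    """Intermediate-level fusion: a separate subnetwork per modality;
    the learned hidden representations are concatenated and regressed."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.branches = nn.ModuleDict(
            {k: mlp(d, 32, hidden) for k, d in DIMS.items()})
        self.head = mlp(hidden * len(DIMS), 64, 1)

    def forward(self, feats: dict) -> torch.Tensor:
        z = torch.cat([self.branches[k](feats[k]) for k in DIMS], dim=1)
        return self.head(z).squeeze(1)

def rmse_percent(y_true: torch.Tensor, y_pred: torch.Tensor) -> torch.Tensor:
    """Relative RMSE: RMSE divided by the mean observed yield, in percent
    (one common definition; the paper may normalize differently)."""
    rmse = torch.sqrt(torch.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / torch.mean(y_true)

# Example: random features for a batch of 8 plots.
feats = {k: torch.randn(8, d) for k, d in DIMS.items()}
print(DNNF1()(feats).shape, DNNF2()(feats).shape)  # torch.Size([8]) twice
```

The design difference matters because intermediate-level fusion lets each modality learn its own representation before mixing, which is consistent with the abstract's finding that DNN-F2 achieved the highest accuracy.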

Updated: 2020-02-01