Object-level classification of vegetable crops in 3D LiDAR point cloud using deep learning convolutional neural networks
Precision Agriculture (IF 5.4) | Pub Date: 2021-03-26 | DOI: 10.1007/s11119-021-09803-0
Reji Jayakumari, Rama Rao Nidamanuri, Anandakumar M. Ramiya

Crop discrimination at the plant or patch level is vital for modern technology-enabled agriculture. Multispectral and hyperspectral remote sensing data have been widely used for crop classification. Although spectral data are successful in classifying row crops and orchards, they are limited in discriminating vegetable and cereal crops at the plant or patch level. Terrestrial laser scanning is a promising remote sensing approach that offers distinct structural features useful for classifying crops at the plant or patch level. The objective of this research is the improvement and application of an advanced deep learning framework for object-based classification of three vegetable crops (cabbage, tomato and eggplant) using high-resolution LiDAR point clouds. Point clouds from a terrestrial laser scanner (TLS) were acquired over experimental plots of the University of Agricultural Sciences, Bengaluru, India. As part of the methodology, a deep convolutional neural network (CNN) model named CropPointNet was devised for the semantic segmentation of crops from a 3D perspective. CropPointNet is an adaptation of the PointNet deep CNN model developed for the segmentation of indoor objects in a typical computer vision scenario. Beyond adapting it to 3D point cloud segmentation of crops, the significant methodological improvements made in CropPointNet are a random sampling scheme for the training point clouds and an optimization of the network architecture to enable structural attribute-based segmentation of point clouds of unstructured objects, such as TLS point clouds of crops. The performance of the 3D crop classification was validated and compared against two popular deep learning architectures: PointNet and the Dynamic Graph-based Convolutional Neural Network (DGCNN). Results indicate consistent plant-level, object-based classification of crop point clouds, with overall accuracies of 81% or better for all three crops. The CropPointNet architecture proposed in this research can be generalized for the segmentation and classification of other row crops and natural vegetation types.
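To make the described pipeline concrete, the sketch below shows a minimal PointNet-style per-point segmentation network together with a random point-sampling step, written in PyTorch. It is an illustrative approximation only, not the authors' CropPointNet: the abstract does not specify layer sizes, sampling parameters or the label set, so the 4096-point sample size, the four-class output (three crops plus a background/soil class) and all layer widths are assumptions made for this example.

```python
# Illustrative PointNet-style semantic segmentation sketch (NOT the
# published CropPointNet). All sizes and the class count are assumed.
import torch
import torch.nn as nn


def random_sample(points: torch.Tensor, n_points: int = 4096) -> torch.Tensor:
    """Randomly sample a fixed number of points from one TLS scan.

    points: (N, 3) tensor of x, y, z coordinates. Sampling with
    replacement is used when the scan contains fewer than n_points points.
    """
    n = points.shape[0]
    idx = torch.randint(0, n, (n_points,)) if n < n_points else torch.randperm(n)[:n_points]
    return points[idx]


class PointSegNet(nn.Module):
    """PointNet-like per-point classifier: shared point-wise MLPs, a global
    max-pooled feature, and the global feature concatenated back onto each
    point feature before the segmentation head."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.local_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.global_mlp = nn.Sequential(
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv1d(128 + 1024, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, num_classes, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) -> per-point class scores of shape (B, N, num_classes)
        x = xyz.transpose(1, 2)                       # (B, 3, N)
        local = self.local_mlp(x)                     # (B, 128, N) point features
        global_feat = self.global_mlp(local).max(dim=2, keepdim=True)[0]  # (B, 1024, 1)
        global_feat = global_feat.expand(-1, -1, local.shape[2])
        logits = self.head(torch.cat([local, global_feat], dim=1))
        return logits.transpose(1, 2)


if __name__ == "__main__":
    scan = torch.rand(20000, 3)                # one synthetic TLS scan
    batch = random_sample(scan).unsqueeze(0)   # (1, 4096, 3)
    print(PointSegNet()(batch).shape)          # torch.Size([1, 4096, 4])
```

The random sampling step stands in for the paper's training-time sampling scheme; in practice each sampled subset would be paired with per-point crop labels and trained with a standard cross-entropy loss over the point-wise logits.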



