
Object-level classification of vegetable crops in 3D LiDAR point cloud using deep learning convolutional neural networks

Published in Precision Agriculture

Abstract

Crop discrimination at the plant or patch level is vital for modern, technology-enabled agriculture. Multispectral and hyperspectral remote sensing data have been widely used for crop classification. Although spectral data are successful in classifying row crops and orchards, they are of limited use for discriminating vegetable and cereal crops at the plant or patch level. Terrestrial laser scanning is a promising remote sensing approach that offers distinct structural features useful for classifying crops at this scale. The objective of this research is the improvement and application of an advanced deep learning framework for object-based classification of three vegetable crops (cabbage, tomato, and eggplant) using high-resolution LiDAR point clouds. Point clouds were acquired with a terrestrial laser scanner (TLS) over experimental plots of the University of Agricultural Sciences, Bengaluru, India. As part of the methodology, a deep convolutional neural network (CNN) named CropPointNet was devised for the semantic segmentation of crops directly in 3D. CropPointNet is an adaptation of the PointNet deep CNN, originally developed for segmenting indoor objects in a typical computer vision scenario. Beyond adapting PointNet to 3D point cloud segmentation of crops, the significant methodological improvements in CropPointNet are a random sampling scheme for the training point clouds and an optimization of the network architecture that enables structural attribute-based segmentation of unstructured objects such as TLS point clouds of crops. The performance of the 3D crop classification was validated and compared against two popular deep learning architectures: PointNet and the Dynamic Graph Convolutional Neural Network (DGCNN). Results indicate consistent plant-level object-based classification of crop point clouds, with overall accuracies of 81% or better for all three crops. The CropPointNet architecture proposed in this research can be generalized for segmentation and classification of other row crops and natural vegetation types.
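The abstract highlights a random sampling scheme that prepares fixed-size training point sets from variable-density TLS point clouds. The paper's exact scheme is not reproduced here; the snippet below is a minimal, hypothetical sketch of such a sampler in NumPy (the function name `sample_point_cloud` and the sample size of 4096 are illustrative assumptions, not details taken from the paper).

```python
import numpy as np

def sample_point_cloud(points, n_samples=4096, seed=None):
    """Randomly sample a fixed-size point set from a point cloud.

    `points` is an (N, D) array (rows = points, columns = x, y, z, ...).
    Sampling is without replacement when the cloud has enough points,
    and with replacement otherwise, so every training example ends up
    with the same shape regardless of the original cloud density.
    """
    rng = np.random.default_rng(seed)
    replace = points.shape[0] < n_samples
    idx = rng.choice(points.shape[0], size=n_samples, replace=replace)
    return points[idx]

# Example: a synthetic cloud of 10,000 XYZ points
cloud = np.random.default_rng(0).random((10_000, 3))
batch = sample_point_cloud(cloud, n_samples=4096, seed=0)
print(batch.shape)  # (4096, 3)
```

Drawing fixed-size point sets in this way is what lets fixed-input architectures in the PointNet family consume clouds of arbitrary density; sparse clouds are upsampled with replacement rather than discarded.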


Data availability

As per the regulations of our institute, we can share the data acquired in this research upon individual request until November 2021. Thereafter, we will host the data on a commonly accessible platform with a DOI.

References

  • Avola, G., Di Gennaro, S. F., Cantini, C., Riggi, E., Muratore, F., Tornambè, C., & Matese, A. (2019). Remotely sensed vegetation indices to discriminate field-grown olive cultivars. Remote Sensing, 11(10), 1242.

  • Axelsson, P. E. (2000). DEM generation from laser scanner data using adaptive TIN models. International Archives of Photogrammetry and Remote Sensing, 33, 110–117.

  • Bellakaout, A., Cherkaoui, M., Ettarid, M., & Touzani, A. (2016). Automatic 3D extraction of buildings, vegetation and roads from LIDAR data. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, 41, 173–180.

  • Eckart, B., Kim, K., & Jan, K. (2018). EOE: Expected overlap estimation over unstructured point cloud data. In Proceedings of the 2018 International Conference on 3D Vision (3DV) (pp. 747–755). IEEE. https://doi.org/10.1109/3DV.2018.00090

  • Handique, B. K., Khan, A. Q., Goswami, C., Prashnani, M., Gupta, C., & Raju, P. L. N. (2017). Crop discrimination using multispectral sensor onboard unmanned aerial vehicle. Proceedings of the National Academy of Sciences, India Section A: Physical Sciences, 87(4), 713–719.

  • Jin, S., Su, Y., Gao, S., Wu, F., Hu, T., Liu, J., Li, W., Wang, D., Chen, S., Jiang, Y., & Pang, S. (2018). Deep learning: Individual maize segmentation from terrestrial lidar data using faster R-CNN and regional growth algorithms. Frontiers in Plant Science, 9, 866.

  • Johnson, R. A., Miller, I., & Freund, J. E. (2000). Probability and statistics for engineers. Pearson Education.

  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

  • Koen, B. V. (1988). Toward a definition of the engineering method. European Journal of Engineering Education, 13(3), 307–315. https://doi.org/10.1080/03043798808939429

  • Lawin, F. J., Danelljan, M., & Felsberg, M. (2017). Deep projective 3D semantic segmentation. In Lecture Notes in Computer Science (pp. 95–107). Springer.

  • Liu, Y., Piramanayagam, S., Monteiro, S. T., & Saber, E. (2019). Semantic segmentation of multisensor remote sensing imagery with deep ConvNets and higher-order conditional random fields. Journal of Applied Remote Sensing, 13(1), 016501.

  • Lowphansirikul, C., Kim, K.S., Vinayaraj, P. & Tuarob, S. (2019). 3D Semantic segmentation of large-scale point-clouds in urban areas using deep learning. In Proceedings of the IEEE 11th International Conference on Knowledge and Smart Technology, 23–26 Jan. 2019. https://doi.org/10.1109/KST.2019.8687813.

  • Meng, Q., Hashimoto, Y. & Satoh, S.I. (2019). Fundus image classification and retinal disease localization with limited supervision. In Asian Conference on Pattern Recognition. Springer

  • Murray, J., Fennell, T. H., Blackburn, G. A., Whyatt, J. D., & Li, B. (2020). The novel use of proximal photogrammetry and terrestrial LiDAR to quantify the structural complexity of orchard trees. Precision Agriculture, 21, 473–483.

  • Ozdarici-Ok, A., Ok, A., & Schindler, K. (2015). Mapping of agricultural crops from single high-resolution multispectral images: Data-driven smoothing vs. parcel-based smoothing. Remote Sensing, 7(5), 5611–5638.

  • Paulus, S., Dupuis, J., Mahlein, A. K., & Kuhlmann, H. (2013). Surface feature-based classification of plant organs from 3D laser scanned point clouds for plant phenotyping. BMC Bioinformatics, 14(1), 238.

  • Paulus, S., Dupuis, J., Riedel, S., & Kuhlmann, H. (2014). Automated analysis of barley organs using 3D laser scanning: An approach for high throughput phenotyping. Sensors, 14(7), 12670–12686.

  • Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017a). PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 77–85). https://doi.org/10.1109/CVPR.2017.16

  • Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017b). PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

  • Soilán, R. M., Lindenbergh, R., Riveiro, R. B., & Sánchez, R. A. (2019). PointNet for the automatic classification of aerial point clouds. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 4, 445–452.

  • Sun, S., Li, C., & Paterson, A. H. (2017). In-field high-throughput phenotyping of cotton plant height using LiDAR. Remote Sensing. https://doi.org/10.3390/rs9040377

  • Varfolomeev, I., Yakimchuk, I., & Safonov, I. (2019). An application of deep neural networks for segmentation of microtomographic images of rock samples. Computers, 8, 72.

  • Wang, Y., Sun, Y., Liu, Z., Sarma, S. E., Bronstein, M. M., & Solomon, J. M. (2018). Dynamic graph CNN for learning on point clouds. arXiv preprint arXiv:1801.07829.

  • Weiss, U., & Biber, P. (2011). Plant detection and mapping for agricultural robots using a 3D LIDAR sensor. Robotics and Autonomous Systems, 59(5), 265–273.

  • Weiss, U., Biber, P., Laible, S., Bohlmann, K. & Zell, A. (2010). Plant species classification using a 3D LIDAR sensor and machine learning. In Proceedings of 2010 Ninth International Conference on Machine Learning and Applications, 339–345. IEEE.

  • Zhang, X., Sun, Y., Shang, K., Zhang, L., & Wang, S. (2016). Crop classification based on feature band set construction and object-oriented approach using hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 9(9), 4117–4128.


Acknowledgements

This work has been carried out as part of an Indo-German research cooperation (DBT: The Rural-Urban Interface of Bengaluru- a Space of Transitions in Agriculture, Economics, and Society; DFG: Research Unit FOR2432/1) funded by the Department of Biotechnology (DBT), Government of India and the German Research Foundation (DFG), Germany. We gratefully acknowledge the financial support from DBT and DFG provided in the form of a research grant (DBT: DBT/IN/German/DFG/14/BVCR/2016; DFG: WA 2135/4-1 and BU 1308/13-1).

Funding

Stated in the Acknowledgements.

Author information

Contributions

R R Nidamanuri supervised the research and partly wrote the manuscript; A M Ramiya guided the implementations; J Reji wrote the codes, implemented the work processes, and partly wrote the manuscript.

Corresponding author

Correspondence to Rama Rao Nidamanuri.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.



Cite this article

Jayakumari, R., Nidamanuri, R.R. & Ramiya, A.M. Object-level classification of vegetable crops in 3D LiDAR point cloud using deep learning convolutional neural networks. Precision Agric 22, 1617–1633 (2021). https://doi.org/10.1007/s11119-021-09803-0
