Abstract
Accurately mapping farmland is important for precision agriculture practices. Unmanned aerial vehicles (UAVs) equipped with multispectral cameras are commonly used to map plants in agricultural landscapes. However, separating plantation fields from the remaining objects in a multispectral scene is a difficult task for traditional algorithms, and deep learning methods that perform semantic segmentation can help improve the overall outcome. In this study, state-of-the-art deep learning methods for the semantic segmentation of citrus trees in multispectral images were evaluated. For this purpose, a multispectral camera operating in the green (530–570 nm), red (640–680 nm), red-edge (730–740 nm) and near-infrared (770–810 nm) spectral regions was used. The performance of five state-of-the-art pixelwise methods was evaluated: fully convolutional network (FCN), U-Net, SegNet, dynamic dilated convolution network (DDCN) and DeepLabV3+. The results indicated that the evaluated methods performed similarly on the proposed task, returning F1-scores between 94.00% (FCN and U-Net) and 94.42% (DDCN). The inference time required per unit area was also determined; although the DDCN method was the slowest, a qualitative analysis showed that it performed better in highly shadow-affected areas. This study demonstrated that the semantic segmentation of citrus orchards is highly achievable with deep neural networks. The state-of-the-art deep learning methods investigated here proved equally suitable for the task, providing fast solutions with inference times ranging from 0.98 to 4.36 min per hectare. This approach could be incorporated into similar research and contribute to decision-making and accurate mapping of plantation fields.
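The two headline quantities in the abstract are the pixelwise F1-score of each segmentation network and the inference time normalized per hectare. The Python snippet below is a minimal sketch of how such quantities can be computed from a binary citrus/background mask and a measured runtime; it is not the authors' evaluation pipeline, and the function names, ground sample distance and array shapes are illustrative assumptions.

```python
import numpy as np

def pixelwise_f1(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Pixelwise F1-score for a binary (citrus vs. background) mask."""
    pred = pred_mask.astype(bool).ravel()
    true = true_mask.astype(bool).ravel()
    tp = np.sum(pred & true)          # citrus pixels correctly labelled
    fp = np.sum(pred & ~true)         # background labelled as citrus
    fn = np.sum(~pred & true)         # citrus labelled as background
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def minutes_per_hectare(inference_seconds: float, gsd_m: float, n_pixels: int) -> float:
    """Convert a measured inference time into minutes per hectare.
    gsd_m is the ground sample distance in metres per pixel (assumed known from the flight)."""
    area_ha = n_pixels * gsd_m ** 2 / 10_000.0   # 1 ha = 10,000 m^2
    return (inference_seconds / 60.0) / area_ha

# Hypothetical example: one 512 x 512 tile scored against its reference mask
pred = np.random.rand(512, 512) > 0.5            # stand-in for a network prediction
true = np.random.rand(512, 512) > 0.5            # stand-in for the annotated mask
print(f"F1 = {pixelwise_f1(pred, true):.4f}")
print(f"{minutes_per_hectare(30.0, gsd_m=0.10, n_pixels=512 * 512):.2f} min/ha")
```

Normalizing runtime by mapped area (rather than by image size) is what makes figures such as 0.98–4.36 min per hectare comparable across flights with different ground sample distances.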
Acknowledgements
The authors acknowledge the support of UFMS (Federal University of Mato Grosso do Sul) and CAPES (Finance code 001).
Funding
This research was funded by CNPq (p: 303559/2019-5, 433783/2018-4 and 304173/2016-9), CAPES Print (p: 88881.311850/2018-01) and Fundect (p: 59/300.066/2015 and 59/300.095/2015).
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Cite this article
Osco, L.P., Nogueira, K., Marques Ramos, A.P. et al. Semantic segmentation of citrus-orchard using deep neural networks and multispectral UAV-based imagery. Precision Agric 22, 1171–1188 (2021). https://doi.org/10.1007/s11119-020-09777-5