
Semantic segmentation of citrus-orchard using deep neural networks and multispectral UAV-based imagery

Published in: Precision Agriculture

Abstract

Accurately mapping farmland is important for precision agriculture. Unmanned aerial vehicles (UAVs) equipped with multispectral cameras are commonly used to map plants in agricultural landscapes. However, separating plantation fields from the remaining objects in a multispectral scene is a difficult task for traditional algorithms, and deep learning methods that perform semantic segmentation could improve the overall outcome. In this study, state-of-the-art deep learning methods for the semantic segmentation of citrus trees in multispectral images were evaluated. For this purpose, a multispectral camera operating in the green (530–570 nm), red (640–680 nm), red-edge (730–740 nm) and near-infrared (770–810 nm) spectral regions was used. The performance of five state-of-the-art pixelwise methods was evaluated: fully convolutional network (FCN), U-Net, SegNet, dynamic dilated convolution network (DDCN) and DeepLabV3+. The results indicated that the evaluated methods performed similarly on the proposed task, returning F1-Scores between 94.00% (FCN and U-Net) and 94.42% (DDCN). The inference time needed per area was also determined and, although the DDCN method was slower, a qualitative analysis showed that it performed better in highly shadow-affected areas. This study demonstrated that semantic segmentation of citrus orchards is highly achievable with deep neural networks. The state-of-the-art deep learning methods investigated here proved equally suitable for this task, providing fast solutions with inference times varying from 0.98 to 4.36 min per hectare. This approach could be incorporated into similar research and contribute to decision-making and accurate mapping of plantation fields.
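For readers unfamiliar with the evaluation metric reported above, the pixelwise F1-Score for a binary citrus/background segmentation can be computed as in the following minimal sketch (the function name, toy masks and NumPy usage are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def pixelwise_f1(pred: np.ndarray, truth: np.ndarray) -> float:
    """F1-Score over pixels for binary masks (1 = citrus, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # correctly labelled citrus pixels
    fp = np.logical_and(pred, ~truth).sum()  # background labelled as citrus
    fn = np.logical_and(~pred, truth).sum()  # citrus labelled as background
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy 4x4 masks: the prediction misses one citrus pixel and adds one false alarm.
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
print(round(pixelwise_f1(pred, truth), 4))  # tp=3, fp=1, fn=1 -> F1 = 0.75
```

In practice the metric is accumulated over all pixels of the test orthomosaic, so small per-tile differences between the five networks average out, which is consistent with the narrow 94.00–94.42% range reported in the abstract.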





Acknowledgements

The authors acknowledge the support of UFMS (Federal University of Mato Grosso do Sul) and CAPES (Finance code 001).

Funding

This research was funded by CNPq (p: 303559/2019-5, 433783/2018-4 and 304173/2016-9), CAPES Print (p: 88881.311850/2018-01) and Fundect (p: 59/300.066/2015 and 59/300.095/2015).

Author information

Corresponding author

Correspondence to Lucas Prado Osco.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Osco, L.P., Nogueira, K., Marques Ramos, A.P. et al. Semantic segmentation of citrus-orchard using deep neural networks and multispectral UAV-based imagery. Precision Agric 22, 1171–1188 (2021). https://doi.org/10.1007/s11119-020-09777-5

