
Biosystems Engineering

Volume 204, April 2021, Pages 198-211

Research Paper
Real time detection of inter-row ryegrass in wheat farms using deep learning

https://doi.org/10.1016/j.biosystemseng.2021.01.019

Highlights

  • Deep neural network is proposed to segment inter-row ryegrass in a wheat field.

  • Dataset is captured by an agricultural robot with different wheat growth stages.

  • Method outperforms five state-of-the-art methods especially on detecting ryegrass.

  • Mean accuracy increases by 1.97%, 8.95% for pixel-wise, object-wise segmentation.

  • Proposed method runs in real-time at 48.95 FPS using a consumer level GPU.

A key challenge for autonomous precision weeding is to reliably and accurately detect weed plants and crop plants in real time, so that damage to surrounding crop plants is minimised while weeding actions are performed. In a wheat farm specifically, classifying ryegrass weed plants is difficult even for the human eye, since ryegrass is visually very similar in shape and texture to the crop plants themselves. A Deep Neural Network (DNN) that exploits the geometric location of ryegrass is proposed for the real-time segmentation of inter-row ryegrass weeds in a wheat field. Our proposed method introduces two subnets into a conventional encoder-decoder style DNN to improve segmentation accuracy. The two subnets treat inter-row and intra-row pixels differently, and provide corrections to the preliminary segmentation results of the conventional encoder-decoder DNN. A dataset captured in a wheat farm by an agricultural robot at different times is used to evaluate segmentation performance, and the proposed method performs best among various popular semantic segmentation algorithms. The proposed method runs at 48.95 Frames Per Second (FPS) on a consumer-level graphics processing unit, and is therefore deployable in real time at camera frame rate.

Introduction

Autonomous weeding is a critical step in precision farming as it directly impacts crop health and yield (Slaughter et al., 2008). Reliable and precise detection of weed plants within crop rows is a key requirement for precision weeding, whether it is carried out by spot spraying, mechanical tillage, mechanical stamping or laser burning, whilst minimising damage to surrounding vegetation (Lottes et al., 2018, 2020).

Wheat is one of the most important crop plants in many regions, including Australia (Golzarian & Frick, 2011). Among the common weed species in Australian wheat farms, ryegrass is the most difficult to detect and classify against wheat, since the two have similar leaf shapes and show very similar optical reflectance in the visible spectral range, as shown in the left image of Fig. 1(b). Conventionally, to get rid of ryegrass, farmers have to clean the field before seeding, apply specific herbicides, and carry out manual weeding during the growth stages of wheat. Such a process incurs additional material and labour costs, and the extra herbicide usage is not environmentally friendly. For robotic autonomous weeding, the visual similarity between wheat and ryegrass also causes off-the-shelf algorithms (Milioto & Stachniss, 2019; Badrinarayanan et al., 2017; Chen et al., 2017; Ronneberger et al., 2015; Zhao et al., 2017) to perform poorly in detecting ryegrass in a wheat farm.

We propose a novel end-to-end trainable DNN based semantic segmentation method to classify inter-row ryegrass weed plants from wheat in real time by exploiting the geometric locations of ryegrass weed plants. Due to competition with crop plants, weed plants can hardly survive inside a densely grown wheat row. Therefore, very few ryegrass weed plants grow intra-row, and the majority of them are inter-row. The pixel-wise segmentation results can be used to guide various types of weeding actuators to remove weed plants while avoiding damage to crop plants. The main part of the proposed method is based on the popular encoder-decoder structure, which is widely used in several state-of-the-art approaches (Milioto & Stachniss, 2019; Badrinarayanan et al., 2017; Chen et al., 2017; Ronneberger et al., 2015; Zhao et al., 2017). In addition to the encoder-decoder style main part, we introduce two novel subnets to improve or correct the preliminary segmentation results from the main part of the DNN, as inspired by the cascaded refinement architecture adopted in LiteFlowNet (Hui et al., 2018).
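The correction idea described above can be illustrated with a minimal sketch. This is hypothetical code, not the authors' implementation: it simply shows how a preliminary per-pixel score map could be adjusted by one of two correction maps, chosen according to whether a pixel lies between crop rows (the function name, argument layout and mask convention are all assumptions for illustration).

```python
# Hypothetical sketch: fuse a preliminary per-pixel segmentation with
# corrections from two subnets, one applied to inter-row pixels and one
# to intra-row pixels. All tensors are plain H x W x C nested lists.
def fuse_corrections(prelim, inter_corr, intra_corr, inter_row_mask):
    """inter_row_mask is H x W, with 1 marking inter-row pixels.
    Returns prelim plus the location-appropriate correction."""
    H, W, C = len(prelim), len(prelim[0]), len(prelim[0][0])
    out = [[[0.0] * C for _ in range(W)] for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # Pick the correction map based on the pixel's geometric location.
            corr = inter_corr if inter_row_mask[i][j] else intra_corr
            for c in range(C):
                out[i][j][c] = prelim[i][j][c] + corr[i][j][c]
    return out
```

In practice the class scores and corrections would be DNN feature maps and the fusion would run on a GPU; the sketch only conveys that inter-row and intra-row pixels receive different treatment.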

The objective of this paper is to tackle the challenging problem of semantic segmentation of inter-row ryegrass weed plants in a wheat farm, where conventional methods perform poorly in terms of segmentation accuracy or execution speed. We investigate whether the proposed method with the two novel subnets that exploit the geometric location of weed plants can effectively improve segmentation performance, both in terms of pixel-wise accuracy and object-wise accuracy, compared to conventional state-of-the-art methods. Furthermore, we examine the runtime speed of the proposed method to confirm its real-time operation. Overall, we propose a novel method that achieves high segmentation accuracy at a real-time execution speed, making it suitable for robotic weeding.

The contributions of this paper are two-fold. Firstly, to the best of the authors' knowledge, the proposed method is the first DNN based method to detect and classify ryegrass weed plants in a wheat farm via semantic segmentation, which provides more accurate localisation of ryegrass for autonomous weeding actions than bounding box based detection. Secondly, the proposed method introduces two novel subnets on top of a conventional encoder-decoder structured main segmentation net. By employing the two subnets, segmentation accuracy, especially for ryegrass weed plants, is substantially improved, as detailed later in section 4.
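The advantage of pixel-wise segmentation over bounding boxes for weeding can be made concrete with a small sketch (again hypothetical, not taken from the paper): given a binary weed mask, each connected blob's centroid gives a precise target point for a weeding actuator, something a coarse bounding box cannot provide when weeds and crops interleave.

```python
from collections import deque

def weed_centroids(mask):
    """mask: H x W nested list of 0/1 weed pixels. Returns (row, col)
    centroids of 4-connected weed blobs, e.g. as actuator target points."""
    H, W = len(mask), len(mask[0])
    seen = [[False] * W for _ in range(H)]
    centroids = []
    for i in range(H):
        for j in range(W):
            if mask[i][j] and not seen[i][j]:
                # Breadth-first search over one connected weed blob.
                q, pts = deque([(i, j)]), []
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    pts.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                centroids.append((sum(p[0] for p in pts) / len(pts),
                                  sum(p[1] for p in pts) / len(pts)))
    return centroids
```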

The remainder of the paper is organised as follows. In section 2, related work on weed plant detection and crop plant-weed plant classification is discussed. In section 3, the details of the proposed method are described. In section 4, experimental results of inter-row weed plant detection using the proposed method, along with performance comparisons against several state-of-the-art methods, are presented. Section 5 presents conclusions and discusses further work.


Related work

Vision-based crop plant-weed plant classification is a common approach for autonomous weeding systems, and many works address robust classification, detection and segmentation of crop plants and weed plants. Many of the earlier approaches rely on hand-crafted features (Haug et al., 2014; Milioto et al., 2017). To maximise the separability of crop plants and weed plants, the parameters of the hand-crafted features are often optimised to adapt to a specific dataset.
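As a concrete example of such a hand-crafted feature (an illustration of the genre, not necessarily the exact feature used in the cited works), the excess-green index is widely used to separate vegetation from soil before further crop-weed discrimination; the threshold applied to it is one of the parameters that is typically tuned per dataset.

```python
# Excess-green index: ExG = 2g - r - b on chromaticity-normalised RGB.
# High values indicate green vegetation; a tuned threshold yields a mask.
def excess_green(r, g, b):
    s = r + g + b
    if s == 0:
        return 0.0  # avoid division by zero on pure black pixels
    rn, gn, bn = r / s, g / s, b / s
    return 2 * gn - rn - bn
```

For instance, a pure green pixel (0, 255, 0) scores 2.0 while a grey soil pixel scores 0.0, so thresholding ExG at a small positive value separates vegetation from background.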

The proposed method

The proposed method consists of a main segmentation net and two subnets, detailed in section 3.1 (Main segmentation net) and section 3.2 (Subnets for correction or refinement), respectively.

Experimental results

In this section, we evaluate the performance of the proposed method using the evaluation metrics defined in section 3.4, i.e. the F1 scores in Eq. (17), the IOU of each individual class, and the mean accuracy and mean IOU in Eq. (15) and Eq. (16). For comparison purposes, we also show segmentation results of five state-of-the-art semantic segmentation methods: Bonnet (Milioto & Stachniss, 2019), SegNet (Badrinarayanan et al., 2017), PSPNet (Zhao et al., 2017), DeepLabV3 (Chen et al., 2017) and U-Net (Ronneberger et al., 2015).
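The metrics named above follow the standard definitions of F1 and IOU from per-class pixel counts (the sketch below assumes those standard definitions; the paper's Eq. (15)-(17) are not reproduced here, and the function names are illustrative):

```python
# Standard per-class F1 and IOU from pixel-level counts:
# tp = true positives, fp = false positives, fn = false negatives.
def class_metrics(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return f1, iou

def mean_iou(per_class_counts):
    """Mean IOU over classes; per_class_counts is a list of (tp, fp, fn)."""
    ious = [class_metrics(tp, fp, fn)[1] for tp, fp, fn in per_class_counts]
    return sum(ious) / len(ious)
```

For example, a class with 8 true-positive, 2 false-positive and 2 false-negative pixels has precision = recall = 0.8, hence F1 = 0.8 and IOU = 8/12 ≈ 0.667.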

Conclusions

A DNN for real time segmentation of inter-row ryegrass weed plants in a wheat farm is proposed. The method exploits the geometric location of ryegrass weed plants, and introduces two novel subnets into a conventional encoder-decoder style DNN to improve segmentation accuracy, especially for detecting inter-row ryegrass weed plants. Comprehensive evaluation results show that the proposed method with the two novel subnets yields superior performance over five other state-of-the-art methods.

Funding

This work was supported by the Grains Research & Development Corporation Australia [grant number 2018-PROC-9175526].

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

We thank Mr. Guy Coleman and Dr. Michael Walsh for their great support during the preparation of the testbed and the acquisition of the dataset.

References (27)

  • K. Girma et al.

    Identification of optical spectral signatures for detecting cheat and ryegrass in winter wheat

    Crop Science

    (2005)
  • M.R. Golzarian et al.

    Classification of images of wheat, ryegrass and brome grass species at early growth stages using principal component analysis

    Plant Methods

    (2011)
  • S. Haug et al.

    Plant classification system for crop/weed discrimination without segmentation

    (2014)
