Image analysis-based recognition and quantification of grain number per panicle in rice

Abstract

Background

The number of grains per panicle of rice is an important phenotypic trait and a significant index for variety screening and cultivation management. Current methods for counting the number of grains per panicle are manual, making them labor intensive and time consuming. Existing image-based grain counting methods have difficulty separating overlapping grains.

Results

In this study, we aimed to develop an image analysis-based method to quickly quantify the number of rice grains per panicle. We compared the counting accuracy of several methods across different image acquisition devices and multiple panicle shapes on both Indica and Japonica subspecies of rice. The linear regression model developed in this study had a grain counting accuracy greater than 96% and 97% for Japonica and Indica rice, respectively. Moreover, while the deep learning model was more time consuming than the linear regression model, its average counting accuracy was greater than 99%.

Conclusions

We developed a rice grain counting method that accurately counts the number of grains on a detached panicle, and we believe this method can guide the development of high-throughput methods for counting grain number per panicle in other crops.

Background

Phenomics involves the gathering of high-dimensional phenotypic data to screen mutants with unique traits and identify the corresponding genes [1]. Current methods for obtaining phenotypic data are generally manual [2], making them time-consuming, labor-intensive, and less accurate. Therefore, such approaches have been impractical for high-throughput measurements during plant growth and development.

The number of rice grains per panicle is a key trait that affects grain cultivation, management, and subsequent yield [3,4,5], as well as an important parameter for evaluating the potential of new rice cultivars [6]. Rapid measurement of grain number per panicle could improve the efficiency of scientific research and cultivar development.

Image analysis-based methods have been widely used in many aspects of plant phenotyping. Image analysis-based high-throughput phenotyping platforms have also been applied to measure phenotypic traits of rice, including plant height, green leaf area, and tiller number [7]. Yang et al. [8] measured the number of panicles on plants using multi-angle color images and an artificial neural network algorithm. Yang et al. [9] also reported a reliable, automatic, high-throughput leaf scorer (HLS) for the evaluation of leaf traits, including leaf number, size, shape, and color. Feng et al. [10] developed a hyperspectral imaging system for the accurate prediction of the above-ground biomass of individual rice plants in the visible and near-infrared spectral regions. Zhou et al. [11, 12] used image analysis techniques to assess plant nitrogen and water status. Huang et al. [13] developed a prototype for the automatic measurement of panicle length using dual cameras, equipped with a long-focus lens and a short-focus lens to capture a detailed and complete image of the rice panicle. In addition, image-based methods have been used to characterize seed morphology, including seed size, shape, color, and endosperm structure [14,15,16]. With the advancement of modern optical imaging and automation technology, hardware is no longer a bottleneck for phenotyping. Instead, the analysis and processing of multi-disciplinary optical images have become the new bottleneck [17].

Rapid counting of grain number per panicle has been attempted in different ways. Generally, the panicle is spread out on a white background and held in place by metal pins so that branches and grains do not overlap [14, 18]. Spreading the grains out after threshing is also effective [16]. However, these methods are not suitable for rice panicles from the Yangtze River Basin, where grains adhere severely. Currently, there are two primary methods for determining grain number per panicle. The first is to count the grains manually after threshing, an extremely time-consuming and labor-intensive process. Because threshed grains carry numerous awns and often lie overlaid and clustered, it is very challenging for traditional algorithms to identify individual rice kernels when they touch [19, 20]. Husking the grains makes them smoother and easier to separate, but husking also produces broken kernels and complicates the counting procedure.

The second method, on-panicle counting, is the most common; it involves counting the grains directly on the spikelets of an intact panicle. Collecting an image of the entire panicle is also problematic due to overlaid and clustered grains. Three-dimensional image acquisition may partially solve the problem of touching grains, but the required equipment is expensive and complicated to use.

In this study, we propose a new counting method that uses image processing and a deep learning algorithm to detect rice grains in images of primary branches acquired with a digital scanner. Our method solves grain overlap and clustering problems, is cost-effective and user-friendly, and facilitates high-throughput counting of grain number per panicle in rice.

Methods

Field experiment

Since panicle morphology is affected by variety and subspecies, we used two varieties of Indica rice (Yangliangyou No. 6 and Fengyouxiangzhan) and two varieties of Japonica rice (Wuyunjing No. 27 and Nanjing No. 9108). In addition, because cultivation conditions can affect grain size and grain number per panicle, the experiment was organized as a two-factor randomized complete block design with seeding density (150, 225, and 300 × 10⁴ plants ha⁻¹) and fertilizer rate (150, 225, and 300 kg ha⁻¹). At the full-heading stage, fifteen panicles per sample group were randomly harvested (Table 1). A total of 540 panicles, covering spikelets and panicles of different sizes and shapes, were collected in the study.

Table 1 Basic information of experimental materials

Image acquisition

The posture of a rice panicle has a great influence on image recognition. We divided the panicles into three groups based on manual shaping (Fig. 1). Shape A: panicles were not manually shaped, and images were obtained in the natural state. Shape B: the primary branches were manually separated. Because the moisture content of panicle branches at the mature stage is low, the branches undergo plastic deformation and hold their separated positions; the stem and branches were not completely fractured, and the panicle image was taken in a naturally unfolded state. Shape C: the primary branches were removed from the panicle stem, completely separating stem and branches, and the panicle image was taken in a naturally scattered state. Two image acquisition methods were used. In the first, samples were placed on black light-absorbing fabric and imaged with a digital camera (SONY, model ILCE-6300; E PZ 16–50 mm, f/3.5–5.6 lens) at a distance of 20 cm above the samples; the image size was 4032 × 3024 pixels. In the second, samples were placed on a scanner (Canon, model LiDE 400; resolution 4800 dpi; scan speed: color, A4, 300 dpi, 8 s) connected to a computer via high-speed USB 2.0 Type-C; the image size was 2480 × 3507 pixels. The scanner was covered with black light-absorbing fabric to avoid light noise caused by reflection and projection. The basic information of the image data is shown in Table 2.

Fig. 1
figure 1

Images of three different panicle shapes acquired using a scanner of the Japonica rice and Indica rice. a Japonica rice panicles without manual shaping. b The primary branches of Japonica rice were manually separated. c The primary branches of Japonica rice were removed manually. d Indica rice panicles without manual shaping. e The primary branches of Indica rice were manually separated. f The primary branches of Indica rice were removed manually

Table 2 Basic information of image dataset

Image pre-processing

As shown in Fig. 2a, the contrast between foreground and background was clear, so the panicles were easily detected using the Otsu segmentation algorithm (Fig. 2b) [21]. During image acquisition, black light-absorbing fabric was used to prevent noise caused by light reflection and projection; however, repeated use of the same fabric led to contamination by impurities from the panicle surface, which produced some noise. The original image size was 4032 × 3024 or 2480 × 3507 pixels, the noise connected areas were smaller than 100 pixels, and the panicle occupied more than 100,000 pixels. We therefore removed the noise by applying a 1000-pixel area threshold when extracting the connected regions (Fig. 2c). The stem of the panicle was not of interest and was removed: the width of the out-of-focus stem is 5–10 pixels, less than half the width of a rice grain (30–50 pixels). The denoised image was eroded three times with a 5 × 5 disk-shaped mask and then dilated three times (Fig. 2d). The branch image without the stem (Fig. 2d) was subtracted from the denoised binary image (Fig. 2c) to obtain a stem image with noise (Fig. 2e), the noise arising from the slight over-erosion of the panicle in the previous step. Similarly, we removed this noise with a 200-pixel area threshold on the connected regions (Fig. 2f). Finally, the clean stem image (Fig. 2f) was subtracted from the denoised binary image (Fig. 2c) to obtain a binary image without stem or noise (Fig. 2g), and the corresponding RGB image (Fig. 2h) was used for further processing.
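As an illustration, the whole preprocessing flow can be sketched in a few lines of OpenCV. This is our reconstruction from the description above (the function names and the `keep_large` helper are ours, not the authors' released code), with the thresholds set to the values stated in the text:

```python
import cv2
import numpy as np

def keep_large(mask, min_area):
    """Keep only connected regions with at least min_area pixels."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    out = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 255
    return out

def preprocess(rgb_path):
    """Reproduce the preprocessing flow of Fig. 2 (sketch)."""
    rgb = cv2.imread(rgb_path)
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)

    # Otsu segmentation: bright panicle on black fabric (Fig. 2b)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # remove small noise with a 1000-pixel area threshold (Fig. 2c)
    denoised = keep_large(binary, 1000)

    # erode 3x then dilate 3x with a 5 x 5 disk to delete the thin stem (Fig. 2d)
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    branches = cv2.dilate(cv2.erode(denoised, disk, iterations=3), disk, iterations=3)

    # stem plus residual noise (Fig. 2e), then cleaned with a 200-pixel threshold (Fig. 2f)
    stem = keep_large(cv2.subtract(denoised, branches), 200)

    # binary image without stem or noise (Fig. 2g), applied to the RGB image (Fig. 2h)
    final_mask = cv2.subtract(denoised, stem)
    return cv2.bitwise_and(rgb, rgb, mask=final_mask)
```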

Fig. 2
figure 2

Flow chart of image preprocessing procedures. The scanner-acquired images of Japonica rice were used as an example

Algorithm for the calculation of grain number per panicle

For non-touching grains, the general counting method is to count the connected regions in the binary image. When grains touch, several methods have been used to split the clustered kernels, including dilation and erosion, the watershed method, corner detection, and feature matching. However, each of these methods has limitations in our setting, which are discussed in detail later in this paper. We therefore designed the following two methods.
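As a baseline, counting non-touching grains as connected components is a one-liner in OpenCV (a sketch; `binary_mask` is assumed to be a preprocessed binary image such as Fig. 2g):

```python
import cv2

# non-touching grains appear as separate connected components;
# label 0 is the background, so subtract one from the label count
n_labels, _ = cv2.connectedComponents(binary_mask, connectivity=8)
grain_count = n_labels - 1
```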

Linear regression algorithm

Coverage, corner points, and similar quantities are effective feature parameters in image processing and analysis [22]. In this study, three parameters were used as candidates for constructing linear regression models to count the number of grains per panicle: coverage degree (CD), skeleton (Sk), and contour (Co). The result of extracting each parameter is shown in Fig. 3.

Fig. 3
figure 3

Parameter extraction of CD, Sk and Co. a Original image of branches. b Extraction of CD parameter. c Extraction of Sk parameter. d Extraction of Co parameter

The parameters of all primary branches were first extracted from each panicle image, and the sum over branches was used as the whole-panicle value. The three parameters were calculated using the following equations:

$$ CD = \frac{N_{cd}}{S_{im}} $$
(1)
$$ Sk = \frac{N_{sk}}{S_{im}} $$
(2)
$$ Co = \frac{N_{co}}{S_{im}} $$
(3)

where Sim is the size of the original image (Fig. 2a); Ncd is the number of pixels with a value of 1 in the coverage image (Fig. 3b); Nsk is the number of pixels with a value of 1 in the skeleton image (Fig. 3c); and Nco is the number of pixels with a value of 1 in the contour image (Fig. 3d).
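A minimal sketch of the per-branch parameter extraction, assuming `mask` is a boolean NumPy array for one branch image (scikit-image provides the skeletonization; we take the contour to be the one-pixel boundary of the mask):

```python
import numpy as np
from skimage.morphology import skeletonize, binary_erosion

def branch_parameters(mask):
    """Compute CD, Sk, Co (Eqs. 1-3) for one boolean branch mask."""
    s_im = mask.size                       # S_im: size of the original image
    cd = mask.sum() / s_im                 # N_cd: coverage pixels (Fig. 3b)
    sk = skeletonize(mask).sum() / s_im    # N_sk: skeleton pixels (Fig. 3c)
    contour = mask & ~binary_erosion(mask) # one-pixel boundary of the mask
    co = contour.sum() / s_im              # N_co: contour pixels (Fig. 3d)
    return cd, sk, co
```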

The parameters were normalized using the min–max normalization method, as shown in the following equations:

$$ CD' = \frac{CD - CD_{min}}{CD_{max} - CD_{min}} $$
(4)
$$ Sk' = \frac{Sk - Sk_{min}}{Sk_{max} - Sk_{min}} $$
(5)
$$ Co' = \frac{Co - Co_{min}}{Co_{max} - Co_{min}} $$
(6)

where CDʹ, Skʹ, and Coʹ are the normalized values of CD, Sk, and Co, and the subscripts min and max denote the minimum and maximum values of each parameter.

Since the actual measurement is an integer but the model prediction is not always one, we rounded the predicted number to the nearest integer for comparison. Table 2 lists the images used for regression model training and validation. To increase the sample size, the 45 Shape A panicles were reshaped into 20 Shape B panicles and 30 Shape C panicles and imaged again. The constructed models were evaluated using R2 and RMSE.
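Under these definitions, fitting and evaluating the univariate model reduces to a few lines of NumPy. In the sketch below, `cd_values` and `grain_counts` are hypothetical per-panicle arrays standing in for the training data of Table 2:

```python
import numpy as np

def minmax(x):
    """Min-max normalization, Eqs. (4)-(6)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# hypothetical training data: one CD value and one manual count per panicle
cd_norm = minmax(cd_values)
slope, intercept = np.polyfit(cd_norm, grain_counts, 1)  # univariate linear fit

pred = np.rint(slope * cd_norm + intercept)              # round predictions to integers
rmse = np.sqrt(np.mean((pred - grain_counts) ** 2))
ss_res = np.sum((pred - grain_counts) ** 2)
ss_tot = np.sum((grain_counts - np.mean(grain_counts)) ** 2)
r2 = 1 - ss_res / ss_tot
```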

Deep learning algorithm

The second method used in this study was a deep learning method (Fig. 4), which is popular and offers high accuracy. We used the Faster R-CNN framework with a ResNet101 backbone for grain identification [23, 24]. Because manual labeling is laborious, only 120 original images were randomly selected for model training and validation (Table 2). After preprocessing, images containing multiple branches and stems were split into single-branch images and saved as new images, yielding a total of 1337 images (Fig. 4a). These images were labeled manually and are available from the authors. The dataset was split 70%/30% into training and validation sets, and 400 original images were used for testing. Specifically, each original image was segmented into sub-images; the sub-images were identified separately and the results aggregated into a single test result per image. The images were fed into the ResNet101 feature extraction network (Fig. 4b) to generate feature maps. The authors of ResNet proposed a residual structure (Fig. 4c) to resolve the degradation problem. In Faster R-CNN, selective search (SS) is replaced by a region proposal network (RPN) (Fig. 4d), which considers nine reference windows (Fig. 4e) at each sliding window (SW) position, improving the speed and accuracy of object extraction. Finally, training and validation proceed by minimizing the classification loss and the box regression loss. Detailed information about the hardware, software, and model hyperparameters is provided in Table 3; the remaining hyperparameters were consistent with the original work [24].
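The paper does not release its training code; the sketch below shows how an equivalent detector could be assembled and used for counting in PyTorch/torchvision (our assumption, not the authors' implementation). The `count_grains` helper and the 0.5 score threshold are ours; `num_classes=2` covers background plus the single "grain" class:

```python
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Faster R-CNN with a ResNet101 backbone; classes = background + grain
backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None)
model = FasterRCNN(backbone, num_classes=2)

@torch.no_grad()
def count_grains(model, branch_images, score_thresh=0.5):
    """Aggregate per-branch (sub-image) detections into one panicle count.

    branch_images: list of CHW float tensors in [0, 1], one per sub-image.
    """
    model.eval()
    total = 0
    for img in branch_images:
        pred = model([img])[0]  # dict with 'boxes', 'labels', 'scores'
        total += int((pred["scores"] > score_thresh).sum())
    return total
```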

Fig. 4
figure 4

The flowchart of Deep Learning method. a Dataset. b Resnet101 convolutional network. c Residual learning: a building block. d Region proposal network (RPN). e RPN principle. f Fast RCNN network

Table 3 The hardware, software, and hyperparameters configurations for the deep learning model

Results and analysis

Comparison of manual counting from images

We compared the accuracy of manual counting from images across the two image acquisition methods, the two rice subspecies, and the three manually shaped panicle groups. The ground truth (GT) was the manual grain count of each panicle. Regardless of the image acquisition method, the counting accuracy for Shape A remained below 80%, the lowest of the three shapes (Table 4). These results indicate that Shape A is not useful because it leads to low counting accuracy; even the most potent, optimized algorithm could reach at most 80% for Shape A. In addition, camera-acquired images reached at most 95% counting accuracy, at least 3% lower than scanner-acquired images. This is primarily because grains are unevenly distributed and frequently touch on the panicle, whereas scanning spreads the touching grains apart. The counting accuracies for Shape B and Shape C were both greater than 95%.

Table 4 The accuracy of image manual counting for different groups

Linear model analysis

We established three univariate linear regression models for counting the grain number per panicle using CDʹ, Skʹ, and Coʹ, and compared them on images with different panicle shapes acquired with the two devices. As shown in Fig. 5, the R2 exceeded 0.90 for all three models. The CDʹ-based linear regression model had an R2 above 0.95 and outperformed the Skʹ- and Coʹ-based models. Overall, scanner-acquired images gave lower RMSE than camera-acquired images. Furthermore, the RMSE of Shape C + Indica rice images was lower than that of Shape B + Japonica rice images.

Fig. 5
figure 5

The linear regression model analysis of images with different panicle shapes, and images acquired using scanner and camera

We validated the CDʹ-based linear regression model by comparing its predicted grain numbers with the actual measured numbers (Fig. 6). With R2 = 0.9831 and RMSE = 5.9481 for Indica rice, and R2 = 0.975 and RMSE = 6.4405 for Japonica rice, the Scanner + Shape C combination performed best overall. Scanner + Shape B also performed better than Camera + Shape B and Camera + Shape C. Therefore, scanner-acquired Shape C images, combined with the CDʹ-based linear regression model, provided the most accurate grain counting.

Fig. 6
figure 6

Validation of CDʹ-based linear regression model

We also constructed a multiple linear regression model based on the three parameters; Table 5 shows the optimal model obtained after screening. Compared with the univariate linear models (Figs. 5, 6), the multiple linear regression model requires more input factors but does not greatly improve accuracy. We therefore favor the univariate linear regression model.

Table 5 Training and validation of optimal multiple linear regression model

Deep learning model

The deep learning model performed well for grain detection: the grains of both Indica and Japonica rice were easily detected (Fig. 7). The counting accuracies of the deep learning models were all above 98% and were largely unaffected by the acquisition device and panicle shape (Table 6). Specifically, the Scanner + Shape C method had the highest counting accuracy (99.38%), a false detection rate of 0%, and a very low missed detection rate.

Fig. 7
figure 7

Recognition of grains using deep learning algorithm. a Original image of Japonica rice. b Original image of Indica rice. c Recognition of Japonica rice grains. d Recognition of Indica rice grains

Table 6 Grain counting accuracy of the deep learning model

Discussion

In this study, we not only developed a linear regression model and a deep learning model to count the grain number per rice panicle, but also tested the grain counting performance of traditional image-based methods. Traditional grain counting relies primarily on separating touching grains; the main algorithms include dilation and erosion [25], the improved watershed algorithm [26], and feature point matching [27, 28]. The problems commonly encountered with these methods are shown in Fig. 8. The dilation and erosion method often failed to separate touching kernels (Fig. 8c). The improved watershed algorithm detected kernel edges, but over-segmented the kernel surfaces and could not count kernels accurately (Fig. 8d). The corner detection and feature matching method (Fig. 8e) detected corners asymmetrically: no corner points were detected on the yellow-line side, while more corners were detected on the other side. We therefore do not believe these methods can provide satisfactory separation.
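For context, the classic distance-transform watershed (the standard OpenCV recipe, not the improved algorithm of [26]) looks like the sketch below. With long, slender kernels the distance-transform peaks of touching grains merge into a single seed, which produces exactly the kind of over- and under-segmentation seen in Fig. 8d:

```python
import cv2
import numpy as np

def watershed_split(binary):
    """Classic distance-transform watershed for touching kernels (sketch).

    binary: uint8 mask with kernels = 255, background = 0.
    Returns a label image; watershed boundaries are marked with -1.
    """
    # peaks of the distance transform serve as per-kernel seeds
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)

    # pixels inside the mask but outside the seeds are 'unknown'
    unknown = cv2.subtract(binary, sure_fg)

    # label seeds 2..n, background 1, unknown 0, then flood
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0
    color = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
    return cv2.watershed(color, markers)
```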

Fig. 8
figure 8

Limitations of traditional image-analysis based grain recognition methods. a Original image. b Binary image. c Dilation and erosion operation results. d Improved watershed method. e Corner detection and feature matching method

Removing the stems during image preprocessing improved counting accuracy. We compared model performance with and without stems, using accuracy as the criterion (Table 7). Both the linear model and the deep learning model gained counting accuracy after stem removal, by up to 3%. The deep learning model had significantly higher counting accuracy than the linear model, which showed up to a 10% accuracy difference between rice types, whereas the deep learning model performed similarly regardless of rice type. These results demonstrate that rice type strongly affects the performance of the linear model but has little effect on the deep learning model.

Table 7 The effect of stems on the counting accuracy of different models

To further validate the robustness of the linear regression models, we used two additional varieties of Indica rice (Yangdao No. 6 and Liangyoupeijiu) and two of Japonica rice (Lianjing No. 7 and Huaidao No. 5) as new materials. Taking the Scanner + Shape C method as an example, the verification results are shown in Fig. 9. The R2 and RMSE were 0.9737 and 7.1120, respectively, for the Indica model, and 0.9623 and 6.4746 for the Japonica model, a slight decrease compared with the results in Fig. 6. Thus, the robustness of the linear regression models is strongly affected by the shape of the panicle and grain, and less affected by their size. Shape is mainly controlled by subspecies, whereas size is controlled by variety and by cultivation measures such as density and nitrogen application rate.

Fig. 9
figure 9

Robustness analysis of linear regression models

The deep learning model performed slightly better on Japonica rice than on Indica rice. We analyzed the test set images with low counting accuracy (Fig. 10). Indica grains are long and slender, making them more likely to form clusters that the deep learning model cannot detect as effectively. Consequently, the counting accuracy for Indica rice was lower than for Japonica rice regardless of the model.

Fig. 10
figure 10

Limitation of the deep learning model in Indica rice grain counting

When using an image-based algorithm, the running time of the model is also an important factor. We timed groups of 20 panicle images, repeated the measurement five times, and took the average as the result (Table 8). The total counting time comprises two parts: image acquisition and algorithm execution. Shape A had the fastest image acquisition among the tested conditions, but as Table 4 showed, its counting accuracy was low. X-ray or three-dimensional (3D) imaging might perform better on Shape A. Charytanowicz et al. [29] used X-ray images to evaluate geometric features for wheat grain classification, but X-ray equipment is expensive, and long-term X-ray exposure is harmful to the human body. 3D scanners are generally accurate and well suited to large objects such as forest trees [30]; small targets such as rice grains, however, require expensive high-precision instruments, and acquiring and processing 3D images may take more time. In summary, we focused on Shape B and Shape C rather than Shape A.

Table 8 Time needed for each work

The time needed to acquire images with the digital camera was half of that with the scanner, and Shape C was faster to prepare than Shape B. For Shape B, the intertwined branches must be separated carefully, as rough handling causes individual grains to fall off and affects subsequent processing; for Shape C, the branches can be removed from the stem with a single knife cut. Shape B therefore takes more time than Shape C. When running the linear model, most of the time was spent on image processing and parameter extraction, whereas for the deep learning model most of the time was spent loading parameters. The linear regression model required significantly less time than the deep learning model, because the latter must load millions of parameters and execute a large amount of computation. The Scanner + Shape B + Deep Learning method took 8 min, only about one-third of the manual counting time, and this comparison does not account for the mental effort of manual counting. Using multiple sets of high-performance graphics processing units (GPUs) could significantly accelerate execution [31], and model compression or a simpler deep learning architecture could also reduce the model running time [32].

Conclusion

In summary, we established two models to count the grain number per panicle, a linear regression model and a deep learning model, with counting accuracies greater than 96% and 99%, respectively. However, the deep learning model required more time than the linear regression model. If time cost is a concern, the linear regression model is recommended for counting rice grain number per panicle; otherwise, the deep learning model is best for maximizing accuracy. We believe our high-throughput, rapid method for counting the number of rice grains per panicle is a useful tool for rice phenomics research.

Availability of data and materials

All data analyzed during this study are presented in this published article.

References

  1. Dhondt S, Wuyts N, Inze D. Cell to whole-plant phenotyping: the best is yet to come. Trends Plant Sci. 2013;18(8):433–44.

  2. Liu T, Wu W, Chen W, Sun CM, Chen C, Wang R, Zhu XK, Guo WS. A shadow-based method to calculate the percentage of filled rice grains. Biosyst Eng. 2016;150:79–88.

  3. Garcia GA, Serrago RA, Dreccer MF, Miralles DJ. Post-anthesis warm nights reduce grain weight in field-grown wheat and barley. Field Crop Res. 2016;195:50–9.

  4. Li JM, Thomson M, McCouch SR. Fine mapping of a grain-weight quantitative trait locus in the pericentromeric region of rice chromosome 3. Genetics. 2004;168(4):2187–95.

  5. Slafer GA, Savin R, Sadras VO. Coarse and fine regulation of wheat yield components in response to genotype and environment. Field Crop Res. 2014;157:71–83.

  6. Ferrante A, Cartelle J, Savin R, Slafer GA. Yield determination, interplay between major components and yield stability in a traditional and a contemporary wheat across a wide range of environments. Field Crop Res. 2017;203:114–27.

  7. Duan LF, Huang CL, Chen GX, Xiong LZ, Liu Q, Yang WN. Determination of rice panicle numbers during heading by multi-angle imaging. Crop J. 2015;3(3):211–9.

  8. Yang WN, Guo ZL, Huang CL, Duan LF, Chen GX, Jiang N, Fang W, Feng H, Xie WB, Lian XM, Wang GW, Luo QM, Zhang QF, Liu Q, Xiong LZ. Combining high-throughput phenotyping and genome-wide association studies to reveal natural genetic variation in rice. Nat Commun. 2014;5:5087.

  9. Yang WN, Guo ZL, Huang CL, Wang K, Jiang N, Feng H, Chen GX, Liu Q, Xiong LZ. Genome-wide association study of rice (Oryza sativa L.) leaf traits with a high-throughput leaf scorer. J Exp Bot. 2015;66(18):5605–15.

  10. Feng H, Jiang N, Huang CL, Fang W, Yang WN, Chen GX, Xiong LZ, Liu Q. A hyperspectral imaging system for an accurate prediction of the above-ground biomass of individual rice plants. Rev Sci Instrum. 2013;84(9):095107.

  11. Tavakoli H, Gebbers R. Assessing nitrogen and water status of winter wheat using a digital camera. Comput Electron Agric. 2019;157:558–67.

  12. Zhou CY, Le J, Hua DX, He TY, Mao JD. Imaging analysis of chlorophyll fluorescence induction for monitoring plant water and nitrogen treatments. Measurement. 2019;136:478–86.

  13. Huang CL, Yang WN, Duan LF, Jiang N, Chen GX, Xiong LZ, Liu Q. Rice panicle length measuring system based on dual-camera imaging. Comput Electron Agric. 2013;98:158–65.

  14. AL-Tam F, Adam H, dos Anjos A, Lorieux M, Larmande P, Ghesquiere A, Jouannic S, Shahbazkia HR. P-TRAP: a panicle trait phenotyping tool. BMC Plant Biol. 2013;13:122.

  15. Tanabata T, Shibaya T, Hori K, Ebana K, Yano M. SmartGrain: high-throughput phenotyping software for measuring seed shape through image analysis. Plant Physiol. 2012;160(4):1871–80.

  16. Whan AP, Smith AB, Cavanagh CR, Ral JPF, Shaw LM, Howitt CA, Bischof L. GrainScan: a low cost, fast method for grain size and colour measurements. Plant Methods. 2014;10:23.

  17. Houle D, Govindaraju DR, Omholt S. Phenomics: the next challenge. Nat Rev Genet. 2010;11(12):855–66.

  18. Crowell S, Falcao AX, Shah A, Wilson Z, Greenberg AJ, McCouch SR. High-resolution inflorescence phenotyping using a novel image-analysis pipeline, PANorama. Plant Physiol. 2014;165(2):479–95.

  19. Bleau A, Leon LJ. Watershed-based segmentation and region merging. Comput Vis Image Underst. 2000;77(3):317–70.

  20. Lin P, Chen YM, He Y, Hu GW. A novel matching algorithm for splitting touching rice kernels based on contour curvature analysis. Comput Electron Agric. 2014;109:124–33.

  21. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979;9(1):62–6.

  22. Liu T, Yang TL, Li CY, Li R, Wu W, Zhong XC, Sun CM, Guo WS. A method to calculate the number of wheat seedlings in the 1st to the 3rd leaf growth stages. Plant Methods. 2018;14:101.

  23. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR); 2016. p. 770–8.

  24. Ren SQ, He KM, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017;39(6):1137–49.

  25. Shatadal P, Jayas DS, Bulley NR. Digital image analysis for software separation and classification of touching grains. I. Disconnect algorithm. Trans ASAE. 1995;38(2):645–9.

  26. Osma-Ruiz V, Godino-Llorente JI, Saenz-Lechon N, Gomez-Vilda P. An improved watershed algorithm based on efficient computation of shortest paths. Pattern Recogn. 2007;40(3):1078–90.

  27. Liu T, Chen W, Wang YF, Wu W, Sun CM, Ding JF, Guo WS. Rice and wheat grain counting method and software development based on Android system. Comput Electron Agric. 2017;141:302–9.

  28. Yao Y, Wu W, Yang TL, Liu T, Chen W, Chen C, Li R, Zhou T, Sun CM, Zhou Y, Li XL. Head rice rate measurement based on concave point matching. Sci Rep. 2017;7:41353.

  29. Charytanowicz M, Kulczycki P, Kowalski PA, Lukasik S, Czabak-Garbacz R. An evaluation of utilizing geometric features for wheat grain classification using X-ray images. Comput Electron Agric. 2018;144:260–8.

  30. Pierzchala M, Giguere P, Astrup R. Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM. Comput Electron Agric. 2018;145:217–25.

  31. Qin CZ, Zhan LJ. Parallelizing flow-accumulation calculations on graphics processing units-From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm. Comput Geosci. 2012;43:7–16.

  32. Cheng Y, Wang D, Zhou P, Zhang T. Model compression and acceleration for deep neural networks: the principles, progress, and challenges. IEEE Signal Process Mag. 2018;35(1):126–36.

Acknowledgements

We thank LetPub (http://www.letpub.com) for linguistic assistance during manuscript preparation.

Funding

This research was mainly supported by the National Key Research and Development Program of China (2018YFD0300802, 2018YFD0300805), the National Natural Science Foundation of China (31872852, 31701355, 31671615), the Independent Innovation Project of Jiangsu Province (CX(18)1002), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX18_2371) and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).

Author information

Contributions

WW and WG performed all experiments and analyzed the data. TL, PZ and TY performed some experiments and analyzed the data. CL and XZ contributed the conceptual design and provided supervision. WW, CS and SL wrote the main manuscript text and prepared all figures. All authors were involved in preparing and revising the manuscript. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Chengming Sun, Shengping Liu or Wenshan Guo.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors have seen and approved the manuscript and consent to its publication.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Wu, W., Liu, T., Zhou, P. et al. Image analysis-based recognition and quantification of grain number per panicle in rice. Plant Methods 15, 122 (2019). https://doi.org/10.1186/s13007-019-0510-0
