Automated crop plant detection based on the fusion of color and depth images for robotic weed control
Journal of Field Robotics (IF 8.3). Pub Date: 2019-07-23. DOI: 10.1002/rob.21897
Jingyao Gai, Lie Tang, Brian L. Steward

Robotic weeding enables automatic, precise, and effective weed control near or within crop rows. A computer-vision system was developed for detecting crop plants at different growth stages for robotic weed control. Fusion of color images and depth images was investigated as a means of enhancing the detection accuracy of crop plants under conditions of high weed population. In-field images of broccoli and lettuce were acquired 3–27 days after transplanting with a Kinect v2 sensor. The image processing pipeline included data preprocessing, vegetation pixel segmentation, plant extraction, feature extraction, feature-based localization refinement, and crop plant classification. For the detection of broccoli and lettuce, the color-depth fusion algorithm produced high true-positive detection rates (91.7% and 90.8%, respectively) and low average false discovery rates (1.1% and 4.0%, respectively). Mean absolute localization errors of the crop plant stems were 26.8 and 7.4 mm for broccoli and lettuce, respectively. The fusion of color and depth proved beneficial for segmenting crop plants from the background, improving the average segmentation success rate from 87.2% (depth-based) and 76.4% (color-based) to 96.6% for broccoli, and from 74.2% (depth-based) and 81.2% (color-based) to 92.4% for lettuce. The fusion-based algorithm had reduced performance in detecting crop plants at early growth stages.
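The abstract does not specify how the color and depth cues are combined, but the idea of fusing a color-based vegetation mask with a depth-based one can be sketched as follows. This is an illustrative, hypothetical implementation only: the excess-green index (ExG), the height-above-ground thresholding, and the union-style fusion are common baseline choices, not the paper's actual algorithm, and all thresholds and function names are assumptions.

```python
import numpy as np

def vegetation_mask_color(rgb, exg_thresh=0.05):
    """Color cue: threshold the excess-green index (ExG = 2g - r - b)
    computed on chromaticity-normalized channels. ExG is a standard
    baseline for separating green vegetation from soil background."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9          # avoid division by zero
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    b = rgb[..., 2] / total
    return (2.0 * g - r - b) > exg_thresh

def vegetation_mask_depth(depth, ground_depth, height_thresh=0.01):
    """Depth cue: pixels closer to the sensor than the estimated ground
    plane by more than height_thresh (meters) are treated as canopy."""
    return (ground_depth - depth) > height_thresh

def fused_vegetation_mask(rgb, depth, ground_depth):
    """Fuse the two cues with a pixel-wise union: keep a pixel if either
    modality flags it, so shadows (which defeat color) or low canopy
    (which defeats depth) do not drop true vegetation pixels."""
    return vegetation_mask_color(rgb) | vegetation_mask_depth(depth, ground_depth)
```

In practice the fusion rule (union, intersection, or a learned combination) and the ground-plane estimate would be tuned per crop and growth stage, which is consistent with the paper's finding that fusion outperforms either cue alone.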

Updated: 2019-07-23