Root anatomy based on root cross-section image analysis with deep learning
Computers and Electronics in Agriculture (IF 8.3), Pub Date: 2020-08-01, DOI: 10.1016/j.compag.2020.105549
Chaoxin Wang, Xukun Li, Doina Caragea, Raju Bheemanahalli, S.V. Krishna Jagadish

Abstract: Aboveground plant efficiency has improved significantly in recent years, and the improvement has led to a steady increase in global food production. The improvement of belowground plant efficiency has the potential to further increase food production. However, belowground plant roots are harder to study, due to the inherent challenges presented by root phenotyping. Several tools for identifying root anatomical features in root cross-section images have been proposed. However, existing tools are not fully automated and require significant human effort to produce accurate results. To address this limitation, we use a fully automated approach, specifically the Faster Region-based Convolutional Neural Network (Faster R-CNN), to identify anatomical traits in root cross-section images. By training Faster R-CNN models on root cross-section images, we can detect objects such as root, stele and late metaxylem, and predict rectangular bounding boxes around such objects. Subsequently, the bounding boxes can be used to estimate the root diameter, stele diameter, late metaxylem number, and average diameter. Experimental evaluation using standard object detection metrics, such as intersection-over-union and mean average precision, has shown that the Faster R-CNN models trained on rice root cross-section images can accurately detect root, stele and late metaxylem objects. Furthermore, the results have shown that the measurements estimated from the predicted bounding boxes have a small root mean square error when compared with the corresponding ground truth values, suggesting that Faster R-CNN can be used to accurately detect anatomical features. Finally, a comparison with Mask R-CNN, an instance segmentation approach, has shown that the Faster R-CNN network produces overall better results given a small training set. A webserver for performing root anatomy using the Faster R-CNN models trained on rice images, and a link to a GitHub repository containing a copy of the Faster R-CNN code, are made available to the research community. The labeled images used for training and evaluating the Faster R-CNN models are also available from the GitHub repository.
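The authors distribute their actual code through the linked GitHub repository; the sketch below is only a rough illustration of the pipeline described in the abstract, showing how per-image detections (a class label plus a bounding box) could be converted into the four anatomical traits and how the intersection-over-union criterion used in standard object-detection evaluation is computed. The function names, class labels, and the pixel-to-millimetre scale are illustrative assumptions, not values taken from the published code.

```python
import numpy as np

# Hypothetical per-image detections: (class_label, [x_min, y_min, x_max, y_max]).
# The class labels and the pixel-to-millimetre scale below are assumptions for
# illustration, not values from the paper.
PIXELS_PER_MM = 100.0  # assumed image resolution


def box_diameter_mm(box):
    """Approximate an object's diameter as the mean of the box's width and height."""
    x_min, y_min, x_max, y_max = box
    return ((x_max - x_min) + (y_max - y_min)) / 2.0 / PIXELS_PER_MM


def summarize_detections(detections):
    """Turn detector output into the four traits described in the abstract."""
    roots = [b for c, b in detections if c == "root"]
    steles = [b for c, b in detections if c == "stele"]
    metaxylem = [b for c, b in detections if c == "late_metaxylem"]
    return {
        "root_diameter_mm": box_diameter_mm(roots[0]) if roots else None,
        "stele_diameter_mm": box_diameter_mm(steles[0]) if steles else None,
        "late_metaxylem_count": len(metaxylem),
        "avg_late_metaxylem_diameter_mm": (
            float(np.mean([box_diameter_mm(b) for b in metaxylem])) if metaxylem else None
        ),
    }


def iou(box_a, box_b):
    """Intersection-over-union between two [x_min, y_min, x_max, y_max] boxes."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    if inter == 0.0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


if __name__ == "__main__":
    # Toy detections for a single cross-section image (assumed coordinates).
    dets = [
        ("root", [10, 10, 910, 905]),
        ("stele", [330, 340, 600, 610]),
        ("late_metaxylem", [420, 430, 470, 478]),
        ("late_metaxylem", [500, 440, 548, 490]),
    ]
    print(summarize_detections(dets))
    print("IoU example:", iou([10, 10, 910, 905], [20, 15, 900, 900]))
```

Averaging box width and height is only a crude proxy for diameter; the published code may derive the measurements from the bounding boxes differently.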

Updated: 2020-08-01