A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping.
Plant Methods (IF 4.7), Pub Date: 2020-07-09, DOI: 10.1186/s13007-020-00637-x
Michael Henke, Astrid Junker, Kerstin Neumann, Thomas Altmann, Evgeny Gladilin

Automated segmentation of large amounts of image data is one of the major bottlenecks in high-throughput plant phenotyping. The dynamic optical appearance of developing plants, inhomogeneous scene illumination, and shadows and reflections in plant and background regions complicate automated segmentation of unimodal plant images. To overcome the problem of ambiguous color information in unimodal data, images of different modalities can be combined into a virtual multispectral cube. However, due to motion artifacts caused by the relocation of plants between photochambers, the alignment of multimodal images is often compromised by blurring. Here, we present an approach to automated segmentation of greenhouse plant images that is based on co-registration of fluorescence (FLU) and visible light (VIS) camera images, followed by separation of plant and marginal background regions using species- and camera-view-tailored classification models. Our experimental results, including a direct comparison with manually segmented ground-truth data, show that images of different plant types acquired at different developmental stages from different camera views can be automatically segmented with an average accuracy of 93% (SD = 5%) using our two-step registration-classification approach. Automated segmentation of arbitrary greenhouse images exhibiting highly variable optical plant and background appearance is a challenging task for data classification techniques that rely on the detection of invariances. To overcome the limitations of unimodal image analysis, a two-step registration-classification approach to the combined analysis of fluorescence and visible light images was developed. Our experimental results show that this algorithmic approach enables accurate segmentation of different FLU/VIS plant images and is suitable for application in a fully automated, high-throughput manner.

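To make the two-step idea concrete, the sketch below illustrates the general scheme in Python. It is not the authors' implementation: OpenCV ECC-based affine alignment and a scikit-learn random forest are stand-ins for the paper's registration and species-/camera-view-tailored classification models, the affine motion model is an assumption, and all data, labels, and function names are synthetic and hypothetical.

# Minimal sketch of a two-step registration-classification pipeline:
# (1) co-register the FLU image to the VIS image, (2) classify each pixel of the
# fused VIS+FLU cube as plant or background. Stand-in components, synthetic data.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def register_flu_to_vis(flu_gray, vis_gray):
    """Estimate an affine warp aligning the FLU image to the VIS reference
    (assumption: plant relocation between photochambers is roughly affine)."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(vis_gray, flu_gray, warp, cv2.MOTION_AFFINE, criteria)
    return warp

def segment(vis_bgr, flu_aligned, clf):
    """Per-pixel plant/background classification on fused (B, G, R, FLU) features."""
    h, w = flu_aligned.shape
    feats = np.dstack([vis_bgr, flu_aligned]).reshape(-1, 4)
    return clf.predict(feats).reshape(h, w).astype(np.uint8)

if __name__ == "__main__":
    # Synthetic stand-in images: a blurred bright "plant" disk, slightly shifted in FLU.
    vis_gray = np.zeros((120, 160), np.uint8); cv2.circle(vis_gray, (80, 60), 25, 255, -1)
    flu      = np.zeros((120, 160), np.uint8); cv2.circle(flu, (84, 63), 25, 255, -1)
    vis_gray = cv2.GaussianBlur(vis_gray, (9, 9), 0)
    flu      = cv2.GaussianBlur(flu, (9, 9), 0)
    vis_bgr  = cv2.merge([vis_gray // 2, vis_gray, vis_gray // 3])  # pseudo-green plant

    # Step 1: register FLU onto VIS and resample it into the VIS frame.
    warp = register_flu_to_vis(flu, vis_gray)
    flu_aligned = cv2.warpAffine(flu, warp, (vis_gray.shape[1], vis_gray.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    # Step 2: train a toy classifier on pixels labelled from the synthetic ground truth;
    # in the paper this role is played by pre-trained, species- and view-specific models.
    labels = (vis_gray > 127).astype(np.uint8).ravel()
    feats = np.dstack([vis_bgr, flu_aligned]).reshape(-1, 4)
    clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(feats, labels)

    mask = segment(vis_bgr, flu_aligned, clf)
    print("plant pixels:", int(mask.sum()))

In a pipeline of the kind described in the abstract, one such classifier would presumably be trained per species and camera view, with the registered FLU channel helping to disambiguate background pixels whose VIS color overlaps with plant tissue.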