Deep-learning-based removal of autofluorescence and fluorescence quantification in plant-colonizing bacteria in vivo
New Phytologist (IF 9.4) Pub Date: 2022-06-26, DOI: 10.1111/nph.18344
Xun Jiang, Tobias Pees, Barbara Reinhold-Hurek

Introduction

Plants contain abundant molecules, such as chlorophyll and lignin, that emit fluorescent light across a wide range of wavelengths (García-Plazaola et al., 2015). As a consequence, the autofluorescence common in plant tissues may interfere with the specific detection of fluorescent stains and fluorescent proteins in fluorescence microscopy. In functional analyses of interactions between roots and microbes such as endophytes, labeling bacteria with fluorescent proteins, or applying transcriptional fusions with reporter genes to visualize and measure bacterial gene expression, is very common (Buschart et al., 2012; Chen et al., 2015; Sarkar et al., 2017). For fluorescent staining, the signal may be enhanced by increasing the amount of fluorescent dye, by eliminating autofluorescence through irradiation with light before staining (Neumann & Gabel, 2002), or by treating the sample with an autofluorescence-reducing agent (Xie et al., 2017). However, for in vivo studies using fluorescent proteins, autofluorescence remains problematic, especially when the fluorescent protein is not strongly expressed. Recently, several approaches, such as Förster resonance energy transfer (FRET), fluorescence lifetime imaging (FLIM) and high-resolution 3D imaging (Müller et al., 2013; Kodama, 2016; Dumur et al., 2019), have been used to distinguish specific fluorescence from the autofluorescence background, yet they were applied to address particular experimental questions such as protein–protein interactions or studies on conformational changes.

With the increasing parallel computing power of computers, deep-learning-based methods have become increasingly popular, both in daily life and in research, for solving complicated problems. The convolutional neural network (CNN), a class of deep neural networks, is based on a shared-weight architecture in which convolution kernels (filters) slide over the input features and produce translation-equivariant responses (Zhang et al., 1990). It is most commonly applied to extract spatial features from the pixels of images. The idea of the CNN came from studies of the animal visual cortex: individual neurons respond only to a small region of the viewed image, termed the receptive field, and the overlap of receptive fields from many different neurons yields an understanding of the whole image (Hubel & Wiesel, 1968; Fukushima, 1980; Matsugu et al., 2003). Also similar to animal nervous systems, a CNN model can optimize its convolution kernels through a training/learning process with backward propagation (Goodfellow et al., 2020). This is an advantage over traditional machine-learning algorithms, which depend on human intervention for feature extraction. By scanning an image with convolution kernels, the spatial relationships between multidimensionally distributed pixels can be captured. In practice, pooling layers and activation functions are applied after each convolutional layer: pooling layers condense and enhance the collected spatial features (Ciresan et al., 2012), while activation functions such as ReLU or sigmoid provide the nonlinearity between the hidden layers of the model (Rattay, 1986).
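The three building blocks named above, convolution with a shared kernel, a ReLU activation and a pooling layer, can be sketched in plain Python. The input image and kernel values here are arbitrary toy examples, not the filters learned by the network described in this study.

```python
# Toy sketch of CNN building blocks: valid 2D convolution, element-wise
# ReLU, and 2x2 max pooling. All values are hypothetical examples.

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a nested-list image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def relu(feature_map):
    """Element-wise ReLU: negative responses are clipped to zero."""
    return [[max(0.0, v) for v in row] for row in feature_map]

def max_pool2x2(feature_map):
    """2x2 max pooling with stride 2: keeps the strongest response."""
    return [[max(feature_map[i][j], feature_map[i][j + 1],
                 feature_map[i + 1][j], feature_map[i + 1][j + 1])
             for j in range(0, len(feature_map[0]) - 1, 2)]
            for i in range(0, len(feature_map) - 1, 2)]

# A 5x5 toy "image" and a 2x2 edge-like kernel (both hypothetical).
image = [[float((i + j) % 3) for j in range(5)] for i in range(5)]
kernel = [[1.0, -1.0], [-1.0, 1.0]]

# 5x5 image -> 4x4 convolved map -> 4x4 rectified map -> 2x2 pooled map.
features = max_pool2x2(relu(conv2d(image, kernel)))
```

In a trained CNN the kernel weights are learned by backpropagation rather than fixed by hand as here; stacking several such convolution/activation/pooling stages produces progressively more abstract spatial features.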

In order to generate an image from the features extracted by CNN layers, fractionally-strided convolutional (so-called deconvolutional) layers can be used; these are commonly applied to generate artificial images in generative adversarial networks (GANs) (Goodfellow et al., 2020).
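A fractionally-strided convolution can be illustrated in plain Python as zero-insertion followed by an ordinary convolution: zeros are inserted between the input pixels (the "fractional stride"), the result is padded, and a normal convolution then yields a larger output. The sizes and smoothing kernel below are hypothetical, chosen only to show how a 2x2 feature map is upsampled to 4x4.

```python
# Minimal sketch of a fractionally-strided ("deconvolutional") layer:
# dilate the input with zeros, pad it, then apply a valid convolution.

def zero_insert(image, stride):
    """Insert (stride - 1) zero rows/columns between neighbouring pixels."""
    h, w = len(image), len(image[0])
    out = [[0.0] * ((w - 1) * stride + 1) for _ in range((h - 1) * stride + 1)]
    for i in range(h):
        for j in range(w):
            out[i * stride][j * stride] = image[i][j]
    return out

def pad(image, p):
    """Surround the image with a border of p zero pixels."""
    w = len(image[0]) + 2 * p
    return ([[0.0] * w for _ in range(p)]
            + [[0.0] * p + row + [0.0] * p for row in image]
            + [[0.0] * w for _ in range(p)])

def conv2d(image, kernel):
    """Valid 2D convolution, as in an ordinary CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

small = [[1.0, 2.0], [3.0, 4.0]]        # 2x2 low-resolution feature map
kernel = [[0.25, 0.25], [0.25, 0.25]]   # example smoothing kernel

# 2x2 input -> 3x3 zero-inserted -> 5x5 padded -> 4x4 upsampled output.
upsampled = conv2d(pad(zero_insert(small, 2), 1), kernel)
```

In a generative network the kernel would again be learned, so that the upsampling fills in plausible detail instead of simply smoothing.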

A confocal laser scanning microscopy (CLSM) image can contain several different channels, each a collection of photons from a certain range of wavelengths. For example, the tdTomato channel collects photons not only from tdTomato fluorescence but also from autofluorescence at the same wavelengths. With proper settings, leakage of fluorescence from other channels can often be avoided; in plant samples, however, autofluorescence is inevitable even when a narrow gate or wavelength window is used for photon detection. Might a CNN-based approach generate an image of the background autofluorescence within a certain wavelength range, and thereby improve specific fluorescence detection by subtracting this background from the genuine channel? We addressed this question using a well-studied model system for interactions between nitrogen (N)-fixing endophytes and cereals, Azoarcus olearius BH72 and rice (Reinhold-Hurek & Hurek, 2011; Chen et al., 2020). Azoarcus olearius invades rice roots deeply, both inter- and intracellularly, yet rice roots show strong autofluorescence of plant cell walls (Hurek et al., 1994b; Reinhold-Hurek & Hurek, 1998; Miché et al., 2006). In association with roots, several plant-infection-related bacterial genes show various degrees of expression, as visualized by transcriptional fusions with reporter genes encoding fluorescent proteins (Hauberg-Lotte et al., 2012; Krause et al., 2017; Sarkar et al., 2017). However, fluorescence quantification in vivo has been impeded by autofluorescence. Here we report a novel deep-learning-based approach for removing autofluorescence from CLSM images of plant root tissue by subtracting computer-generated background autofluorescence from the genuine channel. In our implementation, root autofluorescence was largely reduced, which allowed us to localize, assess and even quantify single-cell endophytic gene expression by fluorescence. This allowed us to demonstrate differential induction of bacterial N-fixation genes in different root microniches with high spatial resolution, suggesting the highest potential for N fixation inside the root. Python-based software was developed as an easy tool for other researchers to test and apply this approach in a wide range of studies.
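The subtraction step at the core of this idea can be sketched as clamped pixel-wise arithmetic: a background image (which in the real pipeline would be generated by the trained network) is subtracted from the genuine channel, clamping at zero so that over-predicted background cannot produce negative intensities. The tiny 8-bit grey values below are hypothetical, for illustration only.

```python
# Sketch of clamped background subtraction on a fluorescence channel.
# "background" stands in for the CNN-generated autofluorescence image.

def subtract_background(channel, background):
    """Pixel-wise clamped subtraction: max(channel - background, 0)."""
    return [[max(c - b, 0) for c, b in zip(crow, brow)]
            for crow, brow in zip(channel, background)]

channel    = [[40, 200, 35], [38, 210, 30]]   # genuine tdTomato channel
background = [[36, 20, 37],  [35, 25, 28]]    # predicted autofluorescence

cleaned = subtract_background(channel, background)
# Bright (hypothetically bacterial) pixels survive the subtraction,
# while the diffuse background is suppressed toward zero.
mean_signal = sum(v for row in cleaned for v in row) / 6
```

After such a subtraction, per-cell fluorescence can be quantified on the cleaned channel, e.g. as the mean intensity within a segmented bacterial cell, without the cell-wall autofluorescence inflating the measurement.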



