Unsupervised content-preserving transformation for optical microscopy
Light: Science & Applications (IF 20.6) | Pub Date: 2021-03-01 | DOI: 10.1038/s41377-021-00484-y
Xinyang Li 1,2,3, Guoxun Zhang 1,3, Hui Qiao 1,3, Feng Bao 1,3, Yue Deng 4,5, Jiamin Wu 1,3, Yangfan He 6,7,8, Jingping Yun 6,7,8, Xing Lin 1,3,9, Hao Xie 1,3, Haoqian Wang 2,3, Qionghai Dai 1,3
The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
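To make the saliency constraint mentioned above concrete, the sketch below is a minimal, hypothetical PyTorch illustration rather than the authors' released implementation: it computes soft (differentiable) saliency masks of an input image and its translated counterpart by approximating a fixed threshold with a steep sigmoid, and penalizes disagreement between the two masks. A term of this kind can be added to a CycleGAN-style generator loss to discourage distortion of the image content during unpaired domain mapping. The function names, threshold, steepness, and loss weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def saliency_mask(img: torch.Tensor, threshold: float = 0.5, steepness: float = 50.0) -> torch.Tensor:
    """Soft saliency mask: normalize each image to [0, 1], then approximate
    thresholding at `threshold` with a steep sigmoid so the mask stays differentiable."""
    flat = img.flatten(start_dim=1)
    lo = flat.min(dim=1, keepdim=True).values
    hi = flat.max(dim=1, keepdim=True).values
    norm = ((flat - lo) / (hi - lo + 1e-8)).view_as(img)
    return torch.sigmoid(steepness * (norm - threshold))


def saliency_constraint_loss(real: torch.Tensor, fake: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Penalize disagreement between the saliency masks of the input image and
    its translated counterpart, discouraging content distortion."""
    return F.l1_loss(saliency_mask(real, threshold), saliency_mask(fake, threshold))


# Hypothetical use inside a CycleGAN-style generator update (lambda values are assumptions):
# total_G_loss = adversarial_loss + lambda_cyc * cycle_loss \
#              + lambda_sal * (saliency_constraint_loss(real_A, fake_B)
#                              + saliency_constraint_loss(real_B, fake_A))
```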



Updated: 2021-03-01