Learning Semantics-enriched Representation via Self-discovery, Self-classification, and Self-restoration
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-07-14, arXiv: 2007.06959
Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Michael B. Gotway, Jianming Liang

Medical images are naturally associated with rich semantics about the human anatomy, reflected in an abundance of recurring anatomical patterns, offering unique potential to foster deep semantic representation learning and to yield semantically more powerful models for different medical applications. But exactly how such strong yet free semantics embedded in medical images can be harnessed for self-supervised learning remains largely unexplored. To this end, we train deep models to learn semantically enriched visual representations by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis. We compare our Semantic Genesis with all publicly available pre-trained models, whether obtained by self-supervision or full supervision, on six distinct target tasks, covering both classification and segmentation across multiple medical modalities (i.e., CT, MRI, and X-ray). Our extensive experiments demonstrate that Semantic Genesis significantly exceeds all of its 3D counterparts as well as the de facto ImageNet-based transfer learning in 2D. This performance is attributed to our novel self-supervised learning framework, which encourages deep models to learn compelling semantic representations from the abundant anatomical patterns that arise from consistent anatomies embedded in medical images. Code and pre-trained Semantic Genesis are available at https://github.com/JLiangLab/SemanticGenesis .
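The three pretext tasks named in the abstract can be illustrated with a minimal data-preparation sketch. This is a hypothetical helper, not the authors' released code: it approximates self-discovery by cropping patches at consistent coordinates across volumes, self-classification by pseudo-labeling each patch with its coordinate index, and self-restoration by corrupting patches (here with additive Gaussian noise, standing in for the framework's learned image transformations) so a 3D encoder-decoder could be trained to restore the originals.

```python
import numpy as np

def build_pretext_pairs(volumes, crop_coords, crop_size=8, noise_std=0.2, seed=0):
    """Prepare (corrupted patch, original patch, pseudo-label) triples.

    volumes     : list of 3D numpy arrays (roughly aligned scans)
    crop_coords : list of (x, y, z) corners shared across all volumes;
                  the same index plays the role of the same anatomical pattern
    """
    rng = np.random.default_rng(seed)
    inputs, restore_targets, class_labels = [], [], []
    for vol in volumes:
        for label, (x, y, z) in enumerate(crop_coords):
            # Self-discovery: the same coordinates across consistent
            # anatomies yield recurring anatomical patterns.
            patch = vol[x:x + crop_size, y:y + crop_size, z:z + crop_size]
            # Self-restoration input: a corrupted view of the patch.
            corrupted = patch + rng.normal(0.0, noise_std, patch.shape)
            inputs.append(corrupted)
            restore_targets.append(patch)
            # Self-classification: pseudo-label = pattern (coordinate) index.
            class_labels.append(label)
    return np.stack(inputs), np.stack(restore_targets), np.array(class_labels)
```

A model pre-trained on these triples would minimize a restoration loss against `restore_targets` jointly with a classification loss against `class_labels`; the sketch only shows how the supervision signal comes for free from the images themselves.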

Updated: 2020-07-15