Self-supervised retinal thickness prediction enables deep learning from unlabelled data to boost classification of diabetic retinopathy
Nature Machine Intelligence (IF 18.8). Pub Date: 2020-11-09. DOI: 10.1038/s42256-020-00247-1
Olle G. Holmberg, Niklas D. Köhler, Thiago Martins, Jakob Siedlecki, Tina Herold, Leonie Keidel, Ben Asani, Johannes Schiefelbein, Siegfried Priglinger, Karsten U. Kortuem, Fabian J. Theis

Access to large, annotated samples represents a considerable challenge for training accurate deep-learning models in medical imaging. Although at present transfer learning from pre-trained models can help with cases lacking data, this limits design choices and generally results in the use of unnecessarily large models. Here we propose a self-supervised training scheme for obtaining high-quality, pre-trained networks from unlabelled, cross-modal medical imaging data, which will allow the creation of accurate and efficient models. We demonstrate the utility of the scheme by accurately predicting retinal thickness measurements based on optical coherence tomography from simple infrared fundus images. Subsequently, learned representations outperformed advanced classifiers on a separate diabetic retinopathy classification task in a scenario of scarce training data. Our cross-modal, three-stage scheme effectively replaced 26,343 diabetic retinopathy annotations with 1,009 semantic segmentations on optical coherence tomography and reached the same classification accuracy using only 25% of fundus images, without any drawbacks, since optical coherence tomography is not required for predictions. We expect this concept to apply to other multimodal clinical imaging, health records and genomics data, and to corresponding sample-starved learning problems.

A preprint version of the article is available at bioRxiv.
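The three-stage idea described in the abstract — pretrain an encoder on the self-supervised pretext task of predicting OCT-derived retinal-thickness maps from infrared fundus images, then reuse that encoder for diabetic retinopathy classification from fundus images alone — can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' actual architecture: the module names, layer sizes, and image dimensions are all assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch only; architecture and names are hypothetical,
# not the model from the paper.

class FundusEncoder(nn.Module):
    """Small convolutional encoder for infrared fundus images."""
    def __init__(self, channels=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

class ThicknessHead(nn.Module):
    """Stage 1 (pretext task): regress an OCT-derived retinal-thickness
    map from the encoder features, so the encoder learns from
    unlabelled cross-modal data."""
    def __init__(self, channels=16):
        super().__init__()
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, z):
        return self.head(z)

class DRClassifier(nn.Module):
    """Later stage: classify diabetic retinopathy by reusing the
    pretrained encoder; at prediction time only the fundus image is
    needed, no OCT."""
    def __init__(self, encoder, channels=16, n_classes=2):
        super().__init__()
        self.encoder = encoder
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, n_classes)

    def forward(self, x):
        z = self.pool(self.encoder(x)).flatten(1)
        return self.fc(z)

encoder = FundusEncoder()
fundus = torch.randn(2, 1, 64, 64)            # batch of fundus images
thickness = ThicknessHead()(encoder(fundus))  # pretext target: thickness map
logits = DRClassifier(encoder)(fundus)        # downstream DR prediction
print(thickness.shape, logits.shape)
```

In this sketch the encoder weights learned on the thickness-regression pretext task are shared with the classifier, which is what lets a small labelled DR set suffice downstream.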


Updated: 2020-11-09