Classification of diabetic retinopathy using unlabeled data and knowledge distillation
Artificial Intelligence in Medicine (IF 7.5), Pub Date: 2021-09-17, DOI: 10.1016/j.artmed.2021.102176
Sajjad Abbasi 1, Mohsen Hajabdollahi 1, Pejman Khadivi 2, Nader Karimi 1, Roshanak Roshandel 2, Shahram Shirani 3, Shadrokh Samavi 4

Over the last decade, advances in Machine Learning and Artificial Intelligence have highlighted their potential as diagnostic tools in the healthcare domain. Despite the widespread availability of medical images, their usefulness is severely hampered by a lack of access to labeled data. For example, while Convolutional Neural Networks (CNNs) have emerged as an essential analytical tool in image processing, their impact is curtailed by training limitations stemming from insufficient labeled data. Transfer learning enables a model developed for one task to be reused for a second task. However, it suffers from the limitation that the two models need to be architecturally similar. Knowledge distillation, which transfers knowledge from a pre-trained model to another, addresses this shortcoming of transfer learning by generalizing a complex model into a lighter one. However, some parts of the knowledge may not be distilled sufficiently by knowledge distillation alone. In this paper, a novel knowledge distillation approach using transfer learning is proposed. The proposed approach transfers the complete knowledge of a model to a new, smaller one. Unlabeled data are used in an unsupervised manner to transfer the maximum amount of knowledge to the new smaller model. The proposed method can be beneficial in medical image analysis, where labeled data are typically scarce. The approach is evaluated on the task of classifying images for diagnosing Diabetic Retinopathy on two publicly available datasets, Messidor and EyePACS. Simulation results demonstrate that the approach effectively transfers knowledge from a complex model to a lighter one. Furthermore, experimental results illustrate that the performance of different small models improves significantly when unlabeled data and knowledge distillation are used.
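To make the core idea concrete, the following is a minimal sketch (not the authors' exact method) of response-based knowledge distillation on unlabeled images in PyTorch: a frozen, pre-trained teacher produces soft targets, and a smaller student is trained to match them via a temperature-softened KL divergence, so no ground-truth labels are required. The names `teacher`, `student`, and `unlabeled_loader`, and the hyperparameters `T` and `lr`, are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions (Hinton-style distillation). No ground-truth labels
    are needed, so the loss can be computed on unlabeled images."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def distill_on_unlabeled(teacher, student, unlabeled_loader, epochs=1, lr=1e-4):
    """Train a small student to mimic a frozen teacher on unlabeled data."""
    teacher.eval()  # the complex, pre-trained model stays frozen
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    student.train()
    for _ in range(epochs):
        for images in unlabeled_loader:  # batches carry no labels
            with torch.no_grad():
                t_logits = teacher(images)
            s_logits = student(images)
            loss = distillation_loss(s_logits, t_logits)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Because the only training signal is the teacher's output distribution, any pool of unlabeled retinal images can be used to transfer knowledge; higher temperatures `T` expose more of the teacher's inter-class structure to the student.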



Updated: 2021-09-28