Development of a deep learning-based image eligibility verification system for detecting and filtering out ineligible fundus images: A multicentre study
International Journal of Medical Informatics (IF 4.9), Pub Date: 2020-12-13, DOI: 10.1016/j.ijmedinf.2020.104363
Zhongwen Li, Jiewei Jiang, Heding Zhou, Qinxiang Zheng, Xiaotian Liu, Kuan Chen, Hongfei Weng, Wei Chen

Background

Recent advances in artificial intelligence (AI) have shown great promise for detecting some diseases from medical images. Most studies have developed AI diagnostic systems using only eligible images. In real-world settings, however, ineligible images (including poor-quality and poor-location images) that can compromise downstream analysis are inevitable, leaving the performance of these AI systems uncertain. This study aims to develop a deep learning-based image eligibility verification system (DLIEVS) for detecting and filtering out ineligible fundus images.

Methods

A total of 18,031 fundus images (9,188 subjects) collected from 4 clinical centres were used to develop and evaluate the DLIEVS for detecting eligible, poor-location, and poor-quality fundus images. Four deep learning algorithms (AlexNet, DenseNet121, Inception V3, and ResNet50) were used to train candidate models, and the best-performing model was selected for the DLIEVS. The performance of the DLIEVS was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, compared with a reference standard determined by retina experts.
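For illustration only (not the authors' code): a minimal PyTorch sketch of how one of the four candidate architectures, DenseNet121, could be fine-tuned for the three-class task (eligible, poor-location, poor-quality). The pretrained weights, image pipeline, and hyperparameters below are assumptions, not details reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # eligible, poor-location, poor-quality

# Start from ImageNet-pretrained DenseNet121 and replace the classifier head
# with a 3-way output layer (an assumed transfer-learning setup).
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

def train_one_epoch(model, loader, device="cuda"):
    """One training pass over a DataLoader yielding (image, label) batches."""
    model.to(device).train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```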

Results

In the internal test dataset, the best algorithm (DenseNet121) achieved AUCs of 1.000, 0.999, and 1.000 for the classification of eligible, poor-location, and poor-quality images, respectively. In the external test datasets, the AUCs of the best algorithm (DenseNet121) for detecting eligible, poor-location, and poor-quality images ranged from 0.999 to 1.000, 0.997 to 1.000, and 0.997 to 0.999, respectively.
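A hedged sketch of the one-vs-rest evaluation implied above: per-class AUC from predicted probabilities, plus sensitivity and specificity against the expert reference standard, using scikit-learn. Array shapes and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

CLASS_NAMES = ["eligible", "poor-location", "poor-quality"]

def evaluate(y_true, y_prob):
    """y_true: (N,) integer labels; y_prob: (N, 3) predicted class probabilities."""
    y_pred = y_prob.argmax(axis=1)
    for k, name in enumerate(CLASS_NAMES):
        # One-vs-rest AUC for class k from its predicted probability column.
        auc = roc_auc_score((y_true == k).astype(int), y_prob[:, k])
        tn, fp, fn, tp = confusion_matrix(
            y_true == k, y_pred == k, labels=[False, True]
        ).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"{name}: AUC={auc:.3f}, Se={sensitivity:.3f}, Sp={specificity:.3f}")
```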

Conclusions

Our DLIEVS can accurately discriminate poor-quality and poor-location images from eligible images. This system has the potential to serve as a pre-screening technique to filter out ineligible images obtained in real-world settings, ensuring that only eligible images are used in subsequent image-based AI diagnostic analyses.
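A minimal sketch of the pre-screening use case described above, in which a trained DLIEVS-style classifier filters a batch of incoming fundus images and keeps only those predicted as eligible before downstream diagnostic models see them. The class index and confidence threshold are assumptions, not the authors' implementation.

```python
import torch

ELIGIBLE_IDX = 0            # assumed index of the "eligible" class
CONFIDENCE_THRESHOLD = 0.5  # assumed cut-off for accepting an image

@torch.no_grad()
def filter_eligible(model, images, device="cuda"):
    """Return the subset of a batch of images predicted as eligible."""
    model.to(device).eval()
    probs = torch.softmax(model(images.to(device)), dim=1)
    keep = (probs.argmax(dim=1) == ELIGIBLE_IDX) & \
           (probs[:, ELIGIBLE_IDX] >= CONFIDENCE_THRESHOLD)
    return images[keep.cpu()]
```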



