Quality assessment of colour fundus and fluorescein angiography images using deep learning
British Journal of Ophthalmology (IF 3.7) Pub Date: 2024-01-01, DOI: 10.1136/bjo-2022-321963
Michael König 1, Philipp Seeböck 1, Bianca S Gerendas 1, Georgios Mylonas 1, Rudolf Winklhofer 1, Ioanna Dimakopoulou 1, Ursula Margarethe Schmidt-Erfurth 2
Background/aims Image quality assessment (IQA) is crucial both for reading centres in clinical studies and for routine practice, as only images of adequate quality allow clinicians to correctly identify diseases and treat patients accordingly. Here we aim to develop a neural network for automated real-time IQA of colour fundus (CF) and fluorescein angiography (FA) images.

Methods Two neural networks were trained and evaluated using 2272 CF and 2492 FA images, with binary labels in four (contrast, focus, illumination, shadow and reflection) and three (contrast, focus, noise) modality-specific categories plus an overall quality ranking. Performance was compared with a second human grader and evaluated on an external public dataset and in a clinical trial use-case.

Results For overall quality prediction, the networks achieved an F1-score/area under the receiver operating characteristic curve/area under the precision-recall curve of 0.907/0.963/0.966 for CF and 0.822/0.918/0.889 for FA, with similar results in most categories. A clear relation between model uncertainty and prediction error was observed. In the clinical trial use-case evaluation, the networks achieved an accuracy of 0.930 for CF and 0.895 for FA.

Conclusion The presented method allows automated IQA in real time, demonstrating human-level performance for both CF and FA. Such models can help to overcome the problem of human intergrader and intragrader variability by providing objective and reproducible IQA results. This is particularly relevant for real-time feedback in multicentre clinical studies, where images are uploaded to central reading centre portals. Moreover, automated IQA as a preprocessing step can support the integration of automated approaches into clinical practice.

Data availability Data are available upon reasonable request. Due to privacy restrictions, the image dataset cannot be made publicly available; however, access to the data may be shared upon reasonable request.
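For readers unfamiliar with the reported evaluation metrics, the minimal sketch below shows how an F1-score, the area under the receiver operating characteristic curve and the area under the precision-recall curve are typically computed for a binary "adequate quality" prediction; the labels, probabilities and the 0.5 decision threshold are illustrative assumptions, not the authors' pipeline or data.

```python
# Minimal sketch (not the published code): computing the abstract's three
# overall-quality metrics for a binary quality classifier with scikit-learn.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, average_precision_score

# Hypothetical ground-truth quality labels (1 = adequate) and predicted
# probabilities from a CF or FA quality network.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.92, 0.35, 0.80, 0.66, 0.20, 0.75, 0.55, 0.88])

# F1-score needs hard labels; 0.5 is an assumed decision threshold.
y_pred = (y_prob >= 0.5).astype(int)

print("F1-score:", f1_score(y_true, y_pred))
print("AUROC:   ", roc_auc_score(y_true, y_prob))
# average_precision_score summarises the precision-recall curve (AUPRC).
print("AUPRC:   ", average_precision_score(y_true, y_prob))
```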

Updated: 2023-12-18