Accurate detection and grading of pterygium through smartphone by a fusion training model
British Journal of Ophthalmology (IF 4.1) Pub Date: 2024-03-01, DOI: 10.1136/bjo-2022-322552
Yuwen Liu 1,2, Changsheng Xu 1,3, Shaopan Wang 1,3, Yuguang Chen 1,3, Xiang Lin 1,4, Shujia Guo 1, Zhaolin Liu 1,5, Yuqian Wang 1, Houjian Zhang 1, Yuli Guo 1, Caihong Huang 1, Huping Wu 6, Ying Li 7, Qian Chen 1, Jiaoyue Hu 1,6, Zhiming Luo 8, Zuguo Liu 6,9
Background/aims To improve the accuracy of pterygium screening and detection through smartphones, we established a fusion training model by blending a large number of slit-lamp images with a small proportion of smartphone images. Methods Two datasets were used: a slit-lamp image dataset containing 20 987 images and a smartphone-based image dataset containing 1094 images. The RFRC model (Faster R-CNN based on ResNet101) was used for detection, and the SRU-Net model (U-Net based on SE-ResNeXt50) for segmentation. An OpenCV-based algorithm measured the width, length and area of the pterygium on the cornea. Results The detection model (trained on slit-lamp images) achieved a mean accuracy of 95.24%. The fusion segmentation model (trained on smartphone and slit-lamp images) achieved a microaverage F1 score of 0.8981, sensitivity of 0.8709, specificity of 0.9668 and area under the curve (AUC) of 0.9295. On paired smartphone and slit-lamp images from the same group of patients, the fusion model's performance on smartphone-based images (F1 score of 0.9313, sensitivity of 0.9360, specificity of 0.9613, AUC of 0.9426, accuracy of 92.38%) was close to that of the slit-lamp-trained model on slit-lamp images (F1 score of 0.9448, sensitivity of 0.9165, specificity of 0.9689, AUC of 0.9569, accuracy of 94.29%). Conclusion Our fusion model achieved high pterygium detection and grading accuracy despite the limited smartphone data; its performance is comparable to that of experienced ophthalmologists and holds up across different smartphone brands. Data are available on reasonable request. The data generated and/or analysed during the current study are available on reasonable request from the corresponding author ZuL (zuguoliu@xmu.edu.cn).
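The abstract names the two networks but does not specify their implementation. The following is a minimal sketch of how such a two-stage pipeline could be assembled in PyTorch, assuming the torchvision and segmentation_models_pytorch libraries; the class count, weight settings and all hyperparameters are illustrative assumptions, not the authors' released configuration.

```python
# Minimal sketch of the two-stage pipeline described above, assuming
# PyTorch with torchvision and segmentation_models_pytorch. All settings
# here are illustrative assumptions, not the authors' configuration.
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
import segmentation_models_pytorch as smp

# Stage 1, "RFRC": Faster R-CNN with a ResNet101 backbone, used to
# locate the pterygium region in the full eye image.
backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None)
detector = FasterRCNN(backbone, num_classes=2)  # background + pterygium

# Stage 2, "SRU-Net": U-Net with an SE-ResNeXt50 encoder, used to
# segment the pterygium within the region found by the detector.
segmenter = smp.Unet(
    encoder_name="se_resnext50_32x4d",
    encoder_weights=None,
    in_channels=3,
    classes=1,              # binary pterygium mask
)

# Fusion training idea: each training batch mixes many slit-lamp images
# with a small proportion of smartphone images, so the segmenter
# transfers to phone cameras despite the limited smartphone data.
crop = torch.randn(1, 3, 512, 512)   # a detector crop, resized
mask_logits = segmenter(crop)        # shape (1, 1, 512, 512)
```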
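The OpenCV measurement step is likewise only named. A plausible sketch is to read width, length and area off the contours of the predicted binary masks; the calibration used here (scaling pixels by a nominal corneal diameter, with `measure_pterygium` and the 11.7 mm constant both hypothetical) is an assumption about how pixel measurements could be made comparable across devices, not the paper's method.

```python
import cv2
import numpy as np

def measure_pterygium(pterygium_mask: np.ndarray, cornea_mask: np.ndarray,
                      cornea_diameter_mm: float = 11.7) -> dict:
    """Estimate pterygium width, length and area from binary masks.

    Scale is derived from the cornea's pixel width and a nominal corneal
    diameter (11.7 mm is a population average, used as an assumed
    calibration constant; the cornea is assumed to have been segmented).
    """
    cnts, _ = cv2.findContours(cornea_mask.astype(np.uint8),
                               cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    _, _, cw, _ = cv2.boundingRect(max(cnts, key=cv2.contourArea))
    mm_per_px = cornea_diameter_mm / cw

    cnts, _ = cv2.findContours(pterygium_mask.astype(np.uint8),
                               cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return {"width_mm": 0.0, "length_mm": 0.0, "area_mm2": 0.0}
    c = max(cnts, key=cv2.contourArea)
    _, _, w, h = cv2.boundingRect(c)       # axis-aligned extent, px
    return {
        "width_mm": h * mm_per_px,         # extent along the limbus
        "length_mm": w * mm_per_px,        # advance toward the pupil
        "area_mm2": cv2.contourArea(c) * mm_per_px ** 2,
    }
```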

Updated: 2024-02-21