Glaucoma assessment from color fundus images using convolutional neural network
International Journal of Imaging Systems and Technology ( IF 3.0 ) Pub Date : 2020-09-19 , DOI: 10.1002/ima.22494
Poonguzhali Elangovan 1 , Malaya Kumar Nath 1

Early detection and proper screening are essential to prevent vision loss due to glaucoma. In recent years, convolutional neural networks (CNNs) have been successfully applied to color fundus images for the automatic detection of glaucoma. Compared with existing automatic screening methods, CNNs can extract discriminative features directly from fundus images. In this paper, a deep learning architecture based on a CNN is designed for the classification of glaucomatous and normal fundus images. An 18-layer CNN is designed and trained to extract discriminative features from the fundus image; it comprises four convolutional layers, two max-pooling layers, and one fully connected layer. A two-stage tuning approach is proposed for selecting a suitable batch size and initial learning rate. The proposed network is tested on the DRISHTI-GS1, ORIGA, RIM-ONE2 (release 2), ACRIMA, and large-scale attention-based glaucoma (LAG) databases. A rotation-based data augmentation technique is employed to enlarge the datasets. A randomly selected 70% of the images is used for training the model, and the remaining 30% is used for testing. Overall accuracies of 86.62%, 85.97%, 78.32%, 94.43%, and 96.64% are obtained on the DRISHTI-GS1, RIM-ONE2, ORIGA, LAG, and ACRIMA databases, respectively. On the ACRIMA database, the proposed method achieves an accuracy, sensitivity, specificity, and precision of 96.64%, 96.07%, 97.39%, and 97.74%, respectively. Compared with other existing architectures, the proposed method is more robust to Gaussian noise and salt-and-pepper noise.
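The abstract does not detail the two-stage tuning procedure, but a common reading is: first search over batch sizes at a fixed default learning rate, then search over learning rates at the best batch size. The sketch below illustrates that idea; the search grids, the default learning rate, and the `evaluate` function are all hypothetical stand-ins, not values from the paper.

```python
def two_stage_tune(evaluate, batch_sizes, learning_rates, default_lr=1e-3):
    """Stage 1: pick the batch size scoring best at a fixed default
    learning rate. Stage 2: pick the learning rate scoring best at that
    batch size. `evaluate(batch_size, lr)` returns validation accuracy."""
    best_bs = max(batch_sizes, key=lambda bs: evaluate(bs, default_lr))
    best_lr = max(learning_rates, key=lambda lr: evaluate(best_bs, lr))
    return best_bs, best_lr

# Toy evaluation surface standing in for training/validating the CNN.
def mock_eval(bs, lr):
    return 1.0 - abs(bs - 32) / 100 - abs(lr - 0.01)

bs, lr = two_stage_tune(mock_eval, [16, 32, 64], [0.1, 0.01, 0.001])
print(bs, lr)  # 32 0.01
```

This decoupled search evaluates |batch sizes| + |learning rates| configurations instead of the full |batch sizes| × |learning rates| grid, which is presumably the motivation for staging the tuning.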
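The 70/30 random split and the rotation augmentation can be sketched in a few lines. This is a minimal illustration with stdlib Python only; the seed, the rotation angles used, and the exact split mechanics in the paper are not specified in the abstract, so the choices below (a fixed seed, a single 90-degree rotation) are assumptions.

```python
import random

def train_test_split(items, train_frac=0.7, seed=0):
    """Randomly assign 70% of images to training and 30% to testing."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def rotate90(img):
    """Rotate a row-major image (list of rows) 90 degrees clockwise --
    one simple member of a rotation-based augmentation family."""
    return [list(row) for row in zip(*img[::-1])]

train, test = train_test_split(list(range(100)))
print(len(train), len(test))            # 70 30
print(rotate90([[1, 2], [3, 4]]))       # [[3, 1], [4, 2]]
```

Applying several distinct rotations to each training image multiplies the effective training-set size without collecting new fundus photographs.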
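The four reported metrics follow directly from binary confusion-matrix counts, taking glaucomatous as the positive class. The sketch below shows the standard definitions; the toy counts are illustrative, not data from the paper.

```python
def metrics(tp, fp, tn, fn):
    """Classification metrics from binary confusion-matrix counts
    (glaucoma = positive class)."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on glaucomatous images
    specificity = tn / (tn + fp)   # recall on normal images
    precision   = tp / (tp + fp)
    return accuracy, sensitivity, specificity, precision

print(metrics(tp=45, fp=5, tn=45, fn=5))  # (0.9, 0.9, 0.9, 0.9)
```

Reporting sensitivity and specificity separately matters here: a screening model must rarely miss glaucomatous eyes (high sensitivity) while not flooding clinics with false referrals (high specificity).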

Updated: 2020-09-19