Interpretable gender classification from retinal fundus images using BagNets
medRxiv - Ophthalmology. Pub Date: 2021-06-25. DOI: 10.1101/2021.06.21.21259243
Indu Ilanchezian, Dmitry Kobak, Hanna Faber, Focke Ziemssen, Philipp Berens, Murat Seçkin Ayhan

Deep neural networks (DNNs) can predict a person’s gender from retinal fundus images with high accuracy, even though this task is usually considered hardly possible for ophthalmologists. It has therefore been an open question which features allow reliable discrimination between male and female fundus images. To study this question, we used a particular DNN architecture called BagNet, which extracts local features from small image patches and then averages the class evidence across all patches. The BagNet performed on par with the more sophisticated Inception-v3 model, showing that the gender information can be read out from local features alone. BagNets also naturally provide saliency maps, which we used to highlight the most informative patches in fundus images. We found that most evidence was provided by patches from the optic disc and the macula, with patches from the optic disc providing mostly male and patches from the macula providing mostly female evidence. Although further research is needed to clarify the exact nature of this evidence, our results suggest that there are localized structural differences in fundus images between genders. Overall, we believe that BagNets may provide a compelling alternative to standard DNN architectures in other medical image analysis tasks as well, since they do not require post-hoc explainability methods.
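The core BagNet idea described above — score each small patch independently, then average the per-patch class evidence into an image-level prediction — can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's trained model: the linear patch scorer (`weights`, `bias`) stands in for the learned local feature extractor, and the image is random data.

```python
import numpy as np

def bagnet_predict(image, patch_size, weights, bias):
    """BagNet-style inference: score each patch independently with a
    local classifier, then average the class evidence over all patches.
    `weights`/`bias` are a stand-in for a trained local feature extractor."""
    h, w = image.shape
    ps = patch_size
    patch_logits = []   # per-patch class evidence
    positions = []      # patch locations, for the saliency map
    for i in range(0, h - ps + 1, ps):
        for j in range(0, w - ps + 1, ps):
            patch = image[i:i + ps, j:j + ps].ravel()
            patch_logits.append(patch @ weights + bias)  # shape (n_classes,)
            positions.append((i, j))
    patch_logits = np.array(patch_logits)
    # Image-level prediction = plain average of local evidence;
    # no global context enters the decision.
    image_logits = patch_logits.mean(axis=0)
    return image_logits, patch_logits, positions

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))          # toy "fundus image"
ps, n_classes = 8, 2                           # 8x8 patches, male/female
weights = 0.1 * rng.standard_normal((ps * ps, n_classes))
bias = np.zeros(n_classes)

image_logits, patch_logits, positions = bagnet_predict(image, ps, weights, bias)
print(image_logits.shape, patch_logits.shape)  # (2,) (16, 2)
```

Because the image-level logits are just the mean of the per-patch logits, the `patch_logits` array is itself the saliency map: each row says how strongly that patch pushed the decision toward either class, which is exactly what the paper uses to localize the evidence to the optic disc and macula.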

Updated: 2021-06-28