Explainable Convolutional Neural Networks: A Taxonomy, Review, and Future Directions
ACM Computing Surveys (IF 16.6). Pub Date: 2023-02-02, DOI: 10.1145/3563691
Rami Ibrahim, M. Omair Shafiq

Convolutional neural networks (CNNs) have shown promising results and have outperformed classical machine learning techniques in tasks such as image classification and object recognition. Their human-brain-like structure enables them to learn sophisticated features as images pass through their layers. However, their lack of explainability has led to a demand for interpretations that justify their predictions. Research on Explainable AI (XAI) has gained momentum in providing knowledge of and insight into neural networks. This study summarizes the literature to build a deeper understanding of explainability in CNNs (i.e., Explainable Convolutional Neural Networks). We classify models that have sought to improve the interpretability of CNNs. We present and discuss taxonomies for XAI models that modify CNN architectures, simplify CNN representations, analyze feature relevance, and visualize interpretations. We review the various metrics used to evaluate XAI interpretations. In addition, we discuss the applications and tasks of XAI models. This focused and extensive survey develops a perspective on the area by offering suggestions for overcoming XAI interpretation challenges, such as generalizing models, unifying evaluation criteria, building robust models, and providing interpretations with semantic descriptions. Our taxonomy can serve as a reference to motivate future research on interpreting neural networks.
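As a concrete illustration of the feature-relevance and visualization families the abstract refers to, here is a minimal, hypothetical sketch (our addition, not a method from the paper) of a vanilla gradient saliency map in PyTorch. The choice of `resnet18`, the pretrained weights (downloaded on first use), and the random stand-in image are all assumptions for the example.

```python
# Minimal sketch of a gradient-based saliency map: attribute a CNN's
# prediction to input pixels via the gradient of the class score
# with respect to the input image. Assumes torch/torchvision installed.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in input; a real use case would load and normalize an image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
score = logits[0].max()  # score of the top predicted class (a scalar)
score.backward()         # gradients of the score w.r.t. the input pixels

# Saliency: largest absolute gradient across color channels, per pixel.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```

Plotting `saliency` as a heatmap over the input highlights the pixels whose perturbation would most change the class score, which is the basic idea behind many of the visualization-based XAI methods the survey categorizes.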




Updated: 2023-02-02