Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Proceedings of the IEEE (IF 20.6), Pub Date: 2021-03-04, DOI: 10.1109/jproc.2021.3060483
Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller

With the broader and highly successful usage of machine learning (ML) in industry and the sciences, there has been a growing demand for explainable artificial intelligence (XAI). Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear ML, in particular, deep neural networks, are, therefore, receiving increased attention. In this work, we aim to: 1) provide a timely overview of this active emerging field, with a focus on “post hoc” explanations, and explain its theoretical foundations; 2) put interpretability algorithms to a test both from a theory and comparative evaluation perspective using extensive simulations; 3) outline best practice aspects, i.e., how to best include interpretation methods into the standard usage of ML; and 4) demonstrate successful usage of XAI in a representative selection of application scenarios. Finally, we discuss challenges and possible future directions of this exciting foundational field of ML.
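To make the notion of a “post hoc” explanation concrete, the sketch below implements one simple attribution method from this family, gradient × input, on a toy ReLU network. The network weights and dimensions are illustrative assumptions, not taken from the paper; the example only shows the general mechanics of attributing a prediction back to input features after the model has been trained.

```python
import numpy as np

# Toy one-hidden-layer ReLU network (weights are illustrative, not from the paper).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # input dim 3 -> hidden dim 4
W2 = rng.standard_normal(4)        # hidden dim 4 -> scalar output

def forward(x):
    """Return the scalar prediction and the hidden activations."""
    h = np.maximum(0.0, W1 @ x)    # ReLU hidden layer
    return W2 @ h, h

def gradient_x_input(x):
    """Post hoc attribution: elementwise (d output / d input) * input."""
    _, h = forward(x)
    mask = (h > 0).astype(float)   # ReLU derivative at the hidden layer
    grad = W1.T @ (W2 * mask)      # gradient of the output w.r.t. the input
    return grad * x

x = np.array([1.0, -2.0, 0.5])
relevance = gradient_x_input(x)
out, _ = forward(x)
# For a bias-free piecewise-linear network the attributions sum to the output:
print(np.isclose(relevance.sum(), out))  # True
```

Because this toy network has no bias terms, it is locally linear around `x`, so the per-feature relevances sum exactly to the prediction — a conservation property that several of the reviewed methods (e.g., layer-wise relevance propagation) aim to preserve in more general settings.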

Updated: 2021-03-05