Toward Explainable Artificial Intelligence for Regression Models: A methodological perspective
IEEE Signal Processing Magazine (IF 9.4), Pub Date: 2022-06-28, DOI: 10.1109/msp.2022.3153277
Simon Letzgus 1, Patrick Wagner 2, Jonas Lederer 1, Wojciech Samek 3, Klaus-Robert Müller 4, Grégoire Montavon 5

In addition to the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that enable an interpretation of complex nonlinear learning models, such as deep neural networks. Gaining a better understanding is especially important, e.g., in safety-critical ML applications or medical diagnostics. Although such explainable artificial intelligence (XAI) techniques have reached significant popularity for classifiers, thus far, little attention has been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences between XAI for regression and classification tasks, establish novel theoretical insights and analysis for XAIR, provide demonstrations of XAIR on genuine practical regression problems, and finally, discuss challenges remaining for the field.
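As a rough illustration of what an explanation for a regression model can look like (not taken from the paper; the linear model, variable names, and the gradient-times-input attribution rule below are assumptions chosen for simplicity), one can attribute a scalar prediction to its input features and check that the attributions sum back to the prediction relative to the bias:

```python
import numpy as np

# A tiny linear regression model: y = w . x + b (stand-in for a trained model)
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # learned weights
b = 0.5                  # bias term
x = rng.normal(size=4)   # one input instance to be explained

y = w @ x + b            # model prediction for this instance

# Gradient-times-input attribution: for a linear model the gradient is w,
# so each feature i receives relevance R_i = w_i * x_i.
relevance = w * x

# The attributions plus the bias recover the prediction exactly,
# a "conservation" property that many attribution methods aim for.
print("prediction:", y)
print("feature relevances:", relevance)
print("sum of relevances + bias:", relevance.sum() + b)
```

For nonlinear models such as deep networks, the same idea is applied with more elaborate propagation or gradient-based rules, and the reference value against which the regression output is explained becomes a central design choice.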

Updated: 2024-08-26