On the overlooked issue of defining explanation objectives for local-surrogate explainers
arXiv - CS - Machine Learning | Pub Date: 2021-06-10 | DOI: arxiv-2106.05810
Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki

Local surrogate approaches for explaining machine learning model predictions have appealing properties, such as being model-agnostic and flexible in their modelling. Several methods fit this description and share this goal. However, despite their shared overall procedure, they set out different objectives, extract different information from the black box, and consequently produce diverse explanations that are, in general, incomparable. In this work we review the similarities and differences amongst multiple such methods, with a particular focus on what information they extract from the model, as this has a large impact on the output: the explanation. We discuss the implications that this lack of agreement and clarity amongst the methods' objectives has for the research and practice of explainability.
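As a point of reference for the shared overall procedure the abstract alludes to, below is a minimal sketch of a generic LIME-style local-surrogate loop. It is an illustrative assumption, not the paper's method: the function name, the Gaussian perturbation sampling, the exponential proximity kernel, and the ridge surrogate are all hypothetical choices.

import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box, x, n_samples=1000, scale=0.5, kernel_width=1.0):
    """Fit a weighted linear surrogate to black_box around the instance x.

    black_box: callable mapping an (n, d) array of inputs to n predicted scores.
    x: 1-D array of length d, the instance whose prediction is to be explained.
    """
    rng = np.random.default_rng(0)
    # 1. Extract information from the black box: sample a neighbourhood
    #    around x and query the model there. How this neighbourhood is
    #    chosen is one of the design choices on which methods diverge.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = black_box(Z)
    # 2. Weight the samples by proximity to x (an exponential kernel here,
    #    an assumed choice; other methods weight differently or not at all).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)
    # 3. Fit an interpretable surrogate; its coefficients serve as the
    #    local explanation of the black box's behaviour around x.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_

Steps 1 and 2 are where the information extracted from the black box is decided, which is exactly the axis along which the abstract says the methods differ and their explanations become incomparable.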

Updated: 2021-06-11