On the Tractability of SHAP Explanations
arXiv - CS - Computational Complexity Pub Date : 2020-09-18 , DOI: arxiv-2009.08634
Guy Van den Broeck, Anton Lykov, Maximilian Schleich, Dan Suciu

SHAP explanations are a popular feature-attribution mechanism for explainable AI. They use game-theoretic notions to measure the influence of individual features on the prediction of a machine learning model. Despite significant recent interest from both academia and industry, it is not known whether SHAP explanations of common machine learning models can be computed efficiently. In this paper, we establish the complexity of computing the SHAP explanation in three important settings. First, we consider fully-factorized data distributions, and show that the complexity of computing the SHAP explanation is the same as the complexity of computing the expected value of the model. This fully-factorized setting is often used to simplify the SHAP computation, yet our results show that the computation can be intractable even for commonly used models such as logistic regression. Going beyond fully-factorized distributions, we show that computing SHAP explanations is already intractable in a very simple setting: computing SHAP explanations of trivial classifiers over naive Bayes distributions. Finally, we show that even computing SHAP over the empirical distribution is #P-hard.
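To make the fully-factorized setting concrete, here is a minimal brute-force sketch (not the paper's algorithm) of exact SHAP values: the coalition value `v(S)` is the expectation of the model with the features in `S` fixed to the explained instance and the remaining features drawn independently from assumed per-feature marginals. The function and variable names are illustrative; the double enumeration over coalitions and marginal assignments is what makes the computation exponential in the number of features.

```python
import itertools
import math

def shap_brute_force(f, x, marginals):
    """Exact SHAP values via the Shapley-value formula.

    f         : model mapping a full feature tuple to a number
    x         : the instance being explained
    marginals : per-feature list of (value, prob) pairs describing a
                fully-factorized (independent) data distribution
    Cost is exponential in the number of features n.
    """
    n = len(x)
    features = range(n)

    def v(S):
        # Value of coalition S: E[f(X)] with X_i fixed to x_i for i in S,
        # remaining features drawn independently from their marginals.
        total = 0.0
        free = [i for i in features if i not in S]
        for combo in itertools.product(*(marginals[i] for i in free)):
            prob = 1.0
            z = list(x)
            for i, (val, p) in zip(free, combo):
                z[i] = val
                prob *= p
            total += prob * f(tuple(z))
        return total

    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(r) * math.factorial(n - r - 1)
                          / math.factorial(n))
                phi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return phi
```

For example, for the conjunction `f(z) = z[0] * z[1]` over two uniform binary features at the instance `x = (1, 1)`, this yields `phi = [0.375, 0.375]`, and the values sum to `f(x) - E[f(X)] = 1 - 0.25`, as the Shapley efficiency axiom requires.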

Updated: 2020-09-21