Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
Artificial Intelligence (IF 14.4) Pub Date: 2021-03-31, DOI: 10.1016/j.artint.2021.103502
Kjersti Aas , Martin Jullum , Anders Løland

Explaining complex or seemingly simple machine learning models is an important practical problem. We want to explain individual predictions from such models by learning simple, interpretable explanations. The Shapley value is a game-theoretic concept that can be used for this purpose. The Shapley value framework has a series of desirable theoretical properties, and can in principle handle any predictive model. Kernel SHAP is a computationally efficient approximation to Shapley values in higher dimensions. Like several other existing methods, this approach assumes that the features are independent. Since current Shapley value approximations rely on evaluating the model on unrealistic data instances when features are correlated, the explanations may be very misleading. This is the case even if a simple linear model is used for predictions. In this paper, we extend the Kernel SHAP method to handle dependent features. We provide several examples of linear and non-linear models with various degrees of feature dependence, where our method gives more accurate approximations to the true Shapley values.
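For readers who want a concrete baseline to compare against, the sketch below uses the Python shap package's KernelExplainer on a toy linear model with two strongly correlated features. The simulated data, coefficients, and variable names are illustrative assumptions, not taken from the paper; the point is only to show where the independence assumption enters: the background data is used to marginalise out "absent" features as if they were independent of the "present" ones, which is exactly the step the paper replaces with proper conditional distributions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
import shap

# Simulate two strongly dependent features (illustrative toy data).
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # x2 is highly correlated with x1
X = np.column_stack([x1, x2])
y = 2 * x1 + 3 * x2 + rng.normal(scale=0.1, size=n)

model = LinearRegression().fit(X, y)

# Standard Kernel SHAP: absent features are averaged over the background
# sample independently of the present ones, so for correlated features the
# model is evaluated on unrealistic feature combinations.
background = X[:100]
explainer = shap.KernelExplainer(model.predict, background)
phi = explainer.shap_values(X[:5])
print(phi)
```

The conditional-expectation approach proposed in the paper is not shown here; it is available separately (e.g. in the authors' shapr R package), and on data like the above it yields attributions that differ noticeably from the independence-based ones.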




Updated: 2021-04-19