A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C
Advances in Data Analysis and Classification (IF 1.6) Pub Date: 2020-09-02, DOI: 10.1007/s11634-020-00418-3
Yanou Ramon, David Martens, Foster Provost, Theodoros Evgeniou

Predictive systems based on high-dimensional behavioral and textual data have serious comprehensibility and transparency issues: linear models require investigating thousands of coefficients, while the opaqueness of nonlinear models makes matters worse. Counterfactual explanations are becoming increasingly popular for generating insight into model predictions. This study aligns the recently proposed linear interpretable model-agnostic explainer (LIME) and Shapley additive explanations (SHAP) with the notion of counterfactual explanations, and empirically compares the effectiveness and efficiency of the resulting algorithms, LIME-C and SHAP-C, against SEDC, a model-agnostic heuristic search algorithm for finding evidence counterfactuals, on 13 behavioral and textual data sets. We show that the different search methods have different strengths and, importantly, that there is much room for future research.
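
The evidence-counterfactual idea behind these algorithms can be made concrete with a short sketch: starting from one instance, remove active features until the model's predicted class changes; the removed feature set is the explanation. The Python snippet below only illustrates that idea under simplifying assumptions (a toy corpus, a greedy rather than best-first removal order, and a made-up helper name evidence_counterfactual); it is not the SEDC algorithm benchmarked in the paper.

# Minimal sketch of an evidence counterfactual for a text classifier:
# greedily remove words from one document until the predicted class flips.
# Illustration only; SEDC itself uses a best-first search, and the corpus
# and helper name here are assumptions made for brevity.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["great plot and acting", "boring plot and weak acting",
        "great soundtrack", "weak and boring"]
labels = [1, 0, 1, 0]  # 1 = positive review, 0 = negative review

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(docs), labels)

def evidence_counterfactual(text, max_removals=10):
    """Return a list of words whose removal flips the predicted class."""
    words = text.split()
    original = clf.predict(vec.transform([text]))[0]
    orig_col = list(clf.classes_).index(original)
    removed = []
    for _ in range(min(max_removals, len(words))):
        # Try removing each remaining word; keep the removal that lowers
        # the predicted probability of the original class the most.
        best_word, best_prob = None, 1.0
        for w in set(words):
            candidate = " ".join(x for x in words if x != w)
            prob = clf.predict_proba(vec.transform([candidate]))[0][orig_col]
            if prob < best_prob:
                best_word, best_prob = w, prob
        if best_word is None:
            break
        words = [x for x in words if x != best_word]
        removed.append(best_word)
        if clf.predict(vec.transform([" ".join(words)]))[0] != original:
            return removed  # class changed: counterfactual explanation found
    return None  # no counterfactual found within the removal budget

print(evidence_counterfactual("great plot and acting"))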




Updated: 2020-09-02