Visual diagnostics of an explainer model: Tools for the assessment of LIME explanations
Statistical Analysis and Data Mining (IF 2.1) Pub Date: 2021-02-19, DOI: 10.1002/sam.11500
Katherine Goode, Heike Hofmann

The importance of providing explanations for predictions made by black-box models has led to the development of explainer model methods such as LIME (local interpretable model-agnostic explanations). LIME uses a surrogate model to explain the relationship between predictor variables and predictions from a black-box model in a local region around a prediction of interest. However, the quality of the resulting explanations depends on how well the explainer model captures the black-box model in the specified local region. Here we introduce three visual diagnostics to assess the quality of LIME explanations: (1) explanation scatterplots, (2) assessment metric plots, and (3) feature heatmaps. We apply the visual diagnostics to a forensic bullet matching dataset to show examples where the LIME explanations depend on the tuning parameter values and the explainer model oversimplifies the black-box model. Our examples raise concerns about claims made about LIME that are similar to other criticisms in the literature.
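To make the setup concrete, here is a minimal sketch (not taken from the paper) of fitting a local LIME surrogate with the Python `lime` package. The random-forest model and dataset are stand-ins rather than the paper's bullet matching data, and `kernel_width` stands in for the kind of tuning parameter the abstract says the explanations can depend on. The surrogate's local R² (exposed by `lime` as `explanation.score`) is one simple, single-number check of how faithfully the explainer captures the black-box model in the chosen region, which is the question the paper's visual diagnostics probe in more detail.

```python
# Minimal sketch (assumed setup, not the paper's): fit a black-box model,
# then fit a LIME surrogate around one prediction of interest.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# The black-box model whose predictions we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# kernel_width controls the size of the "local region"; lime's default is
# 0.75 * sqrt(n_features). Explanations can be sensitive to this choice.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
    kernel_width=3.0,
)

# Fit a local surrogate model around one prediction of interest.
explanation = explainer.explain_instance(
    X[0], black_box.predict_proba, num_features=5
)

print(explanation.as_list())  # (feature condition, weight) pairs
print(explanation.score)      # R^2 of the local surrogate fit: a crude
                              # measure of how well it captures the black box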

Last updated: 2021-03-15