G-LIME: Statistical learning for local interpretations of deep neural networks using global priors
Artificial Intelligence ( IF 5.1 ) Pub Date : 2022-11-14 , DOI: 10.1016/j.artint.2022.103823
Xuhong Li , Haoyi Xiong , Xingjian Li , Xiao Zhang , Ji Liu , Haiyan Jiang , Zeyu Chen , Dejing Dou

To explain the prediction of a Deep Neural Network (DNN) on a given sample, LIME [1] and its derivatives approximate the local behavior of the DNN around the data point with linear surrogates. Though these algorithms interpret the DNN by finding the key features used for classification, the random interpolations used by LIME perturb the explanation and cause instability and inconsistency across repeated LIME computations. To tackle this issue, we propose G-LIME, which extends vanilla LIME through high-dimensional Bayesian linear regression with sparsity-inducing and informative global priors. Specifically, given a dataset representing the population of samples (e.g., the training set), G-LIME first obtains a global explanation of the DNN model from the whole dataset. Then, for a new data point, G-LIME applies a modified ElasticNet-like estimator to refine the local explanation, balancing the distance to the global explanation against sparsity/feature selection in the explanation. Finally, G-LIME uses Least Angle Regression (LARS) to retrieve the solution path of the modified ElasticNet under varying ℓ1-regularization, screening and ranking the importance of features [2] as the explanation result. Through extensive experiments on real-world tasks, we show that the proposed method yields more stable, consistent, and accurate results than LIME.
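The three-stage pipeline in the abstract (global explanation, prior-regularized local refinement, LARS-based feature ranking) can be sketched with ordinary scikit-learn primitives. This is only an illustrative approximation, not the paper's actual estimator: the Bayesian regression with an informative global prior is stood in for by fitting an ElasticNet to the residual after subtracting the global explanation's predictions, the synthetic data and the black-box model `f` are placeholders, and all variable names are invented for the sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet, lars_path

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                    # stand-in "training set"
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 1.0]                     # only 3 features matter
f = X @ true_w + 0.1 * rng.normal(size=500)       # stand-in for DNN outputs

# Stage 1: global explanation from the whole dataset (sparse linear fit).
global_expl = Lasso(alpha=0.05).fit(X, f).coef_

# Stage 2: local surrogate around a new point x0, shrunk toward the
# global explanation by fitting an ElasticNet to the residual weights
# (a simple stand-in for the informative-prior Bayesian regression).
x0 = X[0]
Z = x0 + 0.1 * rng.normal(size=(200, 20))         # local perturbations
fz = Z @ true_w + 0.1 * rng.normal(size=200)      # DNN outputs on them
resid_fit = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(Z, fz - Z @ global_expl)
local_expl = global_expl + resid_fit.coef_

# Stage 3: rank features by the order in which they enter the LARS
# solution path as the l1 penalty decreases (earlier entry = more important).
Zc = Z - Z.mean(axis=0)                           # lars_path fits no intercept
fzc = fz - fz.mean()
_, entry_order, _ = lars_path(Zc, fzc, method="lar")
ranking = list(entry_order)
```

With this synthetic setup, the first features to enter the path are the ones with nonzero true weights, so the entry order recovers the planted importance ranking.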




Updated: 2022-11-18