Show or suppress? Managing input uncertainty in machine learning model explanations
Artificial Intelligence (IF 14.4) | Pub Date: 2021-01-27 | DOI: 10.1016/j.artint.2021.103456
Danding Wang, Wencan Zhang, Brian Y. Lim

Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference. However, measurements can be uncertain, and it is unclear how awareness of input uncertainty affects users' trust in explanations. We propose and study two approaches to help users manage their perception of uncertainty in a model explanation: 1) transparently show uncertainty in feature attributions so that users can reflect on it, and 2) suppress attribution to features with uncertain measurements, shifting attribution to other features by regularizing with an uncertainty penalty. Through simulation experiments, qualitative interviews, and quantitative user evaluations, we identified the benefits of moderately suppressing attribution uncertainty, as well as concerns about showing attribution uncertainty. This work adds to the understanding of how to handle and communicate uncertainty for model interpretability.
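The abstract does not describe an implementation, so the following is only a minimal sketch of the two ideas, assuming a toy linear model where the attribution of feature i is its input-times-gradient w_i * x_i. All function and variable names (attribution_uncertainty, loss_with_uncertainty_penalty, sigma, lam) are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear model with weights w, a measured input x, and a per-feature
# measurement uncertainty sigma (standard deviation). All values are illustrative.
w = np.array([0.8, -0.5, 0.3])
x = np.array([1.2, 0.4, 2.0])
sigma = np.array([0.05, 0.50, 0.05])   # the second feature is measured very noisily

def attribution(w, x):
    """Input-times-gradient attribution; for a linear model this is simply w_i * x_i."""
    return w * x

# 1) "Show": propagate input uncertainty into the attributions by Monte Carlo
#    sampling over the measurement noise and reporting the per-feature spread.
def attribution_uncertainty(w, x, sigma, n_samples=1000):
    samples = rng.normal(loc=x, scale=sigma, size=(n_samples, x.size))
    attrs = samples * w                  # attribution for each noisy input sample
    return attrs.mean(axis=0), attrs.std(axis=0)

print("point attribution:", attribution(w, x))
mean_attr, std_attr = attribution_uncertainty(w, x, sigma)
print("attribution mean :", mean_attr)
print("attribution std  :", std_attr)   # could be rendered as error bars for users

# 2) "Suppress": add an uncertainty-weighted attribution penalty to the training
#    loss so the model learns to rely less on noisily measured features; lam
#    controls how strongly attribution to uncertain features is suppressed.
def loss_with_uncertainty_penalty(w, X, y, sigma, lam=0.5):
    mse = np.mean((X @ w - y) ** 2)
    penalty = np.mean(((X * w) ** 2) @ (sigma ** 2))
    return mse + lam * penalty

X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 0.0, 0.5]) + rng.normal(scale=0.1, size=50)
print("penalized loss   :", loss_with_uncertainty_penalty(w, X, y, sigma))
```

In the abstract's framing, the sampled attribution spread corresponds to "show" and the penalty term to "suppress"; an actual implementation would use the model's real attribution method (e.g., gradient-based or SHAP-style attributions) in place of the linear shortcut, and the moderate-suppression finding corresponds to choosing an intermediate value of lam.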



Updated: 2021-01-28