AdViCE: Aggregated Visual Counterfactual Explanations for Machine Learning Model Validation
arXiv - CS - Human-Computer Interaction. Pub Date: 2021-09-12, DOI: arXiv:2109.05629
Oscar Gomez, Steffen Holter, Jun Yuan, Enrico Bertini

Rapid improvements in the performance of machine learning models have pushed them to the forefront of data-driven decision-making. Meanwhile, the increased integration of these models into various application domains has further highlighted the need for greater interpretability and transparency. To identify problems such as bias, overfitting, and incorrect correlations, data scientists require tools that explain the mechanisms by which these models reach their decisions. In this paper we introduce AdViCE, a visual analytics tool that aims to guide users in black-box model debugging and validation. The solution rests on two main visual user interface innovations: (1) an interactive visualization design that enables the comparison of decisions on user-defined data subsets; (2) an algorithm and visual design to compute and visualize counterfactual explanations: explanations that depict model outcomes when data features are perturbed from their original values. We demonstrate the tool through a use case that showcases the capabilities and potential limitations of the proposed approach.
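To make the notion of a counterfactual explanation concrete, the sketch below searches for the smallest single-feature perturbation that flips a model's decision. This is a generic, minimal illustration with a hypothetical toy model and greedy search; it is not the algorithm used by AdViCE, whose method also aggregates explanations across data subsets.

```python
# Counterfactual explanation, illustrated: starting from an input the model
# rejects, perturb one feature in fixed steps until the decision flips.
# Toy model and feature names are hypothetical, for illustration only.

def model(x):
    """Toy black-box classifier: approve (1) if income - debt > 20."""
    return 1 if x["income"] - x["debt"] > 20 else 0

def counterfactual(x, feature, step, max_steps=100):
    """Greedily perturb `feature` by `step` until the outcome changes.

    Returns the perturbed input (the counterfactual), or None if no
    flip occurs within `max_steps` perturbations.
    """
    original = model(x)
    cf = dict(x)  # leave the original input untouched
    for _ in range(max_steps):
        cf[feature] += step
        if model(cf) != original:
            return cf  # first perturbation along this path that flips the decision
    return None

applicant = {"income": 50, "debt": 40}       # model(applicant) == 0 (denied)
cf = counterfactual(applicant, "income", 1)  # raise income until approved
print(cf)  # {'income': 61, 'debt': 40}
```

The returned counterfactual answers the question "what would have to change for the model to decide differently?", which is why such explanations are useful for spotting incorrect correlations: if an irrelevant feature flips the decision with a tiny perturbation, the model is likely relying on it.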

Updated: 2021-09-14