Explainability Requires Interactivity
arXiv - CS - Human-Computer Interaction Pub Date : 2021-09-16 , DOI: arxiv-2109.07869
Matthias Kirchler, Martin Graf, Marius Kloft, Christoph Lippert

When explaining the decisions of deep neural networks, simple stories are tempting but dangerous. Especially in computer vision, the most popular explanation approaches give their users a false sense of comprehension and paint an overly simplistic picture. We introduce an interactive framework for understanding the highly complex decision boundaries of modern vision models. It allows the user to exhaustively inspect, probe, and test a network's decisions. Across a range of case studies, we compare the power of our interactive approach with static explanation methods, showing how the latter can lead a user astray, with potentially severe consequences.
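To make the contrast concrete, here is a minimal toy sketch (not the authors' framework) of the difference between a static explanation and interactive probing. It uses a hypothetical linear classifier in place of a vision model: the gradient-based "saliency" is a single static map, while a `probe` function lets a user walk along a chosen input direction and observe where the predicted class actually flips.

```python
import numpy as np

# Toy linear classifier standing in for a vision model (hypothetical weights).
# score > 0 -> class A, score <= 0 -> class B.
w = np.array([2.0, -1.0, 0.5])
b = -0.25

def score(x):
    return float(w @ x + b)

def saliency(x):
    # Static explanation: the input gradient of the score.
    # For a linear model this is just w, independent of x -- one fixed
    # "story" that cannot reveal how the decision varies across inputs.
    return w

def probe(x, direction, steps=20, radius=2.0):
    # Interactive probing: walk along a user-chosen direction and record
    # the offsets t at which the predicted class flips.
    d = direction / np.linalg.norm(direction)
    ts = np.linspace(-radius, radius, steps)
    prev = score(x + ts[0] * d) > 0
    flips = []
    for t in ts[1:]:
        cur = score(x + t * d) > 0
        if cur != prev:
            flips.append(round(float(t), 2))
            prev = cur
    return flips

x = np.array([0.2, 0.1, 0.0])
static_map = saliency(x)                        # same map for every input
flips = probe(x, np.array([1.0, 0.0, 0.0]))     # where the boundary sits along this axis
```

The static map is identical everywhere, while probing exposes the location of the decision boundary along any direction the user cares about; this is the gap between a fixed explanation and exhaustive interactive inspection that the abstract highlights.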

Updated: 2021-09-17