Progressive Disclosure
ACM Transactions on Interactive Intelligent Systems ( IF 3.4 ) Pub Date : 2020-07-07 , DOI: 10.1145/3374218
Steve Whittaker 1 , Aaron Springer 1

As we increasingly delegate important decisions to intelligent systems, it is essential that users understand how algorithmic decisions are made. Prior work has often taken a techno-centric approach, focusing on new computational techniques to support transparency. In contrast, this article employs empirical methods to better understand user reactions to transparency and thereby motivate user-centric designs for transparent systems. We assess user reactions to transparency feedback in four studies of an emotional analytics system. In Study 1, users anticipated that a transparent system would perform better but unexpectedly retracted this evaluation after experience with the system. Study 2 offers an explanation for this paradox by showing that the benefits of transparency are context dependent. On the one hand, transparency can help users form a model of the underlying algorithm's operation. On the other hand, positive accuracy perceptions may be undermined when transparency reveals algorithmic errors. Study 3 explored real-time reactions to transparency. Its results confirmed Study 2, showing that users are more likely both to consult transparency information and to experience greater system insight when formulating a model of system operation. Study 4 used qualitative methods to explore real-time user reactions in order to motivate transparency design principles. Results again suggest that users may benefit from initially simplified feedback that hides potential system errors and assists them in building working heuristics about system operation. We use these findings to motivate new progressive disclosure principles for transparency in intelligent systems and discuss theoretical implications.
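The progressive disclosure principle described above can be illustrated with a minimal sketch. This is not the authors' implementation; all names, thresholds, and fields below are hypothetical, and the staging logic (simplified feedback for novices, full algorithmic detail for experienced users) is only one plausible reading of the design principle:

```python
# Hypothetical sketch of progressive disclosure for transparency feedback:
# show simplified output at first, revealing algorithmic detail (including
# low-confidence predictions that might expose visible errors) only after
# the user has had time to build a working model of the system.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str          # e.g., an inferred emotion
    confidence: float   # model confidence in [0, 1]
    evidence: str       # cues that drove the prediction

def feedback(pred: Prediction, interactions: int,
             novice_threshold: int = 10) -> str:
    """Return transparency feedback staged by user experience."""
    if interactions < novice_threshold:
        # Novice stage: simplified feedback; suppress uncertain predictions
        # so exposed errors do not undermine accuracy perceptions.
        if pred.confidence < 0.6:
            return "Analyzing..."
        return f"Detected: {pred.label}"
    # Experienced stage: full disclosure helps users refine their model
    # of how the system operates.
    return (f"Detected: {pred.label} "
            f"(confidence {pred.confidence:.0%}; cues: {pred.evidence})")
```

For example, the same high-confidence prediction would render as `Detected: joy` for a new user but include confidence and evidence for an experienced one; the single threshold stands in for whatever richer staging a real system would use.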

Updated: 2020-07-07