Harnessing Explanations to Bridge AI and Humans
arXiv - CS - Human-Computer Interaction. Pub Date: 2020-03-16, DOI: arxiv-2003.07370
Vivian Lai, Samuel Carton, Chenhao Tan

Machine learning models are increasingly integrated into societally critical applications such as recidivism prediction and medical diagnosis, thanks to their superior predictive power. In these applications, however, full automation is often not desired due to ethical and legal concerns. The research community has thus ventured into developing interpretable methods that explain machine predictions. While these explanations are meant to help humans understand machine predictions and thereby make better decisions, this hypothesis is not supported by many recent studies. To improve human decision-making with AI assistance, we propose future directions for closing the gap between the efficacy of explanations and improvement in human performance.
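To make "interpretable methods that explain machine predictions" concrete, here is a minimal sketch of one common explanation style: for a linear model, each feature's contribution to the score is simply weight × value, and the ranked contributions form the explanation shown to a human decision-maker. The model, weights, and feature names below are made up for illustration and are not from the paper.

```python
# Sketch of additive feature attribution for a linear model.
# Weights and feature values are hypothetical, chosen only to
# illustrate a recidivism-style prediction task.

def explain_linear_prediction(weights, features):
    """Return (feature, contribution) pairs, sorted by |contribution|.

    For a linear model, contribution = weight * feature value; the
    sorted list tells a human which inputs pushed the score up or down.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Hypothetical model weights and one hypothetical defendant's features.
weights = {"prior_offenses": 0.8, "age": -0.05, "employment": -0.6}
features = {"prior_offenses": 3, "age": 40, "employment": 1}

explanation = explain_linear_prediction(weights, features)
for name, contribution in explanation:
    print(f"{name}: {contribution:+.2f}")
```

Methods such as LIME and SHAP generalize this idea to non-linear models by locally approximating them with an additive attribution of this form; whether showing such attributions actually improves human decisions is precisely the gap the paper discusses.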

Updated: 2020-03-18