"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
arXiv - CS - Computation and Language. Pub Date: 2020-01-14, DOI: arxiv-2001.05871
Vivian Lai, Han Liu, Chenhao Tan

To support human decision making with machine learning models, we often need to elucidate patterns embedded in the models that are unsalient, unknown, or counterintuitive to humans. While existing approaches focus on explaining machine predictions with real-time assistance, we explore model-driven tutorials to help humans understand these patterns in a training phase. We consider both tutorials with guidelines from scientific papers, analogous to current practices of science communication, and automatically selected examples from training data with explanations. We use deceptive review detection as a testbed and conduct large-scale, randomized human-subject experiments to examine the effectiveness of such tutorials. We find that tutorials indeed improve human performance, with and without real-time assistance. In particular, although deep learning achieves better predictive performance than simple models, tutorials and explanations from simple models are more useful to humans. Our work suggests future directions for human-centered tutorials and explanations towards a synergy between humans and AI.
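The kind of pattern the title alludes to — a word like "Chicago" being predictive of deception — is the sort of signal a simple linear model can surface as an explanation. As a minimal illustration (not the paper's actual method or data), the sketch below ranks words by smoothed log-odds between deceptive and truthful reviews; the toy reviews are invented for illustration:

```python
from collections import Counter
import math

def word_log_odds(deceptive, truthful, smoothing=1.0):
    """Smoothed log-odds of each word appearing in deceptive vs. truthful reviews.

    Positive scores mark words more characteristic of deceptive reviews.
    """
    d_counts = Counter(w for r in deceptive for w in r.lower().split())
    t_counts = Counter(w for r in truthful for w in r.lower().split())
    vocab = set(d_counts) | set(t_counts)
    d_total = sum(d_counts.values()) + smoothing * len(vocab)
    t_total = sum(t_counts.values()) + smoothing * len(vocab)
    return {
        w: math.log((d_counts[w] + smoothing) / d_total)
         - math.log((t_counts[w] + smoothing) / t_total)
        for w in vocab
    }

# Toy labeled reviews (invented; not the Ott et al. deception dataset used in the paper)
deceptive = [
    "my husband and i visited chicago and loved the luxury hotel",
    "i visited chicago with my family and the hotel was amazing",
]
truthful = [
    "the room was small and the elevator floor smelled",
    "location was fine but the bathroom floor needed cleaning",
]

scores = word_log_odds(deceptive, truthful)
top = sorted(scores, key=scores.get, reverse=True)[:5]  # most "deceptive" words
```

In this toy sample, "chicago" scores highly because it appears only in deceptive reviews — the same kind of counterintuitive cue a tutorial would need to explain to human subjects.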

Updated: 2020-01-17