Mental models and expectation violations in conversational AI interactions
Decision Support Systems (IF 7.5), Pub Date: 2021-02-13, DOI: 10.1016/j.dss.2021.113515
G. Mark Grimes, Ryan M. Schuetzler, Justin Scott Giboney

Artificial Intelligence is increasingly integrated into many aspects of human life. One prominent form of AI is the conversational agent (CA), such as Siri, Alexa, and the chatbots used for customer service on websites and other information systems. It is widely accepted that humans treat systems as social actors. Leveraging this bias, companies sometimes attempt to pass a CA off as a human customer service representative. Beyond the ethical and legal questions surrounding this practice, the benefits and drawbacks of a CA pretending to be human remain unclear due to a lack of study. While more human-like interactions can improve outcomes, users who discover that the CA is not human may react negatively, potentially causing reputational harm to the company. In this research we use Expectation Violation Theory to explain what happens when users have high or low expectations of a conversation. We conducted an experiment with 175 participants in which some participants were told they were interacting with a CA while others were told they were interacting with a human. We further divided these groups so that some participants interacted with a CA with low conversational capability while others interacted with a CA with high conversational capability. The results show that the expectations users form before the interaction change how they evaluate the CA, beyond the CA's actual performance. These findings provide guidance not only to developers of conversational agents, but also to developers of other technologies where users may be uncertain of a system's capabilities.



Updated: 2021-03-25