How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair
Telematics and Informatics ( IF 7.6 ) Pub Date : 2021-02-24 , DOI: 10.1016/j.tele.2021.101595
Taenyun Kim , Hayeon Song

Trust is essential to individuals’ perception, behavior, and evaluation of intelligent agents. Because trust is the primary motive for people to accept new technology, it is crucial to repair it when it is damaged. This study investigated how intelligent agents should apologize to recover trust, and how the effectiveness of an apology differs when the agent is human-like rather than machine-like, drawing on two seemingly competing frameworks: the Computers-Are-Social-Actors paradigm and automation bias. A 2 (agent: human-like vs. machine-like) × 2 (apology attribution: internal vs. external) between-subjects experiment (N = 193) was conducted in the context of the stock market. Participants were presented with a scenario in which they made investment choices based on an artificial intelligence agent’s advice. To trace the trajectory of initial trust-building, trust violation, and trust repair, we designed an investment game consisting of five rounds of eight investment choices (40 investment choices in total). The results show that trust was repaired more effectively when a human-like agent apologized with internal rather than external attribution. The opposite pattern was observed among participants with machine-like agents: the external-attribution condition showed better trust repair than the internal one. Both theoretical and practical implications are discussed.
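The experimental structure described above — four between-subjects cells crossing agent type with apology attribution, and a game of five rounds of eight investment choices — can be sketched as follows. This is a minimal illustrative sketch only: the condition labels, the round-robin assignment scheme, and all function names are assumptions for illustration, not the authors' actual materials or code.

```python
import itertools

# Hypothetical sketch of the 2 (agent) x 2 (apology attribution)
# between-subjects design; labels and assignment are assumed, not
# taken from the study's materials.
AGENT_TYPES = ["human-like", "machine-like"]
ATTRIBUTIONS = ["internal", "external"]
CONDITIONS = list(itertools.product(AGENT_TYPES, ATTRIBUTIONS))  # 4 cells

ROUNDS = 5
CHOICES_PER_ROUND = 8  # 5 rounds x 8 choices = 40 choices in total


def assign_condition(participant_id: int) -> tuple:
    """Round-robin assignment of a participant to one of the four cells."""
    return CONDITIONS[participant_id % len(CONDITIONS)]


def session_structure(participant_id: int) -> dict:
    """Return the structural skeleton of one participant's session."""
    agent, attribution = assign_condition(participant_id)
    choices = [(r, c) for r in range(ROUNDS)
               for c in range(CHOICES_PER_ROUND)]
    return {"agent": agent,
            "attribution": attribution,
            "n_choices": len(choices)}


if __name__ == "__main__":
    s = session_structure(0)
    print(s["agent"], s["attribution"], s["n_choices"])
```

With this skeleton, the reported interaction corresponds to comparing trust-recovery trajectories across the four cells after the trust-violation round.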




Updated: 2021-03-04