Emotional Support from AI Chatbots: Should a Supportive Partner Self-Disclose or Not?
Journal of Computer-Mediated Communication ( IF 5.4 ) Pub Date : 2021-04-19 , DOI: 10.1093/jcmc/zmab005
Jingbo Meng 1 , Yue (Nancy) Dai 2

This study examined how and when a chatbot's emotional support was effective in reducing people's stress and worry. It compared emotional support from chatbot versus human partners in terms of its process and its conditional effects on stress/worry reduction. In an online experiment, participants discussed a personal stressor with a chatbot or a human partner who provided neither, one, or both of emotional support and reciprocal self-disclosure. The results showed that a conversational partner's emotional support reduced participants' stress and worry through perceived supportiveness of the partner, and that the link from emotional support to perceived supportiveness was stronger for a human partner than for a chatbot. A conversational partner's reciprocal self-disclosure enhanced the positive effect of emotional support on worry reduction. However, when emotional support was absent, a solely self-disclosing chatbot reduced stress even less than a chatbot that did not respond to participants' stress at all.

Lay Summary: In recent years, AI chatbots have increasingly been used to provide empathy and support to people experiencing stressful times. This study compared emotional support from a chatbot with that from a human partner, asking which could most effectively reduce people's worry and stress. When either a person or a chatbot engaged with a stressed individual and shared their own experiences, they were able to build rapport, and this kind of reciprocal self-disclosure was effective in calming the individual's worry. Interestingly, if a chatbot only reciprocally self-disclosed but offered no emotional support, the outcome was worse than if the chatbot did not respond at all. This work will inform the development of supportive chatbots by providing insights into when and what they should self-disclose.

Updated: 2021-04-19