Use of offensive language in human-artificial intelligence chatbot interaction: The effects of ethical ideology, social competence, and perceived humanlikeness
Computers in Human Behavior (IF 9.0), Pub Date: 2021-03-26, DOI: 10.1016/j.chb.2021.106795
Namkee Park , Kyungeun Jang , Seonggyeol Cho , Jinyoung Choi

This study examined the factors that affect artificial intelligence (AI) chatbot users' use of profanity and offensive words, employing the concepts of ethical ideology, social competence, and perceived humanlikeness of chatbots. The study also looked into users' liking of chatbots' responses to users' utterances of profanity and offensive words. Using a national survey (N = 645), the study found that users' idealism orientation was a significant factor in explaining the use of such offensive language. In addition, users high in idealism showed a liking for chatbots' active intervention, whereas those high in relativism showed a liking for chatbots' reactive responses. Moreover, users' perceived humanlikeness of chatbots increased their likelihood of using offensive words targeting dislikable acquaintances, racial/ethnic groups, and political parties. These findings are expected to fill the gap between the current use of AI chatbots and the lack of empirical studies examining language use.




Updated: 2021-04-01