Something's Fishy About It: How Opinion Congeniality and Explainability Affect Motivated Attribution to Artificial Intelligence Versus Human Comment Moderators
Cyberpsychology, Behavior, and Social Networking (IF 6.135) Pub Date: 2022-08-09, DOI: 10.1089/cyber.2021.0347
Eun-Ju Lee, Hyun Suk Kim, Yoo Ji Suh, Jin Won Park

An online experiment (N = 384) examined when and how the identity of the comment moderator (artificial intelligence [AI] vs. human) on a news website affects the extent to which individuals (a) suspect political motives for comment removal and (b) endorse the AI heuristic ("AI is objective, neutral, accurate, and fair"). Specifically, we investigated how the provision of an explanation for comment removal (none vs. real vs. placebic) and the opinion congeniality between the remaining comments and the user's opinion (uncongenial vs. congenial) qualify social responses to AI. Results showed that news users were more suspicious of political motives behind an AI (vs. human) moderator's comment removal (a) when the remaining comments were uncongenial and (b) when no explanation was offered for the deleted comments. Providing a real explanation (vs. none) attenuated participants' suspicion of political motives behind comment removal, but only for the AI moderator. When AI moderated the comments section, exposure to congenial (vs. uncongenial) comments led participants to endorse the AI heuristic more strongly, but only in the absence of an explanation for comment removal. By contrast, participants' belief in the AI heuristic was stronger when a human moderator preserved uncongenial (vs. congenial) comments. Apparently, they considered AI a viable alternative to a human moderator whose performance was unsatisfactory.

Updated: 2022-08-10