Can AI Enhance People’s Support for Online Moderation and Their Openness to Dissimilar Political Views?
Journal of Computer-Mediated Communication (IF 7.432), Pub Date: 2021-05-07, DOI: 10.1093/jcmc/zmab006
Magdalena Wojcieszak, Arti Thakur, João Fernando Ferreira Gonçalves, Andreu Casas, Ericka Menchen-Trevino, & Miriam Boon

Although artificial intelligence is blamed for many societal challenges, it also has underexplored potential in political contexts online. We rely on six preregistered experiments in three countries (N = 6,728) to test the expectation that AI and AI-assisted humans would be perceived more favorably than humans (a) across various content moderation, generation, and recommendation scenarios and (b) when exposing individuals to counter-attitudinal political information. Contrary to the preregistered hypotheses, participants see human agents as more just than AI across the scenarios tested, with the exception of news recommendations. At the same time, participants are not more open to counter-attitudinal information attributed to AI rather than a human or an AI-assisted human. These findings, which—with minor variations—emerged across countries, scenarios, and issues, suggest that human intervention is preferred online and that people reject dissimilar information regardless of its source. We discuss the theoretical and practical implications of these findings.

Lay Summary: In the era of unprecedented political divides and misinformation, artificial intelligence (AI) and algorithms are often seen as the culprits. In contrast to these dominant narratives, we argued that AI might be seen as less biased than a human in online political contexts. We relied on six preregistered experiments in three countries (the United States, Spain, Poland) to test whether internet users perceive AI and AI-assisted humans more favorably than humans alone: (a) across various distinct scenarios online, and (b) when exposing people to opposing political information on a range of contentious issues. Contrary to our expectations, human agents were consistently perceived more favorably than AI except when recommending news. These findings suggest that people prefer human intervention in most online political contexts.

Updated: 2021-05-07