Artificial intelligence and moral dilemmas: Perception of ethical decision-making in AI
Journal of Experimental Social Psychology (IF 3.2), Pub Date: 2022-04-05, DOI: 10.1016/j.jesp.2022.104327
Zaixuan Zhang, Zhansheng Chen, Liying Xu

Artificial intelligence (AI) has become deeply integrated into daily life; it is therefore important to examine how people perceive AI when it functions as a decision-maker, especially in situations involving moral dilemmas. Across four studies (N = 804), we found that people perceive AI as more likely than human beings to make utilitarian choices (Studies 1–4). We then measured people's perceptions of AI (both warmth and competence) and explored how these perceptions might contribute to the predicted main effect (Study 2). In addition, the main effect was replicated in both impersonal moral dilemma and personal high-conflict moral dilemma situations (Studies 3 and 4). We discuss the implications of these findings for moral dilemmas and human–computer interaction.




Updated: 2022-04-05