The power to harm: AI assistants pave the way to unethical behavior
Current Opinion in Psychology (IF 6.3), Pub Date: 2022-06-11, DOI: 10.1016/j.copsyc.2022.101382
Jonathan Gratch, Nathanael J. Fast

Advances in artificial intelligence (AI) enable new ways of exercising and experiencing power by automating interpersonal tasks such as interviewing and hiring workers, managing and evaluating work, setting compensation, and negotiating deals. As these techniques become more sophisticated, they increasingly support personalization where users can “tell” their AI assistants not only what to do, but how to do it: in effect, dictating the ethical values that govern the assistant's behavior. Importantly, these new forms of power could bypass existing social and regulatory checks on unethical behavior by introducing a new agent into the equation. Organization research suggests that acting through human agents (i.e., the problem of indirect agency) can undermine ethical forecasting such that actors believe they are acting ethically, yet a) show less benevolence for the recipients of their power, b) receive less blame for ethical lapses, and c) anticipate less retribution for unethical behavior. We review a series of studies illustrating how, across a wide range of social tasks, people may behave less ethically and be more willing to deceive when acting through AI agents. We conclude by examining boundary conditions and discussing potential directions for future research.



