Risk and prosocial behavioural cues elicit human-like response patterns from AI chatbots
Scientific Reports (IF 4.6) Pub Date: 2024-03-26, DOI: 10.1038/s41598-024-55949-y
Yukun Zhao, Zhen Huang, Martin Seligman, Kaiping Peng

Emotions, long deemed a distinctly human characteristic, guide a repertoire of behaviors, e.g., promoting risk-aversion under negative emotional states or generosity under positive ones. The question of whether Artificial Intelligence (AI) can possess emotions remains elusive, chiefly due to the absence of an operationalized consensus on what constitutes 'emotion' within AI. Adopting a pragmatic approach, this study investigated the response patterns of AI chatbots—specifically, large language models (LLMs)—to various emotional primes. We engaged AI chatbots as one would human participants, presenting scenarios designed to elicit positive, negative, or neutral emotional states. Multiple accounts of OpenAI's ChatGPT Plus were then tasked with responding to inquiries concerning investment decisions and prosocial behaviors. Our analysis revealed that ChatGPT-4 bots, when primed with positive, negative, or neutral emotions, exhibited distinct response patterns in both risk-taking and prosocial decisions, a phenomenon less evident in the ChatGPT-3.5 iterations. This observation suggests an enhanced capacity for modulating responses based on emotional cues in more advanced LLMs. While these findings do not suggest the presence of emotions in AI, they underline the feasibility of swaying AI responses by leveraging emotional indicators.
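The prime-then-query protocol described in the abstract can be sketched in code. The following is a minimal illustration only, not the study's actual materials: the prime texts, the investment question, and the helper name `build_primed_messages` are hypothetical stand-ins, and the commented-out request assumes OpenAI's standard chat-completions interface.

```python
# Sketch of an emotional-priming trial for an LLM chatbot.
# All prompt texts below are hypothetical examples, not the study's materials.

PRIMES = {
    "positive": "Recall a recent moment that made you feel proud and joyful, "
                "and describe it briefly.",
    "negative": "Recall a recent moment that made you feel anxious and sad, "
                "and describe it briefly.",
    "neutral":  "Briefly describe a typical morning routine.",
}

QUESTION = (
    "You have $1,000. Option A returns $500 for certain; "
    "Option B returns $1,200 with 50% probability and $0 otherwise. "
    "Which option do you choose, and why?"
)

def build_primed_messages(condition: str) -> list[dict]:
    """Build a conversation: an emotional prime followed by the decision question.

    In a real session the model's reply to the prime would sit between the
    two turns; this sketch only shows the experimenter-supplied messages.
    """
    return [
        {"role": "user", "content": PRIMES[condition]},
        {"role": "user", "content": QUESTION},
    ]

# With OpenAI's Python SDK one would then send each conversation, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4",
#       messages=build_primed_messages("negative"),
#   )
# and compare the distribution of risky vs. safe choices across the three
# priming conditions over many independent accounts/runs.

if __name__ == "__main__":
    for condition in PRIMES:
        print(condition, build_primed_messages(condition)[0]["content"][:40])
```

Repeating such trials across separate accounts, as the study did, keeps conversation history from one prime leaking into the next.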

Updated: 2024-03-27