Emotion Recognition in Simulated Social Interactions
IEEE Transactions on Affective Computing (IF 9.6), Pub Date: 2018-01-01, DOI: 10.1109/taffc.2018.2799593
Christian Mumenthaler, David Sander, Antony Manstead

Social context plays an important role in everyday emotional interactions, and others' faces often provide contextual cues in social situations. Investigating this complex social process is a challenge that can be addressed with the use of computer-generated facial expressions. In the current research, we use synthesized facial expressions to investigate the influence of socioaffective inferential mechanisms on the recognition of social emotions. Participants judged blends of facial expressions of shame-sadness, or of anger-disgust, in a target avatar face presented at the center of a screen while a contextual avatar face expressed an emotion (disgust, contempt, or sadness) or remained neutral. The dynamics of the facial expressions and the head/gaze movements of the two avatars were manipulated to create an interaction in which the two avatars shared eye gaze only in the social interaction condition. Results of Experiment 1 revealed that when the avatars engaged in social interaction, target expression blends of shame and sadness were perceived as expressing more shame if the contextual face expressed disgust and as expressing more sadness if the contextual face expressed sadness. Interestingly, perceptions of shame were not enhanced when the contextual face expressed contempt. The latter finding is probably attributable to the low recognition rates for the expression of contempt observed in Experiment 2.

Update date: 2018-01-01