Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective
Attention, Perception, & Psychophysics (IF 1.7). Pub Date: 2021-03-23. DOI: 10.3758/s13414-021-02281-6
Chao Ma, Nianxin Guo, Faraday Davies, Yantian Hou, Suyan Guo, Xun Zhu

A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) focusing on pre-perceptual visual cues, that is, how salient facial features or configurations are displayed; or (b) focusing on post-perceptual affective experiences, that is, how emotions affect behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently, so can we classify facial expressions into distinct categories in terms of their perceptual similarities? Here, with a novel non-lexical paradigm, we used reaction times to assess the perceptual dissimilarities among 20 facial expressions. Multidimensional-scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of the behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communication: the first group comprises expressions with salient mouth features, which likely link to species-specific vocalizations, for example, crying and laughing. The second group comprises visual displays with diagnostic features in both the mouth and eye regions; these are not directly articulable but can be expressed prosodically, for example, sadness and anger. Expressions in the third group are also whole-face expressions but are completely independent of vocalization and are likely blends of two or more elementary expressions. We propose a theoretical framework for this tripartite division in which the distinct expression subsets are interpreted as successive phases in an evolutionary chain.
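The analysis pipeline described in the abstract (reaction-time-derived dissimilarities, multidimensional scaling, and cluster analysis) can be illustrated with a minimal Python sketch. The simulated dissimilarity matrix, variable names, and three-cluster cut below are assumptions for illustration only, not the authors' code or data.

```python
# A minimal sketch of the general approach: reaction times ->
# pairwise dissimilarity matrix -> multidimensional scaling (MDS)
# -> hierarchical clustering. All data here are simulated.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

n_expressions = 20  # 20 facial expressions, as in the study

# Hypothetical dissimilarity matrix derived from reaction times
# (in the study, RTs in a same/different task index perceptual distance).
rng = np.random.default_rng(0)
rt = rng.uniform(0.4, 1.2, size=(n_expressions, n_expressions))
dissimilarity = (rt + rt.T) / 2          # symmetrize
np.fill_diagonal(dissimilarity, 0.0)     # zero self-distance

# Embed the expressions in 2-D; the dominant embedding axis is where
# an organization such as the upper-lower face axis would show up.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

# Agglomerative clustering on the same distances; cut the tree at
# three clusters to mirror the three superordinate categories.
condensed = dissimilarity[np.triu_indices(n_expressions, k=1)]
labels = fcluster(linkage(condensed, method="average"),
                  t=3, criterion="maxclust")
print(coords.shape, np.unique(labels))
```

In the study itself the dissimilarities come from behavioral reaction times rather than simulated values, and the three-cluster solution was validated against eye-tracking measurements.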




Updated: 2021-03-24