Exploring Macroscopic and Microscopic Fluctuations of Elicited Facial Expressions for Mood Disorder Classification
IEEE Transactions on Affective Computing (IF 11.2) Pub Date: 2019-04-11, DOI: 10.1109/taffc.2019.2909873
Qian-Bei Hong , Chung-Hsien Wu , Ming-Hsiang Su , Chia-Cheng Chang

In the clinical diagnosis of mood disorder, a large proportion of patients with bipolar disorder (BD) are misdiagnosed as having unipolar depression (UD). Generally, long-term tracking is required for patients with BD to conduct an appropriate diagnosis by using traditional diagnosis tools. A one-time diagnosis system for facilitating diagnosis procedures is thus highly desirable. Accordingly, in this study, the facial expressions of patients with BD, patients with UD, and healthy controls elicited by emotional video clips were used for conducting mood disorder classification; the classification was performed by exploring the temporal fluctuation characteristics among the three groups. First, macroscopic facial expressions characterized by action units (AUs) were applied for describing the temporal transformation of muscles. Modulation spectrum analysis was applied to extract short-term intensity variations in the AUs. An interval-based multilayer perceptron (MLP) neural network was then used to classify mood disorder on the basis of the detected AU intensities. Moreover, motion vectors (MVs) were employed to describe subtle changes in facial expressions in the microscopic view. Eight basic orientations of MV change were considered for representing microfluctuation. Wavelet decomposition was then applied to extract entropy and energy features in different frequency bands. A long short-term memory model was finally used to model long-term variations for conducting mood disorder classification. A decision-level fusion approach was conducted on the combined results of macroscopic and microscopic facial expressions. For evaluating the described methods, the facial expressions elicited from the 36 subjects (12 from each of the BD, UD, and control groups) were used in 12-fold cross-validation experiments. Approaches for macroscopic and microscopic expressions achieved classification accuracies of 63.9 and 66.7 percent, respectively, and the accuracy of the fusion approach reached 72.2 percent. The results indicate that macroscopic and microscopic view descriptors are complementary to each other and helpful for conducting mood disorder classification.
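To make the microscopic pipeline concrete, below is a minimal Python sketch of two of the steps named in the abstract: wavelet decomposition of a motion-vector orientation signal into per-band energy and entropy features, and a simple weighted decision-level fusion of the macroscopic and microscopic class posteriors. The function names, the 'db4' wavelet, the decomposition level, and the fusion weight are illustrative assumptions, not details reported in the paper.

# Illustrative sketch only: band-wise wavelet energy/entropy features and a
# simple weighted decision-level fusion. The wavelet choice, decomposition
# level, and fusion weight are assumptions for demonstration.
import numpy as np
import pywt

def band_energy_entropy(signal, wavelet="db4", level=4):
    """Wavelet-decompose a 1-D intensity signal (e.g., one of the eight MV
    orientation sequences over time) and return per-band energy and
    Shannon entropy features."""
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    features = []
    for band in coeffs:                      # [cA_L, cD_L, ..., cD_1]
        sq = band ** 2
        energy = sq.sum()
        p = sq / energy if energy > 0 else np.full_like(sq, 1.0 / len(sq))
        entropy = -np.sum(p * np.log2(p + 1e-12))
        features.extend([energy, entropy])
    return np.array(features)

def fuse_posteriors(p_macro, p_micro, w=0.5):
    """Decision-level fusion: weighted average of class posteriors from the
    macroscopic (AU/MLP) and microscopic (MV/LSTM) branches, then argmax
    over the three classes (BD, UD, control)."""
    p = w * np.asarray(p_macro) + (1.0 - w) * np.asarray(p_micro)
    return int(np.argmax(p))

# Toy usage with a synthetic orientation-intensity sequence and dummy posteriors.
t = np.linspace(0, 10, 512)
mv_orientation_signal = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)
feats = band_energy_entropy(mv_orientation_signal)        # 2 * (level + 1) values
label = fuse_posteriors([0.5, 0.3, 0.2], [0.2, 0.6, 0.2])  # -> class index 1

In the system described in the abstract, the macroscopic posteriors would come from the interval-based MLP over modulation-spectrum AU features and the microscopic posteriors from the LSTM over the wavelet features; the sketch above only illustrates the shape of the feature extraction and fusion steps.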

Updated: 2019-04-11