Magnifying Subtle Facial Motions for Effective 4D Expression Recognition
IEEE Transactions on Affective Computing (IF 11.2), Pub Date: 2019-10-01, DOI: 10.1109/taffc.2017.2747553
Qingkai Zhen, Di Huang, Hassen Drira, Boulbaba Ben Amor, Yunhong Wang, Mohamed Daoudi

In this paper, an effective approach is proposed for automatic 4D Facial Expression Recognition (FER). It combines two growing but disparate ideas in computer vision: computing spatial facial deformations with a Riemannian method and magnifying them with a temporal filtering technique. Key frames highly related to facial expressions are first extracted from a long 4D video through a spectral clustering process, forming the Onset-Apex-Offset flow. This flow is then analyzed to capture spatial deformations based on Dense Scalar Fields (DSF), in which registration and comparison of neighboring 3D faces are performed jointly. The resulting temporal evolution of these deformations is fed into a magnification method that amplifies facial activities over time. The proposed approach reveals subtle deformations and thus improves emotion classification performance. Experiments are conducted on the BU-4DFE and BP-4D databases, and competitive results are achieved compared to the state of the art.
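To make the magnification step concrete, the sketch below shows one plausible way to amplify subtle per-vertex deformation signals over time with a temporal band-pass filter. It is not the authors' exact pipeline: the DSF values are assumed to be precomputed by the Riemannian step, and the frame rate, pass-band, amplification factor, and the function `magnify_deformations` are illustrative assumptions only.

```python
# Hedged sketch: temporal magnification of per-vertex deformation signals.
# Assumptions (not from the paper): Butterworth band-pass filtering, fps, the
# 0.4-4 Hz band, alpha, and the helper name magnify_deformations.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_deformations(dsf_sequence, fps=25.0, low=0.4, high=4.0, alpha=10.0):
    """Amplify subtle temporal variations in a Dense Scalar Field sequence.

    dsf_sequence : (T, V) array, one scalar deformation value per vertex per
                   frame (assumed precomputed by the Riemannian DSF step).
    fps          : capture rate of the 4D sequence (assumed).
    low, high    : temporal pass-band in Hz covering expression dynamics (assumed).
    alpha        : magnification factor applied to the band-passed component.
    """
    nyquist = fps / 2.0
    b, a = butter(2, [low / nyquist, high / nyquist], btype="band")
    # Band-pass each vertex's deformation trajectory along the time axis,
    # then add the amplified band back onto the original signal.
    band = filtfilt(b, a, dsf_sequence, axis=0)
    return dsf_sequence + alpha * band

if __name__ == "__main__":
    # Toy usage: 60 frames x 5000 vertices of synthetic DSF values.
    rng = np.random.default_rng(0)
    dsf = rng.normal(scale=1e-3, size=(60, 5000))
    magnified = magnify_deformations(dsf)
    print(magnified.shape)  # (60, 5000)
```

The magnified sequence would then be used as the input to the emotion classifier in place of the raw deformation evolution; the specific filter and classifier choices here are placeholders, not the paper's reported configuration.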
