AffectiveNet: Affective-Motion Feature Learning for Microexpression Recognition
IEEE Multimedia (IF 2.3), Pub Date: 2020-09-10, DOI: 10.1109/mmul.2020.3021659
Monu Verma, Santosh Kumar Vipparthi, Girdhari Singh

Microexpressions are hard to spot because they arise from fleeting and involuntary movements of facial muscles, and interpreting these microemotions from video clips is a challenging task. In this article, we propose affective-motion imaging, which cumulates the rapid, short-lived variational information of a microexpression into a single response. Moreover, we propose AffectiveNet, an affective-motion feature learning network that perceives subtle changes and learns the most discriminative dynamic features for describing emotion classes. AffectiveNet comprises two blocks: a MICRoFeat block and an MFL block. The MICRoFeat block preserves scale-invariant features, allowing the network to capture both coarse and tiny edge variations, whereas the MFL block learns microlevel dynamic variations from two different intermediate convolutional layers. The effectiveness of the proposed network is evaluated on four datasets using two experimental setups: person-independent and cross-dataset validation. The experimental results show that the proposed network outperforms state-of-the-art microexpression recognition (MER) approaches by a significant margin.
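The abstract only outlines the idea of collapsing a clip into a single affective-motion image and feeding it to a two-block network, so the following PyTorch sketch is purely illustrative: the accumulation rule (mean of absolute frame differences), the multi-scale kernel sizes in the MICRoFeat-style block, the choice of which two intermediate layers the MFL-style fusion reads from, and all layer widths are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of affective-motion imaging and a two-block network,
# based only on the abstract; all architectural details are assumptions.
import torch
import torch.nn as nn


def affective_motion_image(frames: torch.Tensor) -> torch.Tensor:
    """Collapse a clip (T, C, H, W) into one response image (C, H, W)
    by accumulating absolute frame-to-frame differences (assumed rule)."""
    diffs = (frames[1:] - frames[:-1]).abs()   # short-lived variations
    return diffs.mean(dim=0)                   # cumulate into one response


class MICRoFeatBlock(nn.Module):
    """Multi-scale convolutions meant to keep both coarse and tiny edge
    cues; the kernel sizes (1, 3, 5, 7) are illustrative assumptions."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5, 7)
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(torch.cat([b(x) for b in self.branches], dim=1))


class AffectiveNetSketch(nn.Module):
    """Toy two-stage model: a MICRoFeat-style block followed by an
    MFL-style fusion of two intermediate convolutional layers."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.micro = MICRoFeatBlock(3, 16)            # 4 branches -> 64 ch
        self.conv1 = nn.Conv2d(64, 64, 3, padding=1)  # intermediate layer 1
        self.conv2 = nn.Conv2d(64, 64, 3, padding=1)  # intermediate layer 2
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128, num_classes)         # fused dynamic features

    def forward(self, x):
        x = self.micro(x)
        f1 = torch.relu(self.conv1(x))
        f2 = torch.relu(self.conv2(f1))
        fused = torch.cat([self.pool(f1), self.pool(f2)], dim=1).flatten(1)
        return self.fc(fused)


if __name__ == "__main__":
    clip = torch.rand(20, 3, 112, 112)                 # dummy 20-frame clip
    ami = affective_motion_image(clip).unsqueeze(0)    # (1, 3, 112, 112)
    logits = AffectiveNetSketch()(ami)
    print(logits.shape)                                # torch.Size([1, 4])
```

Because the clip is reduced to a single image before any convolution, the temporal dynamics are carried entirely by the accumulated response, which keeps the downstream network a lightweight 2-D CNN rather than a 3-D or recurrent model.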

Updated: 2020-09-10