Joint Local and Global Information Learning With Single Apex Frame Detection for Micro-Expression Recognition
IEEE Transactions on Image Processing (IF 10.8). Pub Date: 2020-11-06. DOI: 10.1109/tip.2020.3035042
Yante Li , Xiaohua Huang , Guoying Zhao

Micro-expressions (MEs) are rapid and subtle facial movements that are difficult to detect and recognize. Most recent works have attempted to recognize MEs using spatial and temporal information from video clips. According to psychological studies, the apex frame conveys most of the emotional information expressed in a facial expression. However, it is not clear how a single apex frame contributes to micro-expression recognition. To address this problem, this paper first proposes a new method to detect the apex frame by estimating pixel-level change rates in the frequency domain. With this frequency information, it spots the apex frame more effectively than existing apex-frame spotting methods based on spatio-temporal change information. Second, using the detected apex frame, this paper proposes a joint feature learning architecture coupling local and global information to recognize MEs, because not all regions contribute equally to ME recognition and some regions contain no emotional information at all. More specifically, the proposed model combines local information learned from the facial regions that carry the major emotional information with global information learned from the whole face. Leveraging both local and global information enables the model to learn discriminative ME representations and suppress the negative influence of regions unrelated to MEs. The proposed method is extensively evaluated on the CASME, CASME II, SAMM, SMIC, and composite databases. Experimental results demonstrate that our method, using only the detected apex frame, achieves promising ME recognition performance compared with state-of-the-art methods that employ the whole ME sequence. Moreover, the results indicate that the apex frame can contribute significantly to micro-expression recognition.
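The abstract does not give the exact formulation of the frequency-domain apex spotter, but the idea can be sketched as follows: take the 2-D FFT magnitude of each frame and treat the frame-to-frame change in the spectrum as a change rate, picking the frame where that rate peaks. This is an illustrative approximation under stated assumptions (grayscale frames, simple magnitude differencing), not the authors' published algorithm.

```python
import numpy as np

def detect_apex_frame(frames):
    """Sketch of apex-frame spotting for a grayscale ME clip.

    frames: array of shape (T, H, W). We compute the 2-D FFT
    magnitude of every frame, measure how much the spectrum changes
    between consecutive frames, and take the frame at the peak
    change as the apex. A hypothetical stand-in for the paper's
    'pixel-level change rates in the frequency domain'.
    """
    spectra = np.abs(np.fft.fft2(frames, axes=(-2, -1)))
    # Per-transition change rate between consecutive frames (length T-1).
    rates = np.abs(np.diff(spectra, axis=0)).sum(axis=(-2, -1))
    # Offset by one so the returned index names a frame, not a transition.
    return int(np.argmax(rates)) + 1
```

On a synthetic clip whose intensity ramps up to a peak and back down, the function returns the peak frame's index; a real pipeline would first align faces and could restrict the FFT to facial regions of interest.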

Updated: 2020-11-06