FLEPNet: Feature Level Ensemble Parallel Network for Facial Expression Recognition
IEEE Transactions on Affective Computing (IF 9.6), Pub Date: 2022-09-21, DOI: 10.1109/taffc.2022.3208309
Mohan Karnati, Ayan Seal, Anis Yazidi, Ondrej Krejcar
With the advent of deep learning, research on facial expression recognition (FER) has attracted considerable interest, and various deep convolutional neural network (DCNN) architectures have been developed for real-time, efficient FER. One of the challenges in FER is obtaining trustworthy features that are strongly associated with changes in facial expression. Furthermore, traditional DCNNs for FER suffer from two significant issues: insufficient training data, which leads to overfitting, and intra-class variations in facial appearance. This study proposes FLEPNet, a texture-based feature-level ensemble parallel network for FER, and shows that it addresses these problems. FLEPNet uses multi-scale convolutional and multi-scale residual block-based DCNNs as its building blocks. First, modified homomorphic filtering is applied to normalize illumination effectively, which minimizes intra-class differences. The deep networks are then protected against insufficient training data by applying texture analysis to facial expression images to capture multiple attributes: four texture features are extracted and combined with the original image characteristics. Finally, the integrated features retrieved by the two networks are used to classify seven facial expressions. Experimental results show that the proposed technique achieves average accuracies of 0.9914, 0.9894, 0.9796, 0.8756, and 0.8072 on the Japanese Female Facial Expressions, Extended Cohn-Kanade, Karolinska Directed Emotional Faces, Real-world Affective Face Database, and Facial Expression Recognition 2013 databases, respectively. Moreover, the experimental outcomes demonstrate significantly higher reliability than competing approaches.
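The abstract only outlines the processing pipeline, so the two sketches below illustrate the ideas it describes under stated assumptions. The first is a minimal implementation of classic homomorphic filtering for illumination normalization; the paper uses a modified variant whose exact transfer function is not given here, so the Gaussian high-emphasis filter and the parameter values (gamma_l, gamma_h, c, d0) are illustrative assumptions.

```python
# Minimal sketch of classic homomorphic filtering, assuming a single-channel
# face image in [0, 255].  Not the paper's modified filter.
import numpy as np


def homomorphic_filter(img, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=10.0):
    """Attenuate low-frequency illumination and boost high-frequency
    reflectance: log -> FFT -> high-emphasis filter -> inverse FFT -> exp."""
    img = img.astype(np.float64) / 255.0
    log_img = np.log1p(img)                        # I = L*R  =>  log I = log L + log R
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))

    rows, cols = img.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2         # squared distance from DC term
    h = (gamma_h - gamma_l) * (1.0 - np.exp(-c * d2 / d0 ** 2)) + gamma_l

    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(h * spectrum)))
    out = np.expm1(filtered)
    return (out - out.min()) / (out.max() - out.min() + 1e-8)
```

The second sketch approximates the parallel, feature-level ensemble structure in PyTorch: one branch built from multi-scale convolutional blocks, one from multi-scale residual blocks, with their pooled features concatenated before a seven-way classifier. Channel widths, kernel sizes, block counts, the 48x48 input size, and the choice to stack the four texture maps with the grayscale image as a 5-channel input are all illustrative choices, not the authors' configuration.

```python
# A minimal PyTorch sketch of a feature-level ensemble parallel network,
# under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleConvBlock(nn.Module):
    """Convolutions at several kernel sizes, concatenated channel-wise."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b3 = nn.Conv2d(in_ch, branch_ch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, 5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, 7, padding=3)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        y = torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1)
        return F.relu(self.bn(y))


class MultiScaleResBlock(nn.Module):
    """Multi-scale convolutions with a residual (skip) connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.ms = MultiScaleConvBlock(in_ch, out_ch)
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return F.relu(self.ms(x) + self.skip(x))


class FLEPNetSketch(nn.Module):
    """Two parallel branches whose pooled features are fused at the feature
    level and classified into 7 expressions.  Input: grayscale face stacked
    with 4 texture maps -> 5 channels (an assumption)."""
    def __init__(self, in_ch=5, num_classes=7):
        super().__init__()
        self.branch_a = nn.Sequential(             # multi-scale conv branch
            MultiScaleConvBlock(in_ch, 32), nn.MaxPool2d(2),
            MultiScaleConvBlock(32, 64), nn.MaxPool2d(2),
        )
        self.branch_b = nn.Sequential(             # multi-scale residual branch
            MultiScaleResBlock(in_ch, 32), nn.MaxPool2d(2),
            MultiScaleResBlock(32, 64), nn.MaxPool2d(2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, x):
        fa = self.pool(self.branch_a(x)).flatten(1)
        fb = self.pool(self.branch_b(x)).flatten(1)
        return self.classifier(torch.cat([fa, fb], dim=1))   # feature-level fusion


if __name__ == "__main__":
    # 48x48 crops as in FER2013; texture maps would normally come from
    # descriptors computed on the illumination-normalized image.
    dummy = torch.randn(2, 5, 48, 48)
    print(FLEPNetSketch()(dummy).shape)            # torch.Size([2, 7])
```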

Updated: 2024-08-26