Facial Expression Recognition through person-wise regeneration of expressions using Auxiliary Classifier Generative Adversarial Network (AC-GAN) based model
Journal of Visual Communication and Image Representation (IF 2.6) Pub Date: 2021-04-09, DOI: 10.1016/j.jvcir.2021.103110
Dharanya V., Alex Noel Joseph Raj, Varun P. Gopi

Recently, Facial Expression Recognition (FER) has attracted considerable research attention owing to its wide range of applications. In the facial expression recognition task, the subject-dependence issue is predominant when a small-scale database is used to train the system. The proposed Auxiliary Classifier Generative Adversarial Network (AC-GAN) based model regenerates ten expressions (angry, contempt, disgust, embarrassment, fear, joy, neutral, pride, sad, surprise) from an input face image and recognizes its expression. To alleviate the subject-dependence issue, we train the model person-wise, generating all of the above expressions for each person and allowing the discriminator to classify them. The generator of our model uses the U-Net architecture, and the discriminator uses Capsule Networks for improved feature extraction. The model has been evaluated on the ADFES-BIV dataset, yielding an overall classification accuracy of 93.4%. We also compare our model with existing methods by evaluating it on commonly used datasets such as CK+ and KDEF.
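The abstract describes a conditional AC-GAN: the generator takes a face image plus a target-expression label and regenerates the face with that expression, while the discriminator both judges real vs. fake and classifies the expression. The sketch below illustrates this two-headed setup in PyTorch under simplifying assumptions: plain convolutional blocks stand in for the paper's U-Net generator and Capsule Network discriminator, and all layer sizes and losses are illustrative, not the authors' implementation.

```python
# Minimal AC-GAN sketch: conditional generator + two-headed discriminator
# (real/fake head and 10-way expression classifier). Illustrative only.
import torch
import torch.nn as nn

NUM_EXPRESSIONS = 10  # angry, contempt, disgust, embarrassment, fear,
                      # joy, neutral, pride, sad, surprise

class Generator(nn.Module):
    """Takes a face image and a one-hot target-expression label and outputs a
    regenerated face. The paper uses a U-Net; a plain encoder-decoder is
    sketched here as a stand-in."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + NUM_EXPRESSIONS, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, img, label):
        # Broadcast the one-hot label to a per-pixel map and concatenate it
        # with the image channels so the generator is conditioned on it.
        label_map = label[:, :, None, None].expand(-1, -1, img.size(2), img.size(3))
        return self.net(torch.cat([img, label_map], dim=1))

class Discriminator(nn.Module):
    """Two heads: an adversarial real/fake score and an auxiliary expression
    classifier. The paper uses a Capsule Network feature extractor; a small
    CNN stands in here."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.adv_head = nn.Linear(64, 1)                # real vs. fake
        self.cls_head = nn.Linear(64, NUM_EXPRESSIONS)  # expression class

    def forward(self, img):
        h = self.features(img)
        return self.adv_head(h), self.cls_head(h)

# AC-GAN objective: adversarial term plus auxiliary classification term.
adv_loss = nn.BCEWithLogitsLoss()
cls_loss = nn.CrossEntropyLoss()
```

In training, the classification head would be supervised with cls_loss on both real and generated faces while adv_loss supplies the usual real/fake objective, following the standard AC-GAN formulation; person-wise training, as described in the abstract, would restrict each training pass to images of a single subject.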




Updated: 2021-04-12