
Multi-View Facial Expression Recognition with Multi-View Facial Expression Light Weight Network

  • APPLIED PROBLEMS
Pattern Recognition and Image Analysis

Abstract

Facial expression recognition for frontal faces has become a well-established research area over the last two decades, but non-frontal facial expression recognition has received comparatively little attention until recently. In this paper, we propose MVFE-LightNet (Multi-View Facial Expression Light Weight Network) for multi-view facial expression recognition. We first apply MTCNN for face detection and alignment, then preprocess the images with normalization and data augmentation, and finally feed them into MVFE-LightNet to extract subspace features of facial expressions under various poses. A depthwise separable residual convolution module is designed to reduce the number of model parameters and lessen the chance of overfitting. Experiments on the Radboud Faces Database and the BU-3DFE dataset demonstrate that our method effectively improves recognition accuracy, achieving 95.6% and 88.7% on the two datasets, respectively.
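The parameter savings the abstract attributes to the depthwise separable residual convolution module can be illustrated with a quick count (a back-of-the-envelope sketch, not the paper's exact layer configuration): a standard k × k convolution needs k·k·C_in·C_out weights, while a depthwise separable one factors this into a per-channel k × k depthwise convolution (k·k·C_in weights) followed by a 1 × 1 pointwise convolution (C_in·C_out weights).

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def ds_conv_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Illustrative layer size (128 channels in/out, 3 x 3 kernel):
std = conv_params(3, 128, 128)
ds = ds_conv_params(3, 128, 128)
print(std, ds, round(std / ds, 1))  # → 147456 17536 8.4
```

For a typical 3 × 3 layer this is roughly an 8–9x reduction in weights, which is why such modules are a common choice for lightweight networks on datasets small enough that overfitting is a concern.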





Author information


Correspondence to Shao Jie or Qian Yongsheng.

Ethics declarations

FUNDING

This work was supported by the National Natural Science Foundation of China (grant no. F020603).

COMPLIANCE WITH ETHICAL STANDARDS

The authors declare that they have no conflicts of interest. This article does not contain any studies involving human participants or animals performed by any of the authors.

Additional information

Shao Jie received her B.S. and M.S. degrees from Nanjing University of Aeronautics and Astronautics and her Ph.D. from Tongji University. She is currently an associate professor at Shanghai University of Electric Power. Her research interests include computer vision, video surveillance, and human emotion analysis.

Yongsheng Qian received his bachelor's degree in electrical engineering and automation from Hubei University for Nationalities in 2015. He is currently a graduate student in the Department of Electronics and Information Engineering at Shanghai University of Electric Power, Shanghai, China. His research interests include facial expression recognition and deep learning.


About this article


Cite this article

Shao, J. and Qian, Y., Multi-View Facial Expression Recognition with Multi-View Facial Expression Light Weight Network, Pattern Recognit. Image Anal. 30, 805–814 (2020). https://doi.org/10.1134/S1054661820040197

