
A comprehensive survey on automatic facial action unit analysis

  • Survey
  • Published in: The Visual Computer

Abstract

The Facial Action Coding System (FACS) is the most influential sign-judgment method for describing facial behavior: a comprehensive, anatomically based system that encodes virtually any facial movement as a combination of basic action units (AUs). AUs describe specific facial configurations produced by the contraction of one or more facial muscles, independently of any interpretation of emotion. Automatic facial action unit recognition nevertheless remains challenging, owing to open problems such as separating rigid from non-rigid facial motion, detecting multiple co-occurring AUs, estimating AU intensity, and operating in naturalistic settings. This paper reviews recent advances in automatic facial action unit recognition, focusing on its fundamental components: face detection and registration, facial action representation, feature selection, and classification. Facial representations are analyzed according to the properties of the facial data (image vs. video, 2D vs. 3D) and the characteristics of facial features (predesigned vs. learned, appearance vs. geometry, hybrid and fusion). Facial action unit recognition comprises AU occurrence detection, AU temporal-segment detection, and AU intensity estimation. We discuss the role of each component, the main techniques and their characteristics, and the challenges and promising directions of facial action unit analysis.
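The pipeline outlined in the abstract (face detection and registration, feature extraction, then multi-label AU occurrence detection) can be sketched minimally as below. Every function body here is an illustrative placeholder under stated assumptions — a random untrained linear scorer over normalized pixel intensities — not any specific method surveyed in the paper; real systems substitute trained detectors, landmark-based registration, and learned features.

```python
import numpy as np

def detect_face(frame):
    # Placeholder face detector: assumes the frame is already a cropped face.
    # Real systems use e.g. cascade or CNN-based detectors.
    return frame

def register_face(face, size=(64, 64)):
    # Placeholder registration: downsample to a canonical size by block
    # averaging, standing in for landmark-based alignment.
    h, w = face.shape
    fh, fw = h // size[0], w // size[1]
    return face[: fh * size[0], : fw * size[1]] \
        .reshape(size[0], fh, size[1], fw).mean(axis=(1, 3))

def extract_features(face):
    # Placeholder appearance feature: zero-mean, unit-variance pixel vector.
    v = face.ravel().astype(float)
    return (v - v.mean()) / (v.std() + 1e-8)

def detect_aus(features, weights, bias=0.0, threshold=0.0):
    # Multi-label AU occurrence detection: one linear score per AU,
    # thresholded independently (AUs may co-occur).
    scores = weights @ features + bias
    return scores > threshold

rng = np.random.default_rng(0)
frame = rng.random((128, 128))            # synthetic grayscale "face" image
face = register_face(detect_face(frame))  # canonical 64x64 face
feat = extract_features(face)
W = rng.standard_normal((12, feat.size))  # 12 hypothetical AUs, untrained weights
occurrences = detect_aus(feat, W)
print(occurrences.shape)                  # one boolean per AU
```

Because each AU is scored and thresholded independently, the same structure extends naturally to intensity estimation (return the raw scores, or a regression output, instead of thresholding).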





  45. Duan, X., Dai, Q., Wang, X., Wang, Y., Hua, Z.: Recognizing spontaneous micro-expression from eye region. Neurocomputing 217, 27–36 (2016)

    Google Scholar 

  46. Ekman, P., Friesen, W.V.: The Facial Action Coding System: A Technique for Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto (1978)

    Google Scholar 

  47. Ekman, P., Friesen, W.V.: Universal and cultural differences in the judgments of facial expression of emotion. J. Pers. Soc. Psychol. 53(4), 712–717 (1987)

    Google Scholar 

  48. Ekman, P., Rosenberg, E.L.: What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), 2nd edn. Oxford University Press, Oxford (2005)

    Google Scholar 

  49. Ekman, P., Friesen, W.V., Hager, J.C.: Facial action coding system: the manual on CD-ROM. A Human Face, Salt Lake City, (2002)

  50. Eleftheriadis, S., Rudovic, O., Pantic, M.: Joint facial action unit detection and feature fusion: a multi-conditional learning approach. IEEE Trans. Image Process. 25(12), 5727–5742 (2016)

    MathSciNet  MATH  Google Scholar 

  51. Eleftheriadis, S., Rudovic, O., Pantic, M.: Multi-conditional latent variable model for joint facial action unit detection. International Conference on Computer Vision (ICCV), pp. 3792–3800 (2015)

  52. Frank, M.G., Ekman, P.: Appearing truthful generalizes across different deception situations. J. Pers. Soc. Psychol. 86(3), 486–495 (2004)

    Google Scholar 

  53. Friesen, W.V., Ekman, P.: EMFACS-7: Emotional facial action coding system. Unpublished manuscript, University of California at SanFrancisco (1983)

  54. Gilbert, C.A., Lilley, C.M., Craig, K.D., McGrath, P.J., Court, C.A., Bennett, S.M., Montgomery, C.J.: Postoperative pain expression in preschool children: validation of the child facial coding system. Clin. J. Pain 15(3), 192–200 (1999)

    Google Scholar 

  55. Girard, J.M., Cohn, J.F., Mahoor, M.H., Mavadati, S.M., Hammal, Z., Rosenwald, D.P.: Nonverbal social withdrawal in depression: evidence from manual and automatic analyses. Image Vis. Comput. 32(10), 641–647 (2014)

    Google Scholar 

  56. Girard, J.M., Cohn, J.F., De la Torre, F.: Estimating smile intensity: a better way. Pattern Recogn. Lett. 66, 13–21 (2015)

    Google Scholar 

  57. Gong, B., Wang, Y., Liu, J., Tang, X.: Automatic facial expression recognition on a single 3D face by exploring shape deformation. In: Proceedings of the 17th ACM International Conference on Multimedia, pp. 569–572 (2009)

  58. Gudi, A., Tasli, H.E., den Uyl, T.M., Maroulis, A.: Deep learning based facs action unit occurrence and intensity estimation. In: 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–5 (2015)

  59. Hamlaoui, S., Davoine, F.: Facial action tracking using particle filters and active appearance models. In: Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-aware Services: Usages and Technologies, pp. 165–169 (2005)

  60. Hammal, Z., Cohn, J.F.: Automatic detection of pain intensity. In: Proceedings of the 14th ACM International Conference on Multimodal Interaction, pp. 47–52 (2012)

  61. Harrigan, J.A., Rosenthal, R., Scherer, K.R.: New handbook of methods in nonverbal behavior research (Series in Affective Science), Oxford University Press, 1st edition (2008)

  62. He, S., Wang, S., Lan, W., Fu, H., Ji, Q.: Facial expression recognition using deep Boltzmann machine from thermal infrared images. In: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, pp. 239–244 (2013)

  63. He, J., Li, D., Yang, B., Cao, S., Sun, B., Yun, L.: Multi view facial action unit detection based on CNN and BLSTM-RNN. In: 2017 IEEE 12th International Conference on Automatic Face & Gesture Recognition (2017)

  64. Hjortsjö, C.H.: Man’s face and mimic language. http://diglib.uibk.ac.at/ulbtirol/content/titleinfo/782346 (1970)

  65. Hu, Q., Jiang, F., Mei, C., Shen, C.: CCT: a cross-concat and temporal neural network for multi-label action unit detection. In: 2018 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2018)

  66. Izard, C.E.: Maximally Discriminative Facial Movement Coding System (MAX). Instructional Resources Center, University of Delaware, Newark (1979)

    Google Scholar 

  67. Izard, C.E.: Measuring emotions in infants and children. In: Izard, C.E., Read, P.B. (eds.) Cambridge Studies in Social and Emotional Development, pp. 114–116. Cambridge University Press, New York (1982)

    Google Scholar 

  68. Jabid, T., Kabir, M.H., Chae, O.: Robust facial expression recognition based on local directional pattern. ETRI J. 32(5), 784–794 (2010)

    Google Scholar 

  69. Jaiswal, S., Valstar, M.: Deep learning the dynamic appearance and shape of facial action units. In: IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–8 (2016)

  70. Jaiswal, S., Martinez, B., Valstar, M.F.: Learning to combine local models for facial action unit detection. In: 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–6 (2015)

  71. Jeni, L.A., Girard, J.M., Cohn, J.F., De la Torre, F.: Continuous AU intensity estimation using localized, sparse facial feature space. In: 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, pp. 1–7 (2013)

  72. Jeni, L.A., Cohn, J.F., De la Torre, F.: Facing imbalanced data recommendations for the use of performance metrics. In: Humaine Association Conference on Affective Computing and Intelligent Interaction, pp. 245–251 (2013)

  73. Ji, Q., Lan, P., Looney, C.: A probabilistic framework for modeling and real-time monitoring human fatigue. IEEE Trans. Syst. Man Cybern. A 36(5), 862–875 (2006)

    Google Scholar 

  74. Jiang, B., Valstar, M., Martinez, B., Pantic, M.: A dynamic appearance descriptor approach to facial actions temporal modeling. IEEE Trans. Cybern. 44(2), 161–174 (2014)

    Google Scholar 

  75. Jiang, B., Martinez, B., Valstar, M.F., Pantic, M.: Decision level fusion of domain specific regions for facial action recognition. In: 22nd International Conference on Pattern Recognition, pp. 1776–1781 (2014)

  76. Jiang, B., Valstar, M.F., Pantic, M.: Action unit detection using sparse appearance descriptors in space-time video volumes. In: IEEE International Conference on Automatic Face and Gesture Recognition and Workshops (FG’11), pp. 314–321 (2011)

  77. Ju, Q.: Robust binary neural networks based 3D face detection and accurate face registration. Int. J. Comput. Intell. Syst. 6, 669–683 (2013)

    Google Scholar 

  78. Jörn, O.: Face animation in MPEG-4. In: Pandzic, I.S., Forchheimer, R. (eds.) MPEG-4 Facial Animation: The Standard, Implementation and Applications, pp. 17–55. Wiley, New York (2003)

    Google Scholar 

  79. Kaiser, M., Kwolek, B., Staub, C., Rigoll, G.: Registration of 3D facial surfaces using covariance matrix pyramids. In: IEEE International Conference on Robotics and Automation, pp. 1002–1007 (2010)

  80. el Kaliouby, P., Robinson, P.: Real-time inference of complex mental states from facial expressions and head gestures. In: Kisačanin, B., Pavlović, V., Huang, T.S. (eds.) Real-Time Vision for Human-Computer Interaction, pp. 181–200. Springer, New York (2005)

    Google Scholar 

  81. Kaltwang, S., Rudovic, O., Pantic, M.: Continuous pain intensity estimation from facial expressions. In: Advances in Visual Computing (ISVC), Lecture Notes in Computer Science, vol. 7432, pp. 368–377 (2012)

  82. Kanade, T., Cohn, J.F., Tian, Y.: Comprehensive database for facial expression analysis. In: Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 46–53 (2000)

  83. Kapoor, A., Qi, Y., Picard, R.W.: Fully automatic upper facial action recognition. In: Proceedings of the IEEE International Workshop on Analysis and Modelling of Faces and Gestures (AMFG’03), pp. 195–202 (2003)

  84. Khorrami, P., Le Paine, T., Huang, T.S.: Do deep neural networks learn facial action units when doing expression recognition? In: 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), pp. 19–27 (2015)

  85. Kim, M., Pavlovic, V.: Structured output ordinal regression for dynamic facial emotion intensity prediction. In: European Conference on Computer Vision, pp. 649–662 (2010)

  86. Koelstra, S., Pantic, M., Patras, I.: A dynamic texture based approach to recognition of facial actions and their temporal models. IEEE Trans. Pattern Anal. Mach. Intell. 32(11), 1940–1954 (2010)

    Google Scholar 

  87. Koelstra, S., Pantic, M.: Non-rigid registration using free-form deformations for recognition of facial actions and their temporal dynamics. In: IEEE International Conference on Automatic Face & Gesture Recognition, pp. 1–8 (2008)

  88. Kotsia, I., Pitas, I.: Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Trans. Image Process. 16(1), 172–187 (2007)

    MathSciNet  Google Scholar 

  89. Krippendorff, K.: Estimating the reliability, systematic error and random error of interval data. Educ. Psychol. Measur. 30, 61–70 (1970)

    Google Scholar 

  90. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: probabilistic models for segmenting and labeling sequence data. In: Proceedings of International Conference on Machine Learning, pp. 282–289 (2001)

  91. Li, W., Abtahi, F., Zhu, Z., Yin, L.: EAC-Net: deep nets with enhancing and cropping for facial action unit detection. IEEE Trans. Pattern Anal. Mach. Intell. 40(11), 2583–2596 (2018)

    Google Scholar 

  92. Li, Y., Chen, J., Zhao, Y., Ji, Q.: Data-free prior model for facial action unit recognition. IEEE Trans. Affect. Comput. 4(2), 127–141 (2013)

    Google Scholar 

  93. Li, Y., Wu, B., Ghanem, B., Zhao, Y., Yao, H., Ji, Q.: Facial action unit recognition under incomplete data based on multi-label learning with missing labels. Pattern Recogn. 60, 890–900 (2016)

    Google Scholar 

  94. Li, Y., Mavadati, S.M., Mahoor, M.H., Ji, Q.: A unified probabilistic framework for measuring the intensity of spontaneous facial action units. In: 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 22–26 (2013)

  95. Li, X., Chen, S., Jin, Q.: Facial action units detection with multi-features and –AUs fusion. In: 2017 IEEE 12th International Conference on Automatic Face & Gesture Recognition

  96. Li, Z., Peng, J., Chen, L.: Light-adaptive Face Registration Based on Drivers’ Video. In: IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing, pp. 373–376 (2016)

  97. Li, Z., Imai, J., Kaneko, M.: Facial feature localization using statistical models and SIFT descriptors. In: The 18th IEEE International Symposium on Robot and Human Interactive Communication, pp. 961–966 (2009)

  98. Lifkooee, M.Z., Soysal, O.M., Sekeroglu, K.: Video mining for facial action unit classification using statistical spatial-temporal feature image and LoG deep convolutional neural network. Mach. Vis. Appl. 30(1), 41–57 (2018)

    Google Scholar 

  99. Liong, S.-T., See, J., Wong, K., Le Ngo, A.C., Oh, Y., Phan, R.: Automatic apex frame spotting in micro-expression database. In: 3rd IAPR Asian Conference on Pattern Recognition, pp. 665–669 (2015)

  100. Littlewort, G., Bartlett, M.S., Fasel, I., Susskind, J., Movellan, J.: Dynamics of facial expression extracted automatically from video. Image Vis. Comput. 24, 615–625 (2006)

    Google Scholar 

  101. Littlewort, G., Whitehill, J., Wu, T., Butko, N.J., Ruvolo, P., Movellan, J.R., Bartlett, M.S.: The motion in emotion—a CERT based approach to the FERA emotion challenge. In: IEEE International Conference on Automatic Face and Gesture Recognition and Workshops (FG’11), pp. 897–902 (2011)

  102. Littlewort, G.C., Bartlett, M.S., Lee, K.: Faces of pain: automated measurement of spontaneous facial expressions of genuine and posed pain. In: Proceedings of the 9th International Conference on Multimodal Interfaces, pp. 15–21 (2007)

  103. Littlewort, G., Bartlett, M.S., Fasel, I., Susskind, J., Movellan, J.: Dynamics of facial expression extracted automatically from video. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’04), pp. 80 (2004)

  104. Liu, Y., Zhang, J., Yan, W., Wang, S., Zhao, G., Fu, X.: A main directional mean optical flow feature for spontaneous micro-expression recognition. IEEE Trans. Affect. Comput. 7(4), 299–310 (2016)

    Google Scholar 

  105. Liu, P., Han, S., Meng, Z., Tong, Y.: Facial expression recognition via a boosted deep belief network. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1805–1812 (2014)

  106. Long, F., Wu, T., Movellan, J.R., Bartlett, M.S., Littlewort, G.: Learning spatiotemporal features by using independent component analysis with application to facial expression recognition. Neurocomputing 93, 126–132 (2012)

    Google Scholar 

  107. Lucas, G.M., Gratch, J., Scherer, S., Boberg, J., Stratou, G.: Towards an affective interface for assessment of psychological distress. In: Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 539–545 (2015)

  108. Lucey, S., Ashraf, A.B., Cohn, J.F.: Investigating spontaneous facial action recognition through aam representations of the face. In: Kurihara, K. (ed.) Face Recognition Book, pp. 395–406. Pro Literature Verlag, Mammendorf (2007)

    Google Scholar 

  109. Lucey, P., Cohn, J.F., Matthews, I., Lucey, S., Sridharam, S., Howlett, J., Prkachin, K.M.: Automatically detecting pain in video through facial action units. IEEE Trans. Syst. Man Cybern. B 41(3), 664–674 (2011)

    Google Scholar 

  110. Lucey, P., Cohn, J.F., Prkachin, K.M., Solomon, P.E., Chew, S., Matthews, I.: Painful monitoring: automatic pain monitoring using the UNBC-McMaster shoulder pain expression archive database. Image Vis. Comput. 30(3), 197–205 (2012)

    Google Scholar 

  111. Lucey, S., Matthews, I., Hu, C., Ambadar, Z., De la Torre, F., Cohn, J.: AAM derived face representations for robust facial action recognition. In: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR’06), pp. 155–160 (2006)

  112. Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I.: The extended Cohn-Kanade dataset (CK +): a complete dataset for action unit and emotion-specified expression. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 94–101 (2010)

  113. Mahoor, M.H., Cadavid, S., Messinger, D.S., Cohn, J.F.: A framework for automated measurement of the intensity of non-pose facial action units. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 74–80 (2009)

  114. Martinez, B., Valstar, M.F., Jiang, B., Pantic, M.: Automatic analysis of facial actions: a survey. IEEE Trans. Affect. Comput. 13(9), 1–22 (2017)

    Google Scholar 

  115. Mavadati, S.M., Mahoor, M.H., Bartlett, K., Trinh, P., Cohn, J.F.: DISFA: a spontaneous facial action intensity database. IEEE Trans. Affect. Comput. 4(2), 151–160 (2013)

    Google Scholar 

  116. Mavadati, S.M., Mahoor, M.H.: Temporal facial expression modeling for automated action unit intensity measurement. In: 22nd International Conference on Pattern Recognition, pp. 4648–4653 (2014)

  117. McCall, J.C., Trivedi, M.M.: Facial action coding using multiple visual cues and a hierarchy of particle filters. In: Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), p.150 (2006)

  118. McDuff, D., el Kaliouby, R., Senechal, T., Amr, M., Cohn, J.F., Picard, R.: Affectiva-MIT facial expression database (AMFED): naturalistic and spontaneous facial expressions collected in-the-wild. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 881–888 (2013)

  119. McKeown, G., Valstar, M., Cowie, R., Pantic, M., Schröder, M.: The SEMAINE database: annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Trans. Affect. Comput. 3(1), 5–17 (2012)

    Google Scholar 

  120. Meng, Z., Han, S., Chen, M., Tong, Y.: Audiovisual facial action unit recognition using feature level fusion. Int. Multimed. Data Eng. Manage. 7(1), 60–76 (2016)

    Google Scholar 

  121. Meng, Z., Han, S., Tong, Y.: Listen to your face: inferring facial action units from audio channel. IEEE Trans. Affect. Comput. https://doi.org/10.1109/TAFFC.2017.2749299

    Article  Google Scholar 

  122. Meng, Z., Han, S., Liu, P., Tong, Y.: Improving speech related facial action unit recognition by audiovisual information fusion. IEEE Trans. Cybern 49(9), 3293–3306 (2018)

    Google Scholar 

  123. Ming, Z., Bugeau, A., Rouas, J.-L., Shochi, T.: Facial action units intensity estimation by the fusion of features with multi-kernel support vector machine. In: 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–6 (2015)

  124. Modrow, D., Laloni, C., Doemens, G., Rigoll, G.: A novel sensor system for 3D face scanning based on infrared coded light. In: Proceedings of SPIE Conference on Three-Dimensional Image Capture and Applications, vol. 6805 (2008)

  125. Mohammadi, M.R., Fatemizadeh, E., Mahoor, M.H.: Intensity estimation of spontaneous facial action units based on their sparsity properties. IEEE Transactions on Cybernetics 46(3), 817–826 (2016)

    Google Scholar 

  126. Mohammadian, A., Aghaeinia, H., Towhidkhah, F., Seyyedsalehi, S.Z.: Subject adaptation using selective style transfer mapping for detection of facial action units. Expert Syst. Appl. 56, 282–290 (2016)

    Google Scholar 

  127. Mohoor, M.H., Zhou, M., Veon, K.L., Mavadati, S.M., Cohn, J.F.: Facial action unit recognition with sparse representation. In: IEEE International Conference on Automatic Face and Gesture Recognition (FG’11), 21–25, March 2011, March, Santa Barbara, CA, USA, pp. 336–342 (2011)

  128. Nicolle, J., Bailly, K., Chetouani, M.: Real-time facial action unit intensity prediction with regularized metric learning. Image Vis. Comput. 52, 1–14 (2016)

    Google Scholar 

  129. Nicolle, J., Bailly, K., Chetouani, M.: Facial action unit intensity prediction via hard multi-task metric learning for kernel regression. In: 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–6 (2015)

  130. Nishtha, N.R.: Automatic AU intensity detection/estimation for facial expression analysis: a review. In: International Conference on Inter Disciplinary Research in Engineering and Technology (ICIDRET), pp. 83–88 (2016)

  131. Oh, Y.-H., Le Ngo, A.C., Phan, R.C.-W., See, J., Ling, H.-C.: Intrinsic two-dimensional local structures for micro-expression recognition. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1851–1855 (2016)

  132. Pantic, M., Bartlett, M.S.: Machine analysis of facial expressions. In: Delac, K., Grgic, M. (eds.) Face Recognition, pp. 377–416. I-Tech Education and Publishing, New York (2007)

    Google Scholar 

  133. Pantic, M., Patras, I.: Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences. IEEE Trans. Syst. Man Cybern. B 36(2), 433–449 (2006)

    Google Scholar 

  134. Pantic, M., Rothkrantz, L.J.M.: Toward an affect-sensitive multimodal human-computer interaction. Proc. IEEE 91(9), 1370–1390 (2003)

    Google Scholar 

  135. Pantic, M., Rothkrantz, L.J.M.: Facial action recognition for facial expression analysis from static face images. IEEE Trans. Syst. Man Cybern. B 34(3), 1449–1461 (2004)

    Google Scholar 

  136. Pantic, M., Patras, I.: Temporal modeling of facial actions from face profile image sequences. In: IEEE International Conference on Multimedia and Expo (ICME), pp. 49–52 (2004)

  137. Pantic, M., Patras, I.: Detecting facial actions and their temporal segments in nearly frontal-view face image sequences. In: IEEE International Conference on Systems, Man and Cybernetics, pp. 3358–3363 (2005)

  138. Pantic, M., Valstar, M., Rademaker, R., Maat, L.: Web-based database for facial expression analysis. In: IEEE International Conference on Multimedia and Expo (2005)

  139. Petajan, E.: MPEG-4 face and body animation coding applied to HCI. In: Kisačanin, B., Pavlović, V., Huang, T.S. (eds.) Real-Time Vision for Human-Computer Interaction, pp. 249–268. Springer, Berlin (2005)

  140. Peters, J., Koot, H.M., Grunau, R.E., de Boer, J., Druenen, M.J.V., Tibboel, D., Duivenvoorden, H.J.: Neonatal facial coding system for assessing postoperative pain in infants: item reduction is valid and feasible. Clin. J. Pain 19(6), 353–363 (2003)

  141. Qian, K., Su, K., Zhang, J., Li, Y.: A 3D face registration algorithm based on conformal mapping. In: International Conference of Intelligence Computation and Evolutionary Computation, vol. 30, no. 22, pp. 1–11 (2018)

  142. Ranjan, R., Patel, V.M., Chellappa, R.: HyperFace: a deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE Trans. Pattern Anal. Mach. Intell. 41(1), 121–135 (2019)

  143. Rashid, M., Abu-Bakar, S.A.R., Mokji, M.: Human emotion recognition from videos using spatio-temporal and audio features. Vis. Comput. 29, 1269–1275 (2013)

  144. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems (NIPS) (2015)

  145. Rojo, R., Prados-Frutos, J.C., López-Valverde, A.: Pain assessment using the facial action coding system: a systematic review. Med. Clin. 145(8), 350–355 (2015)

  146. Rosenberg, E.L., Ekman, P., Blumenthal, J.A.: Facial expression and the affective component of cynical hostility in male coronary heart disease patients. Health Psychol. 17(4), 376–380 (1998)

  147. Rudovic, O., Pavlovic, V., Pantic, M.: Context-sensitive dynamic ordinal regression for intensity estimation of facial action units. IEEE Trans. Pattern Anal. Mach. Intell. 37(5), 944–958 (2015)

  148. Rudovic, O., Pavlovic, V., Pantic, M.: Kernel conditional ordinal random fields for temporal segmentation of facial action units. In: ECCV 2012, Lecture Notes in Computer Science vol. 7584, pp. 260–269 (2012)

  149. Russell, J.A., Fernández-Dols, J.M.: The psychology of facial expression. Cambridge University Press, USA (1997)

  150. Salah, A.A., Sebe, N., Gevers, T.: Communication and automatic interpretation of affect from facial expressions. In: Affective Computing and Interaction: Psychological, Cognitive and Neuroscientific Perspectives, pp. 157–183 (2010)

  151. Salter, T.: A need for flexible robotic devices. AMD Newslett. 6(1), 3 (2009)

  152. Sandbach, G., Zafeiriou, S., Pantic, M., Yin, L.: Static and dynamic 3D facial expression recognition: a comprehensive survey. Image Vis. Comput. 30(10), 683–697 (2012)

  153. Sandbach, G., Zafeiriou, S., Pantic, M.: Local normal binary patterns for 3D facial action unit detection. In: 19th IEEE International Conference on Image Processing, pp. 1813–1816 (2012)

  154. Sandbach, G., Zafeiriou, S., Pantic, M.: Markov random field structures for facial action unit intensity estimation. In: 2013 IEEE International Conference on Computer Vision Workshops, pp. 738–745 (2013)

  155. Sariyanidi, E., Gunes, H., Cavallaro, A.: Automatic analysis of facial affect: a survey of registration, representation, and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(6), 1113–1133 (2015)

  156. Savran, A., Bilge, M.T.: Regression-based intensity estimation of facial action units. Image Vis. Comput. 30(10), 774–784 (2012)

  157. Savran, A., Sankur, B., Bilge, M.T.: Comparative evaluation of 3D vs 2D modality for automatic detection of facial action units. Pattern Recogn. 45(2), 767–782 (2012)

  158. Savran, A., Alyüz, N., Dibeklioğlu, H., Çeliktutan, O., Gökberk, B., Sankur, B., Akarun, L.: Bosphorus database for 3D face analysis. In: Biometrics and Identity Management. Lecture Notes in Computer Science vol. 5372, pp. 47–56 (2008)

  159. Savran, A., Sankur, B., Bilge, M.T.: Estimation of facial action intensities on 2D and 3D data. In: 19th European Signal Processing Conference (EUSIPCO), pp. 1969–1973 (2011)

  160. Savran, A., Sankur, B., Bilge, M.T.: Facial action unit detection: 3D versus 2D modality. In: IEEE CVPR’10 Workshop on Human Communicative Behavior Analysis, pp. 71–78 (2010)

  161. Savran, A., Sankur, B.: Automatic detection of facial actions from 3D data. In: IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, pp. 612–619 (2009)

  162. Sayette, M.A., Creswell, K.G., Dimoff, J.D., Fairbairn, C.E., Cohn, J.F., Heckman, B.W., Kirchner, T.R., Levine, J.M., Moreland, R.L.: Alcohol and group formation: a multimodal investigation of the effects of alcohol on emotion and social bonding. Psychol. Sci. 23(8), 869–878 (2012)

  163. Seetaface. https://github.com/seetaface/SeetaFaceEngine

  164. Senechal, T., Rapp, V., Salam, H., Seguier, R., Bailly, K., Prevost, L.: Facial action recognition combining heterogeneous features via multikernel learning. IEEE Trans. Syst. Man Cybern. B 42(4), 993–1005 (2012)

  165. Senechal, T., Rapp, V., Prevost, L.: Facial feature tracking for emotional dynamic analysis. In: Advanced Concepts for Intelligent Vision Systems, pp. 495–506 (2011)

  166. Seshadri, K., Savvides, M.: Towards a unified framework for pose, expression and occlusion tolerant automatic facial alignment. IEEE Trans. Pattern Anal. Mach. Intell. 38(10), 2110–2122 (2016)

  167. Shreve, M., Godavarthy, S., Manohar, V., Goldgof, D., Sarkar, S.: Towards macro- and micro-expression spotting in video using strain patterns. In: 2009 Workshop on Applications of Computer Vision (WACV), pp. 1–6 (2009)

  168. Sikka, K.: Facial expression analysis for estimating pain in clinical settings. In: Proceedings of the 16th International Conference on Multimodal Interaction, pp. 349–353 (2014)

  169. Sikka, K., Wu, T., Susskind, J., Bartlett, M.: Exploring bag of words architectures in the facial expression domain. In: European Conference on Computer Vision, pp. 250–259 (2012)

  170. Simon, T., Nguyen, M.H., De la Torre, F., Cohn, J.F.: Action unit detection with segment-based SVMs. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2737–2744 (2010)

  171. Siritanawan, P., Kotani, K.: Facial action units detection by robust temporal features. In: 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR), pp. 161–168 (2015)

  172. Steidl, S., Levit, M., Batliner, A., Nöth, E., Niemann, H.: ’Of all things the measure is man’: automatic classification of emotions and inter-labeler consistency. In: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP’05) vol. 1, pp. 317–320 (2005)

  173. Stratou, G., Ghosh, A., Debevec, P., Morency, L.P.: Effect of illumination on automatic expression recognition: a novel 3D relightable facial database. In: IEEE International Conference on Automatic Face and Gesture Recognition and Workshops (FG’11), pp. 611–618 (2011)

  174. Sun, Y., Reale, M., Yin, L.: Recognizing partial facial action units based on 3D dynamic range data for facial expression recognition. In: IEEE International Conference on Automatic Face and Gesture, pp. 1–8 (2008)

  175. Sánchez-Lozano, E., Martinez, B., Tzimiropoulos, G., Valstar, M.: Cascaded continuous regression for real-time incremental face tracking. In: European Conference on Computer Vision—ECCV 2016, Part VIII, pp. 645–661 (2016)

  176. Sánchez-Lozano, E., Tzimiropoulos, G., Martinez, B., De la Torre, F., Valstar, M.: A functional regression approach to facial landmark tracking. CoRR, abs/1612.02203 (2016)

  177. Taheri, S., Qiu, Q., Chellappa, R.: Structure-preserving sparse decomposition for facial expression analysis. IEEE Trans. Image Process. 23(8), 3590–3603 (2014)

  178. Tam, G.K.L., Cheng, Z.-Q., Lai, Y.-K., Langbein, F.C., Liu, Y., Marshall, D., Martin, R.R., Sun, X.-F., Rosin, P.L.: Registration of 3D point clouds and meshes: a survey from rigid to nonrigid. IEEE Trans. Visual Comput. Graphics 19(7), 1199–1217 (2013)

  179. Tan, C.T., Rosser, D., Bakkes, S., Pisan, Y.: A feasibility study in using facial expressions analysis to evaluate player experiences. In: Proceedings of the 8th Australasian Conference on Interactive Entertainment: Playing the System (2012)

  180. Tang, C., Zheng, W., Yan, J., Li, Q., Li, Y., Zhang, T., Cui, Z.: View-independent facial action unit detection. In: IEEE 12th International Conference on Automatic Face & Gesture Recognition (2017)

  181. Tian, Y., Kanade, T., Cohn, J.F.: Recognizing action units for facial expression analysis. IEEE Trans. Pattern Anal. Mach. Intell. 23(2), 97–115 (2001)

  182. Tian, Y., Kanade, T., Cohn, J.F.: Evaluation of Gabor-wavelet-based facial action unit recognition in image sequences of increasing complexity. In: Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FGR’02), pp. 229–234 (2002)

  183. Tian, Y., Kanade, T., Cohn, J.F.: Recognizing lower face action units for facial expression analysis. In: Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 484–490 (2000)

  184. Tian, Y., Kanade, T., Cohn, J.F.: Recognizing upper face action units for facial expression analysis. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 294–301 (2000)

  185. Tong, Y., Chen, J., Ji, Q.: A unified probabilistic framework for spontaneous facial action modeling and understanding. IEEE Trans. Pattern Anal. Mach. Intell. 32(2), 258–273 (2010)

  186. Tong, Y., Liao, W., Ji, Q.: Facial action unit recognition by exploiting their dynamic and semantic relationship. IEEE Trans. Pattern Anal. Mach. Intell. 29(10), 1683–1699 (2007)

  187. Tsalakanidou, F., Malassiotis, S.: Real-time 2D + 3D facial action and expression recognition. Pattern Recogn. 43(5), 1763–1775 (2010)

  188. Tsalakanidou, F., Malassiotis, S.: Robust facial action recognition from real-time 3D streams. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 4–11 (2009)

  189. Tzimiropoulos, G., Pantic, M.: Gauss-Newton deformable part models for face alignment in-the-wild. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1851–1858 (2014)

  190. Valstar, M.F., Mehu, M., Jiang, B., Pantic, M., Scherer, K.: Meta-analysis of the first facial expression recognition challenge. IEEE Trans. Syst. Man Cybern. B 42(4), 966–979 (2012)

  191. Valstar, M.F., Pantic, M.: Fully automatic recognition of the temporal phases of facial actions. IEEE Trans. Syst. Man Cybern. B 42(1), 28–43 (2012)

  192. Valstar, M.F., Pantic, M.: Induced disgust, happiness and surprise: an addition to the MMI facial expression database. In: Proceedings of International Conference on Language Resources and Evaluation, Workshop on Emotion, pp. 65–70 (2010)

  193. Valstar, M.F., Pantic, M.: Combined support vector machines and hidden Markov models for modeling facial action temporal dynamics. In: Human-Computer Interaction, Lecture Notes in Computer Science vol. 4796, pp. 118–127 (2007)

  194. Valstar, M.F., Jiang, B., Mehu, M., Pantic, M., Scherer, K.: The first facial expression recognition and analysis challenge. In: IEEE International Conference on Automatic Face and Gesture Recognition and Workshops (FG’11), pp. 921–926 (2011)

  195. Valstar, M.F., Almaev, T., Girard, J.M., McKeown, G., Mehu, M., Yin, L., Pantic, M., Cohn, J.F.: FERA 2015: second facial expression recognition and analysis challenge. In: 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, pp. 1–8 (2015)

  196. Valstar, M.F., Patras, I., Pantic, M.: Facial action unit detection using probabilistic actively learned support vector machines on tracked facial point data. In: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), p. 76 (2005)

  197. Valstar, M., Pantic, M.: Fully automatic facial action unit detection and temporal analysis. In: Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), p. 149 (2006)

  198. Valstar, M.F., Sánchez-Lozano, E., Cohn, J.F., Jeni, L.A.: FERA 2017: addressing head pose in the third facial expression recognition and analysis challenge. In: 2017 IEEE 12th International Conference on Automatic Face & Gesture Recognition (2017)

  199. Valstar, M.F., Gunes, H., Pantic, M.: How to distinguish posed from spontaneous smiles using geometric features. In: 9th International Conference on Multimodal Interfaces (ICMI2007), pp. 38–45 (2007)

  200. Valstar, M.F., Pantic, M., Ambadar, Z., Cohn, J.F.: Spontaneous versus posed facial behaviour: automatic analysis of brow actions. In: Proceedings of the 8th International Conference on Multimodal Interfaces, pp. 162–170 (2006)

  201. Valstar, M., Pantic, M., Patras, I.: Motion history for facial action detection from face video. In: IEEE International Conference on Systems, Man and Cybernetics, pp. 635–640 (2004)

  202. Vinciarelli, A., Pantic, M., Bourlard, H.: Social signal processing: survey of an emerging domain. Image Vis. Comput. 27(12), 1743–1759 (2009)

  203. Viola, P., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vision 57(2), 137–154 (2004)

  204. Wang, S., Gan, Q., Ji, Q.: Expression-assisted facial action unit recognition under incomplete AU annotation. Pattern Recogn. 61, 78–91 (2017)

  205. Wang, S., Yan, W., Li, X., Zhao, G., Zhou, C., Fu, X., Yang, M., Tao, J.: Micro-expression recognition using color spaces. IEEE Trans. Image Process. 24(12), 6034–6047 (2015)

  206. Wang, N., Gao, X., Tao, D., Li, X.: Facial feature point detection: a comprehensive survey. Int. J. Comput. Vis., pp. 1–32 (2014)

  207. Wang, Z., Li, Y., Wang, S., Ji, Q.: Capturing global semantic relationships for facial action unit recognition. In: IEEE International Conference on Computer Vision, pp. 3304–3311 (2013)

  208. Wendin, K., Allesen-Holm, B.H., Bredie, W.L.P.: Do facial reactions add new dimensions to measuring sensory responses to basic tastes? Food Qual. Prefer. 22, 346–354 (2011)

  209. Whitehill, J., Omlin, C.W.: Haar features for FACS AU recognition. In: Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR’06), p. 101 (2006)

  210. Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 210–227 (2009)

  211. Wu, B., Lyu, S., Hu, B., Ji, Q.: Multi-label learning with missing labels for image annotation and facial action unit recognition. Pattern Recogn. 48(7), 2279–2289 (2015)

  212. Wu, H., Zhang, K., Tian, G.: Simultaneous face detection and pose estimation using convolutional neural network cascade. IEEE Access 6, 49563–49575 (2018)

  213. Wu, T., Butko, N.J., Ruvolo, P., Whitehill, J., Bartlett, M.S., Movellan, J.R.: Action unit recognition transfer across datasets. In: IEEE International Conference on Automatic Face and Gesture Recognition (FG’11), pp. 889–896 (2011)

  214. Wu, Q., Shen, X., Fu, X.: The machine knows what you are hiding: an automatic micro-expression recognition system. In: Affective Computing and Intelligent Interaction, Lecture Notes in Computer Science vol. 6975, pp. 152–162 (2011)

  215. Wu, S., Kan, M., He, Z., Shan, S., Chen, X.: Funnel-structured cascade for multi-view face detection with alignment-awareness. Neurocomputing 221(19), 138–145 (2017)

  216. Wu, T., Bartlett, M., Movellan, J.R.: Facial expression recognition using Gabor motion energy filters. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 42–47 (2010)

  217. Xiong, X., De la Torre, F.: Supervised descent method and its application to face alignment. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 532–539 (2013)

  218. Yadav, P.C., Singh, H.V., Patel, A.K., Singh, A.: A comparative analysis of different facial action tracking models and techniques. In: International Conference on Emerging Trends in Electrical, Electronics and Sustainable Energy Systems, pp. 347–349 (2016)

  219. Yan, W., Li, X., Wang, S., Zhao, G., Liu, Y., Chen, Y., Fu, X.: CASME II: an improved spontaneous micro-expression database and the baseline evaluation. PLoS ONE 9(1), e86041 (2014). https://doi.org/10.1371/journal.pone.0086041

  220. Yan, W., Wu, Q., Liu, Y., Wang, S., Fu, X.: CASME database: a dataset of spontaneous micro-expressions collected from neutralized faces. In: IEEE Conference on Automatic Face and Gesture Recognition, pp. 1–7 (2013)

  221. Yang, M.-H., Kriegman, D.J., Ahuja, N.: Detecting faces in images: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 24(1), 34–58 (2002)

  222. Yang, P., Liu, Q., Metaxas, D.N.: Boosting encoded dynamic features for facial expression recognition. Pattern Recogn. Lett. 30(2), 132–139 (2009)

  223. Yang, P., Liu, Q., Metaxas, D.N.: Boosting coded dynamic features for facial action units and facial expression recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–6 (2007)

  224. Yang, C., Zhan, Y.: Upper facial action units recognition based on KPCA and SVM. In: Computer Graphics, Imaging and Visualisation (CGIV), pp. 349–353 (2007)

  225. Yüce, A., Gao, H., Thiran, J.P.: Discriminant multi-label manifold embedding for facial action unit detection. In: 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–6 (2015)

  226. Zafeiriou, S., Zhanga, C., Zhang, Z.: A survey on face detection in the wild: past, present and future. Comput. Vis. Image Underst. 138, 1–24 (2015)

  227. Zafeiriou, S., Petrou, M.: Sparse representations for facial expressions recognition via l1 optimization. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 32–39 (2010)

  228. Zamzmi, G., Pai, C., Goldgof, D., Kasturi, R., Ashmeade, T., Sun, Y.: An approach for automated multimodal analysis of infants’ pain. In: International Conference on Pattern Recognition (ICPR) (2016)

  229. Zeng, J., Chu, W.-S., De la Torre, F., Cohn, J.F., Xiong, Z.: Confidence preserving machine for facial action unit detection. IEEE Trans. Image Process. 25(10), 4753–4767 (2016)

  230. Zeng, Z., Pantic, M., Roisman, G.I., Huang, T.S.: A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31(1), 39–58 (2009)

  231. Zhang, P., Ben, X., Yan, R., Wu, C., Guo, C.: Micro-expression recognition system. Optik 127(3), 1395–1400 (2016)

  232. Zhang, Y., Ji, Q.: Active and dynamic information fusion for facial expression understanding from image sequence. IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 699–714 (2005)

  233. Zhang, X., Mahoor, M.H.: Task-dependent multi-task kernel learning for facial action unit detection. Pattern Recogn. 51, 187–196 (2016)

  234. Zhang, X., Yin, L., Cohn, J.F., Canavan, S., Reale, M., Horowitz, A., Liu, P., Girard, J.M.: BP4D-spontaneous: a high-resolution spontaneous 3D dynamic facial expression database. Image Vis. Comput. 32(10), 692–706 (2014)

  235. Zhang, Y., Zhang, L., Hossain, M.A.: Adaptive 3D facial action intensity estimation and emotion recognition. Expert Syst. Appl. 42(3), 1446–1464 (2015)

  236. Zhang, X., Yin, L., Cohn, J.F., Canavan, S., Reale, M., Horowitz, A., Liu, P.: A high resolution spontaneous 3D dynamic facial expression database. In: The 10th IEEE International Conference on Automatic Face and Gesture Recognition (FG), pp. 1–6 (2013)

  237. Zhang, X., Mahoor, M.H.: Simultaneous detection of multiple facial action units via hierarchical task structure learning. In: 22nd International Conference on Pattern Recognition (ICPR), pp. 1863–1868 (2014)

  238. Zhang, C., Zhang, Z.: A survey of recent advances in face detection. Technical Report, Microsoft Research, CA, USA, MSR-TR-2010-66 (2010)

  239. Zhang, C., Zhang, Z.: Improving multiview face detection with multi-task deep convolutional neural networks. In: 2014 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1036–1041 (2014)

  240. Zhang, Y., Dong, W., Hu, B., Ji, Q.: Weakly-supervised deep convolutional neural network learning for facial action unit intensity estimation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2314–2323 (2018)

  241. Zhang, L., Tong, Y., Ji, Q.: Active image labeling and its application to facial action labeling. In: ECCV 2008, Lecture Notes in Computer Science vol. 5303, pp. 706–719 (2008)

  242. Zhao, K., Chu, W.-S., De la Torre, F., Cohn, J.F., Zhang, H.: Joint patch and multi-label learning for facial action unit and holistic expression recognition. IEEE Trans. Image Process. 25(8), 3931–3946 (2016)

  243. Zhao, J., Mao, X., Zhang, J.: Learning deep facial expression features from image and optical flow sequences using 3D CNN. Vis. Comput. (2018). https://doi.org/10.1007/s00371-018-1477-y

  244. Zhao, X., Dellandréa, E., Chen, L., Samaras, D.: AU recognition on 3D faces based on an extended statistical facial feature model. In: Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1–6 (2010)

  245. Zhao, K., Chu, W.-S., De la Torre, F., Cohn, J.F., Zhang, H.: Joint patch and multi-label learning for facial action unit detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2207–2216 (2015)

  246. Zhao, K., Chu, W., Zhang, H.: Deep region and multi-label learning for facial action unit detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3391–3399 (2016)

  247. Zhao, S., Gao, Y., Zhang, B.: Sobel-LBP. In: IEEE International Conference on Image Processing, pp. 2144–2147 (2008)

  248. Zhi, R., Flierl, M., Ruan, Q., Kleijn, W.B.: Graph-preserving sparse nonnegative matrix factorization with application to facial expression recognition. IEEE Trans. Syst. Man Cybern. B 41(1), 38–52 (2011)

  249. Zhou, Z.-H., Chen, K.-J., Dai, H.-B.: Enhancing relevance feedback in image retrieval using unlabeled data. ACM Trans. Inf. Syst. 24(2), 219–244 (2006)

  250. Zhou, Y., Pi, J., Shi, B.E.: Pose-independent facial action unit intensity regression based on multi-task deep transfer learning. In: 2017 IEEE 12th International Conference on Automatic Face & Gesture Recognition (2017)

Acknowledgements

This work was supported by the National Natural Science Foundation of China (61673052), the National Research and Development Major Project (2017YFD0400100), the Fundamental Research Fund for the Central Universities of China (2302018FRF-TP-18-014A2) and the grant from Chinese Scholarship Council (CSC).

Author information

Authors and Affiliations

Authors

Contributions

Zhi, R. organized the paper and wrote the draft. Liu, M. supplemented the content of the manuscript. Zhang, D. improved the paper and supplied literature materials.

Corresponding author

Correspondence to Ruicong Zhi.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Zhi, R., Liu, M. & Zhang, D. A comprehensive survey on automatic facial action unit analysis. Vis Comput 36, 1067–1093 (2020). https://doi.org/10.1007/s00371-019-01707-5
