Advances in computer–human interaction for detecting facial expression using dual tree multi band wavelet transform and Gaussian mixture model

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

Facial expressions play an important role in human communication, carrying rich information about human emotions. Over the last two decades, facial expression recognition has become a very active research area in pattern recognition and computer vision. A central difficulty is feature extraction: facial structures are dynamic in nature, which makes extracting discriminative features from facial images challenging. In this research, an efficient approach for emotion or facial expression analysis based on the dual-tree M-band wavelet transform (DTMBWT) and the Gaussian mixture model (GMM) is presented. Facial expressions are represented by the DTMBWT at decomposition levels one through six, and DTMBWT energy and entropy features are extracted from these representations as descriptors of the corresponding expression. The features are evaluated for recognition with a GMM classifier while varying the number of Gaussians. The Japanese female facial expression (JAFFE) database, which contains seven facial expressions (happy, sad, angry, fear, neutral, surprise and disgust), is employed for the evaluation. Results show that the framework achieves 98.14% accuracy using the fourth-level decomposition, which is considerably high.
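The energy and entropy features described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: DTMBWT decomposition is not available in standard libraries, so the sketch assumes a subband has already been obtained as a flat list of coefficients, and computes the two features per subband (energy as the sum of squared coefficients, entropy as the Shannon entropy of the normalized squared-coefficient distribution — the standard wavelet-entropy definition, assumed here rather than taken from the paper).

```python
import math

def subband_features(coeffs):
    """Energy and Shannon entropy of one wavelet subband.

    `coeffs` is a flat list of subband coefficients; the actual
    DTMBWT subband layout used in the paper is not specified here.
    """
    # Energy: sum of squared coefficients
    energy = sum(c * c for c in coeffs)
    if energy == 0.0:
        return 0.0, 0.0
    # Entropy of the squared coefficients normalized by total energy
    entropy = 0.0
    for c in coeffs:
        p = (c * c) / energy
        if p > 0.0:
            entropy -= p * math.log(p)
    return energy, entropy

# Toy example: a 4-coefficient "subband" with uniform magnitudes,
# so each normalized weight is 0.25 and entropy equals ln(4)
e, h = subband_features([1.0, -1.0, 1.0, -1.0])
```

Concatenating these two values over all subbands at a given decomposition level would yield the feature vector that is then modeled by a per-class GMM; the class whose mixture gives the highest likelihood is the predicted expression.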




Funding

The author(s) received no specific funding for this work.

Author information

Corresponding author

Correspondence to Jenni Kommineni.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kommineni, J., Mandala, S., Sunar, M.S. et al. Advances in computer–human interaction for detecting facial expression using dual tree multi band wavelet transform and Gaussian mixture model. Neural Comput & Applic 34, 15397–15408 (2022). https://doi.org/10.1007/s00521-020-05037-9
