
Predicting behavioral competencies automatically from facial expressions in real-time video-recorded interviews

  • Special Issue Paper
  • Published in the Journal of Real-Time Image Processing

A Correction to this article was published on 16 March 2021.

Abstract

This work develops a real-time image and video processor with an artificial intelligence (AI) agent that predicts a job candidate’s behavioral competencies from his or her facial expressions. Prediction is performed on real-time video-recorded interviews using a histogram of oriented gradients with a support vector machine (HOG-SVM) combined with convolutional neural network (CNN) recognition. Unlike the classical view of recognizing emotional states, the prototype system automatically decodes a job candidate’s behaviors from microexpressions, following the behavioral ecology view of facial displays (BECV), in the context of employment interviews. An experiment was conducted at a Fortune 500 company, and video recordings and competency scores were collected from the company’s employees and hiring managers. The results indicate that the proposed system offers better predictive power than human-structured interviews, personality inventories, occupational interest tests, and assessment centers. The proposed approach can therefore be used as an effective screening method based on a personal-value-based competency model.
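The abstract describes a two-stage pipeline: HOG-SVM face detection on live video frames followed by CNN-based recognition of facial displays. The implementation itself is not reproduced on this page, so the following is only a minimal sketch of such a pipeline, assuming dlib's built-in HOG-plus-linear-SVM frontal face detector, OpenCV for frame capture, and a small Keras CNN; the input resolution, layer sizes, number of output classes, and webcam input are hypothetical placeholders rather than the authors' model.

```python
# A minimal, illustrative sketch (not the authors' implementation) of a
# HOG-SVM face-detection stage feeding a small CNN classifier, as assumed
# from the abstract. dlib's get_frontal_face_detector() is a HOG feature
# extractor with a linear SVM; the Keras CNN, the 48x48 input size, the
# five output classes, and the webcam input are hypothetical placeholders.

import cv2                      # frame capture and preprocessing
import dlib                     # HOG + linear-SVM frontal face detector
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

FACE_SIZE = 48                  # assumed CNN input resolution
NUM_CLASSES = 5                 # hypothetical number of output classes

hog_svm_detector = dlib.get_frontal_face_detector()


def build_cnn(num_classes=NUM_CLASSES):
    """A small CNN over grayscale face crops; the architecture is illustrative."""
    model = tf.keras.Sequential([
        layers.Input(shape=(FACE_SIZE, FACE_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def faces_from_frame(frame_bgr):
    """Detect faces with HOG-SVM and return normalized grayscale crops."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    crops = []
    for rect in hog_svm_detector(gray, 1):          # upsample the frame once
        top, left = max(rect.top(), 0), max(rect.left(), 0)
        face = gray[top:rect.bottom(), left:rect.right()]
        if face.size == 0:
            continue
        face = cv2.resize(face, (FACE_SIZE, FACE_SIZE)).astype("float32") / 255.0
        crops.append(face[..., np.newaxis])         # add the channel axis
    return crops


if __name__ == "__main__":
    cnn = build_cnn()                               # untrained placeholder model
    cap = cv2.VideoCapture(0)                       # webcam stands in for the interview feed
    ok, frame = cap.read()
    if ok:
        for crop in faces_from_frame(frame):
            probs = cnn.predict(crop[np.newaxis, ...], verbose=0)[0]
            print("per-class probabilities:", probs)
    cap.release()
```

In the system the abstract describes, per-frame outputs would presumably be aggregated over the whole interview and mapped to behavioral-competency scores after training on labeled footage; that training and aggregation step is beyond what this page documents and is omitted above.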



Acknowledgements

This work was supported by the Ministry of Science and Technology, Taiwan (Grant no. 109-2511-H-003-046).

Author information

Corresponding authors

Correspondence to Hung-Yue Suen or Kuo-En Hung.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Su, YS., Suen, HY. & Hung, KE. Predicting behavioral competencies automatically from facial expressions in real-time video-recorded interviews. J Real-Time Image Proc 18, 1011–1021 (2021). https://doi.org/10.1007/s11554-021-01071-5

