Full length article
Decoding emotional changes of Android gamers using a fused type-2 fuzzy deep neural network

https://doi.org/10.1016/j.chb.2020.106640

Highlights

  • Designed a fused type-2 fuzzy deep neural network for online emotion recognition.

  • EEG signals and facial images are utilized simultaneously to decode emotional changes.

  • Experimented on Android gamers to assess their emotional changes while playing games.

  • A phase-sensitive CSP algorithm is designed to extract features from EEG signals.

Abstract

With the rapidly growing popularity of gaming applications on Android phones, analyzing the emotional changes of dedicated Android gamers has become a subject of considerable interest among psychologists. Some Android games have recently been found to produce negative effects on gamers; in the worst cases, the effects can even become life-threatening. Most existing research explores the positive or negative impact of playing Android games on children and adults from a psychological viewpoint. However, the online recognition of the emotional state changes of Android gamers while they play remains relatively unexplored. To fill this void, the present study proposes a novel method of identifying the emotional state changes of Android gamers by decoding their brain signals and facial images simultaneously during game-play. The second novelty of the paper lies in designing a multimodal fusion method between brain signals and facial images for this application. To address this challenge, the paper proposes a fused type-2 fuzzy deep neural network (FT2FDNN), which integrates a brain signal processing approach based on a general type-2 fuzzy reasoning algorithm with an image/video processing approach based on a deep convolutional neural network. FT2FDNN uses multiple modalities to extract the same information (here, emotional changes) simultaneously from the type-2 fuzzy and deep neural representations. The proposed fused type-2 fuzzy deep learning paradigm demonstrates promising results, classifying the emotional changes of gamers with high accuracy. The proposed work thus opens a new avenue for future researchers.

Introduction

Playing Android games and beating the scores of others has become a ‘happening’ thing for young individuals for quite some time now. People used to play games on their desktops for leisure; with rapid technological improvements, game-playing on Android mobile phones has become popular with a much larger mass of people. The matter of concern is that if only good games are played for recreational purposes, they have positive impacts in terms of releasing stress. However, people, especially young minds, tend to get addicted to games (Ryan et al., 2006) at the expense of their work. The obsession with Android games and the resulting lack of focus in day-to-day activities bring negative impacts (Schneider et al., 2004). Moreover, several violent games are freely downloadable from the online market, and playing them for long periods can create severe disorders in the mental condition of players.

Some works already present in the existing literature (Anderson et al., 2010)–(Carnagey et al., 2007) justify our claim of emotional changes due to playing Android games (Anderson et al., 2010). It should also be added that, beyond mood swings, the adverse impacts of playing such violent games can run much deeper in young minds. Nowadays, cyber criminals also target victims through gaming platforms (Messias et al., 2011). Just a few years ago, in 2016, a treacherous game named the ‘Blue Whale Challenge’ was released, and several people reportedly committed suicide after playing it. Such incidents inspire us to develop an algorithm that can automatically identify the emotional state changes of Android gamers. The proposed multi-modal approach utilizes both the electroencephalographic (EEG) signals and the facial images of gamers while they play. The automatic recognition of the emotional state (Cowie & Cornelius, 2003) of an Android game player can be used for self-monitoring by the gamers and/or for parental control.

Kühn et al. (2019) showed how playing violent Android games can have more detrimental effects than playing non-violent games. Przybylski and Weinstein (2019) measured the aggression level of adolescents spending their time on Android games and found elevated aggression in them. In a similar work, Prescott, Sargent and Hull claimed that the resulting physical aggression increases over time with greater involvement in violent games (Prescott et al., 2018). Likewise, Hasan et al. (2013) examined the long-term effects of violent Android games. Arriaga et al. (2011) studied the effects of playing violent Android games for long durations on college students. Carnagey et al. (2007) reported that psychological damage can occur in chronic game players.

All the papers cited above show how playing games for long durations can be detrimental to health. However, the existing studies are mainly conducted from a psychological perspective. There is hardly any technical work that justifies the psychologists' claims with machine learning (ML) algorithms capable of decoding the emotional changes of gamers in real time. The novelty of the proposed work lies mainly in applying ML approaches to recognize changes in the emotions of players by processing facial expressions and EEG signals simultaneously. The authors found no prior work in which these two modalities are combined to classify the emotions of gamers. A few works incorporate both modalities in other application domains, but none detects the emotional state changes of Android-game-addicted people.

To grasp emotion quantitatively, different physiological (Balters & Steinert, 2017)–(Purves et al., 2012, p. 713), behavioral (Coulson, 2004)–(Gottman & Krokoff, 1989) and subjective (Fredrickson et al., 2003)–(Watson et al., 1988) measures have been adopted in the existing literature. The physiological changes that express human emotions can be quantified using physiological sensors more accurately than behavioral and subjective measures, so a larger body of work has been devoted to decoding the physiological responses associated with distinct emotions. In this context, several measures have been used in the literature, such as Autonomic Nervous System (ANS)-mediated changes (Balters & Steinert, 2017), including heart-rate changes (using an electrocardiogram sensor) (Anttonen & Surakka, 2005)–(Pollatos et al., 2007), blood pressure changes (using pulse oximeter, manual auscultatory and digital oscillometric techniques) (Sarlo et al., 2005), breathing rate changes (using a respiratory transducer) (Homma & Masaoka, 2008), changes in brain signal characteristics (using EEG) (Murugappan et al., 2008) and brain activations (using functional near-infrared spectroscopy/fNIRS and/or functional magnetic resonance imaging/fMRI) (FakhrHosseini et al., 2015)–(Kesler et al., 2001). Recent advances in brain wave and brain image analysis enable more sophisticated and refined testing of emotional contents (Dasborough et al., 2008).

Huang et al. (2017) proposed a method for recognizing four emotional states, using movie clips as stimuli for the subjects. Their study shows that a multi-modal approach is far better than using a single modality of either facial images or EEG signals when a neural network is taken as the classifier; this motivated our multi-modal approach. Another work, by Sokolov et al. (2017), shows that Hjorth parameters can be used as EEG features and that principal component analysis can extract features from facial images. However, that work achieves only modest accuracy, which the present approach improves upon. Petrantonakis and Hadjileontiadis (2010) implemented a hybrid adaptive filtering approach to extract the relevant EEG features. In the current paper, the authors implement a novel EEG feature extraction method using a phase-sensitive CSP algorithm, which outperforms the standard CSP algorithm (Delorme & Makeig, 2004)–(Abdi & Williams, 2010) and other existing feature extraction techniques, as sketched below.
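For readers unfamiliar with CSP, the following is a minimal sketch of the standard common spatial patterns pipeline that the authors' phase-sensitive variant extends: class-wise channel covariances, a generalized eigendecomposition, and normalized log-variance features. The phase-sensitive modification itself is defined in the full text and is not reproduced here; the array shapes and the `n_pairs` parameter are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """Standard CSP. X1, X2: (trials, channels, samples) EEG arrays for
    two classes. Returns (2*n_pairs, channels) spatial filters W."""
    def avg_cov(X):
        # Average channel-covariance matrix across the trials of one class
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w; the extreme
    # eigenvalues yield filters whose output variance is maximal for one
    # class and minimal for the other.
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    idx = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, idx].T

def csp_features(trial, W):
    """Normalized log-variance features of one (channels, samples) trial."""
    Z = W @ trial
    var = np.var(Z, axis=1)
    return np.log(var / var.sum())
```

For a six-class emotion problem, such binary filters would typically be applied in a one-vs-rest fashion; the authors' phase-sensitive extension modifies the covariance stage so that phase information is not discarded, per their method section.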

Apart from designing the CSP feature extractor, another primary motivation here is to design a fused type-2 fuzzy deep neural network (FT2FDNN) using the concept of multimodal fusion for online emotion recognition of gamers while they are engaged in playing Android games. The proposed model simultaneously extracts information from the brain signal using general type-2 fuzzy set (GT2FS) induced reasoning and from the image/video data of facial expressions (Friesen & Ekman, 1978) using a deep convolutional neural network (CNN) representation. The knowledge learnt by the GT2FS and CNN representations is then combined in the fusion layer to produce the final fused data representation, which yields the ultimate pattern-classification solution. From our previous experience (Ghosh et al., 2018)–(Saha et al., 2016) and the existing literature (Ghosh et al., 2019), it is evident that brain signals fluctuate widely over time, which introduces uncertainty into the data representation. We select general type-2 fuzzy logic (GT2FL) for EEG data analysis for its inherent capability of handling such uncertainties (Rakshit et al., 2016). On the other hand, the convolutional neural network (CNN) (Lawrence et al., 1997) has proved its efficacy in dealing with image/video data while removing noise from the original data. This inspires us to use a 3-dimensional (3D) CNN for image (video) data analysis. The proposed FT2FDNN combines the outputs of the GT2FS and the 3D-CNN to form the fused representation, which is finally fed to a classifier unit to obtain the six desired emotion classes: happiness, sadness, anger, surprise, disgust and neutral. FT2FDNN is therefore particularly suitable for online classification of the emotional contents of gamers, as the model can efficiently handle high levels of data ambiguity and noise. A structural sketch of this fusion is given below.
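As a structural illustration only, the PyTorch sketch below mirrors the fusion idea just described: a 3D-CNN branch for the facial video stream, a stand-in branch for the GT2FS-processed EEG features, a fusion layer that concatenates the two representations, and a six-way emotion classifier. The GT2FS reasoning is approximated here by a dense layer, and the class name `FusionNetSketch`, all layer sizes, and the input shapes are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class FusionNetSketch(nn.Module):
    """Minimal FT2FDNN-style fusion sketch: video branch + EEG branch,
    concatenated and classified into six emotions."""
    def __init__(self, eeg_dim=6, n_classes=6):
        super().__init__()
        self.video = nn.Sequential(           # input: (B, 1, frames, H, W)
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> (B, 16)
        )
        self.eeg = nn.Sequential(              # stand-in for GT2FS reasoning
            nn.Linear(eeg_dim, 16), nn.ReLU(),
        )
        self.head = nn.Sequential(             # fusion layer + classifier
            nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, n_classes),
        )

    def forward(self, video, eeg_feats):
        z = torch.cat([self.video(video), self.eeg(eeg_feats)], dim=1)
        return self.head(z)

model = FusionNetSketch()
video = torch.randn(4, 1, 16, 64, 64)   # batch of 4 face clips
eeg = torch.randn(4, 6)                  # batch of CSP-style EEG features
logits = model(video, eeg)               # (4, 6): one score per emotion
```

Concatenation is the simplest fusion choice; the paper's fusion layer may combine the two representations differently, which the full text specifies.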

The remainder of this paper is organized as follows. Section 2 discusses the principles and methodologies adopted to design the proposed algorithm from a mathematical viewpoint. Section 3 gives an overview of the experimental framework and the EEG data acquisition during Android game-play. Sections 4 and 5 present the offline and online analysis of image and EEG data and the performance evaluation of the proposed FT2FDNN, respectively. The results of all the experiments are summarized in the discussion in Section 6, and concluding remarks are listed in Section 7.

Section snippets

The proposed approach

In this section, the methodology adopted to design an efficient online emotion detector for Android gamers is discussed. A fused type-2 fuzzy deep neural network (FT2FDNN) is designed, which integrates a classical image-dataset processing approach using a 3-dimensional convolutional neural network (3D-CNN) (Lawrence et al., 1997) with EEG signal (Acharya et al., 2018) classification using a type-2 fuzzy set (T2FS). The proposed FT2FDNN is shown in Fig. 1. As shown in the
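To make the type-2 fuzzy ingredient concrete, the snippet below evaluates a standard interval type-2 Gaussian membership function with an uncertain mean, the usual building block of type-2 fuzzy EEG models; a general type-2 set, as used in FT2FDNN, additionally weights each point of this interval with a secondary membership. The parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

def it2_gaussian_mf(x, m1, m2, sigma):
    """Interval type-2 Gaussian membership with uncertain mean in [m1, m2].
    Returns (lower, upper) grades; the gap between them is the footprint
    of uncertainty that absorbs trial-to-trial EEG variability."""
    g = lambda m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    upper = 1.0 if m1 <= x <= m2 else g(m1 if x < m1 else m2)
    lower = min(g(m1), g(m2))
    return lower, upper

# Example: an EEG feature value of 0.8 against a fuzzy set whose centre
# is only known to lie in [0.5, 0.7], with spread 0.2
low, up = it2_gaussian_mf(0.8, 0.5, 0.7, 0.2)
```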

Experimental framework

This section describes the protocol under which the experiments were performed.

Offline and online analysis of image and EEG data

The experiments are conducted in two ways: offline data analysis and online data analysis.

Performance evaluation of the proposed FT2FDNN

This section evaluates the performance of the FT2FDNN classifier against existing classifiers.

Discussion

The proposed work is dedicated to recognizing six different emotional states of Android game players. The advantages of the described work can be stated in terms of the parameters mentioned below.

  • a) Robustness analysis: The classification of the emotional states is performed using a multi-modal fusion model based on GT2FS and 3D-CNN. If a classifier suffers from overfitting and/or underfitting, this is reflected in its classification accuracy. In the proposed work, however, 87.58%

Conclusion

The paper presents a novel method for real-time emotion recognition of Android gamers using the concept of multi-modal fusion. The proposed technique works automatically by fusing the results obtained from EEG classification using GT2FS and facial expression classification using 3D-CNN. To the best of the authors' knowledge, this is a novel attempt in which brain signals and facial expressions are measured simultaneously to understand how Android games are playing a significant role in

Credit author statement

Lidia Ghosh: Methodology, Writing – original draft, Validation, Data curation, Software; Sriparna Saha: Visualization, Writing – original draft, Software, Data curation, Investigation; Amit Konar: Conceptualization.

Acknowledgment

The first and third authors thankfully acknowledge the funding provided by the Ministry of Human Resource Development for the RUSA-II project granted to Jadavpur University, India. The second author is thankful to the University for providing research seed money and a UGC Start-up Grant under the Basic Scientific Research scheme. The work was approved by the Institutional Ethics Committee.

References

  • C.A. Anderson, Violent video game effects on aggression, empathy, and prosocial behavior in eastern and western countries: A meta-analytic review, Psychological Bulletin (2010)

  • J. Anttonen et al., Emotions and heart rate while sitting on a chair

  • P. Arriaga et al., Effects of playing violent computer games on emotional desensitization and aggressive behavior, Journal of Applied Social Psychology (2011)

  • S. Balters et al., Capturing emotion reactivity through physiology measurement as a foundation for affective engineering in engineering design science and engineering practices, Journal of Intelligent Manufacturing (2017)

  • M. Coulson, Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence, Journal of Nonverbal Behavior (2004)

  • M. FakhrHosseini et al., Estimation of drivers' emotional states based on neuroergonomic equipment: An exploratory study using fNIRS

  • B.L. Fredrickson et al., What good are positive emotions in crisis? A prospective study of resilience and emotions following the terrorist attacks on the United States on September 11th, 2001, Journal of Personality and Social Psychology (2003)

  • E. Friesen et al., Facial action coding system: A technique for the measurement of facial movement, Palo Alto (1978)

  • L. Ghosh et al., Hemodynamic analysis for cognitive load assessment and classification in motor learning tasks using type-2 fuzzy sets, IEEE Transactions on Emerging Topics in Computational Intelligence (2018)

  • J.M. Gottman et al., Marital interaction and satisfaction: A longitudinal view, Journal of Consulting and Clinical Psychology (1989)

  • A. Halder et al., Emotion recognition from the lip-contour of a subject using artificial bee colony optimization algorithm

  • I. Homma et al., Breathing rhythms and emotions, Experimental Physiology (2008)

  • Y. Huang et al., Fusion of facial expressions and EEG for multimodal emotion recognition, Computational Intelligence and Neuroscience (2017)

  • J. Huang et al., Video-based sign language recognition without temporal segmentation

  • M.L. Kesler et al., Neural substrates of facial emotion processing using fMRI, Cognitive Brain Research (2001)

  • S. Kühn et al., Does playing violent video games cause aggression? A longitudinal intervention study, Molecular Psychiatry (2019)

  • S. Lawrence et al., Face recognition: A convolutional neural-network approach, IEEE Transactions on Neural Networks (1997)