Maximum weight multi-modal information fusion algorithm of electroencephalographs and face images for emotion recognition
Computers & Electrical Engineering (IF 4.3), Pub Date: 2021-07-16, DOI: 10.1016/j.compeleceng.2021.107319
Mei Wang, Ziyang Huang, Yuancheng Li, Lihong Dong, Hongguang Pan
To address the low accuracy of traditional emotion recognition methods based only on facial expressions, this paper proposes an emotion recognition method based on maximum weight multi-modal fusion of electroencephalograph (EEG) and facial expression information. First, the induced emotional EEG data are converted into corresponding EEG topographic maps and fed into a convolutional network, which is trained to output decision information. Second, an illumination compensation method is applied to filter noise from the face image data. The face images are then used to train a multi-scale feature extraction network, which outputs its own decision information. Finally, a weighted fusion method is proposed to combine the two modalities' outputs at the decision level for emotion recognition. Experiments show that the recognition accuracy of the multi-scale feature extraction network reaches 94.4% on the CK+ dataset and 72% on the Fer2013 dataset, while the multi-modal fusion method achieves 92.6% accuracy in emotion recognition.
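As a minimal sketch of the decision-level step described above, the snippet below fuses the per-class probability vectors produced by the two modality networks. The weighting rule shown (each modality weighted by its maximum class probability, i.e. its confidence) is an assumed reading of the paper's "maximum weight" fusion, not the authors' exact formulation; the function name and example vectors are hypothetical.

import numpy as np

def max_weight_fusion(p_eeg, p_face):
    """Decision-level fusion of two per-class probability vectors.

    Illustrative sketch: each modality's weight is its maximum class
    probability (its confidence), normalized so the weights sum to 1.
    This is an assumed interpretation of the 'maximum weight' rule.
    """
    p_eeg = np.asarray(p_eeg, dtype=float)
    p_face = np.asarray(p_face, dtype=float)
    w_eeg, w_face = p_eeg.max(), p_face.max()      # per-modality confidence
    fused = (w_eeg * p_eeg + w_face * p_face) / (w_eeg + w_face)
    return int(fused.argmax()), fused              # predicted class, fused scores

# Hypothetical outputs from the EEG topographic-map CNN and the
# multi-scale facial-expression network over the same emotion classes.
p_eeg = [0.10, 0.70, 0.20]
p_face = [0.15, 0.55, 0.30]
label, scores = max_weight_fusion(p_eeg, p_face)
print(label, scores)

Because the weights are renormalized, the fused vector remains a valid probability distribution, and the more confident modality dominates the final decision.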


