Emotion recognition based on fusion of long short-term memory networks and SVMs
Digital Signal Processing (IF 2.9) Pub Date: 2021-07-09, DOI: 10.1016/j.dsp.2021.103153
Tian Chen, Hongfang Yin, Xiaohui Yuan, Yu Gu, Fuji Ren, Xiao Sun

This paper proposes a multimodal emotion recognition method that fuses electroencephalogram (EEG) and electrocardiogram (ECG) signals based on Dempster-Shafer evidence theory. For EEG, an SVM classifier is used to classify the extracted features; for ECG, a Bi-directional Long Short-Term Memory (Bi-LSTM) network is built for emotion recognition, and its output is fused with the EEG classification results through evidence theory. We selected 25 video clips covering five emotions (happy, relaxed, angry, sad, and disgusted), and 20 subjects participated in the emotion experiment. The experimental results show that the proposed multimodal fusion model outperforms the single-modality emotion recognition models: in the Arousal and Valence dimensions, the average accuracy improves by 2.64% and 2.75% over the EEG-based model, and by 7.37% and 8.73% over the ECG-based model.
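The abstract gives no implementation details beyond the architecture, but the fusion step can be illustrated. Below is a minimal sketch of Dempster-Shafer combination, assuming each single-modality classifier (the SVM on EEG, the Bi-LSTM on ECG) outputs a probability distribution over the five emotion classes that serves as a basic probability assignment with singleton focal elements; all variable names and numbers are hypothetical, not from the paper.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (BPAs) defined over the
    same singleton hypotheses (here: five emotion classes) with Dempster's
    rule. With singleton-only focal elements, B ∩ C is non-empty only when
    B == C, so the unnormalized combined mass is the elementwise product."""
    joint = m1 * m2                   # agreement mass per class
    conflict = 1.0 - joint.sum()      # K: total conflicting mass
    if np.isclose(conflict, 1.0):
        raise ValueError("total conflict: sources cannot be combined")
    return joint / (1.0 - conflict)   # Dempster normalization by 1 - K

# Hypothetical per-class scores from the two single-modality models
# (class order: happy, relaxed, angry, sad, disgusted).
p_eeg_svm    = np.array([0.55, 0.20, 0.10, 0.10, 0.05])  # SVM on EEG features
p_ecg_bilstm = np.array([0.40, 0.35, 0.10, 0.10, 0.05])  # Bi-LSTM on ECG

fused = dempster_combine(p_eeg_svm, p_ecg_bilstm)
print(fused, fused.argmax())  # fused masses and predicted class index
```

With singleton-only masses, Dempster's rule reduces to a normalized elementwise product; the paper's actual construction of the BPAs (e.g., whether any mass is assigned to the full frame of discernment) may differ.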



Updated: 2021-07-20