Emotion recognition by deeply learned multi-channel textual and EEG features
Future Generation Computer Systems (IF 6.2), Pub Date: 2021-01-12, DOI: 10.1016/j.future.2021.01.010
Yishu Liu , Guifang Fu

Human emotion recognition is a key technique in human–computer interaction. Traditional emotion recognition algorithms rely on external actions such as facial expressions, which may fail to capture real human emotion because facial expressions can be camouflaged. The EEG signal, in contrast, is closely related to human emotion and can reflect it directly. In this paper, we propose to learn multi-channel features from EEG signals for human emotion recognition, where the EEG signal is elicited by sound stimulation. Specifically, we fuse multi-channel EEG and textual features in the time domain to recognize different human emotions, combining six statistical time-domain features into a feature vector for emotion classification. EEG and textual features are extracted from both the time and frequency domains. Finally, we train an SVM for human emotion recognition. Experiments on the DEAP dataset show that, compared with frequency-domain feature-based emotion recognition algorithms, the proposed method improves recognition accuracy.
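The pipeline described above (per-channel time-domain statistics, fused into one feature vector, classified with an SVM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not list which six statistical features are used, so the six below (mean, standard deviation, mean absolute first and second differences, skewness, kurtosis) are common choices assumed for demonstration, and the synthetic trials merely stand in for DEAP-style EEG recordings.

```python
import numpy as np
from sklearn.svm import SVC

def time_domain_features(signal: np.ndarray) -> np.ndarray:
    """Six time-domain statistics for one EEG channel.
    NOTE: the paper's exact six features are not named in the abstract;
    these are conventional stand-ins."""
    mu, sigma = signal.mean(), signal.std()
    return np.array([
        mu,                                   # mean amplitude
        sigma,                                # standard deviation
        np.abs(np.diff(signal)).mean(),       # mean absolute 1st difference
        np.abs(np.diff(signal, n=2)).mean(),  # mean absolute 2nd difference
        ((signal - mu) ** 3).mean() / sigma**3,  # skewness
        ((signal - mu) ** 4).mean() / sigma**4,  # kurtosis
    ])

def fuse_channels(trial: np.ndarray) -> np.ndarray:
    """Fuse per-channel features: (n_channels, n_samples) -> flat vector."""
    return np.concatenate([time_domain_features(ch) for ch in trial])

# Synthetic stand-in for DEAP-style trials: 8 channels, 256 samples each.
rng = np.random.default_rng(0)
X = np.stack([fuse_channels(rng.standard_normal((8, 256))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # placeholder binary valence labels

clf = SVC(kernel="rbf").fit(X, y)  # SVM classifier, as in the paper
```

Each trial yields an 8 × 6 = 48-dimensional fused vector; real use would substitute DEAP recordings, genuine valence/arousal labels, and a proper train/test split.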




Updated: 2021-02-05