EmotionMeter: A Multimodal Framework for Recognizing Human Emotions
IEEE Transactions on Cybernetics ( IF 11.8 ) Pub Date : 2019-03-01 , DOI: 10.1109/tcyb.2018.2797176
Wei-Long Zheng , Wei Liu , Yifei Lu , Bao-Liang Lu , Andrzej Cichocki

In this paper, we present a multimodal emotion recognition framework called EmotionMeter that combines brain waves and eye movements. To increase the feasibility and wearability of EmotionMeter in real-world applications, we design a six-electrode placement above the ears to collect electroencephalography (EEG) signals. We combine EEG and eye movements for integrating the internal cognitive states and external subconscious behaviors of users to improve the recognition accuracy of EmotionMeter. The experimental results demonstrate that modality fusion with multimodal deep neural networks can significantly enhance the performance compared with a single modality, and the best mean accuracy of 85.11% is achieved for four emotions (happy, sad, fear, and neutral). We explore the complementary characteristics of EEG and eye movements for their representational capacities and identify that EEG has the advantage of classifying happy emotion, whereas eye movements outperform EEG in recognizing fear emotion. To investigate the stability of EmotionMeter over time, each subject performs the experiments three times on different days. EmotionMeter obtains a mean recognition accuracy of 72.39% across sessions with the six-electrode EEG and eye movement features. These experimental results demonstrate the effectiveness of EmotionMeter within and between sessions.
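The modality fusion described above can be illustrated with a minimal, untrained sketch: EEG features and eye-movement features are concatenated and passed through a small feed-forward network that outputs probabilities over the four emotion classes. All dimensions, weights, and the network shape here are illustrative assumptions, not the paper's actual architecture or feature extraction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (the paper's real features differ):
n_trials = 8
eeg_dim = 30   # e.g. 6 electrodes x several frequency-band features (assumption)
eye_dim = 10   # eye-movement statistics such as fixation/saccade counts (assumption)
n_classes = 4  # happy, sad, fear, neutral

eeg = rng.standard_normal((n_trials, eeg_dim))
eye = rng.standard_normal((n_trials, eye_dim))

def fuse_and_score(eeg_feats, eye_feats, w_hidden, w_out):
    """Feature-level fusion: concatenate the two modalities, then a
    tiny two-layer network maps the joint feature to class probabilities."""
    x = np.concatenate([eeg_feats, eye_feats], axis=1)  # joint representation
    h = np.tanh(x @ w_hidden)                           # shared hidden layer
    logits = h @ w_out
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

# Random (untrained) weights, just to show the data flow.
w_hidden = rng.standard_normal((eeg_dim + eye_dim, 16)) * 0.1
w_out = rng.standard_normal((16, n_classes)) * 0.1

probs = fuse_and_score(eeg, eye, w_hidden, w_out)
print(probs.shape)                           # one probability row per trial
print(np.allclose(probs.sum(axis=1), 1.0))   # each row is a distribution
```

In a real system the weights would be learned from labeled trials; the point of the sketch is only that concatenating the internal (EEG) and external (eye-movement) feature streams gives the classifier access to both, which is where the reported gain over a single modality comes from.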

Updated: 2019-03-01