Online multi-hypergraph fusion learning for cross-subject emotion recognition
Information Fusion (IF 14.7), Pub Date: 2024-03-07, DOI: 10.1016/j.inffus.2024.102338
Tongjie Pan , Yalan Ye , Yangwuyong Zhang , Kunshu Xiao , Hecheng Cai

Multimodal fusion for emotion recognition has received increasing attention from researchers because of its ability to effectively leverage complementary multimodal information. However, two main challenges lead to performance degradation of existing emotion recognition models and limit their practical use. One is that multimodal signals are difficult to fuse effectively enough to capture the complexity of emotions. The other is that the individual variability and non-stationarity of physiological signals lead to poor performance on new subjects. In particular, existing methods do not work well when faced with emotion recognition for new subjects in online scenarios. In this paper, we propose a novel online multi-hypergraph fusion learning method (OnMHF) to effectively fuse multimodal information and to reduce the discrepancy between training data and test data for online cross-subject emotion recognition. Specifically, in the training phase, a multi-hypergraph fusion is proposed to fuse multimodal physiological signals and obtain emotion-aware information by leveraging complementary multimodal information and high-order correlations among the modalities. In the online recognition phase, an online multi-hypergraph learning scheme is designed to learn from online multimodal data by updating the hypergraph structure. As a result, the proposed method is more effective for emotion recognition of target subjects when target data arrive in an online manner. Experimental results demonstrate that the proposed method outperforms both the baselines and the compared state-of-the-art methods on online emotion recognition tasks.
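To make the multi-hypergraph fusion idea concrete, the sketch below shows a generic hypergraph-learning pipeline of the kind the abstract describes: one k-NN hypergraph per modality, a weighted combination of their normalized Laplacians, and a transductive label-inference step. This is a minimal illustration based on the standard hypergraph learning formulation, not the authors' OnMHF implementation; all function names, weights, and the regularization scheme are illustrative assumptions.

```python
# Minimal sketch of multi-hypergraph fusion for transductive label inference.
# Assumes the standard normalized hypergraph Laplacian; names are illustrative,
# not taken from the OnMHF paper.
import numpy as np

def build_knn_hypergraph(X, k=5):
    """Incidence matrix H: each vertex spawns one hyperedge containing
    itself and its k nearest neighbours in feature space."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    H = np.zeros((n, n))                                        # vertices x hyperedges
    for e in range(n):
        nbrs = np.argsort(d2[e])[:k + 1]                        # vertex e plus its k neighbours
        H[nbrs, e] = 1.0
    return H

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    n_v, n_e = H.shape
    w = np.ones(n_e) if w is None else w
    Dv = H @ w                                                  # vertex degrees
    De = H.sum(axis=0)                                          # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(Dv + 1e-12))
    Theta = Dv_is @ H @ np.diag(w) @ np.diag(1.0 / De) @ H.T @ Dv_is
    return np.eye(n_v) - Theta

def fuse_and_predict(modalities, Y, lam=1.0, alphas=None):
    """Fuse one hypergraph per modality by weighting their Laplacians,
    then solve the regularized transductive system (L + lam*I) F = lam*Y."""
    alphas = np.ones(len(modalities)) / len(modalities) if alphas is None else alphas
    L = sum(a * hypergraph_laplacian(build_knn_hypergraph(X))
            for a, X in zip(alphas, modalities))
    F = np.linalg.solve(L + lam * np.eye(L.shape[0]), lam * Y)  # soft label scores
    return F.argmax(axis=1)

# Toy usage: two modalities (e.g. EEG and peripheral features), 6 labelled source samples.
rng = np.random.default_rng(0)
eeg, periph = rng.normal(size=(20, 32)), rng.normal(size=(20, 8))
Y = np.zeros((20, 2)); Y[:3, 0] = 1; Y[3:6, 1] = 1              # known source labels
print(fuse_and_predict([eeg, periph], Y))
```

In an online setting of the kind the abstract describes, newly arriving target samples would be appended as vertices, the per-modality incidence matrices rebuilt (or incrementally extended), and the inference step re-run, so the hypergraph structure tracks the incoming data stream.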
