Fusing pairwise modalities for emotion recognition in conversations
Information Fusion (IF 18.6), Pub Date: 2024-02-15, DOI: 10.1016/j.inffus.2024.102306
Chunxiao Fan , Jie Lin , Rui Mao , Erik Cambria

Multimodal fusion has the potential to significantly enhance model performance in Emotion Recognition in Conversations (ERC) by efficiently integrating information from diverse modalities. However, existing methods integrate information from all modalities directly, which makes it difficult to assess the individual contribution of each modality during training and to capture nuanced interactions between modalities. To address this issue, we propose a novel framework named Fusing Pairwise Modalities for ERC. In the proposed method, a pairwise fusion technique is incorporated into multimodal fusion, enabling each modality to contribute unique information and thereby facilitating a more comprehensive understanding of the emotional context. Additionally, a purpose-designed density loss is applied to characterise the density of the fused features, with a specific focus on mitigating the redundancy introduced by pairwise fusion. By penalising feature density during training, the density loss contributes to a more efficient and effective fusion process. To validate the proposed framework, we conduct comprehensive experiments on two benchmark datasets, IEMOCAP and MELD. The results demonstrate that our approach outperforms state-of-the-art methods, indicating its effectiveness in addressing the challenges of multimodal fusion in the context of ERC.
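Since the abstract only outlines the approach, the following PyTorch sketch is a hypothetical illustration of the general idea rather than the paper's actual architecture: each pair of modalities (text-audio, text-visual, audio-visual) is fused by its own head, the pairwise representations are concatenated for emotion classification, and a simple L1 penalty on the fused features stands in for the density loss (an assumption, not the authors' exact formulation).

import torch
import torch.nn as nn

class PairwiseFusionERC(nn.Module):
    """Illustrative sketch of pairwise modality fusion for ERC.

    The fusion heads and the density penalty below are assumptions made
    for illustration; the paper's exact layers and loss are not specified
    in the abstract.
    """

    def __init__(self, dim_text, dim_audio, dim_visual, hidden, num_classes):
        super().__init__()
        # One fusion head per modality pair
        self.fuse_ta = nn.Sequential(nn.Linear(dim_text + dim_audio, hidden), nn.ReLU())
        self.fuse_tv = nn.Sequential(nn.Linear(dim_text + dim_visual, hidden), nn.ReLU())
        self.fuse_av = nn.Sequential(nn.Linear(dim_audio + dim_visual, hidden), nn.ReLU())
        self.classifier = nn.Linear(3 * hidden, num_classes)

    def forward(self, t, a, v):
        # Each pair contributes its own fused representation
        f_ta = self.fuse_ta(torch.cat([t, a], dim=-1))
        f_tv = self.fuse_tv(torch.cat([t, v], dim=-1))
        f_av = self.fuse_av(torch.cat([a, v], dim=-1))
        fused = torch.cat([f_ta, f_tv, f_av], dim=-1)
        return self.classifier(fused), fused

def density_penalty(fused, weight=1e-3):
    # Assumed stand-in for the density loss: penalise the mean magnitude
    # of the fused features to discourage redundant activations.
    return weight * fused.abs().mean()

# Toy usage on random utterance-level features
model = PairwiseFusionERC(dim_text=768, dim_audio=128, dim_visual=512,
                          hidden=256, num_classes=6)
t, a, v = torch.randn(4, 768), torch.randn(4, 128), torch.randn(4, 512)
logits, fused = model(t, a, v)
labels = torch.randint(0, 6, (4,))
loss = nn.functional.cross_entropy(logits, labels) + density_penalty(fused)
loss.backward()

In this sketch the density term is added to the classification loss so that, during training, fused representations with redundant (large, overlapping) activations are penalised, mirroring the abstract's stated goal of mitigating redundancy in pairwise fusion.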
