Two-Level Multimodal Fusion for Sentiment Analysis in Public Security
Security and Communication Networks Pub Date : 2021-06-04 , DOI: 10.1155/2021/6662337
Jianguo Sun 1 , Hanqi Yin 1 , Ye Tian 1 , Junpeng Wu 1 , Linshan Shen 1 , Lei Chen 2
Large amounts of data are widely stored in cyberspace. Not only can they bring much convenience to people’s lives and work, but they can also assist work in the information security field, such as microexpression recognition and sentiment analysis in criminal investigations. Thus, it is of great significance to recognize and analyze sentiment information, which is usually described by multiple modalities. Owing to the correlation among data from different modalities, multimodal data can provide more comprehensive and robust information than unimodal data in analysis tasks. Complementary information from different modalities can be obtained with multimodal fusion methods, which process the data through fusion algorithms and preserve the accuracy of the information used for subsequent classification or prediction tasks. In this study, a two-level multimodal fusion (TlMF) method with both data-level and decision-level fusion is proposed for the sentiment analysis task. In the data-level fusion stage, a tensor fusion network is utilized to obtain text-audio and text-video embeddings by fusing the text features with the audio and video features, respectively. In the decision-level fusion stage, a soft fusion method is adopted to combine the classification or prediction results of the upstream classifiers, so that the final result is as accurate as possible. The proposed method is tested on the CMU-MOSI, CMU-MOSEI, and IEMOCAP datasets, and the empirical results and ablation studies confirm the effectiveness of TlMF in capturing useful information from all the tested modalities.
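The two fusion stages described above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation: it assumes the data-level stage uses the standard tensor-fusion trick (outer product of embeddings, each padded with a constant 1 so unimodal features survive alongside bimodal interactions) and that the decision-level "soft fusion" is a weighted average of the upstream classifiers' class-probability vectors. The function names and dimensions are hypothetical.

```python
import numpy as np

def tensor_fusion(text_emb: np.ndarray, other_emb: np.ndarray) -> np.ndarray:
    """Data-level fusion: outer product of two modality embeddings.

    Each vector is extended with a constant 1, so the flattened result
    contains the original unimodal features as well as every pairwise
    (bimodal) interaction term.
    """
    t = np.concatenate([text_emb, [1.0]])
    o = np.concatenate([other_emb, [1.0]])
    return np.outer(t, o).flatten()

def soft_fusion(prob_list, weights=None) -> np.ndarray:
    """Decision-level soft fusion: weighted average of the class
    probability vectors produced by the upstream classifiers."""
    probs = np.stack(prob_list)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()  # renormalize to a distribution

# Example: fuse a 3-dim text embedding with a 2-dim audio embedding,
# then average the decisions of two hypothetical upstream classifiers.
ta = tensor_fusion(np.array([0.2, 0.5, 0.1]), np.array([0.3, 0.7]))
print(ta.shape)   # (3+1) * (2+1) = (12,)
final = soft_fusion([np.array([0.6, 0.4]), np.array([0.8, 0.2])])
print(final)      # [0.7 0.3]
```

In the paper's pipeline, `tensor_fusion` would be applied twice (text-audio and text-video), each fused embedding would feed its own classifier, and `soft_fusion` would combine their outputs into the final prediction.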

Updated: 2021-06-04