Adaptive Modality Distillation for Separable Multimodal Sentiment Analysis
IEEE Intelligent Systems (IF 6.4), Pub Date: 2021-02-09, DOI: 10.1109/mis.2021.3057757
Wei Peng, Xiaopeng Hong, Guoying Zhao

Multimodal sentiment analysis has attracted increasing attention because, with the arrival of complementary data streams, it has great potential to improve on and go beyond unimodal sentiment analysis. In this article, we present an efficient separable multimodal learning method to handle tasks with missing modalities. In this method, the multimodal tensor is used to guide the evolution of each separated modality representation. To reduce computational cost, Tucker decomposition is introduced, which leads to a general extension of the low-rank tensor fusion method with more modality interactions. This, in turn, enhances our modality distillation process. Comprehensive experiments on three popular multimodal sentiment analysis datasets, CMU-MOSI, POM, and IEMOCAP, show superior performance, especially when only partial modalities are available.
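
The central computational idea mentioned in the abstract is Tucker-decomposed tensor fusion of the modality representations. As a rough, hypothetical sketch of how such a factorized fusion can be implemented (module names, dimensions, and the flattened-core shortcut below are illustrative assumptions, not the authors' implementation), consider the following PyTorch example:

```python
import torch
import torch.nn as nn

class TuckerFusion(nn.Module):
    """Illustrative Tucker-style trimodal tensor fusion (not the paper's code).

    Each modality vector is first projected into a small factor space; the
    outer product of the projected factors is then contracted with a learned
    core, approximating the full (d_a x d_v x d_t x d_out) fusion tensor at a
    fraction of the parameter and compute cost.
    """

    def __init__(self, d_a, d_v, d_t, rank, d_out):
        super().__init__()
        self.proj_a = nn.Linear(d_a, rank)   # audio factor matrix (assumed)
        self.proj_v = nn.Linear(d_v, rank)   # visual factor matrix (assumed)
        self.proj_t = nn.Linear(d_t, rank)   # text factor matrix (assumed)
        # Core tensor, flattened to a (rank^3 -> d_out) map for convenience
        self.core = nn.Linear(rank * rank * rank, d_out, bias=False)

    def forward(self, z_a, z_v, z_t):
        a = self.proj_a(z_a)                 # (B, rank)
        v = self.proj_v(z_v)                 # (B, rank)
        t = self.proj_t(z_t)                 # (B, rank)
        # Outer product of the three factor vectors: (B, rank, rank, rank)
        fused = torch.einsum('bi,bj,bk->bijk', a, v, t)
        return self.core(fused.flatten(start_dim=1))  # (B, d_out)

# Toy usage with made-up dimensions
model = TuckerFusion(d_a=74, d_v=35, d_t=300, rank=8, d_out=64)
out = model(torch.randn(4, 74), torch.randn(4, 35), torch.randn(4, 300))
print(out.shape)  # torch.Size([4, 64])
```

Compared with materializing the full outer-product fusion tensor, only the rank-sized factors and the small core are learned, which is the kind of saving the abstract attributes to Tucker decomposition.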
