The Multimodal Sentiment Analysis in Car Reviews (MuSe-CaR) Dataset: Collection, Insights and Improvements
arXiv - CS - Multimedia. Pub Date: 2021-01-15, DOI: arxiv-2101.06053
Lukas Stappen, Alice Baird, Lea Schumann, Björn Schuller

Truly real-life data presents a strong but exciting challenge for sentiment and emotion research. The high variety of possible `in-the-wild' properties makes large datasets such as these indispensable for building robust machine learning models. In this context, no dataset has yet been made available that covers, in sufficient quantity, the variety of challenges in each modality needed to force an exploratory analysis of the interplay of all modalities. In this contribution, we present MuSe-CaR, a first-of-its-kind multimodal dataset. The data is publicly available, as it recently served as the testing bed for the 1st Multimodal Sentiment Analysis Challenge, which focused on the tasks of emotion, emotion-target engagement, and trustworthiness recognition by comprehensively integrating the audio-visual and language modalities. Furthermore, we give a thorough overview of the dataset in terms of collection and annotation, including annotation tiers not used in this year's MuSe 2020. In addition, for one of the sub-challenges - predicting the level of trustworthiness - no participant outperformed the baseline model, so we propose a simple but highly efficient Multi-Head-Attention network which, using multimodal fusion, exceeds the baseline by around 0.2 CCC (almost a 50 % improvement).
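The evaluation measure cited above, CCC, is the Concordance Correlation Coefficient commonly used for continuous emotion regression. As a point of reference only (not taken from the paper's code), a minimal NumPy implementation of the standard formula is sketched below.

import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Concordance Correlation Coefficient between two 1-D sequences:
    # 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    mean_true, mean_pred = y_true.mean(), y_pred.mean()
    var_true, var_pred = y_true.var(), y_pred.var()
    covariance = ((y_true - mean_true) * (y_pred - mean_pred)).mean()
    return 2 * covariance / (var_true + var_pred + (mean_true - mean_pred) ** 2)

Since a perfect prediction yields a CCC of 1, an absolute gain of roughly 0.2 CCC over the baseline is substantial on this scale.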

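The abstract only names the proposed architecture, so the following PyTorch sketch merely illustrates what a simple multi-head-attention fusion of audio-visual and language features could look like; the feature dimensions, layer sizes, and mean-pooling readout are placeholder assumptions, not the authors' actual model.

import torch
import torch.nn as nn

class MultimodalAttentionFusion(nn.Module):
    # Hypothetical illustration: fuse one feature vector per modality
    # (audio, visual, text) with self-attention and regress a single
    # continuous target such as trustworthiness.
    def __init__(self, dims=(88, 512, 768), d_model=64, n_heads=4):
        super().__init__()
        # Project each modality to a shared embedding dimension.
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in dims])
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, audio, visual, text):
        # Treat the three projected modality vectors as a length-3 sequence.
        tokens = torch.stack(
            [p(x) for p, x in zip(self.proj, (audio, visual, text))], dim=1
        )
        fused, _ = self.attn(tokens, tokens, tokens)  # attention across modalities
        return self.head(fused.mean(dim=1)).squeeze(-1)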
Updated: 2021-01-18