Enriched Music Representations With Multiple Cross-Modal Contrastive Learning
IEEE Signal Processing Letters (IF 3.2), Pub Date: 2021-04-05, DOI: 10.1109/lsp.2021.3071082
Andres Ferraro, Xavier Favory, Konstantinos Drossos, Yuntae Kim, Dmitry Bogdanov

Modeling the various aspects that make a music piece unique is a challenging task, requiring the combination of multiple sources of information. Deep learning is commonly used to obtain representations from such sources, including the audio itself, interactions between users and songs, and associated genre metadata. Recently, contrastive learning has produced representations that generalize better than those from traditional supervised methods. In this paper, we present a novel approach that combines multiple types of music-related information using cross-modal contrastive learning, allowing us to learn audio features from heterogeneous data simultaneously. We align the latent representations obtained from playlist-track interactions, genre metadata, and the tracks' audio by maximizing the agreement between these modality representations with a contrastive loss. We evaluate our approach on three tasks: genre classification, playlist continuation, and automatic tagging. We compare its performance against a baseline audio-based CNN trained to predict these modalities, and we study the importance of including multiple sources of information when training our embedding model. The results suggest that the proposed method outperforms the baseline on all three downstream tasks and achieves performance comparable to the state of the art.
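The paper's exact architecture and loss are not given here; as a minimal sketch under assumptions, cross-modal alignment of the kind described can be illustrated with a symmetric InfoNCE-style contrastive loss between paired modality embeddings (e.g., the audio branch and the genre-metadata branch). All function and variable names below are illustrative, not the authors' implementation:

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE-style contrastive loss between two batches of
    modality embeddings, where z_a[i] and z_b[i] are a matching pair."""
    # L2-normalize rows so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (N, N); matching pairs on the diagonal
    # cross-entropy with rows of z_b as candidates (softmax over each row)
    log_p_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # symmetric term: softmax over columns (rows of z_a as candidates)
    log_p_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -0.5 * (np.mean(np.diag(log_p_rows)) + np.mean(np.diag(log_p_cols)))

rng = np.random.default_rng(0)
audio = rng.normal(size=(8, 16))                   # audio-branch embeddings
genre = audio + 0.01 * rng.normal(size=(8, 16))    # well-aligned genre embeddings
unrelated = rng.normal(size=(8, 16))               # embeddings with no alignment

aligned = info_nce(audio, genre)
misaligned = info_nce(audio, unrelated)
print(aligned, misaligned)  # aligned modalities give the lower loss
```

Minimizing such a loss pulls each track's representations from different modalities together while pushing apart representations of different tracks, which is the "maximizing agreement" objective the abstract refers to. Aligning the audio branch against several modalities at once would simply sum one such term per modality pair.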

Updated: 2021-04-05