Sequence-to-Sequence Piano Transcription with Transformers
arXiv - CS - Sound | Pub Date: 2021-07-19 | arXiv:2107.09142
Curtis Hawthorne, Ian Simon, Rigel Swavely, Ethan Manilow, Jesse Engel

Automatic Music Transcription has seen significant progress in recent years by training custom deep neural networks on large datasets. However, these models have required extensive domain-specific design of network architectures, input/output representations, and complex decoding schemes. In this work, we show that equivalent performance can be achieved using a generic encoder-decoder Transformer with standard decoding methods. We demonstrate that the model can learn to translate spectrogram inputs directly to MIDI-like output events for several transcription tasks. This sequence-to-sequence approach simplifies transcription by jointly modeling audio features and language-like output dependencies, thus removing the need for task-specific architectures. These results point toward possibilities for creating new Music Information Retrieval models by focusing on dataset creation and labeling rather than custom model design.
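To make the described approach concrete, here is a minimal, hypothetical sketch of the idea: a generic encoder-decoder Transformer (here PyTorch's stock nn.Transformer, which the paper does not prescribe) that projects spectrogram frames into the encoder and autoregressively emits MIDI-like event tokens with plain greedy decoding. The vocabulary size, dimensions, token ids, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumption, not the paper's code): a generic
# encoder-decoder Transformer mapping spectrogram frames to MIDI-like
# event tokens. Dimensions, vocabulary, and token ids are made up.
import torch
import torch.nn as nn

N_MELS = 229        # spectrogram bins per frame (assumed)
VOCAB_SIZE = 1024   # MIDI-like events: note-on/off, time shifts, velocities, EOS (assumed)
D_MODEL = 512
SOS, EOS = 1, 2     # special token ids (assumed)

class Spec2MIDI(nn.Module):
    def __init__(self):
        super().__init__()
        self.input_proj = nn.Linear(N_MELS, D_MODEL)          # spectrogram frames -> model width
        self.token_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)    # output event token embeddings
        # Positional encodings are omitted here for brevity; a real model adds them.
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True,
        )
        self.out_head = nn.Linear(D_MODEL, VOCAB_SIZE)         # next-event prediction

    def forward(self, spec, tokens):
        # spec: (batch, frames, N_MELS); tokens: (batch, seq) of event ids
        src = self.input_proj(spec)
        tgt = self.token_emb(tokens)
        causal = self.transformer.generate_square_subsequent_mask(tokens.size(1))
        out = self.transformer(src, tgt, tgt_mask=causal)
        return self.out_head(out)

@torch.no_grad()
def greedy_decode(model, spec, max_len=128):
    # Standard greedy decoding: feed back the argmax token until EOS.
    tokens = torch.full((spec.size(0), 1), SOS, dtype=torch.long)
    for _ in range(max_len):
        logits = model(spec, tokens)
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
        if (next_tok == EOS).all():
            break
    return tokens

if __name__ == "__main__":
    model = Spec2MIDI().eval()
    dummy_spec = torch.randn(1, 500, N_MELS)   # ~a few seconds of audio frames
    events = greedy_decode(model, dummy_spec)
    print(events.shape)                        # (1, decoded_sequence_length)
```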

Updated: 2021-07-21