Transformer-XL Based Music Generation with Multiple Sequences of Time-valued Notes
arXiv - CS - Multimedia Pub Date : 2020-07-11 , DOI: arxiv-2007.07244 Xianchao Wu and Chengyuan Wang and Qinying Lei
Current state-of-the-art AI-based classical music generation algorithms, such as
Music Transformer, are trained on a single sequence of notes with time shifts.
The major drawback of expressing time as absolute intervals is the difficulty of
computing the similarity of notes that share the same note value but differ in
tempo, within one MIDI file or across MIDI files. In addition, using a single
sequence prevents the model from separately and effectively learning musical
information such as harmony and rhythm. In this paper, we propose a framework
with two novel methods to address these two shortcomings: the construction of
time-valued note sequences, which decouples note values from tempos, and the
separate use of four sequences (former note-on to current note-on, note-on to
note-off, pitch, and velocity) to jointly train four Transformer-XL networks.
Trained on a 23-hour piano MIDI dataset, our framework generates significantly
better and hour-level longer music than three state-of-the-art baselines
(Music Transformer, DeepJ, and a single-sequence Transformer-XL), as measured
by both automatic and manual evaluation.
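The two ideas in the abstract can be illustrated with a minimal sketch. The field names, the beat-based quantization, and the `encode` helper below are assumptions for illustration, not the authors' actual code: durations are converted from seconds into tempo-free note values (beats), and each note stream is split into the four parallel sequences the abstract names.

```python
# Hypothetical sketch of a four-sequence, time-valued note encoding.
# All names and representations here are assumptions, not the paper's code.

from dataclasses import dataclass

@dataclass
class Note:
    on: float       # absolute onset time in seconds
    off: float      # absolute offset time in seconds
    pitch: int      # MIDI pitch (0-127)
    velocity: int   # MIDI velocity (1-127)

def to_note_value(seconds: float, bpm: float) -> float:
    """Express a duration in beats, detaching it from tempo:
    a quarter note maps to 1.0 at any BPM."""
    return seconds * bpm / 60.0

def encode(notes: list[Note], bpm: float):
    """Split a note stream into four parallel sequences:
    previous-onset-to-onset gap and onset-to-offset duration
    (both as tempo-free note values), plus pitch and velocity."""
    on_to_on, on_to_off, pitches, velocities = [], [], [], []
    prev_on = notes[0].on if notes else 0.0
    for n in notes:
        on_to_on.append(to_note_value(n.on - prev_on, bpm))
        on_to_off.append(to_note_value(n.off - n.on, bpm))
        pitches.append(n.pitch)
        velocities.append(n.velocity)
        prev_on = n.on
    return on_to_on, on_to_off, pitches, velocities

# The same quarter-note melody played at two tempos yields identical
# time-valued sequences, so similarity survives tempo changes.
slow = [Note(0.0, 0.5, 60, 80), Note(0.5, 1.0, 64, 80)]    # at 120 BPM
fast = [Note(0.0, 0.25, 60, 80), Note(0.25, 0.5, 64, 80)]  # at 240 BPM
assert encode(slow, 120.0) == encode(fast, 240.0)
```

The final assertion shows why this representation helps: under an absolute-time encoding, the slow and fast renditions would produce different sequences, whereas the beat-based note values make them identical.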
Updated: 2020-07-15