A Generative Model for Raw Audio Using Transformer Architectures
arXiv - CS - Multimedia Pub Date : 2021-06-30 , DOI: arxiv-2106.16036
Prateek Verma, Chris Chafe

This paper proposes a novel way of doing audio synthesis at the waveform level using Transformer architectures. We propose a deep neural network for generating waveforms, similar to WaveNet \cite{oord2016wavenet}. The model is fully probabilistic, auto-regressive, and causal, i.e. each generated sample depends only on previously observed samples. Our approach outperforms a widely used WaveNet architecture by up to 9\% on a similar dataset for next-step prediction. Through the attention mechanism, the architecture learns which past audio samples are important for predicting the future sample. We show how causal Transformer generative models can be used for raw waveform synthesis, and that performance can be improved by a further 2\% by conditioning samples over a wider context. The flexibility of the current model in synthesizing audio from latent representations suggests a large number of potential applications. However, this novel approach of using generative Transformer architectures for raw audio synthesis is still far from generating meaningful music without latent codes/meta-data to aid the generation process.
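The causality constraint the abstract describes — each generated sample attending only to previously observed samples — is typically enforced in a Transformer by masking the attention logits above the diagonal. The following is a minimal single-head sketch of that mechanism in NumPy, not the authors' actual architecture; the random projection weights and toy dimensions are purely illustrative.

```python
import numpy as np

def causal_self_attention(x, seed=0):
    """Single-head causal self-attention over a sequence of sample embeddings.

    x: (T, d) array, one d-dimensional embedding per waveform sample.
    Output position t attends only to positions <= t, mirroring the
    auto-regressive constraint in the text. Projection weights are
    random here, fixed by `seed`, purely for illustration.
    """
    T, d = x.shape
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv

    scores = q @ k.T / np.sqrt(d)                     # (T, T) attention logits
    future = np.triu(np.ones((T, T), dtype=bool), k=1)  # True strictly above diagonal
    scores[future] = -np.inf                          # block attention to the future

    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v

# Causality check: editing future samples must not change earlier outputs.
rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4))
y_before = causal_self_attention(x)

x_edited = x.copy()
x_edited[5:] += 10.0                                  # perturb only samples t >= 5
y_after = causal_self_attention(x_edited)
print(np.allclose(y_before[:5], y_after[:5]))         # earlier positions unaffected
```

In a full generative model of this kind, such a block would be stacked with feed-forward layers, and the output at each step would parameterize a distribution over the next raw audio sample.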

Updated: 2021-07-01