Constructive Universal High-Dimensional Distribution Generation through Deep ReLU Networks
arXiv - CS - Machine Learning Pub Date: 2020-06-30, DOI: arxiv-2006.16664. Dmytro Perekrestenko, Stephan Müller, Helmut Bölcskei
We present an explicit deep neural network construction that transforms
uniformly distributed one-dimensional noise into an arbitrarily close
approximation of any two-dimensional Lipschitz-continuous target distribution.
The key ingredient of our design is a generalization of the "space-filling"
property of sawtooth functions identified by Bailey & Telgarsky (2018). We
highlight the role of depth in our construction: it is the mechanism that
drives the Wasserstein distance between the target distribution and the
distribution realized by the network to zero. An extension to output
distributions of arbitrary dimension is outlined. Finally, we show that the
proposed construction incurs no cost, measured in Wasserstein distance,
relative to generating $d$-dimensional target distributions from $d$
independent random variables.
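The sawtooth mechanism behind the abstract can be sketched numerically. Below is a minimal illustration, not the paper's full construction: a piecewise-linear "hat" built from three ReLUs is composed with itself, which is exactly how depth produces a sawtooth with exponentially many teeth; pairing the input with the sawtooth output then spreads one-dimensional uniform noise over the unit square. The function names (`hat`, `sawtooth`) and the composition depth are illustrative choices, not notation from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # Piecewise-linear hat: 0 -> 0, 0.5 -> 1, 1 -> 0, realized with three ReLUs.
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def sawtooth(x, k):
    # k-fold composition of the hat yields a sawtooth with 2^(k-1) teeth on
    # [0, 1]; each composition adds one narrow ReLU layer, so tooth count
    # grows exponentially in depth but only linearly in parameters.
    for _ in range(k):
        x = hat(x)
    return x

# Space-filling behaviour: for u ~ Uniform[0, 1], the pairs (u, sawtooth(u, k))
# lie on 2^(k-1) steep line segments that, as k grows, approximate the uniform
# distribution on the unit square in Wasserstein distance.
u = np.linspace(0.0, 1.0, 4096)
pairs = np.stack([u, sawtooth(u, 6)], axis=1)
```

With `k = 6` the 4096 sample pairs already cover the square fairly evenly; increasing `k` (i.e., depth) tightens the Wasserstein approximation, which is the role of depth the abstract emphasizes.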
Updated: 2020-07-01