A Provably Convergent Scheme for Compressive Sensing Under Random Generative Priors
Journal of Fourier Analysis and Applications (IF 1.2). Pub Date: 2021-03-11. DOI: 10.1007/s00041-021-09830-5
Wen Huang, Paul Hand, Reinhard Heckel, Vladislav Voroninski

Deep generative modeling has led to new, state-of-the-art approaches for enforcing structural priors in a variety of inverse problems. In contrast to priors given by sparsity, deep models can provide direct low-dimensional parameterizations of the manifold of images or signals belonging to a particular natural class, allowing recovery algorithms to be posed in a low-dimensional space. This dimensionality may even be lower than the sparsity level of the same signals when viewed in a fixed basis. What is not known about these methods is whether there are computationally efficient algorithms whose sample complexity is optimal in the dimensionality of the representation given by the generative model. In this paper, we present such an algorithm and its analysis. Under the assumption that the generative model is a neural network that is sufficiently expansive at each layer and has Gaussian weights, we provide a gradient descent scheme and prove that, for noisy compressive measurements of a signal in the range of the model, the algorithm converges to that signal, up to the noise level. The scaling of the sample complexity with respect to the input dimensionality of the generative prior is linear, and thus cannot be improved except for constants and factors of other variables. To the best of the authors' knowledge, this is the first recovery guarantee for compressive sensing under generative priors by a computationally efficient algorithm.
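The setting described above can be illustrated with a minimal NumPy sketch: a two-layer expansive ReLU network with Gaussian weights serves as the generative prior G, compressive measurements y = A·G(z*) are taken, and plain gradient descent is run on the latent variable z to minimize ½‖A·G(z) − y‖². All dimensions, step sizes, and iteration counts below are illustrative choices, not the paper's; the sketch also omits the paper's specific descent scheme (e.g. any negation/restart step used in the analysis to escape spurious stationary points), so it is a simplified stand-in for the proved algorithm, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: latent dim k, expansive hidden/output layers, m measurements.
# The paper's result has m scaling linearly in k (up to constants and other factors).
k, n1, n, m = 10, 200, 500, 60

# Generative prior: two-layer ReLU network with Gaussian weights (as assumed in the paper).
W1 = rng.standard_normal((n1, k)) / np.sqrt(n1)
W2 = rng.standard_normal((n, n1)) / np.sqrt(n)
relu = lambda x: np.maximum(x, 0.0)
G = lambda z: relu(W2 @ relu(W1 @ z))

# Gaussian measurement matrix and noiseless measurements of a signal in the range of G.
A = rng.standard_normal((m, n)) / np.sqrt(m)
z_star = rng.standard_normal(k)
y = A @ G(z_star)

loss = lambda z: 0.5 * np.sum((A @ G(z) - y) ** 2)

def grad(z):
    """Gradient of z -> 0.5 * ||A G(z) - y||^2, via backprop through the ReLUs."""
    h1 = W1 @ z
    h2 = W2 @ relu(h1)
    r = A @ relu(h2) - y        # residual in measurement space
    g = A.T @ r
    g = (h2 > 0) * g            # backprop through second ReLU
    g = W2.T @ g
    g = (h1 > 0) * g            # backprop through first ReLU
    return W1.T @ g

# Plain gradient descent from a random initialization (step size chosen by hand).
z0 = rng.standard_normal(k)
z = z0.copy()
for _ in range(2000):
    z -= 0.05 * grad(z)

rel_err = np.linalg.norm(G(z) - G(z_star)) / np.linalg.norm(G(z_star))
```

In this toy run the measurement loss decreases substantially from its initial value; the paper's contribution is the proof that (with its specific scheme, expansivity, and Gaussian-weight assumptions) such iterates converge to the true signal up to the noise level.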



Updated: 2021-03-12