No Statistical-Computational Gap in Spiked Matrix Models with Generative Network Priors
Entropy (IF 2.1), Pub Date: 2021-01-16, DOI: 10.3390/e23010115
Jorio Cocola, Paul Hand, Vladislav Voroninski
We provide a non-asymptotic analysis of the spiked Wishart and Wigner matrix models with a generative neural network prior. Spiked random matrices have the form of a rank-one signal plus noise and have been used as models for high-dimensional Principal Component Analysis (PCA), community detection, and synchronization over groups. Depending on the prior imposed on the spike, these models can display a statistical-computational gap between the information-theoretically optimal reconstruction error achievable with unbounded computational resources and the sub-optimal performance of currently known polynomial-time algorithms. These gaps are believed to be fundamental, as in the emblematic case of Sparse PCA. In stark contrast to such cases, we show that there is no statistical-computational gap under a generative network prior, in which the spike lies on the range of a generative neural network. Specifically, we analyze a gradient descent method for minimizing a nonlinear least squares objective over the range of an expansive-Gaussian neural network, and show that it can recover in polynomial time an estimate of the underlying spike with rate-optimal sample complexity and dependence on the noise level.
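To make the setup concrete, the following is a minimal numerical sketch of the spiked Wigner model with a generative prior: a rank-one observation Y = G(z*)G(z*)ᵀ + noise, where G is a small expansive ReLU network with Gaussian weights, and descent on the nonlinear least squares loss over the range of G. All sizes, step-size choices, and the use of finite-difference gradients with backtracking are illustrative assumptions for this sketch; the paper analyzes a specific (sub)gradient scheme with its own guarantees, which this toy code does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): latent dim k, expansive layers k < n1 < n.
k, n1, n = 4, 20, 60
W1 = rng.normal(size=(n1, k)) / np.sqrt(k)    # Gaussian weights, matching the
W2 = rng.normal(size=(n, n1)) / np.sqrt(n1)   # expansive-Gaussian assumption

def G(z):
    # Two-layer expansive ReLU generator: G(z) = relu(W2 relu(W1 z)).
    return np.maximum(W2 @ np.maximum(W1 @ z, 0.0), 0.0)

# Spiked Wigner-style observation: rank-one signal plus symmetric Gaussian noise.
z_star = rng.normal(size=k)
spike = G(z_star)
noise = rng.normal(size=(n, n))
noise = (noise + noise.T) / np.sqrt(2.0 * n)
nu = 0.1                                      # noise level (illustrative)
Y = np.outer(spike, spike) + nu * noise

def loss(z):
    # Nonlinear least squares objective over the range of G.
    x = G(z)
    return 0.25 * np.linalg.norm(Y - np.outer(x, x)) ** 2

def num_grad(z, eps=1e-6):
    # Central finite differences; ReLU makes the loss only piecewise smooth.
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = eps
        g[i] = (loss(z + e) - loss(z - e)) / (2.0 * eps)
    return g

# Gradient descent from a random start, with backtracking so the loss never increases.
z = rng.normal(size=k)
loss0 = loss(z)
step = 1e-2
for _ in range(150):
    g = num_grad(z)
    while step > 1e-12 and loss(z - step * g) > loss(z):
        step *= 0.5
    if loss(z - step * g) > loss(z):
        break
    z = z - step * g
    step *= 2.0

rel_err = np.linalg.norm(G(z) - spike) / np.linalg.norm(spike)
```

On small instances like this, the loss decreases monotonically by construction; the paper's point is the much stronger statement that, under the expansive-Gaussian assumptions, such descent succeeds in polynomial time with rate-optimal sample complexity.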

Updated: 2021-01-16