Nonconvex Matrix Factorization From Rank-One Measurements
IEEE Transactions on Information Theory (IF 2.2) · Pub Date: 2021-01-10 · DOI: 10.1109/tit.2021.3050427
Yuanxin Li, Cong Ma, Yuxin Chen, Yuejie Chi

We consider the problem of recovering low-rank matrices from random rank-one measurements, which spans numerous applications including covariance sketching, phase retrieval, quantum state tomography, and learning shallow polynomial neural networks, among others. Our approach is to directly estimate the low-rank factor by minimizing a nonconvex least-squares loss function via vanilla gradient descent, following a tailored spectral initialization. When the true rank is bounded by a constant, this algorithm is guaranteed to converge to the ground truth (up to global ambiguity) with near-optimal sample complexity and computational complexity. To the best of our knowledge, this is the first guarantee that achieves near-optimality in both metrics. In particular, the key enabler of near-optimal computational guarantees is an implicit regularization phenomenon: without explicit regularization, both the spectral initialization and the gradient descent iterates automatically stay within a region incoherent with the measurement vectors. This feature allows one to employ much more aggressive step sizes than those suggested in prior literature, without the need for sample splitting.
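The two-stage procedure the abstract describes (spectral initialization followed by vanilla gradient descent on the nonconvex least-squares loss) can be sketched in a few lines of NumPy. The sketch below assumes i.i.d. Gaussian measurement vectors a_i with y_i = a_i^T M a_i and M = X* X*^T; under that assumption E[y_i a_i a_i^T] = 2M + tr(M) I and E[y_i] = tr(M), which motivates the debiasing in the initializer. The step size eta, iteration count, and problem sizes are illustrative choices, not the paper's tuned constants.

```python
import numpy as np

def spectral_init(A, y, r):
    """Spectral initialization for y_i = a_i^T M a_i, M = X* X*^T.

    For Gaussian a_i, E[y_i a_i a_i^T] = 2 M + tr(M) I and E[y_i] = tr(M),
    so (Y - mean(y) I) / 2 is a debiased surrogate for M.
    """
    n = A.shape[1]
    Y = (A.T * y) @ A / len(y)               # (1/m) sum_i y_i a_i a_i^T
    M_hat = (Y - y.mean() * np.eye(n)) / 2
    vals, vecs = np.linalg.eigh(M_hat)       # eigenvalues in ascending order
    top = np.argsort(vals)[-r:]              # top-r eigenpairs
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

def recover(A, y, r, eta=0.2, iters=300):
    """Vanilla GD on f(X) = (1/4m) sum_i (a_i^T X X^T a_i - y_i)^2."""
    m = len(y)
    X = spectral_init(A, y, r)
    step = eta / np.linalg.norm(X, 2) ** 2   # scale step by an estimate of sigma_1(M)
    for _ in range(iters):
        AX = A @ X                           # rows are a_i^T X, shape (m, r)
        resid = np.sum(AX ** 2, axis=1) - y  # a_i^T X X^T a_i - y_i
        grad = (A.T * resid) @ AX / m        # (1/m) sum_i resid_i a_i a_i^T X
        X -= step * grad
    return X

# Usage: recover a planted rank-2 factor (up to rotation ambiguity).
rng = np.random.default_rng(0)
n, r, m = 50, 2, 4000
X_star = rng.standard_normal((n, r))
A = rng.standard_normal((m, n))
y = np.sum((A @ X_star) ** 2, axis=1)        # y_i = ||X*^T a_i||^2
X_hat = recover(A, y, r)
M_star = X_star @ X_star.T
err = np.linalg.norm(X_hat @ X_hat.T - M_star) / np.linalg.norm(M_star)
print(f"relative error in M = X X^T: {err:.2e}")
```

Note that the error is measured on M = X X^T rather than on X itself, since X is only identifiable up to a global rotation of its columns; the step size is normalized by sigma_1(M), consistent with the aggressive constant-order step sizes the paper's implicit-regularization analysis permits.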

Updated: 2021-01-10