On Polynomial Time Methods for Exact Low-Rank Tensor Completion
Foundations of Computational Mathematics (IF 2.5) Pub Date: 2019-01-07, DOI: 10.1007/s10208-018-09408-6
Dong Xia , Ming Yuan

In this paper, we investigate the sample size requirement for exact recovery of a high-order tensor of low rank from a subset of its entries. We show that a gradient descent algorithm with initial value obtained from a spectral method can, in particular, reconstruct a \({d\times d\times d}\) tensor of multilinear ranks \((r,r,r)\) with high probability from as few as \(O(r^{7/2}d^{3/2}\log ^{7/2}d+r^7d\log ^6d)\) entries. In the case when the ranks \(r=O(1)\), our sample size requirement matches those for nuclear norm minimization (Yuan and Zhang in Found Comput Math 1031–1068, 2016), or alternating least squares assuming orthogonal decomposability (Jain and Oh in Advances in Neural Information Processing Systems, pp 1431–1439, 2014). Unlike these earlier approaches, however, our method is efficient to compute, is easy to implement, and does not impose extra structures on the tensor. Numerical results are presented to further demonstrate the merits of the proposed approach.
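The abstract only names the ingredients of the method, namely a spectral initialization followed by gradient descent refinement. As a rough illustration of that two-stage recipe (not the authors' implementation), the NumPy sketch below builds an HOSVD-style initialization from the rescaled zero-filled observations and then runs plain gradient descent on a Tucker factorization over the observed entries. The dimension d, rank r, sampling rate p, step size, and iteration count are all illustrative assumptions, not values from the paper.

```python
import numpy as np


def unfold(T, mode):
    """Mode-n matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)


def reconstruct(G, U):
    """Assemble the tensor from a Tucker core G and factors U = [U1, U2, U3]."""
    return np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2], optimize=True)


def spectral_init(Y, mask, p, r):
    """HOSVD-style initialization from the rescaled zero-filled observations."""
    Yhat = Y * mask / p  # unbiased (in expectation) estimate of the full tensor
    U = [np.linalg.svd(unfold(Yhat, m), full_matrices=False)[0][:, :r]
         for m in range(3)]
    G = np.einsum('ijk,ia,jb,kc->abc', Yhat, U[0], U[1], U[2], optimize=True)
    return G, U


def gradient_descent(Y, mask, G, U, p, lr=0.05, iters=1000):
    """Plain gradient descent on the squared error over observed entries."""
    for _ in range(iters):
        R = (reconstruct(G, U) - Y) * mask / p  # residual on observed entries
        gG = np.einsum('ijk,ia,jb,kc->abc', R, U[0], U[1], U[2], optimize=True)
        gU = [
            np.einsum('ijk,abc,jb,kc->ia', R, G, U[1], U[2], optimize=True),
            np.einsum('ijk,abc,ia,kc->jb', R, G, U[0], U[2], optimize=True),
            np.einsum('ijk,abc,ia,jb->kc', R, G, U[0], U[1], optimize=True),
        ]
        G = G - lr * gG
        U = [Um - lr * gm for Um, gm in zip(U, gU)]
    return G, U


if __name__ == '__main__':
    rng = np.random.default_rng(0)
    d, r, p = 40, 2, 0.5                # dimension, rank, sampling rate (illustrative)
    core = rng.standard_normal((r, r, r))
    facs = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(3)]
    T = reconstruct(core, facs)         # ground-truth low-rank tensor
    mask = rng.random((d, d, d)) < p    # uniformly sampled observed entries
    Y = T * mask
    G0, U0 = spectral_init(Y, mask, p, r)
    G, U = gradient_descent(Y, mask, G0, U0, p)
    err = np.linalg.norm(reconstruct(G, U) - T) / np.linalg.norm(T)
    print(f'relative recovery error: {err:.2e}')
```

The script prints the relative error on the full tensor; when the number of observed entries is large enough, the refinement stage drives this error toward zero, which is the exact-recovery phenomenon the abstract quantifies.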

Updated: 2019-01-07