A Closed Form Solution to Best Rank-1 Tensor Approximation via KL divergence Minimization
arXiv - CS - Machine Learning Pub Date : 2021-03-04 , DOI: arxiv-2103.02898 Kazu Ghalamkari, Mahito Sugiyama
Tensor decomposition is a fundamentally challenging problem. Even the
simplest case of tensor decomposition, the rank-1 approximation in terms of the
Least Squares (LS) error, is known to be NP-hard. Here, we show that, if we
consider the KL divergence instead of the LS error, we can analytically derive
a closed-form solution for the rank-1 tensor that minimizes the KL divergence
from a given positive tensor. Our key insight is to treat a positive tensor as
a probability distribution and formulate the process of rank-1 approximation as
a projection onto the set of rank-1 tensors. This enables us to solve rank-1
approximation by convex optimization. We empirically demonstrate that our
algorithm is an order of magnitude faster than existing rank-1 approximation
methods and gives a better approximation of given tensors, which supports our
theoretical finding.
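The abstract does not spell out the closed form, but the construction it describes — treating a normalized positive tensor as a probability distribution and projecting onto the rank-1 set — corresponds to taking the outer product of the tensor's mode-wise marginal distributions, rescaled to the tensor's total mass. A minimal NumPy sketch under that assumption (the function name `rank1_kl_approx` is illustrative, not from the paper):

```python
import numpy as np

def rank1_kl_approx(P):
    """Rank-1 approximation of a positive tensor P under KL divergence.

    Sketch of the closed form assumed above: normalize P to a probability
    distribution, take the outer product of its mode-wise marginals, and
    rescale by the total mass S so that the sums of P and the result agree.
    """
    S = P.sum()
    # marginal distribution along each mode d (sum out all other axes)
    marginals = []
    for d in range(P.ndim):
        other_axes = tuple(a for a in range(P.ndim) if a != d)
        marginals.append(P.sum(axis=other_axes) / S)
    # outer product of the marginals gives a rank-1 probability tensor
    out = marginals[0]
    for m in marginals[1:]:
        out = np.multiply.outer(out, m)
    return S * out
```

Because the result is an outer product of vectors, it is rank-1 by construction, and the rescaling preserves the total sum of the input tensor; no iterative optimization is needed, which is consistent with the speedup the abstract reports.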
Updated: 2021-03-05