Lower Memory Oblivious (Tensor) Subspace Embeddings with Fewer Random Bits: Modewise Methods for Least Squares
SIAM Journal on Matrix Analysis and Applications (IF 1.5), Pub Date: 2021-03-18, DOI: 10.1137/19m1308116
Mark A. Iwen , Deanna Needell , Elizaveta Rebrova , Ali Zare

SIAM Journal on Matrix Analysis and Applications, Volume 42, Issue 1, Page 376-416, January 2021.
In this paper new general modewise Johnson--Lindenstrauss (JL) subspace embeddings are proposed that can be both generated much faster and stored more easily than traditional JL embeddings when working with extremely large vectors and/or tensors. Corresponding embedding results are then proven for two different types of low-dimensional (tensor) subspaces. The first of these new subspace embedding results produces improved space complexity bounds for embeddings of rank-$r$ tensors whose CP decompositions are contained in the span of a fixed (but unknown) set of $r$ rank-$1$ basis tensors. In the traditional vector setting this first result yields new and very general near-optimal oblivious subspace embedding constructions that require fewer random bits to generate than standard JL embeddings when embedding subspaces of $\mathbb{C}^N$ spanned by basis vectors with special Kronecker structure. The second result proven herein provides new fast JL embeddings of arbitrary $r$-dimensional subspaces $\mathcal{S} \subset \mathbb{C}^N$ which also require fewer random bits (and so are easier to store, i.e., require less space) than standard fast JL embedding methods in order to achieve small $\epsilon$-distortions. These new oblivious subspace embedding results work by (i) effectively folding any given vector in $\mathcal{S}$ into a (not necessarily low-rank) tensor, and then (ii) embedding the resulting tensor into $\mathbb{C}^m$ for $m \leq C r \log^c(N) / \epsilon^2$. Applications related to compression and fast compressed least squares solution methods are also considered, including those used for fitting low-rank CP decompositions, and the proposed JL embedding results are shown to work well numerically in both settings.
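The abstract's core construction — (i) folding a vector into a tensor, then (ii) embedding it mode by mode — can be illustrated with a small NumPy sketch. This is not the paper's exact construction (the paper uses structured maps with fewer random bits; plain Gaussian matrices, the dimensions, and the function name below are illustrative assumptions), but it shows why modewise maps need far less storage than a single dense JL matrix of the same overall size.

```python
import numpy as np

rng = np.random.default_rng(0)

def modewise_jl(x, dims, embed_dims):
    """Fold a length-N vector into a tensor of shape `dims`, then apply an
    independent Gaussian JL map along each mode. A sketch of the modewise
    strategy described above; plain Gaussian maps and these dimensions are
    illustrative assumptions, not the paper's construction."""
    T = x.reshape(dims)
    for mode, (n, m) in enumerate(zip(dims, embed_dims)):
        # Scaled Gaussian JL matrix for this mode (m x n).
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        # Mode-`mode` product: contract A's columns with that tensor axis,
        # then move the new embedded axis back into place.
        T = np.moveaxis(np.tensordot(A, T, axes=([1], [mode])), 0, mode)
    return T.reshape(-1)  # flatten back to a vector of length prod(embed_dims)

# Embed a vector of length N = 64*64 = 4096 into dimension m = 16*16 = 256.
x = rng.standard_normal(64 * 64)
y = modewise_jl(x, dims=(64, 64), embed_dims=(16, 16))
```

Storage comparison: the two modewise factors hold 2 × (16 × 64) = 2048 random entries, whereas a single dense 256 × 4096 JL matrix would need 1,048,576 — the memory and random-bit savings the abstract refers to, at the cost of a somewhat weaker distortion guarantee per mode.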


Updated: 2021-03-18