Tensor Least Angle Regression for Sparse Representations of Multidimensional Signals
Neural Computation (IF 2.9), Pub Date: 2020-09-01, DOI: 10.1162/neco_a_01304
Ishan Wickramasingha, Ahmed Elrewainy, Michael Sobhy, Sherif S. Sherif
Sparse signal representations have gained much interest recently in both the signal processing and statistical communities. Compared to orthogonal matching pursuit (OMP) and basis pursuit, which solve the L0 and L1 constrained sparse least-squares problems, respectively, least angle regression (LARS) is a computationally efficient method that solves both problems for all critical values of the regularization parameter λ. However, none of these methods is suitable for solving large multidimensional sparse least-squares problems, as they would require extensive computational power and memory. An earlier generalization of OMP, known as Kronecker-OMP, was developed to solve the L0 problem for large multidimensional sparse least-squares problems. However, its memory usage and computation time increase quickly with the number of problem dimensions and iterations. In this letter, we develop a generalization of LARS, tensor least angle regression (T-LARS), that can efficiently solve either large L0 or large L1 constrained multidimensional sparse least-squares problems (underdetermined or overdetermined) for all critical values of the regularization parameter λ, with lower computational complexity and memory usage than Kronecker-OMP. To demonstrate the validity and performance of our T-LARS algorithm, we used it to successfully obtain different sparse representations of two relatively large 3D brain images, using fixed and learned separable overcomplete dictionaries, by solving both L0 and L1 constrained sparse least-squares problems. Our numerical experiments demonstrate that our T-LARS algorithm is significantly faster (46 to 70 times) than Kronecker-OMP in obtaining K-sparse solutions for multilinear least-squares problems. However, the K-sparse solutions obtained using Kronecker-OMP always have a slightly lower residual error (1.55% to 2.25%) than those obtained by T-LARS. Therefore, T-LARS could be an important tool for numerous multidimensional biomedical signal processing applications.
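The abstract's reference to "separable overcomplete dictionaries" points to the multilinear identity that methods like Kronecker-OMP and T-LARS exploit: a large Kronecker-structured dictionary never needs to be formed explicitly, because it can be applied mode by mode. A minimal NumPy sketch of that identity (illustrative only; this is not the T-LARS algorithm itself, and the matrix sizes are arbitrary):

```python
import numpy as np

# Identity exploited by tensor sparse-coding methods:
#   (A kron B) @ vec(X) == vec(B @ X @ A.T)
# with vec() stacking columns (Fortran order). The left side forms
# the full Kronecker dictionary; the right side never does.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # mode-2 dictionary factor
B = rng.standard_normal((5, 6))   # mode-1 dictionary factor
X = rng.standard_normal((6, 3))   # coefficient matrix

# Explicit Kronecker product: a 20 x 18 matrix applied to vec(X)
lhs = np.kron(A, B) @ X.flatten(order="F")

# Equivalent mode-wise computation, avoiding the large matrix
rhs = (B @ X @ A.T).flatten(order="F")

print(np.allclose(lhs, rhs))  # the two computations agree
```

For a D-dimensional signal with per-mode dictionaries of size n x m, the explicit Kronecker matrix grows as n^D x m^D, while the mode-wise form only ever touches the small factors, which is why separable structure makes these large problems tractable.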
