Parallel and Communication Avoiding Least Angle Regression
SIAM Journal on Scientific Computing (IF 3.0), Pub Date: 2021-03-15, DOI: 10.1137/19m1305720
Swapnil Das , James Demmel , Kimon Fountoulakis , Laura Grigori , Michael W. Mahoney , Shenghao Yang

SIAM Journal on Scientific Computing, Volume 43, Issue 2, Page C154-C176, January 2021.
We are interested in parallelizing the least angle regression (LARS) algorithm for fitting linear regression models to high-dimensional data. We consider two parallel and communication avoiding versions of the basic LARS algorithm. The two algorithms have different asymptotic costs and practical performance. One offers more speedup and the other produces more accurate output. The first is bLARS, a block version of the LARS algorithm, where we update $b$ columns at each iteration. Assuming that the data are row-partitioned, bLARS reduces the number of arithmetic operations, latency, and bandwidth by a factor of $b$. The second is tournament-bLARS (T-bLARS), a tournament version of LARS where processors compete by running several LARS computations in parallel to choose $b$ new columns to be added to the solution. Assuming that the data are column-partitioned, T-bLARS reduces latency by a factor of $b$. Similarly to LARS, our proposed methods generate a sequence of linear models. We present extensive numerical experiments that illustrate speedups up to 4x compared to LARS without any compromise in solution quality.

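The tournament-selection idea behind T-bLARS can be sketched in a few lines. The Python snippet below is only an illustration, not the authors' implementation: plain correlation-with-residual ranking stands in for the local LARS runs on each column block, and the outer loop refits least squares on the active set rather than performing the equiangular LARS update, so it behaves more like a block-greedy (OMP-style) procedure. All names (`tournament_select`, `block_greedy_fit`, `n_parts`) are illustrative assumptions.

```python
# Minimal sketch of tournament-style column selection (assumed names, not the paper's code).
import numpy as np

def tournament_select(X, residual, active, b, n_parts=4):
    """Pick b new columns via a two-round tournament.

    Round 1: each of the n_parts column blocks (one per processor in T-bLARS)
    nominates its b columns most correlated with the current residual.
    Round 2: the global top-b among all nominees are selected.
    Columns already in `active` are skipped.
    """
    n_features = X.shape[1]
    blocks = np.array_split(np.arange(n_features), n_parts)
    nominees = []
    for cols in blocks:
        corr = np.abs(X[:, cols].T @ residual)
        corr[np.isin(cols, list(active))] = -np.inf  # exclude active columns
        nominees.extend(cols[np.argsort(corr)[::-1][:b]].tolist())
    nominees = np.unique(np.array(nominees))
    corr = np.abs(X[:, nominees].T @ residual)
    corr[np.isin(nominees, list(active))] = -np.inf
    return nominees[np.argsort(corr)[::-1][:b]]

def block_greedy_fit(X, y, b=2, n_iters=5):
    """Block-greedy loop: select b columns per iteration by tournament, then
    refit least squares on the active set (a stand-in for the LARS update).
    Assumes n_iters * b does not exceed the number of columns of X."""
    active = []
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        residual = y - X @ beta
        new_cols = tournament_select(X, residual, set(active), b)
        active.extend(new_cols.tolist())
        coef, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        beta[:] = 0.0
        beta[active] = coef
    return beta, active
```

On a toy problem one would call `block_greedy_fit(X, y, b=2)`. In the column-partitioned setting described in the abstract, each block lives on its own processor and only the b local nominees need to be communicated per round; choosing b columns per round instead of one loosely mirrors why T-bLARS reduces latency by a factor of b.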

Last updated: 2021-03-16