A Unified Alternating Direction Method of Multipliers by Majorization Minimization
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8). Pub Date: 2017-03-31. DOI: 10.1109/tpami.2017.2689021
Canyi Lu, Jiashi Feng, Shuicheng Yan, Zhouchen Lin

With the rising popularity of compressed sensing, the Alternating Direction Method of Multipliers (ADMM) has become the most widely used solver for linearly constrained convex problems with separable objectives. In this work, we observe that many existing ADMMs update the primal variables by minimizing different majorant functions, with their convergence proofs given case by case. Inspired by the principle of majorization minimization, we present unified frameworks for Gauss-Seidel ADMMs and Jacobian ADMMs, which differ in the historical information used for the current update. Our frameworks generalize previous ADMMs to problems with non-separable objectives. We also show that ADMMs converge faster when the majorant function used is tighter. We then propose the Mixed Gauss-Seidel and Jacobian ADMM (M-ADMM), which alleviates the slow convergence of Jacobian ADMMs by absorbing the merits of Gauss-Seidel ADMMs. M-ADMM can be further improved by backtracking and judicious variable partitioning. We also propose the Proximal Gauss-Seidel ADMM, a Gauss-Seidel-type method for multi-block problems, which converges for non-strongly convex objectives. Experiments on both synthetic and real-world data demonstrate the superiority of our new ADMMs. Finally, we release a toolbox implementing efficient ADMMs for many problems in compressed sensing.
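For context, below is a minimal LaTeX sketch of the standard two-block ADMM iteration that the unified frameworks above build on; the notation (f, g, A, B, c, penalty beta, multiplier y) is conventional and not taken from the paper itself.

\documentclass{article}
\usepackage{amsmath}
\DeclareMathOperator*{\argmin}{arg\,min}
\begin{document}
% Standard two-block ADMM for  min_{x,z} f(x) + g(z)  s.t.  Ax + Bz = c,
% with penalty parameter beta > 0 and Lagrange multiplier y:
\begin{align*}
  x^{k+1} &= \argmin_{x}\; f(x) + \tfrac{\beta}{2}\bigl\|Ax + Bz^{k} - c + y^{k}/\beta\bigr\|_2^2,\\
  z^{k+1} &= \argmin_{z}\; g(z) + \tfrac{\beta}{2}\bigl\|Ax^{k+1} + Bz - c + y^{k}/\beta\bigr\|_2^2,\\
  y^{k+1} &= y^{k} + \beta\bigl(Ax^{k+1} + Bz^{k+1} - c\bigr).
\end{align*}
\end{document}

Gauss-Seidel-type schemes use the freshly computed x^{k+1} in the z-update, as above, whereas Jacobian-type schemes substitute x^{k} so that all blocks can be updated in parallel; this is the "different historical information" the abstract refers to. The majorization-minimization view then replaces each subproblem's objective with a majorant (upper-bounding) surrogate, and tighter majorants yield faster convergence.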
