A Proximal Alternating Direction Method of Multiplier for Linearly Constrained Nonconvex Minimization
SIAM Journal on Optimization (IF 3.1) Pub Date: 2020-08-13, DOI: 10.1137/19m1242276
Jiawei Zhang , Zhi-Quan Luo

SIAM Journal on Optimization, Volume 30, Issue 3, Page 2272-2302, January 2020.
Consider the minimization of a nonconvex differentiable function over a bounded polyhedron. A popular primal-dual first-order method for this problem is to perform a gradient projection iteration for the augmented Lagrangian function and then update the dual multiplier vector using the constraint residual. However, numerical examples show that this approach can exhibit “oscillation” and may not converge. In this paper, we propose a proximal alternating direction method of multipliers for the multiblock version of this problem. A distinctive feature of this method is the introduction of a “smoothed” (i.e., exponentially weighted) sequence of primal iterates and the inclusion, at each iteration, of a quadratic proximal term, centered at the current smoothed primal iterate, in the augmented Lagrangian function. The resulting proximal augmented Lagrangian function is inexactly minimized (via a gradient projection step) at each iteration, while the dual multiplier vector is updated using the residual of the linear constraints. When the primal and dual stepsizes are chosen sufficiently small, we show that suitable “smoothing” can stabilize the “oscillation,” and the iterates of the new proximal ADMM algorithm converge to a stationary point under some mild regularity conditions. The iteration complexity of our algorithm for finding an $\epsilon$-stationary solution is $\mathcal{O}(1/\epsilon^2)$, which improves the best known complexity of $\mathcal{O}(1/\epsilon^3)$ for the problem under consideration. Furthermore, when the objective function is quadratic, we establish the linear convergence of the algorithm. Our proof is based on a new potential function and a novel use of error bounds.
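The iteration described in the abstract (a gradient projection step on a proximal augmented Lagrangian, a dual update driven by the constraint residual, and exponential smoothing of the primal iterates) can be sketched as follows. This is a single-block illustrative sketch, not the paper's multiblock algorithm: the box constraint standing in for a general bounded polyhedron, the toy objective, and all stepsize/parameter values (`alpha`, `beta`, `rho`, `p`, `gamma`) are assumptions for demonstration, not the prescribed choices from the paper.

```python
import numpy as np

def proximal_admm_sketch(grad_f, A, b, lo, hi, x0,
                         alpha=0.01, beta=0.05, rho=10.0,
                         p=1.0, gamma=0.1, iters=5000):
    """Sketch of a smoothed proximal ADMM-type iteration for
    minimize f(x)  s.t.  A x = b,  lo <= x <= hi  (box as a simple polyhedron).

    x : primal iterate, y : dual multiplier, z : smoothed primal iterate.
    All parameter values are illustrative, not the paper's prescribed rules."""
    x, z = x0.copy(), x0.copy()
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        r = A @ x - b
        # gradient of the proximal augmented Lagrangian
        #   f(x) + y^T(Ax-b) + (rho/2)||Ax-b||^2 + (p/2)||x-z||^2
        g = grad_f(x) + A.T @ (y + rho * r) + p * (x - z)
        x = np.clip(x - alpha * g, lo, hi)   # gradient projection step
        y = y + beta * (A @ x - b)           # dual update from constraint residual
        z = z + gamma * (x - z)              # exponentially weighted smoothing
    return x, y

# Toy nonconvex instance: indefinite quadratic over {x in [0,1]^3 : sum(x) = 1}
Q = np.diag([1.0, -2.0, 0.5])               # indefinite => nonconvex objective
grad_f = lambda x: Q @ x                    # gradient of f(x) = 0.5 x^T Q x
A = np.ones((1, 3))
b = np.array([1.0])
x, y = proximal_admm_sketch(grad_f, A, b, 0.0, 1.0, np.full(3, 1.0 / 3.0))
print("x =", np.round(x, 3), " residual =", float(abs(A @ x - b)))
```

On this instance the iterates drift toward the vertex that loads the negative-curvature coordinate while the dual update drives the residual of the linear constraint toward zero; the smoothing term `p * (x - z)` is what damps the primal-dual oscillation the abstract refers to.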
