Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
Journal of Global Optimization (IF 1.8) Pub Date: 2020-08-19, DOI: 10.1007/s10898-020-00943-7
Zhongming Wu, Chongshou Li, Min Li, Andrew Lim

This paper proposes an inertial Bregman proximal gradient method for minimizing the sum of two possibly nonconvex functions. The method incorporates two different inertial steps and adopts Bregman regularization in the subproblem. Under general parameter constraints, we prove subsequential convergence: every accumulation point of the generated sequence is a stationary point of the considered problem. To relax these parameter constraints, we further propose a nonmonotone line search strategy that makes the parameter selection more flexible, and we establish the subsequential convergence of the method with line search. When the line search is monotone, we prove the stronger global convergence and a linear convergence rate under the Kurdyka–Łojasiewicz framework. Moreover, numerical results on SCAD and MCP nonconvex penalty problems demonstrate the effectiveness and superiority of the proposed methods and the line search strategy.
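For intuition, the generic shape of such an iteration for minimizing f(x) + g(x), with f smooth and g possibly nonsmooth and nonconvex, can be sketched as follows. This is a reconstruction from the abstract's description only, not the paper's exact scheme; the extrapolation weights \alpha_k, \beta_k, the step size \lambda_k, and the kernel h are illustrative assumptions:

% Sketch of a two-step inertial Bregman proximal gradient iteration.
% \alpha_k, \beta_k, \lambda_k, and the kernel h are assumed names,
% not the paper's notation.
\begin{align*}
  y^k &= x^k + \alpha_k (x^k - x^{k-1}),
      && \text{inertial step for the gradient point} \\
  z^k &= x^k + \beta_k (x^k - x^{k-1}),
      && \text{inertial step for the proximal center} \\
  x^{k+1} &\in \operatorname*{arg\,min}_{x}
      \Big\{ g(x) + \langle \nabla f(y^k),\, x \rangle
      + \tfrac{1}{\lambda_k} D_h(x, z^k) \Big\},
\end{align*}

where D_h(x, z) = h(x) - h(z) - \langle \nabla h(z), x - z \rangle is the Bregman distance induced by a convex kernel h. Choosing h(x) = \tfrac{1}{2}\|x\|^2 makes D_h the squared Euclidean distance, so the subproblem reduces to the classical inertial proximal gradient update.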




Updated: 2020-08-19