Faster subgradient methods for functions with Hölderian growth
Mathematical Programming (IF 2.7) Pub Date: 2019-01-07, DOI: 10.1007/s10107-018-01361-0
Patrick R. Johnstone , Pierre Moulin

The purpose of this manuscript is to derive new convergence results for several subgradient methods applied to minimizing nonsmooth convex functions with Hölderian growth. The growth condition is satisfied in many applications and includes functions with quadratic growth and weakly sharp minima as special cases. To this end there are three main contributions. First, for a constant and sufficiently small stepsize, we show that the subgradient method achieves linear convergence up to a certain region including the optimal set, with error of the order of the stepsize. Second, if appropriate problem parameters are known, we derive a decaying stepsize which obtains a much faster convergence rate than is suggested by the classical $$O(1/\sqrt{k})$$ result for the subgradient method. Third, we develop a novel “descending stairs” stepsize which obtains this faster convergence rate and also obtains linear convergence for the special case of weakly sharp functions. We also develop an adaptive variant of the “descending stairs” stepsize which achieves the same convergence rate without requiring knowledge of an error-bound constant that is difficult to estimate in practice.
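For context, Hölderian growth (also called a Hölder error bound) is typically stated as follows; this is the standard form of the condition, and the paper's precise quantifiers and domain restrictions may differ:

$$f(x) - f^* \ge \mu \, d(x, X^*)^p \quad \text{for all } x \in \mathcal{X},$$

where $$f^*$$ is the optimal value, $$X^*$$ the set of minimizers, $$d(x, X^*)$$ the distance from $$x$$ to that set, $$\mu > 0$$ an error-bound constant, and $$p \ge 1$$ the growth exponent. Taking $$p = 2$$ recovers quadratic growth, and $$p = 1$$ recovers weakly sharp minima.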
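To make the stepsize schemes concrete, here is a minimal Python sketch of the subgradient method on a weakly sharp test problem ($$f(x) = \Vert x - c \Vert_1$$), comparing a constant stepsize with an illustrative "descending stairs" schedule that holds the stepsize fixed for a stage and then decays it geometrically. The stage length and decay factor below are placeholder constants, not the parameter-dependent choices derived in the paper.

import numpy as np

def subgradient_method(f, subgrad, x0, stepsize, iters):
    """Run the subgradient method; stepsize is a callable k -> t_k."""
    x = x0.copy()
    best = f(x)
    for k in range(iters):
        x = x - stepsize(k) * subgrad(x)
        best = min(best, f(x))  # track the best value seen so far
    return x, best

def descending_stairs(t0, stage_len, factor=0.5):
    """Piecewise-constant stepsize: hold t0 for stage_len iterations, then shrink it."""
    return lambda k: t0 * factor ** (k // stage_len)

# Weakly sharp test problem: f(x) = ||x - c||_1, minimized at x = c with f* = 0.
rng = np.random.default_rng(0)
c = rng.standard_normal(50)
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)  # a valid subgradient of the l1 distance

x0 = np.zeros(50)
_, best_const = subgradient_method(f, subgrad, x0, lambda k: 1e-3, 5000)
_, best_stairs = subgradient_method(f, subgrad, x0, descending_stairs(0.5, 500), 5000)
print(f"constant stepsize:  best f = {best_const:.2e}")
print(f"descending stairs:  best f = {best_stairs:.2e}")

With the constant stepsize, the iterates contract until they reach a neighborhood of $$X^*$$ whose radius scales with the stepsize, matching the first contribution above; the staged schedule keeps shrinking that neighborhood, which is the idea behind the "descending stairs" rates.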

Updated: 2019-01-07