Nearly Optimal First-Order Methods for Convex Optimization under Gradient Norm Measure: an Adaptive Regularization Approach
Journal of Optimization Theory and Applications (IF 1.6). Pub Date: 2021-01-27. DOI: 10.1007/s10957-020-01806-7
Masaru Ito, Mituhiro Fukuda

In the development of first-order methods for smooth (resp., composite) convex optimization problems, where smooth functions with Lipschitz continuous gradients are minimized, the gradient (resp., gradient mapping) norm becomes a fundamental optimality measure. Under this measure, a fixed-iteration algorithm with the optimal iteration complexity for the smooth case is known, but determining the number of iterations needed to reach a desired accuracy requires prior knowledge of the distance from the initial point to the optimal solution set. In this paper, we report an adaptive regularization approach, which attains the nearly optimal iteration complexity without knowing the distance to the optimal solution set. To obtain faster convergence adaptively, we then apply this approach to construct a first-order method that is adaptive to the Hölderian error bound condition (or, equivalently, the Łojasiewicz gradient property), which covers moderately wide classes of applications. The proposed method attains nearly optimal iteration complexity with respect to the gradient mapping norm.
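
For reference, one standard form of the Hölderian error bound condition (stated here as an assumption; the paper's exact convention may differ) asserts that there exist \kappa > 0 and \rho \ge 1 with

    F(x) - F^\star \;\ge\; \kappa \, \mathrm{dist}(x, X^\star)^{\rho} \quad \text{for all feasible } x,

where F^\star is the optimal value and X^\star the optimal solution set; \rho = 2 is the quadratic growth condition implied, e.g., by strong convexity. For convex F, combining this with F(x) - F^\star \le \|\nabla F(x)\| \, \mathrm{dist}(x, X^\star) yields the Łojasiewicz gradient inequality \|\nabla F(x)\| \ge \kappa^{1/\rho} \, (F(x) - F^\star)^{1 - 1/\rho}, which is the equivalence referred to in the abstract.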
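
The underlying regularization idea can be illustrated with a minimal Python sketch (an illustration of the classical guess-and-halve technique, not the authors' exact algorithm; sigma0 = 1 and the eps/2 inner tolerance are illustrative choices): to drive the gradient norm of an L-smooth convex F below eps, run an accelerated method on the regularized surrogate F(x) + (sigma/2)||x - x0||^2 and halve sigma whenever the unregularized gradient fails the accuracy test. Setting sigma directly would require the unknown distance dist(x0, X*); the adaptive loop removes that requirement.

import numpy as np

def accelerated_gradient(grad, x0, L, tol, max_iter=10000):
    """Nesterov's accelerated gradient method for an L-smooth convex
    function given by its gradient oracle; stops once ||grad|| <= tol."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) <= tol:
            break
        x_next = y - g / L                                # gradient step
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)    # momentum step
        x, t = x_next, t_next
    return y

def adaptive_regularization(grad, x0, L, eps, sigma0=1.0):
    """Guess-and-halve sketch: approximately minimize the surrogate
    F(x) + (sigma/2)||x - x0||^2, test the ORIGINAL gradient norm,
    and halve sigma on failure (illustrative parameter choices)."""
    sigma = sigma0
    while True:
        reg_grad = lambda x, s=sigma: grad(x) + s * (x - x0)
        x = accelerated_gradient(reg_grad, x0, L + sigma, eps / 2)
        if np.linalg.norm(grad(x)) <= eps:  # the optimality measure of interest
            return x
        sigma /= 2  # regularization too strong; relax and retry

# Toy usage: least squares, where grad F(x) = A^T (A x - b).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
grad_F = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2
x = adaptive_regularization(grad_F, np.zeros(10), L, eps=1e-6)
print("final gradient norm:", np.linalg.norm(grad_F(x)))

At an approximate surrogate minimizer, ||grad F(x)|| <= eps/2 + sigma * ||x - x0||, and ||x - x0|| stays bounded by roughly dist(x0, X*), so the test must pass once sigma falls below about eps / (2 dist(x0, X*)); only a logarithmic number of restarts is spent finding that scale, which is why no prior knowledge of the distance is needed.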

Updated: 2021-01-28