Convergence rates of subgradient methods for quasi-convex optimization problems
Computational Optimization and Applications (IF 2.2) Pub Date: 2020-05-15, DOI: 10.1007/s10589-020-00194-y
Yaohua Hu , Jiawen Li , Carisa Kwok Wai Yu

Quasi-convex optimization plays a pivotal role in many fields, including economics and finance; the subgradient method is an effective iterative algorithm for solving large-scale quasi-convex optimization problems. In this paper, we investigate the quantitative convergence theory, including the iteration complexity and convergence rates, of various subgradient methods for solving quasi-convex optimization problems in a unified framework. In particular, we consider a sequence satisfying a general (inexact) basic inequality, and investigate the global convergence theorem and the iteration complexity under the constant, diminishing, or dynamic stepsize rules. More importantly, we establish the linear (or sublinear) convergence rates of the sequence under an additional assumption of weak sharp minima of Hölderian order and upper-bounded noise. These convergence theorems are applied to establish the iteration complexity and convergence rates of several subgradient methods, including the standard/inexact/conditional subgradient methods, for solving quasi-convex optimization problems under the assumptions of the Hölder condition and/or weak sharp minima of Hölderian order.
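To make the setting concrete, the following is a minimal sketch of a standard subgradient method with a diminishing stepsize rule, applied to a simple quasi-convex (non-convex) test function. The function f, the use of a unit-norm quasi-subgradient direction, and the specific stepsize 1/√(k+1) are illustrative assumptions for this sketch, not the paper's exact algorithm or analysis.

```python
import numpy as np

def f(x):
    # Quasi-convex but non-convex test function: sqrt of the Euclidean norm
    # (a nondecreasing transform of the convex norm, hence quasi-convex).
    return np.sqrt(np.linalg.norm(x))

def unit_quasi_subgradient(x):
    # For f(x) = sqrt(||x||), the normalized direction x/||x|| serves as a
    # unit quasi-subgradient (it separates x from the sublevel set).
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.zeros_like(x)

def subgradient_method(x0, num_iters=5000):
    # Standard subgradient iteration x_{k+1} = x_k - alpha_k * g_k with a
    # diminishing, nonsummable stepsize; track the best iterate, since the
    # objective values need not decrease monotonically.
    x = np.array(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(num_iters):
        g = unit_quasi_subgradient(x)
        alpha = 1.0 / np.sqrt(k + 1)  # diminishing stepsize rule (assumed)
        x = x - alpha * g
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

Running this from, e.g., x0 = [3.0, 4.0] drives the best objective value toward the minimum at the origin; the constant and dynamic stepsize rules discussed in the paper would replace only the `alpha` line.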

Updated: 2020-05-15