Optimized first-order methods for smooth convex minimization
Mathematical Programming (IF 2.2), Pub Date: 2015-10-17, DOI: 10.1007/s10107-015-0949-3
Donghwan Kim, Jeffrey A. Fessler

We introduce new optimized first-order methods for smooth unconstrained convex minimization. Drori and Teboulle (Math Program 145(1–2):451–482, 2014. doi:10.1007/s10107-013-0653-0) recently described a numerical method for computing the N-iteration optimal step coefficients in a class of first-order algorithms that includes gradient methods, heavy-ball methods (Polyak in USSR Comput Math Math Phys 4(5):1–17, 1964. doi:10.1016/0041-5553(64)90137-5), and Nesterov’s fast gradient methods (Nesterov in Sov Math Dokl 27(2):372–376, 1983; Math Program 103(1):127–152, 2005. doi:10.1007/s10107-004-0552-5). However, the numerical method in Drori and Teboulle (2014) is computationally expensive for large N, and the corresponding numerically optimized first-order algorithm in Drori and Teboulle (2014) requires impractical memory and computation for large-scale optimization problems. In this paper, we propose optimized first-order algorithms that achieve a convergence bound that is two times smaller than for Nesterov’s fast gradient methods; our bound is found analytically and refines the numerical bound in Drori and Teboulle (2014). Furthermore, the proposed optimized first-order methods have efficient forms that are remarkably similar to Nesterov’s fast gradient methods.
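The "efficient forms" mentioned above differ from Nesterov's fast gradient method mainly in their step coefficients. A minimal sketch in Python, assuming access to the gradient of an L-smooth convex objective (the θ-recursion below follows the paper's OGM; the toy objective f(x) = x²/2 is purely illustrative and not from the abstract):

```python
import math

def ogm(grad, L, x0, N):
    """Sketch of an optimized-gradient-method iteration.

    The theta-recursion follows the paper's OGM; grad is the gradient
    of an L-smooth convex objective, x0 the start point, N the number
    of iterations.
    """
    x = y = x0
    theta = 1.0
    for i in range(N):
        y_new = x - grad(x) / L  # usual gradient step with step size 1/L
        if i < N - 1:
            theta_new = (1 + math.sqrt(1 + 4 * theta ** 2)) / 2
        else:
            # the final iteration uses a larger factor (8 instead of 4)
            theta_new = (1 + math.sqrt(1 + 8 * theta ** 2)) / 2
        # two momentum terms, versus Nesterov's single (theta-1)/theta_new term
        x = (y_new
             + (theta - 1) / theta_new * (y_new - y)
             + theta / theta_new * (y_new - x))
        y, theta = y_new, theta_new
    return x

# Toy example (illustrative assumption): f(x) = x**2 / 2, so grad(x) = x and L = 1
x_final = ogm(lambda x: x, L=1.0, x0=10.0, N=30)
```

On this toy problem the iterates contract toward the minimizer x* = 0; the substance of the paper's result is that the worst-case bound of this scheme over all L-smooth convex functions is about half that of Nesterov's fast gradient method.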

Updated: 2015-10-17