Asymptotic optimality in stochastic optimization
Annals of Statistics (IF 4.5), Pub Date: 2021-01-29, DOI: 10.1214/19-AOS1831
John C. Duchi, Feng Ruan

We study local complexity measures for stochastic convex optimization problems, providing a local minimax theory analogous to that of Hájek and Le Cam for classical statistical problems. We give complementary optimality results, developing fully online methods that adaptively achieve optimal convergence guarantees. Our results provide function-specific lower bounds and convergence results that make precise a correspondence between statistical difficulty and the geometric notion of tilt-stability from optimization. As part of this development, we show how variants of Nesterov’s dual averaging—a stochastic gradient-based procedure—guarantee finite time identification of constraints in optimization problems, while stochastic gradient procedures fail. Additionally, we highlight a gap between problems with linear and nonlinear constraints: standard stochastic-gradient-based procedures are suboptimal even for the simplest nonlinear constraints, necessitating the development of asymptotically optimal Riemannian stochastic gradient methods.
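
For context, the classical benchmark that the paper's local minimax theory parallels is the Hájek–Le Cam local asymptotic minimax theorem. A standard informal statement (the notation here is ours, not the paper's): for a regular parametric model with Fisher information I(θ₀) and a symmetric, bowl-shaped loss L, every estimator sequence satisfies

```latex
% Classical Hajek--Le Cam local asymptotic minimax bound (informal statement,
% added for context; notation is ours, not the paper's).
\liminf_{c \to \infty}\; \liminf_{n \to \infty}\;
  \sup_{\|\theta - \theta_0\| \le c/\sqrt{n}}
  \mathbb{E}_{\theta}\!\left[ L\bigl(\sqrt{n}\,(\hat{\theta}_n - \theta)\bigr) \right]
  \;\ge\; \mathbb{E}\bigl[ L(Z) \bigr],
  \qquad Z \sim \mathcal{N}\bigl(0,\; I(\theta_0)^{-1}\bigr).
```

The paper's function-specific lower bounds play the analogous role for stochastic convex programs, with the local geometry of the objective and constraints (tilt-stability) standing in for the inverse Fisher information.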

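To make the contrast between dual averaging and stochastic gradient methods concrete, here is a minimal sketch in Python. It is not the paper's experiment: the test problem, step sizes, and constants are illustrative assumptions. Regularized dual averaging thresholds against the running average of all past gradients, so a coordinate whose average gradient stays bounded away from zero is set to exactly zero after finitely many iterations; projected SGD thresholds a noisy single-step update, so its iterates keep bouncing off the constraint boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem (not from the paper): minimize
#   f(x) = E[ 0.5 * ||x - xi||^2 ]   subject to   x >= 0,
# with xi ~ N(mu, I). The solution is x* = max(mu, 0), so coordinates
# with mu_i < 0 sit exactly on the constraint boundary.
mu = np.array([-1.0, -0.5, 0.75])
dim, steps, alpha = mu.size, 2000, 0.5

def stoch_grad(x):
    """Stochastic gradient of f at x: grad = x - xi, with xi ~ N(mu, I)."""
    return x - (mu + rng.standard_normal(dim))

# --- Regularized dual averaging (Nesterov / Xiao style) ---
# x_{k+1} = argmin_{x >= 0} <avg_grad, x> + ||x||^2 / (2 * alpha * sqrt(k))
#         = max(0, -alpha * sqrt(k) * avg_grad)   (closed form over x >= 0)
x_da, grad_sum = np.zeros(dim), np.zeros(dim)
for k in range(1, steps + 1):
    grad_sum += stoch_grad(x_da)
    x_da = np.maximum(0.0, -alpha * np.sqrt(k) * grad_sum / k)

# --- Projected stochastic gradient descent, step size alpha / sqrt(k) ---
x_sgd = np.zeros(dim)
for k in range(1, steps + 1):
    x_sgd = np.maximum(0.0, x_sgd - (alpha / np.sqrt(k)) * stoch_grad(x_sgd))

print("optimum        :", np.maximum(mu, 0.0))
print("dual averaging :", x_da)   # constrained coords land exactly on 0
print("projected SGD  :", x_sgd)  # constrained coords bounce near 0
```

On a typical run, the two constrained coordinates of the dual-averaging iterate lock onto 0.0 exactly, while the projected-SGD iterate only hovers near the boundary; this is the finite-time constraint-identification gap the abstract describes.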
Updated: 2021-01-29