A Unified Convergence Analysis of Stochastic Bregman Proximal Gradient and Extragradient Methods
Journal of Optimization Theory and Applications ( IF 1.6 ) Pub Date : 2021-01-08 , DOI: 10.1007/s10957-020-01799-3
Xiantao Xiao

We consider a mini-batch stochastic Bregman proximal gradient method and a mini-batch stochastic Bregman proximal extragradient method for stochastic convex composite optimization problems. A simplified and unified convergence analysis framework is proposed to obtain almost sure convergence properties and expected convergence rates of the mini-batch stochastic Bregman proximal gradient method and its variants. This framework can also be used to analyze the convergence of the mini-batch stochastic Bregman proximal extragradient method, which has seldom been discussed in the literature. We point out that the standard uniformly bounded variance assumption and the usual Lipschitz gradient continuity assumption are not required in the analysis.
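To make the two methods concrete, here is a minimal sketch, not the paper's algorithm: a mini-batch stochastic Bregman proximal gradient step and its extragradient variant on a hypothetical simplex-constrained least-squares problem. The Bregman distance is taken from the negative entropy, so the proximal step reduces to an exponentiated-gradient update; the problem data (`A`, `b`), step size, and batch size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: minimize E[0.5 * ||A_i x - b_i||^2] over the
# probability simplex, with mini-batches drawn as rows of a fixed (A, b).
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = np.abs(rng.normal(size=d))
x_true /= x_true.sum()
b = A @ x_true + 0.01 * rng.normal(size=n)

def minibatch_grad(x, batch=20):
    # Unbiased mini-batch estimate of the full gradient A^T (A x - b) / n.
    idx = rng.integers(0, n, size=batch)
    Ab = A[idx]
    return Ab.T @ (Ab @ x - b[idx]) / batch

def bregman_step(x, g, alpha):
    # Bregman proximal step: argmin_y <g, y> + (1/alpha) D_h(y, x),
    # with h = negative entropy and the simplex constraint folded into
    # the composite term; this is the exponentiated-gradient update.
    y = x * np.exp(-alpha * g)
    return y / y.sum()

def spg(x0, alpha, iters):
    # Mini-batch stochastic Bregman proximal gradient method (sketch).
    x = x0.copy()
    for _ in range(iters):
        x = bregman_step(x, minibatch_grad(x), alpha)
    return x

def speg(x0, alpha, iters):
    # Extragradient variant: take a trial step, then re-evaluate the
    # stochastic gradient at the intermediate point before updating.
    x = x0.copy()
    for _ in range(iters):
        y = bregman_step(x, minibatch_grad(x), alpha)
        x = bregman_step(x, minibatch_grad(y), alpha)
    return x

obj = lambda x: 0.5 * np.mean((A @ x - b) ** 2)
x0 = np.full(d, 1.0 / d)
x_g = spg(x0, alpha=0.5, iters=500)
x_e = speg(x0, alpha=0.5, iters=500)
```

Note that the entropy kernel illustrates the point of the paper's setting: the objective need not have a globally Lipschitz gradient in the Euclidean sense for Bregman methods to apply, since smoothness is measured relative to the kernel `h`.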

Updated: 2021-01-08