Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints
SIAM Journal on Optimization (IF 3.1) Pub Date: 2020-06-18, DOI: 10.1137/18m1229869
Yangyang Xu

SIAM Journal on Optimization, Volume 30, Issue 2, Pages 1664-1692, January 2020.
The stochastic gradient method (SGM) has been widely applied to solve optimization problems whose objective is stochastic or an average of many functions. Most existing works on SGMs assume that the underlying problem is unconstrained or has an easy-to-project constraint set. In this paper, we consider problems that have a stochastic objective and also many functional constraints. For such problems, it can be extremely expensive to project a point onto the feasible set, or even to compute the subgradients and/or function values of all constraint functions. To solve these problems, we propose a novel (adaptive) SGM based on the classical augmented Lagrangian function. In every iteration, it queries a stochastic subgradient of the objective, together with the subgradient and function value of one randomly sampled constraint function; hence, the per-iteration complexity is low. We establish its convergence rate for convex problems and for problems with a strongly convex objective: it achieves the optimal $O(1/\sqrt{k})$ convergence rate in the convex case and a nearly optimal $O\big((\log k)/k\big)$ rate in the strongly convex case. Numerical experiments on a sample approximation problem of robust portfolio selection and on quadratically constrained quadratic programming demonstrate its efficiency.
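For context, the classical augmented Lagrangian that the method builds on can be written, for $\min_x f(x)$ subject to $g_j(x) \le 0$, $j = 1, \dots, m$, in the standard Rockafellar form below; this is a textbook expression, and the paper's exact variant may differ in details:

$$
\mathcal{L}_\beta(x, z) \;=\; f(x) \;+\; \frac{1}{2\beta} \sum_{j=1}^{m} \Big( \big[z_j + \beta\, g_j(x)\big]_+^2 \;-\; z_j^2 \Big),
$$

where $z \in \mathbb{R}^m_+$ collects the multipliers, $\beta > 0$ is the penalty parameter, and $[\cdot]_+ = \max\{\cdot, 0\}$. Sampling one index $j$ uniformly and scaling its term by $m$ gives an unbiased estimate of the constraint sum, which is what keeps the per-iteration cost independent of the number of constraints.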
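A minimal sketch of one such primal-dual stochastic subgradient iteration follows, assuming hypothetical oracles `grad_f` (returning a stochastic subgradient of the objective) and `constraints[j]` (returning the value and a subgradient of $g_j$); the step sizes, scaling, and dual update here are illustrative choices consistent with the scheme described above, not the paper's exact adaptive rules:

```python
import numpy as np

def primal_dual_sgm(grad_f, constraints, x0, steps=10000,
                    alpha0=1.0, beta=1.0, proj=lambda x: x, rng=None):
    """Sketch of a primal-dual SGM for min f(x) s.t. g_j(x) <= 0, j = 1..m,
    touching only ONE randomly sampled constraint per iteration."""
    rng = rng or np.random.default_rng()
    m = len(constraints)
    x, z = x0.copy(), np.zeros(m)        # primal iterate, multipliers
    x_avg = np.zeros_like(x0)
    for k in range(1, steps + 1):
        alpha = alpha0 / np.sqrt(k)      # O(1/sqrt(k)) step size (convex case)
        j = rng.integers(m)              # one uniformly sampled constraint
        gj, dgj = constraints[j](x)      # value and a subgradient of g_j at x
        u = grad_f(x)                    # stochastic subgradient of f at x
        # Subgradient of the augmented-Lagrangian term for the sampled
        # constraint, scaled by m so the estimate of the full sum is unbiased.
        d = u + m * max(z[j] + beta * gj, 0.0) * dgj
        x = proj(x - alpha * d)          # projected primal step (X is "easy")
        z[j] = max(z[j] + alpha * m * gj, 0.0)  # dual step, kept nonnegative
        x_avg += (x - x_avg) / k         # ergodic average, standard for rates
    return x_avg, z
```

For instance, with a single linear constraint $a^\top x - b \le 0$ one would pass `constraints = [lambda x: (a @ x - b, a)]`; returning the ergodic average `x_avg` rather than the last iterate matches the usual way $O(1/\sqrt{k})$ rates are stated for convex SGM.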


Updated: 2020-07-23