Dynamic stochastic approximation for multi-stage stochastic optimization
Mathematical Programming (IF 2.2) Pub Date: 2020-03-20, DOI: 10.1007/s10107-020-01489-y
Guanghui Lan, Zhiqiang Zhou

In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic approximation (DSA) algorithm, for solving these types of stochastic optimization problems. We show that DSA can achieve an optimal $\mathcal{O}(1/\epsilon^4)$ rate of convergence in terms of the total number of required scenarios when applied to a three-stage stochastic optimization problem. We further show that this rate of convergence can be improved to $\mathcal{O}(1/\epsilon^2)$ when the objective function is strongly convex. We also discuss variants of DSA for solving more general multi-stage stochastic optimization problems with the number of stages $T > 3$. The developed DSA algorithms only need to go through the scenario tree once in order to compute an $\epsilon$-solution of the multi-stage stochastic optimization problem. As a result, the memory required by DSA only grows linearly with respect to the number of stages. To the best of our knowledge, this is the first time that stochastic approximation type methods are generalized for multi-stage stochastic optimization with $T \ge 3$.
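The abstract gives no pseudocode, so the following is only a minimal, hypothetical Python sketch of the single-pass, sampling-based flavor described above: each freshly sampled scenario is visited once, and only per-stage iterates are kept in memory. The toy quadratic objectives, box constraints, stepsizes, and all names (`project_box`, `dsa_like_single_pass`) are assumptions for illustration; this does not reproduce the DSA algorithm of Lan and Zhou or its convergence guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_box(x, lo=-1.0, hi=1.0):
    """Projection onto an assumed box-shaped feasible set at each stage."""
    return np.clip(x, lo, hi)

def dsa_like_single_pass(n_samples=2000, dim=5, step0=1.0):
    """Single pass over freshly sampled scenarios for a toy three-stage problem.

    Each scenario is used exactly once, so memory is constant in the number of
    samples and (by analogy with the paper's claim) would grow only linearly
    with the number of stages.
    """
    x1 = np.zeros(dim)       # first-stage decision
    x2 = np.zeros(dim)       # running second-stage decision
    x3 = np.zeros(dim)       # running third-stage decision
    x1_avg = np.zeros(dim)   # ergodic average of first-stage iterates (typical SA output)
    for k in range(1, n_samples + 1):
        xi2 = rng.normal(size=dim)    # stage-2 randomness
        xi3 = rng.normal(size=dim)    # stage-3 randomness
        step = step0 / np.sqrt(k)     # O(1/sqrt(k)) stepsize, standard for convex SA
        # Stage-3 update: gradient of f3(x3, xi3) = 0.5*||x3 - xi3||^2
        x3 = project_box(x3 - step * (x3 - xi3))
        # Stage-2 update: gradient of f2(x2, xi2) = 0.5*||x2 - xi2 - x1||^2
        x2 = project_box(x2 - step * (x2 - xi2 - x1))
        # Stage-1 update: gradient of f1(x1) = 0.5*||x1||^2 plus the coupling
        # term from f2 with respect to x1
        g1 = x1 - (x2 - xi2 - x1)
        x1 = project_box(x1 - step * g1)
        x1_avg += (x1 - x1_avg) / k   # running average of first-stage iterates
    return x1_avg

if __name__ == "__main__":
    print("averaged first-stage decision:", dsa_like_single_pass())
```

The averaged first-stage iterate is returned because averaging is the standard way to state $\mathcal{O}(1/\sqrt{k})$-type guarantees for stochastic approximation on convex problems; the stepsize and averaging choices here are illustrative defaults, not the schedule analyzed in the paper.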
