Inexact stochastic mirror descent for two-stage nonlinear stochastic programs
Mathematical Programming ( IF 2.2 ) Pub Date : 2020-04-02 , DOI: 10.1007/s10107-020-01490-5
Vincent Guigues

We introduce an inexact variant of stochastic mirror descent (SMD), called inexact stochastic mirror descent (ISMD), to solve nonlinear two-stage stochastic programs in which the second stage problem has linear and nonlinear coupling constraints and a nonlinear objective function that depends on both first and second stage decisions. Given a candidate first stage solution and a realization of the second stage random vector, each iteration of ISMD combines a stochastic subgradient step using a prox-mapping with the computation of approximate (instead of exact, as in SMD) primal and dual second stage solutions. We provide two convergence analyses of ISMD, under two sets of assumptions. The first convergence analysis is based on the formulas for inexact cuts of value functions of convex optimization problems recently derived in Guigues (SIAM J. Optim. 30 (1), 407–438, 2020). The second convergence analysis provides a convergence rate (the same as for SMD) and relies on new formulas that we derive for inexact cuts of value functions of convex optimization problems, under the assumption that the dual function of the second stage problem is strongly concave for every fixed first stage solution and realization of the second stage random vector. We show that this strong concavity assumption is satisfied for some classes of problems, and we present the results of numerical experiments on two simple two-stage problems which show that approximately solving the second stage problem during the first iterations of ISMD can yield a good approximate first stage solution faster than SMD.
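To illustrate the core mechanism, here is a minimal sketch of an inexact mirror descent loop on a toy problem: minimizing the expectation of a simple quadratic over the probability simplex, with an entropic prox-mapping. This is not the paper's algorithm; the perturbed subgradient (with a tolerance that shrinks over iterations) merely stands in for solving the second stage problem approximately, and all names and the decreasing tolerance schedule `1/k` are illustrative assumptions.

```python
import numpy as np

def entropic_prox(x, g, step):
    # Mirror (prox) step with the entropy Bregman divergence:
    # x_{k+1} is proportional to x_k * exp(-step * g), so it stays on the simplex.
    y = x * np.exp(-step * g)
    return y / y.sum()

def approx_subgradient(x, xi, tol, rng):
    # Exact subgradient of f(x, xi) = 0.5*||x - xi||^2 is x - xi.
    # The bounded perturbation of norm tol stands in for the error made
    # by solving the second stage problem only approximately.
    g = x - xi
    noise = rng.uniform(-1.0, 1.0, size=x.shape)
    return g + tol * noise / max(np.linalg.norm(noise), 1e-12)

def ismd_sketch(dim=5, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(dim, 1.0 / dim)           # start at the simplex barycenter
    avg = np.zeros(dim)
    for k in range(1, iters + 1):
        xi = rng.dirichlet(np.ones(dim))  # sample the second stage randomness
        tol = 1.0 / k                     # loose early, tight later (the ISMD idea)
        g = approx_subgradient(x, xi, tol, rng)
        x = entropic_prox(x, g, step=1.0 / np.sqrt(k))
        avg += (x - avg) / k              # ergodic average, as in SMD analysis
    return avg

x_bar = ismd_sketch()
```

Because E[xi] is the uniform vector here, the ergodic average `x_bar` should land near the simplex barycenter; the point of the sketch is that a coarse tolerance in the early iterations does not prevent the averaged iterates from converging.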

Updated: 2020-04-02