Decomposition and discrete approximation methods for solving two-stage distributionally robust optimization problems
Computational Optimization and Applications (IF 1.6). Pub Date: 2020-11-04. DOI: 10.1007/s10589-020-00234-7
Yannan Chen , Hailin Sun , Huifu Xu

Decomposition methods have been well studied for solving two-stage and multi-stage stochastic programming problems; see Rockafellar and Wets (Math. Oper. Res. 16:119–147, 1991), Ruszczyński and Shapiro (Stochastic Programming, Handbook in OR & MS, North-Holland Publishing Company, Amsterdam, 2003) and Ruszczyński (Math. Program. 79:333–353, 1997). In this paper, we propose an algorithmic framework based on the fundamental ideas of these methods for solving two-stage minimax distributionally robust optimization (DRO) problems where the underlying random variables take a finite number of distinct values. This is achieved by introducing nonanticipativity constraints for the first-stage decision variables, rearranging the minimax problem through Lagrange decomposition, and applying the well-known primal-dual hybrid gradient (PDHG) method to the new minimax problem. The algorithmic framework does not depend on the specific structure of the ambiguity set. To extend the algorithm to the case where the underlying random variables are continuously distributed, we propose a discretization scheme and quantify the error arising from the discretization in terms of the optimal value and the optimal solutions when the ambiguity set is constructed through generalized prior moment conditions, the Kantorovich ball, and \(\phi\)-divergence centred at an empirical probability distribution. Some preliminary numerical tests show that the proposed decomposition algorithm, featuring parallel computing, performs well.
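To give a flavour of the PDHG iteration the abstract refers to, here is a minimal sketch on a toy saddle-point problem, the least-squares reformulation \(\min_x \max_y \langle Ax-b, y\rangle - \tfrac12\|y\|^2\). This is only an illustrative analogue of the primal-dual update the authors apply to their Lagrange-decomposed minimax DRO problem; the matrix `A`, vector `b`, and step sizes below are assumptions for the demo, not the paper's setup.

```python
import numpy as np

# Toy PDHG (Chambolle-Pock) sketch on min_x 0.5||Ax - b||^2, written as the
# saddle-point problem min_x max_y <Ax - b, y> - 0.5||y||^2.
# Illustrative analogue only; not the paper's two-stage DRO formulation.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

L = np.linalg.norm(A, 2)      # operator norm of A
tau = sigma = 0.9 / L         # step sizes satisfying tau * sigma * L^2 < 1

x = np.zeros(10)
y = np.zeros(30)
for _ in range(5000):
    x_new = x - tau * A.T @ y                        # primal step (prox of 0 is identity)
    x_bar = 2.0 * x_new - x                          # over-relaxation / extrapolation
    y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)  # dual prox of 0.5||y||^2 + <b, y>
    x = x_new

# Compare against the direct least-squares solution.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_star))  # should be small
```

The extrapolated point `x_bar` and the step-size condition `tau * sigma * ||A||^2 < 1` are the two ingredients that make plain PDHG converge; in the paper's setting the linear operator arises from the nonanticipativity constraints rather than a data matrix.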




Updated: 2020-11-06