Algorithms for stochastic optimization with function or expectation constraints
Computational Optimization and Applications ( IF 2.2 ) Pub Date : 2020-02-18 , DOI: 10.1007/s10589-020-00179-x
Guanghui Lan , Zhiqiang Zhou

This paper considers the problem of minimizing an expectation function over a closed convex set, coupled with a function or expectation constraint on either decision variables or problem parameters. We first present a new stochastic approximation (SA) type algorithm, namely the cooperative SA (CSA), to handle problems with the constraint on decision variables. We show that this algorithm exhibits the optimal \({{{\mathcal {O}}}}(1/\epsilon ^2)\) rate of convergence, in terms of both optimality gap and constraint violation, when the objective and constraint functions are generally convex, where \(\epsilon\) denotes the optimality gap and infeasibility. Moreover, we show that this rate of convergence can be improved to \({{{\mathcal {O}}}}(1/\epsilon )\) if the objective and constraint functions are strongly convex. We then present a variant of CSA, namely the cooperative stochastic parameter approximation (CSPA) algorithm, to deal with the situation when the constraint is defined over problem parameters, and show that it exhibits a similar optimal rate of convergence to CSA. It is worth noting that CSA and CSPA are primal methods which require neither iterations in the dual space nor estimation of the size of the dual variables. To the best of our knowledge, this is the first time that such optimal SA methods for solving function or expectation constrained stochastic optimization are presented in the literature.
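The cooperative idea described above can be illustrated with a minimal sketch: at each iteration, if an estimate of the constraint value exceeds a shrinking tolerance, step along a constraint subgradient to reduce infeasibility; otherwise take a stochastic gradient step on the objective, and output a step-size-weighted average of the "objective" iterates. This is only an illustrative toy in that spirit, not the paper's exact method; the problem instance, step-size and tolerance schedules, and all names (`csa`, `sgrad_f`, `g_est`, etc.) are assumptions made for the example.

```python
import math
import random

def csa(sgrad_f, sgrad_g, g_est, project, x0, num_steps, eta, gamma):
    """Cooperative-SA-style loop (sketch): if the constraint estimate
    exceeds the tolerance eta(k), take a subgradient step to reduce
    infeasibility; otherwise take a stochastic gradient step on the
    objective. Return the gamma-weighted average of objective iterates."""
    x = x0
    weighted, total = 0.0, 0.0
    for k in range(num_steps):
        if g_est(x) > eta(k):
            d = sgrad_g(x)          # constraint step: reduce infeasibility
        else:
            d = sgrad_f(x)          # objective step: reduce the objective
            weighted += gamma(k) * x
            total += gamma(k)
        x = project(x - gamma(k) * d)
    return weighted / total

# Toy instance: minimize E[(x - xi)^2], xi ~ N(0, 1), over x in [-5, 5],
# subject to g(x) = 1 - x <= 0; the constrained optimum is x* = 1.
random.seed(0)
x_bar = csa(
    sgrad_f=lambda x: 2.0 * (x - random.gauss(0.0, 1.0)),  # stochastic gradient
    sgrad_g=lambda x: -1.0,                                # subgradient of 1 - x
    g_est=lambda x: 1.0 - x,    # exact here; in general a stochastic estimate
    project=lambda x: max(-5.0, min(5.0, x)),              # clip to [-5, 5]
    x0=1.5,
    num_steps=20000,
    eta=lambda k: 0.5 / math.sqrt(k + 1),
    gamma=lambda k: 0.2 / math.sqrt(k + 1),
)
```

With the decreasing schedules above, the averaged iterate `x_bar` settles near the constrained optimum; note that the method never forms dual variables, consistent with the primal character of CSA highlighted in the abstract.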

Updated: 2020-02-18