Conservative Stochastic Optimization With Expectation Constraints
IEEE Transactions on Signal Processing ( IF 4.6 ) Pub Date : 2021-05-21 , DOI: 10.1109/tsp.2021.3082467
Zeeshan Akhtar , Amrit Bedi , Ketan Rajawat

This paper considers stochastic convex optimization problems where the objective and constraint functions involve expectations with respect to the data indices or environmental variables, in addition to deterministic convex constraints on the domain of the variables. Since the underlying data distribution is unknown a priori, a closed-form solution is generally not available, and classical deterministic optimization paradigms are not applicable. State-of-the-art approaches, such as those using the saddle point framework, are able to ensure that the optimality gap as well as the constraint violation decay as $\mathcal{O}(T^{-1/2})$, where $T$ is the number of stochastic gradients. In this work, we propose a novel conservative stochastic optimization algorithm (CSOA) that achieves zero average constraint violation and an $\mathcal{O}(T^{-1/2})$ optimality gap. Further, we also consider the scenario where carrying out a projection step onto the convex domain constraints at every iteration is not viable. Traditionally, the projection operation can be avoided by considering the conditional gradient or Frank-Wolfe (FW) variant of the algorithm. The state-of-the-art stochastic FW variants achieve an optimality gap of $\mathcal{O}(T^{-1/3})$ after $T$ iterations, though these algorithms have not been applied to problems with functional expectation constraints. In this work, we propose the FW-CSOA algorithm that is not only projection-free but also achieves zero average constraint violation, with an $\mathcal{O}(T^{-1/4})$ decay of the optimality gap. The efficacy of the proposed algorithms is tested on two relevant problems: fair classification and structured matrix completion.
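To make the setting concrete, the sketch below shows a generic stochastic primal-dual method with a conservatively tightened expectation constraint, in the spirit of the conservative idea the abstract describes. This is not the paper's exact CSOA algorithm (the abstract does not give the update rules); the toy problem, step size, and tightening parameter are all illustrative assumptions.

```python
import numpy as np

# Toy problem (an assumption for illustration, not from the paper):
#   minimize  E[(x - a)^2]  over x in [-5, 5],  a ~ N(0, 1),
#   subject to the expectation constraint E[1 - x] <= 0  (i.e., x >= 1).
# The optimum is x* = 1 with the constraint active.

rng = np.random.default_rng(0)
T = 20000
eta = 1.0 / np.sqrt(T)   # step size chosen ~ O(T^{-1/2})
eps = 1.0 / np.sqrt(T)   # conservative tightening: enforce g(x) + eps <= 0

x, lam = 0.0, 0.0
xs = []
for t in range(T):
    a = rng.normal()                 # one stochastic data sample
    grad_f = 2.0 * (x - a)           # stochastic gradient of the objective
    grad_g = -1.0                    # gradient of g(x) = 1 - x
    # Primal descent on the Lagrangian, then projection onto [-5, 5]
    x = float(np.clip(x - eta * (grad_f + lam * grad_g), -5.0, 5.0))
    # Dual ascent on the *tightened* constraint g(x) + eps <= 0
    lam = max(0.0, lam + eta * ((1.0 - x) + eps))
    xs.append(x)

x_bar = float(np.mean(xs))                               # averaged iterate
avg_violation = max(0.0, float(np.mean(1.0 - np.array(xs))))
```

The tightening `eps` deliberately biases the iterates slightly into the feasible region, so that the time-averaged constraint violation is driven to zero rather than merely decaying, while the optimality gap still shrinks at the usual stochastic rate.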

Updated: 2021-05-21