An Adaptive Primal-Dual Subgradient Algorithm for Online Distributed Constrained Optimization
IEEE Transactions on Cybernetics (IF 9.4) Pub Date: 2017-10-05, DOI: 10.1109/tcyb.2017.2755720
Deming Yuan , Daniel W. C. Ho , Guo-Ping Jiang

In this paper, we consider the problem of solving distributed constrained optimization over a multiagent network that consists of multiple interacting nodes in an online setting, where the objective functions of the nodes are time-varying and the constraint set is characterized by an inequality. By introducing a regularized convex-concave function, we present a consensus-based adaptive primal-dual subgradient algorithm that removes the need to know the total number of iterations T in advance. We show that the proposed algorithm attains an O(T^(1/2+c)) regret bound [where c ∈ (0, 1/2)] and an O(T^(1-c/2)) bound on the violation of constraints; in addition, we show an improvement to an O(T^c) regret bound when the objective functions are strongly convex. The proposed algorithm allows a novel tradeoff between the regret and the violation of constraints. Finally, a numerical example is provided to illustrate the effectiveness of the algorithm.
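The primal-dual mechanism the abstract describes can be sketched numerically. The following Python snippet is an illustrative approximation, not the authors' exact algorithm: it runs a consensus-based primal-dual subgradient update on a hypothetical problem — time-varying quadratic local losses f_{i,t}(x) = (x − a_{i,t})² with the inequality constraint g(x) = x − 1 ≤ 0, over an assumed ring network of N nodes. The step size t^−(1/2+c) (which needs no knowledge of T), the mixing matrix W, and the regularized dual step are all assumptions chosen to mirror the abstract's description.

```python
import numpy as np

def online_primal_dual(T=2000, N=4, c=0.25):
    # Hypothetical ring topology: doubly stochastic mixing matrix W.
    W = np.zeros((N, N))
    for i in range(N):
        W[i, i] = 0.5
        W[i, (i + 1) % N] = 0.25
        W[i, (i - 1) % N] = 0.25

    x = np.zeros(N)    # primal iterate at each node
    lam = np.zeros(N)  # dual iterate at each node, for g(x) = x - 1 <= 0

    for t in range(1, T + 1):
        alpha = t ** -(0.5 + c)  # adaptive step: no dependence on the horizon T
        # Time-varying local losses f_{i,t}(x) = (x - a_{i,t})^2 (assumed data).
        a = 1.5 + 0.1 * np.sin(t / 10.0 + np.arange(N))

        x_mix = W @ x                      # consensus averaging with neighbors
        grad = 2.0 * (x_mix - a) + lam     # subgradient of the local Lagrangian
        x = np.clip(x_mix - alpha * grad, -2.0, 2.0)  # primal step, box projection

        g = x - 1.0
        # Dual ascent with an extra -alpha*lam term, mimicking the regularized
        # convex-concave (saddle) function that keeps the multipliers bounded.
        lam = np.maximum(lam + alpha * (g - alpha * lam), 0.0)

    return x, lam

x, lam = online_primal_dual()
```

Because the losses pull each node toward a_{i,t} ≈ 1.5 while the constraint penalizes x > 1, the multipliers grow until the iterates settle near the constraint boundary x ≈ 1, and the consensus step keeps the nodes close to one another.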

Updated: 2017-10-05