Distributed Zero-Order Algorithms for Nonconvex Multiagent Optimization
IEEE Transactions on Control of Network Systems (IF 4.0), Pub Date: 2020-09-16, DOI: 10.1109/tcns.2020.3024321
Yujie Tang, Junshan Zhang, Na Li

Distributed multiagent optimization has many applications in distributed learning, control, and estimation. Most existing algorithms assume access to first-order (gradient) information about the objective and have been analyzed for convex problems. However, there are situations where the objective is nonconvex and one can only evaluate function values at finitely many points. In this article, we consider derivative-free distributed algorithms for nonconvex multiagent optimization, building on recent progress in zero-order optimization. We develop two algorithms for different settings, provide a detailed analysis of their convergence behavior, and compare them with existing centralized zero-order algorithms and gradient-based distributed algorithms.
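
For readers unfamiliar with zero-order methods, the sketch below illustrates the generic building block such algorithms rely on: a two-point gradient estimate formed from function evaluations along a random direction, combined with a consensus-plus-descent update over a multiagent network. This is a rough illustration under assumed choices (Gaussian probing directions, a complete-graph mixing matrix, and the step size and smoothing radius shown), not the specific algorithms developed and analyzed in the paper.

import numpy as np

def zo_gradient(f, x, mu=1e-3, rng=None):
    # Two-point zero-order gradient estimate: probe f along a random
    # Gaussian direction u and scale the finite difference by u.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def distributed_zo_step(local_fs, X, W, eta, mu, rng):
    # One round of a consensus + zero-order descent update:
    # each agent averages its iterate with its neighbors (its row of W)
    # and steps along its own zero-order gradient estimate.
    G = np.stack([zo_gradient(f, x, mu, rng) for f, x in zip(local_fs, X)])
    return W @ X - eta * G

# Toy usage: four agents cooperatively minimize a nonconvex sum of
# local objectives using only function evaluations.
rng = np.random.default_rng(0)
n_agents, dim = 4, 3
A = [rng.standard_normal((dim, dim)) for _ in range(n_agents)]
local_fs = [lambda x, Ai=Ai: float(np.sin(x @ Ai @ x)) + 0.1 * float(x @ x)
            for Ai in A]
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # complete-graph averaging (illustrative)
X = rng.standard_normal((n_agents, dim))
for _ in range(500):
    X = distributed_zo_step(local_fs, X, W, eta=0.05, mu=1e-3, rng=rng)
print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))

The constant mixing matrix and fixed step size here are purely illustrative; the paper develops its own algorithm variants for different settings and characterizes their convergence for nonconvex objectives.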

Updated: 2020-09-16