Distributed adaptive online learning for convex optimization with weight decay
Asian Journal of Control (IF 2.7), Pub Date: 2020-12-22, DOI: 10.1002/asjc.2489
Xiuyu Shen, Dequan Li, Runyue Fang, Yuejin Zhou, Xiongjun Wu

This paper investigates adaptive gradient-based online convex optimization over decentralized networks. The nodes of a network aim to track the minimizer of a global time-varying convex function, and the communication pattern among nodes is captured by a connected undirected graph. To tackle such optimization problems in a collaborative and distributed manner, a weight decay distributed adaptive online gradient algorithm, called WDDAOG, is first proposed, which combines distributed optimization methods with adaptive strategies. Our theoretical analysis then clarifies the difference between weight decay and L2 regularization for distributed adaptive gradient algorithms. The dynamic regret bound of the proposed algorithm is further analyzed: for convex functions, it grows on the order of O(√(nT)), where T and n denote the time horizon and the number of nodes in the network, respectively. Numerical experiments demonstrate that WDDAOG works well in practice and compares favorably with existing distributed online optimization schemes.
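The weight-decay-versus-L2 distinction the abstract highlights is the same one known from centralized Adam-type methods: an L2 penalty enters the gradient and is therefore rescaled by the adaptive step sizes, while decoupled weight decay shrinks the iterate directly. The sketch below is a minimal, illustrative Python example of one round of a distributed adaptive online update with decoupled weight decay, in the spirit of (but not identical to) WDDAOG; the complete-graph mixing matrix, the Adam-style moment estimates, the drifting quadratic losses, and all names (adaptive_step, wd, l2, etc.) are assumptions made for this example. "Dynamic regret" here means each node's accumulated loss measured against the sequence of per-round minimizers rather than a single fixed comparator.

```python
# Illustrative sketch, NOT the authors' reference implementation of WDDAOG.
# Shows: (i) a consensus/mixing step over a connected undirected graph,
# (ii) an Adam-style adaptive gradient step, and (iii) the difference between
# L2 regularization (inside the adaptive scaling) and decoupled weight decay
# (outside it). All names and constants are assumptions for this example.
import numpy as np

def adaptive_step(x, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8,
                  wd=0.0, l2=0.0):
    """One Adam-style adaptive update at a single node.

    l2: L2 regularization -- adds l2*x to the gradient, so the penalty is
        rescaled by the adaptive 1/sqrt(v) factor.
    wd: decoupled weight decay -- shrinks x directly, untouched by the
        adaptive scaling. This is the distinction the paper's analysis isolates.
    """
    g = g + l2 * x                        # L2 penalty enters the gradient ...
    m = b1 * m + (1 - b1) * g             # ... and hence the first and
    v = b2 * v + (1 - b2) * g**2          # second moment estimates.
    m_hat = m / (1 - b1**t)               # bias-corrected moments
    v_hat = v / (1 - b2**t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    x = x - lr * wd * x                   # decoupled weight decay, applied last
    return x, m, v

# Tiny simulation: n nodes tracking the minimizer of a time-varying loss.
rng = np.random.default_rng(0)
n, d, T = 4, 3, 50
W = np.full((n, n), 1.0 / n)              # doubly stochastic mixing matrix
X = rng.normal(size=(n, d))               # one local iterate per node
M, V = np.zeros((n, d)), np.zeros((n, d))

for t in range(1, T + 1):
    target = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 0.0])  # drifting minimizer
    X = W @ X                             # consensus: average neighbors' iterates
    for i in range(n):
        g = X[i] - target                 # gradient of f_t(x) = 0.5*||x - target||^2
        X[i], M[i], V[i] = adaptive_step(X[i], g, M[i], V[i], t, wd=1e-3)

print("final tracking error per node:", np.linalg.norm(X - target, axis=1))
```

Swapping wd=1e-3 for l2=1e-3 in the call moves the penalty from outside the adaptive rescaling to inside it; the two are equivalent for plain gradient descent but not for adaptive methods, which is why the paper treats them separately.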
