Online Supervised Learning with Distributed Features over Multiagent System
Complexity (IF 1.7), Pub Date: 2020-11-16, DOI: 10.1155/2020/8830359
Xibin An, Bing He, Chen Hu, Bingqi Liu

Most existing online distributed machine learning algorithms have been studied in a data-parallel architecture, in which the samples are partitioned among the agents of a network. We study online distributed machine learning from a different perspective, in which the features of the same samples are observed by multiple agents that wish to collaborate but do not exchange raw data with each other. We propose a distributed-feature online gradient descent algorithm and prove that the local solutions converge to the global minimizer at a sublinear rate. Our algorithm requires no exchange of raw data, or even of model parameters, between agents. First, we design an auxiliary variable that carries the information of the global features, and each agent estimates it by a dynamic consensus method. Then, the local parameters are updated by the online gradient descent method on the local data stream. Simulations illustrate the performance of the proposed algorithm.
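The scheme described in the abstract can be illustrated with a small simulation. The sketch below is an illustrative reconstruction, not the authors' exact algorithm: each agent holds one block of every sample's features, the agents average their scalar partial predictions over a doubly stochastic mixing matrix (a few static consensus rounds stand in here for the single-step dynamic consensus the abstract describes), and each agent then takes an online gradient step on its own parameter block under a least-squares loss. The dimensions, ring topology, and step-size schedule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, d_local, T = 4, 3, 2000
d = n_agents * d_local               # total feature dimension

# Ground-truth linear model; agent i only ever sees feature block i.
w_true = rng.normal(size=d)

# Doubly stochastic mixing matrix for a 4-agent ring (assumed topology).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

w = np.zeros((n_agents, d_local))    # local parameter blocks

for t in range(1, T + 1):
    x = rng.normal(size=d)           # one streaming sample
    target = x @ w_true              # noiseless label, for the sketch
    blocks = x.reshape(n_agents, d_local)

    # Auxiliary variable: each agent's partial prediction <x_i, w_i>.
    u = np.einsum("ij,ij->i", blocks, w)

    # A few consensus rounds give every agent an estimate of the average
    # partial prediction (a stand-in for the dynamic consensus update).
    y = u.copy()
    for _ in range(10):
        y = W @ y

    eta = 0.05 / np.sqrt(t)          # decaying step size -> sublinear regret
    for i in range(n_agents):
        residual = n_agents * y[i] - target   # estimate of the global error
        w[i] -= eta * residual * blocks[i]    # local online gradient step

err = np.linalg.norm(w.ravel() - w_true)
print(f"parameter error after {T} rounds: {err:.3f}")
```

Note that only the scalar partial predictions cross the network: no agent ever shares its raw feature block or its parameter block, matching the privacy property claimed in the abstract.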
