Fully Decentralized Joint Learning of Personalized Models and Collaboration Graphs
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2019-01-24. arXiv:1901.08460
Valentina Zantedeschi, Aurélien Bellet, Marc Tommasi

We consider the fully decentralized machine learning scenario where many users with personal datasets collaborate to learn models through local peer-to-peer exchanges, without a central coordinator. We propose to train personalized models that leverage a collaboration graph describing the relationships between user personal tasks, which we learn jointly with the models. Our fully decentralized optimization procedure alternates between training nonlinear models given the graph in a greedy boosting manner, and updating the collaboration graph (with controlled sparsity) given the models. Throughout the process, users exchange messages only with a small number of peers (their direct neighbors when updating the models, and a few random users when updating the graph), ensuring that the procedure naturally scales with the number of users. Overall, our approach is communication-efficient and avoids exchanging personal data. We provide an extensive analysis of the convergence rate, memory and communication complexity of our approach, and demonstrate its benefits compared to competing techniques on synthetic and real datasets.
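
To make the alternating procedure concrete, the following is a minimal, centralized Python simulation sketch, not the paper's implementation: it substitutes a regularized least-squares gradient step for the paper's greedy boosting of nonlinear models, and a hypothetical similarity-based rule with top-k truncation for the sparse graph update. The parameters mu, lam, and top_k and both update rules are illustrative assumptions; only the alternation between a model step (using direct neighbors) and a graph step is taken from the abstract.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each user holds a small personal regression dataset.
n_users, n_features = 8, 5
datasets = [(rng.normal(size=(20, n_features)), rng.normal(size=20))
            for _ in range(n_users)]

# Personalized models (the paper trains nonlinear models by greedy boosting;
# a linear model keeps this sketch short).
models = np.zeros((n_users, n_features))

# Collaboration graph: non-negative weights, no self-loops, rows normalized.
W = np.ones((n_users, n_users)) - np.eye(n_users)
W /= W.sum(axis=1, keepdims=True)

mu, lam = 0.5, 1.0   # assumed trade-off and similarity-scale parameters
top_k = 3            # controlled sparsity: keep only the k strongest edges

def model_step(models, W):
    """One gradient step per user on the local loss plus attraction toward
    the weighted average of direct neighbors' models (graph held fixed)."""
    new_models = models.copy()
    for i, (X, y) in enumerate(datasets):
        neighbor_avg = W[i] @ models  # peer-to-peer exchange with neighbors
        grad = X.T @ (X @ models[i] - y) / len(y) + mu * (models[i] - neighbor_avg)
        new_models[i] = models[i] - 0.1 * grad
    return new_models

def graph_step(models):
    """Re-weight edges from pairwise model similarity (models held fixed),
    truncating to the top-k neighbors per user to control sparsity."""
    new_W = np.zeros((n_users, n_users))
    for i in range(n_users):
        dists = np.linalg.norm(models - models[i], axis=1)
        dists[i] = np.inf               # no self-loop
        weights = np.exp(-dists / lam)  # more similar => stronger edge
        keep = np.argsort(weights)[-top_k:]
        new_W[i, keep] = weights[keep]
        new_W[i] /= new_W[i].sum()
    return new_W

# Alternate between the two steps, as described in the abstract.
for _ in range(50):
    models = model_step(models, W)
    W = graph_step(models)

print("final graph row for user 0:", np.round(W[0], 3))

In the paper, the graph update is itself decentralized (each user contacts a few random peers); this sketch runs both steps centrally for brevity, so it illustrates only the structure of the alternating optimization, not its communication pattern.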

Updated: 2020-03-27