Distributed Graph Convolutional Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2020-07-13 , DOI: arxiv-2007.06281
Simone Scardapane, Indro Spinelli, Paolo Di Lorenzo

The aim of this work is to develop a fully distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents communicating over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
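To make the setup concrete, the sketch below shows the two ingredients the abstract combines: a single graph-convolution layer (the standard propagation rule H' = ReLU(Â H W), with Â the self-loop-augmented, symmetrically normalized adjacency) and one gossip-style averaging step over per-agent gradients. This is a minimal illustration of the general idea only, not the paper's algorithm; the graph, dimensions, and uniform mixing matrix are invented for the example.

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W),
    where A_hat is the normalized adjacency with self-loops."""
    return np.maximum(A_hat @ H @ W, 0.0)

def consensus_step(local_grads, mixing):
    """One gossip step: each agent replaces its gradient with a
    mix of its neighbors' gradients (mixing is doubly stochastic)."""
    stacked = np.stack(local_grads)              # (num_agents, ...)
    return np.tensordot(mixing, stacked, axes=1)

rng = np.random.default_rng(0)
n, d_in, d_out, num_agents = 6, 4, 3, 3          # toy sizes

# Random undirected data graph with self-loops, symmetrically normalized.
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
A_hat = A + np.eye(n)
deg = A_hat.sum(axis=1)
A_hat = A_hat / np.sqrt(np.outer(deg, deg))

# Forward pass through one layer.
H = rng.standard_normal((n, d_in))
W = rng.standard_normal((d_in, d_out))
out = gcn_layer(A_hat, H, W)

# Each agent holds a local gradient of the shared weights; with a
# uniform mixing matrix one step yields the exact network-wide average.
grads = [rng.standard_normal(W.shape) for _ in range(num_agents)]
mixing = np.full((num_agents, num_agents), 1.0 / num_agents)
mixed = consensus_step(grads, mixing)
```

In a real distributed deployment the mixing matrix would be sparse, reflecting the communication topology between agents, and several gossip steps (or a gradient-tracking scheme) would be needed to approach the average; the uniform matrix here just collapses that to a single step for illustration.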

Updated: 2020-07-14