Neural Execution of Graph Algorithms
arXiv - CS - Data Structures and Algorithms Pub Date : 2019-10-23 , DOI: arxiv-1910.10593
Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, Charles Blundell

Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs. In almost all cases so far, however, they have been applied to directly recovering a final solution from raw inputs, without explicit guidance on how to structure their problem-solving. Here, instead, we focus on learning in the space of algorithms: we train several state-of-the-art GNN architectures to imitate individual steps of classical graph algorithms, parallel (breadth-first search, Bellman-Ford) as well as sequential (Prim's algorithm). As graph algorithms usually rely on making discrete decisions within neighbourhoods, we hypothesise that maximisation-based message passing neural networks are best-suited for such objectives, and validate this claim empirically. We also demonstrate how learning in the space of algorithms can yield new opportunities for positive transfer between tasks---showing how learning a shortest-path algorithm can be substantially improved when simultaneously learning a reachability algorithm.
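To make the link between classical graph algorithms and maximisation-based message passing concrete, below is a minimal sketch (illustrative only, not the authors' code or architecture) of one Bellman-Ford relaxation written as a message-plus-aggregation update over node states. The min-aggregation used here for shortest paths is the mirror image of the max-aggregation the abstract refers to; the point is that a GNN trained to imitate the algorithm only needs to reproduce this per-step, per-neighbourhood computation. All names are hypothetical.

```python
# Sketch: one Bellman-Ford relaxation expressed as "messages along edges,
# extremum-aggregation at the receiving node". This is the per-step target a
# supervised GNN would be trained to imitate; it is not the paper's model code.
import numpy as np

def bellman_ford_step(dist, edges):
    """One relaxation: dist[v] <- min(dist[v], min over edges (u, v, w) of dist[u] + w)."""
    new_dist = dist.copy()
    for u, v, w in edges:                              # message sent along each edge
        new_dist[v] = min(new_dist[v], dist[u] + w)    # extremum-aggregation at the receiver
    return new_dist

# Toy graph: 4 nodes, weighted directed edges (u, v, w), source node 0.
edges = [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 2.0), (2, 3, 1.0)]
dist = np.array([0.0, np.inf, np.inf, np.inf])

for _ in range(3):                                     # |V| - 1 relaxation rounds
    dist = bellman_ford_step(dist, edges)
print(dist)                                            # expected: [0. 1. 3. 4.]
```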

Updated: 2020-01-16