Characterizing the Expressive Power of Invariant and Equivariant Graph Neural Networks
arXiv - CS - Discrete Mathematics. Pub Date: 2020-06-28, DOI: arxiv-2006.15646
Waïss Azizian, Marc Lelarge

Various classes of Graph Neural Networks (GNN) have been proposed and shown to be successful in a wide range of applications with graph structured data. In this paper, we propose a theoretical framework able to compare the expressive power of these GNN architectures. The current universality theorems only apply to intractable classes of GNNs. Here, we prove the first approximation guarantees for practical GNNs, paving the way for a better understanding of their generalization. Our theoretical results are proved for invariant GNNs computing a graph embedding (a permutation of the nodes of the input graph does not affect the output) and equivariant GNNs computing an embedding of the nodes (a permutation of the input permutes the output). We show that Folklore Graph Neural Networks (FGNN), which are tensor-based GNNs augmented with matrix multiplication, are the most expressive architectures proposed so far for a given tensor order. We illustrate our results on the Quadratic Assignment Problem (an NP-hard combinatorial problem) by showing that FGNNs are able to learn how to solve the problem, leading to much better average performance than existing algorithms (based on spectral, SDP, or other GNN architectures). On the practical side, we also implement masked tensors to handle batches of graphs of varying sizes.
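The two practical ideas named in the abstract, a permutation-invariant readout and masked tensors for batching graphs of different sizes, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation: the function names `batch_graphs` and `masked_mean_pool` are hypothetical, and the readout shown is a simple masked mean, which is one of many invariant poolings.

```python
import numpy as np

def batch_graphs(adjs):
    """Pad adjacency matrices to a common size and build node masks.

    adjs: list of (n_i, n_i) arrays. Returns a (B, N, N) padded batch
    and a (B, N) boolean mask marking the real (non-padding) nodes,
    where N is the largest graph size in the batch.
    """
    n_max = max(a.shape[0] for a in adjs)
    batch = np.zeros((len(adjs), n_max, n_max))
    mask = np.zeros((len(adjs), n_max), dtype=bool)
    for i, a in enumerate(adjs):
        n = a.shape[0]
        batch[i, :n, :n] = a
        mask[i, :n] = True
    return batch, mask

def masked_mean_pool(node_feats, mask):
    """Invariant readout: average node features over real nodes only.

    node_feats: (B, N, d) node embeddings; mask: (B, N) boolean.
    Padding positions are zeroed out and excluded from the average,
    so the result does not depend on the padded size N.
    """
    m = mask[..., None]                        # (B, N, 1)
    return (node_feats * m).sum(axis=1) / m.sum(axis=1)
```

Because the mean is symmetric in the nodes, permuting the rows of `node_feats` (together with the mask) leaves `masked_mean_pool` unchanged, which is exactly the invariance property the abstract requires of a graph embedding.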

Updated: 2020-06-30