The Expressive Power of Graph Neural Networks as a Query Language
ACM SIGMOD Record ( IF 1.1 ) Pub Date : 2020-12-10 , DOI: 10.1145/3442322.3442324
Pablo Barceló, Egor V. Kostylev, Mikaël Monet, Jorge Pérez, Juan L. Reutter, Juan-Pablo Silva

In this paper we survey our recent results characterizing various graph neural network (GNN) architectures in terms of their ability to classify nodes over graphs, for classifiers based on unary logical formulas, or queries. We focus on the language FOC2, a well-studied fragment of first-order logic (FO). This choice is motivated by the fact that FOC2 is related to the Weisfeiler-Lehman (WL) test for checking graph isomorphism, which has the same ability as GNNs for distinguishing nodes in graphs. We unveil the exact relationship between FOC2 and GNNs in terms of node classification. To tackle this problem, we start by studying a popular basic class of GNNs, which we call AC-GNNs, in which the features of each node in a graph are updated, in successive layers, according only to the features of its neighbors. We prove that the unary FOC2 formulas that can be captured by an AC-GNN are exactly those expressible in its guarded fragment, which in turn corresponds to graded modal logic. This result implies, in particular, that AC-GNNs are too weak to capture all FOC2 formulas. We then ask what needs to be added to AC-GNNs in order to capture all of FOC2. We show that it suffices to add readout layers, which allow updating the node features not only in terms of a node's neighbors but also in terms of a global attribute vector. We call GNNs with readouts ACR-GNNs. We also describe experiments that validate our findings, showing that on synthetic data conforming to FOC2 but not to graded modal logic, AC-GNNs struggle to fit the data, while ACR-GNNs generalize even to graphs of sizes not seen during training.
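The difference between the two architectures in the abstract can be made concrete with a minimal NumPy sketch. This is an illustrative toy implementation, not the paper's exact architecture: the function names, the sum aggregation, the sum readout, and the ReLU combine function are our assumptions. The AC layer updates a node only from itself and its neighbors (the "local" computation corresponding to graded modal logic), while the ACR layer additionally mixes in a graph-level readout vector, giving the global information needed for non-guarded FOC2 formulas.

```python
import numpy as np

def ac_gnn_layer(features, adj, W_self, W_neigh, bias):
    """One aggregate-combine (AC) layer: each node's new feature depends
    only on its own feature and an aggregate (here, a sum) of its
    neighbors' features."""
    agg = adj @ features  # sum of neighbor features per node
    return np.maximum(0.0, features @ W_self + agg @ W_neigh + bias)  # ReLU combine

def acr_gnn_layer(features, adj, W_self, W_neigh, W_read, bias):
    """An aggregate-combine-readout (ACR) layer: as above, but the update
    also sees a global attribute vector (here, the sum of all node
    features), so it can depend on the whole graph."""
    agg = adj @ features
    readout = features.sum(axis=0, keepdims=True)  # global attribute vector
    return np.maximum(
        0.0, features @ W_self + agg @ W_neigh + readout @ W_read + bias
    )
```

For example, a hypothetical non-guarded FOC2 property such as "x is red and the graph contains at least two blue nodes anywhere" quantifies over nodes that need not be neighbors of x; only the readout vector can carry that global count to every node.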
