GraphiT: Encoding Graph Structure in Transformers
arXiv - CS - Machine Learning. Pub Date: 2021-06-10, DOI: arxiv-2106.05667
Grégoire Mialon, Dexiong Chen, Margot Selosse, Julien Mairal

We show that viewing graphs as sets of node features and incorporating structural and positional information into a transformer architecture can yield representations that outperform those learned with classical graph neural networks (GNNs). Our model, GraphiT, encodes such information by (i) leveraging relative positional encoding strategies in self-attention scores, based on positive definite kernels on graphs, and (ii) enumerating and encoding local sub-structures such as short paths. We thoroughly evaluate these two ideas on many classification and regression tasks, demonstrating the effectiveness of each of them independently, as well as of their combination. In addition to performing well on standard benchmarks, our model admits natural visualization mechanisms for interpreting the graph motifs that explain its predictions, making it a potentially strong candidate for scientific applications where interpretation is important. Code available at https://github.com/inria-thoth/GraphiT.
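As a rough illustration of idea (i), the following minimal NumPy sketch modulates unnormalized self-attention scores with a positive definite kernel on the graph before normalizing. This is not the authors' implementation: the choice of the diffusion kernel, the single-head setup, and all function and variable names here are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the authors' code): reweight self-attention
# scores with a positive definite graph kernel, here the diffusion kernel
# K = expm(-beta * L), where L is the combinatorial graph Laplacian.
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adj, beta=1.0):
    """Diffusion kernel on a graph given its (symmetric) adjacency matrix."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return expm(-beta * laplacian)  # positive definite for any beta > 0

def kernel_attention(x, adj, w_q, w_k, w_v, beta=1.0):
    """Single-head self-attention over node features x (n x d), with the
    unnormalized attention scores reweighted by the graph kernel."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = np.exp(q @ k.T / np.sqrt(k.shape[1]))   # unnormalized attention
    scores = scores * diffusion_kernel(adj, beta)    # relative positional reweighting
    scores = scores / scores.sum(axis=1, keepdims=True)
    return scores @ v

# Toy usage: a 4-node path graph with random node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 8))
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
out = kernel_attention(x, adj, w_q, w_k, w_v, beta=0.5)
print(out.shape)  # (4, 8): one updated embedding per node
```

In this sketch the kernel plays the role of a relative positional encoding: node pairs that are close in the graph (large kernel value) attend to each other more strongly, while distant pairs are down-weighted.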

Updated: 2021-06-11