Graph-Bert: Only Attention is Needed for Learning Graph Representations
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-01-15, DOI: arxiv-2001.05140
Jiawei Zhang, Haopeng Zhang, Congying Xia, Li Sun

The dominant graph neural networks (GNNs) over-rely on graph links, and several serious performance problems stemming from this reliance have already been observed, e.g., the suspended animation problem and the over-smoothing problem. Moreover, the inherently inter-connected nature of a graph precludes parallelization within it, which becomes critical for large graphs because memory constraints limit batching across nodes. In this paper, we introduce a new graph neural network, namely GRAPH-BERT (Graph based BERT), based solely on the attention mechanism, without any graph convolution or aggregation operators. Instead of feeding GRAPH-BERT the complete large input graph, we propose to train it on sampled linkless subgraphs drawn from nodes' local contexts. GRAPH-BERT can be learned effectively in a standalone mode. Meanwhile, a pre-trained GRAPH-BERT can also be transferred to other application tasks directly, or with the necessary fine-tuning, whenever supervised label information or a certain application-oriented objective is available. We have tested the effectiveness of GRAPH-BERT on several graph benchmark datasets. Based on GRAPH-BERT pre-trained with node attribute reconstruction and structure recovery tasks, we further fine-tune it on node classification and graph clustering tasks. The experimental results demonstrate that GRAPH-BERT can outperform existing GNNs in both learning effectiveness and efficiency.
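
The abstract describes the approach only at a high level; the minimal PyTorch sketch below illustrates the attention-only, linkless-subgraph idea under simplifying assumptions. The intimacy matrix, the top-k sampling helper `sample_linkless_context`, and the single rank-based positional embedding are illustrative stand-ins, not the paper's exact sampling strategy or its full set of node and positional embeddings.

```python
# Minimal sketch of the Graph-Bert idea: sample a small "linkless" context
# subgraph for each target node, embed node attributes plus a simple positional
# signal, and run a standard Transformer encoder (attention only, no graph
# convolution or aggregation). Illustrative only, not the reference code.

import torch
import torch.nn as nn


def sample_linkless_context(intimacy_scores: torch.Tensor, target: int, k: int) -> torch.Tensor:
    """Pick the k most 'intimate' context nodes for a target node.

    `intimacy_scores` is an (N, N) matrix, e.g. derived from personalized
    PageRank (an assumption here); links themselves are discarded and only
    node indices are kept.
    """
    scores = intimacy_scores[target].clone()
    scores[target] = float("-inf")                        # exclude the target itself
    context = torch.topk(scores, k).indices
    return torch.cat([torch.tensor([target]), context])   # target node first


class GraphBertSketch(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 128, heads: int = 4, layers: int = 2, k: int = 7):
        super().__init__()
        self.k = k
        self.feat_embed = nn.Linear(in_dim, hidden)        # raw attribute embedding
        self.pos_embed = nn.Embedding(k + 1, hidden)       # rank-in-context position
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, x: torch.Tensor, intimacy_scores: torch.Tensor, target: int) -> torch.Tensor:
        """Return a representation for `target`, computed purely by attention
        over its sampled linkless context (no adjacency is consumed here)."""
        nodes = sample_linkless_context(intimacy_scores, target, self.k)
        tokens = self.feat_embed(x[nodes])                             # (k+1, hidden)
        tokens = tokens + self.pos_embed(torch.arange(len(nodes)))     # add positional signal
        out = self.encoder(tokens.unsqueeze(0))                        # (1, k+1, hidden)
        return out[0, 0]                                               # target node's vector


if __name__ == "__main__":
    N, F = 100, 16
    x = torch.randn(N, F)                   # node attributes
    intimacy = torch.rand(N, N)             # stand-in for an intimacy (e.g. PageRank) matrix
    model = GraphBertSketch(in_dim=F)
    z = model(x, intimacy, target=3)
    print(z.shape)                          # torch.Size([128])
```

In the pipeline described in the abstract, such a backbone would first be pre-trained with node attribute reconstruction and structure recovery objectives, and then fine-tuned for node classification or graph clustering.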

Updated: 2020-01-23