Graph Transformer Networks: Learning Meta-path Graphs to Improve GNNs
arXiv - CS - Social and Information Networks. Pub Date: 2021-06-11, DOI: arXiv-2106.06218
Seongjun Yun, Minbyul Jeong, Sungdong Yoo, Seunghun Lee, Sean S. Yi, Raehyun Kim, Jaewoo Kang, Hyunwoo J. Kim

Graph Neural Networks (GNNs) have been widely applied to various fields due to their powerful representations of graph-structured data. Despite the success of GNNs, most existing GNNs are designed to learn node representations on fixed, homogeneous graphs. These limitations become especially problematic when learning representations on a misspecified graph or on a heterogeneous graph that consists of various types of nodes and edges. To address these limitations, we propose Graph Transformer Networks (GTNs), which are capable of generating new graph structures that preclude noisy connections and include useful connections (e.g., meta-paths) for tasks, while learning effective node representations on the new graphs in an end-to-end fashion. We further propose an enhanced version of GTNs, Fast Graph Transformer Networks (FastGTNs), which improves the scalability of graph transformations. Compared to GTNs, FastGTNs are 230x faster and use 100x less memory while allowing identical graph transformations. In addition, we extend graph transformations to the semantic proximity of nodes, allowing non-local operations beyond meta-paths. Extensive experiments on both homogeneous and heterogeneous graphs show that GTNs and FastGTNs with non-local operations achieve state-of-the-art performance on node classification tasks. The code is available at https://github.com/seongjunyun/Graph_Transformer_Networks
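The core idea the abstract describes, generating new graph structures by softly selecting among the edge-type adjacency matrices of a heterogeneous graph and composing the selections by matrix multiplication to form meta-path graphs, can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation (which uses learnable 1x1 convolutions over a stacked adjacency tensor and multiple layers); the toy matrices, edge-type names, and score vectors below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def soft_select(adjs, scores):
    """Softly select a graph: a convex combination of edge-type
    adjacency matrices, weighted by softmax(scores)."""
    w = softmax(np.asarray(scores, dtype=float))
    return sum(wi * A for wi, A in zip(w, adjs))

def gt_layer(adjs, scores_a, scores_b):
    """One Graph-Transformer-style layer: the product of two softly
    selected graphs gives a (soft) length-2 meta-path adjacency."""
    return soft_select(adjs, scores_a) @ soft_select(adjs, scores_b)

# Toy heterogeneous graph: 4 nodes, two edge types (names illustrative).
A_pa = np.array([[0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]], dtype=float)  # e.g. "paper-author" edges
A_ac = np.array([[0, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0],
                 [1, 0, 0, 0]], dtype=float)  # e.g. "author-conference" edges
adjs = [A_pa, A_ac]

# With near-one-hot selection scores, the layer recovers the hard
# meta-path composition A_pa @ A_ac (paper-author-conference paths).
meta = gt_layer(adjs, scores_a=[10.0, -10.0], scores_b=[-10.0, 10.0])
```

Because the selection weights are a softmax over learnable scores, the choice of which edge types to compose is differentiable, which is what lets the meta-path graphs be learned end-to-end together with the node representations.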

Updated: 2021-06-14