Learning Graph Normalization for Graph Neural Networks
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-09-24, DOI: arXiv-2009.11746
Yihao Chen, Xin Tang, Xianbiao Qi, Chun-Guang Li, Rong Xiao

Graph Neural Networks (GNNs) have attracted considerable attention and have emerged as a promising new paradigm for processing graph-structured data. GNNs are usually stacked into multiple layers, and the node representations in each layer are computed by propagating and aggregating neighboring node features over the graph. By stacking multiple layers, GNNs can capture long-range dependencies among the data on the graph and thus achieve performance improvements. To train a GNN with multiple layers effectively, some normalization techniques (e.g., node-wise normalization, batch-wise normalization) are necessary. However, normalization techniques for GNNs are highly task-relevant, and different application tasks prefer different normalization techniques, which are hard to know in advance. To tackle this deficiency, in this paper we propose to learn graph normalization by optimizing a weighted combination of normalization techniques at four different levels: node-wise, adjacency-wise, graph-wise, and batch-wise normalization, where adjacency-wise and graph-wise normalization are newly proposed in this paper to take into account the local structure and the global structure of the graph, respectively. By learning the optimal weights, we can automatically select either the single best normalization or the best combination of multiple normalizations for a specific task. We conduct extensive experiments on benchmark datasets for different tasks, including node classification, link prediction, graph classification, and graph regression, and confirm that the learned graph normalization leads to competitive results and that the learned weights suggest the appropriate normalization techniques for the specific task. Source code is released at https://github.com/cyh1112/GraphNormalization.
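The idea of a learned, softmax-weighted combination of normalizers can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that); the function names and the exact per-level statistics below are illustrative assumptions. It covers three of the four levels for a single graph — node-wise (statistics over each node's feature vector), adjacency-wise (statistics over each node's neighborhood), and graph-wise (statistics over all nodes of the graph); batch-wise normalization is omitted because it requires a batch of graphs.

```python
import numpy as np

EPS = 1e-5  # small constant to avoid division by zero


def node_norm(X):
    """Node-wise: standardize each node's feature vector independently."""
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True)
    return (X - mu) / (sd + EPS)


def adj_norm(X, A):
    """Adjacency-wise: standardize each node with its neighborhood's
    statistics (neighborhood includes the node itself)."""
    A_hat = A + np.eye(A.shape[0])  # add self-loops
    out = np.empty_like(X)
    for i in range(X.shape[0]):
        nbr = X[A_hat[i] > 0]  # features of node i's neighborhood
        out[i] = (X[i] - nbr.mean(axis=0)) / (nbr.std(axis=0) + EPS)
    return out


def graph_norm(X):
    """Graph-wise: standardize each feature over all nodes of the graph."""
    mu = X.mean(axis=0, keepdims=True)
    sd = X.std(axis=0, keepdims=True)
    return (X - mu) / (sd + EPS)


def learned_graph_norm(X, A, logits):
    """Combine the normalizers with softmax weights; the `logits` would be
    trained jointly with the GNN so the task selects its preferred mix."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return w[0] * node_norm(X) + w[1] * adj_norm(X, A) + w[2] * graph_norm(X)
```

After training, a weight vector close to one-hot indicates that a single normalization level dominates for that task, while a spread-out vector indicates that a combination helps.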

Updated: 2020-09-25