Scaling Up Graph Neural Networks Via Graph Coarsening
arXiv - CS - Social and Information Networks. Pub Date: 2021-06-09, DOI: arxiv-2106.05150. Authors: Zengfeng Huang, Shengzhong Zhang, Chong Xi, Tang Liu, Min Zhou
Scalability of graph neural networks remains one of the major challenges in
graph machine learning. Since the representation of a node is computed by
recursively aggregating and transforming representation vectors of its
neighboring nodes from previous layers, the receptive fields grow
exponentially, which makes standard stochastic optimization techniques
ineffective. Various approaches have been proposed to alleviate this issue,
e.g., sampling-based methods and techniques based on pre-computation of graph
filters. In this paper, we take a different approach and propose to use graph
coarsening for scalable training of GNNs, which is generic, extremely simple
and has sublinear memory and time costs during training. We present extensive
theoretical analysis on the effect of using coarsening operations and provide
useful guidance on the choice of coarsening methods. Interestingly, our
theoretical analysis shows that coarsening can also be considered as a type of
regularization and may improve generalization. Finally, empirical results
on real-world datasets show that, by simply applying off-the-shelf coarsening
methods, we can reduce the number of nodes by up to a factor of ten without
causing a noticeable degradation in classification accuracy.
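As a concrete illustration of the coarsening idea described above (a generic sketch, not the paper's specific algorithm), the snippet below merges nodes into clusters via a hypothetical hard assignment and pools a toy graph onto the coarse graph with a partition matrix P, computing the coarse adjacency as PᵀAP and mean-pooling node features. The clustering itself and all variable names are illustrative assumptions:

```python
import numpy as np

def coarsen(A, X, clusters):
    """Coarsen graph (A, X) given a node -> cluster assignment.

    A: (n, n) adjacency matrix
    X: (n, d) node feature matrix
    clusters: length-n integer array of cluster ids in [0, k)
    Returns the (k, k) coarse adjacency and (k, d) mean-pooled features.
    """
    n = A.shape[0]
    k = clusters.max() + 1
    # Partition matrix P: P[i, c] = 1 iff node i belongs to cluster c.
    P = np.zeros((n, k))
    P[np.arange(n), clusters] = 1.0
    sizes = P.sum(axis=0)             # number of nodes in each cluster
    A_c = P.T @ A @ P                 # summed edge weights between clusters
    X_c = (P.T @ X) / sizes[:, None]  # mean-pool features within clusters
    return A_c, X_c

# Toy path graph on 4 nodes, with a hand-picked 2-cluster partition.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                         # one-hot features for illustration
clusters = np.array([0, 0, 1, 1])     # merge {0,1} and {2,3}
A_c, X_c = coarsen(A, X, clusters)
print(A_c.shape)  # (2, 2): training now runs on half as many nodes
```

A GNN trained on (A_c, X_c) sees a graph with k nodes instead of n, which is the source of the sublinear memory and time costs noted above.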
Updated: 2021-06-10