GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings
arXiv - CS - Machine Learning | Pub Date: 2021-06-10 | arXiv: 2106.05609
Matthias Fey, Jan E. Lenssen, Frank Weichert, Jure Leskovec

We present GNNAutoScale (GAS), a framework for scaling arbitrary message-passing GNNs to large graphs. GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations, leading to constant GPU memory consumption with respect to input node size without dropping any data. While existing solutions weaken the expressive power of message passing due to sub-sampling of edges or non-trainable propagations, our approach provably maintains the expressive power of the original GNN. We achieve this by providing approximation error bounds for historical embeddings and showing how to tighten them in practice. Empirically, we show that the practical realization of our framework, PyGAS, an easy-to-use extension for PyTorch Geometric, is both fast and memory-efficient, learns expressive node representations, closely matches the performance of its non-scaling counterparts, and reaches state-of-the-art performance on large-scale graphs.
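The core mechanism can be illustrated with a short sketch. The snippet below is a minimal, illustrative reconstruction of the historical-embedding idea, not the actual PyGAS API: the `History` buffer, its `pull`/`push` methods, the `GASMeanConv` layer, and the `hist_in`/`hist_out` arguments are all hypothetical names, and the mean-aggregation layer stands in for an arbitrary message-passing layer.

```python
import torch
from torch import nn


class History:
    """CPU-resident buffer holding the most recent embedding of every node.
    Out-of-batch neighbors are read from here instead of being recomputed,
    which keeps GPU memory constant with respect to the input graph size."""

    def __init__(self, num_nodes: int, dim: int):
        self.emb = torch.zeros(num_nodes, dim)  # lives in host memory

    def pull(self, idx: torch.Tensor, device) -> torch.Tensor:
        return self.emb[idx].to(device)

    def push(self, idx: torch.Tensor, x: torch.Tensor) -> None:
        self.emb[idx] = x.detach().cpu()


class GASMeanConv(nn.Module):
    """Mean-aggregation layer mixing fresh in-batch embeddings with stale
    historical embeddings of out-of-batch neighbors. One History per layer:
    pull reads this layer's input history, push writes its output history."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x_batch, batch_idx, nbr_idx, edge_index, hist_in, hist_out):
        # Rows 0..B-1 of x_all are in-batch nodes (fresh); the remaining rows
        # are out-of-batch neighbors approximated by their histories, so their
        # computation sub-trees never need to be expanded.
        x_all = torch.cat([x_batch, hist_in.pull(nbr_idx, x_batch.device)], dim=0)
        src, dst = edge_index  # local indices: src into x_all, dst into the batch
        agg = torch.zeros_like(x_batch)
        agg.index_add_(0, dst, x_all[src])  # sum incoming neighbor messages
        deg = torch.bincount(dst, minlength=x_batch.size(0)).clamp(min=1)
        out = self.lin(agg / deg.unsqueeze(-1))  # mean aggregation + transform
        hist_out.push(batch_idx, out)  # refresh the history for later batches
        return out


# Toy usage: a 6-node graph, batch = nodes {0, 1}, one out-of-batch neighbor {4}.
hist_in, hist_out = History(6, 8), History(6, 16)
conv = GASMeanConv(8, 16)
x_batch = torch.randn(2, 8)
edge_index = torch.tensor([[1, 2, 0], [0, 0, 1]])  # local src -> dst (in-batch)
out = conv(x_batch, torch.tensor([0, 1]), torch.tensor([4]),
           edge_index, hist_in, hist_out)
```

Because `pull` returns embeddings computed in an earlier iteration, the only approximation error comes from their staleness; the paper's error bounds quantify exactly this gap and motivate keeping the histories as fresh as possible in practice.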

Updated: 2021-06-11