Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2021-06-11, DOI: arxiv-2106.06150
Jialin Dong, Da Zheng, Lin F. Yang, George Karypis

Graph neural networks (GNNs) are powerful tools for learning from graph data and are widely used in applications such as social network recommendation, fraud detection, and graph search. The graphs in these applications are typically large, often containing hundreds of millions of nodes. Training GNN models on such large graphs efficiently remains a major challenge. Although a number of sampling-based methods have been proposed to enable mini-batch training on large graphs, these methods have not been proven to work on truly industry-scale graphs, which require GPUs or mixed CPU-GPU training. The state-of-the-art sampling-based methods are usually not optimized for these real-world hardware setups, in which data movement between CPUs and GPUs is a bottleneck. To address this issue, we propose Global Neighborhood Sampling, which aims at training GNNs on giant graphs specifically for mixed CPU-GPU training. The algorithm periodically samples a global cache of nodes shared by all mini-batches and stores their data in GPU memory. This global cache allows in-GPU importance sampling of mini-batches, which drastically reduces the number of nodes in a mini-batch, especially in the input layer, thereby reducing CPU-GPU data copying and mini-batch computation without compromising the training convergence rate or model accuracy. We provide a highly efficient implementation of this method and show that it outperforms an efficient node-wise neighbor sampling baseline by a factor of 2X-4X on giant graphs. It also outperforms an efficient implementation of LADIES with small layers by a factor of 2X-14X while achieving much higher accuracy than LADIES. We also theoretically analyze the proposed algorithm and show that, with cached node data of a proper size, it enjoys a convergence rate comparable to that of the underlying node-wise sampling method.
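To make the idea concrete, below is a minimal sketch, under assumed data structures, of the two ingredients the abstract describes: a periodically refreshed, degree-biased global cache of nodes whose features would stay resident in GPU memory, and a neighbor sampler that prefers cached neighbors so that most input features of a mini-batch are already on the GPU. This is not the authors' implementation: the names (GlobalCache, sample_neighbors, fanout) are hypothetical, the graph is a toy CSR array, and the importance-weight correction of the actual method is omitted.

```python
# Illustrative sketch only; all names are hypothetical and the importance
# weighting of the real algorithm is left out.
import numpy as np


class GlobalCache:
    """Periodically re-sampled set of nodes whose features stay resident on the GPU."""

    def __init__(self, indptr, cache_size, seed=0):
        degrees = np.diff(indptr).astype(np.float64)
        self.probs = degrees / degrees.sum()      # bias the cache toward high-degree nodes
        self.num_nodes = degrees.shape[0]
        self.cache_size = cache_size
        self.rng = np.random.default_rng(seed)
        self.refresh()

    def refresh(self):
        # Re-draw the cached node set; in a real system the features of
        # self.node_ids would be copied to GPU memory here,
        # e.g. feat_gpu = feat_cpu[self.node_ids].cuda()
        self.node_ids = self.rng.choice(
            self.num_nodes, size=self.cache_size, replace=False, p=self.probs)
        self.in_cache = np.zeros(self.num_nodes, dtype=bool)
        self.in_cache[self.node_ids] = True


def sample_neighbors(indptr, indices, seeds, fanout, cache, rng):
    """Sample up to `fanout` neighbors per seed node, preferring cached neighbors
    to cut CPU-GPU feature copies."""
    src, dst = [], []
    for s in seeds:
        neigh = indices[indptr[s]:indptr[s + 1]]
        cached = neigh[cache.in_cache[neigh]]
        pool = cached if cached.size >= fanout else neigh   # fall back if too few cached
        chosen = pool if pool.size <= fanout else rng.choice(pool, size=fanout, replace=False)
        src.extend(chosen.tolist())
        dst.extend([s] * chosen.size)
    return np.asarray(src), np.asarray(dst)


if __name__ == "__main__":
    # Tiny 5-node CSR graph: edges 0-1, 0-2, 1-3, 2-3, 3-4 stored in both directions.
    indptr = np.array([0, 2, 4, 6, 9, 10])
    indices = np.array([1, 2, 0, 3, 0, 3, 1, 2, 4, 3])
    cache = GlobalCache(indptr, cache_size=3)
    rng = np.random.default_rng(0)
    src, dst = sample_neighbors(indptr, indices, np.array([0, 3]), fanout=2, cache=cache, rng=rng)
    print("sampled edges:", list(zip(src.tolist(), dst.tolist())))
```

Because the cache is shared by all mini-batches and only refreshed periodically, the cost of moving the cached node data to the GPU is amortized over many training iterations.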

Updated: 2021-06-14