Learned Low Precision Graph Neural Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2020-09-19 , DOI: arxiv-2009.09232
Yiren Zhao, Duo Wang, Daniel Bates, Robert Mullins, Mateja Jamnik, Pietro Lio

Deep Graph Neural Networks (GNNs) show promising performance on a range of graph tasks, yet at present are costly to run and lack many of the optimisations applied to DNNs. We show, for the first time, how to systematically quantise GNNs with minimal or no loss in performance using Network Architecture Search (NAS). We define the possible quantisation search space of GNNs. The proposed novel NAS mechanism, named Low Precision Graph NAS (LPGNAS), constrains both architecture and quantisation choices to be differentiable. LPGNAS learns the optimal architecture coupled with the best quantisation strategy for different components in the GNN automatically using back-propagation in a single search round. On eight different datasets, solving the task of classifying unseen nodes in a graph, LPGNAS generates quantised models with significant reductions in both model and buffer sizes but with similar accuracy to manually designed networks and other NAS results. In particular, on the Pubmed dataset, LPGNAS shows a better size-accuracy Pareto frontier compared to seven other manual and searched baselines, offering a 2.3 times reduction in model size but a 0.4% increase in accuracy when compared to the best NAS competitor. Finally, from our collected quantisation statistics on a wide range of datasets, we suggest a W4A8 (4-bit weights, 8-bit activations) quantisation strategy might be the bottleneck for naive GNN quantisations.
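The two ideas in the abstract — a differentiable (softmax-weighted) choice over candidate quantisation options, and a fixed W4A8 scheme applied after the search — can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the LPGNAS implementation: `fake_quantize`, the candidate list `bit_options`, and the logits `alpha` are illustrative names, and the real method uses learned gradients rather than hand-set logits.

```python
import numpy as np

np.random.seed(0)

def fake_quantize(x, bits):
    """Uniform symmetric fake quantisation onto a (2**bits - 1)-level grid.
    Values stay in float, but only grid points are representable."""
    max_abs = np.max(np.abs(x))
    if max_abs == 0:
        return x
    scale = max_abs / (2 ** bits - 1) * 2  # step between grid points
    return np.round(x / scale) * scale

def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

# Hypothetical quantisation search space for one GNN component.
bit_options = [2, 4, 8]
alpha = np.array([0.1, 1.5, 0.3])  # learnable logits over the options

w = np.random.randn(8, 8)

# Differentiable mixture: the forward pass is a softmax-weighted sum of the
# candidate quantised weights, so gradients can flow back to alpha and the
# search is solved by back-propagation rather than discrete enumeration.
p = softmax(alpha)
w_mixed = sum(pi * fake_quantize(w, b) for pi, b in zip(p, bit_options))

# After the search the argmax option is kept; e.g. the W4A8 strategy:
w_q = fake_quantize(w, 4)                 # 4-bit weights
act = np.random.randn(8)
act_q = fake_quantize(act, 8)             # 8-bit activations
out = act_q @ w_q                          # one quantised linear step
```

The mixture keeps the search space continuous during training; at the end, only the highest-probability bit-width is retained, which is what yields the reported model-size reductions.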

Updated: 2020-09-22