Learning Execution through Neural Code Fusion
arXiv - CS - Programming Languages. Pub Date: 2019-06-17, DOI: arxiv-1906.07181
Zhan Shi, Kevin Swersky, Daniel Tarlow, Parthasarathy Ranganathan, Milad Hashemi

As the performance of computer systems stagnates due to the end of Moore's Law, there is a need for new models that can understand and optimize the execution of general-purpose code. While there is a growing body of work on using Graph Neural Networks (GNNs) to learn representations of source code, these representations do not capture how code dynamically executes. In this work, we propose a new approach that uses GNNs to learn fused representations of general source code and its execution. Our approach defines a multi-task GNN over low-level representations of source code and program state (i.e., assembly code and dynamic memory states), converting complex source-code constructs and complex data structures into a simpler, more uniform format. We show that this leads to improved performance over similar methods that do not use execution, and it opens the door to applying GNN models to new tasks that would not be feasible from static code alone. As an illustration, we apply the new model to challenging dynamic tasks (branch prediction and prefetching) from the SPEC CPU benchmark suite, outperforming the state-of-the-art by 26% and 45% respectively. Moreover, we use the learned fused graph embeddings to demonstrate transfer learning with high performance on an indirectly related task (algorithm classification).
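To make the core idea concrete, below is a minimal, hypothetical sketch of message passing over a fused static/dynamic graph: instruction nodes (assembly) and a dynamic-value node are connected by edges, a few propagation rounds mix information, and a task head reads out the branch node's embedding to predict taken/not-taken. The graph, vocabulary, and weight shapes are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical toy graph: nodes are assembly instructions plus one
# dynamic register value; edges connect producers to consumers.
# Node features: one-hot over a tiny vocabulary {mov, cmp, jne, VAL}.
nodes = np.array([
    [1.0, 0.0, 0.0, 0.0],  # 0: mov
    [0.0, 1.0, 0.0, 0.0],  # 1: cmp
    [0.0, 0.0, 1.0, 0.0],  # 2: jne (the branch we want to predict)
    [0.0, 0.0, 0.0, 1.0],  # 3: dynamic value observed at runtime
])
edges = [(0, 1), (1, 2), (3, 1)]  # directed: src -> dst

rng = np.random.default_rng(0)
W_msg = rng.normal(scale=0.1, size=(4, 4))  # message transform
W_upd = rng.normal(scale=0.1, size=(8, 4))  # state-update transform

def gnn_step(h, edges):
    """One round of message passing: sum transformed neighbor states,
    then update each node from [old state ; aggregated messages]."""
    msgs = np.zeros_like(h)
    for src, dst in edges:
        msgs[dst] += h[src] @ W_msg
    return np.tanh(np.concatenate([h, msgs], axis=1) @ W_upd)

h = nodes
for _ in range(3):  # a few propagation rounds
    h = gnn_step(h, edges)

# Task head: read out the branch node's embedding, predict taken/not-taken.
w_branch = rng.normal(scale=0.1, size=4)
p_taken = 1.0 / (1.0 + np.exp(-h[2] @ w_branch))
print(float(p_taken))
```

In the paper's setting this readout would be trained jointly with other heads (e.g. prefetch-address prediction) over the shared embeddings, which is what makes the representation "multi-task"; the sketch above shows only the shared message-passing backbone and one head.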

Updated: 2020-03-12