UNIT: Unifying Tensorized Instruction Compilation
arXiv - CS - Hardware Architecture, Pub Date: 2021-01-21, DOI: arxiv-2101.08458
Jian Weng, Animesh Jain, Jie Wang, Leyuan Wang, Yida Wang, Tony Nowatzki

Because of the increasing demand for computation in DNNs, researchers develop both hardware and software mechanisms to reduce the compute and memory burden. A widely adopted approach is to use mixed precision data types. However, it is hard to leverage mixed precision without hardware support because of the overhead of data casting. Hardware vendors offer tensorized instructions for mixed-precision tensor operations, like Intel VNNI, Nvidia Tensor Core, and ARM DOT. These instructions involve a computing idiom that reduces multiple low-precision elements into one high-precision element. The lack of compilation techniques for this idiom makes these instructions hard to utilize: using vendor-provided libraries for computationally intensive kernels is inflexible and prevents further optimization, while manually writing hardware intrinsics is error-prone and difficult for programmers. Some prior works address this problem by creating a compiler for each instruction, which requires excessive effort when many tensorized instructions are involved. In this work, we develop a compiler framework to unify the compilation for these instructions -- a unified semantics abstraction eases the integration of new instructions and enables the reuse of analyses and transformations. Tensorized instructions from different platforms can be compiled via UNIT with moderate effort for favorable performance. Given a tensorized instruction and a tensor operation, UNIT automatically detects the applicability, transforms the loop organization of the operation, and rewrites the loop body to leverage the tensorized instruction. According to our evaluation, UNIT can target various mainstream hardware platforms. The generated end-to-end inference model achieves 1.3x speedup over Intel oneDNN on an x86 CPU, 1.75x speedup over Nvidia cuDNN on an Nvidia GPU, and 1.13x speedup over a carefully tuned TVM solution for ARM DOT on an ARM CPU.
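To make the computing idiom concrete, the following is a minimal Python/NumPy sketch (not from the paper) of the semantics such an instruction implements. Intel VNNI's VPDPBUSD, for example, multiplies four pairs of 8-bit values and accumulates their sum into a single 32-bit lane; the function name and fixed width of four below are illustrative assumptions.

```python
import numpy as np

def dot_4x_i8_to_i32(acc: np.int32, a: np.ndarray, b: np.ndarray) -> np.int32:
    """Sketch of the reduction idiom: four low-precision (8-bit) products
    are reduced into one high-precision (32-bit) accumulator lane."""
    assert a.dtype == np.uint8 and b.dtype == np.int8
    assert a.size == 4 and b.size == 4
    # Widen to int32 before multiplying so the products cannot overflow,
    # then reduce all four products into the single high-precision lane.
    return acc + np.sum(a.astype(np.int32) * b.astype(np.int32), dtype=np.int32)

acc = dot_4x_i8_to_i32(np.int32(0),
                       np.array([1, 2, 3, 4], dtype=np.uint8),
                       np.array([5, -6, 7, -8], dtype=np.int8))
print(acc)  # 1*5 - 2*6 + 3*7 - 4*8 = -18
```

Doing this in scalar code requires explicit widening casts on every element, which is exactly the data-casting overhead the abstract mentions; the tensorized instruction performs the widening, multiplication, and reduction in one step.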

Updated: 2021-01-22