Proximu$: Efficiently Scaling DNN Inference in Multi-core CPUs through Near-Cache Compute
arXiv - CS - Hardware Architecture. Pub Date: 2020-11-23, DOI: arxiv-2011.11695
Anant V. Nori, Rahul Bera, Shankar Balachandran, Joydeep Rakshit, Om J. Omer, Avishaii Abuhatzera, Kuttanna Belliappa, Sreenivas Subramoney
Deep Neural Network (DNN) inference is emerging as the fundamental bedrock
for a multitude of utilities and services. CPUs continue to scale up their raw
compute capabilities for DNN inference along with mature high performance
libraries to extract optimal performance. While general purpose CPUs offer
unique attractive advantages for DNN inference at both datacenter and edge,
they have primarily evolved to optimize single thread performance. For highly
parallel, throughput-oriented DNN inference, this results in inefficiencies in
both power and performance, impacting both raw performance scaling and overall
performance/watt. We present Proximu$, where we systematically tackle the root
inefficiencies in power and performance scaling for CPU DNN inference.
Performance scales efficiently by distributing light-weight tensor compute near
all caches in a multi-level cache hierarchy. This maximizes the cumulative
utilization of the existing bandwidth resources in the system and minimizes
movement of data. Power is drastically reduced through simple ISA extensions
that encode the structured, loop-y workload behavior. This enables a bulk
offload of pre-decoded work, with loop unrolling in the light-weight near-cache
units, effectively bypassing the power-hungry stages of the wide Out-of-Order
(OOO) CPU pipeline. Across a number of DNN models, Proximu$ achieves a 2.3x increase in
convolution performance/watt with a 2x to 3.94x scaling in raw performance.
Similarly, Proximu$ achieves a 1.8x increase in inner-product
performance/watt with 2.8x scaling in performance. With no changes to the
programming model, no increase in cache capacity or bandwidth and minimal
additional hardware, Proximu$ enables unprecedented CPU efficiency gains
while achieving similar performance to state-of-the-art Domain Specific
Accelerators (DSA) for DNN inference in this AI era.
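The abstract's key power-saving mechanism is encoding a structured, loop-heavy tensor kernel as a compact descriptor that is bulk-offloaded to a light-weight near-cache unit, so each iteration no longer passes through the fetch/decode/issue stages of the wide OOO pipeline. A minimal illustrative sketch of that idea follows; the descriptor fields and the `near_cache_mac` model are hypothetical, not Proximu$'s actual ISA extensions:

```python
# Hypothetical sketch of a bulk loop offload: one descriptor encodes the
# whole structured loop (bases, strides, trip count), and a model of a
# light-weight near-cache unit executes it against cache-resident data.
from dataclasses import dataclass

@dataclass
class LoopDescriptor:
    base_a: int      # start index of operand A in the cache-resident buffer
    base_b: int      # start index of operand B
    stride_a: int    # element stride for A
    stride_b: int    # element stride for B
    trip_count: int  # iterations, unrolled inside the near-cache unit

def near_cache_mac(mem, d: LoopDescriptor) -> int:
    """Model of a near-cache multiply-accumulate unit: a single offloaded
    descriptor replaces trip_count fetch/decode/issue passes through the
    power-hungry OOO front end."""
    acc = 0
    for i in range(d.trip_count):
        acc += mem[d.base_a + i * d.stride_a] * mem[d.base_b + i * d.stride_b]
    return acc

# Inner product of [1,2,3,4] with [10,20,30,40], both held in one buffer.
mem = [1, 2, 3, 4, 10, 20, 30, 40]
d = LoopDescriptor(base_a=0, base_b=4, stride_a=1, stride_b=1, trip_count=4)
print(near_cache_mac(mem, d))  # 1*10 + 2*20 + 3*30 + 4*40 = 300
```

The point of the sketch is the interface shape: the core issues one descriptor rather than a per-iteration instruction stream, which is what lets the pre-decoded work bypass the OOO pipeline stages the abstract calls out.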
Updated: 2020-11-25