FuSeConv: Fully Separable Convolutions for Fast Inference on Systolic Arrays
arXiv - CS - Hardware Architecture. Pub Date: 2021-05-27, DOI: arxiv-2105.13434
Surya Selvam, Vinod Ganesan, Pratyush Kumar

Both efficient neural networks and hardware accelerators are being explored to speed up DNN inference on edge devices. For example, MobileNet uses depthwise separable convolutions to achieve much lower latency, while systolic arrays provide much higher performance per watt. Interestingly, however, the combination of these two ideas is inefficient: the computational patterns of depthwise separable convolutions are not systolic and lack the data reuse needed to saturate the systolic array's constrained dataflow. In this paper, we propose FuSeConv (Fully-Separable Convolution) as a drop-in replacement for depthwise separable convolution. FuSeConv generalizes the decomposition of convolutions fully to separable 1D convolutions along the spatial and depth dimensions. The resulting computation is systolic and efficiently utilizes the systolic array with a slightly modified dataflow. With FuSeConv, we achieve a significant speed-up of 3x-7x with the MobileNet family of networks on a 64x64 systolic array, with comparable accuracy on the ImageNet dataset. The high speed-up motivates exploration of hardware-aware Neural Operator Search (NOS) as a complement to ongoing efforts on Neural Architecture Search (NAS).
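The decomposition the abstract refers to rests on a classical identity: convolving with a rank-1 (separable) 2D kernel is equivalent to applying a Kx1 vertical 1D convolution followed by a 1xK horizontal one, reducing per-pixel work from K*K to 2K multiply-accumulates. The sketch below illustrates only this general separability property in NumPy; it is not the paper's exact FuSeConv operator, and `conv2d_valid` is a hypothetical helper written for this illustration.

```python
import numpy as np

def conv2d_valid(img, kern):
    # Naive "valid" cross-correlation: slide the kernel over the image
    # and take the elementwise-product sum at each position.
    kh_, kw_ = kern.shape
    H, W = img.shape
    out = np.zeros((H - kh_ + 1, W - kw_ + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh_, j:j + kw_] * kern)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
kv = rng.standard_normal(3)  # Kx1 vertical 1D kernel
kh = rng.standard_normal(3)  # 1xK horizontal 1D kernel

# One 3x3 pass with the rank-1 kernel outer(kv, kh): 9 MACs per output pixel.
full = conv2d_valid(img, np.outer(kv, kh))

# Two 1D passes (vertical then horizontal): 3 + 3 = 6 MACs per output pixel,
# and each pass is a simple systolic-friendly 1D sliding dot product.
factored = conv2d_valid(conv2d_valid(img, kv[:, None]), kh[None, :])

assert np.allclose(full, factored)
```

The two results agree to floating-point precision, which is why a fully separable factorization can preserve the convolution's function while exposing 1D dataflows that map naturally onto a systolic array's rows.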

Updated: 2021-05-31