MLogNet: A Logarithmic Quantization-Based Accelerator for Depthwise Separable Convolution
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IF 2.7), Pub Date: 2022-02-09, DOI: 10.1109/tcad.2022.3150249
Jooyeon Choi, Hyeonuk Sim, Sangyun Oh, Sugil Lee, Jongeun Lee

In this article, we propose a novel logarithmic quantization-based deep neural network (DNN) architecture for depthwise separable convolution (DSC) networks. Our architecture is based on selective two-word logarithmic quantization (STLQ), which greatly improves accuracy over logarithmic-scale quantization while retaining the speed and area advantages of logarithmic quantization. However, STLQ also introduces a synchronization problem caused by its variable-latency processing elements (PEs), which we address through a novel architecture and a compile-time optimization technique. Our architecture is dynamically reconfigurable to support various combinations of depthwise and pointwise convolution layers efficiently. Our experimental results using layers from MobileNetV2 and ShuffleNetV2 demonstrate that our architecture is significantly faster and more area-efficient than previous DSC accelerator architectures as well as previous accelerators utilizing logarithmic quantization.
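The abstract does not spell out the STLQ scheme itself; as a rough, self-contained illustration of the idea it names, the NumPy sketch below quantizes every weight to its nearest power of two and then, selectively, gives only the weights with the largest residual error a second power-of-two term. The function names, the 25% second-word ratio, and the exponent range are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def log2_quant(w, min_exp=-8):
    """Round each value to the nearest power of two, keeping its sign.

    Magnitudes below 2**min_exp are clamped to that exponent; exact
    zeros stay zero.
    """
    w = np.asarray(w, dtype=np.float64)
    sign = np.sign(w)
    mag = np.abs(w)
    safe = np.where(mag > 0, mag, 2.0 ** min_exp)      # avoid log2(0)
    exp = np.clip(np.round(np.log2(safe)), min_exp, 0)
    return np.where(mag > 0, sign * np.exp2(exp), 0.0)

def stlq(w, second_word_ratio=0.25, min_exp=-8):
    """Selective two-word logarithmic quantization (illustrative sketch).

    Every weight gets one power-of-two term; only the fraction of
    weights with the largest one-word residual error gets a second
    power-of-two term approximating that residual.
    """
    w = np.asarray(w, dtype=np.float64)
    first = log2_quant(w, min_exp)
    residual = (w - first).ravel()
    k = int(np.ceil(second_word_ratio * residual.size))
    worst = np.argsort(np.abs(residual))[-k:]          # worst-approximated weights
    second = np.zeros_like(residual)
    second[worst] = log2_quant(residual[worst], min_exp)
    return first + second.reshape(w.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.1, size=4096)
    for name, q in (("one-word log", log2_quant(weights)),
                    ("STLQ (25% two-word)", stlq(weights))):
        print(f"{name:22s} mean |error| = {np.mean(np.abs(weights - q)):.6f}")
```

In hardware, a power-of-two weight turns a multiplication into a shift, which is where the speed and area advantage of logarithmic quantization comes from; the second word of STLQ costs an extra shift-add only for the selected weights, at the price of the variable PE latency the abstract mentions.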
