Low-complexity Approximate Convolutional Neural Networks
arXiv - STAT - Methodology Pub Date : 2022-07-29 , DOI: arxiv-2208.00087
R. J. Cintra, S. Duffner, C. Garcia, A. Leite

In this paper, we present an approach for minimizing the computational complexity of trained Convolutional Neural Networks (ConvNets). The idea is to approximate all elements of a given ConvNet, replacing the original convolutional filters and parameters (pooling and bias coefficients, and the activation function) with efficient approximations capable of extreme reductions in computational complexity. Low-complexity convolution filters are obtained through a binary (zero-one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow for multiplication-free computations requiring only addition and bit-shifting operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We applied our approach to three use cases of different complexity: (i) a "light" but efficient ConvNet for face detection (with around 1000 parameters); (ii) another for hand-written digit classification (with more than 180000 parameters); and (iii) a significantly larger ConvNet, AlexNet, with $\approx$1.2 million matrices. We evaluated the overall performance on the respective tasks for different levels of approximation. In all considered applications, we derived very low-complexity approximations while maintaining almost equal classification performance.
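The core mechanism can be illustrated with a small sketch: each real-valued filter weight is approximated by a dyadic rational m / 2^k (integer numerator, power-of-two denominator), so multiplying an input by a weight reduces to integer shift-and-add operations plus one final right shift. This is only an illustrative approximation by rounding; the paper's actual scheme selects numerators via binary (zero-one) linear programming that minimizes the Frobenius-norm error, which is not reproduced here. The function names `to_dyadic` and `dyadic_dot` are hypothetical.

```python
def to_dyadic(w, k=4):
    """Nearest dyadic rational m / 2**k to the real weight w.
    (Illustrative rounding only; the paper uses a binary linear
    programming scheme to pick the numerators.)"""
    return round(w * (1 << k))  # integer numerator m


def dyadic_dot(xs, ms, k=4):
    """Dot product of integer inputs xs with dyadic weights ms / 2**k,
    computed with additions and bit shifts only (no multiplication by
    real-valued weights)."""
    acc = 0
    for x, m in zip(xs, ms):
        # Compute m * x by shift-and-add over the binary expansion of |m|.
        sign, mm = (1, m) if m >= 0 else (-1, -m)
        partial, term = 0, x
        while mm:
            if mm & 1:
                partial += term  # add the shifted input for each set bit
            term <<= 1
            mm >>= 1
        acc += sign * partial
    return acc / (1 << k)  # single final scaling by 2**-k


weights = [0.24, -0.51, 0.13]
ms = [to_dyadic(w) for w in weights]          # dyadic numerators for k = 4
approx = dyadic_dot([3, 1, 2], ms)            # multiplication-free result
exact = sum(w * x for w, x in zip(weights, [3, 1, 2]))
```

In hardware, the shift-and-add loop corresponds to a fixed network of adders and wire shifts per filter, which is the source of the low-power designs the abstract refers to.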

Updated: 2022-08-02