Accelerating Convolutional Neural Networks by Removing Interspatial and Interkernel Redundancies
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2018-10-18, DOI: 10.1109/tcyb.2018.2873762
Linghua Zeng, Xinmei Tian

The high computational resource demands of convolutional neural networks (CNNs) have recently hindered a wide range of their applications. To address this problem, many previous works attempted to reduce the redundant calculations performed during CNN evaluation, but they focused mainly on either interspatial or interkernel redundancy. In this paper, we further accelerate existing CNNs by removing both types of redundancy. First, we convert interspatial redundancy into interkernel redundancy by decomposing each convolutional layer into a block that we design. Then, we remove the interkernel redundancy with rank-selection and pruning methods: the rank-selection method determines how many kernels the pruning method should remove, which considerably reduces manual effort. We apply a layer-wise training algorithm rather than traditional end-to-end training to overcome convergence difficulties, and finally fine-tune the entire network for better performance. On three widely used image classification datasets, our method achieves better accuracy and compression rates than previous state-of-the-art methods.
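The abstract does not spell out the block design or the rank-selection criterion, so the following PyTorch sketch is only a hedged illustration of the general idea, not the authors' method. `DecomposedBlock`, `prune_kernels`, `mid_ch`, and `keep` are all hypothetical names: the block replaces a 3x3 convolution with a 3x1 followed by a 1x3 convolution (one standard way to turn interspatial redundancy into redundancy among the intermediate kernels), and the pruning step uses a simple L1-norm ranking as a stand-in for the paper's rank-selection procedure.

```python
import torch
import torch.nn as nn

class DecomposedBlock(nn.Module):
    """Hypothetical stand-in for the paper's block: a 3x3 conv is
    replaced by a 3x1 conv followed by a 1x3 conv, shifting spatial
    redundancy into the set of intermediate kernels."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.vertical = nn.Conv2d(in_ch, mid_ch, (3, 1), padding=(1, 0))
        self.horizontal = nn.Conv2d(mid_ch, out_ch, (1, 3), padding=(0, 1))

    def forward(self, x):
        return self.horizontal(torch.relu(self.vertical(x)))

def prune_kernels(conv, keep):
    """Keep the `keep` output kernels with the largest L1 norm;
    an assumed criterion, standing in for the paper's
    rank-selection + pruning step."""
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    idx = norms.argsort(descending=True)[:keep]
    pruned = nn.Conv2d(conv.in_channels, keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[idx].clone()
    return pruned
```

Note that pruning a layer's output kernels also requires slicing the input channels of the layer that consumes its output; in the paper's pipeline, each pruned block is then retrained layer-wise before the final end-to-end fine-tuning of the whole network.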

Updated: 2024-08-22