Robust pruning for efficient CNNs
Pattern Recognition Letters (IF 3.9). Pub Date: 2020-04-06. DOI: 10.1016/j.patrec.2020.03.034
Hidenori Ide, Takumi Kobayashi, Kenji Watanabe, Takio Kurita

Deep convolutional neural networks (CNNs) with a considerable number of parameters are among the most promising methods for image recognition. It is, however, generally difficult to apply deep CNNs to resource-constrained devices due to their heavy computational burden. To reduce the computational cost of CNNs while retaining classification performance, it is effective to apply pruning methods that remove redundant parameters contributing little to classification. The contribution of parameters can be estimated by the empirical classification loss computed over training samples to which ground-truth labels are assigned. The empirical classification loss, however, can be vulnerable to outlier samples and/or hard samples that are difficult to classify, and pruning would accordingly be degraded. In this paper, we propose a pruning method based on a novel criterion that measures the redundancy of CNN parameters through the empirical classification loss. We start with the Taylor expansion of the loss function and then derive a mathematical formulation of the pruning criterion that is robust against such outlier samples. The proposed pruning criterion also provides a stable metric for parameters and evaluates layers of various depths fairly, without bias toward shallower or deeper layers. In addition, we present an effective method to normalize the criterion scores for further performance improvement. In experiments on image classification, our method exhibits favorable performance compared with other methods.
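The abstract's criterion builds on first-order Taylor-expansion pruning, where the change in loss caused by zeroing a feature map h is approximated by |∂L/∂h · h|, accumulated over training samples. The sketch below is a minimal, hypothetical PyTorch illustration of that family of criteria; the per-sample down-weighting of high-loss (outlier/hard) examples and the layer-wise score normalization are assumptions standing in for the paper's exact robust formulation, which the abstract does not spell out.

```python
import torch
import torch.nn as nn

def taylor_pruning_scores(model, data_loader, loss_fn, device="cpu"):
    """Per-channel importance via a first-order Taylor criterion (sketch).

    The loss change from removing a feature map is approximated by
    |activation * gradient|, averaged over batch and spatial positions.
    The robust re-weighting and the final normalization are illustrative
    assumptions, not the authors' exact method.
    """
    activations, scores = {}, {}

    def save_activation(name):
        def hook(module, inputs, output):
            output.retain_grad()  # keep grad w.r.t. this non-leaf tensor
            activations[name] = output
        return hook

    hooks = [m.register_forward_hook(save_activation(n))
             for n, m in model.named_modules() if isinstance(m, nn.Conv2d)]

    model.eval()
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        per_sample_loss = loss_fn(model(x), y)  # shape: (batch,)
        # Assumed robust weighting: high-loss (outlier/hard) samples
        # contribute less to the criterion.
        w = 1.0 / (1.0 + per_sample_loss.detach())
        (w * per_sample_loss).mean().backward()
        for name, act in activations.items():
            # |activation * gradient|, reduced to one score per channel
            s = (act * act.grad).abs().mean(dim=(0, 2, 3))
            scores[name] = scores.get(name, 0.0) + s.detach()

    for h in hooks:
        h.remove()
    # Layer-wise L2 normalization so scores from layers of different
    # depths are comparable before ranking channels globally.
    return {n: s / (s.norm() + 1e-8) for n, s in scores.items()}
```

With loss_fn = nn.CrossEntropyLoss(reduction="none"), the returned per-layer vectors can be ranked jointly and the lowest-scoring channels pruned; the layer-wise normalization mirrors the abstract's point about evaluating shallow and deep layers without bias.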




Updated: 2020-04-06