Redundancy-Aware Pruning of Convolutional Neural Networks
Neural Computation (IF 2.7), Pub Date: 2020-12-01, DOI: 10.1162/neco_a_01330
Guotian Xie

Pruning is an effective way to slim and speed up convolutional neural networks. Generally, previous work pruned neural networks directly in the original feature space without considering correlations among neurons. We argue that pruning this way leaves some redundancy in the pruned networks. In this letter, we propose to prune in an intermediate space in which the correlations among neurons are eliminated. To achieve this, the input and output of a convolutional layer are first mapped to the intermediate space by an orthogonal transformation; neurons are then evaluated and pruned in that space. Extensive experiments show that our redundancy-aware pruning method surpasses state-of-the-art pruning methods in both efficiency and accuracy. Notably, with our redundancy-aware pruning method, ResNet models pruned for a threefold speed-up achieve competitive performance with fewer floating-point operations even compared to DenseNet.
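The core idea, decorrelating a layer's activations with an orthogonal transform before scoring neurons, can be illustrated with a PCA-style sketch. This is an assumption-laden toy (the function name, the use of spatially pooled activations, and eigenvalue-as-importance scoring are illustrative choices, not the paper's exact algorithm):

```python
import numpy as np

def redundancy_aware_importance(acts):
    """Illustrative sketch, not the paper's exact method: decorrelate
    channel activations with an orthogonal (eigenvector) transform,
    then score each direction in that intermediate space.

    acts: (n_samples, n_channels) activations from one conv layer,
          e.g. spatially pooled feature maps.
    """
    centered = acts - acts.mean(axis=0)
    cov = centered.T @ centered / len(acts)   # channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigvecs is orthogonal
    # In the decorrelated space each direction's importance can be read
    # off its variance (eigenvalue); redundancy hidden by correlated
    # channels in the original space is now isolated in small eigenvalues.
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

# Toy usage: two nearly identical channels plus one independent channel.
rng = np.random.default_rng(0)
base = rng.normal(size=(1000, 1))
acts = np.hstack([base,
                  base + 0.01 * rng.normal(size=(1000, 1)),
                  rng.normal(size=(1000, 1))])
importance, basis = redundancy_aware_importance(acts)
# The smallest eigenvalue is near zero: one decorrelated direction is
# redundant and is a natural candidate to prune.
```

Pruning low-variance directions here removes redundancy that channel-wise criteria in the original space would miss, which is the motivation the abstract gives for working in the intermediate space.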

Updated: 2020-12-01