Shallowing Deep Networks: Layer-Wise Pruning Based on Feature Representations
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8) · Pub Date: 2018-10-08 · DOI: 10.1109/tpami.2018.2874634
Shi Chen , Qi Zhao

The recent surge of Convolutional Neural Networks (CNNs) has brought success across a wide range of applications. However, these successes are accompanied by a significant increase in computational cost and demand for computational resources, which critically hampers the deployment of complex CNNs on devices with limited computational power. In this work, we propose a feature-representation-based layer-wise pruning method that aims at reducing complex CNNs to more compact ones with equivalent performance. Unlike previous parameter pruning methods that conduct connection-wise or filter-wise pruning based on weight information, our method identifies redundant parameters by investigating the features learned in the convolutional layers, and the pruning process operates at the layer level. Experiments demonstrate that the proposed method significantly reduces computational cost, and the pruned models achieve equivalent or even better performance than the original models on various datasets.
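The abstract describes the method only at a high level, so the following is a minimal sketch of the general idea rather than the authors' algorithm. It assumes, as our own illustrative criterion, that a layer is redundant when it barely changes the feature representation, scored here by cosine similarity between each block's input and output features; the paper's actual feature-based criterion may differ. The helper names layer_redundancy_scores and shallow_network are hypothetical.

```python
# Illustrative sketch of layer-wise pruning driven by feature representations.
# Assumption (ours, not the paper's): a block is redundant if its output
# features are nearly identical to its input features, so it can be removed
# wholesale, shallowing the network at the layer level.
import torch
import torch.nn as nn
import torch.nn.functional as F

def layer_redundancy_scores(blocks, x):
    """Score each shape-preserving block by input/output feature similarity.

    blocks: sequence of nn.Modules applied in order, each preserving shape.
    x:      a probe batch used to inspect the learned representations.
    Returns a list of (index, score); a higher score means more redundant.
    """
    scores = []
    feat = x
    with torch.no_grad():
        for i, block in enumerate(blocks):
            out = block(feat)
            # Flatten feature maps per sample, compare, and average over batch.
            sim = F.cosine_similarity(
                feat.flatten(1), out.flatten(1), dim=1).mean().item()
            scores.append((i, sim))
            feat = out
    return scores

def shallow_network(blocks, x, threshold=0.98):
    """Drop the blocks whose features change least (a layer-level prune)."""
    keep = [b for (i, s), b in zip(layer_redundancy_scores(blocks, x), blocks)
            if s < threshold]
    return nn.Sequential(*keep)

# Toy usage: three shape-preserving conv blocks probed with random data.
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
    for _ in range(3)
])
pruned = shallow_network(blocks, torch.randn(8, 16, 32, 32))
print(f"kept {len(pruned)} of {len(blocks)} blocks")
```

Because removing a block shifts every downstream representation, a practical pipeline would typically re-probe the scores after each removal and fine-tune the shallowed model to recover accuracy.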

Updated: 2024-08-22