Non-Parametric Adaptive Network Pruning
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-01-20, DOI: arxiv-2101.07985
Lin Mingbao, Ji Rongrong, Li Shaojie, Wang Yan, Wu Yongjian, Huang Feiyue, Ye Qixiang

Popular network pruning algorithms reduce redundant information by optimizing hand-crafted parametric models, which can lead to suboptimal performance and long filter-selection times. We innovatively introduce non-parametric modeling to simplify the algorithm design, resulting in an automatic and efficient pruning approach called EPruner. Inspired by the face recognition community, we apply the message-passing algorithm Affinity Propagation to the weight matrices to obtain an adaptive number of exemplars, which then serve as the preserved filters. EPruner breaks the dependency on training data for determining the "important" filters and runs on a CPU in seconds, an order of magnitude faster than GPU-based SOTAs. Moreover, we show that the weights of the exemplars provide a better initialization for fine-tuning. On VGGNet-16, EPruner achieves a 76.34% FLOPs reduction by removing 88.80% of the parameters, with a 0.06% accuracy improvement on CIFAR-10. On ResNet-152, EPruner achieves a 65.12% FLOPs reduction by removing 64.18% of the parameters, with only a 0.71% top-5 accuracy loss on ILSVRC-2012. Code is available at https://github.com/lmbxmu/EPruner.

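The core idea, selecting an adaptive number of exemplar filters by running Affinity Propagation on a layer's weight matrix, can be illustrated with a minimal sketch. This is not the authors' implementation (see the repository above for that); it assumes a PyTorch `Conv2d` layer, uses scikit-learn's `AffinityPropagation`, and the layer name, damping value, and copy-over of exemplar weights are illustrative assumptions.

```python
# Minimal sketch of exemplar-filter selection via Affinity Propagation.
# Assumptions: a single hypothetical Conv2d layer, scikit-learn's
# AffinityPropagation with an arbitrary damping value; not the EPruner code.
import torch
import torch.nn as nn
from sklearn.cluster import AffinityPropagation

conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)  # hypothetical layer

# Flatten each output filter into a row vector: shape (out_channels, in*k*k).
W = conv.weight.detach().cpu().numpy().reshape(conv.out_channels, -1)

# Affinity Propagation finds an adaptive number of exemplars on its own,
# so no per-layer pruning rate has to be hand-crafted.
ap = AffinityPropagation(damping=0.75, random_state=0).fit(W)
exemplar_idx = sorted(ap.cluster_centers_indices_)
print(f"kept {len(exemplar_idx)} of {conv.out_channels} filters")

# The exemplar weights can then initialize the pruned layer before fine-tuning.
pruned = nn.Conv2d(conv.in_channels, len(exemplar_idx), kernel_size=3)
with torch.no_grad():
    pruned.weight.copy_(conv.weight[exemplar_idx])
    if conv.bias is not None:
        pruned.bias.copy_(conv.bias[exemplar_idx])
```

Note that this runs entirely on CPU weights, consistent with the abstract's claim that filter selection does not need training data or a GPU.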
Updated: 2021-01-21