Manipulating Identical Filter Redundancy for Efficient Pruning on Deep and Complicated CNN
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-07-30, DOI: arxiv-2107.14444
Xiaohan Ding, Tianxiang Hao, Jungong Han, Yuchen Guo, Guiguang Ding

The redundancy in Convolutional Neural Networks (CNNs) enables us to remove some filters/channels with acceptable performance drops. However, the training objective of a CNN usually minimizes an accuracy-related loss with no attention paid to the redundancy, so the redundancy ends up distributed randomly across all the filters; removing any of them may therefore cause information loss and an accuracy drop, necessitating a subsequent finetuning step for recovery. In this paper, we propose to manipulate the redundancy during training to facilitate network pruning. To this end, we propose a novel Centripetal SGD (C-SGD) that makes some filters identical, producing an ideal redundancy pattern: such filters become purely redundant because of their duplicates, so removing them does not harm the network. As shown on CIFAR and ImageNet, C-SGD outperforms existing methods because the redundancy is better organized. C-SGD is also efficient: it is as fast as regular SGD, requires no finetuning, and can be applied simultaneously to all the layers of even very deep CNNs. Moreover, C-SGD can improve the accuracy of a CNN by first training a model with the same architecture but wider layers and then squeezing it into the original width.
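The abstract gives no code, so the following is a minimal PyTorch-style sketch of a centripetal update, assuming the filter clusters are chosen in advance: each cluster receives an averaged task gradient plus a term pulling every member toward the cluster centroid, so clustered filters converge to identical values. The function name centripetal_sgd_step, the hyperparameters lr and centripetal, and the toy demo are illustrative assumptions, not the authors' implementation.

```python
import torch

def centripetal_sgd_step(filters, grads, clusters, lr=0.01, centripetal=0.01):
    """One centripetal-style update (an illustrative sketch, not the paper's code).

    filters:  (num_filters, in_ch, k, k) kernel tensor of one conv layer
    grads:    task-loss gradient of the same shape
    clusters: lists of filter indices that should be driven identical
    """
    with torch.no_grad():
        for cluster in clusters:
            idx = torch.tensor(cluster)
            # All filters in a cluster receive the same averaged task gradient,
            # so the objective no longer pushes them apart ...
            mean_grad = grads[idx].mean(dim=0, keepdim=True)
            # ... while a centripetal term pulls each member toward the cluster
            # centroid, so their pairwise differences shrink geometrically.
            centroid = filters[idx].mean(dim=0, keepdim=True)
            filters[idx] -= lr * mean_grad + centripetal * (filters[idx] - centroid)

# Toy check: two clustered filters collapse onto each other, after which
# either one can be removed (and its output channel merged) with zero loss.
w = torch.randn(4, 3, 3, 3)   # hypothetical 4-filter 3x3 conv kernel
g = torch.randn_like(w)       # stand-in for a fixed task-loss gradient
for _ in range(2000):
    centripetal_sgd_step(w, g, clusters=[[0, 1]])
print((w[0] - w[1]).abs().max().item())  # ~0: the pair is now identical
```

Because the surviving filter in each cluster produces the same feature map as its pruned duplicates, the next layer only needs its corresponding input channels summed, which is why no finetuning is required; the same mechanism underlies the wider-then-squeeze training described above.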

Updated: 2021-08-02