Structured Network Pruning via Adversarial Multi-indicator Architecture Selection
Circuits, Systems, and Signal Processing (IF 1.8) Pub Date: 2021-02-17, DOI: 10.1007/s00034-021-01668-y
Yuxuan Wei, Ying Chen

Network pruning offers an opportunity to facilitate deploying convolutional neural networks (CNNs) on resource-limited embedded devices. Pruning as much redundant network structure as possible while preserving network accuracy is challenging. Most existing CNN compression methods iteratively prune the "least important" filters and retrain the pruned network layer by layer, which may lead to a sub-optimal solution. In this paper, an end-to-end structured network pruning method based on adversarial multi-indicator architecture selection (AMAS) is presented. Pruning is implemented by striving to align the output of the pruned network with that of the baseline network in a generative adversarial framework. Furthermore, to efficiently find an optimal pruned architecture under constrained resources, an adversarial fine-tuning network selection strategy is designed, in which two contradictory indicators, namely pruned channel number and network classification accuracy, are considered. Experiments on SVHN show that AMAS reduces FLOPs by 75.37% and parameters by 74.42% with a 0.36% accuracy improvement for ResNet-110. On CIFAR-10, it reduces FLOPs by 77.08% and removes 73.98% of parameters with negligible accuracy cost for GoogLeNet. In particular, it obtains a 56.87% FLOPs reduction and a 59.18% parameter reduction together with a 0.49% accuracy increase for ResNet-110, significantly outperforming state-of-the-art methods.
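The selection strategy above balances two contradictory indicators: pruned channel number (more pruning means a smaller network) and classification accuracy. A natural way to reason about such a trade-off is Pareto non-dominance, where a candidate architecture is kept only if no other candidate prunes at least as many channels while being at least as accurate. The following is a minimal illustrative sketch of that idea, not the authors' implementation; all candidate data and function names are hypothetical.

```python
# Hypothetical sketch: filtering candidate pruned architectures on two
# contradictory indicators -- channels pruned (maximize) and accuracy
# (maximize) -- by keeping only Pareto-non-dominated candidates.

def pareto_front(candidates):
    """Return (pruned_channels, accuracy) pairs not dominated by any other.

    Candidate j dominates candidate i if j prunes at least as many channels
    AND is at least as accurate, and is strictly better in one indicator.
    """
    front = []
    for i, (p_i, a_i) in enumerate(candidates):
        dominated = any(
            (p_j >= p_i and a_j >= a_i) and (p_j > p_i or a_j > a_i)
            for j, (p_j, a_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append((p_i, a_i))
    return front

# Hypothetical candidates: (channels pruned, validation accuracy in %)
cands = [(120, 93.1), (150, 92.8), (150, 93.0), (200, 91.5), (100, 93.0)]
print(pareto_front(cands))  # keeps (120, 93.1), (150, 93.0), (200, 91.5)
```

A search procedure like the one described in the abstract would then fine-tune and compare only architectures on this front, rather than every pruned variant, since any dominated candidate is worse on both resource and accuracy grounds.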




Updated: 2021-02-18