Structured feature sparsity training for convolutional neural network compression
Journal of Visual Communication and Image Representation (IF 2.6), Pub Date: 2020-08-06, DOI: 10.1016/j.jvcir.2020.102867
Wei Wang, Liqiang Zhu

Convolutional neural networks (CNNs) with large model sizes and heavy computing operations are difficult to deploy on embedded systems such as smartphones or AI cameras. In this paper, we propose a novel structured pruning method, termed structured feature sparsity training (SFST), to speed up inference and reduce the memory usage of CNNs. Unlike other existing pruning methods, which require multiple iterations of pruning and retraining to ensure stable performance, SFST only needs to fine-tune the pretrained model with additional regularization on the less important features and then prune them once; no repeated pruning and retraining is needed. SFST can be applied to a variety of modern CNN architectures, including VGGNet, ResNet and MobileNetv2. Experimental results on the CIFAR, SVHN, ImageNet and MSTAR benchmark datasets demonstrate the effectiveness of our scheme, which achieves superior performance over state-of-the-art methods.
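Since the abstract only sketches the workflow (fine-tune the pretrained model with a sparsity-inducing regularizer on the less important features, then prune once), the following minimal PyTorch sketch illustrates that single-pass pipeline. The L1 penalty on BatchNorm scaling factors used as a feature-importance proxy, the penalty coefficient, and the helper names (sparsity_penalty, finetune_with_sparsity, select_channels_to_prune) are assumptions made for illustration only, not the paper's exact SFST formulation.

    import torch
    import torch.nn as nn

    def sparsity_penalty(model, coeff=1e-4):
        # Assumed regularizer: L1 on BatchNorm scaling factors, a common proxy
        # for feature-map importance in structured pruning.
        penalty = 0.0
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                penalty = penalty + m.weight.abs().sum()
        return coeff * penalty

    def finetune_with_sparsity(model, loader, epochs=10, lr=1e-3):
        # Single fine-tuning pass of the pretrained model: task loss plus the
        # sparsity penalty pushes unimportant feature channels toward zero.
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        ce = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = ce(model(x), y) + sparsity_penalty(model)
                loss.backward()
                opt.step()
        return model

    def select_channels_to_prune(model, keep_ratio=0.7):
        # One-shot selection after fine-tuning: rank channels per layer by the
        # magnitude of their BN scaling factor and keep the top fraction.
        plan = {}
        for name, m in model.named_modules():
            if isinstance(m, nn.BatchNorm2d):
                scores = m.weight.detach().abs()
                k = max(1, int(keep_ratio * scores.numel()))
                plan[name] = torch.topk(scores, k).indices.tolist()
        return plan

Actually removing the channels marked by the returned plan would require rebuilding the affected convolution and BatchNorm layers with only the kept indices (or using a pruning toolkit); that step is omitted from this sketch.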



Updated: 2020-08-06