Abstract
As convolutional neural networks (CNNs) become deeper and wider, network optimization techniques such as pruning have received ever-increasing research attention. This paper proposes a new pruning strategy based on Feature Extraction Ability Measurement (FEAM), a novel index of feature extraction ability that combines theoretical analysis with practical operation. First, FEAM is computed as the product of kernel sparsity and feature dispersion: kernel sparsity describes the feature extraction ability in theory, while feature dispersion represents it in practical operation. Second, the FEAMs of all filters in the network are normalized so that pruning can be applied to filters across layers. Finally, filters with weak FEAM are pruned to obtain a compact CNN model, and fine-tuning is adopted to restore generalization ability. Experiments on CIFAR-10 and CUB-200-2011 demonstrate the effectiveness of our method.
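The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the abstract does not give the exact formulas, so `kernel_sparsity` and `feature_dispersion` below are hypothetical proxies (mean absolute weight, and standard deviation of a filter's output feature map), and the function names and `prune_ratio` parameter are our own.

```python
import numpy as np

def kernel_sparsity(filt):
    # Hypothetical proxy for the paper's kernel-sparsity term:
    # mean absolute weight of the filter's kernel.
    return np.mean(np.abs(filt))

def feature_dispersion(fmap):
    # Hypothetical proxy for the paper's feature-dispersion term:
    # standard deviation of the filter's output feature map
    # computed over a batch of inputs.
    return np.std(fmap)

def feam_scores(filters, fmaps):
    # FEAM = kernel sparsity * feature dispersion, one score per filter.
    scores = np.array([kernel_sparsity(f) * feature_dispersion(m)
                       for f, m in zip(filters, fmaps)])
    # Normalize so scores from different layers are comparable
    # (here: divide by the maximum score).
    return scores / (scores.max() + 1e-12)

def prune_mask(scores, prune_ratio=0.3):
    # Keep filters whose normalized FEAM clears the pruning threshold;
    # the weakest `prune_ratio` fraction is marked for removal.
    k = int(len(scores) * prune_ratio)
    threshold = np.sort(scores)[k] if k > 0 else -np.inf
    return scores >= threshold
```

After masking out weak filters, the compact model would be fine-tuned to restore accuracy, as the abstract notes.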
Wu, H., Tang, Y. & Zhang, X. A pruning method based on the measurement of feature extraction ability. Machine Vision and Applications 32, 20 (2021). https://doi.org/10.1007/s00138-020-01148-4