
Computation of CNN’s Sensitivity to Input Perturbation


Abstract

Although Convolutional Neural Networks (CNNs) are considered to be "approximately invariant" to nuisance perturbations such as image transformation, shift, scaling, and other small deformations, some existing studies show that intense noise can cause noticeable variation in CNNs' outputs. This paper explores a method of measuring a CNN's sensitivity by observing the output variation that corresponds to a given input perturbation. The sensitivity is statistically defined in a bottom-up way, from neuron to layer and finally to the entire CNN, and an iterative algorithm is proposed for approximating the defined sensitivity. On the basic architecture of CNNs, the theoretically computed sensitivity is verified on the MNIST database with four commonly used noise distributions: Gaussian, Uniform, Salt and Pepper, and Rayleigh. Experimental results show that the theoretical sensitivity is, on the one hand, in agreement with the actual output variation at the map, layer, and network levels and, on the other hand, an applicable quantitative measure for selecting robust networks.
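The paper's iterative computation is not reproduced on this page, but the quantity it approximates, the expected output deviation of a trained network under random input perturbation, can be estimated empirically. The following is a minimal Monte-Carlo sketch in Python, assuming sensitivity is taken as E[||f(x + Δx) − f(x)||] averaged over inputs and noise draws; the function names and the noise parameterisations are illustrative assumptions, not taken from the paper.

import numpy as np

# Generators for the four input-noise distributions used in the paper's
# experiments (Gaussian, Uniform, Salt and Pepper, Rayleigh). The specific
# parameter values below are illustrative assumptions.
def gaussian_noise(x, sigma=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def uniform_noise(x, half_width=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    return x + rng.uniform(-half_width, half_width, size=x.shape)

def salt_and_pepper_noise(x, p=0.05, rng=None):
    # Flip a fraction p of pixels to the extremes of a [0, 1] intensity range.
    rng = rng if rng is not None else np.random.default_rng()
    y = x.copy()
    u = rng.random(x.shape)
    y[u < p / 2] = 0.0        # pepper
    y[u > 1.0 - p / 2] = 1.0  # salt
    return y

def rayleigh_noise(x, scale=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    return x + rng.rayleigh(scale, size=x.shape)

def empirical_sensitivity(model, inputs, perturb, n_samples=100):
    """Monte-Carlo estimate of E[||f(x + dx) - f(x)||] over a batch of inputs.

    `model` maps an input batch to an output batch (e.g. the softmax scores
    of a CNN trained on MNIST); `perturb` is one of the generators above.
    """
    clean = model(inputs)
    deviations = []
    for _ in range(n_samples):
        noisy = model(perturb(inputs))
        deviations.append(np.linalg.norm(noisy - clean, axis=-1).mean())
    return float(np.mean(deviations))

With a trained MNIST classifier wrapped as `model`, `empirical_sensitivity(model, x_test, gaussian_noise)` yields the kind of network-level output variation against which the theoretically computed sensitivity is compared; applying the perturbation to a single feature map rather than the input would give the corresponding map-level figure.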




Acknowledgements

This work was supported by the Fundamental Research Funds for the Central Universities under Grant Nos. 2016B44414 and 2018B678X14, by the Postgraduate Research and Practice Innovation Program of Jiangsu Province of China under Grant No. KYCX18_0553, and by the Science and Technology Project of Huai’an City under Grant Nos. HAG201602 and HAS201607.

Author information


Corresponding author

Correspondence to Xiaoqin Zeng.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Xiang, L., Zeng, X., Wu, S. et al. Computation of CNN’s Sensitivity to Input Perturbation. Neural Process Lett 53, 535–560 (2021). https://doi.org/10.1007/s11063-020-10420-7

