Verifying ReLU Neural Networks from a Model Checking Perspective

  • Regular Paper
  • Journal of Computer Science and Technology

Abstract

Neural networks, as an important computing model, are widely applied in the artificial intelligence (AI) domain. From the perspective of computer science, such a computing model requires a formal description of its behaviors, in particular of the relation between input and output; moreover, such specifications ought to be verified automatically. ReLU (rectified linear unit) neural networks are intensively used in practice. In this paper, we present ReLU Temporal Logic (ReTL), whose semantics is defined with respect to ReLU neural networks and which can specify value-related properties of a network. We show that model checking the Σ2 ∪ Π2 fragment of ReTL, which can express properties such as output reachability, is decidable in EXPSPACE. We have also implemented the algorithm in a prototype tool, and experimental results demonstrate the feasibility of the presented model checking approach.
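
To make the kind of property concrete, the sketch below checks an output-reachability question on a toy ReLU network by enumerating the hidden layer's activation patterns and solving one linear program per pattern. This is a minimal illustration only, with made-up weights, input box, and threshold; it is not the paper's ReTL logic or model checking algorithm, and this brute-force approach would not scale beyond very small networks.

```python
# Minimal sketch (not the paper's ReTL algorithm): exact output-reachability
# check for a toy ReLU network by enumerating activation patterns and solving
# one linear program per pattern.  All numbers below are made up.
import itertools

import numpy as np
from scipy.optimize import linprog

# Toy network: y = W2 @ relu(W1 @ x + b1) + b2, with inputs x in [-1, 1]^2.
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])
b1 = np.array([0.2, -0.3])
W2 = np.array([[1.0, -0.5]])
b2 = np.array([0.1])
BOX = [(-1.0, 1.0), (-1.0, 1.0)]   # input domain, one (low, high) pair per input
THRESHOLD = 0.5                    # property: can the output reach >= 0.5?


def max_output_for_pattern(active):
    """Maximize the output over inputs consistent with one fixed ReLU pattern.

    active[i] is True iff hidden unit i is assumed to be in its linear regime.
    Returns the optimum, or None if no input in BOX realizes this pattern.
    """
    d = np.diag([1.0 if a else 0.0 for a in active])
    # With the pattern fixed, the network is affine in x: y = c @ x + c0.
    c = (W2 @ d @ W1).flatten()
    c0 = (W2 @ d @ b1 + b2).item()
    # Constraints forcing each pre-activation to match the assumed pattern.
    a_ub, b_ub = [], []
    for i, a in enumerate(active):
        if a:   # active unit:   W1[i] @ x + b1[i] >= 0
            a_ub.append(-W1[i]); b_ub.append(b1[i])
        else:   # inactive unit: W1[i] @ x + b1[i] <= 0
            a_ub.append(W1[i]);  b_ub.append(-b1[i])
    # linprog minimizes, so negate c to maximize the affine output.
    res = linprog(c=-c, A_ub=np.array(a_ub), b_ub=np.array(b_ub),
                  bounds=BOX, method="highs")
    return (-res.fun + c0) if res.success else None


best = None
for pattern in itertools.product([False, True], repeat=len(b1)):
    val = max_output_for_pattern(pattern)
    if val is not None and (best is None or val > best):
        best = val

print("maximum output over the input box:", best)
print("output >=", THRESHOLD, "reachable:", best is not None and best >= THRESHOLD)
```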

Author information

Corresponding author

Correspondence to Wan-Wei Liu.

Supplementary Information

ESM 1

(PDF 345 kb)

About this article

Cite this article

Liu, WW., Song, F., Zhang, THR. et al. Verifying ReLU Neural Networks from a Model Checking Perspective. J. Comput. Sci. Technol. 35, 1365–1381 (2020). https://doi.org/10.1007/s11390-020-0546-7
