
Qualitative and Quantitative Model Checking Against Recurrent Neural Networks

  • Regular Paper
  • Theory and Algorithms
Journal of Computer Science and Technology

Abstract

Recurrent neural networks (RNNs) have been widely used in applications that rely on sequence data, such as time series and natural language. However, their behaviors lack rigorous quality assurance due to the black-box nature of deep learning, and formally reasoning about them is an urgent and challenging task. To this end, we first present an extension of linear-time temporal logic for reasoning about properties of RNNs, such as local robustness, reachability, and some temporal properties. Based on the proposed logic, we formalize the verification obligation as a Hoare-like triple, from both qualitative and quantitative perspectives. The former asks whether all the outputs resulting from inputs fulfilling the pre-condition satisfy the post-condition, whereas the latter computes the probability that the post-condition is satisfied given that the inputs fulfill the pre-condition. To tackle these problems, we develop a systematic verification framework based mainly on polyhedron propagation, dimension-preserving abstraction, and Monte Carlo sampling. We also implement our algorithm in a prototype tool and conduct experiments to demonstrate its feasibility and efficiency.
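The quantitative perspective described above can be sketched with a minimal Monte Carlo estimator: sample inputs satisfying the pre-condition, run them through the network, and count how often the post-condition holds, with the sample size chosen via the Chernoff-Hoeffding (Okamoto) bound. This is an illustrative sketch only, not the paper's algorithm: the RNN weights, the box-shaped pre-condition, and the output post-condition below are all hypothetical placeholders.

```python
import math
import numpy as np

# Hypothetical single-layer tanh RNN standing in for the network under
# analysis; the weights are random placeholders, not a trained model.
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.5, size=(4, 3))   # input -> hidden
W_h = rng.normal(scale=0.5, size=(4, 4))    # hidden -> hidden
W_out = rng.normal(scale=0.5, size=(2, 4))  # hidden -> output

def rnn_output(seq):
    """Run the input sequence through the RNN and return the final output."""
    h = np.zeros(4)
    for x in seq:
        h = np.tanh(W_in @ x + W_h @ h)
    return W_out @ h

def sample_precondition(length=5):
    # Assumed pre-condition: every input vector lies in the box [-1, 1]^3.
    return rng.uniform(-1.0, 1.0, size=(length, 3))

def postcondition(y):
    # Assumed post-condition: the first output component dominates the second.
    return y[0] >= y[1]

# Chernoff-Hoeffding (Okamoto) bound: with n >= ln(2/delta) / (2 * eps^2)
# samples, the empirical frequency deviates from the true probability by
# at most eps with confidence at least 1 - delta.
eps, delta = 0.05, 0.01
n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
hits = sum(postcondition(rnn_output(sample_precondition())) for _ in range(n))
print(f"n = {n}, estimated probability = {hits / n:.3f}")
```

With eps = 0.05 and delta = 0.01 the bound requires 1060 samples; tightening either parameter grows the sample size quadratically in 1/eps but only logarithmically in 1/delta.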



Author information

Corresponding author

Correspondence to Wan-Wei Liu  (刘万伟).

Ethics declarations

Conflict of Interest: The authors declare that they have no conflict of interest.

Additional information

This work was supported by the National Natural Science Foundation of China under Grant Nos. 61872371, 62032024, and U19A2062, and the Open Fund from the State Key Laboratory of High Performance Computing of China (HPCL) under Grant No. 202001-07.

Zhen Liang received his B.S. degree in computer science and technology from National University of Defense Technology, Changsha, in 2019. He is currently a Ph.D. candidate at National University of Defense Technology, Changsha. His research interests include model checking, interpretation and formal verification of artificial intelligence.

Wan-Wei Liu received his Ph.D. degree in computer science from National University of Defense Technology, Changsha, in 2009. He is a professor at National University of Defense Technology, Changsha. His research interests include theoretical computer science (particularly automata theory and temporal logic), formal methods (particularly verification), and software engineering.

Fu Song received his Ph.D. degree in computer science from University Paris-Diderot, Paris, in 2013. He is an associate professor with ShanghaiTech University, Shanghai. His research interests include formal methods and computer/AI security.

Bai Xue received his Ph.D. degree in applied mathematics from Beihang University, Beijing, in 2014. He is currently a research professor with the Institute of Software, Chinese Academy of Sciences, Beijing. His research interests involve formal verification of hybrid systems and AI.

Wen-Jing Yang received her Ph.D. degree in multi-scale modeling from Manchester University, Manchester, in 2014. She is currently an associate research fellow at the State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha. Her research interests include machine learning, robotics software, and high-performance computing.

Ji Wang received his Ph.D. degree in computer science from National University of Defense Technology, Changsha, in 1995. He is currently a full professor at National University of Defense Technology, Changsha, and a fellow of CCF. His research interests include software engineering and formal methods.

Zheng-Bin Pang received his B.S., M.S., and Ph.D. degrees in computer science from National University of Defense Technology, Changsha. Currently, he is a professor at National University of Defense Technology, Changsha. His research interests range across high-speed interconnect, heterogeneous computing, and high performance computer systems.


About this article


Cite this article

Liang, Z., Liu, WW., Song, F. et al. Qualitative and Quantitative Model Checking Against Recurrent Neural Networks. J. Comput. Sci. Technol. 39, 1292–1311 (2024). https://doi.org/10.1007/s11390-023-2703-2
