
Constrained Training of Recurrent Neural Networks for Automata Learning

  • Conference paper
Software Engineering and Formal Methods (SEFM 2022)

Abstract

In this paper, we present a novel approach to learning finite automata with the help of recurrent neural networks. Our goal is not only to train a neural network that predicts the observable behavior of an automaton but also to learn its structure, including the set of states and transitions. In contrast to previous work, we constrain the training with a specific regularization term. We evaluate our approach with standard examples from the automata learning literature, but also include a case study of learning the finite-state models of real Bluetooth Low Energy protocol implementations. The results show that we can find an appropriate architecture to learn the correct automata in all considered cases.
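The specific regularization term is not reproduced on this page. As a rough illustration of the general idea, the following sketch (all names are hypothetical, not the authors' implementation) adds a clustering-style penalty that pulls an RNN's continuous hidden states toward a small set of centers, so that after training each hidden state can be read off as one of finitely many discrete automaton states.

```python
# Illustrative sketch only (hypothetical names): a clustering-style
# regularization term that pulls continuous RNN hidden states toward a
# small set of centers, so each state can later be mapped to a discrete
# automaton state. The paper's actual term may differ in form.

def squared_distance(h, c):
    """Squared Euclidean distance between two vectors."""
    return sum((hi - ci) ** 2 for hi, ci in zip(h, c))

def quantization_penalty(hidden_states, centers):
    """Mean distance of each hidden state to its nearest center.
    Added to the prediction loss, this encourages hidden states to
    cluster, i.e. to behave like finitely many automaton states."""
    return sum(min(squared_distance(h, c) for c in centers)
               for h in hidden_states) / len(hidden_states)

def extract_discrete_states(hidden_states, centers):
    """After training: map each hidden state to the index of its
    nearest center, yielding a finite state set."""
    return [min(range(len(centers)),
                key=lambda i: squared_distance(h, centers[i]))
            for h in hidden_states]
```

With well-clustered hidden states the penalty is near zero and state extraction is unambiguous; with diffuse states it grows, which is exactly the pressure such a constraint exerts during training.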


Notes

  1. The actual input, resp. output, symbol is obtained from the input, resp. output, alphabet through an appropriate index mapping. For simplicity, we do not show this mapping here.
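The index mapping mentioned above can be pictured as follows (a minimal sketch; the example alphabets are made up, not taken from the paper): the network operates on symbol indices, e.g. one-hot positions, and the concrete symbols are recovered by indexing into the alphabets.

```python
# Minimal sketch of the indexed mapping (example alphabets are made up):
# the network consumes and produces symbol indices; the concrete symbols
# are obtained by indexing into the input and output alphabets.

input_alphabet = ["scan_req", "connect", "version_req"]   # hypothetical
output_alphabet = ["adv", "data", "version_rsp"]          # hypothetical

def index_to_symbol(alphabet, index):
    """Map a network-level index back to a concrete symbol."""
    return alphabet[index]

def symbol_to_index(alphabet, symbol):
    """Map a concrete symbol to the index the network operates on."""
    return alphabet.index(symbol)
```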


Acknowledgement

This work was done collaboratively in the TU Graz LEAD project "Dependable Internet of Things in Adverse Environments", the LearnTwins project funded by the FFG (Österreichische Forschungsförderungsgesellschaft) under grant 880852, and the "University SAL Labs" initiative of Silicon Austria Labs (SAL) and its Austrian partner universities for applied fundamental research on electronics-based systems.

Author information


Corresponding author

Correspondence to Andrea Pferscher.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Aichernig, B.K., König, S., Mateis, C., Pferscher, A., Schmidt, D., Tappler, M. (2022). Constrained Training of Recurrent Neural Networks for Automata Learning. In: Schlingloff, BH., Chai, M. (eds) Software Engineering and Formal Methods. SEFM 2022. Lecture Notes in Computer Science, vol 13550. Springer, Cham. https://doi.org/10.1007/978-3-031-17108-6_10


  • DOI: https://doi.org/10.1007/978-3-031-17108-6_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-17107-9

  • Online ISBN: 978-3-031-17108-6

  • eBook Packages: Computer Science (R0)
