Abstract
We investigate a finer-grained understanding of the characteristics of particular deterministic finite automata (DFAs). Specifically, we study and identify the transitions of a DFA that are most important for maintaining the correctness of the regular language associated with that DFA. To estimate transition importance, we develop an approach similar to the one widely used to expose the vulnerability of neural networks through adversarial examples. In particular, we propose an adversarial model that reveals the sensitive transitions embedded in a DFA. In addition, we identify for a DFA its critical patterns, where a pattern is a substring that can be taken as the signature of that DFA. Our defined patterns can be implemented as synchronizing words, which represent the passages from different states to the absorbing state of a DFA. Finally, we validate our study through empirical evaluations, showing that our proposed algorithms effectively identify important transitions and critical patterns. To our knowledge, this is among the first work to explore adversarial models for DFAs, and it is important due to the wide use of DFAs in cyber-physical systems.
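To make the transition-importance idea concrete, here is a minimal sketch of the flip-and-compare intuition: redirect one transition of a small DFA and count how many strings change their accept/reject label. The DFA (for \((ab)^*\), Tomita-2), the exhaustive enumeration up to a fixed length, and the worst-case-redirect scoring rule are all our illustrative choices, not the paper's actual algorithm.

```python
import itertools

# A DFA given as (transitions, start, accepting states).
# This toy DFA accepts (ab)* over {a, b}; state 2 is a rejecting sink.
TRANS = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 2, (1, 'b'): 0,
         (2, 'a'): 2, (2, 'b'): 2}
START, ACCEPT = 0, {0}

def accepts(trans, s):
    q = START
    for c in s:
        q = trans[(q, c)]
    return q in ACCEPT

def transition_importance(trans, max_len=8):
    """Score each transition by the fraction of strings (up to max_len)
    whose label flips under the worst single redirection of that transition."""
    strings = [''.join(w) for n in range(max_len + 1)
               for w in itertools.product('ab', repeat=n)]
    base = {s: accepts(trans, s) for s in strings}
    scores = {}
    for (q, c), tgt in trans.items():
        worst = 0
        for new_tgt in {0, 1, 2} - {tgt}:   # redirect to any other state
            perturbed = dict(trans)
            perturbed[(q, c)] = new_tgt
            flips = sum(base[s] != accepts(perturbed, s) for s in strings)
            worst = max(worst, flips)
        scores[(q, c)] = worst / len(strings)
    return scores

scores = transition_importance(TRANS)
```

Under this scoring, transitions on the "live" path of the language tend to score higher than the sink's self-loops, which is the kind of ranking an adversary probing a DFA would exploit.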
Notes
- 1.
Tomita [11] defined the following grammars with a binary alphabet: (1) \(a^*\), (2) \((ab)^*\), (3) an odd number of consecutive \('a'\)s is always followed by an even number of consecutive \('b'\)s, (4) any binary string not containing “bbb” as a substring, (5) an even number of \('b'\)s and an even number of \('a'\)s, (6) the difference between the numbers of \('b'\)s and \('a'\)s is a multiple of 3, (7) \(b^{*}a^{*}b^{*}a^{*}\). These grammars have been widely used in grammatical inference.
- 2.
Recent research [7] on explaining DNNs has demonstrated the difficulty of analyzing and inspecting these powerful models.
- 3.
The constant number 1 is omitted for simplicity.
- 4.
For a DFA, \(P\) (\(N\)) denotes the set of strings accepted (rejected) by this DFA.
- 5.
The example DFA is associated with the Tomita-4 grammar.
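The Tomita-4 example in note 5 can be made concrete. Below is a hedged sketch (our own construction, not the paper's code) of the four-state DFA rejecting strings that contain “bbb”: states count trailing \('b'\)s, state 3 is the absorbing reject state, and the forbidden substring “bbb” behaves like a synchronizing word that drives every state into that absorbing state.

```python
# DFA for the Tomita-4 grammar: accept binary strings with no "bbb" substring.
# State q in {0, 1, 2} means "q consecutive b's just seen"; state 3 is absorbing.
TRANS = {(q, 'a'): 0 for q in range(3)}          # an 'a' resets the b-counter
TRANS.update({(0, 'b'): 1, (1, 'b'): 2, (2, 'b'): 3,
              (3, 'a'): 3, (3, 'b'): 3})          # state 3 is a dead sink
START, ACCEPT = 0, {0, 1, 2}

def run(q, s):
    """Run the DFA on string s starting from state q; return the final state."""
    for c in s:
        q = TRANS[(q, c)]
    return q

def is_absorbing_pattern(w):
    """True if w sends *every* state into the absorbing state 3,
    i.e. w acts as a synchronizing word targeting the reject sink."""
    return all(run(q, w) == 3 for q in range(4))
```

For example, `is_absorbing_pattern('bbb')` holds while `is_absorbing_pattern('bb')` does not, matching the intuition that “bbb” is the signature pattern of this DFA.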
References
Angluin, D.: Learning regular sets from queries and counterexamples. Inf. Comput. 75(2), 87–106 (1987)
Bottou, L., Bousquet, O.: The tradeoffs of large scale learning. In: NIPS, pp. 161–168. Curran Associates Inc. (2007)
Černý, J.: Poznámka k homogénnym experimentom s konečnými automatmi (a note on homogeneous experiments with finite automata). Matematicko-fyzikálny časopis 14(3), 208–216 (1964)
Chomsky, N.: Three models for the description of language. IRE Trans. Inf. Theory 2(3), 113–124 (1956)
Cybenko, G.: Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 2(4), 303–314 (1989). https://doi.org/10.1007/BF02551274
Don, H., Zantema, H.: Finding DFAs with maximal shortest synchronizing word length. In: Drewes, F., Martín-Vide, C., Truthe, B. (eds.) LATA 2017. LNCS, vol. 10168, pp. 249–260. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-53733-7_18
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 1–42 (2019)
Hopcroft, J.E., Motwani, R., Ullman, J.D.: Introduction to Automata Theory, Languages, and Computation. ACM SIGACT News 32(1), 60–65 (2001)
Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: Proceedings of the 34th International Conference on Machine Learning. ICML, pp. 1885–1894 (2017)
Serban, A.C., Poll, E.: Adversarial examples - a complete characterisation of the phenomenon. CoRR, abs/1810.01185 (2018)
Tomita, M.: Dynamic construction of finite-state automata from examples using hill-climbing. In: Proceedings of the Fourth Annual Conference of the Cognitive Science Society, pp. 105–108 (1982)
Wang, Q.: A comparative study of rule extraction for recurrent neural networks. arXiv preprint arXiv:1801.05420 (2018)
Weiss, G., Goldberg, Y., Yahav, E.: Extracting automata from recurrent neural networks using queries and counterexamples. In: Proceedings of Machine Learning Research. ICML, vol. 80, pp. 5244–5253. PMLR (2018)
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Zhang, K., Wang, Q., Giles, C.L. (2020). Adversarial Models for Deterministic Finite Automata. In: Goutte, C., Zhu, X. (eds.) Advances in Artificial Intelligence. Canadian AI 2020. Lecture Notes in Computer Science, vol. 12109. Springer, Cham. https://doi.org/10.1007/978-3-030-47358-7_55
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-47357-0
Online ISBN: 978-3-030-47358-7