Inductive inference of tree automata by recursive neural networks

  • Machine Learning 1
  • Conference paper
AI*IA 97: Advances in Artificial Intelligence (AI*IA 1997)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1321)


Abstract

Recurrent neural networks are powerful learning machines capable of processing sequences. A recent extension of these machines can also process general data structures such as trees and graphs, opening the door to a number of interesting and previously unexplored applications.
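To make the kind of processing involved concrete, the following is a minimal sketch, in Python, of a recursive network that computes a state vector for every node of a labelled binary tree from the node's symbol and the states of its children, and classifies the tree from the root state. It is not the authors' implementation; the alphabet, the state dimension, and the names (Node, RecursiveNet, STATE_DIM) are illustrative assumptions.

```python
# Minimal sketch of a recursive (tree-structured) neural network.
# All names and sizes here are illustrative assumptions, not the paper's model.
import numpy as np

STATE_DIM = 4                 # size of the state vector attached to every node
SYMBOLS = {"a": 0, "b": 1}    # hypothetical node alphabet

class Node:
    """A labelled binary tree node; missing children are None."""
    def __init__(self, symbol, left=None, right=None):
        self.symbol, self.left, self.right = symbol, left, right

class RecursiveNet:
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = len(SYMBOLS) + 2 * STATE_DIM          # symbol one-hot + two child states
        self.W = rng.normal(0.0, 0.5, (STATE_DIM, in_dim))
        self.b = np.zeros(STATE_DIM)
        self.v = rng.normal(0.0, 0.5, STATE_DIM)       # output weights applied at the root

    def state(self, node):
        """Frontier-to-root state: a missing child contributes a fixed zero state."""
        if node is None:
            return np.zeros(STATE_DIM)
        x = np.zeros(len(SYMBOLS))
        x[SYMBOLS[node.symbol]] = 1.0
        inp = np.concatenate([x, self.state(node.left), self.state(node.right)])
        return np.tanh(self.W @ inp + self.b)

    def classify(self, root):
        """Binary decision at the root: accept (1.0) or reject (0.0) the whole tree."""
        return float(self.v @ self.state(root) > 0.0)

# Usage: score the small tree  a(b, a(b, b))
tree = Node("a", Node("b"), Node("a", Node("b"), Node("b")))
print(RecursiveNet().classify(tree))
```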

In this paper, we show that when learning is restricted to purely symbolic data structures, such as trees, the continuous representation developed by learning can also be given a symbolic interpretation. In particular, we show that a suitable quantization of the neuron activation trajectory makes it possible to induce tree automata. We present preliminary experiments on small-scale problems whose results are nonetheless very promising, especially considering that the methodology is highly robust to accidental or malicious corruption of the learning set.
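Continuing the sketch above (and reusing its Node, RecursiveNet, STATE_DIM, np, and example tree), the following illustrates the kind of extraction the abstract alludes to: the continuous node states are quantized into a small discrete set, and each observed (symbol, quantized child states) → quantized node state mapping becomes a transition of a frontier-to-root tree automaton, with the quantized root states of accepted trees taken as accepting states. The rounding-based quantizer and the function names are assumptions made for illustration, not the paper's actual quantization procedure.

```python
# Sketch of automaton extraction by quantizing the network's activation trajectory.
# Builds on the RecursiveNet sketch above; quantizer and names are assumptions.
def quantize(state, resolution=1.0):
    """Map a continuous state vector to a discrete symbol by coordinate-wise rounding."""
    return tuple(round(float(s) / resolution) for s in state)

def extract_tree_automaton(net, trees):
    """Collect transitions delta[(symbol, q_left, q_right)] = q and the set of
    quantized root states that the network labels as accepting."""
    delta, accepting = {}, set()

    def visit(node):
        if node is None:
            return quantize(np.zeros(STATE_DIM))          # quantized frontier state
        q_left, q_right = visit(node.left), visit(node.right)
        q = quantize(net.state(node))                      # recomputes subtree states; fine for small trees
        delta[(node.symbol, q_left, q_right)] = q
        return q

    for t in trees:
        q_root = visit(t)
        if net.classify(t) > 0.5:
            accepting.add(q_root)
    return delta, accepting

# Usage: induce an automaton from the network's behaviour on a handful of trees.
delta, accepting = extract_tree_automaton(RecursiveNet(), [tree])
print(len(delta), "transitions;", len(accepting), "accepting state(s)")
```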




Editor information

Maurizio Lenzerini


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Frasconi, P., Gori, M., Maggini, M., Martinelli, E., Soda, G. (1997). Inductive inference of tree automata by recursive neural networks. In: Lenzerini, M. (eds) AI*IA 97: Advances in Artificial Intelligence. AI*IA 1997. Lecture Notes in Computer Science, vol 1321. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63576-9_94


  • DOI: https://doi.org/10.1007/3-540-63576-9_94

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63576-5

  • Online ISBN: 978-3-540-69601-8

  • eBook Packages: Springer Book Archive
