Inductive learning in symbolic domains using structure-driven recurrent neural networks

  • Conference paper

KI-96: Advances in Artificial Intelligence (KI 1996)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1137)

Abstract

While neural networks are widely applied as powerful tools for the inductive learning of mappings over domains of fixed-length feature vectors, principled doubts are still expressed as to whether this domain can be enlarged to structured objects of arbitrary shape (such as trees or graphs). We present a connectionist architecture, together with a novel supervised learning scheme, that is capable of solving inductive inference tasks on complex symbolic structures of arbitrary size. Labeled directed acyclic graphs are the most general structures that can be handled. Processing in this architecture is driven by the inherent recursive nature of the given structures. Our approach can be viewed as a generalization of the well-known discrete-time, continuous-space recurrent neural networks and their corresponding training procedures. We give first results from experiments with inductive learning tasks consisting of the classification of logical terms. These tasks range from the detection of a certain subterm to the satisfaction of complex matching constraints, and also capture certain concepts of syntactical variables.

This research was supported by the German Research Foundation (DFG) under grant No. Pa 268/10-1 and by the EC (ESPRIT BRP MIX-9119).
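To make the approach concrete, here is a minimal sketch of the general idea; it is not the authors' implementation. A labeled tree is encoded bottom-up by a single shared transition function, and the shared weights are trained by propagating the classification error back along the structure (backpropagation through structure, cf. reference 7 below). The sketch is restricted to fixed-arity trees for brevity, whereas the architecture in the paper handles labeled directed acyclic graphs; all identifiers, dimensions, and the toy task are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (hypothetical names and dimensions): each tree node is
# encoded bottom-up by one shared transition function
#     h(v) = tanh(W @ [label(v); h(child_1); ...; h(child_k)] + b)
# and the root vector feeds a sigmoid classifier. Gradients flow back along
# the tree, i.e. backpropagation through structure (BPTS).

rng = np.random.default_rng(0)
LABEL_DIM, HIDDEN_DIM, MAX_ARITY = 4, 8, 2

W = rng.normal(0.0, 0.1, (HIDDEN_DIM, LABEL_DIM + MAX_ARITY * HIDDEN_DIM))
b = np.zeros(HIDDEN_DIM)
u = rng.normal(0.0, 0.1, HIDDEN_DIM)      # classifier weights on the root

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def forward(node):
    """Encode a (sub)tree bottom-up; missing children are zero-padded."""
    kids = [forward(c) for c in node.children]
    kids += [np.zeros(HIDDEN_DIM)] * (MAX_ARITY - len(kids))
    node.x = np.concatenate([node.label, *kids])
    node.h = np.tanh(W @ node.x + b)
    return node.h

def backward(node, dh, grads):
    """BPTS: push the error at a node's hidden state down to its children
    through the shared transition weights."""
    da = dh * (1.0 - node.h ** 2)          # derivative of tanh
    grads["W"] += np.outer(da, node.x)
    grads["b"] += da
    dx = W.T @ da
    for i, child in enumerate(node.children):
        off = LABEL_DIM + i * HIDDEN_DIM
        backward(child, dx[off:off + HIDDEN_DIM], grads)

def train_step(tree, target, lr=0.1):
    """One gradient step of cross-entropy loss on a single tree."""
    global W, b, u
    h = forward(tree)
    p = 1.0 / (1.0 + np.exp(-u @ h))       # sigmoid on the root encoding
    grads = {"W": np.zeros_like(W), "b": np.zeros_like(b)}
    backward(tree, (p - target) * u, grads)  # dL/dh at the root
    u = u - lr * (p - target) * h
    W = W - lr * grads["W"]
    b = b - lr * grads["b"]

def predict(tree):
    return 1.0 / (1.0 + np.exp(-u @ forward(tree)))

# Toy term-classification task: does the term contain the constant a?
const_a = np.array([1.0, 0.0, 0.0, 0.0])
const_b = np.array([0.0, 1.0, 0.0, 0.0])
sym_f   = np.array([0.0, 0.0, 1.0, 0.0])
pos = Node(sym_f, [Node(const_a), Node(const_b)])   # f(a, b) -> class 1
neg = Node(sym_f, [Node(const_b), Node(const_b)])   # f(b, b) -> class 0

for _ in range(1000):
    train_step(pos, 1.0)
    train_step(neg, 0.0)
print(predict(pos), predict(neg))   # should approach 1 and 0
```

The essential design choice is that every node reuses the same transition weights, so the unfolded network mirrors the shape of each input structure; a conventional discrete-time recurrent network is recovered as the special case in which every node has exactly one child, i.e. the structure is a sequence.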

References

  1. G. Berg, “Connectionist Parser with Recursive Sentence Structure and Lexical Disambiguation,” in Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI-92), pp. 32–37, AAAI, Menlo Park, California, USA, 1992.

  2. D. Blank, L. Meeden, and J. Marshall, “Exploring the symbolic/subsymbolic continuum: A case study of RAAM,” in The Symbolic and Connectionist Paradigms: Closing the Gap, (J. Dinsmore, ed.), LEA Publishers, 1992.

  3. V. Cadoret, “Encoding Syntactical Trees with Labelling Recursive Auto-Associative Memory,” in Proceedings of the 11th European Conference on Artificial Intelligence (ECAI 94), (A. Cohn, ed.), pp. 555–559, John Wiley & Sons, 1994.

  4. L. Chrisman, “Learning Recursive Distributed Representations for Holistic Computation,” Connection Science, vol. 3, pp. 345–366, 1991.

  5. C. L. Giles and C. W. Omlin, “Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Networks,” Connection Science, vol. 5, no. 3 & 4, pp. 307–337, 1993.

  6. L. Goldfarb, J. Abela, V. C. Bhavsar, and V. N. Kamat, “Can a Vector Space Based Learning Model Discover Inductive Class Generalization in a Symbolic Environment?,” Pattern Recognition Letters, vol. 16, pp. 719–726, 1995.

  7. C. Goller and A. Küchler, “Learning Task-Dependent Distributed Representations by Backpropagation Through Structure,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN'96), 1996. To appear.

  8. C. Goller, A. Sperduti, and A. Starita, “Learning Distributed Representations for the Classification of Terms,” in Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), (C. S. Mellish, ed.), pp. 509–515, Morgan Kaufmann Publishers, August 1995.

  9. B. Horne and C. Giles, “An Experimental Comparison of Recurrent Neural Networks,” in Advances in Neural Information Processing Systems (NIPS 7), (G. Tesauro, D. Touretzky, and T. Leen, eds.), pp. 697-, MIT Press, 1995.

  10. K. Hornik, M. Stinchcombe, and H. White, “Multilayer Feedforward Networks are Universal Approximators,” Neural Networks, vol. 2, pp. 359–366, 1989.

  11. J. B. Pollack, “Recursive Distributed Representations,” Artificial Intelligence, vol. 46, pp. 77–105, 1990.

  12. H. T. Siegelmann and E. D. Sontag, “On the Computational Power of Neural Nets,” Journal of Computer and System Sciences, vol. 50, pp. 132–150, 1995.

  13. A. Sperduti, “Encoding of Labeled Graphs by Labeling RAAM,” in Advances in Neural Information Processing Systems (NIPS 6), (J. D. Cowan, G. Tesauro, and J. Alspector, eds.), pp. 1125–1132, 1994.

  14. A. Sperduti and A. Starita, “Supervised Neural Networks for the Classification of Structures,” Technical Report TR-16/95, University of Pisa, Dipartimento di Informatica, 1995.

  15. R. J. Williams and D. Zipser, “Gradient-Based Learning Algorithms for Recurrent Networks and Their Computational Complexity,” in Backpropagation: Theory, Architectures, and Applications, (Y. Chauvin and D. E. Rumelhart, eds.), ch. 13, pp. 433–486, Hillsdale, NJ: Lawrence Erlbaum Associates, 1994.

Editor information

Günther Görz, Steffen Hölldobler

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Küchler, A., Goller, C. (1996). Inductive learning in symbolic domains using structure-driven recurrent neural networks. In: Görz, G., Hölldobler, S. (eds) KI-96: Advances in Artificial Intelligence. KI 1996. Lecture Notes in Computer Science, vol 1137. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61708-6_60

  • DOI: https://doi.org/10.1007/3-540-61708-6_60

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61708-2

  • Online ISBN: 978-3-540-70669-4

  • eBook Packages: Springer Book Archive
