Towards Encoding Background Knowledge with Temporal Extent into Neural Networks

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6291)


Abstract

Neuro-symbolic integration merges background knowledge with neural networks to produce a more effective learning system, using the Core Method as the means of encoding rules. This method, however, has several drawbacks when dealing with rules that have temporal extent. First, it demands an interface with the world that buffers the input patterns so they can be presented all at once. This imposes a rigid limit on the duration of patterns and further requires all input vectors to have the same length, both of which are troublesome in domains where one would like comparable representations for patterns of variable length (e.g. language). Second, it does not conveniently support the dynamic insertion of rules. Finally, and most seriously, it cannot encode rules whose preconditions are satisfied at non-deterministic time points, an important class of rules. This paper presents novel methods for encoding such rules, thereby improving and extending the power of state-of-the-art neuro-symbolic integration.
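For readers unfamiliar with the Core Method the abstract refers to, the sketch below illustrates the classic construction of Hölldobler and Kalinke that this line of work builds on: each rule of a propositional logic program becomes one hidden threshold unit, so a single forward pass computes the immediate consequence operator T_P. This is a minimal sketch, not the authors' implementation; the toy program and atom names are invented for illustration, and the paper's temporal extensions are not reproduced here.

```python
# Minimal sketch of the classic Core Method encoding: one hidden threshold
# unit per rule. The rules and atoms below are illustrative assumptions,
# not taken from the paper.

# A rule is (head, positive_body_atoms, negative_body_atoms).
program = [
    ("light_on", ["switch_up", "power_ok"], []),      # light_on <- switch_up, power_ok
    ("alarm",    ["motion"],                ["day"]), # alarm <- motion, not day
]
W = 1.0  # any positive weight works for crisp threshold units

def tp_step(interpretation):
    """One application of the immediate consequence operator T_P."""
    derived = set()
    for head, pos, neg in program:
        # Hidden unit for the rule: weight +W from each positive body atom,
        # -W from each negative one, threshold W * (len(pos) - 0.5).
        # It fires iff every positive atom is true and every negative one is false.
        activation = W * sum(a in interpretation for a in pos) \
                   - W * sum(a in interpretation for a in neg)
        if activation >= W * (len(pos) - 0.5):
            # The output unit for `head` has threshold W/2, so one firing rule suffices.
            derived.add(head)
    return derived

print(sorted(tp_step({"switch_up", "power_ok", "motion"})))  # ['alarm', 'light_on']
print(sorted(tp_step({"motion", "day"})))                    # [] ('not day' blocks the alarm rule)
```

Note that every precondition in this encoding must be present in the input at the same instant. A rule whose body is satisfied at some earlier, non-deterministic time point has no place in such a static network, which is precisely the limitation the abstract identifies and the paper sets out to remove.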






Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

The Anh, H., Marques, N.C. (2010). Towards Encoding Background Knowledge with Temporal Extent into Neural Networks. In: Bi, Y., Williams, M.-A. (eds) Knowledge Science, Engineering and Management. KSEM 2010. Lecture Notes in Computer Science, vol. 6291. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15280-1_9

  • DOI: https://doi.org/10.1007/978-3-642-15280-1_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15279-5

  • Online ISBN: 978-3-642-15280-1

