A Neurobiologically Motivated Model for Self-organized Learning

  • Conference paper
MICAI 2005: Advances in Artificial Intelligence (MICAI 2005)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 3789))


Abstract

We present a neurobiologically motivated model for an agent that generates a representation of its spatial environment through active exploration. Our main objective is the introduction of an action-selection mechanism based on the principle of self-reinforcement learning. We introduce this mechanism under the constraint that the agent receives only information an animal could also receive; hence, we must avoid all supervised learning methods, which require a teacher. To solve this problem, we define a self-reinforcement signal as a qualitative comparison between the predicted and the perceived stimulus of the agent. The self-reinforcement signal is used to construct internally a self-punishment function, and the agent chooses its actions to minimize this function during learning. As a result, it turns out that an active action-selection mechanism can improve performance significantly as the problem to be learned becomes more difficult.
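The loop described in the abstract (predict the next stimulus, compare it with the perceived one, and turn the mismatch into an internally generated punishment that guides action selection) can be sketched in code. This is a minimal illustrative sketch, not the paper's actual model: the class name, the toy ring-world dynamics, and all parameter values are assumptions made for the example.

```python
import random

random.seed(0)

class SelfReinforcementAgent:
    """Hypothetical sketch of self-reinforcement learning: the agent
    learns a predictive model of its environment and selects actions
    by minimizing an internally constructed self-punishment function."""

    def __init__(self, n_states, n_actions):
        self.n_actions = n_actions
        # Predicted next stimulus for each (state, action); unseen pairs
        # have no prediction yet.
        self.prediction = {}
        # Running estimate of self-punishment per (state, action),
        # initialized pessimistically so unexplored pairs get tried.
        self.punishment = {(s, a): 1.0 for s in range(n_states)
                           for a in range(n_actions)}

    def select_action(self, state, eps=0.1):
        # Mostly exploit: pick the action with minimal estimated
        # self-punishment; occasionally explore at random.
        if random.random() < eps:
            return random.randrange(self.n_actions)
        return min(range(self.n_actions),
                   key=lambda a: self.punishment[(state, a)])

    def update(self, state, action, perceived):
        # Self-reinforcement signal: qualitative comparison between the
        # predicted and the perceived stimulus (no external teacher).
        predicted = self.prediction.get((state, action))
        mismatch = 0.0 if predicted == perceived else 1.0
        key = (state, action)
        # Self-punishment decays as predictions become reliable.
        self.punishment[key] += 0.5 * (mismatch - self.punishment[key])
        self.prediction[key] = perceived


# Toy deterministic ring world (an assumption for the example):
# action 0 moves one step right, action 1 one step left, modulo 5.
def step(state, action, n=5):
    return (state + (1 if action == 0 else -1)) % n

agent = SelfReinforcementAgent(n_states=5, n_actions=2)
state = 0
for _ in range(200):
    action = agent.select_action(state)
    nxt = step(state, action)
    agent.update(state, action, nxt)
    state = nxt
```

After a few hundred exploration steps the agent's predictions match the deterministic dynamics for the state-action pairs it has visited, so its internal self-punishment for those pairs is low; no external reward or teacher signal was ever supplied.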




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Emmert-Streib, F. (2005). A Neurobiologically Motivated Model for Self-organized Learning. In: Gelbukh, A., de Albornoz, Á., Terashima-Marín, H. (eds) MICAI 2005: Advances in Artificial Intelligence. MICAI 2005. Lecture Notes in Computer Science(), vol 3789. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11579427_42

  • DOI: https://doi.org/10.1007/11579427_42

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-29896-0

  • Online ISBN: 978-3-540-31653-4

  • eBook Packages: Computer Science (R0)
