
Emergence of Discrete and Abstract State Representation through Reinforcement Learning in a Continuous Input Task

Chapter in Robot Intelligence Technology and Applications 2012

Abstract

A “concept” is a kind of discrete and abstract state representation that is considered useful for efficient action planning. However, since it is thought to emerge in the brain, a parallel processing and learning system, through learning from a variety of experiences, it is difficult to build by hand-coding. In this paper, as a preliminary step toward “concept formation”, we investigate whether a discrete and abstract state representation is formed through learning in a task with multi-step state transitions, using the Actor-Q learning method and a recurrent neural network. After learning, an agent twice repeated a sequence in which it pushed a button to open a door and moved to the next room, finally arriving at the third room to receive a reward. In two hidden neurons, a discrete and abstract state representation that did not depend on the door-opening pattern was observed. A further experiment with two recurrent neural networks, one for the Q-values and one for the Actors, suggested that the state representation emerged in order to generate appropriate Q-values.
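The Actor-Q architecture referred to in the abstract combines discrete-action Q-learning with continuous actor outputs in one learning system; the chapter's actual network sizes, task encoding, and hyperparameters are not given on this page. As a rough illustration of the Q-learning half only (the actor outputs are omitted), the following minimal sketch trains an Elman-style recurrent network with one-step truncated TD backpropagation on a toy fixed observation sequence; every constant and the toy "task" are illustrative assumptions, not the authors' settings.

```python
import numpy as np

# Minimal sketch of the Q-learning half of an Actor-Q-style recurrent
# network. Sizes, learning rate, and the toy episode are illustrative
# assumptions; the continuous actor outputs of Actor-Q are omitted.

rng = np.random.default_rng(0)

N_IN, N_HID, N_ACT = 4, 8, 2   # continuous inputs, hidden units, discrete actions
GAMMA, LR = 0.9, 0.05

W_in = rng.normal(0.0, 0.5, (N_HID, N_IN))
W_rec = rng.normal(0.0, 0.5, (N_HID, N_HID))
W_out = rng.normal(0.0, 0.5, (N_ACT, N_HID))

def step(x, h_prev):
    """One forward step: new hidden state (tanh units) and per-action Q-values."""
    h = np.tanh(W_in @ x + W_rec @ h_prev)
    return h, W_out @ h

def td_update(x, h_prev, a, target):
    """Move Q(s, a) toward the TD target; backprop truncated to one step."""
    global W_in, W_rec, W_out
    h, q = step(x, h_prev)
    err = target - q[a]                      # TD error for the chosen action
    gh = (W_out[a] * err) * (1.0 - h ** 2)   # gradient through the tanh units
    W_out[a] += LR * err * h
    W_in += LR * np.outer(gh, x)
    W_rec += LR * np.outer(gh, h_prev)
    return h

# Toy episode: a fixed 3-step observation sequence ending in reward 1.0.
obs = [rng.normal(size=N_IN) for _ in range(3)]
for _ in range(300):
    h = np.zeros(N_HID)
    for t, x in enumerate(obs):
        h_next, q = step(x, h)
        a = int(np.argmax(q))
        if t == len(obs) - 1:
            target = 1.0                     # terminal reward, no bootstrap
        else:
            _, q_next = step(obs[t + 1], h_next)
            target = GAMMA * np.max(q_next)  # greedy bootstrap (Q-learning)
        h = td_update(x, h, a, target)
```

After training, the Q-value at the final step approaches the terminal reward, and earlier steps approach its discounted values; the hidden state carried across steps is what, in the chapter's richer task, is examined for discrete and abstract structure.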



Author information

Corresponding author

Correspondence to Yoshito Sawatsubashi.


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg


Cite this chapter

Sawatsubashi, Y., Samsudin, M.F.b., Shibata, K. (2013). Emergence of Discrete and Abstract State Representation through Reinforcement Learning in a Continuous Input Task. In: Kim, JH., Matson, E., Myung, H., Xu, P. (eds) Robot Intelligence Technology and Applications 2012. Advances in Intelligent Systems and Computing, vol 208. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37374-9_2


  • DOI: https://doi.org/10.1007/978-3-642-37374-9_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-37373-2

  • Online ISBN: 978-3-642-37374-9

  • eBook Packages: Engineering, Engineering (R0)
