
Motivated Reinforcement Learning Using Self-Developed Knowledge in Autonomous Cognitive Agent

  • Conference paper

Artificial Intelligence and Soft Computing (ICAISC 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10841)


Abstract

This paper describes the development of a cognitive agent that uses motivated reinforcement learning. The research is based on the example of a virtual robot that, placed in an unknown maze, learns to reach a given goal optimally. The robot has to expand its knowledge about its surroundings and learn how to move through them to reach the given target. Built-in motivation factors allow it to focus initially on collecting experiences rather than on reaching the goal. In this way, the robot gradually broadens its knowledge as its exploration of the surroundings advances. The correctly formed knowledge is then used to control the reinforcement learning routine effectively so that the robot reaches the target. Thus, the motivation factors allow the robot to adapt and control its motivated reinforcement learning routine automatically and autonomously.
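
The page does not include the authors' code, so the following is only a loose sketch of the general idea described above: a tabular Q-learning agent in a small maze whose action choice is biased by a count-based curiosity bonus that stands in for the paper's motivation factors and fades as the surroundings become well explored. The maze layout, the bonus formula, and all hyperparameters are illustrative assumptions, not taken from the paper.

    import random
    from collections import defaultdict

    # 0 = free cell, 1 = wall; start and goal positions are illustrative assumptions.
    MAZE = [
        [0, 0, 1, 0],
        [1, 0, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 0, 0],
    ]
    START, GOAL = (0, 0), (3, 3)
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def step(state, action):
        """Move if the target cell is free, otherwise stay; small step cost, goal reward."""
        r, c = state[0] + action[0], state[1] + action[1]
        if 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] == 0:
            state = (r, c)
        return state, (1.0 if state == GOAL else -0.01), state == GOAL

    Q = defaultdict(float)      # tabular action values Q[(state, action_index)]
    visits = defaultdict(int)   # visit counts drive the curiosity ("motivation") bonus
    ALPHA, GAMMA, EPSILON = 0.5, 0.95, 0.1

    for episode in range(300):
        state = START
        for _ in range(200):                       # step cap keeps early episodes bounded
            visits[state] += 1

            def score(a):
                # Learned value plus a curiosity bonus that is large for rarely
                # visited cells and vanishes as the maze becomes well explored.
                nxt, _, _ = step(state, ACTIONS[a])
                return Q[(state, a)] + 1.0 / (1 + visits[nxt])

            a = random.randrange(4) if random.random() < EPSILON else max(range(4), key=score)
            nxt, reward, done = step(state, ACTIONS[a])
            best_next = max(Q[(nxt, b)] for b in range(4))
            Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
            state = nxt
            if done:
                break

    print("Greedy value at the start cell:", max(Q[(START, a)] for a in range(4)))

Early in training the curiosity term dominates the near-zero Q-values, so the agent spreads out over the maze and gathers experience; once most cells have been visited, the learned values take over and the greedy policy heads for the goal, mirroring the exploration-first behaviour the abstract describes.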



Acknowledgments

This work was supported by the AGH grant 11.11.120.612 and by the National Science Centre grant DEC-2016/21/B/ST7/02220.

Author information


Correspondence to Adrian Horzyk.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Papiez, P., Horzyk, A. (2018). Motivated Reinforcement Learning Using Self-Developed Knowledge in Autonomous Cognitive Agent. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2018. Lecture Notes in Computer Science, vol 10841. Springer, Cham. https://doi.org/10.1007/978-3-319-91253-0_17


  • DOI: https://doi.org/10.1007/978-3-319-91253-0_17

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-91252-3

  • Online ISBN: 978-3-319-91253-0

  • eBook Packages: Computer Science, Computer Science (R0)
