
Knowledge Reuse of Learning Agent Based on Factor Information of Behavioral Rules

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1142)

Abstract

In this study, we attempt to extract knowledge by collecting learning results from multiple environments with an autonomous learning agent. Common factors of the environments are extracted by applying non-negative matrix factorization to the set of learning results of a reinforcement learning agent. In conventional transfer learning for agent knowledge management, the knowledge database grows as the number of experienced tasks increases, and the cost of knowledge selection rises accordingly. With the proposed approach, an agent that can adapt to multiple environments can be developed without increasing the cost of knowledge selection.
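The idea of factoring shared structure out of several learning results can be sketched as follows. This is an illustrative assumption, not the paper's implementation: each environment's learned Q-table is flattened into a non-negative column of a matrix V, and the multiplicative updates of Lee and Seung factor V into shared basis vectors W and per-environment weights H. All names and shapes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: Q-tables learned independently in 5 environments,
# each flattened into a non-negative column of V (features x environments).
n_features, n_envs, n_factors = 60, 5, 2
V = rng.random((n_features, n_envs))

# Lee-Seung multiplicative updates for V ~ W @ H under Frobenius loss.
W = rng.random((n_features, n_factors))
H = rng.random((n_factors, n_envs))
eps = 1e-9  # guards against division by zero
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# Columns of W act as shared behavioral factors: an agent facing a new
# environment can start from these few factors instead of searching an
# ever-growing database of stored per-task policies.
reconstruction_error = np.linalg.norm(V - W @ H)
```

Because only the small factor matrix W is retained, the cost of knowledge selection stays fixed as the number of experienced environments grows, which is the property the abstract claims.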



Acknowledgements

This work was supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B) Number 15K16295 and Grant-in-Aid for Scientific Research (C) Number 19K04887.

Author information


Corresponding author

Correspondence to Fumiaki Saitoh.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Saitoh, F. (2019). Knowledge Reuse of Learning Agent Based on Factor Information of Behavioral Rules. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1142. Springer, Cham. https://doi.org/10.1007/978-3-030-36808-1_40


  • DOI: https://doi.org/10.1007/978-3-030-36808-1_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36807-4

  • Online ISBN: 978-3-030-36808-1

  • eBook Packages: Computer Science, Computer Science (R0)
