
A Theory of Profit Sharing in Dynamic Environment

  • Conference paper
PRICAI 2000 Topics in Artificial Intelligence (PRICAI 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1886)


Abstract

Reinforcement learning is one of the most popular learning methods in machine learning, and several reinforcement learning algorithms for adapting to dynamic environments have been proposed. In this paper, the number of episodes required to suppress ineffective rules after a change in the environment is examined analytically. A forgettable profit sharing method that suppresses ineffective rules quickly is then proposed, and its effectiveness is confirmed experimentally by comparing the proposed method with a conventional method.
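The idea in the abstract — distribute reward over the rules fired in an episode, while decaying all rule weights so that rules made ineffective by an environmental change fade quickly — can be sketched as follows. The paper's exact credit-assignment function and forgetting rate are not given in this excerpt; the geometric credit function and the multiplicative `forget` factor below are illustrative assumptions, not the authors' definitions.

```python
# Hedged sketch of episodic profit sharing with a forgetting step.
# `decay` (geometric credit assignment) and `forget` (rule suppression)
# are assumed parameters for illustration only.

def profit_sharing_update(weights, episode, reward, decay=0.5, forget=0.9):
    """Distribute `reward` over the (state, action) rules fired in `episode`.

    weights : dict mapping (state, action) -> rule weight
    episode : list of (state, action) pairs, oldest first
    """
    # Forgetting step: shrink every stored rule weight before
    # reinforcement, so rules that are no longer reinforced after an
    # environmental change are suppressed over successive episodes.
    for key in weights:
        weights[key] *= forget

    # Profit sharing step: credit decays geometrically from the end of
    # the episode back toward its start.
    credit = reward
    for state, action in reversed(episode):
        weights[(state, action)] = weights.get((state, action), 0.0) + credit
        credit *= decay
    return weights
```

With `decay=0.5`, an episode `[("s0", "a0"), ("s1", "a1")]` rewarded with `1.0` credits the final rule with `1.0` and the earlier rule with `0.5`; repeated calls without reinforcement shrink a rule's weight by the factor `forget` each episode.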





Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kato, S., Matsuo, H. (2000). A Theory of Profit Sharing in Dynamic Environment. In: Mizoguchi, R., Slaney, J. (eds) PRICAI 2000 Topics in Artificial Intelligence. PRICAI 2000. Lecture Notes in Computer Science, vol 1886. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44533-1_15


  • DOI: https://doi.org/10.1007/3-540-44533-1_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67925-7

  • Online ISBN: 978-3-540-44533-3

