
A Reinforcement Learning with Condition Reduced Fuzz Rules

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1585)

Abstract

This paper proposes a new Q-learning method for the case where the states (conditions) and actions of a system are assumed to be continuous. The entries of the Q-table are interpolated by fuzzy inference. The initial set of fuzzy rules is constructed from all combinations of conditions and actions relevant to the problem. Each rule is then associated with a value by which the Q-value of a condition/action pair is estimated, and these values are revised by the Q-learning algorithm so as to make the fuzzy rule system effective. Although this framework may require a huge number of initial fuzzy rules, we show that a considerable reduction can be achieved by using what we call “Condition Reduced Fuzzy Rules (CRFR)”. The antecedent part of a CRFR consists of all the actions and a selected subset of the conditions, and its consequent is set to its Q-value. Finally, experimental results show that controllers with CRFRs perform equivalently to the system with the most detailed fuzzy control rules, while the total number of parameters that must be revised over the whole learning process is reduced and the number of parameters revised at each step of learning is increased.
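To illustrate the general flavor of the approach, the following is a minimal Python sketch of Q-learning over a fuzzy rule base: each condition/action rule carries a tunable Q-value, the Q-value of a continuous condition/action pair is obtained by firing-strength-weighted interpolation, and the temporal-difference update is distributed over the rules in proportion to their firing strengths. All names, membership functions, and parameter choices are assumptions made for this sketch, not the authors' exact formulation; in particular, the condition-reduction step (CRFR) described in the abstract is not implemented here.

    import numpy as np

    # Illustrative sketch of fuzzy-interpolated Q-learning in the spirit of the
    # abstract. Membership functions, parameters, and the action search are
    # simplifying assumptions; condition reduction (CRFR) is omitted.

    def triangular(x, center, width):
        """Triangular membership function centered at `center`."""
        return max(0.0, 1.0 - abs(x - center) / width)

    class FuzzyQ:
        def __init__(self, state_centers, action_centers, width=1.0,
                     alpha=0.1, gamma=0.9):
            # One rule (and one tunable Q-value) per condition/action combination.
            self.state_centers = state_centers
            self.action_centers = action_centers
            self.width = width
            self.alpha, self.gamma = alpha, gamma
            self.q = np.zeros((len(state_centers), len(action_centers)))

        def _weights(self, s, a):
            # Normalized firing strength of every rule for a continuous
            # state/action pair.
            w = np.array([[triangular(s, sc, self.width) * triangular(a, ac, self.width)
                           for ac in self.action_centers]
                          for sc in self.state_centers])
            total = w.sum()
            return w / total if total > 0 else w

        def q_value(self, s, a):
            # Q(s, a) interpolated as the firing-strength-weighted sum of the
            # rules' values.
            return float((self._weights(s, a) * self.q).sum())

        def best_action(self, s, resolution=21):
            # Crude search over a discretised action range (assumption for brevity).
            candidates = np.linspace(min(self.action_centers),
                                     max(self.action_centers), resolution)
            return max(candidates, key=lambda a: self.q_value(s, a))

        def update(self, s, a, reward, s_next):
            # Standard Q-learning target; the TD error is spread over all rules
            # in proportion to their firing strengths.
            target = reward + self.gamma * self.q_value(s_next, self.best_action(s_next))
            td_error = target - self.q_value(s, a)
            self.q += self.alpha * td_error * self._weights(s, a)

As a usage example, one learning step on a one-dimensional task might look like:

    learner = FuzzyQ(state_centers=[-1.0, 0.0, 1.0], action_centers=[-1.0, 0.0, 1.0])
    learner.update(s=0.3, a=0.5, reward=1.0, s_next=0.4)
    print(learner.q_value(0.3, 0.5))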





Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kawakami, H., Katai, O., Konishi, T. (1999). A Reinforcement Learning with Condition Reduced Fuzz Rules. In: McKay, B., Yao, X., Newton, C.S., Kim, J.H., Furuhashi, T. (eds) Simulated Evolution and Learning. SEAL 1998. Lecture Notes in Computer Science, vol 1585. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48873-1_27


  • DOI: https://doi.org/10.1007/3-540-48873-1_27

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65907-5

  • Online ISBN: 978-3-540-48873-6

  • eBook Packages: Springer Book Archive
