State Space Partition for Reinforcement Learning Based on Fuzzy Min-Max Neural Network

  • Conference paper
Advances in Neural Networks – ISNN 2007 (ISNN 2007)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4492)

Included in the following conference series: International Symposium on Neural Networks (ISNN)

Abstract

In this paper, a tabular reinforcement learning (RL) method, named FMM-RL, is proposed based on an improved fuzzy min-max (FMM) neural network. The FMM neural network is used to segment the state space of the RL problem, with the aim of alleviating the “curse of dimensionality” in RL and markedly improving the speed of convergence. Regions of the state space are represented by the hyperboxes of the FMM network, whose minimal and maximal points define the partition boundaries. During training of the FMM neural network, the state space is partitioned through operations on the hyperboxes, so that favorable generalization over the state space is obtained. Finally, the method is applied to learning behaviors for a reactive robot. Experiments show that the algorithm can effectively solve the problem of navigation in a complicated unknown environment.
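
The sketch below illustrates, in Python, the general idea summarized in the abstract: continuous observations are mapped to fuzzy min-max hyperboxes, and the index of the best-matching hyperbox serves as the discrete state of a tabular Q-learning update. It assumes the classical FMM membership function of Simpson and a simple size-bounded expansion rule; all names, threshold values, and the expansion logic are illustrative assumptions, not the paper's exact improved-FMM procedure (which also involves further hyperbox operations).

import numpy as np

GAMMA_SENS = 4.0   # FMM membership sensitivity (illustrative value, not from the paper)
THETA = 0.25       # maximum hyperbox edge length (illustrative value, not from the paper)

def membership(x, v, w, gamma=GAMMA_SENS):
    # Simpson-style fuzzy membership of a point x (in [0, 1]^n) in the hyperbox [v, w]:
    # equals 1 inside the box and decays linearly with distance outside it.
    upper = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, x - w)))
    lower = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - x)))
    return float(np.mean(0.5 * (upper + lower)))

class FMMPartition:
    # Online state-space partition: each hyperbox corresponds to one tabular RL state.
    def __init__(self):
        self.mins, self.maxs = [], []              # per-hyperbox minimal/maximal points

    def state_of(self, x):
        x = np.asarray(x, dtype=float)
        if self.mins:
            scores = [membership(x, v, w) for v, w in zip(self.mins, self.maxs)]
            j = int(np.argmax(scores))
            # Try to expand the winning hyperbox to cover x without exceeding THETA.
            new_v = np.minimum(self.mins[j], x)
            new_w = np.maximum(self.maxs[j], x)
            if np.all(new_w - new_v <= THETA):
                self.mins[j], self.maxs[j] = new_v, new_w
                return j
        self.mins.append(x.copy())                 # otherwise open a new point-sized hyperbox
        self.maxs.append(x.copy())
        return len(self.mins) - 1

class TabularQ:
    # Plain tabular Q-learning over hyperbox indices (Watkins-style update).
    def __init__(self, n_actions, alpha=0.1, gamma=0.9):
        self.q, self.n_actions, self.alpha, self.gamma = {}, n_actions, alpha, gamma

    def row(self, s):
        return self.q.setdefault(s, np.zeros(self.n_actions))

    def update(self, s, a, r, s_next):
        target = r + self.gamma * np.max(self.row(s_next))
        self.row(s)[a] += self.alpha * (target - self.row(s)[a])

In an RL loop, the agent would compute s = FMMPartition.state_of(observation) before and after each action and feed the resulting indices into TabularQ.update; observations are assumed to be normalised to [0, 1] per dimension, as in the original FMM formulation.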

Editor information

Derong Liu, Shumin Fei, Zengguang Hou, Huaguang Zhang, Changyin Sun


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Duan, Y., Cui, B., Xu, X. (2007). State Space Partition for Reinforcement Learning Based on Fuzzy Min-Max Neural Network. In: Liu, D., Fei, S., Hou, Z., Zhang, H., Sun, C. (eds) Advances in Neural Networks – ISNN 2007. ISNN 2007. Lecture Notes in Computer Science, vol 4492. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72393-6_21


  • DOI: https://doi.org/10.1007/978-3-540-72393-6_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72392-9

  • Online ISBN: 978-3-540-72393-6

  • eBook Packages: Computer Science, Computer Science (R0)
