Abstract
In reinforcement learning, it is important to produce nearly correct answers early: good early predictions reduce subsequent prediction error and accelerate learning. We propose Fuzzy Q-Map, a function approximation algorithm based on on-line fuzzy clustering, to accelerate learning. Fuzzy Q-Map can handle the uncertainty that arises from the absence of an environment model. Applying a membership function to reinforcement learning reduces both the prediction error and the destructive interference caused by shifts in the distribution of training data. To evaluate Fuzzy Q-Map's performance, we experimented on the mountain car problem and compared it with CMAC. Whereas CMAC reaches an 80% prediction rate after 250 training samples, Fuzzy Q-Map learns faster and maintains an 80% prediction rate from 250 training samples onward. Fuzzy Q-Map may be applied to simulation domains characterized by uncertainty and complexity.
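The abstract's core idea is to represent Q-values with fuzzy clusters over the state space: a state's Q-value prediction is a membership-weighted sum over cluster prototypes, and updates are spread across clusters in proportion to membership, which localizes learning and limits destructive interference. The paper's exact clustering and update rules are not given in the abstract, so the following is a minimal illustrative sketch under assumed Gaussian memberships; the class name `FuzzyQMap` and all parameters here are hypothetical.

```python
import numpy as np

class FuzzyQMap:
    """Sketch of a fuzzy-clustering Q-value approximator (illustrative only)."""

    def __init__(self, centers, n_actions, width=1.0, lr=0.1):
        self.centers = np.asarray(centers, dtype=float)   # cluster prototypes
        self.q = np.zeros((len(self.centers), n_actions)) # per-cluster Q-values
        self.width = width                                # membership spread
        self.lr = lr                                      # learning rate

    def memberships(self, state):
        # Gaussian membership of the state in each cluster, normalized to sum to 1
        d2 = np.sum((self.centers - state) ** 2, axis=1)
        mu = np.exp(-d2 / (2 * self.width ** 2))
        return mu / (mu.sum() + 1e-12)

    def predict(self, state):
        # Q(s, a) = membership-weighted sum of cluster Q-values, for every action
        return self.memberships(state) @ self.q

    def update(self, state, action, target):
        # Distribute the TD error across clusters in proportion to membership;
        # distant clusters barely move, which limits destructive interference
        mu = self.memberships(state)
        td_error = target - self.predict(state)[action]
        self.q[:, action] += self.lr * mu * td_error


# Usage: two clusters on a 1-D state space, one update toward a target of 1.0
fqm = FuzzyQMap(centers=[[0.0], [1.0]], n_actions=2)
fqm.update(np.array([0.2]), action=0, target=1.0)
print(fqm.predict(np.array([0.2]))[0])  # moved above 0, toward the target
```

Because memberships are normalized, a state near one prototype updates mostly that prototype's Q-values, while states far from all prototypes still receive a graded, smooth estimate rather than the hard cell boundaries of a tile-coding scheme such as CMAC.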
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Lee, Y., Hong, S. (2007). Fuzzy Q-Map Algorithm for Reinforcement Learning. In: Wang, Y., Cheung, Ym., Liu, H. (eds) Computational Intelligence and Security. CIS 2006. Lecture Notes in Computer Science(), vol 4456. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74377-4_32
DOI: https://doi.org/10.1007/978-3-540-74377-4_32
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74376-7
Online ISBN: 978-3-540-74377-4
eBook Packages: Computer Science (R0)