Abstract
Policy gradient methods are among the most efficient for on-policy, model-free reinforcement learning. However, they suffer from high variance in their gradient updates, which makes training unstable. Subtracting a baseline from the returns, as actor-critic models do, is an effective strategy for reducing this variance. This work presents a variant of the actor-critic model that uses a fuzzy system, rather than a neural network, to estimate the state-value function. The fuzzy value approximation is inspired by earlier value-based methods such as fuzzy Q-learning. Experiments on the cart-pole benchmark show that fuzzy value approximation outperforms several reinforcement learning algorithms in terms of sample efficiency.
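To make the idea concrete, the sketch below illustrates one way a fuzzy state-value baseline of this kind could be realised. It is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a zero-order Takagi-Sugeno system with Gaussian membership functions centred on a fixed grid of rule prototypes, and the class name FuzzyValueBaseline, the grid layout, and the toy regression target in the demo are invented for exposition.

# Illustrative sketch (not the paper's code): a zero-order Takagi-Sugeno
# fuzzy state-value approximator, usable as a baseline for policy gradients.
import numpy as np

class FuzzyValueBaseline:
    """V(s) = sum_i w_i(s) * theta_i, where w_i are normalised rule
    firing strengths from Gaussian membership functions."""

    def __init__(self, centers, widths, lr=0.05):
        self.centers = np.asarray(centers, dtype=float)  # (n_rules, state_dim)
        self.widths = np.asarray(widths, dtype=float)    # (n_rules, state_dim)
        self.theta = np.zeros(len(self.centers))         # rule consequents
        self.lr = lr

    def _firing(self, s):
        # Product of Gaussian memberships per rule, normalised over rules.
        d = (s - self.centers) / self.widths
        w = np.exp(-0.5 * np.sum(d * d, axis=1))
        return w / (w.sum() + 1e-12)

    def value(self, s):
        return float(self._firing(s) @ self.theta)

    def update(self, s, target):
        # dV/dtheta is just the normalised firing vector, so regressing
        # V toward a return target reduces to a weighted delta rule.
        w = self._firing(s)
        self.theta += self.lr * (target - w @ self.theta) * w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 3 x 3 grid of rule prototypes over a toy 2-D state space.
    grid = np.array([[x, y] for x in (-1, 0, 1) for y in (-1, 0, 1)], float)
    vf = FuzzyValueBaseline(centers=grid, widths=np.ones_like(grid))
    for _ in range(2000):                  # fit V to a known toy function
        s = rng.uniform(-1, 1, size=2)
        vf.update(s, target=s[0] - s[1])   # stand-in for a Monte-Carlo return
    print(vf.value(np.array([0.5, -0.5]))) # roughly 1.0 after training

In a REINFORCE-style update, vf.value(s) would be subtracted from the Monte-Carlo return at each visited state to form the advantage that scales the log-probability gradient, while vf.update regresses the rule consequents toward those returns.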
Acknowledgements
The authors thank the anonymous referees for their invaluable comments and suggestions, which helped improve the paper. The first and second authors thank Loggi for the infrastructure and technical support. The last author is grateful to the Brazilian National Council for Scientific and Technological Development (CNPq) for grant 302467/2019-0.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Surita, G., Lemos, A., Gomide, F. (2022). Fuzzy Baselines to Stabilize Policy Gradient Reinforcement Learning. In: Rayz, J., Raskin, V., Dick, S., Kreinovich, V. (eds) Explainable AI and Other Applications of Fuzzy Techniques. NAFIPS 2021. Lecture Notes in Networks and Systems, vol 258. Springer, Cham. https://doi.org/10.1007/978-3-030-82099-2_39
Print ISBN: 978-3-030-82098-5
Online ISBN: 978-3-030-82099-2
eBook Packages: Intelligent Technologies and Robotics (R0)