Abstract
Reinforcement Learning (RL) methods are generally divided into two main classes: model-based and model-free. While model-based approaches build and exploit some kind of model of the environment for learning, model-free methods learn in the complete absence of such a model. Interpolation-based RL, and more specifically Interpolated Experience Replay (IER), has properties that fit very well into the domain of Organic Computing (OC). We demonstrate how an OC system can benefit from this concept and attempt to place IER in one of the two RL classes. To do so, we give a broad overview of how both terms, model-based and model-free, are defined and detail different model-based categorizations. It turns out that replay-based techniques sit right on the edge between the two. Furthermore, even if interpolation over stored samples could be classified as a kind of model, the interpolated experiences are still used in a replay-based fashion. Here the borders blur and the classes overlap. In conclusion, we define a third class: semi-model-based. Additionally, we show that some architectural approaches from the OC domain fit this new class very well and even encourage such methods.
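To make the central concept of the abstract concrete: IER augments a standard replay memory with synthetic transitions obtained by interpolating over stored real experiences, and these synthetic transitions are then consumed in the usual replay-based way. The following Python sketch illustrates this idea under simplifying assumptions. The interpolation rule here is reward averaging over identical (state, action) pairs, in the spirit of the averaging-based first approach to IER; the class and parameter names (e.g., InterpolatedReplayBuffer, synthetic_ratio) are hypothetical and not the authors' actual implementation.

```python
import random
from collections import defaultdict


class InterpolatedReplayBuffer:
    """Minimal sketch of an Interpolated Experience Replay (IER) buffer.

    Besides storing real transitions, it synthesizes new ones by
    interpolating over stored samples -- here, by averaging the rewards
    observed for the same (state, action) pair. The interpolation rule
    and all names are illustrative assumptions.
    """

    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.buffer = []                   # real transitions (FIFO)
        self.rewards = defaultdict(list)   # (state, action) -> observed rewards

    def store(self, state, action, reward, next_state, done):
        """Add a real transition, evicting the oldest one if full."""
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
        self.buffer.append((state, action, reward, next_state, done))
        self.rewards[(state, action)].append(reward)

    def synthesize(self, state, action, next_state, done):
        """Build a synthetic transition with an interpolated (averaged) reward."""
        observed = self.rewards[(state, action)]
        avg_reward = sum(observed) / len(observed)
        return (state, action, avg_reward, next_state, done)

    def sample(self, batch_size, synthetic_ratio=0.25):
        """Return a training batch mixing real and interpolated transitions."""
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        n_synthetic = int(len(batch) * synthetic_ratio)
        for i in range(n_synthetic):
            s, a, _, s2, d = batch[i]
            batch[i] = self.synthesize(s, a, s2, d)
        return batch
```

The averaging step is what blurs the class boundary discussed above: the reward statistics per (state, action) pair act like a rudimentary learned model of the environment's reward function, yet the learner only ever sees the result through ordinary replay sampling.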
Cite this paper
von Pilchau, W.P., Stein, A., Hähner, J. (2022). Semi-model-Based Reinforcement Learning in Organic Computing Systems. In: Schulz, M., Trinitis, C., Papadopoulou, N., Pionteck, T. (eds.) Architecture of Computing Systems. ARCS 2022. Lecture Notes in Computer Science, vol. 13642. Springer, Cham. https://doi.org/10.1007/978-3-031-21867-5_16