Abstract
In this paper, we propose a novel, phenomenological approach to explainable Reinforcement Learning (RL). While the ever-increasing performance of RL agents surpasses human capabilities on many problems, it falls short concerning explainability, which might be of minor importance when solving toy problems but is certainly a major obstacle to the application of RL in industrial and safety-critical processes. The literature contains different approaches to increasing the explainability of deep neural networks. However, to our knowledge, there is no simple, agent-agnostic method to extract human-readable rules from trained RL agents. Our approach is based on the idea of observing the agent and its environment during evaluation episodes and inducing a decision tree from the collected samples, yielding an explainable mapping of the environment’s state to the agent’s corresponding action. We tested our idea on classical control problems provided by OpenAI Gym, using handcrafted rules as a benchmark as well as trained deep RL agents, with two different algorithms for decision tree induction. The extracted rules demonstrate how this new approach might be a valuable step towards the goal of explainable RL.
This research was supported by the research training group “Dataninja” (Trustworthy AI for Seamless Problem Solving: Next Generation Intelligence Joins Robust Data Analysis) funded by the German federal state of North Rhine-Westphalia.
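To make the sample-based extraction loop concrete, the following is a minimal sketch, not the authors' implementation: it rolls out a handcrafted CartPole benchmark rule (any trained agent exposing a comparable interface would do), records state–action pairs during evaluation episodes, and induces a CART tree with scikit-learn. The policy, episode count, and depth limit are illustrative assumptions, and the classic (pre-0.26) Gym reset/step API is assumed.

```python
import gym
from sklearn.tree import DecisionTreeClassifier, export_text

def handcrafted_policy(obs):
    # Illustrative benchmark rule for CartPole: push the cart towards
    # the side the pole is falling to (hypothetical, for this sketch).
    _, _, theta, omega = obs
    return 1 if theta + omega > 0.0 else 0

env = gym.make("CartPole-v1")
states, actions = [], []
for _ in range(100):                # evaluation episodes (assumed count)
    obs = env.reset()               # classic Gym API (pre-0.26) assumed
    done = False
    while not done:
        act = handcrafted_policy(obs)   # a trained agent would be queried here
        states.append(obs)
        actions.append(act)
        obs, _, done, _ = env.step(act)

# Induce a human-readable state -> action mapping from the collected samples.
tree = DecisionTreeClassifier(max_depth=2).fit(states, actions)
print(export_text(tree, feature_names=["x", "v", "theta", "omega"]))
```

The printed tree directly yields rules of the form seen in note 3 below, e.g. thresholds on \(\theta \) and \(\omega \); limiting the depth trades fidelity to the agent for shorter, more readable rules.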
Notes
- 1. For details on the implementation and for reproducibility, the code can be found at https://github.com/RaphaelEngelhardt/sbreferl.
- 2. We report the average mean return ± the average standard deviation of returns over the 5 repetitions with different seeds (see the sketch after these notes).
- 3. The condition \(\omega > 0.08 \wedge \theta < -0.04\) is neglected here, since it can be assumed to occur only rarely: a positive \(\omega \) will normally lead to a positive \(\theta \).
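As a hypothetical illustration of the aggregation in note 2 (not taken from the paper), the reported value can be computed from per-seed episode returns as the mean of the 5 per-seed means, plus/minus the mean of the 5 per-seed standard deviations:

```python
import numpy as np

# Hypothetical per-seed returns: one array of episode returns per repetition.
per_seed_returns = [np.random.normal(500.0, 20.0, size=100) for _ in range(5)]

avg_mean = np.mean([r.mean() for r in per_seed_returns])  # average of the 5 mean returns
avg_std = np.mean([r.std() for r in per_seed_returns])    # average of the 5 std deviations
print(f"{avg_mean:.1f} ± {avg_std:.1f}")
```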