Abstract
In reinforcement learning, off-policy evaluation (OPE), the core task of evaluating a new policy from trajectory data generated by an existing behavior policy, is essential for assessing a policy before deployment, avoiding unexpectedly dangerous or expensive agent actions. In existing methods, the return of a trajectory is computed by summing the rewards of its sequential state-action pairs under a Markov decision process (MDP), and the estimate for the new policy is sought to have minimum variance with respect to the return values of the existing trajectory data. However, such methods neither account for the influence of key states in OPE, which are critical to success and should be given greater preference, nor do they control the bias of the return estimate. In this paper, we develop a configurable OPE method with key state-based bias constraints. We first adopt FP-Growth to mine the key states and obtain their corresponding reward expectations. By further configuring the range of each reward expectation as a bias constraint, we construct a new objective function combining bias and variance and realize a guided importance sampling-based OPE. Taking the GridWorld game as our experimental platform, we evaluate our method with performance analyses and case studies, and compare it with mainstream methods to demonstrate its effectiveness.
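To make the estimator underlying this pipeline concrete, the following is a minimal Python sketch of the ordinary trajectory-wise importance-sampling estimator that such OPE methods build on, together with an illustrative key-state bias penalty. The estimator itself is standard; the names key_state_bias_penalty and key_expectations, the squared-deviation penalty form, and the toy GridWorld data are hypothetical illustrations under stated assumptions, not the paper's actual objective or implementation.

# Minimal sketch: ordinary importance-sampling OPE plus an illustrative
# key-state bias penalty (the penalty form and names are assumptions,
# not the paper's actual objective).
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

State = Tuple[int, int]          # a GridWorld cell, e.g. (row, col)
Step = Tuple[State, int, float]  # (state, action, reward)


def is_estimate(trajectories: List[List[Step]],
                pi_e: Callable[[State, int], float],
                pi_b: Callable[[State, int], float],
                gamma: float = 0.99) -> float:
    """Ordinary importance-sampling estimate of the evaluation policy's return."""
    values = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_e(s, a) / pi_b(s, a)  # cumulative importance ratio
            ret += (gamma ** t) * r            # discounted return of the trajectory
        values.append(weight * ret)
    return sum(values) / len(values)


def key_state_bias_penalty(trajectories: List[List[Step]],
                           key_expectations: Dict[State, float]) -> float:
    """Squared deviation of empirical rewards at key states from their
    configured reward expectations (an assumed form of the bias constraint)."""
    rewards = defaultdict(list)
    for traj in trajectories:
        for s, _, r in traj:
            if s in key_expectations:
                rewards[s].append(r)
    return sum((sum(rs) / len(rs) - key_expectations[s]) ** 2
               for s, rs in rewards.items())


if __name__ == "__main__":
    # Toy data: two one-step trajectories starting from the same cell.
    trajs = [[((0, 0), 1, 1.0)], [((0, 0), 0, 0.0)]]
    pi_b = lambda s, a: 0.5                     # uniform behavior policy
    pi_e = lambda s, a: 0.8 if a == 1 else 0.2  # evaluation policy
    print("IS estimate:", is_estimate(trajs, pi_e, pi_b))
    print("key-state penalty:", key_state_bias_penalty(trajs, {(0, 0): 0.6}))

A guided variant in the spirit of the paper would then trade off the importance-sampling estimate's variance against such key-state penalties when constructing the objective.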
This work was supported by the National Natural Science Foundation of China (61972025, 61802389, 61672092, U1811264, 61966009), the Fundamental Research Funds for the Central Universities of China (2018JBZ103, 2019RC008), Science and Technology on Information Assurance Laboratory, Guangxi Key Laboratory of Trusted Software (KX201902).