Abstract
Temporal Difference (TD) learning is one of the simplest yet most efficient algorithms for policy evaluation in reinforcement learning. Although finite-time convergence results for the TD algorithm are now abundant, its distributional convergence has remained unexplored. This paper shows that TD with a constant step size simulates a Markov chain that converges to a stationary distribution, under both the i.i.d. and Markov chain observation models. We prove that TD enjoys a geometric distributional convergence rate and show how the step size affects the expectation and covariance of the stationary distribution. All assumptions used in our paper are mild and common in the TD community. Our results indicate a tradeoff between convergence speed and accuracy for TD. Based on our theoretical findings, we explain why the Jacobi preconditioner can accelerate TD algorithms.
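As a rough illustration of this Markov-chain view, the following minimal Python sketch runs constant-step-size TD(0) with linear function approximation on a small synthetic Markov reward process; the toy model, names, and step sizes are our own assumptions, not the paper's experiment. The long-run mean of the iterates sits near the TD fixed point for both step sizes, while the fluctuation around it (the covariance of the stationary distribution) grows with the step size, matching the speed/accuracy tradeoff described above.

```python
# A minimal sketch (assumed toy setup, not the paper's experiment):
# constant-step-size TD(0) with linear function approximation on a small
# synthetic Markov reward process. With a constant step size the iterates
# theta^t do not converge to a point; they form a Markov chain whose
# long-run (stationary) distribution is centered near the TD fixed point,
# with a spread that grows with the step size.
import numpy as np

rng = np.random.default_rng(0)
n_states, dim, gamma = 10, 4, 0.9

# Random ergodic transition matrix, rewards, and normalized features
# (all illustrative assumptions).
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
r = rng.random(n_states)
Phi = rng.standard_normal((n_states, dim))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)

def td0(alpha, n_iters=200_000, burn_in=100_000):
    """Run constant-step-size TD(0); return iterates after burn-in."""
    theta, s, tail = np.zeros(dim), 0, []
    for t in range(n_iters):
        s_next = rng.choice(n_states, p=P[s])
        # Semi-gradient TD(0) update; delta is the TD error.
        delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
        theta = theta + alpha * delta * Phi[s]
        if t >= burn_in:
            tail.append(theta.copy())
        s = s_next
    return np.asarray(tail)

for alpha in (0.01, 0.05):
    tail = td0(alpha)
    # The mean stays near the TD fixed point; the trace of the covariance
    # (the fluctuation) scales with alpha: the speed/accuracy tradeoff.
    print(f"alpha={alpha}: mean={tail.mean(0).round(3)}, "
          f"trace(cov)={np.trace(np.cov(tail.T)):.5f}")
```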
J. Dai, Independent Researcher.
This research is supported by the National Natural Science Foundation of China under grant 12002382.
Notes
1. We call \(\textbf{g}^t\) a semi-gradient because it does not follow the stochastic gradient direction of any fixed objective.
2. The i.i.d. observation model arises when the initial state \(s_0\) is drawn from the stationary distribution.
3. In practice, \(\textbf{J}\) is approximated by a Monte Carlo method (see the sketch after this list).
4. Random walk gradient descent can be regarded as a special case of MCGD because a random walk is a Markov chain process.
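The paper's exact definition of \(\textbf{J}\) is not reproduced on this page. Purely as a hedged illustration of Note 3, suppose \(\textbf{J}\) is derived from the diagonal (Jacobi) part of the mean TD update matrix \(\textbf{A} = \mathbb{E}[\boldsymbol{\phi}(s)(\boldsymbol{\phi}(s) - \gamma\boldsymbol{\phi}(s'))^\top]\); a running Monte Carlo average along a single trajectory can then estimate that diagonal, as the hypothetical sketch below shows (it uses the same kind of toy model as the earlier sketch).

```python
# Hypothetical sketch for Note 3: Monte Carlo estimation of a Jacobi
# (diagonal) preconditioner. We ASSUME J is derived from diag(A) with
# A = E[phi(s) (phi(s) - gamma * phi(s'))^T]; the paper's definition
# may differ. The toy model below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n_states, dim, gamma = 10, 4, 0.9
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
Phi = rng.standard_normal((n_states, dim))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)

def estimate_jacobi_diagonal(n_samples=50_000):
    """Average per-sample contributions to diag(A) along one trajectory."""
    diag_sum, s = np.zeros(dim), 0
    for _ in range(n_samples):
        s_next = rng.choice(n_states, p=P[s])
        # i-th entry: phi_i(s) * (phi_i(s) - gamma * phi_i(s_next)).
        diag_sum += Phi[s] * (Phi[s] - gamma * Phi[s_next])
        s = s_next
    return diag_sum / n_samples

D = estimate_jacobi_diagonal()
# A preconditioned TD step would rescale coordinate i of the semi-gradient
# by 1 / D[i]; in practice one guards against tiny or negative entries,
# e.g., via np.maximum(np.abs(D), 1e-8).
print("estimated diag(A):", D.round(3))
```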
Ethics declarations
Ethical Statement
Our paper is devoted to the theoretical aspects of general stochastic algorithms and does not present any foreseeable societal consequences.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Dai, J., Chen, X. (2023). On the Distributional Convergence of Temporal Difference Learning. In: Koutra, D., Plant, C., Gomez Rodriguez, M., Baralis, E., Bonchi, F. (eds) Machine Learning and Knowledge Discovery in Databases: Research Track. ECML PKDD 2023. Lecture Notes in Computer Science, vol 14172. Springer, Cham. https://doi.org/10.1007/978-3-031-43421-1_26
DOI: https://doi.org/10.1007/978-3-031-43421-1_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43420-4
Online ISBN: 978-3-031-43421-1
eBook Packages: Computer Science, Computer Science (R0)