Reinforcement learning via kernel temporal difference


Abstract:

This paper introduces kernel Temporal Difference (TD)(λ), a kernel adaptive filter trained by stochastic gradient descent on temporal differences, to estimate the state-action value function in reinforcement learning. The case λ = 0 is studied here. Experimental results show the method's applicability to learning motor state decoding during a center-out reaching task performed by a monkey. The results are compared to a time delay neural network (TDNN) trained with backpropagation of the temporal difference error. The experiments show that kernel TD(0) converges faster and reaches a better solution than the neural network.
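
As a rough illustration of the update described in the abstract, the sketch below implements a generic kernel TD(0) learner with a Gaussian kernel: the value estimate is a kernel expansion over visited states, and each stochastic-gradient step adds a new center weighted by the TD error. All names and parameters (KernelTD0, gaussian_kernel, eta, gamma, sigma) are illustrative assumptions, not taken from the paper.

    # Minimal kernel TD(0) sketch (assumptions: Gaussian kernel, a generic
    # state or state-action feature vector x; parameter names are illustrative).
    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

    class KernelTD0:
        def __init__(self, eta=0.1, gamma=0.9, sigma=1.0):
            self.eta, self.gamma, self.sigma = eta, gamma, sigma
            self.centers = []  # stored feature vectors (kernel centers)
            self.alphas = []   # expansion coefficients

        def value(self, x):
            # Q(x) = sum_i alpha_i * k(c_i, x)
            return sum(a * gaussian_kernel(c, x, self.sigma)
                       for a, c in zip(self.alphas, self.centers))

        def update(self, x_t, reward, x_next, terminal=False):
            # TD(0) error: delta = r + gamma * Q(x_{t+1}) - Q(x_t)
            target = reward if terminal else reward + self.gamma * self.value(x_next)
            delta = target - self.value(x_t)
            # Stochastic-gradient step: add a new center at x_t with weight eta * delta
            self.centers.append(np.asarray(x_t, dtype=float))
            self.alphas.append(self.eta * delta)
            return delta

In this form every sample adds a center, so a practical implementation would typically prune or sparsify the dictionary; that detail is outside the scope of this sketch.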
Date of Conference: 30 August 2011 - 03 September 2011
Date Added to IEEE Xplore: 01 December 2011
PubMed ID: 22255624
Conference Location: Boston, MA, USA

