Abstract:
Adaptive critic methods for reinforcement learning are known to provide consistent solutions to optimal control problems, and are also considered plausible models for cognitive learning processes. This work discusses binary reinforcement in the context of three adaptive critic methods: heuristic dynamic programming (HDP), dual heuristic programming (DHP), and globalized dual heuristic programming (GDHP). Binary reinforcement arises when the qualitative measure of success is simply "pass" or "fail". We implement binary reinforcement with adaptive critic methods for the pole-cart benchmark problem. Results demonstrate two qualitatively dissimilar classes of controllers: those that replicate the system stabilization achieved with a quadratic utility function, and those that merely succeed at not dropping the pole. It is found that the GDHP method is effective for learning an approximately optimal solution, with results comparable to those obtained via DHP using a more informative, quadratic utility function.
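To make the contrast concrete, the sketch below shows the two kinds of utility function the abstract compares for the cart-pole task. It is an illustration only, not the authors' implementation: the failure thresholds, state variables, and quadratic weights are assumed values in the style of the standard pole-cart benchmark.

import numpy as np

# Assumed failure bounds for the standard cart-pole benchmark
# (not taken from the paper).
THETA_LIMIT = np.deg2rad(12.0)   # pole-angle bound (rad)
X_LIMIT = 2.4                    # cart-position bound (m)

def binary_utility(x, theta):
    """Binary ("pass"/"fail") reinforcement: the signal says only
    whether the run has failed, not how close the state is to failure.
    Sign convention (0 = pass, -1 = fail) is an assumption."""
    failed = abs(theta) > THETA_LIMIT or abs(x) > X_LIMIT
    return -1.0 if failed else 0.0

def quadratic_utility(x, x_dot, theta, theta_dot,
                      weights=(0.25, 0.0, 1.0, 0.0)):
    """More informative quadratic utility U = -s^T diag(q) s,
    graded with distance from the upright equilibrium.
    The weights here are illustrative, not the paper's."""
    s = np.array([x, x_dot, theta, theta_dot])
    q = np.array(weights)
    return -float(s @ (q * s))

The point of the comparison is that the quadratic utility provides a gradient toward the equilibrium everywhere in the state space, while the binary utility is flat except at failure, which is what makes learning an approximately optimal controller from it harder.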
Date of Conference: 25-29 July 2004
Date Added to IEEE Xplore: 17 January 2005
Print ISBN: 0-7803-8359-1
Print ISSN: 1098-7576