
Neural Networks

Volume 129, September 2020, Pages 149-162

Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks

https://doi.org/10.1016/j.neunet.2020.06.002
Open access under a Creative Commons license

Abstract

Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown distinct advantages, e.g., solving memory-dependent tasks and meta-learning. However, little effort has been spent on improving RNN architectures or on understanding the underlying neural mechanisms behind their performance gains. In this paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical results show that the network can autonomously learn to abstract sub-goals and can self-develop an action hierarchy through its internal dynamics in a challenging continuous control task. Furthermore, we show that the self-developed compositionality of the network enables faster re-learning when adapting to a new task that is a re-composition of previously learned sub-goals, compared with learning from scratch. We also found that improved performance can be achieved when neural activities are subject to stochastic rather than deterministic dynamics.
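To make the "multiple-timescale, stochastic RNN" idea concrete, here is a minimal sketch of one recurrent update in which fast and slow unit groups leak toward their recurrent drive at different rates and receive additive Gaussian state noise. The unit counts, time constants, noise model, and weight initialization below are illustrative assumptions, not the paper's reported architecture.

```python
import numpy as np

# Illustrative sketch (not the paper's exact model): a leaky-integrator
# RNN whose units are split into a fast group and a slow group via
# per-unit time constants tau, with additive Gaussian noise making the
# internal dynamics stochastic.

rng = np.random.default_rng(0)

n_fast, n_slow, n_in = 8, 4, 3          # assumed sizes for illustration
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),    # fast units: small time constant
                      np.full(n_slow, 16.0)])  # slow units: large time constant

W = rng.normal(scale=0.1, size=(n, n))         # recurrent weights
W_in = rng.normal(scale=0.1, size=(n, n_in))   # input weights
sigma = 0.05                                   # std of additive state noise

def step(u, x):
    """One multiple-timescale update: du = (-u + W h + W_in x) / tau + noise."""
    h = np.tanh(u)                             # unit activations
    du = -u + W @ h + W_in @ x                 # leaky-integrator drive
    noise = sigma * rng.normal(size=n)         # stochastic dynamics
    return u + du / tau + noise

# Roll the dynamics forward on random inputs.
u = np.zeros(n)
for _ in range(10):
    u = step(u, rng.normal(size=n_in))

print(u.shape)
```

Because `tau` is larger for the slow group, those units change state more gradually per step, which is the basic mechanism by which such networks can represent slowly varying sub-goals alongside fast motor-level dynamics.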

Keywords

Recurrent neural network
Reinforcement learning
Partially observable Markov decision process
Multiple timescale
Compositionality
