Continuous Adaptation in Nonstationary Environments Based on Actor-Critic Algorithm


Abstract:

In reinforcement learning, the agent's training process depends strongly on the environment dynamics, and the agent's own dynamics are generally considered part of the environment. When the dynamics change, a previously learned model may be unable to adapt to the new environment. In this paper, we propose a simple adaptive method based on the traditional actor-critic framework. A new component named Adaptor is added to the original model. The kernel of the Adaptor is a network with the same structure as the Critic, and the component adaptively adjusts the Actor's actions. Experiments show that agents pre-trained in different environments, including Gym and MuJoCo, adapt to new, dynamics-changed environments better than the original methods. Moreover, in some of the original tasks the proposed method outperforms a baseline that learns from scratch.
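The abstract only states that the Adaptor shares the Critic's structure and adjusts the Actor's actions; it does not give implementation details. The following is a minimal, hypothetical NumPy sketch of that architecture, not the paper's method: the helper names (`mlp`, `forward`, `adapt_action`), the network sizes, and the choice of a finite-difference gradient step on the Adaptor's value estimate as the "adjustment" are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Randomly initialized weight matrices for a small tanh MLP
    # (illustration only; no training loop is shown here).
    return [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, w in enumerate(params):
        x = x @ w
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

obs_dim, act_dim, hidden = 4, 2, 16

actor = mlp([obs_dim, hidden, act_dim])
# Critic: scores (state, action) pairs with a scalar value.
critic = mlp([obs_dim + act_dim, hidden, 1])
# Adaptor kernel: same structure as the Critic, per the abstract.
adaptor = mlp([obs_dim + act_dim, hidden, 1])

def adapt_action(adaptor, obs, action, step=0.05, eps=1e-3):
    # Hypothetical adjustment rule: nudge the Actor's action uphill on the
    # Adaptor's value estimate, using a finite-difference gradient.
    grad = np.zeros_like(action)
    for i in range(action.size):
        d = np.zeros_like(action)
        d[i] = eps
        hi = forward(adaptor, np.concatenate([obs, action + d]))[0]
        lo = forward(adaptor, np.concatenate([obs, action - d]))[0]
        grad[i] = (hi - lo) / (2 * eps)
    return np.clip(action + step * grad, -1.0, 1.0)

obs = rng.normal(size=obs_dim)
base_action = np.tanh(forward(actor, obs))       # Actor's proposed action
adapted_action = adapt_action(adaptor, obs, base_action)
```

Because the Adaptor is a separate head, the pre-trained Actor and Critic stay frozen while only the Adaptor would need updating in the changed environment, which is consistent with the abstract's claim of adapting without learning from scratch.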
Date of Conference: 22-25 November 2022
Date Added to IEEE Xplore: 31 March 2023
ISBN Information:
Conference Location: Penang, Malaysia
