Abstract:
This paper deals with collision avoidance for an autonomous vehicle (AV) using a model-free Reinforcement Learning (RL) algorithm rooted in the actor-critic paradigm. To achieve this objective, the actor network (AN) has to generate a collision-free path for an autonomous robot from a start to an end position and to follow this desired path accurately. Within this framework, the actor provides a sequence of input signals for the underlying velocity controllers of the robot drives. To accomplish this for a large number of obstacles, it turns out to be essential to sort the algorithm's input vector by the Euclidean distance between each obstacle and the agent and to take the robot's relative direction into account. In the first step, the agent is trained in a simulated environment. The second step involves the successful experimental validation of the trained AN on a TurtleBot 3 Burger (TB3B), a test platform for autonomous robots.
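The abstract's central implementation detail is the distance-sorted observation vector fed to the actor network. The following Python sketch illustrates one way such a sorting could be realized, assuming each obstacle is observed as a 2-D position in the world frame; the function name, signature, and observation layout are illustrative assumptions and are not taken from the paper.

import numpy as np

def build_actor_input(robot_pos, robot_heading, obstacles):
    """Illustrative sketch (assumed interface, not the paper's code):
    sort obstacle observations by Euclidean distance to the agent and
    pair each distance with the obstacle's direction relative to the
    robot's heading, nearest obstacle first.

    robot_pos     : (2,) array, robot position in the world frame
    robot_heading : float, robot yaw angle in radians
    obstacles     : (N, 2) array of obstacle positions (assumed format)
    """
    deltas = obstacles - robot_pos                 # vectors robot -> obstacle
    dists = np.linalg.norm(deltas, axis=1)         # Euclidean distances
    # Relative bearing of each obstacle with respect to the robot's heading
    bearings = np.arctan2(deltas[:, 1], deltas[:, 0]) - robot_heading
    bearings = np.arctan2(np.sin(bearings), np.cos(bearings))  # wrap to [-pi, pi]

    order = np.argsort(dists)                      # nearest obstacle first
    # Flatten to one (distance, relative direction) pair per obstacle
    return np.stack([dists[order], bearings[order]], axis=1).ravel()

A vector built this way keeps the nearest (most safety-critical) obstacle at a fixed position in the actor's input, which is one plausible reason such a sorting helps when the number of obstacles grows.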
Date of Conference: 10-12 October 2024
Date Added to IEEE Xplore: 11 November 2024