Abstract:
This paper addresses the trajectory planning problem for autonomous vehicles in traffic. We build a stochastic Markov decision process (MDP) model to represent the behaviors of the vehicles. This MDP model accounts for the road geometry and can reproduce a wide range of driving styles. We introduce a new concept, the "dynamic cell," to dynamically modify the state of the traffic according to different vehicle velocities, driver intents (signals), and the sizes of the surrounding vehicles (e.g., truck, sedan). We then use Bézier curves to plan smooth paths for lane switching, and the maximum curvature of the path is enforced through certain design parameters. By designing suitable reward functions, different desired driving styles of the intelligent vehicle can be achieved by solving a reinforcement learning problem. The desired driving behaviors (e.g., autonomous highway overtaking) are demonstrated with an in-house developed traffic simulator.
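To illustrate the path-planning step described above, the following Python sketch samples a cubic Bézier lane-change path and checks its maximum curvature against a bound. This is not the authors' implementation: the control-point placement, the longitudinal offset `d`, and the curvature bound `kappa_max` are assumed values chosen only for demonstration.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a cubic Bezier
# lane-change path whose maximum curvature is checked against a bound.
# Control-point placement and the bound kappa_max are assumptions.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve and its first/second derivatives at t."""
    t = t[:, None]
    b = ((1 - t) ** 3) * p0 + 3 * ((1 - t) ** 2) * t * p1 \
        + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3
    db = 3 * ((1 - t) ** 2) * (p1 - p0) + 6 * (1 - t) * t * (p2 - p1) \
        + 3 * t ** 2 * (p3 - p2)
    ddb = 6 * (1 - t) * (p2 - 2 * p1 + p0) + 6 * t * (p3 - 2 * p2 + p1)
    return b, db, ddb

def max_curvature(p0, p1, p2, p3, n=200):
    """Approximate max curvature kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^1.5."""
    t = np.linspace(0.0, 1.0, n)
    _, d, dd = cubic_bezier(p0, p1, p2, p3, t)
    num = np.abs(d[:, 0] * dd[:, 1] - d[:, 1] * dd[:, 0])
    den = (d[:, 0] ** 2 + d[:, 1] ** 2) ** 1.5
    return np.max(num / np.maximum(den, 1e-9))

if __name__ == "__main__":
    # Hypothetical lane change: 3.5 m lateral offset over 40 m of travel.
    p0 = np.array([0.0, 0.0])
    p3 = np.array([40.0, 3.5])
    # Interior control points are pulled along the lane direction so the
    # path is tangent to both lanes; d is an assumed design parameter that
    # trades smoothness against peak curvature.
    d = 15.0
    p1 = p0 + np.array([d, 0.0])
    p2 = p3 - np.array([d, 0.0])

    kappa_max = 0.1  # assumed curvature bound [1/m]
    kappa = max_curvature(p0, p1, p2, p3)
    print(f"max curvature = {kappa:.4f} 1/m, feasible = {kappa <= kappa_max}")
```

In a sketch like this, enlarging the longitudinal offset of the interior control points flattens the curve and lowers its peak curvature, which is one way design parameters can be used to respect a curvature limit.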
Published in: IEEE Transactions on Intelligent Transportation Systems (Volume: 21, Issue: 6, June 2020)