Abstract:
Traffic simulation has the potential to facilitate the development and testing of autonomous vehicles as a supplement to road testing. Since autonomous vehicles will coexist with human drivers in the transportation system for some time, traffic simulation needs intelligent driving agents that interact with them the way human drivers do. Learning directly from human drivers' behavior is an attractive and promising solution. In this study, Adversarial Inverse Reinforcement Learning (AIRL) is applied to learn decision-making policies in complex, interactive traffic simulation environments with high traffic density. A Bird's Eye View (BEV) representation is proposed as the observation model for driving agents, providing effective information for their decision-making. Results show that, compared with Behavioral Cloning (BC) and Proximal Policy Optimization (PPO), the driving agents generated by AIRL demonstrate higher levels of safety and robustness and are capable of imitating the car-following and lane-changing characteristics of the expert demonstrations. The results further confirm that different driving characteristics can be learned with the AIRL method.
Date of Conference: 24-28 September 2023
Date Added to IEEE Xplore: 13 February 2024