Abstract:
Autonomous vehicles (AVs) have the potential to revolutionize transportation, but their effective integration into the real world requires addressing the challenge of interacting with human drivers. Real-world driving involves negotiating and cooperating with fellow drivers through social cues, so AVs must also demonstrate such social compatibility. However, despite their popularity, current learning-based control methods for AV policy synthesis often overlook this crucial aspect. In this work, we study the problem of enabling socially compatible driving when AV control policies are learned. We leverage human driving data to learn a social preference model of human driving and then integrate it with reinforcement learning-based AV policy synthesis using Social Value Orientation theory. In particular, we propose using multi-task reinforcement learning to learn diverse social compatibility levels in driving (e.g., altruistic, prosocial, individualistic, and competitive), motivated by the diversity of behaviors required in real-world driving. Using highway driving scenarios, we demonstrate through experiments that socially compatible AV driving not only enables naturalistic driving behaviors but also reduces collision rates relative to the baseline. Our findings reveal that without social compatibility, AV policies tend to adopt dangerously competitive driving behaviors, while incorporating social compatibility fosters smoother vehicle maneuvers.
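The Social Value Orientation idea mentioned above is commonly operationalized in the literature as an angle that blends the ego vehicle's reward with the rewards of surrounding drivers. The following is a hypothetical sketch of that weighting, not the paper's exact formulation; the angle values and function names are illustrative assumptions.

```python
import math

# Illustrative SVO angles for the four social compatibility levels named in
# the abstract (assumed values; the paper may use different parameterizations).
SVO_ANGLES = {
    "altruistic": math.pi / 2,      # weight only others' reward
    "prosocial": math.pi / 4,       # balance ego and others' rewards
    "individualistic": 0.0,         # weight only the ego reward
    "competitive": -math.pi / 4,    # value gains at others' expense
}

def svo_reward(ego_reward: float, others_reward: float, angle: float) -> float:
    """Blend ego and social rewards via the SVO angle.

    A multi-task RL agent could condition its policy on the angle and
    maximize this blended reward, yielding one policy per social level.
    """
    return math.cos(angle) * ego_reward + math.sin(angle) * others_reward
```

Under this sketch, an individualistic angle (0) recovers plain ego-reward maximization, while a prosocial angle weights ego and neighboring drivers equally.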
Date of Conference: 24-28 September 2023
Date Added to IEEE Xplore: 13 February 2024