Abstract:
Trajectory prediction is an essential and challenging task for autonomous driving and mobile robots. The main difficulty is to model actor-actor interaction and actor-scene interaction. In addition, the distinct motion characteristics of each actor further increase the challenge of prediction. Most existing data-driven methods focus mainly on the interaction between actors but ignore the influence of their independent motion characteristics and of actor-scene interaction. In this letter, we propose a trajectory prediction method that integrates multiple contextual cues. Specifically, an LSTM-based encoder extracts motion features that express the driving characteristics of each actor. Meanwhile, an attention-based graph module is applied to accurately model interaction behaviors. The scene features are extracted from high-definition vector maps by convolutional neural networks. Combining these three types of attribute features, the decoder module then infers the future trajectory. We evaluate the proposed approach on two widely used datasets, i.e., ApolloScape and Argoverse, and state-of-the-art results demonstrate the effectiveness of our approach.
Published in: IEEE Robotics and Automation Letters ( Volume: 6, Issue: 4, October 2021)
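The abstract describes a three-cue architecture: an LSTM motion encoder per actor, an attention-based graph module for actor-actor interaction, a CNN scene encoder over the map, and a decoder that fuses the three. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation; the layer sizes, single-head attention, rasterized map input (the paper uses HD vector maps), and displacement-based decoder are all assumptions.

```python
# Minimal sketch of a multi-cue trajectory predictor (assumed design, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiCueTrajectoryPredictor(nn.Module):
    def __init__(self, hidden=64, pred_len=30):
        super().__init__()
        self.pred_len = pred_len
        # Motion cue: LSTM over each actor's observed (x, y) history.
        self.motion_enc = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        # Interaction cue: single-head attention over all actors in the scene.
        self.attn_q = nn.Linear(hidden, hidden)
        self.attn_k = nn.Linear(hidden, hidden)
        self.attn_v = nn.Linear(hidden, hidden)
        # Scene cue: small CNN over a 1-channel rasterized map crop (an assumption;
        # the paper extracts scene features from HD vector maps with CNNs).
        self.scene_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, hidden),
        )
        # Decoder: LSTM cell that rolls out future displacements from the fused cues.
        self.dec_cell = nn.LSTMCell(hidden * 3, hidden)
        self.out = nn.Linear(hidden, 2)

    def forward(self, history, map_patch):
        # history:   (N, T_obs, 2) observed trajectories of N actors in one scene
        # map_patch: (N, 1, H, W)  rasterized map crop centered on each actor
        N = history.size(0)
        _, (h, _) = self.motion_enc(history)          # per-actor motion feature
        motion = h.squeeze(0)                         # (N, hidden)

        q, k, v = self.attn_q(motion), self.attn_k(motion), self.attn_v(motion)
        scores = q @ k.t() / motion.size(-1) ** 0.5   # (N, N) actor-actor attention
        social = F.softmax(scores, dim=-1) @ v        # interaction feature

        scene = self.scene_enc(map_patch)             # scene feature

        fused = torch.cat([motion, social, scene], dim=-1)
        hx = torch.zeros(N, motion.size(-1))
        cx = torch.zeros_like(hx)
        pos = history[:, -1]                          # start from the last observed point
        preds = []
        for _ in range(self.pred_len):
            hx, cx = self.dec_cell(fused, (hx, cx))
            pos = pos + self.out(hx)                  # predict a per-step displacement
            preds.append(pos)
        return torch.stack(preds, dim=1)              # (N, pred_len, 2)


# Example: 5 actors, 20 observed frames, 64x64 map crops, 30 predicted frames.
model = MultiCueTrajectoryPredictor()
future = model(torch.randn(5, 20, 2), torch.randn(5, 1, 64, 64))
print(future.shape)  # torch.Size([5, 30, 2])
```

The fusion step simply concatenates the three feature vectors before decoding; how the actual method combines them (and how vector maps are encoded) is specified in the full paper, not in the abstract.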