Abstract:
We formulate camera and LiDAR fusion tracking as a sequential decision-making process. With our deep reinforcement learning framework, we aim to optimize the tracking trajectory to be as accurate, smooth, and long as possible. In contrast to traditional fusion algorithms, which involve complex feature and strategy design and hyperparameters tuned for different scenarios, our fusion agent learns the confidence of each input tracking result from raw observations in a data-driven fashion. Given the input states of the different sensors, our approach selects the input with the higher expected cumulative reward as the observation of a Kalman filter, which iteratively predicts the target position. The expected cumulative reward is estimated with a convolutional neural network that takes inputs from both LiDAR and the camera and is trained with a modified DQN algorithm. Through case studies and quantitative evaluation on our dataset collected on the 4th Ring Road in Beijing, our algorithm is validated to achieve more accurate and robust tracking performance.
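To make the selection-plus-filtering loop described above concrete, below is a minimal sketch (not the authors' code): a learned action-value function scores the camera and LiDAR candidate measurements at each step, and the higher-scoring one is used as the observation of a Kalman filter. The q_value() stub, the constant-velocity motion model, and all noise parameters are illustrative assumptions standing in for the paper's CNN and tuning.

```python
import numpy as np

def q_value(state):
    """Placeholder for the learned action-value estimate (a CNN in the paper).
    Here it simply prefers measurements closer to the predicted position."""
    predicted_pos, measurement = state
    return -np.linalg.norm(predicted_pos - measurement)

# Constant-velocity Kalman filter over 2D position (assumed model).
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # observe position only
Q = np.eye(4) * 0.01                         # process noise covariance
R = np.eye(2) * 0.5                          # measurement noise covariance
x = np.zeros(4)                              # state [px, py, vx, vy]
P = np.eye(4)                                # state covariance

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the selected measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Toy measurement streams standing in for camera / LiDAR tracker outputs.
camera_meas = [np.array([t * 1.0, 0.0]) + np.random.randn(2) * 0.3 for t in range(20)]
lidar_meas  = [np.array([t * 1.0, 0.0]) + np.random.randn(2) * 0.1 for t in range(20)]

for z_cam, z_lidar in zip(camera_meas, lidar_meas):
    pred_pos = (F @ x)[:2]
    # Action selection: feed the measurement with the higher Q-value to the filter.
    z = z_cam if q_value((pred_pos, z_cam)) >= q_value((pred_pos, z_lidar)) else z_lidar
    x, P = kf_step(x, P, z)

print("final position estimate:", x[:2])
```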
Published in: 2019 IEEE Intelligent Vehicles Symposium (IV)
Date of Conference: 09-12 June 2019
Date Added to IEEE Xplore: 29 August 2019