
Camera and LiDAR Fusion for On-road Vehicle Tracking with Reinforcement Learning



Abstract:

We formulate camera and LiDAR fusion tracking as a sequential decision-making process. With our deep reinforcement learning framework, we aim to optimize the tracking trajectory to be as accurate, smooth, and long as possible. In contrast to traditional fusion algorithms, which involve complex feature and strategy design and hyperparameters tuned for different scenarios, our fusion agent learns the confidence of each input tracking result directly from raw observations in a data-driven fashion. Given the input states from the different sensors, our approach chooses the input with the higher expected cumulative reward as the observation of a Kalman filter, which iteratively predicts the target position. The expected cumulative reward is estimated with a convolutional neural network, trained with a modified DQN algorithm, that takes input from both the LiDAR and the camera. Through case studies and quantitative evaluation on our dataset collected on the 4th Ring Road in Beijing, the algorithm is shown to achieve more accurate and robust tracking performance.
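
The following is a minimal, hypothetical sketch of the per-frame fusion step the abstract describes: a small Q-network (standing in for the CNN trained with the modified DQN) scores the camera and LiDAR tracking inputs, and the input with the higher expected cumulative reward is used as the measurement of a Kalman filter. All class names, network sizes, the constant-velocity motion model, and noise settings are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the Q-value-based sensor selection feeding a Kalman filter.
# Names, shapes, and the motion model are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

class FusionQNet(nn.Module):
    """Tiny CNN mapping a stacked camera/LiDAR observation patch to
    Q-values for two actions: 0 = use camera input, 1 = use LiDAR input."""
    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4 * 4, 64),
                                  nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

class ConstantVelocityKF:
    """2-D constant-velocity Kalman filter; state = [x, y, vx, vy]."""
    def __init__(self, dt: float = 0.1):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = 0.01 * np.eye(4)   # process noise (assumed)
        self.R = 0.10 * np.eye(2)   # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z: np.ndarray):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def fuse_step(qnet, kf, obs_patch, cam_pos, lidar_pos):
    """One tracking step: pick the sensor with the higher Q-value and feed
    its position estimate to the Kalman filter as the measurement."""
    kf.predict()
    with torch.no_grad():
        q = qnet(obs_patch.unsqueeze(0)).squeeze(0)      # shape (2,)
    measurement = cam_pos if int(q.argmax()) == 0 else lidar_pos
    kf.update(np.asarray(measurement, dtype=float))
    return kf.x[:2]                                      # filtered (x, y)

# Example usage with dummy data.
qnet = FusionQNet()
kf = ConstantVelocityKF()
patch = torch.randn(2, 32, 32)          # stacked camera + LiDAR observation
print(fuse_step(qnet, kf, patch, cam_pos=(1.0, 2.0), lidar_pos=(1.1, 2.1)))
```

In the full method, the Q-network would be trained with the modified DQN objective on a tracking reward reflecting accuracy, smoothness, and trajectory length; the sketch above shows only the inference-time selection and filtering loop.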
Date of Conference: 09-12 June 2019
Date Added to IEEE Xplore: 29 August 2019
Conference Location: Paris, France

