Abstract:
Conventional perception-planning pipelines of autonomous vehicles (AVs) utilize deep learning (DL) techniques that typically generate deterministic outputs without explicitly evaluating their uncertainties and trustworthiness. Therefore, the downstream decision-making components may generate unsafe outputs leading to system failure or accidents if the preceding perception component provides highly uncertain information. To mitigate this issue, this article proposes a coherent safe perception-planning framework that quantifies and transfers DL-based perception uncertainties. Following the Bayesian deep learning paradigm, we design a probabilistic 3D object detector that extracts objects from LiDAR point clouds while quantifying the corresponding aleatoric and epistemic uncertainty. A chance-constrained motion planner is designed to formulate an explicit link between DL-based perception uncertainties and operational risk, and to generate safe, risk-bounded trajectories. The proposed framework is validated through various challenging scenarios in the CARLA simulator. Experimental results demonstrate that our framework can effectively capture the uncertainties in DL and generate trajectories that bound the risk under DL perception uncertainties. It also outperforms counterpart approaches that do not explicitly evaluate the uncertainties of DL-based perception.
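The two ideas the abstract links can be illustrated with a minimal, hypothetical sketch: Monte-Carlo sampling of a stochastic detector to estimate epistemic uncertainty (in the spirit of MC dropout), and a Gaussian chance-constraint margin that inflates an obstacle so the collision probability stays below a risk bound. All function names and the toy "detector" below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np
from statistics import NormalDist

def mc_epistemic(predict, x, n_samples=30):
    """Monte-Carlo estimate of predictive mean and epistemic variance.

    `predict` is assumed to be a stochastic forward pass (e.g. a network
    with dropout kept active at test time), so repeated calls differ.
    """
    samples = np.stack([predict(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)

def chance_margin(sigma, epsilon=0.05):
    """Safety margin such that, under a Gaussian position error with
    std `sigma`, the probability of violating it stays below `epsilon`."""
    return NormalDist().inv_cdf(1.0 - epsilon) * sigma

# Toy stochastic "detector": true obstacle position plus model noise.
rng = np.random.default_rng(1)
noisy_predict = lambda x: x + rng.normal(0.0, 0.1, size=x.shape)

mean, var = mc_epistemic(noisy_predict, np.array([5.0, 2.0]))
margin = chance_margin(np.sqrt(var.max()))  # keep-out inflation for the planner
```

In a pipeline of the kind the abstract describes, the planner would treat the detected obstacle at `mean` inflated by `margin`, so the trajectory's collision risk under the perception uncertainty is explicitly bounded by `epsilon`.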
Published in: IEEE Transactions on Intelligent Vehicles (Volume 9, Issue 1, January 2024)