Abstract:
Accurate object detection with on-board LiDAR sensors is crucial for ensuring the driving safety of Connected Automated Vehicles (CAVs), especially at accident black spots with heavy occlusion. Fortunately, road-side infrastructure equipped with traffic cameras is usually available at such locations; it offers an extensive field of view, suffers fewer occlusions, and can therefore provide sustained assistance to CAVs to improve their object detection performance. However, vehicle-to-infrastructure (V2I) cooperative object detection is quite challenging due to modality heterogeneity, agent heterogeneity, and bandwidth limitations. To address these challenges, this paper proposes V2I-Coop, an accurate object detection approach with V2I cross-modality cooperation that improves CAV perception at accident black spots. In V2I-Coop, we first extract bird's-eye-view (BEV) features from both multi-view 2D images and 3D point clouds, which facilitates the fusion of features across modalities. Next, the most valuable image features are adaptively selected according to the available bandwidth and transmitted to CAVs. A cross-modality feature fusion algorithm at the CAVs then mitigates the modality gap and improves fusion efficiency. Finally, extensive experiments demonstrate that V2I-Coop significantly improves the 3D object detection performance of CAVs at accident black spots.
Published in: IEEE Transactions on Mobile Computing (Volume 24, Issue 3, March 2025)