Abstract:
3D object detection plays a critical role in autonomous driving perception. Although multi-view-based perception solutions have made significant progress, their performance is still far from ready for practical use. Pixel depth estimation depends on camera intrinsics, which motivates a depth-aware model guided by camera parameters. Our contribution in this paper is DAFormer, which incorporates camera parameters and position-aware image features to detect 3D objects. The depth-aware module uses camera parameters to reweight image features and to estimate depth, enhancing and offsetting the 3D position embedding. The object queries consume the depth-enhanced features for end-to-end 3D detection through attention layers. DAFormer achieves impressive results on the standard nuScenes dataset without any additional embellishments.
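The camera-parameter reweighting idea can be sketched as follows. This is a minimal illustrative example, not the paper's exact implementation: the intrinsics matrix is flattened and passed through a tiny hypothetical two-layer MLP (weights `w1`, `w2` are placeholders) to produce per-channel gates that rescale the image feature map.

```python
import numpy as np

def camera_aware_reweight(features, intrinsics, w1, w2):
    """Sketch of camera-guided feature reweighting (assumed form, not
    the authors' code): flatten the 3x3 intrinsics K, map it through a
    small MLP to per-channel sigmoid gates, and rescale the feature map
    channel-wise."""
    k = intrinsics.reshape(-1)                 # (9,) flattened K
    h = np.maximum(k @ w1, 0.0)                # hidden layer with ReLU
    gate = 1.0 / (1.0 + np.exp(-(h @ w2)))     # sigmoid -> (C,) gates in (0, 1)
    return features * gate[:, None, None]      # broadcast over H and W

rng = np.random.default_rng(0)
C, H, W = 4, 2, 3
feats = rng.standard_normal((C, H, W))
# Hypothetical intrinsics roughly matching a nuScenes front camera
K = np.array([[1266.0,    0.0, 800.0],
              [   0.0, 1266.0, 450.0],
              [   0.0,    0.0,   1.0]])
w1 = rng.standard_normal((9, 8)) * 0.01        # placeholder MLP weights
w2 = rng.standard_normal((8, C)) * 0.01
out = camera_aware_reweight(feats, K, w1, w2)
```

Because each gate lies in (0, 1), the module can only attenuate channels here; the actual model would learn these weights jointly with the depth estimator.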
Date of Conference: 24-26 March 2023
Date Added to IEEE Xplore: 08 May 2023