Abstract:
Vehicle-to-Everything (V2X) network-enabled connected autonomous driving has been regarded as a promising path to advanced autonomous driving. However, non-ideal factors in wireless communication and localization severely limit its development. In this work, we propose MoRFF, a Mobility-robust Regional Features Fusion framework for multi-terminal, multi-view object detection that realizes wireless cooperative perception. To overcome the limited communication bandwidth, stochastic latency, and inaccurate positioning caused by wireless links and mobility, our method features a universal two-stage detection paradigm with deep metric learning, matching the same object from different viewpoints directly on the regional feature maps, which greatly reduces the amount of data to be transmitted. The proposed architecture requires only image data, without any additional information such as geo-positions, sensor poses, or LiDAR point clouds, which facilitates the deployment of connected autonomous driving. Experimental evaluations show that the proposed algorithm successfully benefits from other viewpoints, increases the detection precision of barely visible objects by 13.42%, and reduces communication bandwidth requirements tenfold. Furthermore, the proposed algorithm is robust under various communication delays.
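The abstract describes matching the same object across viewpoints directly on regional feature maps via deep metric learning. The sketch below is a rough illustration of that idea, not the paper's actual architecture: it embeds pooled region-of-interest features with a small metric-learning head and pairs regions from two terminals by mutual-nearest-neighbor cosine similarity. All names (EmbeddingHead, match_regions), dimensions, and the 0.7 threshold are hypothetical assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingHead(nn.Module):
    """Projects pooled RoI features into a metric space for cross-view matching."""

    def __init__(self, in_dim=256, embed_dim=128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, roi_feats):
        # roi_feats: (num_regions, in_dim) pooled regional features
        z = self.fc(roi_feats)
        return F.normalize(z, dim=-1)  # unit-norm embeddings for cosine similarity


def match_regions(emb_a, emb_b, threshold=0.7):
    """Greedy cross-view matching by mutual-nearest-neighbor cosine similarity.

    emb_a: (Na, D) embeddings from the ego terminal's detector
    emb_b: (Nb, D) embeddings received from a cooperating terminal
    Returns a list of (i, j) index pairs judged to be the same object.
    """
    sim = emb_a @ emb_b.t()  # (Na, Nb) cosine similarity matrix
    pairs = []
    for i in range(sim.size(0)):
        j = sim[i].argmax().item()
        # Accept only mutual nearest neighbors above the (assumed) threshold.
        if sim[i, j] >= threshold and sim[:, j].argmax().item() == i:
            pairs.append((i, j))
    return pairs


if __name__ == "__main__":
    torch.manual_seed(0)
    head = EmbeddingHead()
    feats_a = torch.randn(5, 256)  # stand-ins for pooled RoI features, view A
    feats_b = torch.randn(4, 256)  # stand-ins for pooled RoI features, view B
    print(match_regions(head(feats_a), head(feats_b)))
```

Note that only the compact embeddings (here 128 floats per region) would need to cross the wireless link, which is consistent with the abstract's claim of sharply reduced transmission size compared with sending raw images or point clouds.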
Date of Conference: 10-13 October 2023
Date Added to IEEE Xplore: 11 December 2023