Abstract:
Perception serves as the vital cornerstone of autonomous driving systems, influencing the decision-making and control performance of vehicles. The rich semantic color information of images, the low cost of cameras, and the support of deep learning give visual perception a pivotal role. However, occlusions and blind areas arise when data are captured with only the on-board camera. With the development of vehicle-to-everything (V2X) communication, information can be exchanged between vehicles (V2V), and cooperative perception among connected autonomous vehicles (CAVs) based on such interaction has become a new trend. This study investigates visual perception based on Transformer attention and enhances the encoder-decoder through multi-scale feature extraction and query initialization. Furthermore, a visual cooperative perception method driven by V2V interaction is proposed. Building on spatial registration, data association, and multi-source cooperation, it achieves far-sight and see-through perception enhancement. Experiments were conducted on a real-world dataset and in the PreScan simulator, evaluating the proposed method under scenarios with various traffic states and densities. The results demonstrate that visual cooperative perception improves the perception performance of CAVs and adapts to more complex traffic environments.
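To make the encoder-decoder enhancement concrete, below is a minimal PyTorch sketch of content-aware query initialization from multi-scale encoder features, in the spirit of DETR-style detectors. The class name QueryInitializer, the objectness scoring head, and the top-k selection are illustrative assumptions, not the authors' exact design.

```python
# Illustrative sketch: initialize decoder object queries from the
# highest-scoring tokens of flattened multi-scale encoder features,
# instead of purely learned query embeddings.
import torch
import torch.nn as nn


class QueryInitializer(nn.Module):
    """Scores flattened multi-scale feature tokens and selects the
    top-k as initial object queries (assumed design, for illustration)."""

    def __init__(self, d_model: int = 256, num_queries: int = 100):
        super().__init__()
        self.score_head = nn.Linear(d_model, 1)  # objectness score per token
        self.num_queries = num_queries

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: list of (B, C, H_i, W_i) maps from a multi-scale neck.
        tokens = torch.cat(
            [f.flatten(2).transpose(1, 2) for f in feats], dim=1
        )                                                    # (B, N, C)
        scores = self.score_head(tokens).squeeze(-1)         # (B, N)
        topk = scores.topk(self.num_queries, dim=1).indices  # (B, k)
        # Gather the top-scoring tokens as decoder query initializations.
        return torch.gather(
            tokens, 1, topk.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        )                                                    # (B, k, C)
```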
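The geometric side of the V2V cooperation pipeline can likewise be sketched. The following NumPy/SciPy snippet shows one plausible reading of the spatial registration and data association steps: detections from a cooperating CAV are mapped into the ego frame via the vehicles' relative poses, then matched to ego detections with the Hungarian algorithm. All function names, the 2D (x, y, yaw) pose parameterization, and the distance threshold are assumptions for illustration.

```python
# Hedged sketch of spatial registration (CAV -> ego frame) followed by
# data association (Hungarian matching on center distance). Unmatched
# CAV detections extend the ego view (e.g. occluded or far-range objects).
import numpy as np
from scipy.optimize import linear_sum_assignment


def register_to_ego(centers_cav: np.ndarray, pose_cav: np.ndarray,
                    pose_ego: np.ndarray) -> np.ndarray:
    """Map 2D object centers from a cooperating vehicle's frame into
    the ego frame, given each vehicle's global pose (x, y, yaw)."""
    def rot(yaw):
        return np.array([[np.cos(yaw), -np.sin(yaw)],
                         [np.sin(yaw),  np.cos(yaw)]])

    # CAV frame -> world frame: p_w = R_cav @ p + t_cav
    world = centers_cav @ rot(pose_cav[2]).T + pose_cav[:2]
    # World frame -> ego frame: p_e = R_ego^T @ (p_w - t_ego)
    return (world - pose_ego[:2]) @ rot(pose_ego[2])


def associate(ego_dets: np.ndarray, cav_dets: np.ndarray,
              max_dist: float = 2.0):
    """Match registered CAV detections to ego detections by minimal
    total center distance; return matches and unmatched CAV indices."""
    cost = np.linalg.norm(ego_dets[:, None] - cav_dets[None, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    matched = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]
    matched_cav = {c for _, c in matched}
    new = [c for c in range(len(cav_dets)) if c not in matched_cav]
    return matched, new
```

Fusing the matched pairs and appending the unmatched CAV detections yields the far-sight and see-through enhancement described in the abstract, under these assumed interfaces.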
Published in: 2024 IEEE Intelligent Vehicles Symposium (IV)
Date of Conference: 02-05 June 2024
Date Added to IEEE Xplore: 15 July 2024