Abstract:
The method in this paper, based on the fusion of radar and video, is oriented to detecting surrounding objects while driving. Fusing several sensing modalities is a common way to improve robustness and accuracy, which makes sensor fusion a key part of the perception system. We propose a new fusion method, CT-EPNP, which uses radar and camera data for fast detection. It adds a center fusion algorithm on top of EPNP and uses a truncated-cone (frustum) method to compensate the radar information when mapping it onto the associated image. CT-EPNP regresses object attributes such as depth, rotation, and velocity. On this basis, the method is validated in simulation and the related mathematical formulas are derived. We combined the improved algorithm with the RetinaNet model to ensure that the model meets the requirements of normal vehicle driving while gaining a certain increase in detection rate. We also improved the suppression of repeated detections without using any additional temporal information.
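To illustrate the kind of frustum-based association the abstract describes, the sketch below shows one common way radar returns can be matched to a camera detection: project each radar point into the image and keep those that fall inside the detection's 2D box and within a depth window around the estimated object depth. This is a hypothetical illustration of the general technique, not the paper's CT-EPNP implementation; the function name, parameters, and tolerance are assumptions.

```python
import numpy as np

def frustum_associate(box2d, est_depth, radar_points, K, depth_tol=1.5):
    """Hypothetical frustum check for radar-camera association.

    box2d:        (x1, y1, x2, y2) pixel box of a camera detection.
    est_depth:    estimated object depth in meters.
    radar_points: (N, 3) radar points in camera coordinates
                  (x right, y down, z forward), in meters.
    K:            3x3 camera intrinsic matrix.
    depth_tol:    assumed depth window half-width in meters.

    Returns indices of radar points whose image projection lies inside
    the box and whose depth is within depth_tol of the estimate.
    """
    x1, y1, x2, y2 = box2d
    pts = np.asarray(radar_points, dtype=float)
    z = pts[:, 2]
    valid = z > 0  # discard points behind the camera
    # Project points to pixel coordinates via the pinhole model.
    uvw = (K @ pts.T).T
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    inside = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    near = np.abs(z - est_depth) <= depth_tol
    return np.nonzero(valid & inside & near)[0]

# Example with an assumed intrinsic matrix: the first radar point
# projects to the image center inside the box; the second projects
# outside the box and is rejected.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
matched = frustum_associate((300, 220, 340, 260), 10.0,
                            [[0.0, 0.0, 10.0], [5.0, 0.0, 10.0]], K)
```

A real pipeline would then use the matched radar points to refine the detection's depth and velocity attributes, which is the role radar compensation plays in the fusion method above.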
Published in: 2021 7th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC)
Date of Conference: 17-19 November 2021
Date Added to IEEE Xplore: 04 January 2022