Abstract:
Radar semantic segmentation is a challenging task in environmental understanding because radar data is noisy and suffers from measurement ambiguities, which can lead to poor feature learning. To better tackle these difficulties, we present a novel, high-performance Transformer-based Radar Semantic Segmentation method, named TransRSS, for effective and efficient feature extraction in radar segmentation. Our approach is the first to introduce the transformer into radar semantic segmentation, deeply integrating the merits of the Convolutional Neural Network (CNN) and the transformer to extract more discriminative, global-level semantic features. On the one hand, it takes advantage of the CNN's flexible receptive fields for processing images, thanks to the shift convolution scheme. On the other hand, it takes advantage of the transformer's self-attention mechanism to model long-range dependencies. Meanwhile, we propose a Dual Position Attention module to aggregate rich contextual interdependencies between the multi-view features, which provides an implicit mechanism for adaptive feature aggregation. Extensive experiments on the CARRADA and RADIal datasets demonstrate that our TransRSS surpasses state-of-the-art (SOTA) radar segmentation methods by remarkable margins.
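The abstract describes a hybrid encoder that combines a CNN (local receptive fields) with transformer self-attention (long-range dependency). The snippet below is a minimal, hypothetical sketch of that general idea, not the authors' TransRSS implementation; the module name `HybridRadarEncoder`, the layer sizes, and the input shape are illustrative assumptions only.

```python
# Hypothetical sketch of a CNN + self-attention hybrid feature extractor
# (illustrative only; not the TransRSS architecture from the paper).
import torch
import torch.nn as nn

class HybridRadarEncoder(nn.Module):
    def __init__(self, in_ch=1, dim=64, heads=4):
        super().__init__()
        # CNN stem: local features over a radar view (e.g. a range-Doppler map)
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        # Self-attention over spatial tokens: models long-range dependencies
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                       # x: (B, C, H, W) radar view
        f = self.stem(x)                        # (B, dim, H/2, W/2) local features
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)   # (B, H*W/4, dim) spatial tokens
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)   # residual + norm, ViT-style
        return tokens.transpose(1, 2).view(b, c, h, w)

# Usage: encode a batch of single-channel radar views
feats = HybridRadarEncoder()(torch.randn(2, 1, 256, 64))
```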
Date of Conference: 29 May 2023 - 02 June 2023
Date Added to IEEE Xplore: 04 July 2023