Abstract:
In the current fully open and highly dynamic electronic countermeasures environment, acquiring sufficient jamming label priors is particularly challenging. When labeled samples are extremely limited, existing intelligent recognition models for radar active jamming struggle to learn discriminative features, yielding inaccurate and unstable recognition results. This letter therefore proposes a visual-text alignment network (VTANet) that introduces the text modality to reuse label prior information, leveraging labeling knowledge to improve jamming recognition accuracy under few-shot conditions. During training, VTANet uses a text feature encoding module (TFEM) to encode text constructed from the label priors. Through a contrastive learning strategy, the resulting text features guide the visual feature encoding module (VFEM) to learn more discriminative visual time-frequency (TF) representations. Experimental results show that reusing label priors through the text modality significantly improves the intraclass compactness and interclass separability of the visual TF features. At a jamming-to-noise ratio (JNR) of 5 dB with only two labeled samples per jamming type, VTANet achieves a recognition accuracy above 90%, an improvement of more than 5% over existing methods, demonstrating its superiority.
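The abstract describes aligning visual TF features with text features via contrastive learning. As a rough illustration only (the letter does not publish its loss in the abstract), the alignment objective can be sketched as a symmetric InfoNCE-style contrastive loss, in the spirit of CLIP-like training; the function names, the temperature value, and the assumption that matching visual-text pairs share a row index are all hypothetical choices for this sketch, not details from the paper:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project feature vectors onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_alignment_loss(visual_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE-style loss pulling each visual TF feature toward
    its paired text feature and pushing it away from the others.
    Assumes row i of visual_feats matches row i of text_feats."""
    v = l2_normalize(visual_feats)
    t = l2_normalize(text_feats)
    # Cosine-similarity logits, sharpened by the temperature.
    logits = v @ t.T / temperature
    labels = np.arange(logits.shape[0])

    def cross_entropy(lg):
        # Numerically stable log-softmax over each row.
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the visual-to-text and text-to-visual directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In such a scheme, minimizing this loss increases intraclass compactness (matched pairs gain high similarity) and interclass separability (mismatched pairs are suppressed), which is the effect the abstract reports for the visual TF features.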
Published in: IEEE Geoscience and Remote Sensing Letters ( Volume: 22)