Adversarial unsupervised domain adaptation for cross scenario waveform recognition
Introduction
In recent years, Deep Learning (DL) has pushed performance boundaries for a variety of machine learning tasks in fields such as computer vision, natural language processing and speech recognition. DL architectures have also been applied to the communications field, for example in waveform classification [1] and channel encoding and decoding [2]. Waveform recognition has for years been accomplished through traditional approaches that use hand-crafted features such as cumulants, cyclostationary features, and distribution distances [3], [4], [5]. More recent work has introduced deep learning to improve the performance of waveform recognition. Timothy J. O'Shea demonstrates that deep convolutional neural networks are a viable and strong candidate approach for the modulation recognition task [2]. The authors of [1] apply Convolutional Long Short-Term Deep Neural Networks (CLDNN) to this task and achieve higher classification accuracy. In [6], a two-channel convolutional neural network was proposed to further improve the performance of both modulation and protocol recognition.
However, due to a phenomenon known as dataset bias or domain shift, recognition models trained on one large dataset do not generalize well to unfamiliar datasets and tasks [7]. Dataset bias or domain shift may derive from different sampling frequencies, varying wireless propagation conditions, and so on. Given such complex and variable scenarios, it is impossible to train a universal model for all waveform recognition tasks. To this end, T. J. O'Shea et al. [8] suggest a possible mitigation to improve generalization across varying wireless propagation conditions: including domain-matched attention mechanisms, such as the radio transformer network, in the network architecture. However, they do not present a specific implementation method.
Another typical solution is to fine-tune these radio transformer networks on task-specific datasets [6], [8]. However, it is often difficult and expensive to obtain enough labeled data to fine-tune the large number of parameters in deep multilayer networks. Alternatively, domain adaptation, a subclass of transductive transfer learning, attempts to mitigate the harmful effects of domain shift. Assume that we have two domains, a source domain and a target domain, denoting a training dataset with sufficient labeled data and a testing dataset with little or no labeled data, respectively. Transfer learning aims to build learning machines that generalize across domains following different probability distributions; its main technical problem is how to reduce the shift in data distributions across domains. Transductive transfer learning has the following characteristics [9]: 1) the source and target tasks are the same, but their domains can be different; 2) labels are easy to obtain in the source domain but difficult to obtain in the target domain; 3) the marginal probability distributions of the input data in the source and target domains are different. Given the complex and heterogeneous wireless propagation conditions, domain adaptation is a promising solution for cross-scenario waveform recognition.
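Using standard transfer-learning notation (which this passage does not spell out), the transductive setting described above can be written compactly as:

```latex
% Same feature space, different marginal distributions across domains
\mathcal{D}_s = \{\mathcal{X}, P_s(X)\}, \qquad
\mathcal{D}_t = \{\mathcal{X}, P_t(X)\}, \qquad
P_s(X) \neq P_t(X)
% Same predictive task in both domains
\mathcal{T}_s = \mathcal{T}_t = \{\mathcal{Y}, P(Y \mid X)\}
% Labels available only in the source domain
\{(x_i^{s}, y_i^{s})\}_{i=1}^{n_s} \subset \mathcal{D}_s, \qquad
\{x_j^{t}\}_{j=1}^{n_t} \subset \mathcal{D}_t
```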
Recent work in domain adaptation has mostly focused on two directions: mapping source-domain data to the target domain [10], or mapping both domains to a shared space before classification [11], [12], [13]. Both directions require a mapping function. We therefore adopt a deep neural network, which can model complex non-linear mappings, as the mapping function for domain adaptation.
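As a minimal sketch of this idea, a single deep network can serve as the shared mapping function that projects samples from both domains into one feature space. The layer sizes and the 256-dimensional input below are illustrative assumptions, not the architecture used in the paper:

```python
import torch
import torch.nn as nn

# Illustrative mapping function: a small MLP that projects raw samples
# from either domain into a shared 64-dimensional feature space.
# All dimensions here are assumptions for demonstration only.
class Encoder(nn.Module):
    def __init__(self, in_dim=256, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

enc = Encoder()
src = torch.randn(8, 256)    # a source-domain batch
tgt = torch.randn(8, 256)    # a target-domain batch
fs, ft = enc(src), enc(tgt)  # both land in the same feature space
```

The same weights process both domains, which is what allows a later adversarial objective to align the two feature distributions.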
In the original formulation by Goodfellow, a generative adversarial network (GAN) is trained through a min-max game between a generator, which maps noise vectors into the image space, and a discriminator, trained to distinguish generated images from real ones [14]. However, a downside of the standard unconditional GAN is that there is no control over the modes of the data being generated. A GAN can be extended to a conditional model (CGAN) [15] by conditioning both the generator and the discriminator on some extra information y (e.g. labels). In this way, it is possible to control the data generation process.
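The min-max game can be written as the usual GAN value function, with the conditional variant obtained by feeding the extra information y to both players:

```latex
% Unconditional GAN objective [14]
\min_G \max_D \; V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
% Conditional GAN (CGAN) objective [15]: condition both players on y
\min_G \max_D \; V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x \mid y)\bigr]
+ \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z \mid y) \mid y)\bigr)\bigr]
```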
Adversarial adaptation methods have become an increasingly popular incarnation; they seek to minimize the domain discrepancy through an adversarial objective with respect to a domain discriminator, learning a representation that is discriminative of source labels while making the domains indistinguishable. The gradient reversal algorithm [12] treats domain invariance as a binary classification problem and directly maximizes the loss of the domain classifier by reversing its gradients. Adversarial Discriminative Domain Adaptation (ADDA) [11] applies two encoder networks, a source and a target encoder, to map both domains to a similar feature space by adversarial learning with a GAN loss. In contrast, Domain Invariance Feature Augmentation (DIFA) [13] applies a single shared encoder network for the shared space and performs feature augmentation with CGANs to learn the class distribution in the feature space, and can therefore generate an arbitrary number of labeled feature vectors.
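The gradient reversal idea of [12] is straightforward to sketch: the layer is the identity on the forward pass but negates (and optionally scales) gradients on the backward pass, so the encoder is trained to maximize the domain classifier's loss. A minimal PyTorch sketch, with the scale factor lambda as an assumed hyperparameter:

```python
import torch

# Gradient reversal layer in the spirit of [12]: identity forward,
# gradient multiplied by -lambda backward.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # pass features through unchanged

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the encoder;
        # lambd itself receives no gradient, hence the trailing None.
        return -ctx.lambd * grad_output, None

x = torch.ones(3, requires_grad=True)
out = GradReverse.apply(x, 1.0)  # forward: values unchanged
out.sum().backward()             # backward: gradient of +1 becomes -1
```

Placed between the encoder and the domain classifier, this single layer turns ordinary backpropagation into the adversarial min-max update.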
It should be noted that unsupervised cross-scenario waveform recognition has not been considered in the literature. In this paper, we therefore propose a novel waveform recognition method based on adversarial unsupervised domain adaptation (AUDA) to improve cross-scenario recognition performance. To do this, we first design a robust deep neural network, the Encoder, as the mapping function; it transforms both the source and target domains into a shared feature space. We then present an adversarial unsupervised learning framework that, via adversarial learning, makes the features of both domains indistinguishable in the high-dimensional space, so that a classifier trained on the source domain also works well on the target domain. Finally, we conduct numerical experiments on abundant waveform datasets under different scenarios to validate the proposed framework.
Unsupervised domain adaptation architecture for waveform recognition
In this section, we first present our proposed generalized AUDA architecture for waveform recognition, including the supervised and adversarial training procedures, with a two-channel convolutional neural network as the Encoder. Then, two effective AUDA implementations, a one-stage gradient reversal method and a three-stage feature augmentation method, are proposed based on our AUDA architecture for waveform recognition.
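As an illustrative sketch only (this snippet does not reproduce the paper's exact layer configuration), a two-channel convolutional Encoder for I/Q waveforms might look like the following; the channel counts, kernel sizes and 128-sample input length are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical two-channel convolutional Encoder: the two input channels
# carry the in-phase (I) and quadrature (Q) components of the waveform.
class TwoChannelEncoder(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),   # 2 channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # global pooling
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):            # x: (batch, 2, seq_len)
        h = self.conv(x).squeeze(-1)
        return self.fc(h)            # shared feature space

enc = TwoChannelEncoder()
feats = enc(torch.randn(4, 2, 128))  # a batch of 4 I/Q waveforms
```

Features from this Encoder would feed both the label classifier (supervised training) and the domain discriminator (adversarial training).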
Experimental evaluation
In this section, we first introduce the waveform datasets in Table 1. The proposed method is then evaluated on unsupervised adaptation tasks within the modulation and protocol datasets, respectively. We construct one domain adaptation pair for modulation recognition, MR10a ↔ MR04c, and three domain adaptation pairs for protocol recognition: PRin ↔ PRcor, PRin ↔ PRout, and PRout ↔ PRcor. All experiments are performed in the unsupervised setting, where labels in the target domain are withheld.
Conclusion
Inspired by adversarial domain adaptation, we design a novel AUDA waveform recognition framework to improve cross-scenario recognition performance. The proposed approach transforms the source and target domains into a shared feature space with a robust feature extractor network, and domain invariant features are learned by adversarial learning in this shared space. Our experimental results demonstrate that the recognition performance on target domain waveforms can be significantly improved.
CRediT authorship contribution statement
Qing Wang: Conceptualization, Writing - review & editing, Supervision, Project administration. Panfei Du: Methodology, Software, Writing - original draft. Xiaofeng Liu: Validation, Writing - review & editing. Jingyu Yang: Writing - review & editing. Guohua Wang: Writing - review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgement
This work was supported by the National Natural Science Foundation of China under grant 61871282.
References
[1] et al., "Deep architectures for modulation recognition," 2017 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), 2017.
[2] et al., "An introduction to deep learning for the physical layer," IEEE Trans. Cognit. Commun. Networking, 2017.
[3] et al., "Hierarchical digital modulation classification using cumulants," IEEE Trans. Commun., 2000.
[4] et al., "Cyclostationary approaches to signal detection and classification in cognitive radio," 2007 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, 2007.
[5] et al., "Computationally efficient modulation level classification based on probability distribution distance functions," IEEE Commun. Lett., 2011.
[6] et al., "Transferred deep learning based waveform recognition for cognitive passive radar," Signal Process., 2018.
[7] et al., "Unbiased look at dataset bias," CVPR 2011, 2011.
[8] et al., "Over-the-air deep learning based radio signal classification," IEEE J. Sel. Top. Signal Process., 2018.
[9] et al., "A survey on transfer learning," IEEE Trans. Knowl. Data Eng., 2010.
[10] et al., "Learning the roots of visual domain shift," European Conference on Computer Vision (ECCV), 2016.