DOI: 10.1145/3324921.3328792

Targeted Adversarial Examples Against RF Deep Classifiers

Published: 15 May 2019 Publication History

Abstract

Adversarial examples (AdExs) in machine learning for classification of radio frequency (RF) signals can be created in a targeted manner such that they go beyond general misclassification and result in the detection of a specific targeted class. Moreover, these drastic, targeted misclassifications can be achieved with minimal waveform perturbations, resulting in catastrophic impact on deep-learning-based spectrum sensing applications (e.g., WiFi is mistaken for Bluetooth). This work addresses targeted deep learning AdExs, specifically those obtained using the Carlini-Wagner algorithm, and analyzes previously introduced defense mechanisms that performed successfully against non-targeted FGSM-based attacks. To analyze the effects of the Carlini-Wagner attack and the defense mechanisms, we trained neural networks on two datasets. The first dataset is a subset of the DeepSig dataset, comprising three synthetic modulations (BPSK, QPSK, 8-PSK), which we use to train a simple network for modulation recognition. The second dataset contains real-world, well-labeled, curated data from the 2.4 GHz Industrial, Scientific and Medical (ISM) band, which we use to train a network for wireless technology (protocol) classification with three classes: WiFi 802.11n, Bluetooth (BT), and ZigBee. We show that for attacks of limited intensity the impact of the attack, measured as the percentage of misclassifications, is similar for both datasets, and that the proposed defense is effective in both cases. Finally, we use our ISM data to show that the targeted attack is effective against the deep learning classifier but not against a classical demodulator.
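The targeted attack discussed in the abstract can be illustrated with a toy sketch. The snippet below is not the paper's implementation: it applies a simplified Carlini-Wagner-style objective (minimize the squared L2 norm of the perturbation plus a hinge loss on the target-class logit margin) to a hypothetical stand-in classifier, here a linear softmax model over a flattened I/Q vector. The class names, feature dimension, and hyperparameters are illustrative assumptions, and plain gradient descent replaces the Adam optimizer and box constraint of the full algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained RF classifier: a linear model over a flattened
# I/Q feature vector. Classes and dimensions are illustrative assumptions.
CLASSES = ["WiFi", "Bluetooth", "ZigBee"]
D = 64                                   # flattened I/Q length (assumption)
W = rng.normal(size=(len(CLASSES), D))   # pretend these weights were trained
b = np.zeros(len(CLASSES))

def logits(x):
    return W @ x + b

def predict(x):
    return int(np.argmax(logits(x)))

def targeted_cw_sketch(x, target, step=0.005, c=5.0, kappa=1.0, iters=200):
    """Gradient descent on ||delta||^2 + c * max(z_runnerup - z_target, -kappa):
    a simplified Carlini-Wagner-style targeted objective. kappa is the
    confidence margin the target logit must win by."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        z = logits(x + delta)
        # strongest competing class (logits with the target masked out)
        j = int(np.argmax(np.where(np.arange(len(z)) == target, -np.inf, z)))
        grad = 2.0 * delta                   # gradient of ||delta||^2
        if z[j] - z[target] > -kappa:        # hinge still active
            grad += c * (W[j] - W[target])   # push z_target up, z_runnerup down
        delta -= step * grad
    return delta

x = rng.normal(size=D)                   # one synthetic "received burst"
source = predict(x)
target = (source + 1) % len(CLASSES)     # force a different class
delta = targeted_cw_sketch(x, target)

print(CLASSES[source], "->", CLASSES[predict(x + delta)])
print("relative perturbation:", np.linalg.norm(delta) / np.linalg.norm(x))
```

Consistent with the abstract's premise, the perturbation that flips the decision toward the chosen target class stays small relative to the signal, because the L2 penalty trades off against the hinge term once the target margin is reached.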


Cited By

  • (2025) Deep Time-Frequency Denoising Transform Defense for Spectrum Monitoring in Integrated Networks. Tsinghua Science and Technology 30(2):851-863. DOI: 10.26599/TST.2024.9010045. Apr 2025.
  • (2024) Survey of Security Issues in Memristor-Based Machine Learning Accelerators for RF Analysis. Chips 3(2):196-215. DOI: 10.3390/chips3020009. 13 Jun 2024.
  • (2024) Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW. Big Data and Cognitive Computing 8(1):8. DOI: 10.3390/bdcc8010008. 16 Jan 2024.


    Published In

    WiseML 2019: Proceedings of the ACM Workshop on Wireless Security and Machine Learning
    May 2019
    76 pages
    ISBN:9781450367691
    DOI:10.1145/3324921
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. ModRec attack
    2. RF AdExs
    3. RF machine learning
    4. RFML
    5. adversarial attack to RFML
    6. deep learning
    7. neural networks
    8. radio frequency adversarial examples
    9. wireless protocol classification
    10. wireless spectrum sensing

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    WiSec '19


    Cited By

    • (2024) Practical Adversarial Attack on WiFi Sensing Through Unnoticeable Communication Packet Perturbation. Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, 373-387. DOI: 10.1145/3636534.3649367. 29 May 2024.
    • (2024) AIR: Threats of Adversarial Attacks on Deep Learning-Based Information Recovery. IEEE Transactions on Wireless Communications 23(9, Part 1):10698-10711. DOI: 10.1109/TWC.2024.3374699. 1 Sep 2024.
    • (2024) DIBAD: A Disentangled Information Bottleneck Adversarial Defense Method Using Hilbert-Schmidt Independence Criterion for Spectrum Security. IEEE Transactions on Information Forensics and Security 19:3879-3891. DOI: 10.1109/TIFS.2024.3372798. 2024.
    • (2024) Channel-Robust Class-Universal Spectrum-Focused Frequency Adversarial Attacks on Modulated Classification Models. IEEE Transactions on Cognitive Communications and Networking 10(4):1280-1293. DOI: 10.1109/TCCN.2024.3382126. Aug 2024.
    • (2024) Transferable Sparse Adversarial Attack on Modulation Recognition With Generative Networks. IEEE Communications Letters 28(5):999-1003. DOI: 10.1109/LCOMM.2024.3373222. May 2024.
    • (2024) Sparse Adversarial Attack on Modulation Recognition with Adversarial Generative Networks. 2024 4th International Conference on Information Communication and Software Engineering (ICICSE), 104-108. DOI: 10.1109/ICICSE61805.2024.10625694. 10 May 2024.
    • (2024) Low-Interception Waveforms: To Prevent the Recognition of Spectrum Waveform Modulation via Adversarial Examples. Radio Science 59(8). DOI: 10.1029/2022RS007486. 14 Aug 2024.
