
Deep Learning Models as Moving Targets to Counter Modulation Classification Attacks



Abstract:

Malicious entities abuse advanced modulation classification (MC) techniques to launch traffic analysis, selective jamming, evasion, and poisoning attacks. Recent studies show that current defenses against such attacks are static in nature and vulnerable to persistent adversaries who invest time and resources into learning the defenses, enabling them to design and execute more sophisticated attacks that circumvent them. In this paper, we present a moving-target defense framework to support a novel modulation-masking mechanism we develop against advanced and persistent MC attacks. The modulated symbols are first masked with small perturbations so that, to an adversary, they appear to belong to a different modulation scheme, leaving the adversary in a state of ambiguity about the underlying model. By deploying a pool of deep learning models and perturbation-generating techniques, our defense strategy keeps changing (moving) them as needed, making it computationally difficult (cubic time complexity) for adversaries to keep up with the evolving defense over time. We show that overall system performance remains unaffected under our technique. We further demonstrate that, over time, a persistent adversary can learn and eventually circumvent our masking technique, along with other existing defenses, unless a moving-target defense approach is adopted.
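The abstract describes two cooperating mechanisms: masking modulated symbols with small perturbations so an eavesdropper's classifier mislabels the modulation scheme, and rotating ("moving") among a pool of models and perturbation generators so a persistent adversary cannot converge on a fixed defense. The following is a minimal Python sketch of those two ideas under generic assumptions (IQ samples as rows of (I, Q) pairs); all names here, such as MovingTargetMasker and the toy generators, are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: the paper's actual masking and rotation
# algorithms are not reproduced here. All names are hypothetical.
import random
import numpy as np

class MovingTargetMasker:
    """Masks IQ symbols with a small, budget-clipped perturbation and
    periodically rotates its generator, moving-target style."""

    def __init__(self, generators, epsilon=0.05):
        self.generators = generators  # pool of perturbation generators
        self.epsilon = epsilon        # perturbation budget, kept small so the
                                      # legitimate receiver can still demodulate
        self.active = random.choice(self.generators)

    def move(self):
        """Switch generators so an adversary who has learned the current
        defense must start over."""
        self.active = random.choice(self.generators)

    def mask(self, iq):
        """Add a budget-clipped perturbation to a block of IQ samples."""
        delta = np.clip(self.active(iq), -self.epsilon, self.epsilon)
        return iq + delta

def gaussian_perturbation(iq):
    # Toy generator: small Gaussian noise shaped like the input block.
    return 0.01 * np.random.randn(*iq.shape)

def phase_jitter_perturbation(iq):
    # Toy generator: a slight rotation of the (I, Q) constellation,
    # nudging it toward the appearance of another modulation scheme.
    theta = 0.05
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return iq @ rot.T - iq

if __name__ == "__main__":
    masker = MovingTargetMasker([gaussian_perturbation,
                                 phase_jitter_perturbation])
    block = np.random.randn(128, 2)   # 128 symbols as (I, Q) rows
    masked = masker.mask(block)
    masker.move()                     # rotate the defense between blocks
```

In this sketch the perturbation budget epsilon plays the role the abstract assigns to "small perturbations": large enough to confuse an adversary's classifier, small enough that overall system performance is unaffected. A real deployment would draw generators from trained deep learning models rather than the toy functions shown here.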
Date of Conference: 20-23 May 2024
Date Added to IEEE Xplore: 12 August 2024
Conference Location: Vancouver, BC, Canada

