Toward Transferable Adversarial Attacks Against Autoencoder-Based Network Intrusion Detectors


Abstract:

Deploying machine learning (ML)-based network intrusion detection systems has become a mainstream solution for efficiently improving network security. However, recent research has shown that ML models are vulnerable to adversarial examples. Since it is a formidable challenge for attackers to obtain the structure and gradients of an intrusion detector, transferable adversarial attacks that can deceive unknown models pose a greater threat in practical scenarios. In this work, our goal is to investigate the cross-model transferability of adversarial examples against autoencoder (AE)-based network intrusion detectors. Unlike adversarial methods in the image domain, which focus on the distance between the benign input and the adversarial example, adversarial algorithms in the network domain must comply with network protocols and preserve the malicious payload. We first adapt common adversarial attacks from the image domain to AE-based network intrusion detectors under these constraints. The experimental results show that iterative attacks outperform single-step attacks against different AE-based models. At the same time, we find that transferability-oriented adversarial attacks from the image domain do little to improve the transferability of adversarial examples in this scenario, because far fewer features can be modified. To address this issue, from the perspective of the substitute model, we propose the linear autoencoder (LAE), which simply removes the activation functions of the AE model while sharing the same main structure as the original model. Extensive experimental evaluation demonstrates that employing the LAE as the source model significantly improves the transferability of both gradient-based and optimization-based adversarial attack methods.
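To make the abstract's two key ingredients concrete, the sketch below is an illustrative assumption, not the authors' published code: it builds an AE detector and its LAE variant by dropping the activation functions, and runs a masked, PGD-style iterative attack that lowers the reconstruction error (the anomaly score) while leaving protocol-critical features untouched. All layer sizes, the perturbation budget, and the feature mask are hypothetical.

import torch
import torch.nn as nn

def make_autoencoder(n_features: int, linear: bool = False) -> nn.Sequential:
    # With linear=True, activations are replaced by Identity: the LAE variant,
    # which keeps the same main structure as the original AE.
    act = nn.Identity if linear else nn.ReLU
    return nn.Sequential(
        nn.Linear(n_features, 32), act(),
        nn.Linear(32, 8), act(),          # bottleneck
        nn.Linear(8, 32), act(),
        nn.Linear(32, n_features),
    )

def pgd_attack(model, x, mask, eps=0.1, alpha=0.01, steps=40):
    # Iterative (PGD-style) attack that *lowers* the reconstruction error so a
    # malicious flow looks normal. `mask` is 1 for features that may be
    # perturbed and 0 for features that must stay fixed to preserve protocol
    # validity and the malicious payload.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = ((model(x_adv) - x_adv) ** 2).mean()   # anomaly score
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Descend on the score, touching only modifiable features.
            x_adv = x_adv - alpha * grad.sign() * mask
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # epsilon ball around x
            x_adv = x_adv.clamp(0.0, 1.0)             # keep features in range
    return x_adv.detach()

if __name__ == "__main__":
    n_features = 20
    lae = make_autoencoder(n_features, linear=True)   # substitute (source) model
    x = torch.rand(1, n_features)                     # a stand-in flow record
    mask = torch.ones_like(x)
    mask[:, :5] = 0.0   # hypothetical: first five features are immutable
    x_adv = pgd_attack(lae, x, mask)
    # Transferability would then be measured by feeding x_adv to unknown
    # (nonlinear) AE-based detectors trained separately.

The sketch illustrates why the LAE is attractive as a source model: with activations removed, the loss surface is smoother, so gradient steps computed on the LAE are less tied to one model's nonlinearities and may transfer better to unseen AE detectors.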
Published in: IEEE Transactions on Industrial Informatics (Volume: 20, Issue: 12, December 2024)
Page(s): 13863 - 13872
Date of Publication: 14 August 2024

