Abstract:
Deploying machine learning (ML)-based network intrusion detection systems has become a mainstream solution for efficiently improving network security. However, recent research has shown that ML models are vulnerable to adversarial examples. Since it is a formidable challenge for attackers to obtain the structure and gradients of intrusion detectors, transferable adversarial attacks that can deceive unknown models pose a greater threat in practical scenarios. In this work, our goal is to investigate the cross-model transferability of adversarial examples against autoencoder (AE)-based network intrusion detectors. Unlike adversarial methods in the image domain, which focus on the distance between the benign input and the adversarial example, adversarial algorithms in the network domain must comply with network protocols and preserve the malicious payload. We first introduce common adversarial attacks from the image domain into AE-based network intrusion detection under these constraints. The experimental results show that iterative attacks outperform single-step attacks against different AE-based models. At the same time, we find that transferable adversarial attacks from the image domain are not very effective at improving the transferability of adversarial examples in this scenario, because fewer features can be modified. To address this issue, from the perspective of the substitute model, we propose the linear autoencoder (LAE), which simply removes the activation functions of the AE model while sharing the same main structure as the original model. Extensive experimental evaluation demonstrates that by employing the LAE as the source model, the transferability of both gradient-based and optimization-based adversarial attack methods can be improved significantly.
Published in: IEEE Transactions on Industrial Informatics (Volume: 20, Issue: 12, December 2024)
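The abstract describes the LAE substitute as the AE architecture with its activation functions stripped out, and the crafted perturbations transferred to an unseen detector. The following is a minimal sketch of that idea, not the authors' code: the layer sizes, the reconstruction-error score, and the single FGSM-style step are illustrative assumptions.

```python
# Minimal sketch (hypothetical layer sizes): an AE anomaly detector and its
# "linear autoencoder" (LAE) counterpart, which keeps the same main structure
# but removes the activation functions, as described in the abstract.
import torch
import torch.nn as nn


class AE(nn.Module):
    """Standard autoencoder with nonlinear activations."""
    def __init__(self, n_features: int = 41, hidden: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, hidden), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class LAE(nn.Module):
    """Same layer structure as AE, but with the activations removed,
    so the forward pass (and its gradient) is linear in the input."""
    def __init__(self, n_features: int = 41, hidden: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.Linear(32, hidden),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 32),
            nn.Linear(32, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def anomaly_score(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Reconstruction error used as the detection score."""
    return ((model(x) - x) ** 2).mean(dim=-1)


# Illustrative single FGSM-style step on the LAE substitute (epsilon is
# hypothetical): lower the reconstruction error so the sample looks benign,
# then transfer the perturbed sample to an unseen AE-based detector.
substitute = LAE()
x = torch.rand(1, 41, requires_grad=True)       # stand-in feature vector
loss = anomaly_score(substitute, x).sum()
loss.backward()
x_adv = (x - 0.05 * x.grad.sign()).detach().clamp(0.0, 1.0)
```

In a realistic setting the perturbation would additionally be projected onto the feasible set of modifiable traffic features so that protocol validity and the malicious payload are preserved, which the paper treats as hard constraints.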