ABSTRACT
Cyber deception has emerged as a valuable technique in cybersecurity and is closely linked with adversarial Artificial Intelligence. In an era of pervasive automation, it is gaining prominence as a research topic aimed at understanding how novel machine learning algorithms can be deceived by adversarial attacks that exploit vulnerabilities in their models. To this end, the paper describes the state of the art of cyber deception for adversarial AI purposes, focusing on its benefits, challenges, and advanced techniques. In addition, this exploratory research argues that appropriate and timely discovery of adversarial plans and associated actions can enhance one's own cyber resilience by feeding analytical findings about the adversary's intent into decision-making for cyber situational awareness. The study of adversarial thinking is as old as history and is one of the most relevant subjects rapidly incorporated into the operational planning process, a methodology for understanding the operational environment. Adversarial knowledge is used to adapt one's own cyber defences in response to the cyber threat landscape.
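As a concrete illustration of the model-evasion attacks the abstract refers to, the sketch below applies the Fast Gradient Sign Method (FGSM), one of the best-known evasion attacks, to a toy logistic-regression classifier. The weights, bias, input, and perturbation budget here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "victim" model with fixed, illustrative weights.
w = np.array([2.0, -3.0])
b = 0.5

def predict_proba(x):
    """Probability that x belongs to the positive class."""
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: shift x by eps in the sign of the
    loss gradient, i.e. the direction that most increases the
    cross-entropy loss for the true label y_true."""
    p = predict_proba(x)
    grad_x = (p - y_true) * w        # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)

x = np.array([0.3, 0.2])             # clean input, true label 1
p_clean = predict_proba(x)           # ~0.62 -> classified as positive
x_adv = fgsm(x, y_true=1.0, eps=0.3)
p_adv = predict_proba(x_adv)         # ~0.27 -> misclassified as negative
```

With these illustrative numbers, the clean input sits on the positive side of the decision boundary, while the perturbed input, shifted by at most 0.3 per feature, crosses to the negative side: a small, bounded change to the input flips the model's decision.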
Title: The Age of fighting machines: the use of cyber deception for Adversarial Artificial Intelligence in Cyber Defence