DOI: 10.1145/3474370.3485661

Combinatorial Boosting of Classifiers for Moving Target Defense Against Adversarial Evasion Attacks

Published: 15 November 2021

Abstract

Adversarial evasion attacks challenge the integrity of machine learning models by creating out-of-distribution samples that are consistently misclassified by these models. While a variety of detection and mitigation approaches have been proposed, they are typically defeated by the design of even more sophisticated attacks. One of the most promising groups of such approaches creates multiple diversified machine learning models and leverages their ensemble properties for the detection and mitigation of adversarial attacks in a dynamic "moving target" fashion. However, an efficient implementation of such approaches imposes a heavy computational cost: diversity must be designed and enforced across multiple classifiers, and a significant number of them must then be trained. This paper proposes a scalable modification of the dynamic ensemble approach that provides (i) a combinatorial boosting of the number of diversified classifiers, which exponentially expands the scope of reliable decisions available for dynamic "moving target" defense, and (ii) robust methods for fusing the ensemble decisions of the resulting classifiers and their combinations, enhancing confidence in classification decisions in both benign and adversarial scenarios. Two versions of the approach were implemented and tested on machine learning models operating in two different modalities (network intrusion detection and color image classification). Both show a significant increase in resiliency against adversarial evasion attacks with moderate to low impact on the benign performance of the defended machine learning model. For the network modality, different versions of the approach improved the benign accuracy from 98% to 100% while raising the adversarial accuracy from 0% to 90%-95%; for the image modality, benign accuracy remained at the same level of 90% while the adversarial accuracy improved from 0% to about 85%.
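The combinatorial mechanism the abstract describes can be illustrated with a short sketch. The Python example below is a minimal illustration of the idea, not the authors' implementation: it assumes that classifier diversity comes from training base models on random feature subsets, forms all C(n, k) voting ensembles from n trained base models without any additional training, and draws a fresh ensemble for every query as the "moving target." The class name MovingTargetEnsemble and all parameter choices are hypothetical.

```python
# Minimal sketch of combinatorial "moving target" ensembling.
# Assumptions (not from the paper): diversity via random feature
# subsets; fusion via plain majority vote.
import itertools
import random

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


class MovingTargetEnsemble:
    def __init__(self, n_base=8, subset_size=5, seed=0):
        self.n_base = n_base            # number of diversified base classifiers
        self.subset_size = subset_size  # members drawn per classification decision
        self.rng = random.Random(seed)
        self.models = []                # (feature_indices, fitted classifier)

    def fit(self, X, y):
        n_features = X.shape[1]
        for i in range(self.n_base):
            # Diversify each base model by training it on a random feature subset.
            feat_rng = np.random.default_rng(i)
            feats = feat_rng.choice(n_features, size=max(2, n_features // 2),
                                    replace=False)
            clf = DecisionTreeClassifier(random_state=i).fit(X[:, feats], y)
            self.models.append((feats, clf))
        # C(n_base, subset_size) distinct voting ensembles are now available
        # without training a single additional model.
        self.ensembles = list(
            itertools.combinations(range(self.n_base), self.subset_size))
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # Moving target: pick a fresh ensemble for every query.
            member_ids = self.rng.choice(self.ensembles)
            votes = [clf.predict(x[feats].reshape(1, -1))[0]
                     for feats, clf in (self.models[i] for i in member_ids)]
            # Fusion by majority vote over the selected members.
            preds.append(max(set(votes), key=votes.count))
        return np.array(preds)


if __name__ == "__main__":
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    mte = MovingTargetEnsemble().fit(Xtr, ytr)
    print("benign accuracy:", (mte.predict(Xte) == yte).mean())
```

With n_base = 8 and subset_size = 5, only eight models are trained, yet 56 distinct voting ensembles are available to rotate through at inference time; this is the combinatorial boosting the abstract refers to. The paper's actual fusion methods are more elaborate than the plain majority vote sketched here.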


Cited By

  • (2023) "A Survey of Attacks and Defenses for Deep Neural Networks." 2023 IEEE International Conference on Cyber Security and Resilience (CSR), 254-261. DOI: 10.1109/CSR57506.2023.10224947. Online publication date: 31 July 2023.
  • (2022) "Robust Botnet DGA Detection: Blending XAI and OSINT for Cyber Threat Intelligence Sharing." IEEE Access, vol. 10, 34613-34624. DOI: 10.1109/ACCESS.2022.3162588. Online publication date: 2022.

Published In

MTD '21: Proceedings of the 8th ACM Workshop on Moving Target Defense
November 2021, 48 pages
ISBN: 9781450386586
DOI: 10.1145/3474370

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. adversarial evasion attack
  2. adversarial examples
  3. evasion attack
  4. image classification
  5. intrusion detection
  6. machine learning classifier
  7. supervised machine learning

Qualifiers

  • Research-article

Funding Sources

  • U.S. Army Research Laboratory and DARPA

Conference

CCS '21

Acceptance Rates

Overall Acceptance Rate: 40 of 92 submissions, 43%
