A Comparative Analysis of Evolutionary Adversarial One-Pixel Attacks

  • Conference paper
  • Applications of Evolutionary Computation (EvoApplications 2024)

Abstract

Adversarial attacks pose significant challenges to the robustness of machine learning models. This paper explores the one-pixel attack in image classification, a black-box adversarial attack that perturbs a single pixel of an input image to make the classifier predict erroneously. We take a pragmatic approach, employing three evolutionary algorithms - Differential Evolution, Genetic Algorithms, and Covariance Matrix Adaptation Evolution Strategy - to find and optimise these one-pixel attacks, and we focus on understanding how each algorithm generates effective attacks. The experimentation was carried out on CIFAR-10, a widespread benchmark dataset in image classification. The experimental results analyse the following aspects: fitness optimisation, number of evaluations needed to generate an adversarial attack, success rate, number of adversarial attacks found per image, solution-space coverage, and level of distortion applied to the original image. Overall, the experimentation provides insights into the nuances of the one-pixel attack and compares three standard evolutionary algorithms, showcasing each algorithm's potential and evolutionary computation's ability to find solutions in this highly constrained case of adversarial attack.
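
For a concrete sense of the setup, the sketch below shows how such an attack can be encoded and optimised with one of the three algorithms, Differential Evolution (here via SciPy's differential_evolution). It is a minimal illustration under stated assumptions, not the paper's implementation: a candidate solution is the customary (x, y, r, g, b) tuple, the fitness for an untargeted attack is the classifier's confidence in the true class (to be minimised), and predict_proba is a hypothetical placeholder standing in for a trained CIFAR-10 model.

    # Minimal sketch of an untargeted one-pixel attack via Differential Evolution.
    # Assumptions: predict_proba is a placeholder for a real CIFAR-10 classifier;
    # the fitness and DE settings are illustrative, not the paper's exact choices.
    import numpy as np
    from scipy.optimize import differential_evolution

    H, W, N_CLASSES = 32, 32, 10  # CIFAR-10 image geometry

    def predict_proba(image):
        """Placeholder classifier: deterministic pseudo-probabilities per image."""
        rng = np.random.default_rng(int(image.sum()) % (2**32))
        logits = rng.normal(size=N_CLASSES)
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    def apply_one_pixel(image, solution):
        """Copy the image and overwrite one pixel; solution = (x, y, r, g, b)."""
        x, y = int(solution[0]), int(solution[1])
        perturbed = image.copy()
        perturbed[y, x] = np.clip(solution[2:5], 0, 255).astype(np.uint8)
        return perturbed

    def fitness(solution, image, true_label):
        """Untargeted attack: minimise the model's confidence in the true class."""
        return float(predict_proba(apply_one_pixel(image, solution))[true_label])

    image = np.random.default_rng(0).integers(0, 256, (H, W, 3), dtype=np.uint8)
    true_label = 3
    bounds = [(0, W - 1), (0, H - 1), (0, 255), (0, 255), (0, 255)]
    result = differential_evolution(fitness, bounds, args=(image, true_label),
                                    popsize=20, maxiter=50, seed=1, polish=False)
    adversarial = apply_one_pixel(image, result.x)
    print("true-class confidence after attack:", result.fun)
    print("attack succeeds:", predict_proba(adversarial).argmax() != true_label)

In practice the placeholder would be replaced by the network under attack, and a run counts as successful once the predicted class changes; the paper's exact fitness functions and algorithm parameters may differ.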

Notes

  1. https://github.com/carlini/nn_robust_attacks.

  2. https://github.com/luanaclare/evo_one_pixel_attack.

Acknowledgements

This work has been partially supported by Project “NEXUS Pacto de Inovação - Transição Verde e Digital para Transportes, Logística e Mobilidade”, ref. No. 7113, supported by the Recovery and Resilience Plan (PRR) and by the European Funds Next Generation EU, following Notice No. 02/C05-i01/2022.PC645112083-00000059 (project 53), Component 5 - Capitalization and Business Innovation - Mobilizing Agendas for Business Innovation; by Project No. 7059 - Neuraspace - AI fights Space Debris, reference C644877546-00000020, supported by the RRP - Recovery and Resilience Plan and the European Next Generation EU Funds, following Notice No. 02/C05-i01/2022, Component 5 - Capitalization and Business Innovation - Mobilizing Agendas for Business Innovation; and by the FCT - Foundation for Science and Technology, I.P./MCTES through national funds (PIDDAC), within the scope of the CISUC R&D Unit - UIDB/00326/2020 or project code UIDP/00326/2020.

Author information

Corresponding author

Correspondence to Luana Clare.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Clare, L., Marques, A., Correia, J. (2024). A Comparative Analysis of Evolutionary Adversarial One-Pixel Attacks. In: Smith, S., Correia, J., Cintrano, C. (eds) Applications of Evolutionary Computation. EvoApplications 2024. Lecture Notes in Computer Science, vol 14635. Springer, Cham. https://doi.org/10.1007/978-3-031-56855-8_9

  • DOI: https://doi.org/10.1007/978-3-031-56855-8_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56854-1

  • Online ISBN: 978-3-031-56855-8
