
They Might NOT Be Giants: Crafting Black-Box Adversarial Examples Using Particle Swarm Optimization

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12309)

Abstract

As machine learning is deployed in more settings, including security-sensitive applications such as malware detection, the risks posed by adversarial examples that fool machine-learning classifiers have become magnified. Black-box attacks are especially dangerous, as they require only the ability to query the target model and observe the labels it returns, without knowing anything else about the model. Current black-box attacks either have low success rates, require many queries, produce adversarial images that are easily distinguishable from their sources, or offer little control over the outcome of the attack. In this paper, we present AdversarialPSO (code available at https://github.com/rhm6501/AdversarialPSOImages), a black-box attack that uses few queries to create adversarial examples with high success rates. AdversarialPSO is based on Particle Swarm Optimization, a gradient-free evolutionary search algorithm, with special adaptations to make it effective in the black-box setting. It is flexible in balancing the number of queries submitted to the target against the quality of the adversarial examples. We evaluated AdversarialPSO on CIFAR-10, MNIST, and ImageNet, achieving success rates of 94.9%, 98.5%, and 96.9%, respectively, while submitting numbers of queries comparable to prior work. Our results show that black-box attacks can be adapted to favor fewer queries or higher-quality adversarial images, while still maintaining high success rates.


Notes

  1. https://github.com/MadryLab/cifar10_challenge.
  2. https://keras.io/applications/#inceptionv3.
  3. https://github.com/snu-mllab/parsimonious-blackbox-attack.


Acknowledgment

We would like to thank the reviewers for their constructive comments that helped clarify and improve this paper. This material is based upon work supported by the National Science Foundation under Awards No. 1816851, 1433736, and 1922169.

Author information

Corresponding author: Rayan Mosli.

A Appendix

As discussed in Sect. 3.2, the AdversarialPSO attack iteratively performs several operations to generate adversarial examples from images. Algorithm 1 provides a high-level view of the main AdversarialPSO loop that is responsible for initializing the swarm, moving the particles, randomizing the particles, increasing the granularity of the search space, and reversing any movement with a negative fitness:

[Algorithm 1: The main AdversarialPSO loop]
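The main loop described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the particle count, step size, and the simple random moves stand in for the block-based perturbations described later, and `fitness_fn` is assumed to be a caller-supplied function that queries the target model and returns a higher score for more adversarial inputs.

```python
import random

def adversarial_pso(image, fitness_fn, n_particles=5, max_iters=3, step=0.1):
    """Illustrative sketch of the main AdversarialPSO loop: move particles,
    reverse moves that hurt fitness, and track the swarm-wide best input.
    `fitness_fn` scores a candidate (higher = more adversarial)."""
    dim = len(image)
    # Each particle starts at the clean image plus a small random perturbation.
    particles = [[x + random.uniform(-step, step) for x in image]
                 for _ in range(n_particles)]
    best, best_fit = list(image), fitness_fn(image)
    for _ in range(max_iters):
        for p in particles:
            before = fitness_fn(p)
            move = [random.uniform(-step, step) for _ in range(dim)]
            for i in range(dim):
                p[i] += move[i]
            after = fitness_fn(p)
            if after < before:            # movement had negative fitness:
                for i in range(dim):      # step in the opposite direction
                    p[i] -= 2 * move[i]
                after = fitness_fn(p)
            if after > best_fit:          # update the swarm-wide best
                best, best_fit = list(p), after
    return best, best_fit
```

By construction, the returned fitness is never worse than that of the clean image, since the swarm-wide best is only replaced by strictly better candidates.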

In preparation for the attack, AdversarialPSO initializes the swarm by separating the image into blocks and assigning a different set of blocks to each particle. The attack then moves the particles according to their assigned blocks and evaluates each new position to calculate its fitness. Algorithm 2 provides the steps for the initialization process:

[Algorithm 2: Swarm initialization]
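The block partitioning behind this initialization might look like the following sketch. The helper names and the round-robin assignment are illustrative assumptions, not the paper's exact scheme:

```python
def split_into_blocks(height, width, block_size):
    """Partition pixel coordinates into square blocks (illustrative helper)."""
    blocks = []
    for r in range(0, height, block_size):
        for c in range(0, width, block_size):
            blocks.append([(i, j)
                           for i in range(r, min(r + block_size, height))
                           for j in range(c, min(c + block_size, width))])
    return blocks

def assign_blocks(blocks, n_particles):
    """Deal blocks round-robin so each particle owns a disjoint set."""
    return [blocks[i::n_particles] for i in range(n_particles)]
```

Because the sets are disjoint, no two particles perturb the same pixels during initialization, so each initial fitness evaluation attributes the fitness change to one particle's blocks.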

In each iteration, particles are moved using traditional PSO operations, which consist of calculating the velocity of each particle and adding that velocity to the particle’s current position. After each movement, the fitness of the new position is calculated and compared against the particle’s best fitness and the swarm-wide best fitness. Future particle movements depend on the outcome of each fitness comparison. Algorithm 3 provides the steps for the velocity-based particle movements:

[Algorithm 3: Velocity-based particle movement]
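The traditional PSO update referred to here is the textbook Kennedy–Eberhart rule: each dimension's velocity blends inertia with attraction toward the particle's best-known position (`pbest`) and the swarm-wide best (`gbest`). The coefficient values below are common defaults, assumed for illustration:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One textbook PSO velocity/position update: inertia plus random pulls
    toward the particle's own best and the swarm-wide best positions."""
    r1, r2 = random.random(), random.random()
    new_v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

A particle already sitting at both best positions with zero velocity stays put, which is why the attack also needs the randomization step described next to keep exploring.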

In addition to velocity-based movements, in every iteration, each particle is assigned new blocks with directions that are unique to that particle. Algorithm 4 shows the process of assigning blocks and directions to particles:

[Algorithm 4: Assigning blocks and directions to particles]
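This per-iteration randomization could be sketched as sampling a few fresh blocks for a particle and pairing each with its own perturbation direction. The function name, the block count `k`, and the unit-magnitude directions are assumptions for illustration:

```python
import random

def randomize_particle(blocks, k=2):
    """Give one particle k fresh blocks, each paired with its own
    perturbation direction (+1 or -1). Illustrative sketch only."""
    chosen = random.sample(blocks, k=min(k, len(blocks)))
    return [(block, random.choice([-1.0, 1.0])) for block in chosen]
```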

We observed that when a particle movement produces a negative fitness, moving in the opposite direction will most likely produce a positive fitness. Algorithm 5 provides the steps for these reversal operations:

[Algorithm 5: Movement reversal]
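The reversal heuristic can be sketched as a guarded move: apply the candidate perturbation, and if fitness drops, try the opposite step instead, keeping the original position if neither helps. The function name and the keep-if-no-gain fallback are illustrative assumptions:

```python
def move_with_reversal(position, delta, fitness_fn):
    """Apply a candidate move; if fitness drops, step the opposite way
    instead (the reversal heuristic). Illustrative sketch only."""
    base = fitness_fn(position)
    forward = [p + d for p, d in zip(position, delta)]
    if fitness_fn(forward) >= base:
        return forward
    backward = [p - d for p, d in zip(position, delta)]
    return backward if fitness_fn(backward) > base else position
```

Each reversal costs one extra query, which is part of the trade-off the paper describes between query count and adversarial-example quality.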


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Mosli, R., Wright, M., Yuan, B., Pan, Y. (2020). They Might NOT Be Giants: Crafting Black-Box Adversarial Examples Using Particle Swarm Optimization. In: Chen, L., Li, N., Liang, K., Schneider, S. (eds) Computer Security – ESORICS 2020. Lecture Notes in Computer Science, vol 12309. Springer, Cham. https://doi.org/10.1007/978-3-030-59013-0_22


  • DOI: https://doi.org/10.1007/978-3-030-59013-0_22


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59012-3

  • Online ISBN: 978-3-030-59013-0

  • eBook Packages: Computer Science; Computer Science (R0)
