
EvoAttack: suppressive adversarial attacks against object detection models using evolutionary search

Automated Software Engineering

Abstract

State-of-the-art deep neural networks are increasingly used in image classification, recognition, and detection tasks for a range of real-world applications. Moreover, many of these applications are safety-critical, where the failure of the system may cause serious harm, injuries, or even death. Adversarial examples are inputs that have been maliciously modified, yet are difficult to detect, such that machine learning models fail to classify them correctly. While a number of evolutionary search-based approaches have been developed to generate adversarial examples against image classification problems, evolutionary search-based attacks against object detection algorithms remain largely unexplored. This paper describes EvoAttack, which demonstrates how evolutionary search-based techniques can be used as a black-box, model- and data-agnostic approach to attack state-of-the-art object detection algorithms (e.g., RetinaNet, Faster R-CNN, and YOLOv5). A proof-of-concept implementation demonstrates how evolutionary search can generate adversarial examples that existing models fail to correctly process, and can thus be used to assess model robustness against such attacks. In contrast to other adversarial example approaches that cause misclassification or incorrect labeling of objects, EvoAttack applies minor perturbations to generate adversarial examples that suppress the ability of object detection models to detect objects. We applied EvoAttack to popular benchmark datasets for autonomous terrestrial and aerial vehicles.
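To make the black-box, suppression-oriented setting concrete, the following is a minimal sketch in the spirit of EvoAttack, not the paper's algorithm: the victim model (torchvision's pretrained RetinaNet), the truncation-selection loop, and the mutation parameters (eps, rate, population size) are illustrative assumptions. Only the detector's outputs are read to score candidates, never its gradients, which is what makes the attack black-box and model-agnostic.

```python
# Illustrative sketch of a suppressive black-box attack in the spirit of
# EvoAttack. NOT the paper's algorithm: the victim model, selection scheme,
# and mutation parameters below are assumptions made for demonstration.
import torch
import torchvision

# Victim detector (black box: we only read its outputs, never its gradients).
model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
model.eval()

def fitness(image: torch.Tensor, score_thresh: float = 0.5) -> float:
    """Sum of detection confidences above a threshold; lower is better.

    The attack succeeds when the detector no longer reports any confident
    object on the perturbed image (fitness reaches zero).
    """
    with torch.no_grad():
        out = model([image])[0]  # image: float tensor (C, H, W) in [0, 1]
    scores = out["scores"]
    return scores[scores > score_thresh].sum().item()

def mutate(image: torch.Tensor, eps: float = 8 / 255, rate: float = 0.01) -> torch.Tensor:
    """Perturb a small random subset of pixels by a bounded amount."""
    mask = (torch.rand_like(image) < rate).float()
    noise = torch.empty_like(image).uniform_(-eps, eps)
    return (image + mask * noise).clamp(0.0, 1.0)

def evolve(image: torch.Tensor, pop_size: int = 20, generations: int = 100) -> torch.Tensor:
    """Truncation-selection evolutionary loop minimizing detector confidence."""
    population = [mutate(image) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0.0:  # all detections suppressed
            return population[0]
        elite = population[: pop_size // 2]
        population = elite + [mutate(candidate) for candidate in elite]
    return min(population, key=fitness)
```

A full implementation would additionally penalize perturbation magnitude, so that the adversarial example remains difficult to detect, and would typically use crossover alongside mutation; both are omitted here for brevity.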




Data availability

No datasets were generated or analysed during the current study.

Notes

  1. This paper uses the term annotation to refer to the output of an object detection model.
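For illustration, an annotation in this sense typically bundles bounding boxes, class labels, and confidence scores for the objects found in one image. The dict layout below follows torchvision's detection models; the values are made up.

```python
# Hypothetical example of an "annotation" (the output of an object detection
# model) for a single image, in the dict layout used by torchvision detectors.
annotation = {
    "boxes":  [[12.0, 30.5, 240.0, 310.0]],  # one box as [x1, y1, x2, y2]
    "labels": [1],                            # predicted class index
    "scores": [0.93],                         # detection confidence
}
```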


Acknowledgements

This work was supported in part by funding provided by Michigan State University and the BEACON Center.

Author information


Contributions

K.H.C. and B.H.C. have contributed equally to this work.

Corresponding author

Correspondence to Kenneth H. Chan.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chan, K.H., Cheng, B.H.C.: EvoAttack: suppressive adversarial attacks against object detection models using evolutionary search. Autom Softw Eng 32, 3 (2025). https://doi.org/10.1007/s10515-024-00470-9


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s10515-024-00470-9

Keywords