
SC-PCA: Shape Constraint Physical Camouflage Attack Against Vehicle Detection

Published in: Journal of Signal Processing Systems

Abstract

Physical adversarial attacks against vehicle detection are gaining attention. However, most prior work focuses on improving attack ability by amplifying the intensity and scope of the perturbations, which yields a visually suspicious appearance that exposes the attacker's behavior. Motivated by the shape preference exhibited in human cognitive processes, we propose a shape constraint physical camouflage attack (SC-PCA) to generate vehicle camouflage. To generate naturalistic perturbations, we use a contour image as the control condition and introduce a shape-aware loss into a conditional generative adversarial network. We then map the perturbations onto the surface of the target vehicle to form the camouflage. By varying the transformation parameters, vehicle images can be rendered from multiple perspectives and in multiple scenes. Experiments in both the digital and physical worlds demonstrate that our method attacks effectively, deceives vehicle detectors in the real world, and adapts to changes in angle, distance, and background. Moreover, a human perception survey indicates that our approach outperforms state-of-the-art techniques.
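The abstract describes conditioning perturbation generation on a contour image and adding a shape-aware loss so that generated textures stay consistent with the given shape. The paper's exact formulation is not reproduced here; as a rough, hypothetical illustration (the Sobel-edge formulation and all function names below are assumptions, not the authors' method), such a term could penalise the distance between the generated texture's edge map and the conditioning contour:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a 2-D grayscale image (values in [0, 1])."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")  # replicate border pixels
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def shape_aware_loss(texture, contour):
    """Mean absolute difference between the texture's edge map and the
    conditioning contour image: low when the generated perturbation's
    shapes follow the given contour, high when they drift away from it."""
    return float(np.mean(np.abs(sobel_edges(texture) - contour)))
```

In a cGAN training loop this term would be weighted against the adversarial (detector-fooling) loss, trading attack strength for shape naturalness; the weighting scheme here is likewise only a guess at the general idea.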


(The full text contains Figures 1–12 and Algorithm 1, not reproduced here.)




Author information


Contributions

Hao Wang: Conceptualization, Methodology, Writing - Review & Editing. Jingjing Qin: Software, Validation, Formal Analysis, Writing - Original Draft. Yixue Huang: Algorithm Implementation, Visualization. Genping Wu: Methodology, Formal Analysis. Hongfeng Zhang: Software, Algorithm Implementation. Jintao Yang: Methodology, Writing - Review & Editing.

Corresponding author

Correspondence to Jintao Yang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, H., Qin, J., Huang, Y. et al. SC-PCA: Shape Constraint Physical Camouflage Attack Against Vehicle Detection. J Sign Process Syst 95, 1405–1424 (2023). https://doi.org/10.1007/s11265-023-01890-8

