ImgQuant: Towards Adversarial Defense with Robust Boundary via Dual-Image Quantization

  • Conference paper
  • Pattern Recognition and Computer Vision (PRCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15034)

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial example attacks, which pose a potential threat to safety-sensitive applications such as autonomous driving. Previous research on adversarial defense has mainly concentrated on modifying DNN models, preprocessing adversarial examples, and detecting adversarial examples. However, these methods usually offer limited defense effectiveness and lack a well-defined, robust boundary. To tackle these problems, we propose a novel adversarial defense mechanism, ImgQuant, which enhances model adversarial robustness through dual-image quantization. Compared with existing methods, ImgQuant has two competitive advantages. First, it diminishes the adversary's search space by squeezing out unnecessary input details, thereby shrinking the space in which adversarial examples can live and improving the adversarial robustness of the model. Second, we implement dual-image quantization through a client-server communication model to establish a robust security boundary for ImgQuant, which guarantees the elimination of adversarial noise as long as the perturbation magnitude \(\epsilon \) does not exceed the boundary. ImgQuant is comprehensively verified on standard datasets and extended to real-world road sign recognition. Extensive experiments show that ImgQuant maintains consistently high accuracy within the robust security boundary under various attacks, and that it can also be used to improve the performance of adversarial training.
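The robust-boundary claim can be made concrete with a small sketch. The code below is not the paper's implementation: it assumes a plain uniform quantizer in the spirit of bit-depth reduction (feature squeezing), applied identically on the client and server sides, and the names quantize and levels are illustrative. If the client-side reference image already lies on the quantization grid, any \(\ell _\infty \) perturbation strictly smaller than half the grid step \(q\) is undone by server-side re-quantization.

    import numpy as np

    def quantize(img: np.ndarray, levels: int = 8) -> np.ndarray:
        """Snap pixel intensities in [0, 1] onto a uniform grid of `levels` values."""
        step = 1.0 / (levels - 1)           # grid spacing q
        return np.round(img / step) * step  # round each pixel to the nearest grid point

    # Toy check of the boundary intuition: when the clean image is already
    # grid-aligned (e.g. quantized once on the client), any perturbation with
    # L_inf magnitude strictly below q / 2 is removed by re-quantization.
    rng = np.random.default_rng(0)
    clean = quantize(rng.random((4, 4)))                   # grid-aligned "client" image
    q = 1.0 / 7                                            # grid step for levels = 8
    delta = rng.uniform(-0.49 * q, 0.49 * q, clean.shape)  # epsilon < q / 2
    perturbed = np.clip(clean + delta, 0.0, 1.0)           # adversarial input stays in [0, 1]
    assert np.allclose(quantize(perturbed), clean)         # adversarial noise eliminated

Under this reading, the robust security boundary corresponds to \(\epsilon < q/2\): using fewer quantization levels widens the boundary but discards more input detail, which is presumably the trade-off the dual-image scheme must balance.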

Acknowledgements

This work is supported by the Collaborative Innovation Center of Novel Software Technology and Industrialization.

Author information

Corresponding author

Correspondence to Lijun Chen.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 61 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Lv, H., Jiang, S., Wan, T., Chen, L. (2025). ImgQuant: Towards Adversarial Defense with Robust Boundary via Dual-Image Quantization. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2024. Lecture Notes in Computer Science, vol 15034. Springer, Singapore. https://doi.org/10.1007/978-981-97-8505-6_2

  • DOI: https://doi.org/10.1007/978-981-97-8505-6_2

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-8504-9

  • Online ISBN: 978-981-97-8505-6

  • eBook Packages: Computer Science, Computer Science (R0)
