Abstract
Deep neural networks (DNNs) are vulnerable to adversarial example attacks, which pose a potential threat to safety-sensitive autonomous driving. Previous research on adversarial defense mainly concentrates on modifying DNN models, preprocessing adversarial examples, and detecting adversarial examples. However, these methods usually offer limited defense effectiveness and lack a well-defined robust boundary. To tackle these problems, we propose a novel adversarial defense mechanism, ImgQuant, which enhances model adversarial robustness through dual-image quantization. Compared with existing methods, ImgQuant has two competitive advantages. First, it diminishes the adversary's search space by squeezing unnecessary input details, thereby shrinking the space in which adversarial examples can exist and improving the adversarial robustness of the model. Second, we implement dual-image quantization through a client-server communication model to establish a robust security boundary for ImgQuant. This ensures that adversarial noise is eliminated as long as the perturbation magnitude \(\epsilon\) does not exceed the boundary. Our proposed ImgQuant is comprehensively verified on universal datasets and extended to real-world road sign recognition. Extensive experiments show that ImgQuant maintains consistently high accuracy within the robust security boundary under various attacks, and can also be used to improve the performance of adversarial training.
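To illustrate the input-squeezing idea the abstract describes, the sketch below reduces the color bit depth of an input image before classification, so that many slightly perturbed variants of an image collapse to the same quantized input. This is a minimal sketch under our own assumptions: the function name `quantize_image`, the `bits` parameter, and the use of PyTorch tensors in [0, 1] are ours, and plain single-image rounding is only a stand-in for the paper's dual-image quantization, which additionally establishes the robust boundary on \(\epsilon\).

```python
import torch

def quantize_image(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Squeeze an image in [0, 1] to 2**bits levels per channel.

    All pixel values that fall into the same quantization bin map to one
    representative value, which removes fine-grained input details an
    adversary could otherwise exploit.
    """
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

if __name__ == "__main__":
    x = torch.rand(1, 3, 32, 32)                    # clean image in [0, 1]
    delta = 0.01 * torch.sign(torch.randn_like(x))  # small perturbation
    x_adv = (x + delta).clamp(0.0, 1.0)

    # With simple rounding, pixels that sit near a bin edge can still flip
    # to a neighboring level; closing this gap with a guaranteed boundary
    # is what the paper's dual-image quantization is designed for.
    diff = (quantize_image(x) != quantize_image(x_adv)).float().mean()
    print(f"fraction of pixels differing after quantization: {diff:.3f}")
```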
Acknowledgements
This work is supported by the Collaborative Innovation Center of Novel Software Technology and Industrialization.