Abstract
Adversarial perturbation is a severe threat to deep-learning-based systems such as classification and recognition because it causes the system to output wrong answers. Designing systems that are certifiably robust against adversarial perturbation is important, especially for security-critical applications such as face recognition. However, most studies of certifiable robustness concern classifiers, whose characteristics differ considerably from those of recognition systems used for verification: the former operate in a closed-set scenario, whereas the latter operate in an open-set scenario. In this study, we show that, as in image classification, the 1-Lipschitz condition is sufficient for certifiable robustness of a face recognition system. Furthermore, for a given pair of facial images, we derive an upper bound on the adversarial perturbation under which a 1-Lipschitz face recognition system remains robust. Finally, we find that this theoretical result must be applied with care in practice: applying such a training method to typical face recognition systems results in a very small upper bound on the adversarial perturbation. We address this by proposing an alternative training method that attains a certifiably robust face recognition system with large upper bounds. All of these theoretical results are supported by experiments on a proof-of-concept implementation. We have released our source code on GitHub to facilitate further study.
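The certification argument behind the abstract can be sketched as follows (a minimal illustration only, not the paper's implementation; the threshold `tau`, the embeddings, and the function name are hypothetical). For a 1-Lipschitz feature extractor f, ||f(x + δ) − f(x)||₂ ≤ ||δ||₂, so a verification decision based on the embedding distance d = ||f(x₁) − f(x₂)||₂ with accept threshold τ cannot flip while ||δ||₂ is smaller than the margin |d − τ|:

```python
import numpy as np

def certified_radius(e1, e2, tau):
    """Certified l2 perturbation radius for one verification pair.

    For a 1-Lipschitz extractor, perturbing one input image by delta
    moves its embedding by at most ||delta||_2, so the embedding
    distance d = ||e1 - e2||_2 changes by at most ||delta||_2.  The
    accept/reject decision (d <= tau vs. d > tau) therefore cannot
    flip while ||delta||_2 < |d - tau|.
    """
    d = np.linalg.norm(np.asarray(e1, dtype=float) - np.asarray(e2, dtype=float))
    return abs(d - tau)

# Toy example with hypothetical 2-D embeddings and threshold tau = 1.0.
e1 = np.array([0.6, 0.8])  # unit-norm embedding of image 1
e2 = np.array([0.8, 0.6])  # unit-norm embedding of image 2
r = certified_radius(e1, e2, tau=1.0)
```

In this sketch a larger margin |d − τ| directly translates into a larger certified radius, which is why the paper's concern about very small upper bounds matters in practice.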
Notes
- 1.
In this paper, we only consider the Lipschitz constant defined in the \(\ell _{2}\) norm rather than the \(\ell _{\infty }\) norm or the general \(\ell _{p}\) norm for \(p \ge 1\).
- 2.
In Appendix C.1, we provide a visualization of Fig. 1b in MNIST and CIFAR10.
Acknowledgement
This work was supported by Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2024 (RS-2024-00332210).
Electronic supplementary material
Below is the link to the electronic supplementary material.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Paik, S., Kim, D., Hwang, C., Kim, S., Seo, J.H. (2025). Towards Certifiably Robust Face Recognition. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15143. Springer, Cham. https://doi.org/10.1007/978-3-031-73013-9_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-73012-2
Online ISBN: 978-3-031-73013-9
eBook Packages: Computer Science, Computer Science (R0)