
Towards Certifiably Robust Face Recognition

  • Conference paper
  • First Online:
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Adversarial perturbations pose a severe threat to deep learning-based systems such as classification and recognition systems because they cause the system to output wrong answers. Designing systems that are robust against adversarial perturbation in a certifiable manner is therefore important, especially for security-related systems such as face recognition. However, most studies of certifiable robustness concern classifiers, whose characteristics differ considerably from those of recognition systems used for verification: the former operate in the closed-set scenario, whereas the latter operate in the open-set scenario. In this study, we show that, as in image classification, the 1-Lipschitz condition is sufficient for certifiable robustness of a face recognition system. Furthermore, for a given pair of facial images, we derive an upper bound on the adversarial perturbation under which a 1-Lipschitz face recognition system remains robust. Finally, we find that this theoretical result must be applied carefully in practice: applying an existing 1-Lipschitz training method to typical face recognition systems yields a very small upper bound on the adversarial perturbation. We address this by proposing an alternative training method that attains a certifiably robust face recognition system with large upper bounds. All these theoretical results are supported by experiments on a proof-of-concept implementation. We have released our source code on GitHub to facilitate further study.
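The certification idea sketched in the abstract can be illustrated in a few lines: if the embedding f is 1-Lipschitz and verification thresholds the embedding distance at τ, then no ℓ2 perturbation of x smaller than |τ − ‖f(x) − f(y)‖| can flip the match decision for the pair (x, y). The snippet below is a toy illustration of that intuition only; the orthogonal-projection embedding, threshold value, and variable names are our own choices, not the paper's actual bound or training method.

```python
import numpy as np

# Toy 1-Lipschitz "embedding" for illustration: any f with
# ||f(x) - f(y)|| <= ||x - y|| works; here, rows of an orthogonal
# matrix give spectral norm 1, hence a 1-Lipschitz linear map.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
W = Q[:8, :]

def f(x):
    return W @ x

def certified_radius(x, y, tau):
    """For 1-Lipschitz f, the match decision on (x, y) cannot flip under
    any perturbation delta of x with ||delta||_2 < r, where
    r = |tau - ||f(x) - f(y)|||. (Intuition only, not the paper's bound.)"""
    return abs(tau - np.linalg.norm(f(x) - f(y)))

x, y, tau = rng.standard_normal(16), rng.standard_normal(16), 1.0
r = certified_radius(x, y, tau)
d0 = np.linalg.norm(f(x) - f(y))

# Any perturbation strictly smaller than r leaves the decision unchanged.
delta = rng.standard_normal(16)
delta *= 0.99 * r / np.linalg.norm(delta)
d1 = np.linalg.norm(f(x + delta) - f(y))
assert (d0 <= tau) == (d1 <= tau)
```

The assertion holds for any such perturbation, since the 1-Lipschitz property bounds the change in embedding distance by the input-space perturbation norm.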


Notes

  1. In this paper, we only consider the Lipschitz constant defined in the \(\ell _{2}\) norm rather than the \(\ell _{\infty }\) norm or the general \(\ell _{p}\) norm for \(p \ge 1\).

  2. In Appendix C.1, we provide a visualization of Fig. 1b on MNIST and CIFAR10.

  3. There are stronger attacks than PGD and C&W, such as AutoAttack [11]. We selected PGD and C&W because they are also applicable to the open-set FR scenario [78]. A detailed discussion of AutoAttack is provided in Appendix C.2.
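Note 1's restriction to the ℓ2 norm has a computational payoff: for a linear layer x ↦ Wx, the ℓ2 Lipschitz constant is exactly the largest singular value of W, which spectral-normalization-style methods [42] estimate cheaply by power iteration. A small NumPy sketch of that standard estimate (the matrix size and iteration count are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 128))

# Exact l2 Lipschitz constant of x -> W @ x: the largest singular value.
lip_exact = np.linalg.svd(W, compute_uv=False)[0]

# Power iteration approximates it without a full SVD, as in
# spectral normalization [42].
u = rng.standard_normal(64)
for _ in range(200):
    v = W.T @ u
    v /= np.linalg.norm(v)
    u = W @ v
    u /= np.linalg.norm(u)
lip_power = u @ W @ v  # Rayleigh-quotient estimate of the top singular value

assert abs(lip_exact - lip_power) < 1e-2 * lip_exact
```

Constraining every layer's spectral norm to 1 in this way is one common route to a 1-Lipschitz network, though composing many such layers is exactly where the small-upper-bound issue discussed in the abstract arises.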

References

  1. RobustBench: a standardized benchmark for adversarial robustness. https://robustbench.github.io/

  2. Amada, T., Kakizaki, K., Liew, S.P., Araki, T., Keshet, J., Furukawa, J.: Adversarial robustness for face recognition: how to introduce ensemble diversity among feature extractors? In: SafeAI@AAAI (2021)

  3. Anil, C., Lucas, J., Grosse, R.: Sorting out Lipschitz function approximation. In: International Conference on Machine Learning, pp. 291–301. PMLR (2019)

  4. Araujo, A., Havens, A., Delattre, B., Allauzen, A., Hu, B.: A unified algebraic perspective on Lipschitz neural networks. arXiv preprint arXiv:2303.03169 (2023)

  5. Attias, I., Kontorovich, A., Mansour, Y.: Improved generalization bounds for robust learning. In: Algorithmic Learning Theory, pp. 162–183. PMLR (2019)

  6. Béthune, L., Boissin, T., Serrurier, M., Mamalet, F., Friedrich, C., Gonzalez Sanz, A.: Pay attention to your loss: understanding misconceptions about Lipschitz neural networks. NeurIPS 35, 20077–20091 (2022)

  7. Boutros, F., Damer, N., Kirchbuchner, F., Kuijper, A.: ElasticFace: elastic margin loss for deep face recognition. In: CVPRW, pp. 1578–1587 (2022)

  8. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)

  9. Cohen, J., Rosenfeld, E., Kolter, Z.: Certified adversarial robustness via randomized smoothing. In: International Conference on Machine Learning, pp. 1310–1320. PMLR (2019)

  10. Croce, F., et al.: RobustBench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670 (2020)

  11. Croce, F., Hein, M.: Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In: International Conference on Machine Learning, pp. 2206–2216. PMLR (2020)

  12. Cui, J., Tian, Z., Zhong, Z., Qi, X., Yu, B., Zhang, H.: Decoupled Kullback-Leibler divergence loss. arXiv preprint arXiv:2305.13948 (2023)

  13. Deb, D., Zhang, J., Jain, A.K.: AdvFaces: adversarial face synthesis. arXiv preprint arXiv:1908.05008 (2019)

  14. Deng, J., Guo, J., Ververas, E., Kotsia, I., Zafeiriou, S.: RetinaFace: single-shot multi-level face localisation in the wild. In: CVPR, pp. 5203–5212 (2020)

  15. Deng, J., Guo, J., Xue, N., Zafeiriou, S.: ArcFace: additive angular margin loss for deep face recognition. In: CVPR, pp. 4690–4699 (2019)

  16. Dong, Y., et al.: Efficient decision-based black-box adversarial attacks on face recognition. In: CVPR, pp. 7714–7722 (2019)

  17. Engstrom, L., Ilyas, A., Santurkar, S., Tsipras, D., Tran, B., Madry, A.: Adversarial robustness as a prior for learned representations. arXiv preprint arXiv:1906.00945 (2019)

  18. Goel, A., Agarwal, A., Vatsa, M., Singh, R., Ratha, N.K.: DNDNet: reconfiguring CNN for adversarial robustness. In: CVPRW, pp. 22–23 (2020)

  19. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  20. Goswami, G., Ratha, N., Agarwal, A., Singh, R., Vatsa, M.: Unravelling robustness of deep learning based face recognition against adversarial attacks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)

  21. Gowal, S., et al.: Scalable verified training for provably robust image classification. In: ICCV, pp. 4842–4851 (2019)

  22. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR, pp. 770–778 (2016)

  23. Hu, K., Zou, A., Wang, Z., Leino, K., Fredrikson, M.: Scaling in depth: unlocking robustness certification on ImageNet. arXiv preprint arXiv:2301.12549 (2023)

  24. Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. In: Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition (2008)

  25. Huang, Y., et al.: CurricularFace: adaptive curriculum learning loss for deep face recognition. In: CVPR, pp. 5901–5910 (2020)

  26. Jeong, J., Shin, J.: Consistency regularization for certified robustness of smoothed classifiers. NeurIPS 33, 10558–10570 (2020)

  27. Jia, S., et al.: Adv-Attribute: inconspicuous and transferable adversarial attack on face recognition. NeurIPS 35, 34136–34147 (2022)

  28. Kim, M., Jain, A.K., Liu, X.: AdaFace: quality adaptive margin for face recognition. In: CVPR, pp. 18750–18759 (2022)

  29. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  30. Lee, G.H., Yuan, Y., Chang, S., Jaakkola, T.: Tight certificates of adversarial robustness for randomly smoothed classifiers. NeurIPS 32 (2019)

  31. Leino, K., Wang, Z., Fredrikson, M.: Globally-robust neural networks. In: International Conference on Machine Learning, pp. 6212–6222. PMLR (2021)

  32. Levine, A., Singla, S., Feizi, S.: Certifiably robust interpretation in deep learning. arXiv preprint arXiv:1905.12105 (2019)

  33. Levine, A.J., Feizi, S.: Improved, deterministic smoothing for l1 certified robustness. In: International Conference on Machine Learning, pp. 6254–6264. PMLR (2021)

  34. Li, Q., Hu, Y., Liu, Y., Zhang, D., Jin, X., Chen, Y.: Discrete point-wise attack is not enough: generalized manifold adversarial attack for face recognition. In: CVPR, pp. 20575–20584 (2023)

  35. Li, Q., Haque, S., Anil, C., Lucas, J., Grosse, R.B., Jacobsen, J.H.: Preventing gradient attenuation in Lipschitz constrained convolutional networks. NeurIPS 32 (2019)

  36. Li, Z., et al.: Sibling-Attack: rethinking transferable adversarial attacks against face recognition. In: CVPR, pp. 24626–24637 (2023)

  37. Liu, W., Wen, Y., Yu, Z., Li, M., Raj, B., Song, L.: SphereFace: deep hypersphere embedding for face recognition. In: CVPR, pp. 212–220 (2017)

  38. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)

  39. Massoli, F.V., Carrara, F., Amato, G., Falchi, F.: Detection of face recognition adversarial attacks. Comput. Vis. Image Underst. 202, 103103 (2021)

  40. Meng, Q., Zhao, S., Huang, Z., Zhou, F.: MagFace: a universal representation for face recognition and quality assessment. In: CVPR, pp. 14225–14234 (2021)

  41. Meunier, L., Delattre, B.J., Araujo, A., Allauzen, A.: A dynamical system perspective for Lipschitz neural networks. In: International Conference on Machine Learning, pp. 15484–15500. PMLR (2022)

  42. Miyato, T., Kataoka, T., Koyama, M., Yoshida, Y.: Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957 (2018)

  43. Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: CVPR, pp. 2574–2582 (2016)

  44. Moschoglou, S., Papaioannou, A., Sagonas, C., Deng, J., Kotsia, I., Zafeiriou, S.: AgeDB: the first manually collected, in-the-wild age database. In: CVPRW, pp. 51–59 (2017)

  45. Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., Swami, A.: The limitations of deep learning in adversarial settings. In: 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387. IEEE (2016)

  46. Peng, S., et al.: Robust principles: architectural design principles for adversarially robust CNNs. In: BMVC. BMVA (2023)

  47. Pérez, J.C., Alfarra, M., Thabet, A., Arbeláez, P., Ghanem, B.: Towards characterizing the semantic robustness of face recognition. In: CVPRW, pp. 315–325 (2023)

  48. Prach, B., Lampert, C.H.: Almost-orthogonal layers for efficient general-purpose Lipschitz networks. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds.) ECCV 2022. LNCS, vol. 13681, pp. 350–365. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-19803-8_21

  49. Prach, B., Lampert, C.H.: 1-Lipschitz neural networks are more expressive with n-activations. arXiv preprint arXiv:2311.06103 (2023)

  50. Raghunathan, A., Steinhardt, J., Liang, P.: Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344 (2018)

  51. Raghunathan, A., Steinhardt, J., Liang, P.S.: Semidefinite relaxations for certifying robustness to adversarial examples. NeurIPS 31 (2018)

  52. Salman, H., Ilyas, A., Engstrom, L., Kapoor, A., Madry, A.: Do adversarially robust ImageNet models transfer better? NeurIPS 33, 3533–3545 (2020)

  53. Salman, H., et al.: Provably robust deep learning via adversarially trained smoothed classifiers. NeurIPS 32 (2019)

  54. Salman, H., Yang, G., Zhang, H., Hsieh, C.J., Zhang, P.: A convex relaxation barrier to tight robustness verification of neural networks. NeurIPS 32 (2019)

  55. Sengupta, S., Chen, J.C., Castillo, C., Patel, V.M., Chellappa, R., Jacobs, D.W.: Frontal to profile face verification in the wild. In: 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–9. IEEE (2016)

  56. Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.K.: Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528–1540 (2016)

  57. Singla, S., Feizi, S.: Skew orthogonal convolutions. In: International Conference on Machine Learning, pp. 9756–9766. PMLR (2021)

  58. Singla, S., Singla, S., Feizi, S.: Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100. arXiv preprint arXiv:2108.04062 (2021)

  59. Szegedy, C., et al.: Intriguing properties of neural networks. In: ICLR (2014)

  60. Tong, L., et al.: FaceSec: a fine-grained robustness evaluation framework for face recognition systems. In: CVPR, pp. 13254–13263 (2021)

  61. Trockman, A., Kolter, J.Z.: Orthogonalizing convolutional layers with the Cayley transform. arXiv preprint arXiv:2104.07167 (2021)

  62. Tsuzuku, Y., Sato, I., Sugiyama, M.: Lipschitz-margin training: scalable certification of perturbation invariance for deep neural networks. NeurIPS 31 (2018)

  63. Tu, Z., Zhang, J., Tao, D.: Theoretical analysis of adversarial learning: a minimax approach. NeurIPS 32 (2019)

  64. Voracek, V., Hein, M.: Improving l1-certified robustness via randomized smoothing by leveraging box constraints. In: International Conference on Machine Learning, pp. 35198–35222. PMLR (2023)

  65. Wang, D., Tan, X.: Robust distance metric learning via Bayesian inference. IEEE Trans. Image Process. 27(3), 1542–1553 (2017)

  66. Wang, H., et al.: CosFace: large margin cosine loss for deep face recognition. In: CVPR, pp. 5265–5274 (2018)

  67. Wang, L., Liu, X., Yi, J., Jiang, Y., Hsieh, C.J.: Provably robust metric learning. NeurIPS 33, 19302–19313 (2020)

  68. Wang, R., Manchester, I.: Direct parameterization of Lipschitz-bounded deep networks. In: International Conference on Machine Learning, pp. 36093–36110. PMLR (2023)

  69. Wang, Z., Pang, T., Du, C., Lin, M., Liu, W., Yan, S.: Better diffusion models further improve adversarial training. arXiv preprint arXiv:2302.04638 (2023)

  70. Wen, Y., Liu, W., Weller, A., Raj, B., Singh, R.: SphereFace2: binary classification is all you need for deep face recognition. arXiv preprint arXiv:2108.01513 (2021)

  71. Weng, L., et al.: Towards fast computation of certified robustness for ReLU networks. In: International Conference on Machine Learning, pp. 5276–5285. PMLR (2018)

  72. Wong, E., Rice, L., Kolter, J.Z.: Fast is better than free: revisiting adversarial training. arXiv preprint arXiv:2001.03994 (2020)

  73. Wu, Y., Zhang, H., Huang, H.: RetrievalGuard: provably robust 1-nearest neighbor image retrieval. In: International Conference on Machine Learning, pp. 24266–24279. PMLR (2022)

  74. Xiao, Y., Pun, C.M., Liu, B.: Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation. Pattern Recogn. 115, 107903 (2021)

  75. Xie, C., Wang, J., Zhang, Z., Zhou, Y., Xie, L., Yuille, A.: Adversarial examples for semantic segmentation and object detection. In: ICCV, pp. 1369–1378 (2017)

  76. Xu, X., Li, L., Li, B.: LOT: layer-wise orthogonal training on improving l2 certified robustness. NeurIPS 35, 18904–18915 (2022)

  77. Yang, X., Yang, D., Dong, Y., Su, H., Yu, W., Zhu, J.: RobFR: benchmarking adversarial robustness on face recognition. arXiv preprint arXiv:2007.04118 (2020)

  78. Yang, X., Yang, D., Dong, Y., Su, H., Yu, W., Zhu, J.: RobFR: benchmarking adversarial robustness on face recognition (2021)

  79. Yoshida, Y., Miyato, T.: Spectral norm regularization for improving the generalizability of deep learning. arXiv preprint arXiv:1705.10941 (2017)

  80. Zhai, R., et al.: MACER: attack-free and scalable robust training via maximizing certified radius. arXiv preprint arXiv:2001.02378 (2020)

  81. Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., Jordan, M.: Theoretically principled trade-off between robustness and accuracy. In: International Conference on Machine Learning, pp. 7472–7482. PMLR (2019)

  82. Zhang, H., et al.: Towards stable and efficient training of verifiably robust neural networks. arXiv preprint arXiv:1906.06316 (2019)

  83. Zhang, H., Weng, T.W., Chen, P.Y., Hsieh, C.J., Daniel, L.: Efficient neural network robustness certification with general activation functions. NeurIPS 31 (2018)

  84. Zhang, X., Zhao, R., Qiao, Y., Wang, X., Li, H.: AdaCos: adaptively scaling cosine logits for effectively learning deep face representations. In: CVPR, pp. 10823–10832 (2019)

  85. Zhou, J., Jia, X., Li, Q., Shen, L., Duan, J.: UniFace: unified cross-entropy loss for deep face recognition. In: ICCV, pp. 20730–20739 (2023)

  86. Zhou, J., Liang, C., Chen, J.: Manifold projection for adversarial defense on face recognition. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12375, pp. 288–305. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58577-8_18


Acknowledgement

This work was supported by Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2024 (RS-2024-00332210).

Author information


Corresponding author

Correspondence to Jae Hong Seo.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 5064 KB)

Rights and permissions

Reprints and permissions

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Paik, S., Kim, D., Hwang, C., Kim, S., Seo, J.H. (2025). Towards Certifiably Robust Face Recognition. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15143. Springer, Cham. https://doi.org/10.1007/978-3-031-73013-9_9


  • DOI: https://doi.org/10.1007/978-3-031-73013-9_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73012-2

  • Online ISBN: 978-3-031-73013-9

  • eBook Packages: Computer Science, Computer Science (R0)
