AdversariaLeak: External Information Leakage Attack Using Adversarial Samples on Face Recognition Systems

  • Conference paper
  • First Online:
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Face recognition (FR) systems are vulnerable to external information leakage (EIL) attacks, which can reveal sensitive information about the training data, compromising both the confidentiality of the company’s proprietary data and the privacy of the individuals concerned. Existing EIL attacks rely on unrealistic assumptions, such as a high query budget and massive computational power on the attacker’s side, which make them impractical. We present AdversariaLeak, a novel and practical query-based EIL attack that targets the face verification model of an FR system using carefully selected adversarial samples. AdversariaLeak uses substitute models to craft adversarial samples, which are then handpicked to infer sensitive information. Our extensive evaluation on the MAAD-Face and CelebA datasets, which includes over 200 different target models, shows that AdversariaLeak outperforms state-of-the-art EIL attacks in inferring the property that best characterizes the FR model’s training set, while maintaining a small query budget and practical attacker assumptions.
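To make the attack flow described above concrete, the following is a minimal sketch of one plausible query-based property-inference loop in the spirit of AdversariaLeak. It assumes the common two-substitute design for property inference (one substitute verifier trained with the target property, one without), a PGD-style perturbation, and cosine-similarity face verification; every function name, threshold, and hyperparameter is an illustrative assumption rather than the paper's actual implementation, and the paper's central sample-selection ("handpicking") step is elided.

```python
# Hypothetical sketch (not the paper's implementation): infer a training-set
# property by comparing how well adversarial pairs crafted on two substitute
# verifiers -- one trained with the property, one without -- transfer to the
# black-box target. PGD, cosine-similarity verification, and all thresholds
# here are illustrative assumptions.
import torch
import torch.nn.functional as F

def craft_adversarial(substitute, anchor, probe, eps=8/255, alpha=2/255, steps=10):
    """PGD-style perturbation of `probe` that pushes the substitute's
    embedding similarity with `anchor` below the verification threshold."""
    x = probe.clone().detach().requires_grad_(True)
    anchor_emb = substitute(anchor).detach()
    for _ in range(steps):
        sim = F.cosine_similarity(anchor_emb, substitute(x)).sum()
        sim.backward()
        with torch.no_grad():
            x -= alpha * x.grad.sign()  # gradient descent on similarity
            # project back into the eps-ball around probe and valid pixel range
            x.copy_((probe + (x - probe).clamp(-eps, eps)).clamp(0, 1))
        x.grad.zero_()
    return x.detach()

def verifies(model, a, b, threshold=0.5):
    """Same-identity decision: embedding cosine similarity above a threshold."""
    return F.cosine_similarity(model(a), model(b)).item() > threshold

def infer_property(target, pairs_with, pairs_without, threshold=0.5):
    """Query the target (stand-in for the black-box API) with both sets of
    adversarial pairs; the set that breaks verification more often suggests
    which substitute's training distribution the target's training set matches."""
    broke_with = sum(not verifies(target, a, b, threshold) for a, b in pairs_with)
    broke_without = sum(not verifies(target, a, b, threshold) for a, b in pairs_without)
    return "property present" if broke_with > broke_without else "property absent"
```

Because only the handpicked pairs are ever sent to the target, the number of black-box queries is bounded by the size of the two candidate sets, which is consistent with the small query budget the abstract claims.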

Notes

  1. https://sota.nizhib.ai/pytorch-insightface/iresnet100-73e07ba7.pth.

Author information

Corresponding author

Correspondence to Roye Katzav.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 1559 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Katzav, R. et al. (2025). AdversariaLeak: External Information Leakage Attack Using Adversarial Samples on Face Recognition Systems. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15133. Springer, Cham. https://doi.org/10.1007/978-3-031-73226-3_17

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-73226-3_17

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73225-6

  • Online ISBN: 978-3-031-73226-3

  • eBook Packages: Computer Science; Computer Science (R0)
