
Absolute Variation Distance: An Inversion Attack Evaluation Metric for Federated Learning

  • Conference paper

Advances in Information Retrieval (ECIR 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14611)

Abstract

Federated Learning (FL) has emerged as a pivotal approach for training models on decentralized data sources by sharing only model gradients. However, the shared gradients in FL are susceptible to inversion attacks that can expose sensitive information. While several defense and attack strategies have been proposed, their effectiveness is often evaluated using metrics that do not necessarily reflect the success of an attack or the amount of information retrieved, especially for multidimensional data such as images. Traditional metrics such as the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Mean Squared Error (MSE) are lightweight, but they rely on pixel-wise comparison and fail to capture the semantic context of the recovered data. This paper introduces the Absolute Variation Distance (AVD), a lightweight metric derived from total variation, for assessing data recovery and information leakage in FL. Unlike traditional metrics, AVD offers a continuous measure for extracting information from noisy images and aligns closely with human perception. Our results, combined with a user experience survey, demonstrate that AVD provides a more accurate and consistent measure of data recovery. It also matches the accuracy of the more costly and complex neural-network-based metric, the Learned Perceptual Image Patch Similarity (LPIPS). AVD therefore offers an effective tool for the automatic evaluation of data security in FL and a reliable way of studying defense and inversion attack strategies.
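To make the idea concrete, the sketch below shows one plausible way a total-variation-derived distance could be computed between a recovered image and its original. The exact definition of AVD is given in the paper itself; here the function `avd` is a hypothetical illustration that compares the first-difference (total-variation) fields of the two images rather than their raw pixels, which is what gives such a metric robustness to constant intensity shifts that MSE lacks.

```python
import numpy as np

def tv_grads(img: np.ndarray):
    """First differences of a 2-D image along each axis (the fields
    whose absolute sum gives the total variation)."""
    return np.diff(img, axis=1), np.diff(img, axis=0)

def avd(recovered: np.ndarray, original: np.ndarray) -> float:
    """Hypothetical absolute-variation-style distance: mean absolute
    difference between the total-variation gradient fields of the two
    images. The paper's exact formulation may differ."""
    rx, ry = tv_grads(recovered)
    ox, oy = tv_grads(original)
    return float(np.mean(np.abs(rx - ox)) + np.mean(np.abs(ry - oy)))
```

Because the comparison is done on gradients, a recovered image that differs from the original only by a constant brightness offset scores (near) zero under this sketch, while pixel-wise MSE would penalize it heavily.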



Corresponding author

Correspondence to Georgios Papadopoulos.

Ethics declarations

Disclaimer

This paper was prepared for informational purposes by the Global Technology Applied Research center of JPMorgan Chase & Co. This paper is not a product of the Research Department of JPMorgan Chase & Co. or its affiliates. Neither JPMorgan Chase & Co. nor any of its affiliates makes any explicit or implied representation or warranty and none of them accept any liability in connection with this paper, including, without limitation, with respect to the completeness, accuracy, or reliability of the information contained herein and the potential legal, compliance, tax, or accounting effects thereof. This document is not intended as investment research or investment advice, or as a recommendation, offer, or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction.

Appendix

Fig. 7. Random recovered vectors from the LFW dataset, column-wise sorted via AVD.

Fig. 8. Random recovered vectors from the MNIST dataset, column-wise sorted via AVD.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Papadopoulos, G., Satsangi, Y., Eloul, S., Pistoia, M. (2024). Absolute Variation Distance: An Inversion Attack Evaluation Metric for Federated Learning. In: Goharian, N., et al. Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol 14611. Springer, Cham. https://doi.org/10.1007/978-3-031-56066-8_20


  • DOI: https://doi.org/10.1007/978-3-031-56066-8_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56065-1

  • Online ISBN: 978-3-031-56066-8

  • eBook Packages: Computer Science, Computer Science (R0)
