Toward the Tradeoffs Between Privacy, Fairness and Utility in Federated Learning

  • Conference paper
Emerging Information Security and Applications (EISA 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 2004)

Abstract

Federated Learning (FL) is a distributed machine learning paradigm that protects user privacy and reduces the risk of data leakage by keeping training local to each client. Researchers have devoted considerable effort to designing fair FL systems that guarantee fairness of results, but the interplay between fairness and privacy has been less studied: increasing the fairness of an FL system can weaken user privacy, while strengthening user privacy can degrade fairness. In this work, on the client side, we use fairness metrics such as Demographic Parity (DemP), Equalized Odds (EOs), and Disparate Impact (DI) to construct a local fair model. To protect the privacy of the client model, we propose a privacy-preserving fair FL method. Our results show that the accuracy of the fair model increases under privacy protection, because the privacy mechanism relaxes the fairness-metric constraints. From our experiments we characterize the relationship between privacy, fairness, and utility, and show that there is a tradeoff among them.
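The abstract names three group-fairness metrics (DemP, EOs, DI) used to constrain each client's local model. The paper's own formulation is not reproduced here, but the standard definitions of these metrics for a binary classifier and a binary sensitive attribute can be sketched as follows; the function name and dictionary keys are illustrative, and the sketch assumes both label classes appear in each group:

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Standard group-fairness metrics for binary predictions y_pred
    against labels y_true, split by a binary sensitive attribute `group`.
    Assumes each group contains both positive and negative labels."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        mask = group == g
        pos_rate = y_pred[mask].mean()             # P(yhat=1 | A=g)
        tpr = y_pred[mask & (y_true == 1)].mean()  # P(yhat=1 | A=g, y=1)
        fpr = y_pred[mask & (y_true == 0)].mean()  # P(yhat=1 | A=g, y=0)
        rates[g] = (pos_rate, tpr, fpr)
    # Demographic parity: gap in positive-prediction rates between groups.
    demp = abs(rates[0][0] - rates[1][0])
    # Equalized odds: worst-case gap in TPR or FPR between groups.
    eo = max(abs(rates[0][1] - rates[1][1]), abs(rates[0][2] - rates[1][2]))
    # Disparate impact: ratio of positive rates (1.0 = perfectly balanced).
    di = min(rates[0][0], rates[1][0]) / max(rates[0][0], rates[1][0])
    return {"DemP": demp, "EO": eo, "DI": di}
```

In a fair-FL setting of the kind the abstract describes, each client would evaluate such metrics on its local data and add them (or a relaxation of them) as constraints or penalty terms to its local training objective before the privatized model update is sent to the server.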



Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grant U20B2048, 62202302.

Author information

Correspondence to Jianhua Li.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Sun, K., Zhang, X., Lin, X., Li, G., Wang, J., Li, J. (2024). Toward the Tradeoffs Between Privacy, Fairness and Utility in Federated Learning. In: Shao, J., Katsikas, S.K., Meng, W. (eds) Emerging Information Security and Applications. EISA 2023. Communications in Computer and Information Science, vol 2004. Springer, Singapore. https://doi.org/10.1007/978-981-99-9614-8_8

  • DOI: https://doi.org/10.1007/978-981-99-9614-8_8

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9613-1

  • Online ISBN: 978-981-99-9614-8

  • eBook Packages: Computer Science, Computer Science (R0)
