Abstract
Federated Learning (FL) is a privacy-preserving distributed machine learning paradigm: clients train locally and share only model updates, which reduces the risk of raw-data leakage. Considerable effort has gone into designing FL systems that produce fair results, but the interplay between fairness and privacy has been less studied: improving the fairness of an FL system can affect user privacy, and strengthening privacy can in turn affect fairness. In this work, we construct fair local models on the client side using fairness metrics such as Demographic Parity (DemP), Equalized Odds (EO), and Disparate Impact (DI), and we propose a privacy-preserving fair FL method to protect the client models. Our results show that the accuracy of the fair model increases under privacy protection, because the privacy noise relaxes the fairness constraints. From our experiments we characterize the relationship between privacy, fairness, and utility, and find a tradeoff among the three.
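The abstract names three group-fairness metrics without defining them. The following is a minimal sketch of their standard definitions for binary predictions and a binary sensitive attribute; it is illustrative only, not the authors' code, and the function names are ours.

import numpy as np

def demographic_parity_gap(y_pred, a):
    # DemP: positive-prediction rates should match across groups;
    # reports |P(y_hat=1 | a=1) - P(y_hat=1 | a=0)|.
    return abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())

def equalized_odds_gap(y_true, y_pred, a):
    # EO: true-positive and false-positive rates should match across
    # groups; reports the larger of the two gaps.
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (a == 1)].mean()
                        - y_pred[mask & (a == 0)].mean()))
    return max(gaps)

def disparate_impact(y_pred, a):
    # DI: ratio of positive-prediction rates; values near 1 indicate
    # parity (the "80% rule" flags ratios below 0.8).
    return y_pred[a == 1].mean() / y_pred[a == 0].mean()

A local fair model in the paper's sense is trained subject to constraints on quantities like these, for example requiring the DemP gap to stay below a threshold.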
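The abstract does not specify the privacy mechanism, so the sketch below shows only the pattern most common in differentially private FL: each client clips its model update to bound sensitivity and adds Gaussian noise before the server averages. All names and parameters here (privatize_update, clip_norm, noise_multiplier) are illustrative assumptions, not the paper's method.

import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Clip the update's L2 norm to clip_norm (bounding per-client
    # sensitivity), then add Gaussian noise scaled to that bound.
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def server_aggregate(private_updates):
    # FedAvg-style server step: average the privatized client updates.
    return np.mean(private_updates, axis=0)

Noise of this kind is one plausible reading of the abstract's claim that privacy "breaks" the fairness constraints: perturbing the constrained optimum can move the model back toward the unconstrained, higher-accuracy solution.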
Acknowledgment
This work was supported in part by the National Natural Science Foundation of China under Grants U20B2048 and 62202302.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Sun, K., Zhang, X., Lin, X., Li, G., Wang, J., Li, J. (2024). Toward the Tradeoffs Between Privacy, Fairness and Utility in Federated Learning. In: Shao, J., Katsikas, S.K., Meng, W. (eds.) Emerging Information Security and Applications. EISA 2023. Communications in Computer and Information Science, vol. 2004. Springer, Singapore. https://doi.org/10.1007/978-981-99-9614-8_8
DOI: https://doi.org/10.1007/978-981-99-9614-8_8
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-9613-1
Online ISBN: 978-981-99-9614-8
eBook Packages: Computer Science, Computer Science (R0)