FairDR: Ensuring Fairness in Mixed Data of Fairly and Unfairly Treated Instances

  • Conference paper
  • In: Artificial Intelligence (CICAI 2023)

Abstract

Fairness has emerged as a crucial topic in data mining and machine learning, driven by ethical and legal considerations. Notably, not every sample in a dataset is treated unfairly, which makes the data in fair machine learning heterogeneous. Existing fair models focus primarily on achieving fairness over all of the heterogeneous data, yet they often fail to ensure fairness within its subgroups, namely the fairly treated and the unfairly treated data. This paper presents the novel problem of training a fair model on heterogeneous data so that fairness holds for both subgroups, with particular emphasis on the unfairly treated one. An effective way to address this challenge is to recover the distributions of the fairly and unfairly treated data. We adopt the Structural Causal Model (SCM) to model the heterogeneous data as a mixture of causal structures. From this SCM perspective, we propose FairDR, a framework that uses the Hirschfeld-Gebelein-Rényi (HGR) correlation to accurately recover the distributions of both the fairly and the unfairly treated data. FairDR can serve as a pre-processing step for other fair machine learning models, protecting the unfairly treated members. Through empirical evaluation on synthetic and real-world datasets, we demonstrate that heterogeneous data can introduce unfairness into previous algorithms, whereas FairDR recovers the distributions of fairly and unfairly treated data and thereby improves the fairness of downstream algorithms on heterogeneous data.
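The HGR correlation mentioned in the abstract is defined as ρ*(X, Y) = sup over f, g of corr(f(X), g(Y)), taken over all measurable f, g with finite, nonzero variance; it lies in [0, 1] and equals zero iff X and Y are independent, which is what makes it useful as a fairness measure. The sketch below is not the authors' implementation of FairDR: it is a minimal illustration that estimates a finite-basis lower bound on HGR via random sinusoidal features of the empirical copula plus canonical correlation analysis (in the spirit of the randomized dependence coefficient), and applies it to a toy mixture of one fair causal structure (Y independent of the sensitive attribute S given X) and one unfair structure (Y depends directly on S). All function names and parameters (`hgr_estimate`, `_copula_features`, `n_feats`, `reg`) are hypothetical.

```python
# Illustrative sketch only: a finite-basis HGR estimate, not the paper's FairDR code.
import numpy as np

def _copula_features(x, n_feats=16, scale=1.0, seed=None):
    """Random sinusoidal features of the empirical copula of a 1-D sample."""
    rng = np.random.default_rng(seed)
    u = np.argsort(np.argsort(x)) / (len(x) - 1)   # empirical CDF transform to [0, 1]
    w = rng.normal(scale=scale, size=n_feats)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_feats)
    return np.sin(np.outer(u, w) + b)

def hgr_estimate(x, y, n_feats=16, reg=1e-4, seed=0):
    """Finite-basis lower bound on HGR = sup_{f,g} corr(f(x), g(y)) via CCA."""
    fx = _copula_features(np.asarray(x, float), n_feats, seed=seed)
    gy = _copula_features(np.asarray(y, float), n_feats, seed=seed + 1)
    fx -= fx.mean(axis=0)
    gy -= gy.mean(axis=0)
    n = len(fx)
    cxx = fx.T @ fx / n + reg * np.eye(n_feats)    # regularized feature covariances
    cyy = gy.T @ gy / n + reg * np.eye(n_feats)
    cxy = fx.T @ gy / n
    lx, ly = np.linalg.cholesky(cxx), np.linalg.cholesky(cyy)
    m = np.linalg.solve(lx, cxy)                   # L_x^{-1} C_xy
    m = np.linalg.solve(ly, m.T).T                 # ... L_y^{-T}: whitened cross-covariance
    # Top singular value of the whitened cross-covariance = top canonical correlation.
    return float(min(np.linalg.svd(m, compute_uv=False)[0], 1.0))

# Toy heterogeneous sample: a latent mixture of a "fair" structure (Y independent
# of the sensitive attribute S given X) and an "unfair" one (Y depends on S).
rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, size=n).astype(float)       # sensitive attribute
x = rng.normal(size=n)                              # non-sensitive feature
is_fair = rng.random(n) < 0.5                       # latent fair/unfair membership
y = x + np.where(is_fair, 0.0, 2.0) * s + 0.1 * rng.normal(size=n)

print("HGR(S, Y), whole mixture:  ", round(hgr_estimate(s, y), 3))
print("HGR(S, Y), fair subgroup:  ", round(hgr_estimate(s[is_fair], y[is_fair]), 3))
print("HGR(S, Y), unfair subgroup:", round(hgr_estimate(s[~is_fair], y[~is_fair]), 3))
```

On this toy data, the estimate is near zero on the fair subgroup and large on the unfair one, with the pooled mixture in between, which illustrates the paper's motivating point: a dependence measure computed over the whole heterogeneous dataset can mask how strongly the unfairly treated subgroup is affected.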


Notes

  1. https://archive.ics.uci.edu/ml/datasets/Adult
  2. https://www.propublica.org/datastore/dataset/compas-recidivism-risk-score-data-and-analysis
  3. https://www.kaggle.com/muonneutrino/us-census-demographic-data
  4. https://archive.ics.uci.edu/ml/datasets/communities+and+crime


Acknowledgement

This work was supported in part by Zhejiang Province Natural Science Foundation (LQ21F020020), National Natural Science Foundation of China (62006207, U20A20387), Young Elite Scientists Sponsorship Program by CAST (2021QNRC001), and the Fundamental Research Funds for the Central Universities (226-2022-00142, 226-2022-00051).

Author information

Correspondence to Kun Kuang.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Liu, Y., Kuang, K., Zhang, F., Wu, F. (2024). FairDR: Ensuring Fairness in Mixed Data of Fairly and Unfairly Treated Instances. In: Fang, L., Pei, J., Zhai, G., Wang, R. (eds) Artificial Intelligence. CICAI 2023. Lecture Notes in Computer Science, vol. 14474. Springer, Singapore. https://doi.org/10.1007/978-981-99-9119-8_1


  • DOI: https://doi.org/10.1007/978-981-99-9119-8_1

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9118-1

  • Online ISBN: 978-981-99-9119-8

  • eBook Packages: Computer Science, Computer Science (R0)
