New Proportion Measures of Discrimination Based on Natural Direct and Indirect Effects

  • Conference paper
Artificial Intelligence and Image Analysis (IWCIA 2024, ISAIM 2024)

Abstract

Discrimination-aware data mining is expected to play an important role in data-driven decision making, as “big data” can now be collected from society at large. To build appropriate decision-making systems, AI researchers and practitioners have proposed various discrimination measures. However, most existing discrimination measures cannot be interpreted as a “proportion” and thus may not provide a comparable evaluation of the discrimination level. To evaluate how much of the discrimination is based on a sensitive feature directly, indirectly, or in total, we propose three proportion measures of discrimination using natural direct and indirect effects [12]. The effectiveness of the proposed discrimination measures is confirmed on the Adult Census data [2].
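The paper's exact measures are behind the paywall, but the abstract's building blocks are Pearl's natural direct and indirect effects [11, 12]. The following is an illustrative sketch only, assuming a binary sensitive feature X, a single binary mediator M, an outcome Y, and no unmeasured confounding; all variable names and probabilities are hypothetical toy values, not the authors' proposal.

```python
# Sketch of Pearl's mediation formulas for a binary sensitive feature X,
# binary mediator M, and binary outcome Y (toy numbers, no confounding).

p_m_given_x = {0: 0.3, 1: 0.6}             # P(M=1 | X=x)
p_y_given_xm = {(0, 0): 0.2, (0, 1): 0.4,  # P(Y=1 | X=x, M=m)
                (1, 0): 0.5, (1, 1): 0.7}

def p_m(x, m):
    """P(M=m | X=x) for binary M."""
    return p_m_given_x[x] if m == 1 else 1 - p_m_given_x[x]

# Natural direct effect: change X from 0 to 1 while the mediator keeps
# the distribution it would have under X=0.
nde = sum(p_m(0, m) * (p_y_given_xm[(1, m)] - p_y_given_xm[(0, m)])
          for m in (0, 1))

# Natural indirect effect: hold X at 0 but shift the mediator to the
# distribution it would have under X=1.
nie = sum((p_m(1, m) - p_m(0, m)) * p_y_given_xm[(0, m)] for m in (0, 1))

# Total effect P(Y=1 | do(X=1)) - P(Y=1 | do(X=0)).
te = sum(p_m(1, m) * p_y_given_xm[(1, m)] - p_m(0, m) * p_y_given_xm[(0, m)]
         for m in (0, 1))

print(f"NDE={nde:.3f} NIE={nie:.3f} TE={te:.3f}")  # NDE=0.300 NIE=0.060 TE=0.360
```

With these toy numbers (which have no X-by-M interaction) TE = NDE + NIE, so proportion-style summaries such as NDE/TE and NIE/TE are well defined; in general Pearl's decomposition is TE = NDE - NIE_r, with NIE_r the indirect effect of the reverse transition [11].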


References

  1. Balke, A., Pearl, J.: Bounds on treatment effects from studies with imperfect compliance. J. Amer. Statist. Assoc. 92(439), 1171–1176 (1997)

  2. Becker, B., Kohavi, R.: Adult. UCI Machine Learning Repository (1996). https://doi.org/10.24432/C5XW20

  3. Cai, Z., Kuroki, M., Pearl, J., Tian, J.: Bounds on direct effects in the presence of confounded intermediate variables. Biometrics 64(3), 695–701 (2008)

  4. Hamilton, E.: Benchmarking four approaches to fairness-aware machine learning. Ph.D. thesis, Haverford College, Department of Computer Science (2017). https://scholarship.tricolib.brynmawr.edu/handle/10066/19295

  5. Huber, M.: Identifying causal mechanisms (primarily) based on inverse probability weighting. J. Appl. Economet. 29(6), 920–943 (2014)

  6. Imai, K., Keele, L., Tingley, D., Yamamoto, T.: Unpacking the black box of causality: learning about causal mechanisms from experimental and observational studies. Am. Polit. Sci. Rev. 105(4), 765–789 (2011)

  7. Kamishima, T., Akaho, S., Asoh, H., Sakuma, J.: Fairness-aware classifier with prejudice remover regularizer. In: Flach, P.A., De Bie, T., Cristianini, N. (eds.) ECML PKDD 2012. LNCS (LNAI), vol. 7524, pp. 35–50. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33486-3_3

  8. Kilbertus, N., Rojas-Carulla, M., Parascandolo, G., Hardt, M., Janzing, D., Schölkopf, B.: Avoiding discrimination through causal reasoning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS 2017, pp. 656–666. Curran Associates Inc., Red Hook (2017)

  9. Kusner, M.J., Loftus, J., Russell, C., Silva, R.: Counterfactual fairness. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems, NIPS 2017, vol. 30, pp. 4066–4076. Curran Associates, Inc. (2017)

  10. Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, Burlington (1988)

  11. Pearl, J.: Direct and indirect effects. In: Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pp. 411–420. Morgan Kaufmann Publishers Inc., San Francisco (2001)

  12. Pearl, J.: Causality: Models, Reasoning and Inference, 2nd edn. Cambridge University Press, Cambridge (2009)

  13. Pessach, D., Shmueli, E.: A review on fairness in machine learning. ACM Comput. Surv. 55(3), 1–44 (2022)

  14. Plecko, D., Bareinboim, E.: Causal fairness analysis. Technical report, R-90, Causal Artificial Intelligence Lab, Columbia University (2022)

  15. Prentice, R.L.: Surrogate endpoints in clinical trials: definition and operational criteria. Stat. Med. 8(4), 431–440 (1989)

  16. Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C.G., van Moorsel, A.: The relationship between trust in AI and trustworthy machine learning technologies. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* 2020, pp. 272–283. Association for Computing Machinery, New York (2020)

  17. Wang, Y., Taylor, J.M.G.: A measure of the proportion of treatment effect explained by a surrogate marker. Biometrics 58(4), 803–812 (2002)

  18. Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web, WWW 2017, pp. 1171–1180. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2017)

  19. Zhang, J., Bareinboim, E.: Fairness in decision-making - the causal explanation formula. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1 (2018)

  20. Žliobaitė, I.: Measuring discrimination in algorithmic decision making. Data Min. Knowl. Disc. 31, 1060–1089 (2017)

Acknowledgments

We would like to thank the two anonymous reviewers for their helpful comments.

Author information

Corresponding author

Correspondence to Ryusei Shingaki.

Ethics declarations

Disclosure of Interests

This research was funded by JFE Engineering Corporation and the Japan Society for the Promotion of Science (JSPS), Grant Numbers 19K11856 and 21H03504.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Shingaki, R., Kuroki, M. (2024). New Proportion Measures of Discrimination Based on Natural Direct and Indirect Effects. In: Barneva, R.P., Brimkov, V.E., Gentile, C., Pacchiano, A. (eds) Artificial Intelligence and Image Analysis. IWCIA/ISAIM 2024. Lecture Notes in Computer Science, vol 14494. Springer, Cham. https://doi.org/10.1007/978-3-031-63735-3_10

  • DOI: https://doi.org/10.1007/978-3-031-63735-3_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63734-6

  • Online ISBN: 978-3-031-63735-3

  • eBook Packages: Computer Science (R0)
