Abstract
Discrimination-aware data mining is expected to play an important role in data-driven decision making, as "big data" can now be obtained from society at large. To build appropriate decision-making systems, AI researchers and practitioners have proposed various discrimination measures. However, most existing discrimination measures cannot be interpreted as a "proportion" and thus may not provide a comparable evaluation of the discrimination level. To evaluate how much discrimination based on a sensitive feature occurs directly, indirectly, or in total, we propose three proportion measures of discrimination using natural direct and indirect effects [12]. The effectiveness of the proposed discrimination measures is confirmed on the Adult Census data [2].
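The natural direct and indirect effects underlying the proposed measures are defined by Pearl's mediation formula [18]. As a minimal sketch of how these quantities are computed, the following assumes a binary sensitive feature X, a single binary mediator M, a binary decision Y, and no unmeasured confounding; all probability values are illustrative, not taken from the paper or from the Adult data.

```python
# Mediation-formula sketch for natural direct/indirect effects (NDE/NIE),
# assuming binary X (sensitive feature), binary mediator M, decision Y,
# and no unmeasured confounding. All numbers are hypothetical.

p_m1_given_x = {0: 0.3, 1: 0.7}            # P(M=1 | X=x)
e_y_given_xm = {(0, 0): 0.2, (0, 1): 0.5,  # E[Y | X=x, M=m]
                (1, 0): 0.4, (1, 1): 0.8}

def p_m(x, m):
    """P(M=m | X=x) for binary M."""
    return p_m1_given_x[x] if m == 1 else 1 - p_m1_given_x[x]

# NDE (0 -> 1): change X from 0 to 1 while M keeps its X=0 distribution.
nde = sum((e_y_given_xm[(1, m)] - e_y_given_xm[(0, m)]) * p_m(0, m)
          for m in (0, 1))

# NIE (0 -> 1): hold X at 0 but let M shift to its X=1 distribution.
nie = sum(e_y_given_xm[(0, m)] * (p_m(1, m) - p_m(0, m))
          for m in (0, 1))

# Reversed NIE (1 -> 0), used in Pearl's decomposition TE = NDE - NIE_r.
nie_r = sum(e_y_given_xm[(1, m)] * (p_m(0, m) - p_m(1, m))
            for m in (0, 1))

# Total effect of X on Y.
te = (sum(e_y_given_xm[(1, m)] * p_m(1, m) for m in (0, 1))
      - sum(e_y_given_xm[(0, m)] * p_m(0, m) for m in (0, 1)))

print(f"NDE={nde:.2f}  NIE={nie:.2f}  TE={te:.2f}")
assert abs(te - (nde - nie_r)) < 1e-12  # Pearl's decomposition holds
```

Ratios such as NDE/TE and NIE/TE then quantify what share of the total disparity flows directly through X or indirectly through the mediator, which is the kind of "proportion" interpretation the paper's measures are designed to provide.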
References
Balke, A., Pearl, J.: Bounds on treatment effects from studies with imperfect compliance. J. Amer. Statist. Assoc. 92(439), 1171–1176 (1997)
Becker, B., Kohavi, R.: Adult. UCI Machine Learning Repository (1996). https://doi.org/10.24432/C5XW20
Cai, Z., Kuroki, M., Pearl, J., Tian, J.: Bounds on direct effects in the presence of confounded intermediate variables. Biometrics 64(3), 695–701 (2008)
Hamilton, E.: Benchmarking four approaches to fairness-aware machine learning. Ph.D. thesis, Haverford College. Department of Computer Science (2017). https://scholarship.tricolib.brynmawr.edu/handle/10066/19295
Huber, M.: Identifying causal mechanisms (primarily) based on inverse probability weighting. J. Appl. Economet. 29(6), 920–943 (2014)
Imai, K., Keele, L., Tingley, D., Yamamoto, T.: Unpacking the black box of causality: learning about causal mechanisms from experimental and observational studies. Am. Polit. Sci. Rev. 105(4), 765–789 (2011)
Kamishima, T., Akaho, S., Asoh, H., Sakuma, J.: Fairness-aware classifier with prejudice remover regularizer. In: Flach, P.A., De Bie, T., Cristianini, N. (eds.) ECML PKDD 2012. LNCS (LNAI), vol. 7524, pp. 35–50. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33486-3_3
Kilbertus, N., Rojas-Carulla, M., Parascandolo, G., Hardt, M., Janzing, D., Schölkopf, B.: Avoiding discrimination through causal reasoning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS 2017, pp. 656–666. Curran Associates Inc., Red Hook (2017)
Kusner, M.J., Loftus, J., Russell, C., Silva, R.: Counterfactual fairness. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems, NIPS 2017, vol. 30, pp. 4066–4076. Curran Associates, Inc. (2017)
Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, Burlington (1988)
Pearl, J.: Direct and indirect effects. In: Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pp. 411–420. Morgan Kaufmann Publishers Inc., San Francisco (2001)
Pearl, J.: Causality: Models, Reasoning and Inference, 2nd edn. Cambridge University Press, Cambridge (2009)
Pessach, D., Shmueli, E.: A review on fairness in machine learning. ACM Comput. Surv. 55(3), 1–44 (2022)
Plecko, D., Bareinboim, E.: Causal fairness analysis. Technical report, R-90, Causal Artificial Intelligence Lab, Columbia University (2022)
Prentice, R.L.: Surrogate endpoints in clinical trials: definition and operational criteria. Stat. Med. 8(4), 431–440 (1989)
Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C.G., van Moorsel, A.: The relationship between trust in AI and trustworthy machine learning technologies. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* 2020, pp. 272–283. Association for Computing Machinery, New York (2020)
Wang, Y., Taylor, J.M.G.: A measure of the proportion of treatment effect explained by a surrogate marker. Biometrics 58(4), 803–812 (2002)
Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web, WWW 2017, pp. 1171–1180. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2017)
Zhang, J., Bareinboim, E.: Fairness in decision-making - the causal explanation formula. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1 (2018)
Žliobaitė, I.: Measuring discrimination in algorithmic decision making. Data Min. Knowl. Disc. 31, 1060–1089 (2017)
Acknowledgments
We would like to thank the two anonymous reviewers for their helpful comments.
Disclosure of Interests
This research was funded by JFE Engineering Corporation and Japan Society for the Promotion of Science (JSPS), Grant Number 19K11856 and 21H03504.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Shingaki, R., Kuroki, M. (2024). New Proportion Measures of Discrimination Based on Natural Direct and Indirect Effects. In: Barneva, R.P., Brimkov, V.E., Gentile, C., Pacchiano, A. (eds) Artificial Intelligence and Image Analysis. IWCIA 2024, ISAIM 2024. Lecture Notes in Computer Science, vol 14494. Springer, Cham. https://doi.org/10.1007/978-3-031-63735-3_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-63734-6
Online ISBN: 978-3-031-63735-3
eBook Packages: Computer Science (R0)