Subgroup Harm Assessor: Identifying Potential Fairness-Related Harms and Predictive Bias

  • Conference paper
Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track (ECML PKDD 2024)

Abstract

With the integration of artificial intelligence into real-world decision-support systems, there is increasing interest in tools that facilitate the identification of potential biases and fairness-related harms of machine learning models. While existing toolkits provide approaches to evaluate harms associated with discrete predicted outcomes, the assessment of disparities in epistemic value provided by continuous risk scores is relatively underexplored. Additionally, relatively few works focus on identifying the biases at the root of the harm. In this work, we present a visual analytics “Subgroup Harm Assessor” tool that allows users to: (1) identify disparities in the epistemic value of risk-scoring models via subgroup discovery of disparities in model log loss, (2) evaluate the extent to which the disparity might be caused by disparities in the informativeness of features via SHapley Additive exPlanations (SHAP) of model loss.
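
As a concrete illustration of the two steps described above, the following minimal Python sketch (an illustration only, not the authors' implementation; the toy data, the hypothetical "groups" series standing in for a discovered subgroup, and the choice of an XGBoost classifier are assumptions) compares per-subgroup log loss against the overall log loss and then uses the SHAP library's TreeExplainer with model_output="log_loss" to attribute the model's loss, rather than its prediction, to individual features.

```python
# Minimal sketch (illustration only): (1) subgroup disparities in model log loss,
# (2) SHAP attributions of the model's log loss to individual features.
import numpy as np
import pandas as pd
import shap
import xgboost
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss

# Toy data; "groups" is a hypothetical subgroup attribute kept outside the
# feature matrix (in the tool, subgroups are found via subgroup discovery).
X_arr, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X = pd.DataFrame(X_arr, columns=[f"f{i}" for i in range(6)])
groups = pd.Series(np.random.RandomState(0).choice(["A", "B"], size=len(X)))

model = xgboost.XGBClassifier(n_estimators=100, max_depth=3, random_state=0)
model.fit(X, y)
proba = model.predict_proba(X)[:, 1]

# (1) Disparity in epistemic value: per-subgroup log loss vs. overall log loss.
overall_loss = log_loss(y, proba)
for g in groups.unique():
    mask = (groups == g).to_numpy()
    print(g, log_loss(y[mask], proba[mask]) - overall_loss)

# (2) SHAP values of the model *loss* (not the prediction): TreeExplainer
# supports model_output="log_loss" with interventional feature perturbation
# and a background dataset.
explainer = shap.TreeExplainer(
    model,
    data=X.iloc[:100],                    # background sample
    feature_perturbation="interventional",
    model_output="log_loss",
)
loss_shap = explainer.shap_values(X, y)   # shape: (n_samples, n_features)

# Features contributing more to the loss inside subgroup "B" than overall
# are candidates for being less informative for that subgroup.
mask_b = (groups == "B").to_numpy()
delta = loss_shap[mask_b].mean(axis=0) - loss_shap.mean(axis=0)
print(pd.Series(delta, index=X.columns).sort_values(ascending=False))
```

Features whose mean loss contribution is noticeably higher within a subgroup than overall are candidates for being less informative for that subgroup, which is the kind of predictive bias the tool is designed to surface.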

Notes

  1. https://responsibleaitoolbox.ai/introducing-responsible-ai-dashboard/.
  2. https://youtu.be/ZjW8Kff-6Qs.
  3. https://github.com/adubowski/subgroup-harm-assessor.

Acknowledgments

This work was partially supported by the Horizon Europe Smart Change project, grant agreement No. 101080965.

Author information

Corresponding author

Correspondence to Adam Dubowski.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dubowski, A., Weerts, H., Wolters, A., Pechenizkiy, M. (2024). Subgroup Harm Assessor: Identifying Potential Fairness-Related Harms and Predictive Bias. In: Bifet, A., et al. Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol 14948. Springer, Cham. https://doi.org/10.1007/978-3-031-70371-3_31

  • DOI: https://doi.org/10.1007/978-3-031-70371-3_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70370-6

  • Online ISBN: 978-3-031-70371-3

  • eBook Packages: Computer Science, Computer Science (R0)
