
Abstract

Fairness is an important objective throughout society, from the distribution of limited goods such as education, through hiring and payment, to taxes, legislation, and jurisprudence. Since machine learning approaches play an increasing role in all areas of daily life, including those related to health, security, and equity, a growing body of research focuses on fair machine learning. In this work, we focus on the fairness of partition- and prototype-based models. The contribution of this work is twofold: 1) we develop a general framework for fair machine learning with partition-based models that does not depend on a specific fairness definition, and 2) we derive a fair version of learning vector quantization (LVQ) as a specific instantiation. We compare the resulting algorithm against other algorithms from the literature on theoretical and real-world data, showing its practical relevance.
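To make the prototype-based setting concrete, the following is a minimal sketch of standard generalized learning vector quantization (GLVQ), the base model the paper builds on. It is not the paper's FairGLVQ (which adds a fairness-aware objective on top); all function names and hyperparameters here are illustrative. Each sample pulls its closest same-class prototype toward it and pushes the closest other-class prototype away, driven by the relative distance difference.

```python
import numpy as np

def glvq_train(X, y, n_protos_per_class=1, lr=0.05, epochs=30, seed=0):
    """Minimal GLVQ: prototypes move toward same-class samples and away
    from other-class samples, following the gradient of the relative
    distance mu = (d_plus - d_minus) / (d_plus + d_minus)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialise prototypes near the class means, slightly perturbed.
    protos, labels = [], []
    for c in classes:
        Xc = X[y == c]
        for _ in range(n_protos_per_class):
            protos.append(Xc.mean(axis=0) + 0.01 * rng.standard_normal(X.shape[1]))
            labels.append(c)
    W, wy = np.array(protos), np.array(labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = ((W - X[i]) ** 2).sum(axis=1)           # squared distances
            same, other = wy == y[i], wy != y[i]
            jp = np.where(same)[0][d[same].argmin()]    # closest correct prototype
            jm = np.where(other)[0][d[other].argmin()]  # closest incorrect prototype
            dp, dm = d[jp], d[jm]
            denom = (dp + dm) ** 2
            # Attract the correct winner, repel the incorrect one.
            W[jp] += lr * (dm / denom) * (X[i] - W[jp])
            W[jm] -= lr * (dp / denom) * (X[i] - W[jm])
    return W, wy

def glvq_predict(X, W, wy):
    """Assign each sample the label of its nearest prototype."""
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    return wy[d.argmin(axis=1)]
```

The induced classifier partitions the input space into the Voronoi cells of the prototypes, which is exactly the partition-based view the paper's fairness framework operates on.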


Notes

  1. https://github.com/Felix-St/FairGLVQ.



Acknowledgments

We gratefully acknowledge funding from the European Research Council (ERC) under the ERC Synergy Grant Water-Futures (Grant agreement No. 951424).

Author information


Corresponding author

Correspondence to Felix Störck.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Störck, F., Hinder, F., Brinkrolf, J., Paassen, B., Vaquet, V., Hammer, B. (2024). FairGLVQ: Fairness in Partition-Based Classification. In: Villmann, T., Kaden, M., Geweniger, T., Schleif, FM. (eds) Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond. WSOM+ 2024. Lecture Notes in Networks and Systems, vol 1087. Springer, Cham. https://doi.org/10.1007/978-3-031-67159-3_17
