
A Group-Level Learning Approach Using Logistic Regression for Fairer Decisions

  • Conference paper
Computer Safety, Reliability, and Security. SAFECOMP 2023 Workshops (SAFECOMP 2023)

Abstract

Decision-making algorithms are becoming intertwined with every aspect of society. As we automate tasks whose outcomes affect individuals' lives, assessing and understanding the ethical consequences of these processes becomes vital. Since bias often originates from a dataset's imbalanced group distributions, we propose a novel in-processing fairness technique that operates at the group level during training. Adapting the standard training process of logistic regression, our approach aggregates coefficient derivatives at the group level to produce fairer outcomes. We demonstrate on two real-world datasets that our approach gives groups more equal weight in defining the model parameters and shows potential to reduce unfairness on group-imbalanced data. Our experimental results indicate a stronger improvement in fairness for binary sensitive attributes, which may prove useful in constructing fair algorithms that reduce the biases present in decision-making practices. While our group-level approach achieves less fair results than current state-of-the-art techniques that directly optimize for fairness, it improves fairness over fairness-agnostic models in most cases. We therefore regard our approach as a small but meaningful step towards new methods for fair decision-making algorithms.
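The abstract describes the core mechanism only at a high level: aggregating the coefficient derivatives of logistic regression per sensitive group during training. The Python sketch below shows one plausible reading of that idea, in which the gradient is first averaged within each group and the group means are then combined with equal weight, so a majority group cannot dominate the parameter updates. The function name, the plain gradient-descent optimizer, and the equal-weight averaging are assumptions for illustration, not the authors' published algorithm.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def group_level_logistic_regression(X, y, groups, lr=0.1, n_iters=1000):
    # Gradient-descent logistic regression in which the gradient is first
    # averaged within each sensitive group, then the group means are averaged
    # with equal weight (a hypothetical reading of the paper's idea).
    n_samples, n_features = X.shape
    w, b = np.zeros(n_features), 0.0
    group_ids = np.unique(groups)
    for _ in range(n_iters):
        grad_w, grad_b = np.zeros(n_features), 0.0
        for g in group_ids:
            mask = groups == g
            err = sigmoid(X[mask] @ w + b) - y[mask]  # per-group residuals
            grad_w += X[mask].T @ err / mask.sum()    # group-mean gradient
            grad_b += err.mean()
        # Equal weight per group rather than per sample: a minority group
        # contributes as much to each update as the majority group.
        w -= lr * grad_w / len(group_ids)
        b -= lr * grad_b / len(group_ids)
    return w, b

# Toy usage with a 90/10 group imbalance
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
groups = np.array([0] * 180 + [1] * 20)
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
w, b = group_level_logistic_regression(X, y, groups)

Under the usual per-sample averaging, the 180-member group would contribute nine times the gradient mass of the 20-member group; here each group contributes exactly one group-mean term per update.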



Author information

Corresponding author

Correspondence to Marc Elliott.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Elliott, M., P., D. (2023). A Group-Level Learning Approach Using Logistic Regression for Fairer Decisions. In: Guiochet, J., Tonetta, S., Schoitsch, E., Roy, M., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2023 Workshops. SAFECOMP 2023. Lecture Notes in Computer Science, vol 14182. Springer, Cham. https://doi.org/10.1007/978-3-031-40953-0_25


  • DOI: https://doi.org/10.1007/978-3-031-40953-0_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40952-3

  • Online ISBN: 978-3-031-40953-0

  • eBook Packages: Computer Science, Computer Science (R0)
