Algorithmic Factors Influencing Bias in Machine Learning

  • Conference paper
  • In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2021)

Abstract

It is fair to say that many of the prominent examples of bias in Machine Learning (ML) arise from bias in the training data. In fact, some would argue that supervised ML algorithms cannot be biased because they simply reflect the data on which they are trained. In this paper, we demonstrate how ML algorithms can misrepresent the training data through underestimation. We show how irreducible error, regularization, and feature and class imbalance can contribute to this underestimation. The paper concludes with a demonstration of how the careful management of synthetic counterfactuals can ameliorate the impact of this underestimation bias.
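
To make the abstract's central ideas concrete, the sketch below (in Python with NumPy, consistent with the scikit-learn setup mentioned in the notes) illustrates one plausible way to quantify underestimation and one minimal form of counterfactual augmentation. Both formalisations, and the names underestimation_score and augment_with_counterfactuals, are our own assumptions for illustration; the paper's exact definitions and procedure may differ.

    import numpy as np

    def underestimation_score(y_true, y_pred, group_mask):
        """Ratio of the predicted positive rate to the actual positive
        rate within one group. Values below 1.0 mean the model predicts
        the positive class for this group less often than it occurs in
        the data, i.e. it underestimates the group (illustrative metric,
        not necessarily the paper's definition)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        mask = np.asarray(group_mask, dtype=bool)
        return y_pred[mask].mean() / y_true[mask].mean()

    def augment_with_counterfactuals(X, y, s_col, privileged, unprivileged):
        """Create synthetic positives for the unprivileged group by
        copying each (privileged, positive) example with the sensitive
        attribute flipped; a hypothetical, minimal variant of
        counterfactual augmentation, not the authors' exact procedure."""
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        pos_priv = (y == 1) & (X[:, s_col] == privileged)
        X_cf = X[pos_priv].copy()
        X_cf[:, s_col] = unprivileged
        return np.vstack([X, X_cf]), np.concatenate([y, y[pos_priv]])

    # Toy check: the model finds only half of the minority's true positives.
    y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 0])
    minority = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
    print(underestimation_score(y_true, y_pred, minority))  # 0.5

In this reading, a score of 1.0 would mean the classifier reproduces the group's base rate exactly, while augmenting the training set with counterfactual positives for the underrepresented group directly targets the feature and class imbalance that the abstract identifies as a driver of underestimation.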


Notes

  1. https://scikit-learn.org/.

  2. https://github.com/AlgorithmicFactorsInfluencingBiasinML.


Acknowledgements

This work was funded by Science Foundation Ireland through the SFI Centre for Research Training in Machine Learning (Grant No. 18/CRT/6183) with support from Microsoft Ireland.

Author information

Correspondence to William Blanzeisky.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Blanzeisky, W., Cunningham, P. (2021). Algorithmic Factors Influencing Bias in Machine Learning. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_41

  • DOI: https://doi.org/10.1007/978-3-030-93736-2_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

  • eBook Packages: Computer Science, Computer Science (R0)
