Abstract
Data-driven algorithms are employed in many applications in which data arrive in a sequential order, requiring the model to be updated with new instances. In such dynamic environments, where the underlying data distributions may evolve over time, fairness-aware learning cannot be treated as a one-off requirement; rather, it must be enforced continually over the stream. Recent fairness-aware stream classifiers ignore the problem of class-distribution skewness and, as a result, mitigate discrimination by “rejecting” minority instances at large, owing to their inability to learn all classes effectively. In this work, we propose \(\mathsf {FABBOO}\), an online fairness-aware approach that maintains a valid and fair classifier over the stream. \(\mathsf {FABBOO}\) is an online boosting approach that changes the training distribution in an online fashion, based on both the stream’s class imbalance and the discriminatory behavior of the model as evaluated over the historical stream. Our experiments show that such long-term consideration of class imbalance and fairness is beneficial for maintaining models that exhibit good predictive and fairness-related performance.
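The paper itself specifies the full \(\mathsf {FABBOO}\) algorithm; purely as an illustration of the underlying idea of adapting the training distribution online to the stream’s class imbalance, a minimal sketch follows. The class name, the time-decayed class-size estimates, and the threshold-stump weak learner are our own simplifications, not the authors’ method:

```python
import random

class OnlineImbalanceBooster:
    """Toy online learner: instances from the currently rare class are
    trained on multiple times (online oversampling), so the effective
    training distribution tracks the stream's class imbalance.
    The weak learner is a one-feature threshold stump updated online."""

    def __init__(self, decay=0.99, lr=0.05):
        self.decay = decay
        self.lr = lr
        self.size = {0: 1.0, 1: 1.0}  # time-decayed per-class size estimates
        self.threshold = 0.0          # stump parameter

    def predict(self, x):
        return 1 if x > self.threshold else 0

    def update(self, x, y):
        # Time-decayed class sizes: recent instances count more, which
        # lets the imbalance estimate follow concept drift.
        for c in self.size:
            self.size[c] = self.decay * self.size[c] + (1 - self.decay) * (y == c)
        # Oversampling rate: an instance of the rarer class is replayed
        # roughly (majority size / minority size) times.
        k = max(1, round(self.size[1 - y] / self.size[y]))
        for _ in range(k):
            if self.predict(x) != y:
                # Nudge the decision threshold toward correcting the mistake.
                self.threshold += self.lr if y == 0 else -self.lr

# Usage on a synthetic 95/5 stream: class 0 centered at 0, class 1 at 2.
random.seed(0)
model = OnlineImbalanceBooster()
for _ in range(3000):
    if random.random() < 0.05:
        x, y = random.gauss(2.0, 0.5), 1
    else:
        x, y = random.gauss(0.0, 0.5), 0
    model.update(x, y)
```

Without the oversampling factor `k`, the stump would be dominated by majority-class updates; with it, the threshold settles between the two class centers, so minority instances are not “rejected” wholesale.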
Notes
1. SA definition could also be extended to cover feature combinations such as race and gender.
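The abstract evaluates the model’s discriminatory behavior over the historical stream rather than on a single batch. As a hedged illustration of what such a cumulative measurement can look like, the sketch below tracks statistical parity from counts accumulated over the whole stream; the class name and the boolean group encoding are our own simplification, not the paper’s exact fairness notion:

```python
class CumulativeStatisticalParity:
    """Track statistical parity over the historical stream:
    P(yhat = + | protected) - P(yhat = + | non-protected),
    computed from cumulative counts seen so far."""

    def __init__(self):
        self.pos = {True: 0, False: 0}   # positive predictions per group
        self.seen = {True: 0, False: 0}  # instances per group

    def update(self, protected, predicted_positive):
        self.seen[protected] += 1
        self.pos[protected] += int(predicted_positive)

    def value(self):
        rates = {
            g: self.pos[g] / self.seen[g] if self.seen[g] else 0.0
            for g in (True, False)
        }
        # Negative: the protected group receives positive predictions
        # at a lower rate than the non-protected group.
        return rates[True] - rates[False]

# Usage: 2/10 positives for the protected group vs 6/10 for the rest.
parity = CumulativeStatisticalParity()
for pred in [1, 1] + [0] * 8:
    parity.update(True, pred)
for pred in [1] * 6 + [0] * 4:
    parity.update(False, pred)
```

A model could then be penalized (e.g. by re-weighting the affected group) whenever this cumulative value drifts away from zero.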
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Iosifidis, V., Ntoutsi, E. (2020). \(\mathsf {FABBOO}\) - Online Fairness-Aware Learning Under Class Imbalance. In: Appice, A., Tsoumakas, G., Manolopoulos, Y., Matwin, S. (eds) Discovery Science. DS 2020. Lecture Notes in Computer Science(), vol 12323. Springer, Cham. https://doi.org/10.1007/978-3-030-61527-7_11
Print ISBN: 978-3-030-61526-0
Online ISBN: 978-3-030-61527-7