Abstract
AI systems are increasingly dependent on the data and information sources with which they are developed. In particular, learning machines are highly exposed to undesirable behavior caused by biased and incomplete coverage of their training data. The autonomy exhibited by machines trained on low-quality data raises an ethical concern, as it may infringe on social rules and security constraints.
In this paper, we extensively experiment with a learning framework, called Ethics by Design, which aims to ensure a supervised learning policy that pursues both the satisfaction of ethical constraints and the optimization of task (i.e., business) accuracy. The results obtained across tasks and datasets confirm the positive impact of the method in ensuring ethical compliance. This paves the way for a large set of industrial applications whose ethical dimension is critical to increasing trust in this technology.
Notes
- 1.
The articles for [2] and [23] can be found respectively at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing and https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.
- 2.
The study referred to by [12] is available at: https://www.bloomberg.com/graphics/2016-amazon-same-day/.
- 3.
The code is made available at: https://github.com/crux82/nn-ebd.
The study referred to by [17] can be found at: https://hai.stanford.edu/sites/default/files/2021-12/Policy%20Brief%20-%20Risks%20of%20AI%20Race%20Detection%20in%20the%20Medical%20System.pdf.
- 11.
Consistently with [21], for both benefits and risks we fix \(m=5\) and limit values to the \([0, 1]\) range. The following five labels can be adopted: {“\({very\_low}\)”, “low”, “mild”, “high”, “\({very\_high}\)”}, corresponding to the numerical values \(v_1=0.1\), \(v_2=0.25\), \(v_3=0.5\), \(v_4=0.75\), and \(v_5=0.9\).
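The five-level ordinal scale above can be sketched as a simple numeric encoding. This is a minimal illustration of the mapping stated in the note, not code from the paper; the function name `encode` is a hypothetical helper.

```python
# Numeric encoding of the five ordinal labels used for both benefits
# and risks (m = 5, values fixed in the [0, 1] range).
LABEL_VALUES = {
    "very_low": 0.1,   # v_1
    "low": 0.25,       # v_2
    "mild": 0.5,       # v_3
    "high": 0.75,      # v_4
    "very_high": 0.9,  # v_5
}

def encode(labels):
    """Map a sequence of ordinal labels to their numeric values."""
    return [LABEL_VALUES[label] for label in labels]
```

For example, `encode(["low", "very_high"])` yields the numeric pair used wherever a benefit or risk annotation must enter a loss or constraint term.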
References
Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., Rieke, A.: Discrimination through optimization: how Facebook’s ad delivery can lead to biased outcomes. Proc. ACM Hum.-Comput. Interact. 3(CSCW), 1–30 (2019)
Angwin, J., et al.: Machine bias (2016)
Bechavod, Y., Ligett, K.: Penalizing unfairness in binary classification. arXiv preprint arXiv:1707.00044 (2017)
Berk, R., et al.: A convex framework for fair regression. arXiv preprint arXiv:1706.02409 (2017)
Bonnemains, V., Saurel, C., Tessier, C.: Embedded ethics: some technical and ethical challenges. Ethics Inf. Technol. 20(1), 41–58 (2018). https://doi.org/10.1007/s10676-018-9444-x
Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. In: Conference on Fairness, Accountability and Transparency, pp. 77–91. PMLR (2018)
Calders, T., Verwer, S.: Three Naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Disc. 21(2), 277–292 (2010)
Caton, S., Haas, C.: Fairness in machine learning: a survey. arXiv preprint arXiv:2010.04053 (2020)
Donini, M., Oneto, L., Ben-David, S., Shawe-Taylor, J., Pontil, M.: Empirical risk minimization under fairness constraints. arXiv preprint arXiv:1802.08626 (2018)
Feldman, M., Friedler, S.A., Moeller, J., Scheidegger, C., Venkatasubramanian, S.: Certifying and removing disparate impact. In: Proceedings of the KDD 2015, pp. 259–268 (2015)
Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: Advances in Neural Information Processing Systems, vol. 29, pp. 3315–3323 (2016)
Ingold, D., Soper, S.: Amazon doesn’t consider the race of its customers. Should it? (2016)
Kamiran, F., Calders, T.: Classifying without discriminating. In: 2009 2nd International Conference on Computer, Control and Communication, pp. 1–6. IEEE (2009)
Kamishima, T., Akaho, S., Asoh, H., Sakuma, J.: Fairness-aware classifier with prejudice remover regularizer. In: Flach, P.A., De Bie, T., Cristianini, N. (eds.) ECML PKDD 2012. LNCS (LNAI), vol. 7524, pp. 35–50. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33486-3_3
Kleiman-Weiner, M., Saxe, R., Tenenbaum, J.B.: Learning a commonsense moral theory. Cognition 167, 107–123 (2017). Moral Learning
Lambrecht, A., Tucker, C.: Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of stem career ads. Manag. Sci. 65(7), 2966–2981 (2019)
Lungren, M.: Risks of AI race detection in the medical system (2021)
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 54(6), 1–35 (2021)
Pessach, D., Shmueli, E.: Algorithmic fairness. arXiv preprint arXiv:2001.09784 (2020)
Quy, T.L., Roy, A., Iosifidis, V., Ntoutsi, E.: A survey on datasets for fairness-aware machine learning. arXiv preprint arXiv:2110.00530 (2021)
Rossini, D., Croce, D., Mancini, S., Pellegrino, M., Basili, R.: Actionable ethics through neural learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 5537–5544 (2020)
Savani, Y., White, C., Govindarajulu, N.S.: Intra-processing methods for debiasing neural networks. In: Advances in Neural Information Processing Systems, vol. 33, pp. 2798–2810 (2020)
Snow, J.: Amazon’s face recognition falsely matched 28 members of congress with mugshots (2018)
Vanderelst, D., Winfield, A.: An architecture for ethical robots inspired by the simulation theory of cognition. Cogn. Syst. Res. 48, 56–66 (2018). Cognitive Architectures for Artificial Minds
Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1171–1180 (2017)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Squadrone, L., Croce, D., Basili, R. (2023). Ethics by Design for Intelligent and Sustainable Adaptive Systems. In: Dovier, A., Montanari, A., Orlandini, A. (eds) AIxIA 2022 – Advances in Artificial Intelligence. AIxIA 2022. Lecture Notes in Computer Science(), vol 13796. Springer, Cham. https://doi.org/10.1007/978-3-031-27181-6_11
Print ISBN: 978-3-031-27180-9
Online ISBN: 978-3-031-27181-6