
Ethics by Design for Intelligent and Sustainable Adaptive Systems

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13796)

Abstract

AI systems are increasingly dependent on the data and information sources with which they are developed. In particular, learning machines are highly exposed to undesirable behaviors caused by biased or incomplete coverage of the training data. The autonomy exhibited by machines trained on low-quality data raises ethical concerns, as it may infringe social rules and security constraints.

In this paper, we extensively experiment with a learning framework, called Ethics by Design, which aims to ensure a supervised learning policy that pursues both the satisfaction of ethical constraints and the optimization of task (i.e., business) accuracy. The results obtained across several tasks and datasets confirm the positive impact of the method in ensuring ethical compliance. This paves the way for a large set of industrial applications in which the ethical dimension is critical to increasing trust in this technology.
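
The framework thus optimizes two coupled objectives during training: compliance with ethical constraints and accuracy on the business task. As a minimal sketch of one way such a joint objective can be realized, consider the following hypothetical weighted loss in PyTorch; the function names, loss choices, and the trade-off weight lam are illustrative assumptions, not the authors' formulation, whose actual implementation is released at https://github.com/crux82/nn-ebd:

    import torch
    import torch.nn.functional as F

    def joint_loss(task_logits: torch.Tensor, task_labels: torch.Tensor,
                   ethics_scores: torch.Tensor, ethics_targets: torch.Tensor,
                   lam: float = 0.5) -> torch.Tensor:
        # Standard supervised loss on the business task (e.g., classification).
        task_term = F.cross_entropy(task_logits, task_labels)
        # Penalty for deviating from annotated ethical judgments
        # (here, a regression toward target benefit/risk scores).
        ethics_term = F.mse_loss(ethics_scores, ethics_targets)
        # lam trades off ethical compliance against task accuracy (assumption).
        return task_term + lam * ethics_term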


Notes

  1. The articles for [2] and [23] can be found, respectively, at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing and https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.

  2. The study referred to by [12] is available at https://www.bloomberg.com/graphics/2016-amazon-same-day/.

  3. The code is made available at https://github.com/crux82/nn-ebd.

  4. www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-against-women-2018-10.

  5. https://github.com/propublica/compas-analysis.

  6. https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data).

  7. https://archive.ics.uci.edu/ml/datasets/adult.

  8. https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients.

  9. https://storage.googleapis.com/lawschool_dataset/bar_pass_prediction.csv.

  10. The study referred to by [17] can be found at https://hai.stanford.edu/sites/default/files/2021-12/Policy%20Brief%20-%20Risks%20of%20AI%20Race%20Detection%20in%20the%20Medical%20System.pdf.

  11. Consistently with [21], for both benefits and risks we fixed \(m=5\) and limited values to the \([0, 1]\) range. The following five labels are adopted: {“\(very\_low\)”, “low”, “mild”, “high”, “\(very\_high\)”}, corresponding to the numerical values \(v_1=0.1\), \(v_2=0.25\), \(v_3=0.5\), \(v_4=0.75\), and \(v_5=0.9\) (a sketch of this mapping is given below).
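
As a concrete illustration of this discretization, the following Python sketch (hypothetical; only the five labels and the values \(v_1,\dots,v_5\) come from the note above) maps each label to its numerical value and assigns a continuous score in [0, 1] to the closest label:

    # Label-to-value mapping described in Note 11 (values follow [21]).
    LABEL_VALUES = {
        "very_low": 0.10,
        "low": 0.25,
        "mild": 0.50,
        "high": 0.75,
        "very_high": 0.90,
    }

    def score_to_label(score: float) -> str:
        # Assign a continuous benefit/risk score in [0, 1] to the closest label.
        if not 0.0 <= score <= 1.0:
            raise ValueError("score must lie in [0, 1]")
        return min(LABEL_VALUES, key=lambda lab: abs(LABEL_VALUES[lab] - score))

    assert score_to_label(0.8) == "high"  # 0.8 is nearest to v_4 = 0.75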

References

  1. Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., Rieke, A.: Discrimination through optimization: how Facebook’s ad delivery can lead to biased outcomes. Proc. ACM Hum.-Comput. Interact. 3(CSCW), 1–30 (2019)


  2. Angwin, J., et al.: Machine bias (2016)


  3. Bechavod, Y., Ligett, K.: Penalizing unfairness in binary classification. arXiv preprint arXiv:1707.00044 (2017)

  4. Berk, R., et al.: A convex framework for fair regression. arXiv preprint arXiv:1706.02409 (2017)

  5. Bonnemains, V., Saurel, C., Tessier, C.: Embedded ethics: some technical and ethical challenges. Ethics Inf. Technol. 20(1), 41–58 (2018). https://doi.org/10.1007/s10676-018-9444-x


  6. Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. In: Conference on Fairness, Accountability and Transparency, pp. 77–91. PMLR (2018)


  7. Calders, T., Verwer, S.: Three Naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Disc. 21(2), 277–292 (2010)


  8. Caton, S., Haas, C.: Fairness in machine learning: a survey. arXiv preprint arXiv:2010.04053 (2020)

  9. Donini, M., Oneto, L., Ben-David, S., Shawe-Taylor, J., Pontil, M.: Empirical risk minimization under fairness constraints. arXiv preprint arXiv:1802.08626 (2018)

  10. Feldman, M., Friedler, S.A., Moeller, J., Scheidegger, C., Venkatasubramanian, S.: Certifying and removing disparate impact. In: Proceedings of the KDD 2015, pp. 259–268 (2015)


  11. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: Advances in Neural Information Processing Systems, vol. 29, pp. 3315–3323 (2016)


  12. Ingold, D., Soper, S.: Amazon doesn’t consider the race of its customers. Should it? (2016)


  13. Kamiran, F., Calders, T.: Classifying without discriminating. In: 2009 2nd International Conference on Computer, Control and Communication, pp. 1–6. IEEE (2009)


  14. Kamishima, T., Akaho, S., Asoh, H., Sakuma, J.: Fairness-aware classifier with prejudice remover regularizer. In: Flach, P.A., De Bie, T., Cristianini, N. (eds.) ECML PKDD 2012. LNCS (LNAI), vol. 7524, pp. 35–50. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33486-3_3


  15. Kleiman-Weiner, M., Saxe, R., Tenenbaum, J.B.: Learning a commonsense moral theory. Cognition 167, 107–123 (2017). Moral Learning


  16. Lambrecht, A., Tucker, C.: Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Manag. Sci. 65(7), 2966–2981 (2019)


  17. Lungren, M.: Risks of AI race detection in the medical system (2021)


  18. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 54(6), 1–35 (2021)


  19. Pessach, D., Shmueli, E.: Algorithmic fairness. arXiv preprint arXiv:2001.09784 (2020)

  20. Quy, T.L., Roy, A., Iosifidis, V., Ntoutsi, E.: A survey on datasets for fairness-aware machine learning. arXiv preprint arXiv:2110.00530 (2021)

  21. Rossini, D., Croce, D., Mancini, S., Pellegrino, M., Basili, R.: Actionable ethics through neural learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 5537–5544 (2020)


  22. Savani, Y., White, C., Govindarajulu, N.S.: Intra-processing methods for debiasing neural networks. In: Advances in Neural Information Processing Systems, vol. 33, pp. 2798–2810 (2020). arXiv:2006.08564

  23. Snow, J.: Amazon’s face recognition falsely matched 28 members of Congress with mugshots (2018)


  24. Vanderelst, D., Winfield, A.: An architecture for ethical robots inspired by the simulation theory of cognition. Cogn. Syst. Res. 48, 56–66 (2018). Cognitive Architectures for Artificial Minds


  25. Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web, pp. 1171–1180 (2017)



Author information

Correspondence to Danilo Croce or Roberto Basili.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Squadrone, L., Croce, D., Basili, R. (2023). Ethics by Design for Intelligent and Sustainable Adaptive Systems. In: Dovier, A., Montanari, A., Orlandini, A. (eds) AIxIA 2022 – Advances in Artificial Intelligence. AIxIA 2022. Lecture Notes in Computer Science (LNAI), vol. 13796. Springer, Cham. https://doi.org/10.1007/978-3-031-27181-6_11


  • DOI: https://doi.org/10.1007/978-3-031-27181-6_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27180-9

  • Online ISBN: 978-3-031-27181-6

  • eBook Packages: Computer Science (R0)
