Abstract
The growing use of data-driven decision systems based on Artificial Intelligence (AI) by governments, companies, and social organizations has drawn attention to the challenges they pose to society. Over the last few years, news about discrimination and privacy issues, among others, has appeared on social media and highlighted the vulnerabilities of these systems. Despite extensive research on these issues, there is no consensus on the definitions of the concepts inherent to the risks and/or vulnerabilities of data-driven decision systems. Categorizing the dangers and vulnerabilities of such systems will facilitate ethics by design, ethics in design, and ethics for designers, thereby contributing to responsible AI. The main goal of this work is to understand which types of AI risks/vulnerabilities are Ethical and/or Technological, and the differences between human and machine classification. We analyze two tasks: (i) the classification of risks/vulnerabilities by humans; and (ii) the classification of risks/vulnerabilities by machines. To carry out the analysis, we conducted a survey for the human classification and applied the BERT algorithm for the machine classification. The results show that, even with different levels of detail, the two classifications agree in most cases.
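In the machine-classification setting described above, a BERT-style model outputs one logit per category, and these logits are mapped to class probabilities with a softmax before picking the most likely label. A minimal pure-Python sketch of that final step, where the logit values and the two category names (Ethical vs. Technological) are illustrative assumptions, not values from the paper:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max logit
    # before exponentiating, then normalize to sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical model logits for one vulnerability description,
# scored against two categories.
labels = ["Ethical", "Technological"]
logits = [2.1, 0.3]

probs = softmax(logits)
predicted = labels[probs.index(max(probs))]
```

In practice the same conversion is typically done with `torch.nn.functional.softmax` on the model's output tensor; the stand-alone version here only illustrates the probability normalization and argmax decision.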
Acknowledgments
The research reported in this work was partially supported by the European Commission funded project “Humane AI: Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society and the World Around Us” (grant #820437). The support is gratefully acknowledged.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Teixeira, S., Veloso, B., Rodrigues, J.C., Gama, J. (2023). Ethical and Technological AI Risks Classification: A Human Vs Machine Approach. In: Koprinska, I., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2022. Communications in Computer and Information Science, vol 1752. Springer, Cham. https://doi.org/10.1007/978-3-031-23618-1_10
Print ISBN: 978-3-031-23617-4
Online ISBN: 978-3-031-23618-1