
Ethical and Technological AI Risks Classification: A Human Vs Machine Approach

  • Conference paper
  • First Online:
Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2022)

Abstract

The growing use of data-driven decision systems based on Artificial Intelligence (AI) by governments, companies and social organizations has drawn more attention to the challenges they pose to society. Over the last few years, news about discrimination that appeared on social media, and about privacy, among other issues, has highlighted their vulnerabilities. Despite all the research around these issues, there is no consensus on the definition of the concepts inherent to the risks and/or vulnerabilities of data-driven decision systems. Categorizing the dangers and vulnerabilities of these systems will facilitate ethics by design, ethics in design and ethics for designers, thereby contributing to responsible AI. The main goal of this work is to understand which types of AI risks/vulnerabilities are Ethical and/or Technological, and how human and machine classifications of them differ. We analyze two problems: (i) the classification of risks/vulnerabilities by humans; and (ii) the classification of risks/vulnerabilities by machines. For the human classification we applied a survey, and for the machine classification we used the BERT algorithm. The results show that, even with different levels of detail, the human and machine classifications agree in most cases.
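As a rough illustration of the machine-classification side described in the abstract, the sketch below shows how a BERT-based text classifier could assign short risk/vulnerability descriptions to an Ethical or Technological category using the Hugging Face Transformers library. This is not the authors' pipeline: the checkpoint, the single-label binary label set, and the example texts are assumptions, and the classification head shown here is untrained (in practice it would be fine-tuned on labelled risk descriptions).

```python
# Minimal sketch (not the authors' pipeline) of classifying short risk/vulnerability
# descriptions with a BERT-based text classifier. The checkpoint, labels, and
# example texts below are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["Ethical", "Technological"]  # assumed single-label scheme for illustration

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # the classification head is randomly initialised; it would need fine-tuning
model.eval()

texts = [  # hypothetical risk descriptions
    "The model systematically assigns lower scores to minority applicants.",
    "The data pipeline silently drops malformed records before training.",
]

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # shape: (batch, num_labels)
probs = torch.softmax(logits, dim=-1)    # per-class probabilities

for text, p in zip(texts, probs):
    label = LABELS[int(p.argmax())]
    print(f"{label:>13}  ({p.max().item():.2f})  {text}")
```

Since the abstract's "and/or" suggests a risk can be both Ethical and Technological, a multi-label variant with independent sigmoid outputs per label would be a natural alternative to the single-label setup sketched here.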



Acknowledgments

The research reported in this work was partially supported by the European Commission funded project “Humane AI: Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society and the World Around Us” (grant #820437). The support is gratefully acknowledged.

Author information


Corresponding author

Correspondence to Sónia Teixeira.


Annex

Annex I

Table 2. Main concerns/risks identified.

Annex II

See Tables 3, 4, 5 and 6.

Table 3. Description of risk/vulnerability concepts

Annex III

Table 4. Risks/Vulnerabilities contributions using MFA
Table 5. Risks/Vulnerabilities contributions using MFA

Annex IV

Table 6. Human vs Machine classifications


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Teixeira, S., Veloso, B., Rodrigues, J.C., Gama, J. (2023). Ethical and Technological AI Risks Classification: A Human Vs Machine Approach. In: Koprinska, I., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2022. Communications in Computer and Information Science, vol 1752. Springer, Cham. https://doi.org/10.1007/978-3-031-23618-1_10


  • DOI: https://doi.org/10.1007/978-3-031-23618-1_10

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23617-4

  • Online ISBN: 978-3-031-23618-1

  • eBook Packages: Computer Science, Computer Science (R0)
