
Assessing the Impact of Cognitive Biases in AI Project Development

  • Conference paper
  • First Online:
Artificial Intelligence in HCI (HCII 2023)

Abstract

Biases are a major issue in the field of Artificial Intelligence (AI). They can come from the data, or be algorithmic or cognitive. While the first two types are well studied in the literature, few works focus on the third, even though the task of designing AI systems is conducive to the emergence of cognitive biases. To address this gap, we propose a study of the impact of cognitive biases during the development cycle of AI projects. Our study focuses on six cognitive biases selected for their impact on ideation and development processes: Conformity, Confirmation, Illusory correlation, Measurement, Presentation, and Normality. Our major contribution is a cognitive bias awareness tool, in the form of a mind map, for AI professionals, which addresses the impact of cognitive biases at each stage of an AI project. This tool was evaluated through semi-structured interviews and Technology Acceptance Model (TAM) questionnaires. User testing shows that (i) the majority of participants reported being more aware of cognitive biases in their work thanks to our tool, (ii) the mind map would improve the quality of their decisions, their confidence in their realizations, and their satisfaction with the work done, which directly impact their performance and efficiency, and (iii) the mind map was well received by the professionals, who appropriated it by planning how to integrate it into their current work process: for awareness-raising during the onboarding of new employees, and to develop reflexes for questioning their own decision-making.


Notes

  1.

    Statistical AI is a subfield of AI that exploits probabilistic graphical models to provide a framework for both (i) efficient reasoning and learning and (ii) modeling of complex domains such as in machine learning, network communication, computational biology, computer vision and robotics [13].

    Symbolic AI refers to AI research methods based on high-level symbolic representations of problems, logic, and search, that are accessible and readable by humans [14].

    Hybrid AI combines approaches from symbolic AI and statistical AI.

  2.

    Please refer to footnote 1.

  3.

    They studied at the École Nationale Supérieure de Cognitique (ENSC), an engineering school in Bordeaux, France, that aims to place humans at the heart of its designs by blending the fields of cognitive science, human-computer interaction, and AI.

References

  1. Barenkamp, M., Rebstadt, J., Thomas, O.: Applications of AI in classical software engineering. AI Perspectives 2(1), 1 (2020)

  2. Baron, J., Ritov, I.: Omission bias, individual differences, and normality. Organ. Behav. Hum. Decis. Process. 94(2), 74–85 (2004)

  3. Bellamy, R.K., et al.: AI Fairness 360: an extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943 (2018)

  4. Haselton, M.G., Nettle, D., Murray, D.R.: The evolution of cognitive bias. In: The Handbook of Evolutionary Psychology, vol. 2: Integrations, p. 968 (2015)

  5. Bonabeau, E.: Don’t trust your gut. Harv. Bus. Rev. 81(5), 116–123 (2003)

  6. Caliskan, A., Bryson, J.J., Narayanan, A.: Semantics derived automatically from language corpora contain human-like biases. Science 356(6334), 183–186 (2017)

  7. Cazes, M., Franiatte, N., Delmas, A., André, J., Rodier, M., Kaadoud, I.C.: Evaluation of the sensitivity of cognitive biases in the design of artificial intelligence. In: Rencontres des Jeunes Chercheurs en Intelligence Artificielle (RJCIA 2021), Plate-Forme Intelligence Artificielle (PFIA 2021) (2021)

  8. Chapman, L.J., Chapman, J.P.: Genesis of popular but erroneous psychodiagnostic observations. J. Abnorm. Psychol. 72(3), 193 (1967)

  9. Cunningham, G.E.: Mindmapping: its effects on student achievement in high school biology. The University of Texas at Austin (2006)

  10. Danks, D., London, A.J.: Algorithmic bias in autonomous systems. In: IJCAI, vol. 17, pp. 4691–4697 (2017)

  11. Dressel, J., Farid, H.: The accuracy, fairness, and limits of predicting recidivism. Sci. Adv. 4(1), eaao5580 (2018)

  12. Farrand, P., Hussain, F., Hennessy, E.: The efficacy of the ‘mind map’ study technique. Med. Educ. 36(5), 426–431 (2002)

  13. Frontiers in Robotics and AI: Statistical relational artificial intelligence (2018). https://www.frontiersin.org/research-topics/5640/statistical-relational-artificial-intelligence. Accessed 23 Feb 2023

  14. Garnelo, M., Shanahan, M.: Reconciling deep learning with symbolic artificial intelligence: representing objects and relations. Curr. Opin. Behav. Sci. 29, 17–23 (2019)

  15. Gebru, T., et al.: Datasheets for datasets. Commun. ACM 64(12), 86–92 (2021)

  16. Gordon, D.F., Desjardins, M.: Evaluation and selection of biases in machine learning. Mach. Learn. 20, 5–22 (1995)

  17. Hamilton, D.L., Rose, T.L.: Illusory correlation and the maintenance of stereotypic beliefs. J. Pers. Soc. Psychol. 39(5), 832 (1980)

  18. Hoorens, V.: Self-enhancement and superiority biases in social comparison. Eur. Rev. Soc. Psychol. 4(1), 113–139 (1993)

  19. Howard, A., Borenstein, J.: The ugly truth about ourselves and our robot creations: the problem of bias and social inequity. Sci. Eng. Ethics 24, 1521–1536 (2018)

  20. Intahchomphoo, C., Gundersen, O.E.: Artificial intelligence and race: a systematic review. Leg. Inf. Manag. 20(2), 74–84 (2020)

  21. Johansen, J., Pedersen, T., Johansen, C.: Studying the transfer of biases from programmers to programs. arXiv preprint arXiv:2005.08231 (2020)

  22. Kahneman, D., Lovallo, D., Sibony, O.: Before you make that big decision. Harvard Business Review (2011)

  23. Lallemand, C., Gronier, G.: Méthodes de design UX: 30 méthodes fondamentales pour concevoir et évaluer les systèmes interactifs. Editions Eyrolles (2015)

  24. Leavy, S.: Gender bias in artificial intelligence: the need for diversity and gender theory in machine learning. In: Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, pp. 14–16 (2018)

  25. Lepri, B., Oliver, N., Letouzé, E., Pentland, A., Vinck, P.: Fair, transparent, and accountable algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges. Philos. Technol. 31, 611–627 (2018)

  26. Likert, R.: A technique for the measurement of attitudes. Archives of Psychology (1932)

  27. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. 54(6), 1–35 (2021)

  28. Mohanani, R., Salman, I., Turhan, B., Rodríguez, P., Ralph, P.: Cognitive biases in software engineering: a systematic mapping study. IEEE Trans. Software Eng. 46(12), 1318–1339 (2018)

  29. Nelson, G.S.: Bias in artificial intelligence. N. C. Med. J. 80(4), 220–222 (2019)

  30. Nesbit, J.C., Adesope, O.O.: Learning with concept and knowledge maps: a meta-analysis. Rev. Educ. Res. 76(3), 413–448 (2006)

  31. Neves, J.M.T.D.: The impact of artificial intelligence in banking. Ph.D. thesis, Universidade Nova de Lisboa (2022)

  32. Nickerson, R.S.: Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2(2), 175–220 (1998)

  33. Nissenbaum, H.: How computer systems embody values. Computer 34(3), 118–120 (2001)

  34. Norori, N., Hu, Q., Aellen, F.M., Faraci, F.D., Tzovara, A.: Addressing bias in big data and AI for health care: a call for open science. Patterns 2(10), 100347 (2021)

  35. Ntoutsi, E., et al.: Bias in data-driven artificial intelligence systems: an introductory survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 10(3), e1356 (2020)

  36. Padalia, D.: Conformity bias: a fact or an experimental artifact? Psychol. Stud. 59, 223–230 (2014)

  37. Panch, T., Szolovits, P., Atun, R.: Artificial intelligence, machine learning and health systems. J. Global Health 8(2), 020303 (2018)

  38. Quadrianto, N., Sharmanska, V.: Recycling privileged learning and distribution matching for fairness. In: Advances in Neural Information Processing Systems 30 (2017)

  39. Raji, I.D., Buolamwini, J.: Actionable auditing: investigating the impact of publicly naming biased performance results of commercial AI products. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 429–435 (2019)

  40. Raji, I.D., et al.: Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 33–44 (2020)

  41. Re, R.M., Solow-Niederman, A.: Developing artificially intelligent justice. Stan. Tech. L. Rev. 22, 242 (2019)

  42. Sharmanska, V., Quadrianto, N.: Learning from the mistakes of others: matching errors in cross-dataset learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3967–3975 (2016)

  43. Silvestro, D., Goria, S., Sterner, T., Antonelli, A.: Improving biodiversity protection through artificial intelligence. Nat. Sustain. 5(5), 415–424 (2022)

  44. Soleimani, M., Intezari, A., Taskin, N., Pauleen, D.: Cognitive biases in developing biased artificial intelligence recruitment system. In: Proceedings of the 54th Hawaii International Conference on System Sciences, pp. 5091–5099 (2021)

  45. Suresh, H., Guttag, J.: A framework for understanding sources of harm throughout the machine learning life cycle. In: Equity and Access in Algorithms, Mechanisms, and Optimization, pp. 1–9. Association for Computing Machinery, New York, NY, USA (2021)

  46. Vapnik, V., Vashist, A.: A new learning paradigm: learning using privileged information. Neural Netw. 22(5–6), 544–557 (2009)

  47. Wason, P.C.: On the failure to eliminate hypotheses in a conceptual task. Quart. J. Exper. Psychol. 12(3), 129–140 (1960)

  48. West, S.M., Whittaker, M., Crawford, K.: Discriminating systems. AI Now (2019)

  49. Yves Martin, N.P.: Acceptabilité, acceptation et expérience utilisateur: évaluation et modélisation des facteurs d’adoption des produits technologiques. Ph.D. thesis, Université Rennes 2 (2018)


Author information


Contributions

Sara Juan and Chloé Bernault contributed equally to this work: conception and realization of the experiments, bibliographical research, and writing of the article. Alexandra Delmas, Marc Rodier, and Jean-Marc Andre contributed to the experiments’ design and the project’s supervision. Ikram Chraibi Kaadoud contributed to the bibliographic research, the design of the experiments, the writing of the article, and the supervision of the project. All authors contributed to the revision of the manuscript, and read and approved the submitted version.

Corresponding author

Correspondence to Ikram Chraibi Kaadoud.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bernault, C., Juan, S., Delmas, A., Andre, JM., Rodier, M., Chraibi Kaadoud, I. (2023). Assessing the Impact of Cognitive Biases in AI Project Development. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2023. Lecture Notes in Computer Science, vol 14050. Springer, Cham. https://doi.org/10.1007/978-3-031-35891-3_24


  • DOI: https://doi.org/10.1007/978-3-031-35891-3_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35890-6

  • Online ISBN: 978-3-031-35891-3

  • eBook Packages: Computer Science, Computer Science (R0)
