The Role of Response Time for Algorithm Aversion in Fast and Slow Thinking Tasks

  • Conference paper
  • In: Artificial Intelligence in HCI (HCII 2023)

Abstract

Artificial intelligence (AI) outperforms humans in numerous domains. Despite security and ethical concerns, AI is expected to deliver crucial improvements at both the personal and societal level. However, algorithm aversion is known to reduce the effectiveness of human-AI interaction and to diminish the potential benefits of AI. In this paper, we build upon Dual System Theory and investigate the effect of AI response time on algorithm aversion in slow-thinking and fast-thinking tasks. To answer our research question, we conducted a 2 × 2 incentivized laboratory experiment with 116 students in an advice-taking setting. We manipulated the length of the AI response time (short vs. long) and the task type (fast-thinking vs. slow-thinking). In addition to these treatments, we varied the domain of the task. Our results demonstrate that long response times are associated with lower algorithm aversion, both when subjects think fast and when they think slow. Moreover, when subjects were thinking fast, we found significant differences in algorithm aversion between task domains.



Acknowledgements

This research is funded by the German Federal Ministry of Education and Research (BMBF) within the “The Future of Value Creation - Research on Production, Services and Work” program (02L19C115). Olesja Lammert and Jaroslaw Kornowicz acknowledge funding by the Deutsche Forschungsgemeinschaft (TRR 318/1 2021 - 438445824). The authors are responsible for the content of this publication. The authors thank Kirsten Thommes and René Fahr for valuable discussion and constructive comments.

Corresponding author

Correspondence to Anastasia Lebedeva.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lebedeva, A., Kornowicz, J., Lammert, O., Papenkordt, J. (2023). The Role of Response Time for Algorithm Aversion in Fast and Slow Thinking Tasks. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2023. Lecture Notes in Computer Science, vol 14050. Springer, Cham. https://doi.org/10.1007/978-3-031-35891-3_9

  • DOI: https://doi.org/10.1007/978-3-031-35891-3_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35890-6

  • Online ISBN: 978-3-031-35891-3

  • eBook Packages: Computer Science; Computer Science (R0)
