Is Overreliance on AI Provoked by Study Design?

  • Conference paper
  • First Online:
Human-Computer Interaction – INTERACT 2023 (INTERACT 2023)

Abstract

Recent studies have found that humans tend to overrely on AI when making AI-supported decisions. AI explanations were often insufficient to mitigate this overreliance, and sometimes even increased it. However, typical AI-assisted decision-making studies consist of long series of decision tasks, which may induce complacent behavior and do not properly reflect many real-life scenarios. We therefore raise the question of whether these findings might be favored by the design of these studies. In a first step toward answering this question, we compared different study designs in an experiment and found indications that observations of overreliance might indeed be favored by common study designs. Further research is needed to clarify to what extent overreliance can be attributed to study designs rather than to more fundamental human-AI interaction issues.

Author information

Corresponding author

Correspondence to Zelun Tony Zhang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, Z.T., Tong, S., Liu, Y., Butz, A. (2023). Is Overreliance on AI Provoked by Study Design? In: Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M. (eds.) Human-Computer Interaction – INTERACT 2023. INTERACT 2023. Lecture Notes in Computer Science, vol. 14144. Springer, Cham. https://doi.org/10.1007/978-3-031-42286-7_3

  • DOI: https://doi.org/10.1007/978-3-031-42286-7_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42285-0

  • Online ISBN: 978-3-031-42286-7

  • eBook Packages: Computer Science (R0)
