When Imprecision Improves Advice: Disclosing Algorithmic Error Probability to Increase Advice Taking from Algorithms

Conference paper in: HCI International 2020 - Posters (HCII 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1224)

Abstract

Due to the growing amount and complexity of available information and data, people use algorithmic decision support systems to improve the quality of their decisions. However, research has shown that despite a growing “algorithm appreciation”, people still abandon algorithmic decision support when they see it fail – even if they know that the alternative, a human advisor, gives worse advice. Because algorithms are based on statistical models and never mirror reality completely, algorithmic mistakes are inevitable – and, as described, the ensuing algorithmic refusal poses a threat to the quality of subsequent decision-making. With human advisors, people use the advisor’s confidence, often operationalized as a probability or a Likert-scale estimate, as an indicator of correctness. The question arises whether information about algorithmic uncertainty, e.g., an algorithm’s error probability, can serve a similar goal: to increase transparency for human users and ultimately counteract algorithmic refusal.
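
As an illustration only (not the authors’ implementation), the following minimal Python sketch contrasts the two forms of advice the abstract alludes to: the same algorithmic recommendation presented with and without a disclosed error probability. The function name and message format are assumptions made for this sketch.

```python
from typing import Optional

def format_advice(recommendation: str, error_probability: Optional[float] = None) -> str:
    """Render an algorithmic recommendation, optionally disclosing its estimated error probability."""
    if error_probability is None:
        # No-disclosure condition: the bare recommendation.
        return f"Recommendation: {recommendation}"
    # Disclosure condition: the recommendation plus its estimated chance of being wrong.
    return (f"Recommendation: {recommendation} "
            f"(estimated error probability: {error_probability:.0%})")

print(format_advice("Option B"))        # without disclosure
print(format_advice("Option B", 0.15))  # with disclosure: "... (estimated error probability: 15%)"
```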

Therefore, the authors are currently developing a study design that compares how algorithms that do vs. do not indicate an estimated accuracy influence participants’ advice taking. To investigate the effect on algorithmic refusal, participants receive incorrect advice from an algorithm that does vs. does not indicate its error probability; in a second round, participants then decide whether they will use the algorithm again. The poster describes open questions and limitations and provides a basis for further discussion of the ongoing research.
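
For readers unfamiliar with the advice-taking paradigm, advice taking is commonly quantified as the weight of advice (WOA): the share of the distance between a judge’s initial estimate and the advisor’s recommendation that the judge moves in the final estimate. The poster does not commit to a specific measure, so the sketch below is only an assumed operationalization in Python.

```python
def weight_of_advice(initial_estimate: float, advice: float, final_estimate: float) -> float:
    """Weight of advice (WOA): 0 = advice ignored, 1 = advice fully adopted.

    WOA = |final - initial| / |advice - initial|
    """
    if advice == initial_estimate:
        raise ValueError("WOA is undefined when the advice equals the initial estimate")
    return abs(final_estimate - initial_estimate) / abs(advice - initial_estimate)

# Example: initial guess 100, algorithmic advice 140, revised guess 130 -> WOA = 0.75
print(weight_of_advice(100, 140, 130))
```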

Author information

Correspondence to Johanna M. Werz.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Werz, J.M., Borowski, E., Isenhardt, I. (2020). When Imprecision Improves Advice: Disclosing Algorithmic Error Probability to Increase Advice Taking from Algorithms. In: Stephanidis, C., Antona, M. (eds) HCI International 2020 - Posters. HCII 2020. Communications in Computer and Information Science, vol 1224. Springer, Cham. https://doi.org/10.1007/978-3-030-50726-8_66

  • DOI: https://doi.org/10.1007/978-3-030-50726-8_66

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50725-1

  • Online ISBN: 978-3-030-50726-8

  • eBook Packages: Computer Science, Computer Science (R0)
