Abstract
Due to the increasing amount and complexity of available information and data, people use algorithmic decision support systems to improve the quality of their decisions. However, research has shown that despite a growing “algorithm appreciation”, people still abandon algorithmic decision support when they see it fail, even if they know that the alternative, a human advisor, gives worse advice. Because algorithms are based on statistical models and never mirror reality completely, algorithmic mistakes are inevitable, and the resulting algorithmic refusal poses a threat to the quality of subsequent decision-making. With human advisors, people use the advisor’s confidence, often operationalized as a probability or a Likert-scale estimate, as an indicator of correctness. The question arises whether information about algorithmic uncertainty, e.g., an algorithm’s error probability, can serve a similar purpose: to increase transparency for human users and ultimately counteract algorithmic refusal.
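As a purely illustrative sketch of the disclosure idea described above, the following Python snippet shows one way an algorithmic advisor could report an estimated error probability alongside its recommendation, here derived from accuracy on held-out validation data. All names and values are hypothetical and do not describe the study’s implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Advice:
    recommendation: str                   # the advice shown to the participant
    error_probability: Optional[float]    # disclosed chance of being wrong, or None


def make_advice(recommendation: str, validation_accuracy: float,
                disclose_error: bool) -> Advice:
    """Package a recommendation, optionally disclosing the algorithm's
    estimated error probability (1 - accuracy on held-out validation data).
    Hypothetical helper for illustration, not the study's implementation."""
    error_p = round(1.0 - validation_accuracy, 2) if disclose_error else None
    return Advice(recommendation, error_p)


# The same advice as it might appear in the two experimental conditions
with_disclosure = make_advice("Option B", validation_accuracy=0.85, disclose_error=True)
without_disclosure = make_advice("Option B", validation_accuracy=0.85, disclose_error=False)
print(with_disclosure)     # Advice(recommendation='Option B', error_probability=0.15)
print(without_disclosure)  # Advice(recommendation='Option B', error_probability=None)
```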
The authors are therefore developing a study design that compares how algorithms that do or do not indicate an estimated accuracy influence participants’ advice taking. To investigate the effect on algorithmic refusal, participants receive incorrect advice from an algorithm that either indicates or does not indicate its error probability; in a second round, they can decide whether to use the algorithm again. The poster will describe open questions and limitations and provide a basis for further discussion of the ongoing research.
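To make the planned comparison more concrete, the sketch below shows how advice taking in the first round and reuse in the second round could be quantified per condition. The weight-of-advice measure is a common operationalization in judge-advisor research; that this study uses it, and all names and data below, are assumptions for illustration only.

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Weight of advice (WOA): 0 = advice ignored, 1 = advice fully adopted.
    A common measure in judge-advisor studies; assumed here, not confirmed."""
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)


def reuse_rate(second_round_choices: list) -> float:
    """Share of participants who choose the algorithm again after seeing it err."""
    return sum(second_round_choices) / len(second_round_choices)


# Hypothetical data for the two conditions (error probability disclosed vs. not)
woa_disclosed = weight_of_advice(initial=40, advice=60, final=55)    # 0.75
woa_undisclosed = weight_of_advice(initial=40, advice=60, final=45)  # 0.25
print(woa_disclosed, woa_undisclosed)
print(reuse_rate([True, True, False, True]))  # 0.75
```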