Abstract
Machine learning–based decision support systems (DSS) are attracting the interest of the medical community. Their use, however, could have deep consequences, biasing the doctor's interpretation of a case through automation bias and deskilling. In this work we address the design of DSS with the goal of minimizing these biases through the design and implementation of programmed inefficiencies (PIs), that is, features whose stated purpose is to make the doctor's reliance on the DSS less obvious (or more difficult). We illustrate this concept by presenting a real-life medical DSS, called DataWise, which embeds different PIs and is currently undergoing iterative prototyping and testing with medical users in two clinical settings. We describe the main features of DataWise and show how different PIs have been realized by prompting doctors for multiple inputs and by using qualitative visualizations instead of precise, but possibly misleading, indications. Finally, we discuss the implications of this design approach for naturalistic decision making, especially in life-saving domains such as medicine.
Notes
- 1.
- 2. The value \(\text{offset}(p)\) may be greater than 1 (thus resulting in the patient's circle lying outside of the dashed circle); this means that the training set is unrepresentative of the patient under consideration.
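The note above can be sketched in code. Since the paper's actual definition of \(\text{offset}(p)\) is not reproduced here, the sketch below assumes a hypothetical definition: the patient's distance from the training-set centroid, normalized by the radius of the dashed circle enclosing the training samples, so that values above 1 indicate a patient outside the region the training set covers.

```python
import numpy as np

def offset(p, X):
    """Hypothetical sketch of offset(p): distance of patient p from the
    training-set centroid, normalized by the largest training-sample
    distance (the radius of the dashed circle). Values > 1 mean the
    patient falls outside the region covered by the training data,
    i.e., the training set is unrepresentative of this patient."""
    centroid = X.mean(axis=0)                       # center of the training set
    radius = np.linalg.norm(X - centroid, axis=1).max()  # dashed-circle radius
    return np.linalg.norm(p - centroid) / radius

# Toy training set: four patients at the corners of a square.
X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
print(offset(np.array([1.0, 1.0]), X))  # at the centroid: 0.0
print(offset(np.array([4.0, 4.0]), X))  # far outside the circle: > 1
```

Under this reading, the qualitative visualization needs only the ratio, not the raw distance, which fits the paper's preference for indications that are suggestive rather than falsely precise.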
© 2019 Springer Nature Switzerland AG
Cabitza, F., Campagner, A., Ciucci, D., Seveso, A. (2019). Programmed Inefficiencies in DSS-Supported Human Decision Making. In: Torra, V., Narukawa, Y., Pasi, G., Viviani, M. (eds) Modeling Decisions for Artificial Intelligence. MDAI 2019. Lecture Notes in Computer Science(), vol 11676. Springer, Cham. https://doi.org/10.1007/978-3-030-26773-5_18