Programmed Inefficiencies in DSS-Supported Human Decision Making

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11676)

Abstract

Machine learning–based decision support systems (DSS) are attracting the interest of the medical community. Their usage, however, could have deep consequences in biasing the doctor’s interpretation of a case, through automation bias and deskilling. In this work we address the design of DSS aimed at minimizing these biases by means of programmed inefficiencies (PIs), that is, features whose stated purpose is to make the human doctor’s reliance on the DSS less obvious (or more difficult). We illustrate this concept by presenting a real-life medical DSS, called DataWise, which embeds different PIs and is currently undergoing iterative prototyping and testing with medical users in two clinical settings. We describe the main features of DataWise and show how different PIs have been conveyed by prompting doctors for multiple inputs and by using qualitative visualizations instead of precise, but possibly misleading, indications. Finally, we discuss the implications of this design approach for naturalistic decision making, especially in life-saving domains like medicine.
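
The "prompting doctors for multiple inputs" PI mentioned above can be sketched as follows. This is a minimal illustration, not DataWise's actual implementation; all names (`GatedAdvice`, `record_assessment`, `reveal`) are hypothetical. The idea is that the DSS output stays hidden until the clinician has committed an independent judgment, which counters automation bias by forcing the doctor's own reading of the case to come first.

```python
class GatedAdvice:
    """Illustrative programmed inefficiency: the DSS suggestion is
    withheld until the clinician records an independent assessment."""

    def __init__(self, model_prediction):
        self._prediction = model_prediction   # hidden DSS output
        self._clinician_calls = []            # independent human inputs

    def record_assessment(self, clinician_id, diagnosis):
        """Collect the doctor's own call before any machine advice."""
        self._clinician_calls.append((clinician_id, diagnosis))

    def reveal(self, min_assessments=1):
        """Release the DSS suggestion only after enough independent
        human input has been committed; otherwise stay opaque."""
        if len(self._clinician_calls) < min_assessments:
            raise RuntimeError("record an independent assessment first")
        return self._prediction


advice = GatedAdvice("melanoma: high risk")
advice.record_assessment("dr_rossi", "suspicious nevus")
print(advice.reveal())  # only now is the machine output shown
```

Raising `min_assessments` above 1 would model a stricter PI in which several clinicians must weigh in before the machine's opinion is disclosed.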


Notes

  1. As such, a PI is something in between what Tenner calls an inspired inefficiency [20] and what Ohm and Frankle call a desirable inefficiency [13].

  2. The value \(\text {offset}(p)\) could be greater than 1 (thus resulting in the patient’s circle lying outside the dashed circle); this means that the training set is unrepresentative of the patient under consideration.
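
One plausible reading of \(\text {offset}(p)\), sketched below under stated assumptions (the note does not specify how DataWise actually computes it), is a distance of patient \(p\)'s feature vector from the training-set centroid, normalized by the radius of the training data, so that values above 1 place the patient's circle outside the dashed circle:

```python
import math


def offset(patient, training_set):
    """Hypothetical offset(p): distance of the patient's feature
    vector from the training-set centroid, normalized by the radius
    of the training data (the maximum centroid distance).  A value
    greater than 1 means the training set is unrepresentative of
    this patient."""
    dim = len(patient)
    centroid = [sum(x[i] for x in training_set) / len(training_set)
                for i in range(dim)]

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    radius = max(dist(x, centroid) for x in training_set)
    return dist(patient, centroid) / radius


train = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(offset((0.5, 0.5), train))      # 0.0: patient at the centroid
print(offset((3.0, 3.0), train) > 1)  # True: outside the training set
```

Rendering this as a circle's position rather than as a number is itself a PI in the paper's sense: the qualitative visualization avoids suggesting a precision the underlying estimate does not have.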

References

  1. Berg, M.: Rationalizing Medical Work: Decision-Support Techniques and Medical Practices. MIT Press, Cambridge (1997)
  2. Bien, N., Rajpurkar, P., Ball, R.: Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Med. 15(11), e1002699 (2018)
  3. Cabitza, F., Rasoini, R., Gensini, G.F.: Unintended consequences of machine learning in medicine. JAMA 318(6), 517–518 (2017)
  4. Campagner, A., Cabitza, F., Ciucci, D.: Exploring medical data classification with three-way decision tree. In: HEALTHINF 2019. SCITEPRESS (2019)
  5. Campagner, A., Ciucci, D.: Three-way and semi-supervised decision tree learning based on orthopartitions. In: Medina, J., et al. (eds.) IPMU 2018. CCIS, vol. 854, pp. 748–759. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91476-3_61
  6. Frischmann, B., Selinger, E.: Re-engineering Humanity. Cambridge University Press, New York (2018)
  7. Google What-If Tool. https://pair-code.github.io/what-if-tool/index.html
  8. Haenssle, H., et al.: Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann. Oncol. 29(8), 1836–1842 (2018)
  9. Han, S.S., Park, G.H., Lim, W., et al.: Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis. PLoS One 13(1), e0191493 (2018)
  10. Klein, G.: Naturalistic decision making. Hum. Factors 50(3), 456–460 (2008)
  11. Montgomery, K.: How Doctors Think: Clinical Judgment and the Practice of Medicine. Oxford University Press, New York (2005)
  12. Muller, J.Z.: The Tyranny of Metrics. Princeton University Press, Princeton (2018)
  13. Ohm, P., Frankle, J.: Desirable inefficiency. Florida Law Review (777) (2017)
  14. O’Mahony, S.: Medicine and the McNamara fallacy. JRCPE 47(3), 281–287 (2017)
  15. Revicki, D., et al.: Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes. J. Clin. Epidemiol. 61(2), 102–109 (2008)
  16. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. CoRR abs/1602.04938 (2016)
  17. Rosenberg, L., Lungren, M., Halabi, S., et al.: Artificial swarm intelligence employed to amplify diagnostic accuracy in radiology. In: IEMCON 2018, pp. 1186–1191 (2018)
  18. Rumball-Smith, J., Shekelle, P.G., Bates, D.W.: Using the electronic health record to understand and minimize overuse. JAMA 317(3), 257–258 (2017)
  19. Skitka, L.J., Mosier, K.L., Burdick, M.: Does automation bias decision-making? IJHCS 51(5), 991–1006 (1999)
  20. Tenner, E.: The Efficiency Paradox: What Big Data Can’t Do. Knopf, New York (2018)
  21. Topol, E.J.: High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25(1), 44 (2019)
  22. Vetter, T.R., Uhler, L.M., Bozic, K.J.: Value-based healthcare: a novel transitional care service strives to improve patient experience and outcomes. Clin. Orthop. Relat. Res. 475(11), 2638–2642 (2017)
  23. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. J. Law Technol. 31(2) (2018)
  24. Yu, K.H., Kohane, I.S.: Framing the challenges of artificial intelligence in medicine. BMJ Qual. Saf. 28(3), 238–241 (2019)

Author information

Correspondence to Federico Cabitza.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Cabitza, F., Campagner, A., Ciucci, D., Seveso, A. (2019). Programmed Inefficiencies in DSS-Supported Human Decision Making. In: Torra, V., Narukawa, Y., Pasi, G., Viviani, M. (eds) Modeling Decisions for Artificial Intelligence. MDAI 2019. Lecture Notes in Computer Science(), vol 11676. Springer, Cham. https://doi.org/10.1007/978-3-030-26773-5_18

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-26773-5_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-26772-8

  • Online ISBN: 978-3-030-26773-5

  • eBook Packages: Computer Science (R0)
