Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems

Journal of Medical Systems

Abstract

Ongoing research has examined how artificial intelligence (AI) can help healthcare consumers make sense of their clinical data, such as diagnostic radiology reports. How to promote the acceptance of such novel technology is an active research topic. Recent studies highlight the importance of providing local explanations of AI predictions and disclosing model performance to help users decide whether to trust those predictions. Despite these efforts, little empirical research has quantitatively measured how AI explanations affect healthcare consumers' perceptions of patient-facing, AI-powered healthcare systems. The aim of this study is to evaluate the effects of different AI explanations on people's perceptions of such systems. We designed and deployed a large-scale experiment (N = 3,423) on Amazon Mechanical Turk (MTurk) to evaluate the effects of AI explanations on people's perceptions in the context of comprehending radiology reports. We crossed two factors, the extent of explanation for the prediction (High vs. Low Transparency) and model performance (Good vs. Weak AI Model), to create four conditions and randomly assigned each participant to one of them. Participants were asked to classify a radiology report as describing a normal or an abnormal finding and then completed a post-study survey about their perceptions of the AI tool. We found that revealing model performance information can promote people's trust in, and perceived usefulness of, system outputs, whereas providing local explanations of the rationale behind a prediction can promote understandability but not necessarily trust. We also found that when model performance is low, the more information the AI system discloses, the less people trust the system. Lastly, whether a person agrees with the AI's prediction, and whether that prediction is correct, can also influence the effect of AI explanations. We conclude by discussing implications for designing AI systems that help healthcare consumers interpret diagnostic reports.
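To make the 2 × 2 between-subjects design concrete, the sketch below shows one way such random assignment could be implemented. It is a minimal illustration, not the study's actual materials: the factor labels, the assign_condition helper, and the fixed seed are all assumptions introduced here for demonstration.

```python
import random

# Hypothetical sketch of the 2 x 2 between-subjects design described
# above: two factors (transparency, model performance) crossed into four
# conditions, each participant randomly assigned to exactly one of them.
TRANSPARENCY = ("high", "low")   # extent of local explanation shown
PERFORMANCE = ("good", "weak")   # quality of the underlying AI model

CONDITIONS = [(t, p) for t in TRANSPARENCY for p in PERFORMANCE]

def assign_condition(rng: random.Random) -> tuple:
    """Place one participant into one of the four conditions at random."""
    return rng.choice(CONDITIONS)

if __name__ == "__main__":
    rng = random.Random(42)            # fixed seed for a reproducible demo
    counts = {c: 0 for c in CONDITIONS}
    for _ in range(3423):              # N reported in the abstract
        counts[assign_condition(rng)] += 1
    for (t, p), n in counts.items():
        print(f"transparency={t:<4} performance={p:<4} -> {n} participants")
```

With uniform random assignment, the four groups come out roughly equal in size, which is what supports the between-condition comparisons of trust, understandability, and perceived usefulness summarized above.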



Author information


Corresponding author

Correspondence to Zhan Zhang.

Ethics declarations

Ethical Approval

This study was approved by the Institutional Review Board (IRB) of the first author's university.

Conflict of Interest

All authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Patient Facing Systems


About this article

Cite this article

Zhang, Z., Genc, Y., Wang, D. et al. Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems. J Med Syst 45, 64 (2021). https://doi.org/10.1007/s10916-021-01743-6

