DOI: 10.1145/3411764.3445101
CHI Conference Proceedings · Research article

Exploring and Promoting Diagnostic Transparency and Explainability in Online Symptom Checkers

Published: 07 May 2021

Abstract

Online symptom checkers (OSCs) are intelligent systems widely used in health contexts such as primary care, remote healthcare, and epidemic control. OSCs use algorithms such as machine learning to facilitate self-diagnosis and triage based on the symptoms that healthcare consumers report. However, the lack of transparency and comprehensibility in intelligent systems can lead to unintended consequences, such as misleading users, especially in high-stakes domains like healthcare. In this paper, we attempt to enhance diagnostic transparency by augmenting OSCs with explanations. We first conducted an interview study (N=25) with users of existing OSCs to specify their needs for explanations. We then designed a COVID-19 OSC enhanced with three types of explanations. A lab-controlled user study (N=20) found that explanations can significantly improve user experience in multiple respects. We discuss how explanations are interwoven into conversation flow and present implications for future OSC designs.

Supplementary Material

MP4 File (3411764.3445101_videopreview.mp4)
Preview video




      Published In

      CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
      May 2021
      10862 pages
      ISBN:9781450380966
      DOI:10.1145/3411764

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. COVID-19
      2. Explanation
      3. Health
      4. Symptom Checker
      5. Transparency

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • Penn State College of IST's seed grant

      Conference

      CHI '21

      Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions (24%)



      Cited By

• (2024) An Overview of the Empirical Evaluation of Explainable AI (XAI): A Comprehensive Guideline for User-Centered Evaluation in XAI. Applied Sciences 14(23), 11288. DOI: 10.3390/app142311288
• (2024) User-Centered Evaluation of Explainable Artificial Intelligence (XAI): A Systematic Literature Review. Human Behavior and Emerging Technologies 2024(1). DOI: 10.1155/2024/4628855
• (2024) Incremental XAI: Memorable Understanding of AI with Incremental Explanations. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. DOI: 10.1145/3613904.3642689
• (2024) AI is Entering Regulated Territory: Understanding the Supervisors' Perspective for Model Justifiability in Financial Crime Detection. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–21. DOI: 10.1145/3613904.3642326
• (2024) Dealing with Uncertainty: Understanding the Impact of Prognostic Versus Diagnostic Tasks on Trust and Reliance in Human-AI Decision Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. DOI: 10.1145/3613904.3641905
• (2024) Designing and Evaluating Online Health Consultation Interfaces: A Perspective of Physician-Patient Power Asymmetry. IEEE Access 12, 124111–124127. DOI: 10.1109/ACCESS.2024.3454213
• (2024) Explainable AI decision support improves accuracy during telehealth strep throat screening. Communications Medicine 4(1). DOI: 10.1038/s43856-024-00568-x
• (2024) Designing explainable AI to improve human-AI team performance. Artificial Intelligence in Medicine 149. DOI: 10.1016/j.artmed.2024.102780
• (2024) Do We Learn From Each Other: Understanding the Human-AI Co-Learning Process Embedded in Human-AI Collaboration. Group Decision and Negotiation. DOI: 10.1007/s10726-024-09912-x
• (2024) Enhancing Explainability in Medical AI: Developing Human-Centered Participatory Design Cards. HCI International 2024 – Late Breaking Papers, 164–194. DOI: 10.1007/978-3-031-76827-9_10
