DOI: 10.1145/3511047.3537685

Explaining User Models with Different Levels of Detail for Transparent Recommendation: A User Study

Published: 04 July 2022

Abstract

In this paper, we shed light on explaining user models for transparent recommendation while considering users' personal characteristics. To this end, we developed the transparent Recommendation and Interest Modeling Application (RIMA), which provides interactive, layered explanations of the user model at three levels of detail (basic, intermediate, advanced) to meet the demands of different types of end users. We conducted a within-subject study (N=31) to investigate the relationship between personal characteristics and the explanation level of detail, and the effects of these two variables on the perception of the explainable recommender system with regard to different explanation goals. Based on the study results, we provide suggestions to support the effective design of user model explanations for transparent recommendation.
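The core idea of layered explanations — serving the same user-model content at basic, intermediate, or advanced detail depending on the user — can be sketched as follows. This is an illustrative sketch only: the mapping from a (hypothetical) normalized need-for-cognition score to a detail level, and the `InterestModelExplanation` class, are assumptions for demonstration, not the authors' RIMA design.

```python
from dataclasses import dataclass

LEVELS = ("basic", "intermediate", "advanced")

@dataclass
class InterestModelExplanation:
    """One interest from an inferred user model (hypothetical structure)."""
    interest: str
    weight: float

    def render(self, level: str) -> str:
        # Same underlying user-model fact, progressively more detail.
        if level not in LEVELS:
            raise ValueError(f"unknown level: {level}")
        if level == "basic":
            return f"You seem interested in {self.interest}."
        if level == "intermediate":
            return (f"You seem interested in {self.interest} "
                    f"(weight {self.weight:.2f} in your profile).")
        return (f"'{self.interest}' was extracted from your documents, "
                f"weighted {self.weight:.2f}, and drives related recommendations.")

def pick_level(need_for_cognition: float) -> str:
    """Map a normalized score in [0, 1] to a detail level (assumed thresholds)."""
    if need_for_cognition < 0.33:
        return "basic"
    if need_for_cognition < 0.66:
        return "intermediate"
    return "advanced"
```

In this sketch, a user with a high need-for-cognition score would see the advanced rendering, while a low-scoring user would see only the basic one; the study's question is precisely whether such personal characteristics should steer the default level of detail.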




Published In

UMAP '22 Adjunct: Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization
July 2022
409 pages
ISBN:9781450392327
DOI:10.1145/3511047

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. explainable recommendation
  2. explainable user modeling
  3. intelligent explanation interfaces
  4. personal characteristics
  5. recommender systems

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

UMAP '22

Acceptance Rates

Overall Acceptance Rate 162 of 633 submissions, 26%


Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months)85
  • Downloads (Last 6 weeks)8
Reflects downloads up to 08 Mar 2025

Cited By
  • (2024) An Overview of the Empirical Evaluation of Explainable AI (XAI): A Comprehensive Guideline for User-Centered Evaluation in XAI. Applied Sciences 14(23), 11288. DOI: 10.3390/app142311288. Online publication date: 3-Dec-2024.
  • (2024) ScrollyPOI: A Narrative-Driven Interactive Recommender System for Points-of-Interest Exploration and Explainability. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 292–304. DOI: 10.1145/3631700.3665183. Online publication date: 27-Jun-2024.
  • (2024) Balanced Explanations in Recommender Systems. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 25–29. DOI: 10.1145/3631700.3664915. Online publication date: 27-Jun-2024.
  • (2024) Dynamic Ridge Plot Sliders: Supporting Users' Understanding of the Item Space Structure and Feature Dependencies in Interactive Recommender Systems. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 106–113. DOI: 10.1145/3631700.3664872. Online publication date: 27-Jun-2024.
  • (2024) Designing Effective Warnings for Manipulative Designs in Mobile Applications. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 12–17. DOI: 10.1145/3627043.3659550. Online publication date: 22-Jun-2024.
  • (2024) On the Negative Perception of Cross-domain Recommendations and Explanations. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2102–2113. DOI: 10.1145/3626772.3657735. Online publication date: 10-Jul-2024.
  • (2024) Evaluation of the User-Centric Explanation Strategies for Interactive Recommenders. Explainable and Transparent AI and Multi-Agent Systems, 21–38. DOI: 10.1007/978-3-031-70074-3_2. Online publication date: 25-Sep-2024.
  • (2023) Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System. Information 14(7), 401. DOI: 10.3390/info14070401. Online publication date: 14-Jul-2023.
  • (2023) Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System. International Journal of Human–Computer Interaction 40(22), 7248–7269. DOI: 10.1080/10447318.2023.2262797. Online publication date: 15-Oct-2023.
