Abstract
In this chapter, we analyse factors that influence the assessment of user experience (UX) for a chatbot operating in the domain of technical customer support. To determine which UX factors can be assessed reliably in a crowdsourcing setup, we conduct a crowd-based UX assessment study built on a set of scenario-based tasks and analyse the resulting UX assessments in the light of an influencing user characteristic, namely the self-reported self-efficacy of individual users. By segmenting users according to self-efficacy, we find significant differences in users' UX assessments and expectations with respect to a series of UX constituents such as acceptability, task efficiency, system error, ease of use, naturalness, personality and promoter score. Our results strongly suggest that self-efficacy can inform personalization and user adaptation strategies for technical customer support chatbots. We therefore recommend considering its influence when designing chatbot adaptation strategies aimed at maximizing customer experience.
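For illustration only, the following minimal sketch shows how crowd-collected UX ratings could be compared between low and high self-efficacy user segments. It is not the authors' analysis code: the synthetic data, the median-split segmentation and the choice of a Mann-Whitney U test are our own assumptions.

```python
# Illustrative sketch (assumptions, not the chapter's actual analysis):
# segment crowd workers by self-reported self-efficacy via a median split
# and test whether their UX ratings differ between the two segments.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical data: per-participant self-efficacy score (1-5) and a UX
# rating (e.g., ease of use on a 5-point scale).
self_efficacy = rng.uniform(1, 5, size=120)
ux_rating = np.clip(2.5 + 0.4 * self_efficacy + rng.normal(0, 0.8, 120), 1, 5)

# Median split into low / high self-efficacy segments.
threshold = np.median(self_efficacy)
low_seg = ux_rating[self_efficacy < threshold]
high_seg = ux_rating[self_efficacy >= threshold]

# Non-parametric comparison of the two segments' UX ratings.
stat, p_value = mannwhitneyu(low_seg, high_seg, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```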
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Cao, Y. et al. (2022). On the Impact of Self-efficacy on Assessment of User Experience in Customer Service Chatbot Conversations. In: Stoyanchev, S., Ultes, S., Li, H. (eds) Conversational AI for Natural Human-Centric Interaction. Lecture Notes in Electrical Engineering, vol 943. Springer, Singapore. https://doi.org/10.1007/978-981-19-5538-9_18
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-5537-2
Online ISBN: 978-981-19-5538-9