On the Impact of Self-efficacy on Assessment of User Experience in Customer Service Chatbot Conversations

  • Conference paper
  • In: Conversational AI for Natural Human-Centric Interaction
Abstract

In this chapter, we analyse factors influencing the assessment of user experience (UX) for a chatbot operating in the domain of technical customer support. To determine which UX factors can be assessed reliably in a crowdsourcing setup, we conduct a crowd-based UX assessment study using a set of scenario-based tasks and analyse the UX assessments in the light of influencing user characteristics, i.e., the self-reported self-efficacy of individual users. By segmenting users according to self-efficacy, we find significant differences in users' UX assessments and expectations with respect to a series of UX constituents such as acceptability, task efficiency, system error, ease of use, naturalness, personality and promoter score. Our results suggest that self-efficacy is a promising basis for personalization and user-adaptation strategies in technical customer support chatbots. We therefore recommend considering its influence when designing chatbot adaptation strategies to maximize customer experience.
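The segmentation-based analysis described above can be illustrated with a short procedure: segment crowd participants by self-reported self-efficacy, then test each UX constituent for differences between segments. The following is a minimal sketch, not the authors' implementation; the ratings table, column names, median split and Mann-Whitney U test are illustrative assumptions.

    # Minimal sketch (assumptions: hypothetical data, median split, Mann-Whitney U test)
    import pandas as pd
    from scipy.stats import mannwhitneyu

    # Hypothetical ratings: one row per crowd participant, Likert-scale UX ratings.
    ratings = pd.DataFrame({
        "self_efficacy": [2.1, 4.5, 3.8, 1.9, 4.2, 3.1, 4.8, 2.4],
        "ease_of_use":   [3, 5, 4, 2, 5, 3, 5, 2],
        "naturalness":   [2, 4, 4, 3, 5, 3, 4, 2],
    })

    # Segment participants into low/high self-efficacy groups at the median.
    median = ratings["self_efficacy"].median()
    low = ratings[ratings["self_efficacy"] < median]
    high = ratings[ratings["self_efficacy"] >= median]

    # Compare each UX constituent between segments with a rank-based test.
    for ux in ["ease_of_use", "naturalness"]:
        stat, p = mannwhitneyu(low[ux], high[ux], alternative="two-sided")
        print(f"{ux}: U={stat:.1f}, p={p:.3f}")

In practice, each UX constituent named in the abstract (acceptability, task efficiency, system error, ease of use, naturalness, personality, promoter score) would be one column, and a correction for multiple comparisons would typically be applied.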




Author information


Correspondence to Tim Polzehl.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Cao, Y. et al. (2022). On the Impact of Self-efficacy on Assessment of User Experience in Customer Service Chatbot Conversations. In: Stoyanchev, S., Ultes, S., Li, H. (eds) Conversational AI for Natural Human-Centric Interaction. Lecture Notes in Electrical Engineering, vol 943. Springer, Singapore. https://doi.org/10.1007/978-981-19-5538-9_18


  • DOI: https://doi.org/10.1007/978-981-19-5538-9_18


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-5537-2

  • Online ISBN: 978-981-19-5538-9

