
Analyzing Methods, Instruments, and Tools for Evaluating the Customer eXperience

Conference paper

In: Social Computing and Social Media: Applications in Education and Commerce (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13316)

Abstract

There are multiple evaluation approaches (methods, instruments, and tools) that can be used to evaluate the Customer eXperience (CX). However, companies and organizations may overlook them, assuming they are unrelated to the CX. This research analyzes 29 evaluation approaches, identified in a previous study (Rojas and Quiñones 2021), that are used in the areas of usability, user experience (UX), and satisfaction. We examine and differentiate these evaluation approaches, indicating: (1) the type of participants required to apply them (experts or users); (2) the overall cost of using them (cheap or expensive); (3) their disadvantages or potential risks; and (4) the CX dimensions they can evaluate. We found that: (1) most evaluation approaches (69%) require representative users rather than expert evaluators; (2) most evaluation approaches (86.2%) are inexpensive to use, since they need no special equipment or training; and (3) the most evaluated CX dimension was “sensorial”, while the least evaluated was “emotional”.


References

  1. Rawson, A., Duncan, E., Jones, C.: The truth about customer experience. Harv. Bus. Rev. 91(9), 90–98 (2013)

  2. Lewis, J.R.: Usability: lessons learned… and yet to be learned. Int. J. Hum. Comput. Interact. 30(9), 663–684 (2014). https://doi.org/10.1080/10447318.2014.930311

  3. Rusu, V., Rusu, C., Botella, F., Quiñones, D., Bascur, C., Rusu, V.Z.: Customer eXperience: a bridge between service science and human-computer interaction. In: Ahram, T., Karwowski, W., Pickl, S., Taiar, R. (eds.) IHSED 2019. AISC, vol. 1026, pp. 385–390. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-27928-8_59

  4. Morville, P.: User Experience Design. Semantic Studios (2005)

  5. Rojas, L., Quiñones, D.: Customer eXperience evaluation methodologies: a literature review. In: ACM International Conference Proceeding Series (2021). https://doi.org/10.1145/3488392.3488398

  6. Schmitt, B.: Experiential marketing. J. Mark. Manag. 15(1–3), 53–67 (1999)

  7. Meyer, C., Schwager, A.: Understanding customer experience. Harv. Bus. Rev. 85(2), 116–124 (2007)

  8. Gentile, C., Spiller, N., Noci, G.: How to sustain the customer experience: an overview of experience components that co-create value with the customer. Eur. Manag. J. 25(5), 395–410 (2007). https://doi.org/10.1016/j.emj.2007.08.005

  9. Verhoef, P.C., Lemon, K.N., Parasuraman, A., Roggeveen, A., Tsiros, M., Schlesinger, L.A.: Customer experience creation: determinants, dynamics and management strategies. J. Retail. 85(1), 31–41 (2009). https://doi.org/10.1016/j.jretai.2008.11.001

  10. Lemke, F., Clark, M., Wilson, H.: Customer experience quality: an exploration in business and consumer contexts using repertory grid technique. J. Acad. Mark. Sci. 39(6), 846–869 (2011). https://doi.org/10.1007/s11747-010-0219-0

  11. Lemon, K.N., Verhoef, P.C.: Understanding customer experience throughout the customer journey. J. Mark. 80(6), 69–96 (2016). https://doi.org/10.1509/jm.15.0420

  12. ISO 9241-210:2019: Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems. International Organization for Standardization, Geneva (2019). https://www.iso.org/standard/77520.html

  13. Norman, D., Nielsen, J.: The Definition of User Experience (UX). Nielsen Norman Group. https://www.nngroup.com/articles/definition-user-experience/

  14. Dam, R., Siang, T.: Affinity Diagrams – Learn How to Cluster and Bundle Ideas and Facts. Interaction Design Foundation (2020). https://www.interaction-design.org/literature/article/affinity-diagrams-learn-how-to-cluster-and-bundle-ideas-and-facts

  15. Meng, L.: Literature review. In: Gender in Literary Translation. CIS, vol. 3, pp. 9–28. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-3720-8_2

  16. Álvarez, T.: Metodología para la evaluación de la experiencia del usuario de sistemas de software interactivos para usuarios ciegos [Methodology for evaluating the user experience of interactive software systems for blind users]. Universidad Veracruzana (2019)

  17. Sherwin, K.: Group Card Sorting: Uncover Users’ Mental Models for Better Information Architecture. Nielsen Norman Group (2018). https://www.nngroup.com/articles/card-sorting-definition/

  18. Jordan, P.W.: Designing Pleasurable Products: An Introduction to the New Human Factors. Taylor & Francis, London (2000)

  19. Interaction Design Foundation: How to Conduct a Cognitive Walkthrough (2020). https://www.interaction-design.org/literature/article/how-to-conduct-a-cognitive-walkthrough

  20. Moran, K.: Setup of an Eyetracking Study. Nielsen Norman Group (2019). https://www.nngroup.com/articles/eyetracking-setup/

  21. Nielsen, J., Molich, R.: Heuristic evaluation of user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 249–256 (1990). https://doi.org/10.1145/97243.97281

  22. Business Research Methodology: Observation (2011). https://research-methodology.net/research-methods/qualitative-research/observation

  23. Hackett, G.: Survey research methods. Pers. Guid. J. 59(9), 599–604 (1981)

  24. Burmester, M., Mast, M., Jäger, K., Homans, H.: Valence method for formative evaluation of user experience. In: Proceedings of the 8th ACM Conference on Designing Interactive Systems, pp. 364–367 (2010)

  25. User Interface Design GmbH: AttrakDiff (2013). http://www.attrakdiff.de/

  26. Desmet, P., Overbeeke, K., Tax, S.: Designing products with added emotional value: development and application of an approach for research through design. Des. J. 4(1), 32–47 (2001). https://doi.org/10.2752/146069201789378496

  27. Voss, K.E., Spangenberg, E.R., Grohmann, B.: Measuring the hedonic and utilitarian dimensions of consumer attitude. J. Mark. Res. 40(3), 310–320 (2003)

  28. Watson, D., Clark, L.A., Tellegen, A.: Development and validation of brief measures of positive and negative affect: the PANAS scales. J. Pers. Soc. Psychol. 54(6), 1063–1070 (1988)

  29. Lewis, J.R.: Psychometric evaluation of the post-study system usability questionnaire: the PSSUQ. Proc. Hum. Factors Soc. Annu. Meet. 36(16), 1259–1260 (1992)

  30. Bradley, M.M., Lang, P.J.: Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25(1), 49–59 (1994)

  31. Parasuraman, A., Zeithaml, V., Berry, L.: SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. J. Retail. 64(1), 1–30 (1988)

  32. Brooke, J.: SUS: a ‘quick and dirty’ usability scale. In: Usability Evaluation in Industry, pp. 207–212. Taylor & Francis, London (1996). https://doi.org/10.1201/9781498710411-35

  33. Laugwitz, B., Held, T., Schrepp, M.: Construction and evaluation of a user experience questionnaire. In: Symposium of the Austrian HCI and Usability Engineering Group, pp. 63–76 (2008)

  34. Tague, N.R.: The Quality Toolbox. ASQ Quality Press, Milwaukee (2005)


Acknowledgments

This work was supported by the School of Informatics Engineering of the Pontificia Universidad Católica de Valparaíso – Chile. Luis Rojas has been granted the “INF-PUCV” Graduate Scholarship. Luis Rojas is supported by Grant ANID BECAS/DOCTORADO NACIONAL, Chile, No 21211272. Daniela Quiñones is supported by Grant ANID (ex CONICYT), Chile, FONDECYT INICIACIÓN, Project Nº 11190759.

Author information

Correspondence to Luis Rojas.

Appendices

Appendix A: Participants, Costs and Disadvantages of Each Evaluation Approach

| Evaluation approach | Participant | Cost | Disadvantages |
|---|---|---|---|
| Affinity diagram [14] | Experts | Cheap | Requires previously collected information; can take considerable time depending on the variety of information |
| AttrakDiff [25] | Users | Cheap | Little information is available about this instrument; the original publication is in German |
| Automated usability evaluation software [15] | Experts | Expensive | Such software is usually not free; it often lacks flexibility and customization |
| Blind finger tracking [16] | Users | Cheap | A screenshot must be taken each time the user performs a function on the mobile device |
| Card sorting [17] | Users | Cheap | May not go deep enough; results can be highly variable and inconsistent |
| Cause and effect diagram [34] | Experts | Cheap | Difficult to represent the interrelated nature of problems and causes; not very useful for representing complex problems |
| Co-discovery [18] | Users | Cheap | The researcher cannot control the direction of the discussion; only problems related to the defined tasks can be identified |
| Cognitive walkthrough [19] | Experts | Cheap | Experts’ experience may cause them to overlook problems; only problems related to the defined tasks are identified |
| Critical to quality (CTQ) tree [34] | Experts | Cheap | Depends on other methods to identify needs; not very useful for representing complex problems |
| Emocards [26] | Users | Cheap | Users may have difficulty differentiating emotions; the method does not measure the actual emotion, only perceived pleasantness and arousal |
| Eye tracking [20] | Users | Expensive | Equipment and training can be expensive; it is difficult for users to control eye position accurately at all times |
| Focus group [18] | Users | Expensive | Participants usually know they are being observed, so answers might be dishonest; not efficient at covering a particular issue in maximum depth |
| Hedonic utility scale [27] | Users | Cheap | The scale can be interpreted subjectively; results may vary depending on the participant’s mood |
| Heuristic evaluation [21] | Experts | Cheap | The relevance of the problems identified depends on the evaluators’ experience; costs can increase quickly if many expert evaluators are required |
| Interview [18] | Users | Cheap | Participants usually know they are being observed, so answers might be dishonest; costs can increase quickly, making interviews expensive |
| Observations [22] | Experts | Cheap | The observer has limited control over the physical situation; the relevance of the observed data depends on the evaluator’s experience |
| Positive and negative affect schedule (PANAS) [28] | Users | Cheap | The scale can be interpreted subjectively; results may vary depending on the participant’s mood |
| Post-study system usability questionnaire (PSSUQ) [29] | Users | Cheap | Less information is available for this instrument than for others |
| Quality function deployment (QFD) [34] | Experts | Cheap | Categories are based on qualitative aspects and can appear vague and unclear |
| Questionnaire [18] | Users | Cheap | Participants may give wrong answers or leave questions unanswered; answers might be dishonest |
| Self-assessment manikin (SAM) [30] | Users | Cheap | The scale can be interpreted subjectively; results may vary depending on the participant’s mood |
| SERVQUAL [31] | Users | Cheap | Evaluates customer perception only in a general way; focuses only on service delivery, not on outcomes |
| Supply, Input, Process, Output and Customer (SIPOC) [34] | Experts | Cheap | Not applicable to all processes; not very useful for representing complex problems |
| Survey [23] | Users | Cheap | Participants may give wrong answers or leave questions unanswered; costs can increase quickly, making surveys expensive |
| System usability scale (SUS) [32] | Users | Cheap | A subjective measure of perceived usability; provides only quantitative data, so it is difficult to know why participants assigned certain scores |
| Thinking aloud [18] | Users | Cheap | Participants limit their responses because they are being observed; only problems related to the defined tasks can be identified |
| UX questionnaire (UEQ) [33] | Users | Cheap | Participants may have difficulty interpreting the items of the scale |
| Valence method [24] | Users | Cheap | Participants must use the product or prototype for the first time during the evaluation; results may vary depending on the participant’s mood |
| Web usage analysis [15] | Users | Expensive | Less information is available for this method than for others |
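As a cross-check (not part of the original paper), the minimal Python sketch below tallies the Participant and Cost columns transcribed from the table above. It reproduces the figures reported in the abstract: 20 of 29 approaches require users (≈ 69%), and 25 of 29 are cheap to apply (≈ 86.2%). The shortened approach names are ours.

```python
# (participant, cost) labels transcribed from Appendix A, in row order.
approaches = {
    "Affinity diagram": ("Experts", "Cheap"),
    "AttrakDiff": ("Users", "Cheap"),
    "Automated usability evaluation software": ("Experts", "Expensive"),
    "Blind finger tracking": ("Users", "Cheap"),
    "Card sorting": ("Users", "Cheap"),
    "Cause and effect diagram": ("Experts", "Cheap"),
    "Co-discovery": ("Users", "Cheap"),
    "Cognitive walkthrough": ("Experts", "Cheap"),
    "Critical to quality (CTQ) tree": ("Experts", "Cheap"),
    "Emocards": ("Users", "Cheap"),
    "Eye tracking": ("Users", "Expensive"),
    "Focus group": ("Users", "Expensive"),
    "Hedonic utility scale": ("Users", "Cheap"),
    "Heuristic evaluation": ("Experts", "Cheap"),
    "Interview": ("Users", "Cheap"),
    "Observations": ("Experts", "Cheap"),
    "PANAS": ("Users", "Cheap"),
    "PSSUQ": ("Users", "Cheap"),
    "QFD": ("Experts", "Cheap"),
    "Questionnaire": ("Users", "Cheap"),
    "SAM": ("Users", "Cheap"),
    "SERVQUAL": ("Users", "Cheap"),
    "SIPOC": ("Experts", "Cheap"),
    "Survey": ("Users", "Cheap"),
    "SUS": ("Users", "Cheap"),
    "Thinking aloud": ("Users", "Cheap"),
    "UEQ": ("Users", "Cheap"),
    "Valence method": ("Users", "Cheap"),
    "Web usage analysis": ("Users", "Expensive"),
}

n = len(approaches)                                        # 29 approaches
users = sum(p == "Users" for p, _ in approaches.values())  # 20
cheap = sum(c == "Cheap" for _, c in approaches.values())  # 25

print(f"{users}/{n} require users -> {users / n:.1%}")     # 69.0%
print(f"{cheap}/{n} are cheap     -> {cheap / n:.1%}")     # 86.2%
```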

Appendix B: Association Between CX Dimensions and Evaluation Approaches

| Evaluation approach | Sensorial | Emotional | Cognitive | Pragmatic | Lifestyle | Relational |
|---|---|---|---|---|---|---|
| Affinity diagram [14] | X |  | X |  | X | X |
| AttrakDiff [25] | X |  |  | X |  |  |
| Automated usability evaluation software [15] |  |  |  | X |  |  |
| Blind finger tracking [16] | X |  | X | X |  |  |
| Card sorting [17] | X |  | X | X |  |  |
| Cause and effect diagram [34] |  |  | X |  |  | X |
| Co-discovery [18] | X |  | X | X |  | X |
| Cognitive walkthrough [19] | X |  | X | X |  |  |
| Critical to quality (CTQ) tree [34] |  |  | X |  |  |  |
| Emocards [26] | X | X |  |  |  |  |
| Eye tracking [20] |  |  | X | X |  |  |
| Focus group [18] | X | X |  |  | X | X |
| Hedonic utility scale [27] | X | X |  |  | X |  |
| Heuristic evaluation [21] |  |  | X | X |  |  |
| Interview [18] | X | X | X | X | X | X |
| Observations [22] | X |  | X |  | X | X |
| Positive and negative affect schedule (PANAS) [28] | X | X |  |  |  |  |
| Post-study system usability questionnaire (PSSUQ) [29] | X |  | X | X |  |  |
| Quality function deployment (QFD) [34] |  |  | X |  |  | X |
| Questionnaire [18] | X | X | X | X | X | X |
| Self-assessment manikin (SAM) [30] | X | X |  |  |  |  |
| SERVQUAL [31] | X |  | X |  | X |  |
| Supply, Input, Process, Output and Customer (SIPOC) [34] |  |  | X |  |  | X |
| Survey [23] | X | X | X | X | X | X |
| System usability scale (SUS) [32] | X |  |  | X |  |  |
| Thinking aloud [18] | X |  | X | X |  |  |
| UX questionnaire (UEQ) [33] | X |  | X |  | X |  |
| Valence method [24] | X | X |  |  | X |  |
| Web usage analysis [15] |  |  | X | X |  |  |
| Total | 21 | 9 | 20 | 15 | 10 | 10 |
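Similarly, the short sketch below (ours, not the authors’) recomputes the column totals from the dimension marks transcribed above, confirming that “sensorial” is the most evaluated dimension (21 approaches) and “emotional” the least (9).

```python
DIMS = ("Sensorial", "Emotional", "Cognitive", "Pragmatic", "Lifestyle", "Relational")

# One row per approach in Appendix B order; 1 = dimension marked "X".
ROWS = [
    (1, 0, 1, 0, 1, 1),  # Affinity diagram
    (1, 0, 0, 1, 0, 0),  # AttrakDiff
    (0, 0, 0, 1, 0, 0),  # Automated usability evaluation software
    (1, 0, 1, 1, 0, 0),  # Blind finger tracking
    (1, 0, 1, 1, 0, 0),  # Card sorting
    (0, 0, 1, 0, 0, 1),  # Cause and effect diagram
    (1, 0, 1, 1, 0, 1),  # Co-discovery
    (1, 0, 1, 1, 0, 0),  # Cognitive walkthrough
    (0, 0, 1, 0, 0, 0),  # Critical to quality (CTQ) tree
    (1, 1, 0, 0, 0, 0),  # Emocards
    (0, 0, 1, 1, 0, 0),  # Eye tracking
    (1, 1, 0, 0, 1, 1),  # Focus group
    (1, 1, 0, 0, 1, 0),  # Hedonic utility scale
    (0, 0, 1, 1, 0, 0),  # Heuristic evaluation
    (1, 1, 1, 1, 1, 1),  # Interview
    (1, 0, 1, 0, 1, 1),  # Observations
    (1, 1, 0, 0, 0, 0),  # PANAS
    (1, 0, 1, 1, 0, 0),  # PSSUQ
    (0, 0, 1, 0, 0, 1),  # QFD
    (1, 1, 1, 1, 1, 1),  # Questionnaire
    (1, 1, 0, 0, 0, 0),  # SAM
    (1, 0, 1, 0, 1, 0),  # SERVQUAL
    (0, 0, 1, 0, 0, 1),  # SIPOC
    (1, 1, 1, 1, 1, 1),  # Survey
    (1, 0, 0, 1, 0, 0),  # SUS
    (1, 0, 1, 1, 0, 0),  # Thinking aloud
    (1, 0, 1, 0, 1, 0),  # UEQ
    (1, 1, 0, 0, 1, 0),  # Valence method
    (0, 0, 1, 1, 0, 0),  # Web usage analysis
]

# Transpose the matrix and sum each dimension's column.
totals = [sum(col) for col in zip(*ROWS)]
for dim, total in sorted(zip(DIMS, totals), key=lambda t: -t[1]):
    print(f"{dim:10s} {total}")
# Sensorial 21, Cognitive 20, Pragmatic 15, Lifestyle 10, Relational 10, Emotional 9
```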


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Rojas, L., Quiñones, D. (2022). Analyzing Methods, Instruments, and Tools for Evaluating the Customer eXperience. In: Meiselwitz, G. (eds) Social Computing and Social Media: Applications in Education and Commerce. HCII 2022. Lecture Notes in Computer Science, vol 13316. Springer, Cham. https://doi.org/10.1007/978-3-031-05064-0_24

  • DOI: https://doi.org/10.1007/978-3-031-05064-0_24

  • Publisher: Springer, Cham

  • Print ISBN: 978-3-031-05063-3

  • Online ISBN: 978-3-031-05064-0