Improving the identification of hedonic quality in user requirements: a second controlled experiment

  • RE 2017
  • Published in: Requirements Engineering

Abstract

Systematically engineering a good user experience (UX) into a computer-based system under development demands that the user requirements of the system reflect all needs, including emotional needs, of all stakeholders. User requirements address two different types of qualities: pragmatic qualities (PQs), which address system functionality and usability, and hedonic qualities (HQs), which address the stakeholder’s psychological well-being. Studies show that users tend to describe satisfying UXes mainly with PQs and that some users seem to believe that they are describing an HQ when they are actually describing a PQ. The problem is to determine whether classifying any user requirement as PQ-related or HQ-related is difficult, and if so, why. We conducted two controlled experiments involving the same twelve requirements-engineering and UX professionals, hereinafter called “analysts.” The first experiment, which had the twelve analysts classify each of 105 user requirements as PQ-related or HQ-related, shows that neither (1) an analyst’s involvement in the project from which the requirements came nor (2) the analyst’s use of a detailed model of the qualities in addition to the standard definitions of “PQ” and “HQ” has a positive effect on the consistency of the analyst’s classification with those of the other analysts. The second experiment, which had the twelve analysts classify each of a set of 50 user requirements derived from the 105 of the first experiment, showed that the difficulties seem to be caused both by the analysts’ lack of skill in applying the definitions of “PQ” and “HQ” and by poorly written user requirement specifications. The first experiment revealed that classifying user requirements is considerably harder than initially assumed. The second experiment provided evidence that the difficulties can be mitigated by the combination of (1) training analysts in applying the definitions of “PQ” and “HQ” and (2) casting user requirement specifications in a new template that forces provision of the information needed for reliable classification. The second experiment also shows that neither training analysts nor casting user requirement specifications in the new template mitigates, by itself, the difficulty in classifying user requirements.
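
The abstract’s notion of “consistency of the analyst’s classification with those of the other analysts” is an inter-rater agreement question; the reference list below cites Fleiss’ kappa [26] and the Landis–Koch interpretation scale [27], which suggests that agreement among the twelve analysts was measured along those lines. The following minimal Python sketch, with toy data and an assumed function name (neither taken from the study), shows how such a kappa could be computed for a set of PQ/HQ classifications.

    # A minimal sketch (illustrative data, assumed function name) of Fleiss' kappa,
    # the agreement statistic cited in reference [26], for several analysts
    # classifying each user requirement as PQ-related or HQ-related.

    from collections import Counter

    def fleiss_kappa(ratings, categories=("PQ", "HQ")):
        """ratings: one list per requirement, holding one label per analyst."""
        n_subjects = len(ratings)                # number of requirements
        n_raters = len(ratings[0])               # analysts per requirement
        counts = [Counter(labels) for labels in ratings]  # n_ij per requirement

        # p_j: proportion of all assignments that went to category j
        p = [sum(c[cat] for c in counts) / (n_subjects * n_raters)
             for cat in categories]

        # P_i: observed agreement on requirement i
        P = [(sum(c[cat] ** 2 for cat in categories) - n_raters)
             / (n_raters * (n_raters - 1)) for c in counts]

        P_bar = sum(P) / n_subjects              # mean observed agreement
        P_e = sum(p_j ** 2 for p_j in p)         # agreement expected by chance
        return (P_bar - P_e) / (1 - P_e)

    # Toy example: 4 requirements, each labeled by 3 analysts.
    print(fleiss_kappa([
        ["PQ", "PQ", "PQ"],
        ["PQ", "HQ", "PQ"],
        ["HQ", "HQ", "HQ"],
        ["PQ", "HQ", "HQ"],
    ]))  # prints 0.333..., "fair" agreement on the Landis-Koch scale [27]

The resulting value could then be read against the Landis–Koch scale, on which, for example, 0.41–0.60 is conventionally labeled “moderate” agreement.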


Notes

  1. For example, an expensive product communicates to other people that its user is wealthy.

  2. In a hypothesis name, “A” means “alternative,” “N” means “null,” “E” means “concerning ease of classification,” and “C” means “concerning consistency of the classifications.”

  3. (1) ease of use and (2) utility.

  4. (1) enablement of personal development, (2) identification, (3) symbolism, (4) attachment, (5) aesthetics, (6) luxuriousness, (7) trust, (8) physical comfort, and (9) freedom from risk.

  5. There is no CorrN variable, because correctness can be evaluated for only templated, pragmatic or hedonic, and ambiguous user requirements.

  6. Wohlin et al. [32] define mono-operation bias as “If the experiment includes a single independent variable, case, subject or treatment, the experiment may under-represent the construct and thus not give the full picture of the theory. For example, if an inspection experiment is conducted with a single document as object, the cause construct is underrepresented.”

References

  1. Hassenzahl M (2004) The interplay of beauty, goodness, and usability in interactive products. Hum Comput Interact 19(4):319–349

  2. Diefenbach S, Kolb N, Hassenzahl M (2014) The ‘hedonic’ in human–computer interaction: history, contributions, and future research directions. In: Proceedings of conference on designing interactive systems (DIS), pp 305–314

  3. Hassenzahl M (2003) The thing and I: understanding the relationship between user and product. In: Blythe M, Overbeeke K, Monk A, Wright P (eds) Funology: from usability to enjoyment. Kluwer, Norwell, pp 31–42

  4. Jordan P (2000) Designing pleasurable products: an introduction to the new human factors. CRC, London

  5. Roto V, Law E, Vermeeren A, Hoonhout J (2011) User experience white paper. In: Demarcating user experience. http://www.allaboutux.org/uxwhitepaper

  6. Ramos I, Berry D (2005) Is emotion relevant to requirements engineering? Requir Eng J 10(3):238–242

  7. Ramos I, Berry DM, Carvalho JA (2005) Requirements engineering for organizational transformation. Inf Softw Technol 47(7):479–495

  8. Thew S, Sutcliffe A (2008) Investigating the role of ‘soft issues’ in the RE process. In: Proceedings of IEEE international requirements engineering conference (RE), pp 63–66

  9. Milne A, Maiden N (2012) Power and politics in requirements engineering: Embracing the dark side? Requir Eng J 17(2):83–98

  10. Sutcliffe A, Rayson P, Bull CN, Sawyer P (2014) Discovering affect-laden requirements to achieve system acceptance. In: Proceedings of IEEE international requirements engineering conference (RE), pp 173–182

  11. Hassenzahl M, Beu A, Burmester M (2001) Engineering joy. IEEE Softw 18:70–76

  12. Hassenzahl M, Diefenbach S, Göritz A (2010) Needs, affect, and interactive products—facets of user experience. Interact Comput 22(5):353–362

  13. Desmet P, Overbeeke C, Tax S (2001) Designing products with added emotional value: development and application of an approach for research through design. Des J 4(1):32–47

  14. McCarthy J, Wright P (2004) Technology as experience. Interactions 11(5):42–43

  15. Hassenzahl M, Tractinsky N (2006) User experience—a research agenda. Behav Inf Technol 25(2):91–97

  16. Hassenzahl M (2008) User experience (UX): towards an experiential perspective on product quality. In: Proceedings of 20th conference of the Association Francophone d’Interaction Homme–Machine (IHM), pp 11–15

  17. Doerr J (2011) Elicitation of a complete set of non-functional requirements, Ph.D. dissertation, University of Kaiserslautern, Kaiserslautern, DE

  18. Partala T, Kallinen A (2012) Understanding the most satisfying and unsatisfying user experiences: emotions, psychological needs, and context. Interact Comput 24(1):25–34

  19. Maier A, Berry DM (2017) Improving the identification of hedonic quality in user requirements—a controlled experiment. In: Proceedings of IEEE international requirements engineering conference (RE), pp 205–214

  20. Agile Alliance (2018) Glossary: role-feature-reason. https://www.agilealliance.org/glossary/role-feature/. Accessed 2 Apr 2018

  21. Diefenbach S, Hassenzahl M (2011) The dilemma of the hedonic—appreciated, but hard to justify. Interact Comput 23(5):461–472

  22. Maier A (2017) An experiment package for the evaluation of difficulties in the classification of user requirements that are provided as user stories. Fraunhofer IESE, Tech. Rep. 002.17/E. https://cs.uwaterloo.ca/~dberry/FTP_SITE/tech.reports/MaierBerryExperimentalMaterials/

  23. Basili V, Caldiera G, Rombach H (2001) Goal question metric paradigm. In: Marciniak J (ed) Encyclopedia of software engineering, vol 1. Wiley, New York, pp 528–532

  24. Jedlitschka A, Ciolkowski M, Pfahl D (2008) Reporting experiments in software engineering. In: Guide to advanced empirical software engineering. Springer, London, pp 201–228

  25. Fraunhofer IESE (2017) Project Digital Villages. https://www.iese.fraunhofer.de/en/innovation_trends/sra/digital-villages.html

  26. Fleiss JL (1971) Measuring nominal scale agreement among many raters. Psychol Bull 76(5):378–382

  27. Landis J, Koch G (1977) The measurement of observer agreement for categorical data. Biometrics 33(1):159–174

  28. Maier A, Berry DM (2017) Online appendix for improving the identification of hedonic quality in user requirements—two controlled experiments. University of Waterloo, Tech. Rep. https://cs.uwaterloo.ca/~dberry/FTP_SITE/tech.reports/MaierBerryOnlineAppendix/

  29. Keppel G, Wickens T (2004) Design and analysis: a researcher’s handbook, 4th edn. Prentice Hall, Englewood Cliffs

  30. Anderson L, Krathwohl D, Airasian P, Cruikshank K, Mayer R, Pintrich P, Raths J, Wittrock M (2001) A taxonomy for learning, teaching, and assessing: a revision of Bloom’s taxonomy of educational objectives. Pearson, Allyn & Bacon, New York

  31. Bloom B, Engelhart M, Furst E, Hill W, Krathwohl D (1956) Taxonomy of educational objectives, handbook I: the cognitive domain. Longmans, Green and Co, New York

  32. Wohlin C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A (2012) Experimentation in software engineering. Springer, Heidelberg

  33. Gregor S (2002) A theory of theories in information systems. In: Gregor S, Hart D (eds) Information systems foundations: building the theoretical base. Australian National University, Canberra

Acknowledgements

The authors thank this paper’s anonymous reviewers for RE’17 and for this special issue of REJ, as well as Sebastian Adam, Joerg Doerr, and Andreas Jedlitschka, for their comments on earlier drafts of this paper. Daniel Berry’s work was supported in part by Canadian NSERC grant NSERC-RGPIN227055-15.

Author information

Corresponding authors

Correspondence to Andreas Maier or Daniel M. Berry.

About this article

Cite this article

Maier, A., Berry, D.M. Improving the identification of hedonic quality in user requirements: a second controlled experiment. Requirements Eng 23, 401–424 (2018). https://doi.org/10.1007/s00766-018-0290-5
