DOI: 10.1145/2372251.2372300
research-article

Does the prioritization technique affect stakeholders' selection of essential software product features?

Published: 19 September 2012

ABSTRACT

Context: Selecting the essential, non-negotiable product features is a key skill for stakeholders in software projects. Such selection relies on human judgment, possibly supported by structured prioritization techniques and tools.

Goal: Our goal was to investigate whether certain attributes of prioritization techniques affect stakeholders' threshold for judging product features as essential. The four investigated techniques represent the four combinations of granularity (low, high) and cognitive support (low, high).

Method: To control for robustness and masking effects when investigating in the field, we conducted both an artificial experiment and a field experiment using the same prioritization techniques. In the artificial experiment, 94 subjects in four treatment groups indicated which features (from a list of 16) were essential when buying a new cell phone. In the field experiment, 44 domain experts indicated the software product features that were essential for fulfilling the project's vision. The effects of granularity and cognitive support on the number of essential ratings were analyzed and compared between the experiments.

Result: With lower granularity, significantly more features were rated as essential. The effect was large in the artificial experiment and extreme in the field experiment. Added cognitive support had a medium effect, but it worked in opposite directions in the two experiments and was not statistically significant in the field experiment.

Implications: Software projects should avoid taking stakeholders' judgments of essentiality at face value. Practices and tools should be designed to counteract biases and to support the conscious, knowledge-based elements of prioritizing.
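The method described above is a 2x2 between-subjects design: each treatment group receives one combination of granularity (low, high) and cognitive support (low, high), and the outcome is the number of features a subject rates as essential. As an illustration only, the following Python sketch shows how the two main effects in such a design could be summarized with Cohen's d as the effect-size measure; the group sizes and counts are invented placeholders, and this is not the authors' analysis code or data.

    from statistics import mean, stdev

    # Number of features (out of 16) rated essential, one value per subject,
    # keyed by (granularity, cognitive support) treatment group.
    # All counts are invented placeholders, not data from the study.
    groups = {
        ("low",  "low"):  [9, 11, 10, 12, 8],
        ("low",  "high"): [10, 9, 11, 13, 10],
        ("high", "low"):  [5, 6, 4, 7, 5],
        ("high", "high"): [6, 5, 7, 4, 6],
    }

    def cohens_d(a, b):
        # Standardized mean difference using the pooled standard deviation.
        pooled_var = ((len(a) - 1) * stdev(a) ** 2 + (len(b) - 1) * stdev(b) ** 2) \
                     / (len(a) + len(b) - 2)
        return (mean(a) - mean(b)) / pooled_var ** 0.5

    # Main effect of granularity: pool each granularity level over both support levels.
    low_gran  = groups[("low", "low")]  + groups[("low", "high")]
    high_gran = groups[("high", "low")] + groups[("high", "high")]

    # Main effect of cognitive support: pool each support level over both granularity levels.
    low_sup  = groups[("low", "low")]  + groups[("high", "low")]
    high_sup = groups[("low", "high")] + groups[("high", "high")]

    print("granularity (low vs. high):       d = %.2f" % cohens_d(low_gran, high_gran))
    print("cognitive support (high vs. low): d = %.2f" % cohens_d(high_sup, low_sup))

Pooling each factor level over the other factor, as above, gives the two main-effect comparisons that the abstract reports as effect sizes; the study itself additionally compares these effects between the artificial and the field experiment.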


Published in

ESEM '12: Proceedings of the ACM-IEEE international symposium on Empirical software engineering and measurement
September 2012
338 pages
ISBN: 9781450310567
DOI: 10.1145/2372251

Copyright © 2012 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery
New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 130 of 594 submissions, 22%
