
A Procedure for Assessing the Influence of Problem Domain on Effort Estimation Consistency

Published in: Software Quality Journal

Abstract

Given the inherent subjectivity in defining and measuring the factors used in algorithmic effort estimation methods, it seems reasonable to assume that, when such methods produce consistent estimates, this is partly due to estimator experience. Furthermore, software development factors are usually assumed to have differing degrees of influence on actual effort. For example, the original COCOMO model and Albrecht's Function Points made no specific allowance for programming language or problem domain, whereas the allowance for development mode in COCOMO, and for function type complexity in Albrecht's Function Points, is crucial. However, work has been conducted that concluded that 4GLs are associated with higher productivity than 3GLs. Such conclusions about productivity are easy to support: for example, it usually requires less effort to develop a database using a purpose-designed DBMS product than using a 3GL. In general, though, an appropriate development language and platform will be selected for a given problem domain, so we might expect the choice of development language not to influence estimate consistency unduly, provided the estimator has experience of the problem domain. Nevertheless, algorithmic methods usually require calibration to different problem domains, perhaps because the method was originally designed using data from another type of domain. Moreover, an estimator's consistency within problem domains may be affected for one or more reasons. Intuitively, these might include: the estimator lacks estimation experience in some domains; or the development team(s) may have different levels of experience in different domains, which the estimator finds difficult to take into account. We demonstrate how, in general, the influence of problem domain may be assessed using a hierarchical Bayesian inference procedure. We also show how values can be derived to account for variations in estimate consistency across problem domains.
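The hierarchical approach described above can be illustrated with a minimal sketch. This is not the authors' model: the domain names, hyperparameter values, and log-ratio error measure are all invented for illustration. It shows the core idea of a two-level normal hierarchy, in which each problem domain has its own consistency effect that is shrunk toward a global mean, with the amount of shrinkage determined by how much data the domain contributes.

```python
# Hypothetical sketch (not the paper's model): a two-level normal
# hierarchy for effort-estimation error, with per-domain effects
# shrunk toward a global mean. All names and numbers are invented.
from math import log

def posterior_domain_mean(ys, mu, tau2, sigma2):
    """Posterior mean of a domain effect theta_d under the conjugate
    model y_i ~ N(theta_d, sigma2), theta_d ~ N(mu, tau2)."""
    n = len(ys)
    ybar = sum(ys) / n
    precision = n / sigma2 + 1.0 / tau2
    return (n / sigma2 * ybar + mu / tau2) / precision

# Log-ratio of estimated to actual effort per project, grouped by
# (hypothetical) problem domain; positive values are over-estimates.
domains = {
    "banking":  [log(1.30), log(1.10), log(1.25)],
    "telecoms": [log(0.80), log(0.95), log(0.70)],
}
mu, tau2, sigma2 = 0.0, 0.05, 0.02  # assumed hyperparameters

for name, ys in domains.items():
    theta = posterior_domain_mean(ys, mu, tau2, sigma2)
    print(f"{name}: shrunken domain effect = {theta:+.3f}")
```

A full treatment would place priors on the hyperparameters and sample the joint posterior (e.g. with Gibbs sampling, as in the BUGS software cited below), but the conjugate update here captures the shrinkage behaviour that lets sparse domains borrow strength from the whole data set.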


References

  • Albrecht, A.J. 1979. Measuring application development, Proceedings of IBM Applications Development Joint SHARE/GUIDE Symposium, Monterey, CA, pp. 83–92.

  • Albrecht, A.J. and Gaffney, J.E. 1983. Software function, source lines of code, and development effort prediction: A software science validation, IEEE Transactions on Software Engineering 9(6): 639–648.

  • Altman, D.G. 1993. Practical Statistics for Medical Research, Chapman and Hall.

  • Angelis, L., Stamelos, I. and Morisio, M. 2001. Building a software cost estimation model based on categorical data, IEEE Metrics 2001, Conference Proceedings, London, 4-6 April, pp. 4–15.

  • Boehm, B.W. 1981. Software Engineering Economics, Englewood Cliffs, NJ, Prentice-Hall.

  • Chulani, S., Boehm, B. and Steece, B. 1999. Bayesian analysis of empirical software engineering cost models, IEEE Transactions on Software Engineering 25(4): 573–583.

  • Fenton, N.E. and Pfleeger, S.L. 1997. Software Metrics: A Rigorous and Practical Approach, 2nd ed. Revised Printing, PWS Publishing Company.

  • Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. 1998. Bayesian Data Analysis, Chapman & Hall.

  • Gilks, W.R., Richardson, S. and Spiegelhalter, D.J. 1996. Markov Chain Monte Carlo in Practice, Chapman & Hall.

  • Hughes, R.T. 1996. Expert judgement as an estimating method, Information and Software Technology 38: 67–75.

  • International Software Benchmarking Standards Group. 2003. Data Repository, site: http://www.isbsg.org.au.

  • Kemerer, C.F. 1987. An empirical validation of software cost estimation models, Communications of the ACM 30(5): 416–429.

  • Kitchenham, B.A. 1992. Empirical assumptions that underlie software cost-estimation models, Information and Software Technology 34(4): 211–218.

  • Lindley, D.V. 2000. The philosophy of statistics, The Statistician 49(3): 293–337.

  • Matson, J.E., Barrett, B.E. and Mellichamp, J.M. 1994. Software development cost estimation using function points, IEEE Transactions on Software Engineering 20(4): 275–287.

  • McCullagh, P. and Nelder, J.A. 1983. Generalized Linear Models, Monographs on Statistics and Applied Probability, Chapman and Hall.

  • Miller, J. 1999. Can results from software engineering experiments be safely combined? 6th IEEE International Symposium on Software Metrics, Boca Raton, FL, 4-6 November, pp. 152–158.

  • Morris, C.N. and Normand, S.L. 1992. Hierarchical models for combining information and for meta-analysis, Bayesian Statistics 4: 321–344.

  • Moses, J. 2000. Bayesian probability distributions for assessing subjectivity in the measurement of subjective software attributes, Information and Software Technology 42(8): 533–546.

  • Moses, J. 2001. A consideration of the impact of interactions with module effects on the direct measurement of subjective software attributes, 7th IEEE Symposium on Software Metrics, London, UK, April, pp. 112–123.

  • Moses, J. and Clifford, J. 2000a. Learning how to improve effort estimation in small software development companies, COMPSAC 00, IEEE Computer Society, The 24th International Computer Software and Applications Conference, Taipei, Taiwan, 25-27 October, pp. 522–527.

  • Moses, J. and Clifford, J. 2000b. Support for effort estimation in small software companies using Bayesian inference, EuroSPI'2000, Practical and Innovation Based Software Process Improvement to Prepare for the New Millennium, Copenhagen, Denmark, 7-9 November.

  • Myers, R.H.J. 1990. Classical and Modern Regression with Applications, 2nd ed., Duxbury Advanced Series in Statistics and Decision, PWS-Kent Publishers.

  • Spiegelhalter, D.J., Thomas, A., Best, N. and Gilks, W. 1996. BUGS 0.5, Bayesian Inference Using Gibbs Sampling Manual (version ii), MRC Biostatistics Unit, Cambridge, August.

  • Symons, C.R. 1991. Software Sizing and Estimating Mk II (Function Point Analysis), Wiley.

  • Walpole, R.E. and Myers, R.H. 1993. Probability and Statistics for Engineers and Scientists, 5th ed., Prentice-Hall International.

  • Western, B. 1998. Causal heterogeneity in comparative research: A Bayesian Hierarchical modeling approach, American Journal of Political Science 42(4): 1233–1259.

Cite this article

Moses, J., Farrow, M. A Procedure for Assessing the Influence of Problem Domain on Effort Estimation Consistency. Software Quality Journal 11, 283–300 (2003). https://doi.org/10.1023/A:1025861011126
