Abstract
This paper describes and theoretically discusses an approach intended to improve client confidence in estimates produced using algorithmic software estimation methods, such as Function Point or COCOMO estimation. The approach uses Bayes' Theorem and Bayesian inference. The underlying theory has been applied successfully in other arenas of subjective measurement to improve measurement consistency. It is also proposed that the approach can improve consistency when estimating the effort required to develop software development artefacts (e.g. project effort, milestone effort, requirements changes). Using Bayesian inference, software developers can also measure the uncertainty in their estimates of artefacts. Outsourcers can use the approach to provide client companies with statements about the confidence they have in their estimates, and these statements can then assist both outsourcers and clients during project negotiations. Examples show how the method can be used to measure estimate uncertainty, how estimators can be supported during their estimation procedures, and what kinds of statement can be made to aid project negotiations.
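The paper's own Bayesian model is not reproduced here; as a loose illustration of the general idea, the following sketch applies a textbook conjugate Normal–Normal update to a project-effort estimate. The prior (say, from an algorithmic model such as COCOMO), the assumed observation noise, and all numbers are hypothetical, chosen only to show how a posterior and a credible interval could back a statement of confidence.

```python
import math

def posterior(prior_mean, prior_sd, observations, obs_sd):
    """Conjugate Normal-Normal update: combine a prior effort estimate
    with independent expert estimates (each assumed Normal with known
    standard deviation obs_sd). Returns posterior mean and sd."""
    prior_prec = 1.0 / prior_sd ** 2          # precision of the prior
    obs_prec = len(observations) / obs_sd ** 2  # total precision of the data
    post_prec = prior_prec + obs_prec
    post_mean = (prior_mean * prior_prec
                 + sum(observations) / obs_sd ** 2) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Hypothetical figures: an algorithmic model suggests 120 person-days
# (sd 30); three expert estimates of 100, 110 and 95 person-days
# (each assumed sd 20).
mean, sd = posterior(120.0, 30.0, [100.0, 110.0, 95.0], 20.0)
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd   # approx. 95% credible interval
print(f"posterior effort: {mean:.1f} person-days "
      f"(95% credible interval {lo:.1f} to {hi:.1f})")
```

A confidence statement of the kind the abstract mentions could then be phrased directly from the interval, e.g. "we are about 95% confident the effort lies between the two bounds", with the caveat that this sketch assumes known variances rather than inferring them.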
Cite this article
Moses, J. Measuring Effort Estimation Uncertainty to Improve Client Confidence. Software Quality Journal 10, 135–148 (2002). https://doi.org/10.1023/A:1020523923715