A productivity benchmarking case study using Bayesian credible intervals

Software Quality Journal

Abstract

A productivity benchmarking case study is presented. Empirical evidence suggests that certain project factors, such as development type and language type, influence project effort and productivity; the comparison reported here takes these and other factors into account. Using these factors, the case study identifies a reasonably comparable data set drawn from a large benchmarking data repository. This data set is then compared with a small data set supplied by a company for benchmarking. The study illustrates how productivity rates can be misleading unless such factors are taken into account. Further, rather than simply quoting a ratio for the company's productivity performance against the benchmark, the study shows how confidence about the company's performance can be expressed in terms of Bayesian credible intervals for the ratio of the arithmetic means of the two data sets.
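The credible interval for the ratio of arithmetic means described above can be approximated numerically. The sketch below is illustrative only: it uses a Bayesian bootstrap (Dirichlet-weighted resampling of each data set) rather than the Gibbs-sampling approach associated with the BUGS software, and the productivity figures are invented, not taken from the study.

```python
import numpy as np

def ratio_credible_interval(company, benchmark, n_draws=10_000,
                            level=0.95, seed=1):
    """Bayesian-bootstrap credible interval for mean(company)/mean(benchmark).

    Each draw reweights the observations with Dirichlet(1, ..., 1) weights,
    giving an approximate posterior sample of each arithmetic mean.
    """
    rng = np.random.default_rng(seed)
    ratios = np.empty(n_draws)
    for i in range(n_draws):
        w_c = rng.dirichlet(np.ones(len(company)))
        w_b = rng.dirichlet(np.ones(len(benchmark)))
        ratios[i] = (w_c @ company) / (w_b @ benchmark)
    alpha = (1.0 - level) / 2.0
    return np.quantile(ratios, [alpha, 1.0 - alpha])

# Hypothetical productivity rates (e.g. function points per person-month)
company = np.array([10.2, 12.5, 9.8, 11.1, 13.0])
benchmark = np.array([8.0, 9.5, 7.2, 10.1, 8.8, 9.9, 7.5, 8.3])
lo, hi = ratio_credible_interval(company, benchmark)
```

An interval whose lower bound exceeds 1 would suggest the company's mean productivity is credibly above the benchmark; an interval straddling 1 would not support that claim.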




Corresponding author

Correspondence to John Moses.

Additional information

John Moses is a Senior Lecturer in the School of Computing at the University of Sunderland. He received a BSc (Hons) in Applied Mathematics and Computing Science from the University of Sheffield, an MSc in Operational Research (White Fish Authority prize in O.R.) from the University of Hull, and a PhD in Computer Science from the University of Sunderland in 1997. He has over seven years' experience in commercial computing, working as a systems and operational research analyst in the steel, water, plastics and printing industries, and twenty years' teaching experience, having lectured at the Universities of Teesside, Humberside and Sunderland. He is a member of the IEEE Computer Society, the British Computer Society and the Operational Research Society. His research interests are in software measurement and prediction systems for software development.

Malcolm Farrow graduated in 1976 with first class honours in Statistics at the University of Newcastle upon Tyne. He then had an industrial studentship at the University of Newcastle upon Tyne with Alcan Lynemouth Ltd and got his PhD in 1979 for work on time series problems associated with the control of aluminium reduction cells. From 1979 he was a lecturer at the University of Hull, moving to Leicester Polytechnic (now De Montfort University) in 1981 and then Sunderland in 1984. Dr. Farrow became a Chartered Statistician in 1993 when this qualification was introduced by the Royal Statistical Society. In 2005 he became a Senior Lecturer in Statistics at the University of Newcastle upon Tyne.

Norman Parrington graduated from King's College, University of London with a degree in history. This naturally led him into a career as a computer programmer and systems analyst with the UK Government. He progressed to a Master's in Computing Science at the then North Staffordshire Polytechnic (now Staffordshire University) and thereafter worked for the UK Government Procurement Agency as a compiler and operating system specialist. After three years Norman joined the UK Civil Service as a Lecturer and three years later joined the University of Sunderland (then Sunderland Polytechnic), where he has been for 22 years, currently occupying the post of Associate Dean with responsibilities including postgraduate programme development. Norman's major academic interests lie in the areas of software testing and software tools; he has had more than 50 journal and conference papers published in these areas. He has co-authored two books, "Understanding Software Testing" and "IT Training in Singapore".

Professor Peter Smith is Dean of the School of Computing and Technology at the University of Sunderland, UK. Peter joined the University of Sunderland (then Sunderland Polytechnic) as a student in 1975. He graduated in 1978 with a BSc (Hons) Combined Studies in Science, in the subjects Mathematics and Computing. After graduating, he stayed in the University to study for a PhD in Computer Simulation. He then joined the staff of the Polytechnic as a Lecturer in Computing. His research interests are in the areas of expert systems, software engineering and computers in manufacturing and he has published over 200 papers on these subjects. He is particularly interested in developing novel techniques which can be applied for the solution of real business and industrial problems. He is also author of three textbooks on Knowledge Engineering and Software Engineering. He is a Fellow of the British Computer Society, a Chartered Engineer, a Chartered Mathematician and a Fellow of the Institute of Mathematics and its Applications. Peter has spoken at conferences around the world, including several invited conference addresses. He has also managed a large number of collaborative research projects, funded by the European Union, the UK Department of Trade and Industry and several industrial partners.


Cite this article

Moses, J., Farrow, M., Parrington, N. et al. A productivity benchmarking case study using Bayesian credible intervals. Software Qual J 14, 37–52 (2006). https://doi.org/10.1007/s11219-006-6000-4
