Abstract
Assessments are a proven and widely used method to measure an organisation's software process strengths and weaknesses, and they help determine where to start software process improvement programmes. However, an assessment relies on information internal to an organisation and does not compare its processes with those of competitors. Benchmarking is a way to compare one's practices with those of other organisations, and such comparisons reflect current best practices within industry. Combined with assessment results, benchmarking can therefore indicate which processes to improve based on industry assessment data. In this paper we present initial benchmarking results using data from the SPICE (Software Process Improvement and Capability dEtermination) Trials. To obtain the results, we applied an analysis technique called OSR (Optimised Set Reduction), which is well suited to finding patterns in a database and deriving interpretable models. We describe the types of benchmarks that will be produced for the SPICE Trials participants and how they can be used for process improvement. Finally, we describe how to integrate benchmarking into an assessment method.
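To give a flavour of the kind of interpretable pattern extraction the abstract refers to, the sketch below uses a CART-style decision tree (via scikit-learn) as a stand-in rather than the authors' OSR algorithm, whose details are given in the cited Briand et al. papers. The process names, capability ratings, and project outcomes are entirely synthetic and chosen only for illustration.

```python
# Minimal sketch (not the authors' OSR implementation): a CART-style decision
# tree used as a stand-in for deriving interpretable patterns from process
# capability ratings. All data below is synthetic and illustrative only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical capability levels (0-5) for three processes per organisation.
process_names = ["ENG.3_capability", "MAN.2_capability", "SUP.1_capability"]
ratings = [
    [1, 2, 1],
    [3, 3, 2],
    [4, 3, 3],
    [2, 1, 1],
    [4, 4, 3],
    [1, 1, 2],
    [3, 4, 4],
    [2, 2, 3],
]
# Hypothetical project outcome: 1 = delivered within budget, 0 = overrun.
outcome = [0, 1, 1, 0, 1, 0, 1, 0]

# Fit a shallow tree so the extracted patterns stay readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(ratings, outcome)

# Print the learned rules as human-readable conditions on capability levels.
print(export_text(tree, feature_names=process_names))
```

The printed rules (e.g. thresholds on individual process capability levels) are the kind of interpretable model that, over real assessment data, could serve as a benchmark of which process capabilities distinguish better-performing organisations.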
References
ISO/IEC: ISO/IEC TR 15504-2: Information Technology - Software Process Assessment - Part 2: A Reference Model for Processes and Process Capability. Technical Report type 2, International Organisation for Standardisation, Case Postale 56, CH-1211 Geneva, Switzerland (1998)
Beitz, A., El-Emam, K., Järvinen, J.: A Business Focus to Assessments. In: SPI 1999 Conference, Barcelona, November 30 - December 3 (1999)
Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth & Brooks/Cole Advanced Books & Software (1984)
Briand, L.C., Basili, V., Thomas, W.M.: A Pattern Recognition Approach for Software Engineering Data Analysis. IEEE Transactions on Software Engineering 18(11) (November 1992)
Briand, L.C., Basili, V., Hetmanski, C.: Developing Interpretable Models with Optimized Set Reduction for Identifying High-Risk Software Components. IEEE Transactions on Software Engineering 19(11) (November 1993)
El Emam, K., Drouin, J.-N., Melo, W.: SPICE: The Theory and Practice of Software Process Improvement and Capability Determination. IEEE Computer Society (1998)
SPICE Project Trials Team: Phase 2 Trials Interim Report (June 1998), http://www.iese.fhg.de/SPICE/Trials/p2rp100pub.pdf
Shepperd, M., Schofield, C.: Estimating Software Project Effort Using Analogies. IEEE Transactions on Software Engineering 23(12), 736–743 (1997)
Weiss, S.M., Kulikowski, C.A.: Computer Systems that Learn. Morgan Kaufmann, San Francisco (1991)
Zairi, M.: Benchmarking for Best Practice: Continuous Learning through Sustainable Innovation. Reed Educational and Professional Publishing, Glasgow (1996)
Copyright information
© 2000 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Beitz, A., Wieczorek, I. (2000). Applying Benchmarking to Learn from Best Practices. In: Bomarius, F., Oivo, M. (eds) Product Focused Software Process Improvement. PROFES 2000. Lecture Notes in Computer Science, vol 1840. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45051-1_9
DOI: https://doi.org/10.1007/978-3-540-45051-1_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-67688-1
Online ISBN: 978-3-540-45051-1