
Experimental designs for validating metrics and applying them across multiple projects

  • Conference paper
Experimental Software Engineering Issues: Critical Assessment and Future Directions

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 706)

Abstract

We explain why it is important to validate metrics for use on multiple projects and how the risk of doing so can be assessed. We also point out that by prototyping the measurement plan on a project, we can eliminate the risk inherent in the process of validating metrics on one project and applying them on another project, where the two projects may have significant differences in application and development environment characteristics. In order to support the application of metrics across multiple projects, there must be reuse of methodologies, metrics, and metrics processes. A metrics methodology is reusable when there is a process associated with the methodology that can be applied across projects. Both the metrics and the process for applying metrics must be reusable. Metrics are reusable when validated metrics can be applied across projects, using the methodology. An example is given of assessing risk by using the confidence limits of a metric to evaluate the consequences of best-case and worst-case outcomes of using metrics on multiple projects.
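The abstract's closing example, bounding the best-case and worst-case consequences of reusing a metric, can be illustrated with a small sketch. This is not the paper's actual procedure; it is a minimal illustration, assuming a normal approximation for the confidence limits and using hypothetical fault-density data for the validation project.

```python
# Hedged sketch of the idea in the abstract: compute confidence limits
# for a metric validated on one project, then read the lower and upper
# limits as best-case and worst-case outcomes when the metric is
# applied to another project.  The normal approximation and the data
# below are assumptions, not taken from the paper.
from statistics import NormalDist, mean, stdev

def confidence_limits(samples, confidence=0.95):
    """Two-sided confidence interval for the mean of a metric
    (normal approximation; the paper may use t-based limits)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    m = mean(samples)
    half_width = z * stdev(samples) / len(samples) ** 0.5
    return m - half_width, m + half_width

# Hypothetical fault-density measurements (faults/KLOC) collected while
# validating the metric on a single project:
fault_density = [2.1, 3.4, 2.8, 3.0, 2.5, 3.7, 2.9, 3.2]

low, high = confidence_limits(fault_density)
# Best case: the new project's true value lies near `low`; worst case:
# near `high`.  The width of the interval quantifies the risk taken on
# by applying the metric outside the project where it was validated.
print(f"best case ~ {low:.2f}, worst case ~ {high:.2f}")
```

The wider the interval, the greater the difference between the application and development environments the plan must tolerate, which is the risk the paper proposes to assess before reusing a metric.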




Editor information

H. Dieter Rombach, Victor R. Basili, Richard W. Selby


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg


Cite this paper

Schneidewind, N.F. (1993). Experimental designs for validating metrics and applying them across multiple projects. In: Rombach, H.D., Basili, V.R., Selby, R.W. (eds) Experimental Software Engineering Issues: Critical Assessment and Future Directions. Lecture Notes in Computer Science, vol 706. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57092-6_129


  • DOI: https://doi.org/10.1007/3-540-57092-6_129

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57092-9

  • Online ISBN: 978-3-540-47903-1

  • eBook Packages: Springer Book Archive
