Software measurement and experimentation frameworks, mechanisms, and infrastructure

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 706)

Abstract

Software measurement and experimentation provide a cross-cutting foundation for software understanding, analysis, evaluation, and improvement. Effective measurement and experimentation require that a variety of issues be addressed, ranging from goal specification to metric definition to data interpretation. This paper focuses on a subset of the measurement and experimentation issues related to frameworks, mechanisms, and infrastructure. In particular, the paper highlights research issues or results in the following areas: frameworks for measurement and experimentation, existing measures, determining appropriate measures, data collection, experimental designs, and infrastructure for measurement.

This work was supported in part by the Defense Advanced Research Projects Agency under grant MDA972-91-J-1010; National Science Foundation under grant CCR-8704311 with cooperation from the Defense Advanced Research Projects Agency under ARPA order 6108, program code 7T10; National Aeronautics and Space Administration under grant NSG-5123; National Science Foundation under grant DCR-8521398; University of California under the MICRO program; Computer Sciences Corporation; and TRW.



Author information

Richard W. Selby

Editor information

H. Dieter Rombach, Victor R. Basili, Richard W. Selby


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Selby, R.W. (1993). Software measurement and experimentation frameworks, mechanisms, and infrastructure. In: Rombach, H.D., Basili, V.R., Selby, R.W. (eds) Experimental Software Engineering Issues: Critical Assessment and Future Directions. Lecture Notes in Computer Science, vol 706. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57092-6_106

  • DOI: https://doi.org/10.1007/3-540-57092-6_106

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57092-9

  • Online ISBN: 978-3-540-47903-1

  • eBook Packages: Springer Book Archive
