Hard-to-use evaluation criteria for software engineering

https://doi.org/10.1016/0164-1212(81)90028-5

Abstract

Most evaluations of software tools and methodologies could be called “public relations,” because they are subjective arguments given by proponents. The need for markedly increased productivity in software development is now forcing better evaluation criteria to be used. Software engineering must begin to live up to its second name by finding quantitative measures of quality. This paper suggests some evaluation criteria that are probably too difficult to carry out, criteria that may always remain subjective. It argues that these are so important that we should keep them in mind as a balance to the hard data we can obtain and should seek to learn more about them despite the difficulty of doing so.

A historical example is presented as an illustration of the necessity of retaining subjective criteria. High-level languages and their compilers today enjoy almost universal acceptance. It will be argued that the value of this tool has never been precisely evaluated, and that if narrow measures had been applied at its inception, it would have been found wanting. This historical lesson is then applied to the problem of evaluating a novel specification and testing tool under development at the University of Maryland.


The work reported here was supported by grant no. F49620-80-C-0001-P-1 from the Air Force Office of Scientific Research.
