A comparative evaluation of CASE tools

https://doi.org/10.1016/S0164-1212(98)10046-8

Abstract

CASE tools are complex software products offering many different features. Systems professionals evaluated various CASE products on the basis of their features and attributes. Each product has a different mix of strengths and weaknesses as perceived by the end user. Specific CASE tools support different steps of the applications development process as well as varying methodologies. The complexity of software development and the diversity of tools and features lead to several questions. Which CASE features are being used by systems developers? What areas of CASE tools need improvement? Are some CASE tools or product attributes considered better than others?

Introduction

In managing the complexity of large software projects, many organizations have turned to Computer-Aided Software Engineering (CASE) tools. In recent years, CASE tools have been improved and have added features beyond the original basic concepts of data flow diagramming, entity-relationship diagramming, and data dictionaries. For example, with the advent of rapid application development (RAD), CASE tool manufacturers added prototyping and code generation facilities. Various CASE tools now support several different aspects of systems development, including but not limited to: requirements elicitation and analysis, design and communication, enforcement of standards and methodologies, prototyping and RAD, reverse engineering, and software maintenance and re-engineering.

These additional features have increased the complexity of learning and using CASE tools. With hundreds of options and support for multiple facets of software development, the tools have become increasingly difficult to evaluate. Kemerer (1992) concludes that CASE tools see limited use by systems professionals partly because of the complexity of the tools themselves. Iivari (1996) supports this argument, finding that a substantial factor in reduced CASE usage is users' perception that the tool is complicated to work with.

Various studies have concluded that CASE tool usage improves software development productivity and is a positive factor for systems development (Banker and Kauffman, 1991; Orlikowski, 1993; Finlay and Mitchell, 1994). The implication is that CASE tools can improve the development process.

There is anecdotal evidence and a few studies on CASE usage and its effects on business process re-engineering. Yet several questions remain unanswered regarding the use of CASE tools. What tools and features are needed and used by systems analysts? Are there significant differences between the primary CASE tools? Do some tools provide better support for different stages of development?

This research has two basic objectives: (1) to determine which features are the most important to CASE users and (2) to evaluate existing CASE tools in terms of the effectiveness of those features. To meet these objectives, CASE users in several large organizations were surveyed about their usage of and experiences with CASE tools.

Section snippets

Existing studies

CASE tools have been described and studied from several perspectives. A key contribution of earlier research is the identification of the CASE features and attributes that need further investigation.

An early study by Norman and Nunamaker (1989) identified several basic uses of CASE tools: data flow diagrams, data dictionary, project standardization, screen/report design, presentation graphics, analysis, communication, import/export facilities, platform support and network capabilities. They also...

Methodology and model

To identify the important attributes and compare the features and usage of CASE tools, a survey of systems developers who use the tools was undertaken. The first step was to build a survey instrument that could be properly analyzed. Development of the model began by identifying the basic features of CASE tools.

Based on early studies, interviews with software engineers, and examination of CASE tools, six categories of features were identified, each with several detailed attributes. The six...

Results

There are two basic questions to be answered: (1) Which attributes are important to CASE users? (2) How do users rate existing products in terms of these attributes? Results are discussed in terms of these two questions and are presented in Tables 3 through 13. Before the results are analyzed, the survey respondents and the reliability of the survey instrument are discussed.
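Because Cronbach (1951) appears in the reference list, coefficient alpha is presumably the reliability statistic reported for the survey instrument. The following is a minimal Python sketch, not the authors' computation, showing how alpha could be calculated for a block of rating-scale items; the respondent data and item count are hypothetical.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Coefficient alpha for a (respondents x items) matrix of ratings."""
        k = items.shape[1]                         # number of items in the scale
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical example: 5 respondents rating 4 attributes on a 1-7 scale.
    ratings = np.array([
        [5, 6, 5, 7],
        [4, 4, 5, 5],
        [6, 7, 6, 7],
        [3, 4, 3, 4],
        [5, 5, 6, 6],
    ])
    print(round(cronbach_alpha(ratings), 3))  # approximately 0.95 for this made-up data

A value of roughly 0.7 or above is conventionally taken to indicate acceptable internal consistency for a set of survey items.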

CASE product evaluation

T-test: An objective of this study was to investigate how respondents rated each CASE tool. One method is to examine the ratings and identify products with attributes rated significantly differently from the mean (more than two standard deviations away). The attributes meeting this criterion are marked in the first data column of Tables 4 through 9. Significant differences are marked with the first letter of the product, with a plus sign if the rating is above the mean.
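As an illustration of this kind of screening (not the authors' exact procedure), the sketch below runs a one-sample t-test of each product's respondent ratings on a single attribute against the grand mean of that attribute across all products, and produces marks in the form described above. The product names (Alpha, Beta, Gamma), the ratings, and the 0.05 significance threshold are illustrative assumptions.

    import numpy as np
    from scipy import stats

    def mark_significant(ratings: dict[str, np.ndarray], sig_level: float = 0.05) -> list[str]:
        """Flag products whose mean rating on one attribute differs
        significantly from the grand mean over all products.

        ratings maps product name -> array of individual user ratings.
        Returns marks such as 'A+' (first letter of the product,
        '+' if rated above the grand mean, '-' if below).
        """
        grand_mean = np.concatenate(list(ratings.values())).mean()
        marks = []
        for product, scores in ratings.items():
            t_stat, p_value = stats.ttest_1samp(scores, grand_mean)
            if p_value < sig_level:
                marks.append(product[0] + ("+" if scores.mean() > grand_mean else "-"))
        return marks

    # Hypothetical 1-7 ratings of one attribute for three made-up products.
    example = {
        "Alpha": np.array([6, 7, 6, 5, 7, 6]),
        "Beta":  np.array([4, 3, 5, 4, 4, 3]),
        "Gamma": np.array([5, 4, 5, 5, 4, 5]),
    }
    print(mark_significant(example))  # ['A+', 'B-'] for this made-up data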

Conclusions

An important conclusion is that there are two distinct uses of CASE tools: (1) design tasks and (2) generation of prototypes or working systems. Existing CASE tools tend to emphasize one of these uses, and CASE users choose a tool and evaluate its detailed attributes based on their intended use.

Of the two tasks, generation of code and prototypes is considered the more important. Since most tools handle graphics and data dictionary attributes relatively well, complex prototyping and...

References (21)

  • Banker, R.D., et al., 1991. Reuse and productivity in integrated computer aided software engineering. MIS Quarterly.
  • Chapman, R.G., et al., 1982. Exploiting rank ordered choice set data within the stochastic utility model. Journal of Marketing Research.
  • Cronbach, L.J., 1951. Coefficient alpha and the internal structure of tests. Psychometrika.
  • Finlay, P.N., et al., 1994. Perceptions of the benefits from introduction of CASE: An empirical study. MIS Quarterly.
  • Forte, G., et al., 1992. A self-assessment by the software engineering community. Communications of the ACM.
  • Green, P.E., 1984. Hybrid models for conjoint analysis: An expository review. Journal of Marketing Research.
  • Green, P.E., et al., 1978. Conjoint analysis in consumer research: Issues and outlook. Journal of Consumer Research.
  • Green, P.E., Srinivasan, V., 1990. Conjoint analysis in marketing: New developments with implications for research and...
  • Huff, C.C., 1992. Elements of a realistic CASE tool adoption budget. Communications of the ACM.
  • Iivari, J., 1996. Why are CASE tools not used? Communications of the ACM.
There are more references available in the full text version of this article.

Cited by (8)

  • A tool for visual and formal modelling of software designs

    2015, Science of Computer Programming
    Citation Excerpt:

    Post et al. [80] compare different CASE tools to elicit their most important features from a user perspective. Usability is considered, but only from a broad perspective. [80] concludes that there are two distinct uses of case tools, design and code generation or prototyping, and that CASE tools tend to emphasise one or the other; according to [80], code generation or prototyping is seen as more important.

  • User requirements for OO CASE tools

    2001, Information and Software Technology
  • Evaluating the usability of unity game engine from developers' perspective

    2019, 11th IEEE International Conference on Application of Information and Communication Technologies, AICT 2017 - Proceedings
  • Design features for online examination software

    2012, Decision Sciences Journal of Innovative Education
  • A case study on usage of a software process management tool in China

    2010, Proceedings - Asia-Pacific Software Engineering Conference, APSEC

Gerald V. Post is a Professor of MIS at Western Kentucky University. His research interests include evaluation models for information systems, evaluation of developmental tools, and computer security. Some of his recent papers have appeared in Decision Sciences, JMIS, Information Systems Journal, Journal of Marketing Research, and Marketing Letters. Dr. Post has written an MIS textbook published by Richard D. Irwin.

Albert Kagan is a Professor and Director of NFAPP at Arizona State University. His research interests include strategic information systems, banking information systems, and business CASE applications. His recent papers have appeared in Journal of Applied Business Research, JMIS, Journal of Marketing Research, Omega: International Journal of Management Science, Journal of Systems and Software, Information Systems Journal, and Applied Economics, among others.

Robert T. Keim is an Associate Professor of Information Systems in the School of Accountancy and Information Management at Arizona State University. His research interests include analysis and design of information systems, electronic commerce and information systems architecture. He has recently published articles in Behaviour & Information Technology, Corporate Computing, and the Journal of Systems Management. Dr. Keim is also the author of a textbook on business computer usage.
