A controlled experiment investigation of an object-oriented design heuristic for maintainability

https://doi.org/10.1016/S0164-1212(03)00240-1

Abstract

The study presented in this paper is a controlled experiment investigating the impact of a design heuristic, addressing the 'god class' problem, on the maintainability of object-oriented designs. In other words, we wish to better understand to what extent a specific design heuristic contributes to the quality of the designs developed. The experiment was conducted with undergraduate students as subjects, working on two system designs produced with the Coad & Yourdon method. The results of this study provide evidence that the investigated design heuristic: (a) affects the evolution of design structures; and (b) considerably affects the way participants apply the inheritance mechanism.

Introduction

In the last decade or more, the object-oriented (OO) paradigm has gained broad acceptance within the software community, industry, and software development organizations of all sizes. Accordingly, OO methodologies, languages, and development environments have been developed to support this technology. The paradigm was popularized mainly by C++, and now even more so by Java. Jones (1994) noted the rapid growth of OO technology. More recently, a survey carried out by Goodley (1999) indicated the continuing growth in the popularity of Java as an OO development language.

Much of the literature asserts that substantial gains, such as increased understandability, productivity, quality, ease of modification, and reuse, should accrue from using OO analysis, design, coding, and reusable components. Nevertheless, these benefits are mostly based on intuition rather than empirical evidence. Intuition may provide a starting point, but it needs to be backed up with empirical evidence. According to Basili and Burgess (1995), experimentation has shown that intuition about software is in many cases wrong. Jones (1994), too, identified a lack of empirical evidence to support the claims attributed to OO technology, such as improved productivity and quality, defect-removal efficiency, and, above all, reusability.

For this reason, over recent years there has been growing interest in empirical evaluation. In a review examining how experimentation has been carried out in OO technology (Deligiannis et al., 2002), the evidence does not support the claim that OO techniques always provide the benefits attributed to them. A case study by Hatton (1998) on corrective maintenance raised a number of concerns about whether OO technology has met its claims. A study of the performance and strategies of programmers new to OO technology found that OO concepts were not easy to learn and use quickly (Hillegersberg et al., 1995).

Clearly, more empirical research is needed to investigate these claims, examining in particular the efficacy and effectiveness with which the various OO features are applied. One area that warrants thorough investigation is the impact of design heuristics on a particular quality factor, namely maintainability. The reasons leading us to this decision are: (a) the maintenance phase is considered the most costly phase of the system life cycle (Meyer, 1997; Hatton, 1998), so investigations aiming to reduce its cost should be of particular interest to the empirical research community; and (b) design heuristics, which concentrate cumulative knowledge based mainly on experience and practice (Booch, 1995), might provide useful assistance in achieving this goal.

The study presented in this paper is a controlled experiment, carried out with students as participants, which investigates the impact of a single design heuristic on the quality of OO designs, captured by two important quality factors of design assessment, namely understandability and maintainability. The paper is structured as follows. Section 2 describes the details of the experiment. Section 3 summarizes the data collected and presents the data analysis results. Section 4 identifies and discusses possible threats to the validity of the study. Finally, Section 5 presents the conclusions and future research targets. The structure of the paper follows that of Briand et al. (2001).

Section snippets

Description of the experiment

This research builds upon the results of a previous observational study (Deligiannis et al., 2003), which investigated the impact of an OO design heuristic, namely the 'god class' problem (described in Section 2.1), on maintainability. That study also aimed at investigating the impact of a design heuristic on the maintainability of OO designs, as well as the relationship between that design heuristic and metrics, i.e. whether we are able to capture a specific design heuristic…
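To make the heuristic under study concrete, the sketch below (invented for illustration; the experimental material in this study was a Coad & Yourdon design document, not code) contrasts a 'god class' that concentrates several responsibilities with a heuristic-conforming alternative in which behaviour is distributed over small, cohesive classes:

```python
# A "god class": one class hoards the knowledge and behaviour of several
# collaborators, so most changes to the system ripple through it.
class LibrarySystem:
    def add_book(self, book): ...            # catalogue responsibility
    def register_member(self, member): ...   # membership responsibility
    def lend(self, book, member): ...        # loan responsibility
    def compute_fine(self, loan): ...        # billing responsibility
    def send_overdue_notice(self, member): ...  # notification responsibility


# Heuristic-conforming alternative: each class owns its own data and the
# behaviour that operates on it, localizing the impact of changes.
class Catalogue:
    def add_book(self, book): ...

class Loan:
    def compute_fine(self): ...

class Member:
    def send_overdue_notice(self): ...
```

The class and method names here are hypothetical; the point is only the distribution of responsibilities, which is what the heuristic constrains.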

Statistical analysis of the data

For the statistical analysis of the data we used variables related to the answers the students provided in the questionnaire and variables related to their performance in the experiment. The statistical methods used to test the hypotheses were the Student's t-test for independent samples and the non-parametric Mann–Whitney (M–W) test, applied to test for significant differences between the two groups (A and B) on each individual variable. In order to examine differences between the two…
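As a rough illustration of the two tests named above, the following sketch computes the Student's t statistic (pooled variance) and the Mann–Whitney U statistic from first principles on invented scores; these are not the study's data, and in practice a statistics package (e.g. scipy.stats) would also supply the p-values.

```python
from math import sqrt
from statistics import mean, variance  # variance() uses the n-1 denominator

def students_t(a, b):
    """Student's t statistic for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic, assigning midranks to tied values."""
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of 1-based ranks i+1..j
        i = j
    r_a = sum(ranks[x] for x in a)          # rank sum of the first sample
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)  # report the smaller U

# Hypothetical maintenance-task scores for two groups (not the study's data)
group_a = [8, 9, 7, 9, 8, 10]
group_b = [6, 7, 5, 8, 6, 7]
t = students_t(group_a, group_b)
u = mann_whitney_u(group_a, group_b)
```

The observed t and U would then be compared against the t distribution with n_a + n_b - 2 degrees of freedom and the tabulated M–W critical values, respectively.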

Threats to validity

This section discusses various threats to validity and the ways we attempted to alleviate them.

Conclusions

This study has investigated the effects of a single design heuristic on system design documents with respect to understandability and maintainability, two essential components of software quality. The study compared two designs, Design A and Design B, both developed according to design heuristics in general, apart from a small but functionally important part of Design B that violated the 'god class' heuristic. Since the difference between them was restricted to a specific part of the design, it…

Acknowledgements

We would like to thank the subjects for participating in this experimental study and the reviewers for their helpful comments on an earlier version of this paper.

References (38)

  • I. Deligiannis et al. An empirical investigation of object-oriented design heuristics for maintainability. J. Syst. Softw. (2003)
  • O. Laitenberger et al. An experimental comparison of reading techniques for defect detection in UML design documents. J. Syst. Softw. (2000)
  • Abreu, F., Melo, W., 1996. Evaluating the impact of object-oriented design on software quality. In: Proceedings of the...
  • E. Allen et al. Measuring coupling and cohesion: an information theory approach
  • J. Armstrong et al. Uses and abuses of inheritance. Softw. Eng. J. (1994)
  • T. Bar-David. Practical consequences of formal definitions of inheritance. J. Object Orient. Program. (1992)
  • V. Basili et al. Finding an experimental basis for software engineering. IEEE Softw. (1995)
  • Bieman, J., Kang, B.-K., 1995. Cohesion and reuse in an object-oriented system. In: ACM Symposium Software...
  • Bieman, J., Xia Zhao, J., 1995. Reuse Through Inheritance: A Quantitative Study of C++...
  • G. Booch. Object-Oriented Analysis and Design with Applications (1994)
  • G. Booch. Rules of Thumb. ROAD (1995)
  • L. Briand et al. A controlled experiment for evaluating quality guidelines on the maintainability of object-oriented designs. IEEE Trans. Softw. Eng. (2001)
  • M.A. Chaumun et al. A change impact model for changeability assessment in object-oriented software systems. Sci. Comput. Program. (2002)
  • S. Chidamber et al. A metrics suite for object oriented design. IEEE Trans. Softw. Eng. (1994)
  • P. Coad. OOD criteria, parts 1–3. J. Object Orient. Program. (1991)
  • P. Coad et al. Object-Oriented Analysis (1991)
  • J. Coplien. Advanced C++ (1992)
  • I. Deligiannis et al. A review of experimental investigations into object-oriented technology. Empir. Softw. Eng. J. (2002)
  • N. Fenton et al. Software Metrics: A Rigorous & Practical Approach (1997)