On the relationship between two control-flow coverage criteria: all JJ-paths and MCDC
Introduction
Measurement of coverage in software testing refers to the practice of determining the extent to which all occurrences of some feature of interest have been exercised by a given test data set. Different features lead to different coverage criteria. Henceforth in this document, coverage criteria will be regarded in the traditional sense of being program-based [8], although coverage notions can be applied to other software artefacts, such as state-based specifications [4], [9]. Despite the fact that complete coverage provides no guarantees about correctness, test experts continue to recommend coverage measurement for two reasons: (1) to identify test data sets that are inadequate, possibly as a result of testers having ‘blind spots’, and (2) to discover ‘surprises’, i.e. aspects of the implementation that do not form part of the requirements [1]. Thus, ascertaining coverage can be seen as a way of ensuring basic adequacy of test data by indicating deficiencies in testing where further investigation is needed.
Given the existence of a variety of coverage criteria, the ‘subsumes’ relationship provides one mechanism for comparing them. A criterion C1 is said to subsume criterion C2 if, whenever C1 is satisfied, C2 is also satisfied. Such subsumption is often denoted using the logical implication symbol, i.e. C1⇒C2. If neither C1 nor C2 subsumes the other, they are said to be incomparable.
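The definition of subsumption quantifies over every test set that satisfies C1, so for a finite toy setting it can be checked exhaustively. The sketch below is purely illustrative (the miniature program, test names and element sets are assumptions, not from the paper): each criterion is modelled as a predicate over test suites, and C1 ⇒ C2 is verified by enumerating all suites.

```python
from itertools import chain, combinations

# Toy model (illustrative assumption): each test exercises a set of
# statements and a set of branches of a small hypothetical program.
tests = {
    "t1": ({"s1", "s2"}, {"b_true"}),
    "t2": ({"s1", "s3"}, {"b_false"}),
    "t3": ({"s2", "s3"}, {"b_true"}),
}
ALL_STATEMENTS = {"s1", "s2", "s3"}
ALL_BRANCHES = {"b_true", "b_false"}

def covers(suite, idx, required):
    # A suite satisfies a criterion when the union of what its tests
    # exercise includes every required element.
    exercised = set().union(*(tests[t][idx] for t in suite))
    return required <= exercised

stmt_cov = lambda suite: covers(suite, 0, ALL_STATEMENTS)
branch_cov = lambda suite: covers(suite, 1, ALL_BRANCHES)

def subsumes(c1, c2):
    """C1 subsumes C2 iff every test suite satisfying C1 also satisfies C2."""
    suites = chain.from_iterable(
        combinations(tests, r) for r in range(len(tests) + 1))
    return all(c2(s) for s in suites if c1(s))

assert subsumes(branch_cov, stmt_cov)      # branch coverage => statement coverage here
assert not subsumes(stmt_cov, branch_cov)  # {t1, t3} covers all statements but not b_false
```

Since neither direction of the second check holds, this toy instance also shows how incomparability would be witnessed: a suite satisfying one criterion but not the other.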
The focus in this short note is on two particular program coverage criteria whose relationship was hitherto unknown, but which might have been suspected of being related since they are both defined in terms of control flow. One is the ‘all jump-to-jump paths’ (all JJ-paths) criterion, originally introduced as coverage of ‘linear code sequence and jumps’ (LCSAJs) [15]; the other is ‘modified condition/decision coverage’ (MCDC) [2], which forms part of the DO-178B standard for avionics software [12]. It is shown that, in general, ‘all JJ-paths’ and MCDC are incomparable but that, for programs written under certain specific constraints, ‘all JJ-paths’ subsumes MCDC. The next section provides the necessary background by describing the terminology, the two criteria, and related work. Section 3 provides the main results concerning the comparison of the two criteria and the final section provides a brief discussion and some conclusions.
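MCDC's core requirement, that each condition be shown to independently affect the outcome of its decision, can be made concrete for a small decision by exhaustively searching for 'independence pairs' of test vectors. The function and the three-condition decision below are an illustrative sketch (unique-cause form), not taken from the paper.

```python
from itertools import product

def mcdc_pairs(decision, n):
    """Find unique-cause MCDC independence pairs for a decision over n
    boolean conditions: pairs of test vectors that differ in exactly one
    condition and yield different decision outcomes, demonstrating that
    the flipped condition independently affects the decision."""
    pairs = {}
    for t in product([False, True], repeat=n):
        for i in range(n):
            u = list(t)
            u[i] = not u[i]          # flip only condition i
            u = tuple(u)
            if decision(*t) != decision(*u):
                pairs.setdefault(i, []).append((t, u))
    return pairs

# Hypothetical three-condition decision: A and (B or C)
dec = lambda a, b, c: a and (b or c)
pairs = mcdc_pairs(dec, 3)

# MCDC demands at least one independence pair per condition; here every
# condition (index 0, 1, 2) has one, so MCDC is achievable by choosing a
# small test set from these pairs.
assert set(pairs) == {0, 1, 2}
```

For a decision with n conditions, a well-chosen selection from such pairs typically yields MCDC with as few as n + 1 tests, which is what makes the criterion practical for the multi-condition decisions common in avionics code.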
Section snippets
Background and related work
This section introduces the relevant notation and terminology before providing descriptions of the two coverage criteria that are the focus of interest in this paper, namely ‘all JJ-paths’ and MCDC. Selected related work on subsumption is also described.
The relationship between all JJ-paths and MCDC
Incomparability in the general case is considered first, followed by consideration of the circumstances under which ‘all JJ-paths’ subsumes MCDC.
Discussion and conclusions
This paper has considered the relationship between two coverage criteria: ‘all JJ-paths’ and modified condition/decision coverage. Both criteria are based on relatively sophisticated control-flow concepts and one might have expected them to be connected in some way. However, it has been shown by means of an example that neither criterion subsumes the other, in general. Notwithstanding this result, it has also been proved that for programs where every decision forms part of a branching...
References (18)
R.V. Binder, Testing Object-Oriented Systems: Models, Patterns, and Tools (2000)
J.J. Chilenski et al., Applicability of modified condition/decision coverage to software testing, Software Engineering Journal (1994)
J.J. Chilenski, An investigation of three forms of the Modified Condition Decision Coverage (MCDC) criterion, FAA...
S. Fujiwara et al., Test selection based on finite state models, IEEE Transactions on Software Engineering (1991)
R. Hamlet, Theoretical comparison of testing methods, Proceedings of the ACM SIGSOFT 1989 Third Symposium on Software...
K. Kapoor et al., Experimental evaluation of the tolerance for control-flow test criteria, Software Testing, Verification and Reliability (2004)
K. Kapoor et al., A formal analysis of MCDC and RCDC test criteria, Software Testing, Verification and Reliability (2005)
G.J. Myers, The Art of Software Testing (1979)
J. Offutt et al., Generating test data from state-based specifications, Software Testing, Verification and Reliability (2003)