A language-independent approach to black-box testing using Erlang as test specification language

https://doi.org/10.1016/j.jss.2013.07.021

Highlights

  • This paper presents a black-box approach to interface testing.

  • The functional specification of a component's API is written using an abstract language and used to automatically generate and run tests.

  • The methodology is illustrated using a real case study that reveals the potential of the technique.

  • A single test specification was used to test several implementations written in different programming languages, revealing several errors that had been missed in previous, more traditional testing activities.

  • We create a reusable framework to test integration APIs of other components, the first stage in creating a complete testing framework.

Abstract

Integration of reused, well-designed components and subsystems is a common practice in software development. Hence, testing integration interfaces is a key activity, and a whole range of technical challenges arise from the complexity and versatility of such components.

In this paper, we present a methodology to fully test different implementations of a software component integration API. More precisely, we propose a black-box testing approach, based on the use of QuickCheck and inspired by the TTCN-3 test architecture, to specify and test the expected behavior of a component. We have used a real-world multimedia content management system as a case study. This system offers the same integration API for different technologies: Java, Erlang and HTTP/XML. Using our method, we have tested all integration API implementations using the same test specification, increasing the confidence in its interoperability and reusability.

Introduction

Modular software systems are usually composed of integrated components. Integration and reuse of software components is possible because each component provides an interface, also called an API (Application Programming Interface), which allows others to access its functionality. A complete usage protocol specification is needed to describe the API: which operations the API provides, the types of the parameters of each operation and their return types, the order in which the operations must be executed, the conditions under which they may be invoked, etc.

Testing such API implementations is a very important task in the development process of complex systems. If the implementation of the specification is incorrect, component integration will fail, causing the system to malfunction. In addition, both specifications and implementations evolve frequently, and it is not uncommon for several implementations of a single API specification to co-exist. This calls for a suitable test approach that helps testers verify integration API implementations.

Unlike other stages of the software development cycle, such as analysis, design, or implementation, in which practitioners rely on a set of well-known tools and methodologies, there is a lack of effective test strategies and enabling technologies that have made a significant impact in industry. Therefore, the main motivation for this work is the definition of a methodology that establishes the steps and technologies that can be used to improve, in particular, the testing of integration API implementations.

Our proposal is to use the API specification, instead of the API implementation, to generate meaningful test cases through property-based testing (PBT: Derrick et al., 2010). PBT provides a powerful, high-level approach to testing: rather than encapsulating the behavior of a system in individual test cases, the behavior is specified using properties, expressed in a logical form (Derrick et al., 2010). This entails an important reduction in the lines of code of the corresponding test specification, i.e., the code manually written to test a system, compared to more traditional approaches in which test cases are written by hand.
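As a minimal illustration of the style (a textbook example of ours, not drawn from the paper), consider a QuickCheck property stating that reversing a list twice is the identity:

```erlang
%% A classic QuickCheck property: ?FORALL draws random inputs from the
%% generator list(int()), and the boolean body must hold for every case.
-module(prop_example).
-include_lib("eqc/include/eqc.hrl").
-export([prop_reverse/0]).

prop_reverse() ->
    ?FORALL(Xs, list(int()),
            lists:reverse(lists:reverse(Xs)) =:= Xs).

%% eqc:quickcheck(prop_example:prop_reverse()) generates and runs
%% 100 random test cases by default.
```

One such property replaces an arbitrary number of hand-written example-based test cases, which is the source of the reduction in test-specification size mentioned above.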

The proposed architecture is inspired by the Testing and Test Control Notation Version 3 (TTCN-3: Willcock et al., 2005, ETSI ES 201 873-1, 2018), which allows us to use the same test specification to test different implementations of the same API specification. However, unlike TTCN-3, we use properties, from which test cases are automatically generated, instead of writing the test suite manually. Properties are written using a functional programming language, specifically Erlang (Armstrong et al., 1996), and an existing property-based testing tool, QuickCheck (Claessen and Hughes, 2000, Arts et al., 2006) is used to generate and execute test cases.

The use of Erlang as test specification language is motivated by its capabilities to write concise and readable properties and models. Besides being a powerful, expressive, high-level programming language, Erlang integrates very well with other programming languages, such as C or Java. This integration allows for the use of Erlang as test specification language even if the System Under Test (SUT) is implemented in a different programming language. This helps us to achieve the goal of using a unique test specification language regardless of the programming language in which the SUT is implemented, avoiding the use of different languages to specify the system behavior to test. In addition, existing PBT libraries and tools available for Erlang, in particular QuickCheck (Claessen and Hughes, 2000, Arts et al., 2006), can be used to ease the task of defining test specifications, and to automatically generate and execute test cases. On the other hand, the TTCN-3 architecture is perfectly suited to our approach, because it splits the test code into an Abstract Test Suite (ATS) and some adapters that connect the abstract test suite with the SUT. Thus, with this approach, we are able to use the same test specification (in our case, properties), to test different implementations (our set of SUTs) of the same API specification.
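The ATS/adapter split can be pictured as an Erlang behaviour that every implementation-specific adapter must implement. The following is a sketch with names of our own choosing, not the paper's actual interface:

```erlang
%% Sketch of an adapter contract separating the abstract test suite from
%% the SUT, in the spirit of TTCN-3. Each implementation under test
%% (Java, HTTP/XML, Erlang) would provide its own module implementing
%% these callbacks; the properties never talk to the SUT directly.
-module(sut_adapter).
-callback connect(Options :: proplists:proplist()) ->
              {ok, Connection :: term()} | {error, Reason :: term()}.
-callback call(Connection :: term(), Operation :: atom(), Args :: [term()]) ->
              {ok, Result :: term()} | {error, Reason :: term()}.
-callback disconnect(Connection :: term()) -> ok.
```

Under this contract, a Java adapter might reach the SUT through Erlang's Java interoperability support, while an HTTP/XML adapter would wrap an HTTP client; the abstract properties remain unchanged across all of them.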

We have used VoDKA Asset Manager, a content management component that provides an integration API for different programming languages (Java, HTTP/XML, and Erlang), as a case study for our test strategy. This case study shows how the test approach is used with a real-world application, illustrating how all the implementations of the integration API are tested using the same test specification. In the process, we found and fixed several errors in our case study, previously overlooked by manual testing. We believe the results obtained can be generalized to testing other integration API implementations.

As a result, we contribute a test methodology, especially oriented to test different implementations of an integration API: a black-box approach that uses properties to describe the API specification in an abstract way. This methodology has two strong advantages:

  • The use of abstract properties avoids dependencies on the API implementations, minimizing the effort needed to test their functional requirements and the functional equivalence between them.

  • The maintenance of the test specification (i.e., the properties that describe the SUT) is significantly simplified. Apart from writing only one test specification per API specification (instead of one for each API implementation to test), not all kinds of changes in the SUT implementation imply changes in the test specification. Thus, we update the properties only if the API specification changes.

We present an implementation of this methodology using QuickCheck, which allows us to automatically generate and run concrete test cases from the abstract properties. This is the first step in the creation of a reusable framework to test integration APIs, and we illustrate the process in the light of our case study.

The rest of the paper is organized as follows. Section 2 presents our case study, describes the problem, and analyzes different approaches that could be used for the same purpose, such as TTCN-3. Section 3 explains our proposal in detail. Section 4 summarizes the results and limitations of the approach. Finally, Section 5 presents conclusions and future lines of work.

Section snippets

Case study: VoDKA Asset Manager

The motivation for this article emerges from the need to test the implementations of VoDKA Asset Manager integration API, a component to manage meta-information on multimedia contents (also known as assets), which are stored in VoDKA (Gulías et al., 2005), a video-on-demand server. Stored meta-information may be, for instance: title, summary, genre, parental rating, etc. In addition to storing meta-information, VoDKA Asset Manager offers search of assets using different criteria, update of

Description of the test approach

In this section we describe how to obtain an executable test suite from the VoDKA Asset Manager API specification following our black-box PBT approach (cf. Fig. 3).
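To give a flavor of how such an executable test suite for a stateful API can look, the following is a hypothetical sketch of a QuickCheck state-machine model; the operation names (add_asset, search) and the sut_adapter module are our own illustration, not the paper's actual model:

```erlang
%% Hypothetical sketch of a QuickCheck (eqc_statem) model for an
%% asset-manager-like API. The model state is the list of assets we
%% expect the SUT to hold; postconditions compare SUT answers to it.
-module(asset_model).
-include_lib("eqc/include/eqc.hrl").
-include_lib("eqc/include/eqc_statem.hrl").
-compile(export_all).

initial_state() -> [].                 %% no assets stored yet

%% Generators producing abstract API calls with random data.
command(_S) ->
    oneof([{call, ?MODULE, add_asset, [title(), genre()]},
           {call, ?MODULE, search, [genre()]}]).

title() -> non_empty(list(choose($a, $z))).
genre() -> elements(["drama", "comedy", "news"]).

%% In the real framework these would go through the language-specific
%% adapter; sut_adapter is a placeholder name.
add_asset(Title, Genre) -> sut_adapter:call(add_asset, [Title, Genre]).
search(Genre)           -> sut_adapter:call(search, [Genre]).

next_state(S, _Res, {call, _, add_asset, [Title, Genre]}) ->
    [{Title, Genre} | S];
next_state(S, _Res, {call, _, search, _}) -> S.

%% The SUT's answers must agree with the model's view of the state.
postcondition(S, {call, _, search, [Genre]}, Res) ->
    lists:sort(Res) =:= lists:sort([T || {T, G} <- S, G =:= Genre]);
postcondition(_S, {call, _, add_asset, _}, Res) -> Res =:= ok.

prop_asset_manager() ->
    ?FORALL(Cmds, commands(?MODULE),
            begin
                {_History, _State, Result} = run_commands(?MODULE, Cmds),
                Result =:= ok
            end).
```

QuickCheck then generates random command sequences from this model, runs them against whichever adapter is plugged in, and shrinks any failing sequence to a minimal counterexample.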

Discussion

Our approach was able to detect faults in components currently in use in real deployments. The use of an abstract specification of the SUT, independent from the structure and the implementation language of the SUT, allowed us to test three different implementations of the same functional specification with one single test specification, avoiding different interpretations of the component requirements to test.

With a more traditional, manual, non-TTCN-3 based approach, it is likely that three

Conclusions and future work

This paper presents a black-box approach to interface testing in which, from a functional specification of a component's API described using an abstract language (independent from the implementation language), test sets are generated and executed to check if implementations of such API comply with its functional specification.

The approach is illustrated using a real case study that reveals the potential of this technique. We were able to test several implementations, each one written in a

Acknowledgments

We are forever thankful to Dr. Víctor M. Gulías from the University of A Coruña, sadly recently deceased, for his unquestionable wisdom, unlimited support, reassuring guidance, and strong encouragement. His uncountable virtues most certainly outlive him, and will always be an inspiration for us.

Research partly funded by MICINN TIN-2010-20959, XUGA-FEDER CN 2012/211, FP7-ICT-317820.

Dr. Laura M. Castro is a teacher and researcher at the University of A Coruña (Spain). Her work focuses on distributed systems, functional and object-oriented programming, design patterns, and more recently, software testing. Dr. Castro is responsible for the Distributed Systems and Service-Oriented Architectures research area at the ITC Research Centre of the same university since 2010.

References (41)

  • L.M. Castro et al.

    Testing data consistency of data-intensive applications using QuickCheck

  • J. Armstrong et al.

    Concurrent Programming in Erlang

    (1996)
  • T. Arts et al.

    Testing telecoms software with Quviq QuickCheck

  • T. Arts et al.

    Testing Erlang data types with Quviq QuickCheck

  • T. Arts et al.

    From test cases to FSMs: augmented test-driven development and property inference

  • P. Baker et al.

    Model-Driven Testing. Using the UML Testing Profile

    (2008)
  • J. Blom et al.

    Automated test generation for industrial Erlang applications

  • L.M. Castro et al.

    A practical methodology for integration testing

  • Y. Cheon et al.

    Automating Java program testing using OCL and AspectJ

  • K. Claessen et al.

    QuickCheck: a lightweight tool for random testing of Haskell programs

  • J. Derrick et al.

    Property-based testing – the ProTest project

  • A.C. Dias Neto et al.

    A survey on model-based testing approaches: a systematic review

  • ETSI ES 201 873-1. Methods for Testing and Specification (MTS), The Testing and Test Control Notation version 3, Part...
  • European Telecommunications Standards Institute (ETSI), 2013....
  • P. Farrell-Vinay

    Manage Software Testing

    (2008)
  • G. Fink et al.

    Property-based testing: a new approach to testing for assurance

    ACM SIGSOFT Software Engineering Notes

    (1997)
  • M.A. Francisco et al.

    Automatic generation of test models and properties from UML models with OCL constraints

  • V.M. Gulías et al.

    VoDKA: developing a video-on-demand server using distributed functional programming

    Journal of Functional Programming

    (2005)
  • J. Hughes

    QuickCheck testing for fun and profit

  • A. Hunt et al.

    Pragmatic unit testing in Java with JUnit

    The Pragmatic Programmers

    (2003)
    Miguel A. Francisco is a senior software developer and team leader at Interoud Innovations S.L. Mr. Francisco's work in industry is combined with research activities in collaboration with fellow researchers at the University of A Coruña, among others. Specifically, his interests focus on the development of advanced integration testing techniques, which are directly tested on Interoud's products.

    1 MADS Group.