Bringing white-box testing to Service Oriented Architectures through a Service Oriented Approach

https://doi.org/10.1016/j.jss.2010.10.024

Abstract

The attractive feature of Service Oriented Architecture (SOA) is that pieces of software conceived and developed by independent organizations can be dynamically composed to provide richer functionality. The same reasons that enable flexible compositions, however, also prevent the application of some traditional testing approaches, making SOA validation challenging and costly. Web services usually expose just an interface, enough to invoke them and develop some general (black-box) tests, but insufficient for a tester to develop an adequate understanding of the integration quality between the application and the independent web services. To address this gap we propose an approach that makes web services more transparent to testers through the addition of an intermediary service that provides coverage information. The approach, named Service Oriented Coverage Testing (SOCT), provides testers with feedback about how much a service is exercised by their tests without revealing the service internals. In SOCT, testing feedback is itself offered as a service, thus preserving SOA's founding principles of loose coupling and implementation neutrality. In this paper we motivate and define the SOCT approach, and implement an instance of it. We also perform a study to assess SOCT's feasibility and provide a preliminary evaluation of its viability and value.

Introduction

Usage of web services has increased dramatically in recent years (Gartner and Forrester, 2003). Different definitions of what a web service is can be found in the literature; the World Wide Web Consortium (W3C) qualifies a web service as (W3C Working Group, 2004) a software system designed to support interoperable machine-to-machine interaction over a network, having an interface in a machine-processable format. Such interface descriptions can be published and discovered, thus making it cost-effective for companies to integrate their own services with those developed and managed by third parties (Schroth et al., 2008). Of course, web services are not necessarily used across organizations; on the contrary, they are also widely used “in-house” within corporate environments. However, the former is the situation we consider in this paper because, as we explain below, it exposes the most difficult challenges from the tester's viewpoint.

An emerging paradigm for organizing and utilizing distributed capabilities that may be under the control of different organizations is the Service Oriented Architecture (SOA) (OASIS, 2006), while the sequence and conditions in which one web service invokes other web services in order to achieve its goals is referred to as an orchestration (W3C Working Group, 2004).

Failures in web service orchestrations, unfortunately, are common and their impact becomes more obvious and detrimental as their popularity and interdependencies increase. For example, a recent failure in Amazon's storage web service affected many companies relying on it (Amazon Discussion Forum).

For a service orchestrator, building effective tests that can detect failures in the interaction among the composed services is challenging for two reasons. First, even if best practices (Torry Harris Business Solutions) are followed by the developer to test an individual service to ensure its quality, nothing guarantees that it will then operate smoothly as part of a dynamic distributed system made of multiple orchestrated but autonomous services. Second, the orchestrator of independently developed services can usually only access their interface to derive test cases and determine the extent of the testing activity. This limited visibility means that the orchestrator has to rely heavily upon an interface whose documentation is often limited and possibly inconsistent with the true system behavior, especially with services that undergo frequent updates (Fisher et al., 2007a).

Researchers have developed several approaches to address these challenges. In particular, much work has focused on test case generation from improved service interfaces (i.e., more precise behavioral specifications) (PLASTIC, 2010, Sinha and Paradkar, 2006, Xu et al., 2005), on the detection of inconsistencies between a service interface description and its behavior (Fisher et al., 2007b), on defining adequacy criteria based on the web services interactions (L. Li et al., 2008), on procedures to build scaffolding for test services in more controlled settings (Sneed and Huang, 2007), and on using the availability of multiple services as oracles (Tsai et al., 2005). One trait shared by existing test approaches is the treatment of web services as black boxes (Canfora and Di Penta, 2009), focusing on the external behavior but ignoring the internal structure of the services included in the orchestration. This trait follows the very nature of web services, which are meant to be implementation neutral. From a testing perspective, though, this is a pity. White-box approaches are in fact a well-known valuable complement to black-box ones (Pezzè and Young, 2007), as coverage information can provide an indication of the thoroughness of the executed test cases, and can help maintain an effective and efficient test suite.

To address this limitation we have conceived an approach through which services can be made more transparent to an external tester while maintaining the flexibility, dynamism and loose coupling of SOAs. Our approach enables a service orchestrator to access test coverage measures (or their byproducts) on third-party services without gaining access to the code of those services. We refer to this enhancement as “whitening” of SOA testing (Bartolini et al., 2009) to reflect a move towards white-box testing, in the sense of providing increased feedback about test executions on a service, while the service provider remains in control of how much of the internal system structure is revealed.

Whitening is achieved through the use of dedicated services built for collecting coverage data; these services compute the coverage of the services under test on behalf of the orchestrator. The loose coupling of the web service paradigm is not lost between the orchestrator and the developer of the provided service, because the orchestrator is still unable to see anything of the service beyond its interface. In particular, the orchestrator is completely unaware of any implementation detail, and simply obtains some cumulative measures (percentages) which only reveal how much of the service the executed tests are actually using. Loss of loose coupling happens, at most, between the provided service and the coverage collecting service (which can reasonably be assumed to be a trustworthy third party), but, even so, which and how much information is disclosed remains under the control of the provider of the service under test. The approach thus blends naturally into the SOA paradigm and is called Service Oriented Coverage Testing (SOCT).
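To make the idea of cumulative measures concrete, the following minimal sketch shows the kind of aggregation a TCov-like coverage service could perform. It is only an illustration under assumed names (CoverageCollector, report_hit, coverage_percent are not part of the actual SOCT interface): the provider exposes only opaque probe identifiers, and the orchestrator sees only the resulting percentage.

```python
class CoverageCollector:
    """Aggregates probe hits for one instrumented service under test."""

    def __init__(self, probe_ids):
        # The provider decides which program entities (e.g. branches or
        # statements) are exposed as probes; only their opaque ids, never
        # the code itself, are shared with the collecting service.
        self.hits = {probe_id: 0 for probe_id in probe_ids}

    def report_hit(self, probe_id):
        # Called (in a real deployment, via a web service message) by the
        # instrumented service each time a probe executes.
        if probe_id in self.hits:
            self.hits[probe_id] += 1

    def coverage_percent(self):
        # The orchestrator only ever obtains this cumulative percentage,
        # revealing how much of the service the tests exercised.
        covered = sum(1 for count in self.hits.values() if count > 0)
        return 100.0 * covered / len(self.hits)
```

For instance, a service exposing four probes of which two are hit during testing would report 50% coverage, with no indication of what the probes correspond to internally.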

The added transparency from test whitening is clearly far from “complete” white-box testing; coverage only adds a small amount of information. However, this improvement in transparency increases testability, letting the service orchestrator gain additional feedback about how a service orchestration is exercised during validation. This feedback can then be used by orchestrators in many ways: to determine whether a coverage adequacy criterion that includes the third-party service structure has been reached; to identify tests that do not contribute much and can be removed from a suite; or to drive regression testing to detect possible updates in the implementation of a third-party service that might affect the behavior of their application. On the other hand, third-party service providers may be enticed to provide such an extended testing interface as a way to implement continuous quality assurance checks (perhaps in association with the orchestrator), or may be required to do so as part of a service quality agreement.
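As one illustration of the regression-testing usage, an orchestrator could re-run its suite periodically and compare the coverage percentage reported for each test across two sessions; a changed figure hints that the service implementation may have been updated. This is only a sketch under assumed names (detect_changes and the dict-based session format are illustrative, not part of the paper's infrastructure):

```python
def detect_changes(baseline, current, tolerance=0.0):
    """Return the tests whose reported coverage changed between sessions.

    baseline, current: dicts mapping test name -> coverage percentage,
    as they could be reported by a coverage-collecting service.
    """
    suspects = []
    for test, old_cov in baseline.items():
        new_cov = current.get(test)
        # A missing or shifted figure flags a possible service update
        # affecting the paths this test exercises.
        if new_cov is None or abs(new_cov - old_cov) > tolerance:
            suspects.append(test)
    return sorted(suspects)
```

Note that equal percentages do not prove the service is unchanged; the comparison is a cheap early-warning signal, not a guarantee.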

Whitening SOA testing requires the design of an infrastructure that fits naturally in the service-oriented model by providing test coverage information itself as a service accessible only through a service interface. The infrastructure supporting our approach achieves that goal by requiring:

  1. for the developer of a provided service, to instrument the code to enable the monitoring of the execution of target program entities, and to make the relative usage information publicly available;

  2. for the provider of the coverage-collecting service, to track test execution results; and

  3. for the service orchestrator testing the integrated application, to request testing information through a standardized, published web service testing interface.
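The first requirement above, provider-side instrumentation, could look as follows in miniature. Here the probe report is a plain callback, whereas in a real deployment it would be a message to the coverage-collecting web service; the function name make_instrumented_booking and the probe ids are illustrative assumptions, not the paper's actual instrumentation:

```python
def make_instrumented_booking(report):
    """Wrap the provider's business logic with coverage probes.

    report: a callable taking a probe id; stands in for the call that
    notifies the coverage-collecting service of a probe execution.
    """
    def book_flight(seats_free, requested):
        report("entry")               # probe: operation invoked
        if requested <= seats_free:
            report("accept-branch")   # probe: request granted
            return "confirmed"
        report("reject-branch")       # probe: request denied
        return "rejected"
    return book_flight
```

Only the probe ids leave the provider's boundary; the conditional logic that triggers them stays hidden, which is what keeps the approach implementation-neutral from the orchestrator's viewpoint.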

From a broader perspective, such an infrastructure relies on laying down a governance framework to realize inter-organization testing at the orchestration level (Bertolino and Polini, 2009). Such a framework encompasses the set of rules, policies, practices and responsibilities by which a complex SOA system is controlled and administered. In this paper we focus on the governance issues most closely associated with the integration testing of the orchestrated application. Of course, governance per se does not prevent malicious or irresponsible behavior on the service provider's part. The SOCT approach works as long as all involved stakeholders cooperate diligently (which is no different from any other collaborative engineering endeavor). The service provider, in particular, should ensure that the coverage information sent to the collecting service is precise, complete, and timely.

The idea of SOCT was first proposed in Bartolini et al. (2008a), and elaborated into a conceptual approach in Bartolini et al. (2009). This paper extends the latter work by revising the approach's associated definitions together with its potential applications, providing more detailed explanations of the interactions among the stakeholders, describing a fully implemented instance of it, and performing a completely new assessment of its usefulness and performance through a case study. More precisely, in the next section we overview foundational related work; then, in Section 3, we present the problem domain, its motivation and main challenges. In Section 4 we define SOCT concepts and a realization scenario. In particular, the main components of the developed instance are described in Section 4.3. The case study is described in Section 5. Conclusions are drawn in Section 6.

Related work

In this section we overview the topic of web service testing, which is currently an active research area, as recently surveyed by Canfora and Di Penta (2009). As mentioned earlier, we focus here on SOA testing at the integration level; in particular we address the need to test a composition of services that might have been developed by independent organizations. Some of the issues encountered in testing a composition of services are investigated by Bucchiarone et al. (2007), distinguishing

Motivation

In this section we motivate and discuss the key ideas behind SOCT by illustrating some testing scenarios in which SOA test whitening would be valuable. The case study in Section 5 provides an assessment of some of these scenarios.

We consider the case of a SOA orchestrator building an Integrated Travel Reservation System (ITRS) for use by Travel Agency customers. ITRS is meant to provide its clients with a single-point access to several on-line services including flight booking and hotel

The SOCT approach

With reference to the motivating scenario of the previous section, the SOCT approach is depicted in Fig. 1. To the already mentioned ITRS orchestrator and GDS service provider, a new stakeholder that we call TCov is added. TCov is a service provider who sits between ITRS and GDS, delivering coverage information on GDS as the latter is tested by the ITRS orchestrator. To build such a scenario, four activities must take place. First, the company that provides the GDS services must instrument them

Case study

In this section we start assessing the SOCT support for whitened testing of SOA orchestrations like the ones described in Section 3. We focus on two research questions:

  • RQ1: SOCT Usefulness: is SOCT useful for improving SOA testing? In particular, we will assess whether it can support test suite assessment and reduction (RQ1.1) and selective regression testing for change detection (RQ1.2);

  • RQ2: SOCT Viability: is the overhead introduced by the proposed SOCT infrastructure acceptable?
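Test suite reduction, as investigated in RQ1.1, can be sketched with the classic greedy heuristic applied to the per-test sets of covered entities that a SOCT-like service could report. The function name reduce_suite and the input format are illustrative assumptions; the point is that the orchestrator needs only opaque entity ids, not the service's code, to discard redundant tests:

```python
def reduce_suite(per_test_coverage):
    """Greedily keep tests until the union of their covered entities
    matches the coverage achieved by the full suite.

    per_test_coverage: dict mapping test name -> set of covered
    entity ids (opaque probe identifiers reported per test).
    """
    target = set().union(*per_test_coverage.values())
    kept, covered = [], set()
    remaining = dict(per_test_coverage)
    while covered != target:
        # Pick the test contributing the most not-yet-covered entities.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            break  # no remaining test adds new coverage
        kept.append(best)
        covered |= remaining.pop(best)
    return kept
```

Greedy reduction is a heuristic: it preserves the suite's entity coverage but does not guarantee a minimum-size result, which is consistent with its common use in coverage-based suite maintenance.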

Conclusions and future work

We have presented SOCT, an approach that whitens SOA testing, overcoming what is apparently a contradiction in terms: it empowers a SOA orchestrator to obtain coverage information on an invoked external web service, yet without breaking the implementation neutrality of the latter, which is one of SOA founding principles. Such an attainment is possible if providers are willing to instrument their services so that users invoking them for testing purposes can monitor the execution of the service

Acknowledgements

This work was partially supported by the following projects: the TAS3 Project (EU FP7 IP No. 216287), the Italian MIUR Project D-ASAP (Prin 2007), and the National Science Foundation Award #0915526. We also wish to express our gratitude to the reviewers for their valuable suggestions, which we believe greatly improved this work.

References (43)

  • A. Benharref et al., Efficient traces’ collection mechanisms for passive testing of web services, Information and Software Technology, 2009.
  • Amazon Discussion Forum. Thread: Massive (500) Internal Server Error.outage....
  • Apache Web Services Project, Axis User's Guide, 2005.
  • C. Bartolini et al., Whitening SOA testing.
  • C. Bartolini et al., Introducing service-oriented coverage testing.
  • C. Bartolini et al., Data Flow-Based Validation of Web Services Compositions: Perspectives and Examples, Architecting Dependable Systems V, 2008.
  • A. Bertolino et al., SOA test governance: enabling service integration testing across organization and technology borders.
  • A. Bucchiarone et al., Testing service composition.
  • G. Canfora et al., Service Oriented Architecture Testing: A Survey, LNCS 5413, 2009.
  • H. Cao et al., Towards model-based verification of BPEL with model checking.
  • M. Di Penta et al., Search-based testing of service level agreements.
  • Fisher II, M., Elbaum, S., Rothermel, G., December 2007. Automated refinement and augmentation of web service...
  • M. Fisher et al., Dynamic characterization of web application interfaces.
  • J. García-Fanjul et al., Generating test cases specifications for BPEL compositions of web services using SPIN.
  • Gartner and Forrester, 2003. Use of web services skyrocketing. http://www.utilitycomputing.com/news/404.asp (accessed...
  • S. Hou et al., Quota-constrained test-case prioritization for regression testing of service-centric systems.
  • L. Li et al., Control flow analysis and coverage driven testing for web services.
  • Z.J. Li et al., Business-process-driven gray-box SOA testing, IBM Systems Journal, 2008.
  • H. Lu et al., Testing context-aware middleware-centric programs: a data flow approach and an RFID-based experimentation.
  • Y.-S. Ma et al., MuJava: a mutation system for Java.
  • L. Mei et al., Data flow testing of service-oriented workflow applications.

    Cesare Bartolini earned his master degree in IT Engineering at the University of Pisa in 2003. His main research focus after his degree was on real-time systems. From 2004 to 2007 he was granted a PhD at Scuola Superiore Sant’Anna in Pisa, researching on platform-based design for real-time systems. During this time, he made an internship at United Technologies Research Center in East Hartford, CT, USA, working on real-time projects for real-time modeling. After earning his PhD, he started to collaborate with the Software Engineering Lab at ISTI-CNR in Pisa, mainly focusing on web service testing, where he is currently working under a research grant.

    Antonia Bertolino is a CNR Research Director at ISTI-CNR, Pisa. She is an internationally renowned researcher in the fields of software testing and dependability, and currently participates in the FP7 projects CHOReOS, TAS3, CONNECT and the Network NESSOS. She is an Associate Editor of the IEEE Transactions on Software Engineering and the Springer Empirical Software Engineering Journal, and serves as the Software Testing Area Editor for the Elsevier Journal of Systems and Software. She currently serves as Program Chair of CBSE 2011 and AST 2011, and served in the past for the flagship conference ESEC/FSE 2007. She has (co)authored over 100 papers in international journals and conferences.

    Sebastian Elbaum is a Professor at the University of Nebraska-Lincoln. His research interests are geared towards developing more dependable systems, including program analysis, end-user software engineering, and empirical software engineering. He received the NSF CAREER Award for his research on the utilization of field data to test highly-configurable and rapidly-evolving pervasive systems. He was Program Chair for the International Symposium on Software Testing and Analysis and Program Co-Chair for the Symposium on Empirical Software Engineering and Measurement. He is an Associate Editor for the ACM Transactions on Software Engineering and Methodology. He received a Ph.D. in Computer Science from the University of Idaho and a Systems Engineering degree from the Universidad Catolica de Cordoba, Argentina.

    Eda Marchetti is a researcher at CNR-ISTI. She graduated summa cum laude in Computer Science from the University of Pisa (1997) and got a PhD from the same University (2003). Her research activity focuses on Software Testing and in particular: developing automatic methodologies for testing, defining approaches for scheduling the testing activities, implementing UML-based tools for test cases generation, and defining methodologies for test effectiveness evaluation. She has served as a reviewer for several international conferences and journals, and she has been part of the organizing and program committee of several international workshops and conferences.
