A model-driven approach to catch performance antipatterns in ADL specifications

https://doi.org/10.1016/j.infsof.2016.11.008

Abstract

Context: While the performance analysis of a software architecture is a quite well-assessed task nowadays, the issue of interpreting the performance results for providing feedback to software architects is still very critical. Performance antipatterns represent effective instruments to tackle this issue, because they document common mistakes leading to performance problems as well as their solutions.

Objective: To date, performance antipatterns have only been studied in the context of software modeling languages like UML. In this manuscript, our objective is to catch them in the context of ADL-based software architectures and to investigate their effectiveness in that setting.

Method: We have implemented a model-driven approach that allows the automatic detection of four performance antipatterns in Æmilia, a stochastic process-algebraic ADL for performance-aware, component-oriented modeling of software systems.

Results: We evaluate the approach by applying it to three case studies in different application domains. Experimental results demonstrate the effectiveness of our approach in supporting the performance improvement of ADL-based software architectures.

Conclusion: We can conclude that the detection of performance antipatterns, from the earliest stages of software development, represents an effective instrument to tackle the issue of identifying flaws and improving system performance.

Introduction

The need for early non-functional analysis of software architectures is nowadays well assessed, as it generates positive effects on the whole software development process [1]. In fact, early detection of violations of non-functional requirements allows developers to save considerable effort in the testing phases, where bugs are hard and expensive to fix [2]. Moreover, the investigation of non-functional attributes in software architectures helps to compare different alternatives that are equivalent from a functional viewpoint, thus adding value to the decisional process of software architects [3], [4], [5].

In the last two decades, the performance analysis of a software architecture has become a well-assessed task. Several modeling languages (like UML) allow models to be annotated with performance input parameters; the annotated models can then be transformed into performance models (like Queueing Networks), and analysis tools can be used to obtain performance indices. The syntax and semantics of several architecture description languages (such as Æmilia [6]) allow the performance parameters of a software architecture to be specified, and their supporting tools can analyze the architecture performance (besides the typical functional analysis) without needing to transform the architecture description into a performance-specific language.

In the literature, several approaches have been proposed for performance analysis at the software architectural design level [7], [8], [9], whereas the issue of interpreting the performance results to provide architectural refactorings to software architects is still very critical. This is mostly due to the gap between performance results – i.e., mean values, variances, and/or probability distributions of indices like throughput, response time, etc. – and the expected refactorings, i.e., architectural alternatives that can help to remove problems identified during the performance analysis phase. In most cases, software analysts (with no expertise in performance) build different architectural alternatives to try to overcome performance problems or, in the best case, performance experts provide suggestions based on their previous experience.

Therefore, further approaches are necessary to support and facilitate the process of performance results interpretation and software refactoring generation. Moreover, automating this process would represent a breakthrough for this task. Up to now, only bottleneck analysis [10] has been used for this goal. It allows the identification of cases where the performance of a software system is limited by a number of overloaded software components and/or hardware resources. However, it falls short of identifying more complex cases.

Performance antipatterns [11], [12], [13] represent effective instruments to tackle the issue of interpreting performance results, because they document: (i) common mistakes leading to performance problems and (ii) solutions in terms of software refactorings. Their effectiveness has been demonstrated, among other works, by our recently consolidated results: (i) we formalized the representation of antipatterns by means of first-order logic rules that express a set of system properties under which an antipattern occurs [14]; (ii) we introduced a methodology to prioritize the detected antipatterns and solve the most promising ones [15].
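
To give the flavor of these logic-based specifications, the following is an illustrative sketch of a Blob-like rule, i.e., a software entity that exchanges an excessive number of messages while saturating the processing node it is deployed on. It is not a verbatim rule from [14]: the function and threshold symbols (F_numMsgs, F_util, deploy, Th_maxMsgs, Th_maxHwUtil) are hypothetical stand-ins for the model-derived functions and threshold bounds used in the actual formalization.

\[
\exists\, swE \in \mathbb{SE} \;:\; F_{\mathit{numMsgs}}(swE) \ge Th_{\mathit{maxMsgs}} \;\wedge\; F_{\mathit{util}}(\mathit{deploy}(swE)) \ge Th_{\mathit{maxHwUtil}}
\]

Such a rule holds on a concrete architecture only when both threshold bounds are exceeded, which is precisely what makes it amenable to automated checking once its predicates are computable on a model.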

In the literature, performance antipatterns have only been studied in the context of UML-like software modeling languages [16], [17], whereas in this paper we tackle the problem of identifying their occurrences in ADL-based software architectures. The main goal is to close a round-trip process that encompasses the performance analysis of an architecture, the interpretation of the results, and the consequent architectural refactoring. This would contribute to an easier adoption of analysis practices in the daily life of software architects. Performance antipatterns are defined in terms of their own specific vocabulary [12], and they are founded on different aspects of a software system referring to static, behavioral, and deployment characteristics [14], as well as on performance measures. As a consequence, the ADLs suitable for the detection of performance antipatterns are those that best fit these characteristics, namely the ones that better overlap with the antipatterns vocabulary and that allow performance analysis of a software architecture.

Various ADLs allow performance analysis (e.g., ABACUS [18], OSAN [19], EAST-ADL [20]). Nevertheless, none of their supporting environments allows the interpretation of performance results. Based on our experience and supported by the software performance engineering community [11], [12], [13], the most promising ADLs for performance antipatterns detection are Æmilia [6] and AADL [21].

Æmilia is an ADL aimed (among other things) at the performance evaluation of software systems. It allows a software architecture to be specified from a functional viewpoint, performance parameters (such as rates and probabilities of actions) to be defined, and performance indices of interest (such as throughput, response time, and utilization) to be evaluated. Æmilia is very powerful in the specification of performance measures because it relies on rewards that can be assigned to architectural elements, and performance analysis co-exists with functional verification, such as checking the reachability of a certain state or deadlock freedom. AADL is an ADL designed (among other things) for the specification and analysis of software systems. It supports, in fact, both performance analysis and functional simulation. AADL allows latency analysis to be performed on flow specifications of components and connections [22]. However, such analysis is limited to worst and best cases, whereas the reward-based mechanism of Æmilia enables a much wider range of performance specifications. On the basis of these considerations, we have adopted Æmilia as the ADL for this work because of its stronger ability in the specification of performance measures, while we defer AADL to our future work.

This paper is an extension of [23], where we briefly illustrated an approach for performance antipattern detection on Æmilia specifications, which enables the actual usage of first-order logic rules [14] on a concrete ADL-based software architecture. In detail, our approach starts by converting an Æmilia textual description into an Æmilia model conforming to an enriched Æmilia metamodel that we define in the following. The Æmilia model is subsequently annotated with the performance results produced by the TwoTowers tool [24], which is the Æmilia supporting environment. At this point, performance antipattern detection is carried out by our engine, which analyzes the annotated Æmilia model using a set of OCL rules that model the detectable performance antipatterns. A list of detected antipattern occurrences, if any, is given as a result. Each antipattern includes in its definition the corresponding solutions, that is, alternative architectures allowing its removal. Æmilia can represent a subset of the known performance antipatterns; in particular, its expressiveness enables the (full or partial) detection of seven performance antipatterns, as will be illustrated later.

The choice of OCL for the definition of the rules modeling the performance antipatterns is founded on its expressivity as compared to the first-order logic with which performance antipatterns had been formally represented, but without the possibility of automating their detection [14]. From [25], it follows that the expressive power of OCL is no lower than that of first-order logic; hence, we do not lose expressivity by defining OCL rules for performance antipatterns. In addition, OCL is well suited for implementation purposes, being applicable, for example, to Eclipse Modeling Framework-based models, as has been done in this work.
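
To give a concrete flavor of the rules our engine evaluates, the following OCL fragment is a minimal sketch of a Blob-like detection query, mirroring the first-order rule sketched above. The metamodel names it assumes (ArchiType, ArchiElemInstance, instances, interactions, utilization, and the two threshold attributes) are hypothetical placeholders; the enriched Æmilia metamodel described later may expose different names.

  -- Illustrative sketch only: class, attribute, and threshold names are hypothetical.
  context ArchiType
  -- Select the architectural element instances that both exchange too many
  -- interactions and exhibit a utilization above the configured bound.
  def: blobCandidates : Set(ArchiElemInstance) =
    self.instances->select(aei |
      aei.interactions->size() >= self.thMaxConnections
      and aei.utilization >= self.thMaxUtilization)
  -- An empty candidate set means that no Blob occurrence is detected.
  inv BlobFree: blobCandidates->isEmpty()

The detection engine evaluates queries of this kind on the annotated Æmilia model and reports every element of a non-empty candidate set as an antipattern occurrence.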

The novel contributions of this paper are: (i) a detailed description of an Æmilia metamodel enriched with performance antipattern-related concepts; (ii) the automation of the performance results annotation, which allows the inclusion of performance values in Æmilia models; (iii) an extended experimentation that includes further details on the original case study as well as two additional ones; (iv) a comparison of the antipattern-based process with bottleneck analysis.

As future work, we plan to apply our approach to other formalisms, starting from AADL, which appears to be the most promising one among the ADLs. Moreover, we intend to introduce automation in the application of antipattern solutions. Lastly, we plan to experiment with our approach on real systems coming from industrial experience.

The paper is organized as follows: related work is discussed in Section 2. Section 3 provides background information on Æmilia and presents a preliminary study that has led us to identify which performance antipatterns are detectable in Æmilia descriptions. Section 4 describes the whole approach for the representation and detection of performance antipatterns in Æmilia. Validation results are reported in Section 5, where our approach is applied to three case studies. Section 6 discusses the open issues raised by the proposed approach, and Section 7 discusses the threats to validity. Finally, Section 8 concludes the paper with final considerations and directions for future work.


Related work

The broader scope of this paper is the analysis of non-functional concerns at the architecture level. In the literature, the analysis of quality properties at the architectural level has been discussed from different perspectives, including modeling strategies (e.g., architecture viewpoints and perspectives [5], [26]) that enable analysis methods [27], [28], [29]. There are two main streams of approaches in this direction: (i) qualitative (e.g., scenario-based architecture analysis [30],

Reasoning on performance antipatterns in Æmilia

In this section, we provide an overview of the performance antipatterns that can be specified in Æmilia. Section 3.1 first provides basic information on Æmilia; then, Section 3.2 reports on the detection and solution of antipatterns in Æmilia.

Representing and detecting performance antipatterns in Æmilia

In this section, we present our model-driven approach to catch the performance antipatterns in the Æmilia ADL. Fig. 1 illustrates the round-trip performance analysis process that we envisage in the Æmilia context, where shaded boxes represent the main contributions of this paper. The presented approach is tool-supported: we developed an Eclipse-based tool that automatically detects the performance antipatterns. Our tool can be downloaded online [54].

Fig. 1 is partitioned in two

Validation

In this section, we present our approach at work on three examples to demonstrate that the process actually induces performance improvements on ADL-based software architectures. To demonstrate the usefulness of the antipattern-based process, we compare our experimental results with the ones obtained by means of the well-known bottleneck

Discussion

The approach presented in this paper highlights the complexity of detecting performance antipatterns in ADL-based software architectures. In the following we discuss some key points raised by this work.

Early vs. late performance analysis. There is a trade-off between carrying out performance analysis in the early lifecycle phases, where detected problems are cheaper to fix but the amount of available information is limited, and late performance analysis (possibly on running artifacts), where the results

Threats to validity

A threat to the internal validity is that our current implementation allows the detection of antipatterns that strictly conform to our interpretation of the literature [12]. Indeed, several other feasible interpretations of antipatterns can be provided. This unavoidable gap is an open issue in this domain and certainly requires a wider investigation to consolidate the antipatterns' definitions. However, different interpretations of antipatterns can be added to our tool by translating them into OCL

Conclusion

In this paper, we have introduced a model-driven approach to detect performance antipatterns in ADL-based software architectures. To the best of our knowledge, this is the first paper that works on an ADL like Æmilia to introduce automation in the investigation of the causes of poor performance, and the results that we have reported are promising.

This experience has allowed us to widen the scope of our research, which previously focused on UML-like languages [15], [16], because typical ADL specifications are

Acknowledgments

We would like to thank the anonymous reviewers for their constructive feedback that helped us to improve the paper quality.

References (65)

  • S. Balsamo et al., Performance evaluation at the software architecture level, Formal Methods for Software Architectures (2003)
  • V. Cortellessa et al., Digging into UML models to remove performance antipatterns, ICSE Workshop Quovadis (2010)
  • M. De Sanctis, C. Trubiani, V. Cortellessa, A. Di Marco, M. Flamminj, PANDA-AEmilia open source project, ...
  • D.C. Petriu, Challenges in integrating the analysis of multiple non-functional properties in model-driven software engineering, Proceedings of the Workshop on Challenges in Performance Methods for Software Development (WOSP-C) (2015)
  • P. Mohan et al., Quality flaws: issues and challenges in software development, Comput. Eng. Intell. Syst. (2012)
  • A. Jansen et al., Software architecture as a set of architectural design decisions, Working IEEE/IFIP Conference on Software Architecture (WICSA) (2005)
  • A. Alebrahim et al., Towards systematic integration of quality requirements into software architecture, European Conference on Software Architecture (ECSA) (2011)
  • B. Tekinerdogan et al., Defining architectural viewpoints for quality concerns, European Conference on Software Architecture (ECSA) (2011)
  • M. Bernardo et al., Architecting families of software systems with process algebras, ACM Trans. Softw. Eng. Methodol. (2002)
  • V. Cortellessa et al., Model-Based Software Performance Analysis (2011)
  • H. Koziolek, Performance evaluation of component-based software systems: a survey, Perform. Eval. (2010)
  • G. Franks et al., Layered bottlenecks and their mitigation, Third International Conference on Quantitative Evaluation of Systems (QEST) (2006)
  • C.U. Smith et al., Software performance antipatterns for identifying and correcting performance problems, International Computer Measurement Group Conference (2012)
  • C.U. Smith et al., More new software antipatterns: even more ways to shoot yourself in the foot, Int. CMG Conference (2003)
  • C.U. Smith et al., Performance and scalability of distributed software architectures: an SPE approach, Scalable Comput. (2000)
  • V. Cortellessa et al., An approach for modeling and detecting software performance antipatterns based on first-order logics, Softw. Syst. Model. (2014)
  • C. Trubiani et al., Guilt-based handling of software performance antipatterns in Palladio architectural models, J. Syst. Softw. (2014)
  • C. Trubiani et al., Detection and solution of software performance antipatterns in Palladio architectural models, ICPE (2011)
  • K. Dunsire et al., The ABACUS architectural approach to computer-based system and enterprise evolution, ECBS (2005)
  • A. Kamandi et al., Toward a new analyzable architectural description language based on OSAN, ICSEA (2007)
  • ATESST2, EAST-ADL Domain Model Specification (2010)
  • B.A. Lewis et al., Multi-dimensional model based engineering using AADL, IEEE International Workshop on Rapid System Prototyping (2008)
  • Carnegie Mellon Software Engineering Institute (SEI), Architecture Analysis and Design Language (AADL) - latency...
  • V. Cortellessa et al., Enabling performance antipatterns to arise from an ADL-based software architecture, Joint Working IEEE/IFIP Conference on Software Architecture and European Conference on Software Architecture (WICSA/ECSA) (2012)
  • M. Bernardo, TwoTowers 5.1 User Manual (2006)
  • L. Mandel et al., On the expressive power of OCL, FM'99 - Formal Methods (1999)
  • N. Rozanski et al., Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives (2012)
  • R. Kazman et al., SAAM: a method for analyzing the properties of software architectures, International Conference on Software Engineering (ICSE) (1994)
  • L. Dobrica et al., A survey on software architecture analysis methods, IEEE Trans. Software Eng. (2002)
  • B. Tekinerdogan, ASAAM: aspectual software architecture analysis method, Working IEEE/IFIP Conference on Software Architecture (WICSA) (2004)
  • R. Kazman et al., Scenario-based analysis of software architecture, IEEE Software (1996)
  • R. Kazman et al., The architecture tradeoff analysis method, IEEE International Conference on Engineering of Complex Computer Systems (ICECCS) (1998)