From monolithic to component-based performance evaluation of software architectures

A series of experiments analysing accuracy and effort

Empirical Software Engineering

Abstract

Model-based performance evaluation methods for software architectures can help architects assess design alternatives and avoid the cost of late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the modelling effort they require are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods enable performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction gained from reusing component-based models. Data were collected from the resulting artefacts, questionnaires and screen recordings. They were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models with PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.


Notes

  1. To be more precise: the Hauptdiplom part of the German Diplom program, which is similar to a Master's program.

References

  • Bacigalupo DA, Jarvis SA, He L, Nudd GR (2004) An investigation into the application of different performance techniques to e-commerce applications. In: 18th IEEE international parallel and distributed processing symposium 2004 (IPDPS’04). IEEE Computer Society Press, Santa Fe, New Mexico

  • Bacigalupo DA, Jarvis SA, He L, Spooner DP, Dillenberger DN, Nudd GR (2005) An investigation into the application of different performance prediction methods to distributed enterprise applications. J Supercomput 34(2):93–111

  • Bacigalupo DA, Turner JD, Jarvis SA, Nudd GR (2003) A dynamic predictive framework for e-business workload management. In: 7th world multiconference on systemics, cybernetics and informatics (SCI2003) performance of web services invited session. Orlando, USA

  • Balsamo S, Di Marco A, Inverardi P, Simeoni M (2004a) Model-based performance prediction in software development: a survey. IEEE Trans Softw Eng 30(5):295–310

  • Balsamo S, Marzolla M, Di Marco A, Inverardi P (2004b) Experimenting different software architectures performance techniques: a case study. In: Proceedings of the 4th international workshop on software and performance. ACM, pp 115–119

  • Basili VR, Caldiera G, Rombach HD (1994) The Goal Question Metric Approach. In: Marciniak JJ (ed) Encyclopedia of software engineering—2 volume set. Wiley, pp 528–532

  • Becker S, Grunske L, Mirandola R, Overhage S (2006) Performance prediction of component-based systems: a survey from an engineering perspective. In: Reussner R, Stafford J, Szyperski C (eds.) Architecting systems with trustworthy components. Lecture notes in computer science, vol 3938. Springer-Verlag, Berlin, pp 169–192

  • Becker S, Koziolek H, Reussner R (2009) The Palladio component model for model-driven performance prediction. J Syst Softw 82:3–22

  • Becker S, Koziolek H, Reussner RH (2007) Model-based performance prediction with the Palladio component model. In: WOSP ’07: proceedings of the 6th international workshop on software and performance. ACM, New York, pp 54–65

  • Böhme R, Reussner R (2008) Validation of predictions with measurements. In: Dependability metrics. Lecture notes in computer science, vol 4909. Springer-Verlag, Berlin, pp 7–13

  • Bondarev E, Chaudron MRV, de Kock EA (2007) Exploring performance trade-offs of a JPEG decoder using the DeepCompass framework. In: WOSP ’07: proceedings of the 6th international workshop on software and performance. ACM, New York, pp 153–163

  • Bondarev E, Muskens J, With Pd, Chaudron M, Lukkien J (2004) Predicting real-time properties of component assemblies: a scenario-simulation approach. In: Proceedings of the 30th EUROMICRO conference (EUROMICRO’04). IEEE Computer Society, Washington, pp 40–47

  • Briand LC, Bunse C, Daly WJ (1997) An experimental evaluation of quality guidelines on the maintainability of object-oriented design documents. In: 7th workshop on empirical studies of programmers. ACM, pp 1–19

  • Brosig F, Kounev S, Paclat C (2009) Using weblogic diagnostics framework to enable performance prediction for java EE applications. Oracle Technology Network (OTN) Article

  • Dujmović J, Almeida V, Lea D (eds) (2004) Proceedings of the 4th international workshop on software and performance (WOSP’04). ACM, New York

  • Dumke RR, Rautenstrauch C, Schmietendorf A, Scholz A (eds) (2001) Performance engineering, state of the art and current trends. Lecture notes in computer science, vol 2047. Springer-Verlag, Berlin

  • Franks G, Omari T, Woodside CM, Das O, Derisavi S (2009) Enhanced modeling and solution of layered queueing networks. IEEE Trans Softw Eng 35(2):148–161

  • Heitmann F, Moldt D (2007) Petri nets tool database. Available from http://www.informatik.uni-hamburg.de/TGI/PetriNets/tools/db.html

  • Hermanns H, Herzog U, Katoen J-P (2002) Process Algebra for Performance Evaluation. Theor Comp Sci 274(1–2):43–87

  • Huber N, Becker S, Rathfelder C, Schweflinghaus J, Reussner R (2010) Performance modeling in industry: a case study on storage virtualization. In: ACM/IEEE 32nd international conference on software engineering, software engineering in practice track, Cape Town, South Africa. ACM, New York, pp 1–10. Acceptance rate: 23% (16/71)

  • Jain R (1991) The art of computer systems performance analysis: techniques for experimental design, measurement, simulation, and modeling. Wiley

  • Jedlitschka A, Ciolkowski M, Pfahl D (2008) Reporting experiments in software engineering. In: Guide to advanced empirical software engineering. Springer, London, pp 201–228

  • Kounev S (2006) Performance modeling and evaluation of distributed component-based systems using queueing petri nets. IEEE Trans Softw Eng 32(7):486–502

  • Koziolek H (2004a) Empirical evaluation of performance-analysis methods for software architectures. http://sdqweb.ipd.kit.edu/publications/pdfs/koziolek2004b.pdf. Partial English translation of the original Master’s thesis “Empirische Bewertung von Performance-Analyseverfahren für Software-Architekturen”, Universität Oldenburg

  • Koziolek H (2004b) Empirische Bewertung von Performance-Analyseverfahren für Software-Architekturen. Master’s thesis, Universität Oldenburg

  • Koziolek H (2010) Performance evaluation of component-based software systems: a survey. Perform Eval 67(8):634–658. (Special issue on software and performance)

  • Koziolek H, Becker S, Happe J, Reussner R (2008) Evaluating performance of software architecture models with the Palladio component model. In: Model-driven software development: integrating quality assurance. IDEA Group Inc., pp 95–118

  • Koziolek H, Firus V (2005) Empirical evaluation of model-based performance prediction methods in software development. In: Reussner RH, Mayer J, Stafford JA, Overhage S, Becker S, Schroeder PJ (eds) Proceedings of the first international conference on the quality of software architectures (QoSA’05). Lecture notes in computer science, vol 3712. Springer-Verlag, Berlin, pp 188–202

  • Kuperberg M, Krogmann K, Reussner R (2008) Performance prediction for black-box components using reengineered parametric behaviour models. In: Proceedings of the 11th international symposium on component based software engineering (CBSE’08), Karlsruhe, Germany, 14th-17th October 2008. Lecture notes in computer science, vol 5282. Springer-Verlag, Berlin, pp 48–63

  • Lazowska E, Zahorjan J, Graham GS, Sevcik KC (1984) Quantitative system performance—computer system analysis using queueing network models. Prentice-Hall

  • Liu Y, Fekete A, Gorton I (2005) Design-level performance prediction of component-based applications. IEEE Trans Softw Eng 31(11):928–941

  • Martens A (2007) Empirical validation of the model-driven performance prediction approach Palladio. Master’s thesis, Carl-von-Ossietzky Universität Oldenburg

  • Martens A, Becker S, Koziolek H, Reussner R (2008a) An empirical investigation of the applicability of a component-based performance prediction method. In: Proceedings of the 5th European performance engineering workshop (EPEW’08), Palma de Mallorca, Spain. Lecture notes in computer science, vol 5261. Springer-Verlag, Berlin, pp 17–31

  • Martens A, Becker S, Koziolek H, Reussner R (2008b) An empirical investigation of the effort of creating reusable models for performance prediction. In: Proceedings of the 11th international symposium on component-based software engineering (CBSE’08), Karlsruhe, Germany. Lecture notes in computer science, vol 5282. Springer-Verlag, Berlin, pp 16–31

  • Martens A, Koziolek H, Prechelt L, Reussner R (2009) Experiment 3: effort for creating and reusing PCM performance models. http://sdqweb.ipd.kit.edu/wiki/Mobppexp/reuse_experiment

  • Marzolla M (2004) Simulation-based performance modeling of UML software architectures. PhD Thesis TD-2004-1, Dipartimento di Informatica, Università Ca’ Foscari di Venezia, Mestre, Italy

  • Medvidovic N, Taylor RN (1997) A classification and comparison framework for software architecture description languages. IEEE Trans Softw Eng 26:70–93

  • Menasce D, Almeida V, Dowdy L (1994) Capacity planning and performance modeling: from mainframes to client-server systems. Prentice-Hall, New Jersey

  • Menascé DA, Almeida VAF (2000) Scaling for E-business: technologies, models, performance, and capacity planning. Prentice Hall, Englewood Cliffs

  • Menascé DA, Almeida VAF, Dowdy LW (2004) Performance by design. Prentice Hall

  • Object Management Group (OMG) (2005) UML profile for schedulability, performance and time (SPT)

  • Object Management Group (OMG) (2006) UML profile for modeling and analysis of real-time and embedded systems (MARTE) RFP (realtime/05-02-06)

  • Prechelt L (2001) Kontrollierte Experimente in der Softwaretechnik. Springer-Verlag, Berlin

  • Prechelt L, Unger B, Tichy WF, Votta LG (2001) A controlled experiment in maintenance comparing design patterns to simpler solutions. IEEE Trans Softw Eng 27(12):1134–1144

  • R Development Core Team (2007) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, Last retrieved 2008-01-06

  • Rolia JA, Sevcik KC (1995) The method of layers. IEEE Trans Softw Eng 21(8):689–700

  • Smith CU, Williams LG (2002) Performance solutions: a practical guide to creating responsive, scalable software. Addison-Wesley

  • Williams LG, Smith CU (2002) PASASM: a method for the performance assessment of software architectures. In: Proceedings of the 3rd international workshop on software and performance (WOSP’02). ACM, New York, pp 179–189

  • Williams LG, Smith CU (2003) Making the business case for software performance engineering. In: Proceedings of the 29th international computer measurement group conference, December 7–12, 2003, Dallas, Texas, USA. Computer Measurement Group, pp 349–358

  • Wohlin C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A (2000) Experimentation in software engineering: an introduction. Kluwer, Norwell

Acknowledgements

We would like to thank Steffen Becker, Walter Tichy, Wilhelm Hasselbring, Viktoria Firus, Klaus Krogmann, and Michael Kuperberg for their valuable additions to this work. Furthermore, we thank all students who participated in our studies.

Author information

Corresponding author

Correspondence to Anne Martens.

Additional information

Guest Editors: Ali Babar, Arie van Deursen, Patricia Lago

Cite this article

Martens, A., Koziolek, H., Prechelt, L. et al. From monolithic to component-based performance evaluation of software architectures. Empir Software Eng 16, 587–622 (2011). https://doi.org/10.1007/s10664-010-9142-8
