
Efficient Experiment Selection in Automated Software Performance Evaluations

  • Conference paper
Computer Performance Engineering (EPEW 2011)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 6977)

Included in the following conference series: European Performance Engineering Workshop (EPEW)

Abstract

The performance of today’s enterprise applications is influenced by a variety of parameters across different layers. Thus, evaluating the performance of such systems is a time- and resource-consuming process. The number of possible parameter combinations and configurations requires many experiments in order to derive meaningful conclusions. Although many tools for automated performance testing are available, controlling experiments and analyzing results still require large manual effort. In this paper, we apply statistical model inference techniques, namely Kriging and MARS, in order to adaptively select experiments. Our approach automatically selects and conducts experiments based on the accuracy observed for the models inferred from the currently available data. We validated the approach using an industrial ERP scenario. The results demonstrate that we can automatically infer a prediction model with a mean relative error of 1.6% using only 18% of the measurement points in the configuration space.
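
The selection process described in the abstract can be pictured as a loop: fit a model to the measurements gathered so far, assess how accurate it is, run the next experiment where the model is weakest, and stop once the accuracy target is met. Below is a minimal Python sketch, assuming scikit-learn’s GaussianProcessRegressor as the Kriging model; the measure function, the candidate grid, and the uncertainty-based stopping rule are illustrative stand-ins, not the paper’s exact accuracy criterion.

```python
# Minimal sketch of adaptive experiment selection, assuming scikit-learn's
# GaussianProcessRegressor as the Kriging model. The measure() function and
# the candidate grid are hypothetical stand-ins for a real system under test.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def measure(x):
    # Stand-in for running one performance experiment at configuration x
    # (in the paper: measurements on an industrial ERP system).
    return np.sin(3 * x[0]) + 0.5 * x[1]

# Candidate configuration space: a small grid over two parameters.
grid = np.array([[a, b] for a in np.linspace(0, 1, 20)
                        for b in np.linspace(0, 1, 20)])

# Seed the process with a handful of initial measurements.
rng = np.random.default_rng(0)
idx = rng.choice(len(grid), size=5, replace=False)
X, y = grid[idx], np.array([measure(x) for x in grid[idx]])

for _ in range(50):
    model = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    _, std = model.predict(grid, return_std=True)
    # Stop once the model is confident everywhere (a proxy for the
    # accuracy criterion the paper derives from validation measurements).
    if std.max() < 0.01:
        break
    # Otherwise run the next experiment where the model is least certain.
    nxt = int(np.argmax(std))
    X = np.vstack([X, grid[nxt]])
    y = np.append(y, measure(grid[nxt]))

print(f"model inferred from {len(X)} of {len(grid)} configurations")
```

Selecting by predictive uncertainty is one common instantiation of such a loop; the paper instead drives selection by the prediction accuracy observed for the currently inferred model, and evaluates MARS as an alternative to Kriging.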




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Westermann, D., Krebs, R., Happe, J. (2011). Efficient Experiment Selection in Automated Software Performance Evaluations. In: Thomas, N. (ed.) Computer Performance Engineering. EPEW 2011. Lecture Notes in Computer Science, vol 6977. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24749-1_24

  • DOI: https://doi.org/10.1007/978-3-642-24749-1_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24748-4

  • Online ISBN: 978-3-642-24749-1

  • eBook Packages: Computer Science (R0)
