
An empirical comparison of model-based and capture and replay approaches for performance testing


Abstract

A variety of testing tools has been developed to support and automate software testing activities. These tools may rely on different techniques, such as Model-based Testing (MBT) or Capture and Replay (CR). MBT automatically generates testing artifacts from software models. One of its main benefits is that models are easier to maintain than code; hence, using models as the source for automatic script generation is likely to require less effort and to introduce fewer faults. CR-based tools, in contrast, record the user's interaction with the System Under Test (SUT) and then play back the recorded test. This paper reports on the design and execution of an experimental study that evaluates the effort of using MBT and CR-based tools to generate performance test scripts. We apply an MBT approach and a CR approach for the purpose of evaluating the effort to generate performance test scripts and scenarios, from the perspective of performance testers and performance test engineers, in the context of undergraduate, M.Sc., and Ph.D. students as well as professional performance testers and test engineers. Our results indicate that, for simple testing tasks, the effort of using a CR-based tool was lower than that of using an MBT tool; however, as the complexity or size of the testing tasks increased, the advantage of using MBT increased significantly.
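To make the contrast concrete, the following sketch (in Python, with hypothetical names and a hypothetical SUT address; it does not reproduce the tools used in this study) shows the two ways a performance test script can be obtained: a CR-style script replays a recorded request sequence verbatim, while an MBT-style generator derives an equivalent sequence by traversing a small model of the SUT.

    # Minimal sketch contrasting CR and MBT script generation. All names
    # (recorded_steps, model, generate_script) and the SUT address are
    # hypothetical; they only illustrate the idea described in the abstract.
    import requests

    BASE = "http://sut.example.com"

    # Capture and Replay: the recorded interaction is replayed verbatim.
    recorded_steps = [
        ("GET", "/login"),
        ("POST", "/login"),
        ("GET", "/search?item=book"),
    ]

    def replay(steps):
        for method, path in steps:
            requests.request(method, BASE + path)

    # Model-based Testing: the script is generated from a model. Here the
    # "model" is a tiny transition table: state -> (method, path, next state).
    model = {
        "start": ("GET", "/login", "logged_out"),
        "logged_out": ("POST", "/login", "logged_in"),
        "logged_in": ("GET", "/search?item=book", "done"),
    }

    def generate_script(model, state="start"):
        # Walk the model and emit the request sequence it describes.
        steps = []
        while state in model:
            method, path, state = model[state]
            steps.append((method, path))
        return steps

    # The generated script is executed exactly like the recorded one:
    # replay(generate_script(model))

The comparison studied in the paper concerns the effort to produce and maintain the recorded sequence versus the model, not the replay step itself.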


Notes

  1. Basically, the correlation feature of a testing tool saves a changing value, e.g. a session ID, into a parameter. Thus, when the tool starts the virtual user emulation, it does not reuse the recorded ID value; instead, it draws IDs from a list provided by a test data source (a sketch illustrating this idea follows these notes).

  2. HTTP method, e.g., GET or POST

  3. www.pucrs.br

  4. www.senacrs.com.br/faculdadesenacpoa

  5. A full report on the subjects' background is available at www.cepes.pucrs.br/experiment

  6. One of the authors is a test manager at the company that the subjects come from.

  7. A sample of the models and scripts designed when performing the experiment’s tasks can be found in: http://www.cepes.pucrs.br/experiment

  8. The activities described in the previous sentence had to be performed by the experiment subjects when executing Task 1 in all sessions.

  9. The activity described in the previous sentence had to be performed by the experiment subjects when executing Task 2 in all sessions.

  10. Our survey did not assess intellectual abilities but rather the subjects' knowledge of MBT, UML and software testing.
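As a rough illustration of the correlation feature described in Note 1 (Python, with hypothetical names; it is not the API of any particular testing tool), the session ID captured at recording time is parameterized and replaced during virtual user emulation by IDs read from a test data source:

    # Hypothetical sketch of correlation: the value captured at recording
    # time is saved as a parameter, and each virtual user receives an ID
    # taken from a test data source instead of the recorded one.
    import csv

    RECORDED_REQUEST = "GET /cart?session_id={session_id} HTTP/1.1"

    def load_session_ids(path):
        # Test data source: a CSV file with one session ID per row.
        with open(path, newline="") as f:
            return [row[0] for row in csv.reader(f)]

    def send(request):
        print(request)  # placeholder: a real tool would issue the HTTP request

    def emulate_virtual_users(data_file):
        for session_id in load_session_ids(data_file):
            # The recorded ID is not reused; each virtual user gets its own.
            send(RECORDED_REQUEST.format(session_id=session_id))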


Acknowledgments

Avelino Zorzo, Elder Rodrigues, Flavio Oliveira, Leandro Costa and Maicon Bernardino are researchers from the Center of Competence in Performance Testing at PUCRS, a partnership between Dell Computers of Brazil Ltd. and PUCRS. This study was also partially supported by the project PROCAD/CAPES 191/2007, a partnership between PUCRS, UEM and USP. The authors would also like to thank Dr. Dorival Leao Pinto Junior (Department of Applied Mathematics and Statistics, ICMC-USP) for his help with the application of the hypothesis tests.

Author information


Corresponding author

Correspondence to Elder Macedo Rodrigues.

Additional information

Communicated by: Atif Memon


About this article


Cite this article

Macedo Rodrigues, E., Moreira de Oliveira, F., Teodoro Costa, L. et al. An empirical comparison of model-based and capture and replay approaches for performance testing. Empir Software Eng 20, 1831–1860 (2015). https://doi.org/10.1007/s10664-014-9337-5
