ABSTRACT
[Context] A variety of testing tools have been developed to support and automate software performance testing activities. These tools may use different techniques, such as Model-Based Testing (MBT) or Capture and Replay (CR). [Goal] For software companies, it is important to evaluate such tools with respect to the effort required to create test artifacts with them; despite its importance, there are few empirical studies comparing performance testing tools, especially tools built on different approaches. [Method] We are conducting experimental studies to provide evidence about the effort required to use CR-based tools and MBT tools. In this paper, we present our first results, evaluating the effort (time spent) to create performance test scripts and scenarios for testing Web applications using the LoadRunner and Visual Studio CR-based tools and the PLeTsPerf MBT tool, in the context of a collaboration project between the Software Engineering Research Center at PUCRS and a technological laboratory of a global IT company. [Results] Our results indicate that, for simple testing tasks, the effort of using a CR-based tool was lower than that of using an MBT tool, but as the complexity of the testing tasks increases, the advantage of using MBT grows significantly. [Conclusions] We conclude by discussing the lessons we learned from the design, operation, and analysis of our empirical experiment.
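Whether recorded by a CR tool or generated from a model by an MBT tool, a performance test scenario ultimately encodes the same ingredients: a population of virtual users, the requests each user issues, and the think time between them. The Python sketch below is a minimal, hypothetical illustration of what such a scenario boils down to when executed; it is not the script format produced by LoadRunner, Visual Studio, or PLeTsPerf, and the URL, user count, and timing parameters are illustrative assumptions rather than values from the study.

```python
import statistics
import threading
import time
import urllib.request

# Hypothetical scenario parameters (illustrative assumptions, not from the study).
BASE_URL = "http://localhost:8080/store"  # application under test
VIRTUAL_USERS = 10                        # concurrent simulated users
REQUESTS_PER_USER = 5                     # iterations per virtual user
THINK_TIME_S = 1.0                        # pause between requests, like a real user

response_times = []
lock = threading.Lock()

def virtual_user(user_id: int) -> None:
    """One simulated user: request the page, record latency, think, repeat."""
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(BASE_URL, timeout=10) as resp:
                resp.read()
        except OSError:
            continue  # this sketch only measures successful requests
        elapsed = time.perf_counter() - start
        with lock:
            response_times.append(elapsed)
        time.sleep(THINK_TIME_S)

threads = [threading.Thread(target=virtual_user, args=(i,))
           for i in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if len(response_times) >= 2:
    print(f"requests:     {len(response_times)}")
    print(f"mean latency: {statistics.mean(response_times):.3f}s")
    # quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile.
    print(f"p95 latency:  {statistics.quantiles(response_times, n=20)[18]:.3f}s")
```

The difference between the two approaches lies in how these parameters are obtained: a CR-based tool populates them from a recorded interaction session, while an MBT tool such as PLeTsPerf derives them from an annotated model of the application's expected usage.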