DOI: 10.1145/2652524.2652587 · ESEM Conference Proceedings
research-article

Evaluating capture and replay and model-based performance testing tools: an empirical comparison

Published: 18 September 2014

ABSTRACT

[Context] A variety of testing tools have been developed to support and automate software performance testing activities. These tools may use different techniques, such as Model-Based Testing (MBT) or Capture and Replay (CR). [Goal] For software companies, it is important to evaluate such tools with respect to the effort required to create test artifacts with them; despite the importance of such evaluations, there are few empirical studies comparing performance testing tools, especially tools built on different approaches. [Method] We are conducting experimental studies to provide evidence about the effort required to use CR-based tools and MBT tools. In this paper, we present our first results, evaluating the effort (time spent) to create performance test scripts and scenarios for testing Web applications using the CR-based tools LoadRunner and Visual Studio and the MBT tool PLeTsPerf, in the context of a collaboration project between the Software Engineering Research Center at PUCRS and a technological laboratory of a global IT company. [Results] Our results indicate that, for simple testing tasks, the effort of using a CR-based tool was lower than that of using an MBT tool, but as the complexity of the testing tasks increases, the advantage of using MBT grows significantly. [Conclusions] We conclude by discussing the lessons we learned from the design, operation, and analysis of our empirical experiment.
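To make the contrast between the two approaches concrete, the sketch below (not taken from the paper, and not the actual LoadRunner, Visual Studio, or PLeTsPerf formats) illustrates in Python the kind of artifact each approach produces: a capture-and-replay script is a fixed, recorded request sequence, while a model-based tool derives request sequences from an abstract user-behavior model. All names in the sketch (STORE_MODEL, generate_script, and so on) are hypothetical.

import random

# Capture and Replay: the tester records one concrete interaction;
# the tool replays this fixed sequence under load.
recorded_script = [
    ("GET",  "/home"),
    ("GET",  "/search?q=book"),
    ("POST", "/cart/add"),
    ("POST", "/checkout"),
]

# Model-Based Testing: the tester describes user behavior abstractly,
# e.g. as a probabilistic state machine; concrete scripts are generated from it.
STORE_MODEL = {
    "home":        [("search", 0.7), ("exit", 0.3)],
    "search":      [("add_to_cart", 0.5), ("search", 0.3), ("exit", 0.2)],
    "add_to_cart": [("checkout", 0.6), ("search", 0.4)],
    "checkout":    [("exit", 1.0)],
}

REQUEST_FOR_STATE = {
    "home":        ("GET",  "/home"),
    "search":      ("GET",  "/search?q=book"),
    "add_to_cart": ("POST", "/cart/add"),
    "checkout":    ("POST", "/checkout"),
}

def generate_script(model, start="home", max_steps=20, rng=random):
    """Walk the behavior model and emit one concrete request sequence."""
    state, script = start, []
    for _ in range(max_steps):
        script.append(REQUEST_FOR_STATE[state])
        next_states, weights = zip(*model[state])
        state = rng.choices(next_states, weights=weights)[0]
        if state == "exit":
            break
    return script

if __name__ == "__main__":
    print("Replayed (recorded) script: ", recorded_script)
    print("Generated (model-based) script:", generate_script(STORE_MODEL))

In this reading, the recorded script corresponds to what a tester produces by interacting with the application once, while the model walk corresponds to what an MBT tool can regenerate automatically as the model grows; this is consistent with the paper's finding that MBT pays off as task complexity increases.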

Published in

ESEM '14: Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
September 2014, 461 pages
ISBN: 9781450327749
DOI: 10.1145/2652524

Copyright © 2014 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 18 September 2014

Qualifiers

research-article

Acceptance Rates

ESEM '14 paper acceptance rate: 23 of 123 submissions (19%). Overall acceptance rate: 130 of 594 submissions (22%).
