DOI: 10.1145/2591062.2591139
Article

An adaptive bayesian approach for URL selection to test performance of large scale web-based systems

Published: 31 May 2014

Abstract

In large-scale web-based systems, performance test scripts are updated iteratively. Each script exercises multiple URLs of the system, chosen on the intuition that those URLs will expose performance bugs. This paper proposes a Bayesian approach that selects a URL for inclusion in a test script based on its probability of being time-intensive, and adaptively updates its knowledge of each URL as testing proceeds. A comparison with existing methods shows that the proposed technique performs comparably in steering the application toward intensive tasks, which helps expose performance bugs.
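The paper's exact model is not reproduced on this page, but the core idea — maintaining and adaptively updating a per-URL probability of being time-intensive — can be sketched with a Beta-Bernoulli model and Thompson sampling. The class name, the 500 ms threshold, and the update rule below are illustrative assumptions, not the authors' implementation:

```python
import random

class BayesianUrlSelector:
    """Maintains a Beta(alpha, beta) belief per URL that a request to it
    will be time-intensive, and updates that belief after each observation."""

    def __init__(self, urls, threshold_ms=500.0):
        # Response times above this threshold count as "time-intensive"
        # (the cutoff is an assumption for this sketch).
        self.threshold_ms = threshold_ms
        # Beta(1, 1) is a uniform prior for every URL.
        self.params = {url: [1.0, 1.0] for url in urls}

    def select(self):
        # Thompson sampling: draw once from each URL's posterior and
        # pick the URL with the highest draw, balancing exploration
        # of uncertain URLs against exploitation of known-slow ones.
        draws = {url: random.betavariate(a, b)
                 for url, (a, b) in self.params.items()}
        return max(draws, key=draws.get)

    def update(self, url, response_time_ms):
        # Conjugate Bayesian update: a slow response is a "success".
        a, b = self.params[url]
        if response_time_ms > self.threshold_ms:
            self.params[url] = [a + 1.0, b]
        else:
            self.params[url] = [a, b + 1.0]

    def prob_intensive(self, url):
        # Posterior mean of the Beta(alpha, beta) distribution.
        a, b = self.params[url]
        return a / (a + b)
```

In use, the selector is queried for the next URL to include in a test script and then fed the observed response time, so its knowledge of each URL sharpens as testing goes on — mirroring the adaptive update described in the abstract.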


Cited By

  • A Methodology for Generating Tests for Evaluating User-Centric Performance of Mobile Streaming Applications. Model-Driven Engineering and Software Development, pages 406–429, Feb 2019. DOI: 10.1007/978-3-030-11030-7_18
  • An end-user-centric test generation methodology for performance evaluation of mobile networked applications. Software Testing, Verification and Reliability, 29(6-7), Oct 2019. DOI: 10.1002/stvr.1713
  • Software semantics and syntax as a tool for automated test generation. International Journal of Critical Computer-Based Systems, 7(4):369–396, Dec 2018. DOI: 10.1504/IJCCBS.2017.089987

Published In

ICSE Companion 2014: Companion Proceedings of the 36th International Conference on Software Engineering
May 2014
741 pages
ISBN:9781450327688
DOI:10.1145/2591062

In-Cooperation

  • TCSE: IEEE Computer Society's Technical Council on Software Engineering
Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. Bayesian Learning
  2. Performance Testing
  3. Web-based Systems


Conference

ICSE '14

Acceptance Rates

Overall acceptance rate: 276 of 1,856 submissions (15%)
