DOI: 10.1145/2188286.2188345

Capturing performance assumptions using stochastic performance logic

Published: 22 April 2012

Abstract

Compared to functional unit testing, automated performance testing is difficult, partially because correctness criteria are more difficult to express for performance than for functionality. Where existing approaches rely on absolute bounds on the execution time, we aim to express assertions on code performance in relative, hardware-independent terms. To this end, we introduce Stochastic Performance Logic (SPL), which allows making statements about relative method performance. Since SPL interpretation is based on statistical tests applied to performance measurements, it allows (for a special class of formulas) calculating the minimum probability at which a particular SPL formula holds. We prove basic properties of the logic and present an algorithm for SAT-solver-guided evaluation of SPL formulas, which allows optimizing the number of performance measurements that need to be made. Finally, we propose integration of SPL formulas with Java code using higher-level performance annotations, for performance testing and documentation purposes.
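The integration of SPL formulas with Java code via performance annotations, as proposed in the abstract, could look roughly like the sketch below. The annotation name `PerformanceAssumption`, its attributes, and the example method are illustrative assumptions for exposition only, not the paper's actual API.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Hypothetical sketch of an SPL-style performance annotation: instead of
// an absolute time bound, the assertion relates one method's performance
// to another's, which keeps it hardware-independent.
public class SplSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface PerformanceAssumption {
        String slowerThan();          // baseline method this one is compared against
        double factor() default 1.0;  // allowed relative slowdown
    }

    // "quickSort takes at most 2x the time of jdkSort" -- a relative
    // assertion a test harness could check with a statistical test
    // over repeated measurements.
    @PerformanceAssumption(slowerThan = "jdkSort", factor = 2.0)
    static void quickSort(int[] a) { java.util.Arrays.sort(a); }

    public static void main(String[] args) throws Exception {
        // A harness would read the annotation reflectively, measure both
        // methods, and apply a statistical test; here we only read it back.
        Method m = SplSketch.class.getDeclaredMethod("quickSort", int[].class);
        PerformanceAssumption pa = m.getAnnotation(PerformanceAssumption.class);
        System.out.println(pa.slowerThan() + " " + pa.factor());
        // prints: jdkSort 2.0
    }
}
```

A real evaluator would interpret the annotation by collecting execution-time samples of both methods and applying a statistical test (such as Welch's t-test) rather than comparing single measurements.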





Published In

ICPE '12: Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering
April 2012
362 pages
ISBN:9781450312028
DOI:10.1145/2188286
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. performance testing
  2. regression benchmarking

Qualifiers

  • Research-article

Conference

ICPE'12

Acceptance Rates

Overall Acceptance Rate 252 of 851 submissions, 30%


