
Inferring Performance from Code: A Review

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12476)

Abstract

Performance is an important non-functional property of software with a direct impact on the end-user's perceived quality of service, since it concerns metrics such as response time, throughput, and utilization. Performance-by-construction can be defined as a development paradigm in which executable code carries some kind of guarantee on its performance, as opposed to current software-engineering practice, where performance concerns are deferred to later stages of the development process and addressed by profiling or testing. In this paper we argue that performance-by-construction techniques need to be probabilistic in nature, leveraging accurate models for the analysis. In support of this idea, we carry out a literature review of methods that can serve as the basis of performance-by-construction development approaches. There has been significant research, reviewed elsewhere, on performance models derived from high-level software specifications such as UML diagrams or other domain-specific languages. This review instead focuses on methods where performance information is extracted directly from the code, a line of research that has arguably been less explored by the software performance engineering community.
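To make the probabilistic flavour of the argument concrete, the following is a minimal illustrative sketch, not taken from the paper: the analyzed function and the input model (hit/miss probabilities) are hypothetical. It shows, in the simplest possible form, how a performance *distribution*, rather than a single worst-case number, can be estimated directly from code by sampling inputs under an assumed usage profile and counting a cost measure per execution.

```python
import random
from collections import Counter

def lookup(table, key):
    """Hypothetical program under analysis: linear search.
    The 'cost' is the number of comparisons performed, a simple
    stand-in for execution time."""
    cost = 0
    for k in table:
        cost += 1
        if k == key:
            return cost
    return cost

def sample_cost_distribution(n_samples=10000, table_size=8):
    """Monte Carlo estimate of the cost distribution under an
    assumed input model: with probability 1/2 the key hits a
    uniformly random slot, otherwise it misses entirely. The
    input model is part of the usage profile, not the code."""
    table = list(range(table_size))
    counts = Counter()
    for _ in range(n_samples):
        if random.random() < 0.5:
            key = random.randrange(table_size)   # hit
        else:
            key = table_size                     # miss
        counts[lookup(table, key)] += 1
    total = sum(counts.values())
    return {cost: c / total for cost, c in sorted(counts.items())}

dist = sample_cost_distribution()
mean = sum(cost * p for cost, p in dist.items())
```

Under this input model the estimated mean should settle near 6.25 comparisons (0.5 times the average hit cost of 4.5, plus 0.5 times the miss cost of 8), and the full distribution exposes the spread between best and worst case, which a single average would hide.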

This work has been partially supported by the Italian Ministry for Education under grant SEDUCE no. 2017TWRCNB.



Author information

Correspondence to Emilio Incerto, Annalisa Napolitano, or Mirco Tribastone.


Electronic supplementary material

Supplementary material 1 (PDF, 705 KB)

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Incerto, E., Napolitano, A., Tribastone, M. (2020). Inferring Performance from Code: A Review. In: Margaria, T., Steffen, B. (eds.) Leveraging Applications of Formal Methods, Verification and Validation: Verification Principles. ISoLA 2020. Lecture Notes in Computer Science, vol. 12476. Springer, Cham. https://doi.org/10.1007/978-3-030-61362-4_17


  • DOI: https://doi.org/10.1007/978-3-030-61362-4_17


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61361-7

  • Online ISBN: 978-3-030-61362-4

  • eBook Packages: Computer Science
