
Analysing the Performance of Python-Based Web Services with the VyPR Framework

Conference paper in Runtime Verification (RV 2020)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 12399)

Abstract

In this tutorial paper, we present the current state of VyPR, a framework for the performance analysis of Python-based web services. We begin by summarising our theoretical contributions which take the form of an engineer-friendly specification language; instrumentation and monitoring algorithms; and an approach for explanation of property violations. We then summarise the VyPR ecosystem, which includes an intuitive library for writing specifications and powerful tools for analysing monitoring results. We conclude with a brief description of how VyPR was used to improve our understanding of the performance of a critical web service at the CMS Experiment at CERN.
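
To make the kind of property the abstract describes concrete, here is a minimal, hypothetical Python sketch. It is not VyPR's actual specification API (which this page does not show); it merely imitates the shape of a timing property such as "every call to a service endpoint completes within a fixed bound", recording violations for later analysis. The names check_duration, process_request, and VIOLATIONS are illustrative only.

    # Hypothetical sketch of a timing property, NOT VyPR's real API.
    import functools
    import time

    VIOLATIONS = []  # records of calls that broke the timing property

    def check_duration(bound):
        """Wrap a function; record a violation when a call exceeds `bound` seconds."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                elapsed = time.perf_counter() - start
                if elapsed >= bound:
                    VIOLATIONS.append((func.__name__, elapsed))
                return result
            return wrapper
        return decorator

    @check_duration(bound=0.5)
    def process_request(payload):
        # Stand-in for a web-service endpoint's work.
        time.sleep(0.1)
        return {"status": "ok", "payload": payload}

    process_request({"id": 1})
    print(VIOLATIONS)  # empty when every call met the bound

In VyPR itself, such properties are written in its dedicated specification language and checked by automatically instrumenting the service; the decorator above only stands in for that machinery to show the shape of the property being monitored.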



Author information

Corresponding author

Correspondence to Joshua Heneage Dawes.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Dawes, J.H., Han, M., Javed, O., Reger, G., Franzoni, G., Pfeiffer, A. (2020). Analysing the Performance of Python-Based Web Services with the VyPR Framework. In: Deshmukh, J., Ničković, D. (eds.) Runtime Verification. RV 2020. Lecture Notes in Computer Science, vol. 12399. Springer, Cham. https://doi.org/10.1007/978-3-030-60508-7_4

  • DOI: https://doi.org/10.1007/978-3-030-60508-7_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-60507-0

  • Online ISBN: 978-3-030-60508-7

  • eBook Packages: Computer Science, Computer Science (R0)
