Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13701)

Included in the following conference series: ISoLA: International Symposium on Leveraging Applications of Formal Methods

Abstract

Software systems cannot in general be assumed proven correct before deployment. Testing is still the most common approach for demonstrating a satisfactory level of correctness. However, some errors will survive verification efforts, and it is therefore reasonable to monitor a system after deployment to determine whether it executes correctly. For both testing and post-deployment monitoring, it is desirable to be able to formalize correctness properties that can be checked against program executions; this is also referred to as runtime verification. We present a specification language and a monitoring system for checking such specifications against event streams. The front-end, written in Scala, translates a specification to C++, whereas the back-end (the monitoring engine), written in C++, interprets the generated C++ monitor on an event stream. This makes it feasible to monitor the execution of C and C++ programs online.

The research performed was carried out at Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
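
As a concrete illustration of this architecture, the following sketch shows the kind of interface a generated C++ monitor could expose to an instrumented C or C++ program for online monitoring. All names here (Event, Monitor, submit, end) are hypothetical, not the tool's actual API:

    #include <map>
    #include <string>

    // An event is a named record mapping field names to values.
    struct Event {
      std::string name;                         // e.g. "command" or "succeed"
      std::map<std::string, std::string> data;  // e.g. {{"name", "TURN"}}
    };

    // Interface a generated monitor could implement; the instrumented
    // program submits events as they occur and asks for a verdict at the end.
    class Monitor {
     public:
      virtual ~Monitor() = default;
      virtual void submit(const Event& e) = 0;  // process one event online
      virtual bool end() = 0;                   // end of trace: true iff no violation
    };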


Notes

  1. The original system was focused on log analysis, hence the name LogScope (scope as in telescope).

  2. Optimizations similar to slicing can avoid examining all states.

  3. An extension of the language can allow different types of values.

References

  1. Attard, D.P., Cassar, I., Francalanza, A., Aceto, L., Ingólfsdóttir, A.: A runtime monitoring tool for actor-based systems. In: Gay, S., Ravara, A. (eds.) Behavioural Types: from Theory to Tools, chapter 3, pp. 49–76. River Publishers (2017)

  2. Barringer, H., Rydeheard, D., Havelund, K.: Rule systems for run-time monitoring: from Eagle to RuleR. In: Sokolsky, O., Taşıran, S. (eds.) RV 2007. LNCS, vol. 4839, pp. 111–125. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-77395-5_10

  3. Barringer, H., Goldberg, A., Havelund, K., Sen, K.: Rule-based runtime verification. In: Steffen, B., Levi, G. (eds.) VMCAI 2004. LNCS, vol. 2937, pp. 44–57. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24622-0_5

  4. Barringer, H., Groce, A., Havelund, K., Smith, M.: Formal analysis of log files. J. Aerosp. Comput. Inf. Commun. 7(11), 365–390 (2010)

  5. Basin, D.A., Klaedtke, F., Marinovic, S., Zălinescu, E.: Monitoring of temporal first-order properties with aggregations. Formal Methods Syst. Design 46(3), 262–285 (2015)

  6. Colombo, C., Pace, G.J., Schneider, G.: LARVA – safer monitoring of real-time Java programs (tool paper). In: Proceedings of the 2009 Seventh IEEE International Conference on Software Engineering and Formal Methods, SEFM ’09, pp. 33–37, Washington, DC, USA, IEEE Computer Society (2009)

  7. d’Amorim, M., Havelund, K.: Event-based runtime verification of Java programs. In: Proceedings of the Third International Workshop on Dynamic Analysis, WODA ’05, pp. 1–7, New York, NY, USA, Association for Computing Machinery (2005)

  8. Dams, D., Havelund, K., Kauffman, S.: Python library for trace analysis. In: Dang, T., Stolz, V. (eds.) Runtime Verification (RV 2022), Tbilisi, Georgia, September 28–30. LNCS, Springer, Cham (2022). https://doi.org/10.1007/978-3-031-17196-3_15

  9. D’Angelo, B., et al.: LOLA: Runtime monitoring of synchronous systems. In: Proceedings of TIME 2005: the 12th International Symposium on Temporal Representation and Reasoning, pp. 166–174, IEEE (2005)

  10. Daut. https://github.com/havelund/daut (2022)

  11. Decker, N., Leucker, M., Thoma, D.: Monitoring modulo theories. Softw. Tools Technol. Transf. (STTT) 18(2), 205–225 (2016)

  12. Doorenbos, R.B.: Production Matching for Large Learning Systems. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA (1995)

  13. Forgy, C.: Rete: a fast algorithm for the many pattern/many object pattern match problem. Artif. Intell. 19, 17–37 (1982)

  14. Graphviz. https://graphviz.org (2022)

  15. Hallé, S., Villemaire, R.: Runtime enforcement of web service message contracts with data. IEEE Trans. Serv. Comput. 5(2), 192–206 (2012)

  16. Havelund, K.: Runtime verification of C programs. In: Suzuki, K., Higashino, T., Ulrich, A., Hasegawa, T. (eds.) FATES/TestCom -2008. LNCS, vol. 5047, pp. 7–22. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-68524-1_3

  17. Havelund, K.: Data automata in Scala. In: 2014 Theoretical Aspects of Software Engineering Conference, TASE 2014, Changsha, China, September 1–3, pp. 1–9. IEEE Computer Society (2014)

  18. Havelund, K.: Rule-based runtime verification revisited. Softw. Tools Technol. Transf. (STTT) 17(2), 143–170 (2015)

  19. Havelund, K., Peled, D.: Runtime verification: from propositional to first-order temporal logic. In: Colombo, C., Leucker, M. (eds.) RV 2018. LNCS, vol. 11237, pp. 90–112. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-03769-7_7

  20. Havelund, K., Peled, D.: An extension of LTL with rules and its application to runtime verification. In: Finkbeiner, B., Mariani, L. (eds.) RV 2019. LNCS, vol. 11757, pp. 239–255. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32079-9_14

  21. Kim, M., Kannan, S., Lee, I., Sokolsky, O.: Java-MaC: a run-time assurance tool for Java. In: Proceedings of the 1st International Workshop on Runtime Verification (RV'01), vol. 55(2) of ENTCS. Elsevier (2001)

  22. LogFire. https://github.com/havelund/logfire

  23. LogScope in Python. https://github.com/havelund/logscope (2022)

  24. LogScope in Scala/C++. https://github.com/logscope (2022)

  25. Meredith, P.O.N., Jin, D., Griffith, D., Chen, F., Roşu, G.: An overview of the MOP runtime verification framework. Int. J. Softw. Tools Technol. Transf. (STTT) 14, 249–289 (2011)

  26. Pike, L., Wegmann, N., Niller, S., Goodloe, A.: Copilot: Monitoring embedded systems. Innov. Syst. Softw. Eng. 9(4), 235–255 (2013)

  27. PyContract. https://github.com/pyrv/pycontract (2022)

  28. Reger, G., Cruz, H.C., Rydeheard, D.: MarQ: monitoring at runtime with QEA. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 596–610. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46681-0_55

Author information

Correspondence to Klaus Havelund.

A Visualization of Monitors

Textual monitors are automatically visualized using Graphviz's DOT format [14]. This appendix shows the visualization of the textual monitors presented in Sect. 4.

Fig. 13. Monitor M1 visualized.

Monitor M1. The monitor M1 in Fig. 2 is visualized in Fig. 13. Hot states (annotated in the text with the modifier hot) are visualized as orange arrow-shaped pentagons. Orange means danger: this state has to be left eventually. Non-hot states are visualized as green rectangles; we can stay in those "forever" (terminating monitoring in such a state is ok). The initial state Command is pointed to by an arrow leaving a black point. Transitions are labelled with events (and additional conditions, as we shall see later). The color red in general indicates error. For example, a command issued in the Succeed state causes an error, symbolized with a red cross on a horizontal line.
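
To make the hot-state semantics concrete, the following hand-coded sketch mimics M1's described behavior in plain C++. It is an illustration under the assumptions stated in the comments, not the code the tool generates; the method names submit and end are hypothetical:

    #include <string>

    // Hand-coded sketch of M1: Succeed is hot, so ending the trace there
    // is a violation; a second command while in Succeed is an error.
    class M1 {
      enum class State { Command, Succeed /* hot */ };
      State state = State::Command;
      bool failed = false;

     public:
      void submit(const std::string& event) {
        if (failed) return;
        if (state == State::Command && event == "command") {
          state = State::Succeed;                     // must eventually be left
        } else if (state == State::Succeed) {
          if (event == "succeed") state = State::Command;
          else if (event == "command") failed = true; // the red cross: error
        }
      }
      bool end() const {                              // verdict at end of trace
        return !failed && state != State::Succeed;    // hot state => violation
      }
    };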

Monitors M1a, M1b, and M1c. The three monitors M1a, M1b, and M1c in Fig. 3 are visualized in Fig. 14. Figure 14a, the visualization of M1a, shows how multiple target states are visualized: the transition of the Command state triggered by a command event creates a Succeed and a Command state. This is visualized with a black triangle (symbolizing a Boolean 'and', ∧) with dashed lines leading to the target states. Note how in the Succeed state, a succeed event leads to ok, which in the visualization is shown as a green dot. The visualization of monitor M1b in Fig. 14b illustrates how an always state is visualized: with an unlabelled self loop. The only difference between the visualization of this monitor and that of M1c in Fig. 14c is that the initial state in Fig. 14c has no name. A plain C++ sketch of the multiple-target-state semantics follows Fig. 14.

Fig. 14. Monitors M1a, M1b, and M1c visualized.
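
The following sketch mimics M1a's multiple target states in plain C++ by counting outstanding Succeed states. It assumes, as suggested by the description, that several states can be active at once and that every active state reacts to every event; it is an illustration, not the generated code:

    #include <string>

    // Sketch of M1a: each command spawns a hot Succeed state while the
    // Command state persists (Command => Succeed /\ Command); a succeed
    // event lets every active Succeed state take its transition to ok.
    class M1a {
      int pending = 0;  // number of active (hot) Succeed states

     public:
      void submit(const std::string& event) {
        if (event == "command") ++pending;        // spawn one more Succeed
        else if (event == "succeed") pending = 0; // every Succeed => ok (green dot)
      }
      bool end() const { return pending == 0; }   // no hot state may remain
    };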

Monitor M2. Monitor M2 in Fig. 4 is visualized in Fig. 15. The difference from the previous visualizations is that events now carry data maps, which are shown on the edges. It is also shown how bindings to fields in target state maps are created. Specifically, the transition 'command(name: x, kind: "FSW") => Succeed(c: x)' from the initial always state is shown as an edge labelled with command(cmd: x, kind: "FSW"), and below it the binding of the c field of the Succeed state (see its definition) to the x that was bound on the left of the => symbol. A plain C++ sketch of this binding follows Fig. 15.

Fig. 15. Monitor M2 visualized.
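
In plain C++, M2's binding can be mimicked by parameterizing each spawned Succeed state with the bound command name. The sketch below is hypothetical code, assuming (as for M1) that Succeed states are hot and that a succeed event discharges every Succeed state bound to the same name:

    #include <map>
    #include <set>
    #include <string>

    // Sketch of M2: command(name: x, kind: "FSW") spawns Succeed(c = x);
    // succeed(name: x) sends the matching Succeed states to ok.
    class M2 {
      std::multiset<std::string> succeeds;  // one entry per active Succeed(c)

     public:
      void submit(const std::string& name,
                  const std::map<std::string, std::string>& data) {
        if (name == "command") {
          auto kind = data.find("kind");
          auto cmd = data.find("name");
          if (kind != data.end() && kind->second == "FSW" && cmd != data.end())
            succeeds.insert(cmd->second);            // bind c := x
        } else if (name == "succeed") {
          auto cmd = data.find("name");
          if (cmd != data.end())
            succeeds.erase(cmd->second);             // all Succeed(c = x) => ok
        }
      }
      bool end() const { return succeeds.empty(); }  // hot states must be gone
    };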

Monitor M3. Monitor M3 in Fig. 5 is visualized in Fig. 16. The only new visualization concept here is that the transition from the initial always state to the error state is labelled not only with the event pattern succeed(name: x) but also with the condition pattern !Succeed(c: x) underneath. A plain C++ sketch of this check follows Fig. 16.

Fig. 16. Monitor M3 visualized.
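
The condition pattern !Succeed(c: x) can be read as a query over the currently active states: a succeed event whose name matches no active Succeed state raises an error. The sketch below illustrates just this check in plain C++ (hypothetical code, ignoring M2's kind filter for brevity):

    #include <set>
    #include <string>

    // Sketch of M3's negative condition: succeed(name: x) with no active
    // Succeed(c = x) state, i.e. !Succeed(c: x), is a violation.
    class M3 {
      std::multiset<std::string> succeeds;  // active Succeed(c) states
      bool failed = false;

     public:
      void command(const std::string& x) { succeeds.insert(x); }
      void succeed(const std::string& x) {
        if (succeeds.count(x) == 0) failed = true;  // !Succeed(c: x) holds: error
        else succeeds.erase(x);                     // matching states => ok
      }
      bool end() const { return !failed; }
    };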

Monitor M4. Monitor M4 in Fig. 6 is visualized in Fig. 17. Recall that the color scheme lets one read the checked-for violations directly off the graph: orange means that terminating here is a violation, and red marks a violation that has occurred.

Fig. 17. Monitor M4 visualized.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Havelund, K. (2022). Specification-Based Monitoring in C++. In: Margaria, T., Steffen, B. (eds) Leveraging Applications of Formal Methods, Verification and Validation. Verification Principles. ISoLA 2022. Lecture Notes in Computer Science, vol 13701. Springer, Cham. https://doi.org/10.1007/978-3-031-19849-6_5

  • DOI: https://doi.org/10.1007/978-3-031-19849-6_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19848-9

  • Online ISBN: 978-3-031-19849-6
