
Using language workbenches and domain-specific languages for safety-critical software development

  • Regular Paper
  • Published in: Software & Systems Modeling

Abstract

Language workbenches support the efficient creation, integration, and use of domain-specific languages. Typically, they execute models by generating code in a programming language. This can lead to increased productivity and higher quality. However, in safety-/mission-critical environments, generated code may not be considered trustworthy because of the lack of trust in the generation mechanisms. This makes it harder to justify the use of language workbenches in such environments. In this paper, we demonstrate an approach to using such tools in critical environments. We argue that models created with domain-specific languages are easier to validate and that the additional risk resulting from the transformation to code can be mitigated by a suitably designed transformation and verification architecture. We validate the approach with an industrial case study from the healthcare domain. We also discuss the degree to which the approach is appropriate for critical software in space, automotive, and robotics systems.



Notes

  1. http://www.esterel-technologies.com/products/scade-suite/

  2. http://jetbrains.com/mps.

  3. https://www.fda.gov/MedicalDevices/ucm085281.htm#_Toc517237938.

  4. There are DSLs that are widely used in a particular domain over years such as Cryptol [41]. In such cases, a proven-in-use argument might be feasible.

  5. Building our own program analysis tools is completely infeasible in practice; it is also not recommended, because those tools must be proven in use (or proven correct) for them to be of any practical use.

  6. Note that there might be additional code/behaviors in \(E_2\) that could be exploited maliciously. We discuss this below.

  7. http://www.ldra.com/en/software-quality-test-tools/group/by-coding-standard/misra-c-c.

  8. Other jurisdictions have other regulating bodies. But the FDA is generally considered to be the most stringent one, so it is commonly used as the benchmark.

  9. http://www.imdrf.org/docs/imdrf/final/technical/imdrf-tech-131209-samd-key-definitions-140901.pdf.

  10. https://www.fda.gov/downloads/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm073779.pdf. https://www.fda.gov/downloads/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm524904.pdf.

  11. The notion is to mitigate them to As Low As Reasonably Practicable (ALARP).

  12. In some cases, some high risks may remain; the manufacturer must then document that the risk/benefit ratio is better than that of already existing solutions. The FDA can still accept this, since there is a net benefit.

  13. https://www.fda.gov/downloads/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/CDRH/CDRHTransparency/UCM388442.pdf.

  14. http://voelter.de/data/pub/kernelf-reference.pdf.

  15. In addition to extension, the ability to remove language concepts that are not needed as part of a specific DSL is an important ingredient to making an embeddable language truly reusable.

  16. https://en.wikipedia.org/wiki/Satisfiability_modulo_theories.

  17. Many reasons contribute to this: It does not have to care about non-functional concerns, so no optimizations are involved; MPS offers convenient APIs to traverse trees; Java in general requires attention to fewer details than C++, for example, as a consequence of garbage collection; and a part of the interpreter could be reused from KernelF.

  18. Of course, as is always the case with coverage measurements, high coverage is not a guarantee for the absence of errors; for example, one cannot exhaustively test the ranges of (number) values or cases where a language structure allows for an unbounded set of programs.

  19. https://en.wikipedia.org/wiki/Gcov.

  20. https://en.wikipedia.org/wiki/Cppcheck.

  21. In conversations with people from the FDA, we have learned that static analysis will play an increasing role in their assessment of the quality of a software system. However, currently, testing and documentation are still paramount.

  22. https://clinicaltrials.gov/ct2/show/NCT02345265.

  23. https://markusvoelter.github.io/ProgrammingBasics/.

  24. http://www.ecss.nl/.

  25. http://www.autosar.org

  26. We are not allowed to mention names at this point.

  27. https://smaccmpilot.org/.

  28. Note that the languages and generators would still be DSL-specific; otherwise, we would use a fixed-language tool and thus move to case A in Fig. 2.

  29. The authors have anecdotally heard about an attempt to develop a code generator in Ada as part of a mission-critical military project; however, a simple template-expanding code generator is a long way from a full-blown language workbench.

  30. https://www.jetbrains.com/mps/concepts/.

  31. https://martinfowler.com/bliki/GivenWhenThen.html.

  32. https://cucumber.io/.

References

  1. Amrani, M., Combemale, B., Lucio, L., Selim, G.M.K., Dingel, J., Traon, Y.L., Vangheluwe, H., Cordy, J.R.: Formal verification techniques for model transformations: a tridimensional classification. J. Object Technol. 14(3), 1:1–43 (2015). https://doi.org/10.5381/jot.2015.14.3.a1


  2. Arkin, B., Stender, S., McGraw, G.: Software penetration testing. IEEE Secur. Priv. 3(1), 84–87 (2005)


  3. Beine, M., Otterbach, R., Jungmann, M.: Development of safety-critical software using automatic code generation. Technical Report, SAE Technical Paper (2004)

  4. Bettini, L.: Implementing Domain-Specific Languages with Xtext and Xtend. Packt Publishing Ltd, Birmingham (2016)


  5. Boehm, B.W., et al.: Software Engineering Economics, vol. 197. Prentice-Hall, Englewood Cliffs (1981)


  6. Broy, M., Kirstan, S., Krcmar, H., Schätz, B., Zimmermann, J.: What is the benefit of a model-based design of embedded software systems in the car industry? In: Software Design and Development: Concepts, Methodologies, Tools, and Applications, p. 310 (2013). https://doi.org/10.4018/978-1-4666-4301-7.ch017

  7. Bruckhaus, T., Madhavji, N., Janssen, I., Henshaw, J.: The impact of tools on software productivity. IEEE Softw. 13(5), 29–38 (1996)


  8. Buckl, C., Regensburger, M., Knoll, A., Schrott, G.: Models for automatic generation of safety-critical real-time systems. In: ARES 2007 Conference. IEEE (2007)

  9. Chlipala, A.: A verified compiler for an impure functional language. ACM SIGPLAN Not. 45, 93–106 (2010)


  10. Claessen, K., Hughes, J.: QuickCheck: a lightweight tool for random testing of Haskell programs. ACM SIGPLAN Not. 46(4), 53–64 (2011)


  11. Conmy, P., Paige, R.F.: Challenges when using model driven architecture in the development of safety critical software. In: 4th Intl. Workshop on Model-Based Methodologies for Pervasive and Embedded Software. IEEE (2007)

  12. Conrad, M.: Verification and validation according to iso 26262: a workflow to facilitate the development of high-integrity software. In: ERTS2 Conference 2012

  13. Cousot, P., Cousot, R., Feret, J., Mauborgne, L., Miné, A., Monniaux, D., Rival, X.: The astrée analyzer. In: Esop, vol. 5, pp. 21–30. Springer (2005)

  14. Cuoq, P., Kirchner, F., Kosmatov, N., Prevosto, V., Signoles, J., Yakobowski, B.: Frama-c. In: International Conference on Software Engineering and Formal Methods. Springer (2012)

  15. Dahlweid, M., Moskal, M., Santen, T., Tobies, S., Schulte, W.: Vcc: Contract-based modular verification of concurrent c. In: ICSE Companion (2009)

  16. Dormoy, F.-X.: Scade 6: a model based solution for safety critical software development. In: Proceedings of the 4th European Congress on Embedded Real Time Software (ERTS’08), pp. 1–9 (2008)

  17. Erdweg, S., Van Der Storm, T., Völter, M., Boersma, M., Bosman, R., Cook, W. R., Gerritsen, A., Hulshout, A., Kelly, S., Loh, A., et al.: The state of the art in language workbenches. In: International Conference on Software Language Engineering, pp. 197–217. Springer (2013)

  18. Eysholdt, M.: Executable specifications for xtext. Website (2014). http://www.xpect-tests.org/

  19. Florence, S.P., Fetscher, B., Flatt, M., Temps, W.H., Kiguradze, T., West, D.P., Niznik, C., Yarnold, P.R., Findler, R.B., Belknap, S.M.: Pop-pl: a patient-oriented prescription programming language. ACM SIGPLAN Not. 51, 131–140 (2015)


  20. Görke, S., Riebeling, R., Kraus, F., Reichel, R.: Flexible platform approach for fly-by-wire systems. In: 2013 IEEE/AIAA Digital Avionics Systems Conference. IEEE (2013)

  21. Halang, W.A., Zalewski, J.: Programming languages for use in safety-related applications. Ann. Rev. Control (2003). https://doi.org/10.1016/S1367-5788(03)00005-1


  22. Hanmer, R.: Patterns for Fault Tolerant Software. Wiley, Hoboken (2013)


  23. Hart, B.: Sdr security threats in an open source world. In: Software Defined Radio Conference, pp. 3–5 (2004)

  24. Haxthausen, A.E., Peleska, J.: A domain specific language for railway control systems. In: Proc. of the 6th biennial world conference on integrated design and process technology (2002)

  25. Hermans, F., Pinzger, M., Van Deursen, A.: Domain-specific languages in practice: a user study on the success factors. In: International Conference on Model Driven Engineering Languages and Systems, pp. 423–437. Springer (2009)

  26. Hickey, P.C., Pike, L., Elliott, T., Bielman, J., Launchbury, J.: Building embedded systems with embedded dsls. ACM SIGPLAN Not. 49, 3–9 (2014)


  27. Holzmann, G.: Spin Model Checker, the: Primer and Reference Manual. Addison-Wesley Professional, Boston (2003)


  28. Huang, W.-l., Peleska, J.: Exhaustive model-based equivalence class testing. In: IFIP International Conference on Testing Software and Systems, pp. 49–64. Springer (2013)

  29. Kärnä, J., Tolvanen, J.-P., Kelly, S.: Evaluating the use of domain-specific modeling in practice. In: Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling (2009)

  30. Kats, L.C., Vermaas, R., Visser, E.: Integrated language definition testing: enabling test-driven language development. ACM SIGPLAN Not. 46, 139–154 (2011)


  31. Kieburtz, R. B., McKinney, L., Bell, J. M., Hook, J., Kotov, A., Lewis, J., Oliva, D. P., Sheard, T., Smith, I., Walton, L.: A software engineering experiment in software component generation. In: Proceedings of the 18th International Conference on Software Engineering, pp. 542–552. IEEE Computer Society (1996)

  32. Koopman, P.: Embedded Software Costs 15–40 per line of code (Update: 25–50). http://bit.ly/29QHOlo

  33. Koopman, P.: Risk areas in embedded software industry projects. In: 2010 Workshop on Embedded Systems Education. ACM (2010)

  34. Kosar, T., Mernik, M., Carver, J.C.: Program comprehension of domain-specific and general-purpose languages: comparison using a family of experiments. Empir. Softw. Eng. 17(3), 276–304 (2012)


  35. Kroening, D., Tautschnig, M.: Cbmc–c bounded model checker. In: International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 389–391. Springer (2014)

  36. Kuhn, A., Murphy, G.C., Thompson, C.A.: An exploratory study of forces and frictions affecting large-scale model-driven development. In: International Conference on Model Driven Engineering Languages and Systems, pp. 352–367. Springer (2012)

  37. Kumar, R., Myreen, M.O., Norrish, M., Owens, S.: Cakeml: a verified implementation of ml. ACM SIGPLAN Not. 49, 179–191 (2014)


  38. Lämmel, R.: Grammar testing. In: Proceedings of the 4th International Conference on Fundamental Approaches to Software Engineering (2001)

  39. Ledinot, E., Astruc, J.-M., Blanquart, J.-P., Baufreton, P., Boulanger, J.-L., Delseny, H., Gassino, J., Ladier, G., Leeman, M., Machrouh, J., et al.: A cross-domain comparison of software development assurance standards. In: Proc. of ERTS 2012

  40. Leroy, X.: Formal verification of a realistic compiler. Commun. ACM 52(7), 107–115 (2009)


  41. Lewis, J.: Cryptol: specification, implementation and verification of high-grade cryptographic applications. In: Proceedings of the 2007 ACM workshop on Formal methods in security engineering, pp. 41–41. ACM (2007)

  42. Liebel, G., Marko, N., Tichy, M., Leitner, A., Hansson, J.: Assessing the state-of-practice of model-based engineering in the embedded systems domain. In: International Conference on Model Driven Engineering Languages and Systems, pp. 166–182. Springer (2014)

  43. Liggesmeyer, P., Trapp, M.: Trends in embedded software engineering. IEEE Softw. 26(3), 19–25 (2009)


  44. Lúcio, L., Barroca, B., Amaral, V.: A technique for automatic validation of model transformations. In: MODELS 2010. Springer (2010)

  45. Méry, D., Schätz, B., Wassyng, A.: The pacemaker challenge: developing certifiable medical devices (dagstuhl seminar 14062). In: Dagstuhl Reports, vol. 4. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik (2014)

  46. Michailidis, A., Spieth, U., Ringler, T., Hedenetz, B., Kowalewski, S.: Test front loading in early stages of automotive software development based on autosar. In: DATE 2010. IEEE

  47. Motor Industry Software Reliability Association: MISRA C:2012: Guidelines for the Use of the C Language in Critical Systems. Motor Industry Research Association (2013)

  48. Molotnikov, Z., Völter, M., Ratiu, D.: Automated domain-specific c verification with mbeddr. In: Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering, pp. 539–550. ACM (2014)

  49. Munier, P.: Polyspace®. Industrial Use of Formal Methods: Formal Verification, pp. 123–153 (2012). https://www.mathworks.com/products/polyspace.html. Accessed 10 Apr 2018

  50. Myers, G .J.: Software Reliability. Wiley, Hoboken (1976)


  51. Myers, G.J.: A controlled experiment in program testing and code walkthroughs/inspections. Commun. ACM 21(9), 760–768 (1978)


  52. Nguyen-Tuong, A., Guarnieri, S., Greene, D., Shirley, J., Evans, D.: Automatically hardening web applications using precise tainting. In: IFIP International Information Security Conference. Springer, (2005)

  53. Pajic, M., Jiang, Z., Lee, I., Sokolsky, O., Mangharam, R.: Safety-critical medical device development using the upp2sf model translation tool. ACM Trans. Embed. Comput. Syst. (TECS) 13(4s), 127 (2014)


  54. Ratiu, D., Voelter, M.: Automated testing of DSL implementations. In: 11th IEEE/ACM International Workshop on Automation of Software Test (AST 2016) (2016)

  55. Ratiu, D., Schaetz, B., Voelter, M., Kolb, B.: Language engineering as an enabler for incrementally defined formal analyses. In: Proceedings of the First International Workshop on Formal Methods in Software Engineering: Rigorous and Agile Approaches, pp. 9–15. IEEE Press (2012)

  56. Ratiu, D., Zeller, M., Killian, L.: Safety.lab: model-based domain specific tooling for safety argumentation. In: International Conference on Computer Safety, Reliability, and Security, pp. 72–82. Springer (2014)

  57. Réveillère, L., Mérillon, F., Consel, C., Marlet, R., Muller, G.: A dsl approach to improve productivity and safety in device drivers development. In: ASE 2000. IEEE

  58. Santhanam, V.: The anatomy of an faa-qualifiable ada subset compiler. In: ACM SIGAda Ada Letters, vol. 23, pp. 40–43. ACM (2002)

  59. Svendsen, A., Olsen, G. K., Endresen, J., Moen, T., Carlson, E., Alme, K.-J., Haugen, Ø.: The future of train signaling. In: International Conference on Model Driven Engineering Languages and Systems, pp. 128–142. Springer (2008)

  60. Tolvanen, J.-P., Djukić, V., Popovic, A.: Metamodeling for medical devices: code generation, model-debugging and run-time synchronization. Procedia Comput. Sci. 63, 539–544 (2015)


  61. Van Deursen, A., Klint, P., Visser, J.: Domain-specific languages: an annotated bibliography. ACM SIGPLAN Not. 35(6), 26–36 (2000)


  62. Vergu, V., Neron, P., Visser, E.: Dynsem: A dsl for dynamic semantics specification. Technical Report, Delft University of Technology, Software Engineering Research Group (2015)

  63. Visser, E., Wachsmuth, G., Tolmach, A., Neron, P., Vergu, V., Passalaqua, A., Konat, G.: A language designer’s workbench: a one-stop-shop for implementation and verification of language designs. In: Proc. of the 2014 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. ACM (2014)

  64. Voelter, M.: Language and ide modularization and composition with mps. In: Generative and Transformational Techniques in Software Engineering IV, pp. 383–430. Springer (2013)

  65. Voelter, M.: Generic Tools, Specific Languages. TU Delft Delft University of Technology, Delft (2014)


  66. Voelter, M., Lisson, S.: Supporting diverse notations in MPS’ Projectional Editor. GEMOC Workshop

  67. Voelter, M., Molotnikov, Z., Kolb, B.: Towards improving software security using language engineering and mbeddr c. In: Proceeding of the Workshop on Domain-Specific Modeling 2015, pp. 55–62. Pittsburgh, PA, USA, 27–27 October 2015

  68. Voelter, M., Ratiu, D., Kolb, B., Schaetz, B.: mbeddr: Instantiating a language workbench in the embedded software domain. Autom. Softw. Eng. 20(3), 339–390 (2013)


  69. Voelter, M., Ratiu, D., Tomassetti, F.: Requirements as first-class citizens: integrating requirements closely with implementation artifacts. In: ACESMB@ MoDELS (2013)

  70. Voelter, M., Deursen, A. v., Kolb, B., Eberle, S.: Using C language extensions for developing embedded software: a case study In: Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, pp. 655–674, Pittsburgh, PA, USA, 25–30 October 2015

  71. Voelter, M., van Deursen, A., Kolb, B., Eberle, S.: Using c language extensions for developing embedded software: a case study. In: OOPSLA 2015 (2015)

  72. Voelter, M., Kolb, B., Szabó, T., Ratiu, D., van Deursen, A.: Lessons learned from developing mbeddr: a case study in language engineering with mps. Softw. Syst. Model., pp. 1–46 (2017). https://doi.org/10.1007/s10270-016-0575-4

  73. Voelter, M., Szabó, T., Engelmann, B.: An Overview of Program Analysis using Formal Methods. Self-published (2017). http://voelter.de/data/books/introToFormalMethodsAndDSLs-1.0.pdf

  74. Wallace, M.: Modular architectural representation and analysis of fault propagation and transformation. Electron. Notes Theor. Comput. Sci. 141(3), 53–71 (2005)


  75. Weiser, M., Gannon, J.D., McMullin, P.R.: Comparison of structural test coverage metrics. IEEE Softw. 2(2), 80 (1985)


  76. Whalen, M.W., Heimdahl, M.P.E.: An approach to automatic code generation for safety-critical systems. In: 14th IEEE International Conference on Automated Software Engineering, 1999, pp 315–318. IEEE (1999)

  77. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006)


  78. Wortmann, A., Beet, M.: Domain specific languages for efficient satellite control software development. In: DASIA 2016, vol 736 (2016)

  79. Wu, H., Gray, J.G., Mernik, M.: Unit testing for domain-specific languages. In: Domain-Specific Languages, IFIP TC 2 Working Conference, DSL 2009, Oxford, UK, July 15-17, 2009, Proceedings, pp. 125–147 (2009)


Acknowledgements

The authors would like to thank the team at Voluntis and itemis who built the system that underlies the case study. These include Wladimir Safonov, Jürgen Haug, Sergej Koščejev, Alexis Archambault, Nikhil Khandelwal. We would also like to thank Richard Paige and Sebastian Zarnekow for their feedback on drafts of the paper.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Markus Voelter.

Additional information

Communicated by Dr Perry Alexander.

The PLUTO languages

The exact nature of the DSLs used in PLUTO is not relevant to the contributions of this paper. However, for completeness, we provide an overview of the DSLs here. Note that a discussion of the implementation of the PLUTO languages with MPS is beyond the scope of this paper; we refer the reader to the MPS tutorials (see Note 30) or to [65].

Main algorithm The main algorithm controls the messages sent to the user and the user's replies, as well as the timing of those messages and prompts. It also makes high-level decisions about the execution of the algorithm. It is essentially a hierarchical state machine. For complex decisions, it calls into the decision support sublanguage.
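The structure described above can be pictured as a timed, message-driven state machine. The following is a minimal sketch of that idea; the class, state names, and replies are invented for illustration and are not the actual PLUTO notation:

```python
# Minimal sketch of a timed, message-driven state machine: states, prompts
# scheduled for later delivery, and transitions on user replies. All names
# are illustrative, not the PLUTO DSL.
class Algorithm:
    def __init__(self):
        self.state = "awaitBaseline"
        self.pending = []  # (due_time, message) pairs

    def schedule(self, due_time, message):
        self.pending.append((due_time, message))

    def tick(self, now):
        """Return all messages whose delivery time has come."""
        due = [m for t, m in self.pending if t <= now]
        self.pending = [(t, m) for t, m in self.pending if t > now]
        return due

    def on_reply(self, reply):
        # High-level decisions would delegate to decision-support functions.
        if self.state == "awaitBaseline":
            self.state = "monitor" if reply == "ok" else "escalate"

algo = Algorithm()
algo.schedule(8, "How do you feel today?")
assert algo.tick(9) == ["How do you feel today?"]
algo.on_reply("ok")
assert algo.state == "monitor"
```

The hierarchical aspect (states nested in states) and the delegation to decision trees and tables are omitted here for brevity.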

Fig. 8 A decision tree; the green/up edges represent yes, the red/down edges represent no

Fig. 9 A decision table that specifically works on ranges of values. Note the compact syntax for range representation

Decision support The decision support abstractions can, at a high level, all be seen as functions: based on a list of arguments, the function returns one or more values. Plain functions are available for arithmetic calculations. However, medical decisions typically depend on the interaction of several criteria. To make them more readable (and thus easier to validate), they are often represented as decision trees (Fig. 8) or decision tables. A particular kind of decision table splits two values into ranges and returns a result based on these ranges. Figure 9 shows a table that returns a score based on two such ranges; scores represent standardized severities or risks that are then used in the algorithm. The number types with ranges, and their static checking (see Fig. 10), are also an important ingredient for representing the medical domain correctly.
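The semantics of such a range-based decision table can be sketched as a lookup over interval pairs. The ranges and scores below are invented for illustration; the PLUTO notation is the tabular form shown in Fig. 9:

```python
# Sketch of a decision table over ranges of two values: each row pairs two
# half-open intervals with a score. Ranges and scores are illustrative.
TABLE = [
    # ((lo1, hi1), (lo2, hi2), score)
    ((0, 10), (0, 5), 0),
    ((0, 10), (5, 20), 1),
    ((10, 50), (0, 5), 2),
    ((10, 50), (5, 20), 3),
]

def lookup(v1, v2):
    for (lo1, hi1), (lo2, hi2), score in TABLE:
        if lo1 <= v1 < hi1 and lo2 <= v2 < hi2:
            return score
    raise ValueError("values outside table ranges")

assert lookup(3, 2) == 0
assert lookup(12, 7) == 3
```

A real implementation would additionally check, statically, that the rows cover the cross product of the input ranges without gaps or overlaps.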

Fig. 10 Numbers are specified with a range and a precision. The type system checks number ranges and precisions even for simple computations with such values; the figure shows an error resulting from invalid ranges

Testing Testing is an important ingredient of the PLUTO languages. For testing functions and function-like abstractions, regular JUnit-style function tests are supported; Fig. 11 shows an example. The first of the tests in Fig. 11 tests a function with one argument, the second one passes an argument list, and the last one shows how complex data structures (in this case, a patient's replies to a questionnaire) are passed to the test. The table notations for testing based on equivalence partitions are shown in Fig. 12.
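In spirit, such a function test supplies arguments (possibly structured, like questionnaire answers) and checks the returned value. A hedged sketch, with an invented stand-in for a decision-support function:

```python
# Sketch of a JUnit-style function test: call a decision-support function
# with structured arguments and check the expected value. The function
# `severity` and the answer keys are illustrative.
def severity(answers):
    # Trivial stand-in: count the "yes" replies in a questionnaire.
    return sum(1 for a in answers.values() if a == "yes")

def test_severity():
    answers = {"pain": "yes", "fever": "no", "nausea": "yes"}
    assert severity(answers) == 2

test_severity()
```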

Scenario tests (Fig. 13) are more involved because they take into account the execution of the algorithm over time. They are expressed in the well-known given-when-then style (see Note 31), which is, for example, also supported by the Cucumber test tool (see Note 32). To express the passage of time and occurrences at specific times, the at notation is used. The execution of the tests is based on a simulation. The number of steps and the time resolution are derived from the scenario specification.
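The execution model can be sketched as a loop over simulated time steps in which `at`-scheduled events are injected and `at`-scheduled assertions are checked. Everything below (the runner, the event shape, the state) is an invented illustration, not the PLUTO implementation:

```python
# Sketch of a given-when-then scenario over simulated time: events are
# scheduled with an "at" time, the simulation is stepped, and assertions
# are checked at specific times. All names are illustrative.
def run_scenario(events, assertions, horizon):
    state = {"messages": []}
    for now in range(horizon):
        for t, reply in events:
            if t == now:
                state["messages"].append(reply)
        for t, check in assertions:
            if t == now:
                assert check(state), f"assertion failed at t={t}"

events = [(2, "ok"), (5, "worse")]
assertions = [(3, lambda s: "ok" in s["messages"]),
              (6, lambda s: "worse" in s["messages"])]
run_scenario(events, assertions, horizon=10)
```

In PLUTO, the horizon and time resolution are not passed in explicitly but derived from the scenario specification itself.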

Fig. 11 Function tests call a function (or something function-like, such as a decision tree or table) with the arguments specified after given and then check that the expected value is returned. The answers construct represents a user's reply to a questionnaire; it can be seen as an instance of a record

Fig. 12 Equivalence partitions help test complex structures with relevant combinations of values

Fig. 13 Scenarios follow the established given-when-then style: given a couple of preconditions, when something happens, then a set of assertions must hold. Scenarios express the passage of time, as well as the points in time when something happens or is asserted, using the at notation

Simulation The purpose of the simulator is to let HCPs “play” with an algorithm. To this end, the in-IDE interpreter executes algorithms and renders a UI that resembles the one on the final phone (Fig. 14). A set of DSLs is available to style the UI; to some degree, lower-level styling support is available through JavaScript and CSS. A control panel lets users configure a particular simulation and also fast-forward in time (Fig. 15). There is also a debugger that, while relying on the same interpreter, provides a lower-level view of the execution of algorithms. It is not used by HCPs.

Fig. 14 The simulator lets users play with an algorithm. DSLs are available to style the UI. Note that, while an iPhone-style frame is shown, the simulator does not run on Apple's iOS simulator

Fig. 15 Control panel to configure the simulations

Documentation generation One important kind of output is the medical protocol, a visualization of the overall algorithm for review by HCPs or associated medical personnel not trained in the use of the PLUTO DSLs. The outputs are too large to show here; they are essentially graphviz-style flowcharts with a couple of special notational elements. It is often necessary to highlight specific aspects of the overall algorithm. To this end, the generation of the flowchart can be configured using a DSL (Fig. 16). It supports:

  • The level of detail (Deep in the example)

  • The tags that should be included and excluded. Model elements can be tagged, for example, with whether they are part of the default flow or whether they are relevant for complications in the treatment. A generated visualization might want to highlight specific tags.

  • Color mappings for tags (e.g., render the case for complications in red)

  • Human-readable labels for states or messages in order to make them more understandable for outsiders.

The reason why these configurations are represented as models (expressed in their own DSL), as opposed to just configuring a particular visualization through a dialog, is that many such configurations exist, and they must be reproduced in bulk, automatically, as the algorithm evolves.
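The effect of such a configuration can be sketched as a small transformation from an algorithm model to graphviz DOT text: tags select and color nodes, and labels rename states. The configuration keys, tag names, and the tiny model below are invented for illustration:

```python
# Sketch of generating a graphviz-style flowchart from a configuration
# model: tags include/exclude and color nodes, labels rename states.
# Configuration keys and the tiny algorithm model are illustrative.
CONFIG = {
    "detail": "Deep",
    "include_tags": {"default", "complication"},
    "colors": {"complication": "red"},
    "labels": {"s1": "Ask baseline", "s2": "Escalate"},
}

MODEL = [  # (state_id, tags)
    ("s1", {"default"}),
    ("s2", {"complication"}),
    ("s3", {"internal"}),  # excluded by the configuration above
]

def to_dot(model, cfg):
    lines = ["digraph G {"]
    for sid, tags in model:
        if not tags & cfg["include_tags"]:
            continue  # state carries no included tag
        color = next((cfg["colors"][t] for t in tags if t in cfg["colors"]),
                     "black")
        label = cfg["labels"].get(sid, sid)
        lines.append(f'  {sid} [label="{label}", color={color}];')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot(MODEL, CONFIG)
assert 's2 [label="Escalate", color=red];' in dot
assert "s3" not in dot
```

Because the configuration is itself a model, regenerating all protocol variants after an algorithm change is a batch operation rather than a sequence of manual dialog interactions.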

Fig. 16 Configuration for the generation of medical protocol flowcharts


Cite this article

Voelter, M., Kolb, B., Birken, K. et al. Using language workbenches and domain-specific languages for safety-critical software development. Softw Syst Model 18, 2507–2530 (2019). https://doi.org/10.1007/s10270-018-0679-0
