Markov Automata on Discount!

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 10740)

Abstract

Markov automata (MA) are a rich modelling formalism for complex systems, combining compositionality with probabilistic choices and continuous stochastic timing. Model checking algorithms for different classes of properties involving probabilities and rewards have been devised for MA, opening up a spectrum of applications in dependability engineering and artificial intelligence, reaching into economics and finance. In the latter, more general contexts, several quantities of considerable importance are based on the idea of discounting reward expectations, so that the near future matters more than the far future. This paper introduces the expected discounted reward value for MA and develops effective iterative algorithms to quantify it, based on value iteration as well as policy iteration. To arrive there, we reduce the problem to the computation of expected discounted rewards and expected total rewards in Markov decision processes. This allows us to adapt well-known algorithms to the MA setting. Experimental results clearly show that our algorithms are efficient and scale to MA with hundreds of thousands of states.
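The reduction described in the abstract targets expected discounted rewards on ordinary MDPs, for which standard value iteration applies: repeatedly update V(s) ← max over actions a of [r(s, a) + γ · Σ P(s′ | s, a) · V(s′)] until the values stabilise. A minimal sketch of that MDP-level iteration, on a hypothetical two-state model (the toy MDP, discount factor and stopping threshold are illustrative assumptions, not the paper's construction or benchmarks):

```python
# Value iteration for expected discounted rewards on a plain MDP.
# The MA-to-MDP reduction from the paper is not reproduced here; this
# two-state MDP, gamma and eps are purely illustrative assumptions.

# Per state: a list of actions, each a pair (reward, {successor: probability}).
mdp = {
    0: [(1.0, {0: 0.5, 1: 0.5}),   # noisy action with immediate reward 1
        (0.0, {1: 1.0})],          # jump to state 1 for free
    1: [(2.0, {1: 1.0})],          # absorbing self-loop paying 2 per step
}

def value_iteration(mdp, gamma=0.9, eps=1e-8):
    """Iterate the Bellman optimality operator until the update is below eps."""
    v = {s: 0.0 for s in mdp}
    while True:
        new_v = {
            s: max(r + gamma * sum(p * v[t] for t, p in trans.items())
                   for r, trans in actions)
            for s, actions in mdp.items()
        }
        if max(abs(new_v[s] - v[s]) for s in mdp) < eps:
            return new_v
        v = new_v

values = value_iteration(mdp)
# State 1 earns 2 forever: 2 / (1 - 0.9) = 20.
# State 0 prefers the noisy action: V(0) = 1 + 0.9*(0.5*V(0) + 0.5*20) ≈ 18.18.
```

Policy iteration replaces the inner maximisation by alternating policy evaluation (solving a linear system for a fixed policy) with greedy policy improvement; both schemes converge for any discount factor γ < 1.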

This work was partly supported by the ERC Advanced Grant POWVER (695614) and by the Sino-German project CAP (GZ 1023).


Notes

  1.

    This can be achieved by renaming the actions; it does not affect the compositionality properties of MRA, since only closed MRA are considered in this work.

  2.

    Here \(\mathrm{dR}^{\mathrm{opt}}_{\mathcal {C},\beta }\) denotes discounted reward on a CTMDP \(\mathcal {C}\) [19].

  3.

    For details, we refer to [5].

References

  1. de Alfaro, L., Faella, M., Henzinger, T.A., Majumdar, R., Stoelinga, M.: Model checking discounted temporal properties. Theor. Comput. Sci. 345(1), 139–170 (2005)


  2. de Alfaro, L., Henzinger, T.A., Majumdar, R.: Discounting the future in systems theory. In: Baeten, J.C.M., Lenstra, J.K., Parrow, J., Woeginger, G.J. (eds.) ICALP 2003. LNCS, vol. 2719, pp. 1022–1037. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-45061-0_79


  3. Bertsekas, D.P.: Dynamic Programming and Optimal Control, 2nd edn. Athena Scientific, Belmont (2000)


  4. Boudali, H., Crouzen, P., Stoelinga, M.: A rigorous, compositional, and extensible framework for dynamic fault tree analysis. IEEE Trans. Dependable Secure Comput. 7(2), 128–143 (2010)


  5. Butkova, Y.: Discounted Markov automata. Technical Report 2018–01, ERC Grant POWVER (695614), Universität des Saarlandes, Saarland Informatics Campus, Saarbrücken, Germany (2018). http://www.powver.org/publications/TechRepRep/ERC-POWVER-TechRep-2018-01.pdf

  6. Butkova, Y., Wimmer, R., Hermanns, H.: Long-run rewards for Markov automata. In: Legay, A., Margaria, T. (eds.) TACAS 2017. LNCS, vol. 10206, pp. 188–203. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54580-5_11


  7. Dehnert, C., Junges, S., Katoen, J.-P., Volk, M.: A storm is coming: a modern probabilistic model checker. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10427, pp. 592–600. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63390-9_31


  8. Eisentraut, C., Hermanns, H., Katoen, J.-P., Zhang, L.: A semantics for every GSPN. In: Colom, J.-M., Desel, J. (eds.) PETRI NETS 2013. LNCS, vol. 7927, pp. 90–109. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38697-8_6


  9. Eisentraut, C., Hermanns, H., Zhang, L.: On probabilistic automata in continuous time. In: LICS 2010, pp. 342–351. IEEE CS (2010)


  10. Guck, D., Hatefi, H., Hermanns, H., Katoen, J.-P., Timmer, M.: Modelling, reduction and analysis of Markov automata. In: Joshi, K., Siegle, M., Stoelinga, M., D’Argenio, P.R. (eds.) QEST 2013. LNCS, vol. 8054, pp. 55–71. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40196-1_5


  11. Guck, D., Hatefi, H., Hermanns, H., Katoen, J., Timmer, M.: Analysis of timed and long-run objectives for Markov automata. Log. Meth. Comput. Sci. 10(3) (2014)


  12. Guck, D., Timmer, M., Hatefi, H., Ruijters, E., Stoelinga, M.: Modelling and analysis of Markov reward automata. In: Cassez, F., Raskin, J.-F. (eds.) ATVA 2014. LNCS, vol. 8837, pp. 168–184. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11936-6_13


  13. Hatefi, H., Hermanns, H.: Model checking algorithms for Markov automata. In: Electronic Communication of the EASST, vol. 53 (2012)


  14. Hatefi, H., Wimmer, R., Braitling, B., Fioriti, L.M.F., Becker, B., Hermanns, H.: Cost vs. time in stochastic games and Markov automata. Formal Aspects Comput. 29(4), 629–649 (2017)


  15. Hatefi Ardakani, H.: Finite horizon analysis of Markov automata. Ph.D. thesis, Universität des Saarlandes, Saarbrücken, Germany (2017)


  16. Haverkort, B.R., Hermanns, H., Katoen, J.: On the use of model checking techniques for dependability evaluation. In: SRDS 2000, pp. 228–237. IEEE CS (2000)


  17. Jansen, D.N.: More or less true DCTL for continuous-time MDPs. In: Braberman, V., Fribourg, L. (eds.) FORMATS 2013. LNCS, vol. 8053, pp. 137–151. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40229-6_10


  18. Jensen, A.: Markoff chains as an aid in the study of Markoff processes. Scand. Actuarial J. 1953, 87–91 (1953)


  19. Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1st edn. Wiley, New York (1994)


  20. Timmer, M.: SCOOP: a tool for symbolic optimisations of probabilistic processes. In: QEST 2011, pp. 149–150. IEEE CS (2011)


  21. Timmer, M., van de Pol, J., Stoelinga, M.I.A.: Confluence reduction for Markov automata. In: Braberman, V., Fribourg, L. (eds.) FORMATS 2013. LNCS, vol. 8053, pp. 243–257. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40229-6_17



Author information

Correspondence to Yuliya Butkova.


Copyright information

© 2018 Springer International Publishing AG

About this paper


Cite this paper

Butkova, Y., Wimmer, R., Hermanns, H. (2018). Markov Automata on Discount! In: German, R., Hielscher, K.-S., Krieger, U. (eds.) Measurement, Modelling and Evaluation of Computing Systems. MMB 2018. Lecture Notes in Computer Science, vol. 10740. Springer, Cham. https://doi.org/10.1007/978-3-319-74947-1_2


  • DOI: https://doi.org/10.1007/978-3-319-74947-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-74946-4

  • Online ISBN: 978-3-319-74947-1

  • eBook Packages: Computer Science (R0)
