Markov Reward Models and Markov Decision Processes in Discrete and Continuous Time: Performance Evaluation and Optimization

Chapter in: Stochastic Model Checking. Rigorous Dependability Analysis Using Model Checking Techniques for Stochastic Systems (ROCKS 2012)

Abstract

State-based systems evolving in discrete or continuous time are often modelled with the help of Markov chains. In order to specify performance measures for such systems, one can define a reward structure over the Markov chain, leading to the Markov Reward Model (MRM) formalism. Typical examples of performance measures that can be defined in this way are time-based measures (e.g. mean time to failure), average energy consumption, monetary cost (e.g. for repair or maintenance), or combinations of such measures. These measures can also serve as objectives for system optimization. For that reason, an MRM can be enhanced with an additional control structure, leading to the formalism of Markov Decision Processes (MDPs).
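As a concrete illustration of the MRM idea (not an example taken from the chapter itself), the following sketch equips a small continuous-time Markov chain with a reward structure and computes the mean time to failure as an expected accumulated reward. The three-state model and all rates are invented for illustration.

```python
import numpy as np

# Hypothetical 3-state CTMC of a degradable system:
# states {0: up, 1: degraded, 2: failed}; all rates are invented.
# Q[s, s'] is the transition rate from s to s'; rows sum to zero.
Q = np.array([
    [-0.10,  0.10,  0.00],   # up -> degraded at rate 0.1
    [ 0.50, -0.70,  0.20],   # degraded -> up (repair) or failed
    [ 0.00,  0.00,  0.00],   # failed is absorbing
])

# Reward structure for mean time to failure: rate reward 1 in every
# operational state and 0 in the failed state.  The expected reward
# accumulated until absorption, m(s), solves the linear system
#     -Q_TT m = r_T
# over the transient (operational) states T = {0, 1}.
T = [0, 1]
m = np.linalg.solve(-Q[np.ix_(T, T)], np.ones(len(T)))
print("MTTF from 'up':       %.1f time units" % m[0])   # 40.0
print("MTTF from 'degraded': %.1f time units" % m[1])   # 30.0
```

The other reward-based measures named above (energy consumption, repair cost) fit the same scheme: only the reward vector on the right-hand side changes.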

In this tutorial, we first introduce the MRM formalism with different types of reward structures and explain how these can be combined into a performance measure for the system model. We provide running examples which show how some of the above-mentioned performance measures can be employed. Building on this, we extend the formalism to MDPs and introduce the concept of a policy. By exploiting the non-linear Bellman equations, the global optimization task (over the huge policy space) can be reduced to a greedy local optimization. We review several dynamic programming algorithms that can be used to solve the Bellman equations exactly. Moreover, we consider Markovian models in discrete and continuous time and study value-preserving transformations between them. We accompany the technical sections by applying the presented optimization algorithms to the example performance models.
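To make the Bellman machinery concrete, here is a minimal value iteration sketch for a discounted discrete-time MDP. This is not code from the tutorial: the three-state model, its two actions and all rewards are invented, and value iteration is only one of the dynamic programming algorithms the abstract refers to (policy iteration being another).

```python
import numpy as np

# Hypothetical 3-state MDP: states {0: up, 1: degraded, 2: failed},
# actions {0: wait, 1: repair}.  All probabilities and rewards are invented.
# P[a, s, s'] = transition probability from s to s' under action a.
P = np.array([
    [[0.90, 0.08, 0.02],    # wait
     [0.00, 0.80, 0.20],
     [0.00, 0.00, 1.00]],
    [[0.90, 0.08, 0.02],    # repair: costly, but restores the system
     [0.70, 0.25, 0.05],
     [0.60, 0.00, 0.40]],
])
# r[a, s] = one-step reward for taking action a in state s
r = np.array([
    [10.0, 4.0,  0.0],      # operating profit while waiting
    [ 5.0, 1.0, -2.0],      # profit reduced by repair cost
])

gamma = 0.95                # discount factor
V = np.zeros(3)             # initial value function

# Value iteration: repeatedly apply the Bellman optimality operator
#   (TV)(s) = max_a [ r(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ].
# T is a contraction for gamma < 1, so the iteration converges to the
# unique fixed point V*, the optimal expected discounted reward.
while True:
    Q = r + gamma * (P @ V)         # Q[a, s]: value of action a in state s
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-9:
        break
    V = V_new

policy = Q.argmax(axis=0)           # greedy policy at the fixed point
print("optimal values:", np.round(V_new, 2))
print("optimal policy (0=wait, 1=repair):", policy)
```

The greedy policy extracted at the fixed point is optimal, which is exactly the reduction from a global search over the policy space to a local one-step optimization mentioned above.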




Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Gouberman, A., Siegle, M. (2014). Markov Reward Models and Markov Decision Processes in Discrete and Continuous Time: Performance Evaluation and Optimization. In: Remke, A., Stoelinga, M. (eds) Stochastic Model Checking. Rigorous Dependability Analysis Using Model Checking Techniques for Stochastic Systems. ROCKS 2012. Lecture Notes in Computer Science, vol 8453. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-45489-3_6

  • DOI: https://doi.org/10.1007/978-3-662-45489-3_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-45488-6

  • Online ISBN: 978-3-662-45489-3
