
Building Optimal Operation Policies for Dam Management Using Factored Markov Decision Processes

  • Conference paper
Advances in Artificial Intelligence and Its Applications (MICAI 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9414)


Abstract

In this paper, we present the conceptual model of a real-world application of factored Markov decision processes (MDPs) to dam management. The goal is to demonstrate that the construction of operation policies can be efficiently automated by modelling the problem compactly as a sequential decision problem that can be solved with stochastic dynamic programming. We explain the problem domain and analyse the resulting value and policy functions. We also discuss the issues that arise when the conceptual model is extended into a real-world application.
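To illustrate the kind of computation the abstract refers to, the sketch below solves a toy dam-management MDP by value iteration (the basic stochastic dynamic programming scheme). The discretization, transition probabilities, and rewards are illustrative placeholders, not the parameters of the paper's model: reservoir level is coarsened to three states and the action is a discharge setting.

```python
import numpy as np

# Hypothetical discretization: reservoir level in {low, medium, high};
# actions are gate settings {close gates, partial release, full release}.
# All numbers below are made up for illustration only.
N_STATES, N_ACTIONS = 3, 3
GAMMA = 0.95  # discount factor

# P[a, s, s']: probability of moving from level s to s' under action a
P = np.array([
    # a = 0: close gates -> inflow tends to raise the level
    [[0.2, 0.7, 0.1], [0.0, 0.3, 0.7], [0.0, 0.1, 0.9]],
    # a = 1: partial release -> level roughly maintained
    [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.4, 0.5]],
    # a = 2: full release -> level tends to drop
    [[0.9, 0.1, 0.0], [0.6, 0.3, 0.1], [0.2, 0.6, 0.2]],
])

# R[s, a]: immediate reward, e.g. power generated minus flood/shortage penalty
R = np.array([
    [0.0, -1.0, -2.0],   # low level: releasing water risks shortage
    [1.0,  2.0,  1.0],   # medium level: partial release generates power safely
    [-3.0, 1.0,  3.0],   # high level: holding water back risks flooding
])

def value_iteration(P, R, gamma, tol=1e-8):
    """Compute the optimal value function and a greedy policy by
    iterating the Bellman optimality operator to a fixed point."""
    V = np.zeros(R.shape[0])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, GAMMA)
print("optimal values per level:", np.round(V, 2))
print("optimal action index per level:", policy)
```

A factored MDP as used in the paper would replace the flat transition matrix `P` with a compact representation (e.g. a dynamic Bayesian network over state variables); the Bellman backup itself is the same.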



Acknowledgments

The authors wish to thank the Control, Electronics and Communication Department and the Enabling Technologies Division of the Electrical Research Institute, Mexico, for the financial support to perform this research.

Author information

Correspondence to Alberto Reyes.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Reyes, A., Ibargüengoytia, P.H., Romero, I., Pech, D., Borunda, M. (2015). Building Optimal Operation Policies for Dam Management Using Factored Markov Decision Processes. In: Pichardo Lagunas, O., Herrera Alcántara, O., Arroyo Figueroa, G. (eds) Advances in Artificial Intelligence and Its Applications. MICAI 2015. Lecture Notes in Computer Science, vol 9414. Springer, Cham. https://doi.org/10.1007/978-3-319-27101-9_36


  • DOI: https://doi.org/10.1007/978-3-319-27101-9_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-27100-2

  • Online ISBN: 978-3-319-27101-9

  • eBook Packages: Computer Science, Computer Science (R0)
