Ranking policies in discrete Markov decision processes

Abstract

An optimal probabilistic-planning algorithm solves a problem, usually modeled as a Markov decision process, by finding an optimal policy. In this paper, we study the k best policies problem: finding the k best policies of a discrete Markov decision process. The k best policies, k > 1, cannot be found directly using dynamic programming. Naïvely, finding the k-th best policy can be Turing reduced to the optimal planning problem, but the number of planning problems queried by the naïve algorithm is exponential in k. We show empirically that solving the k best policies problem via this reduction requires an unreasonable amount of time even for k = 3. We then provide two new algorithms. The first is a complete algorithm, based on our theoretical contribution that the k-th best policy differs from the i-th best policy, for some i < k, on exactly one state. The second is an approximate algorithm that skips many less useful policies. We show that both algorithms scale well, and that the approximate algorithm runs much faster while finding interesting, high-quality policies.
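
To make the one-state-difference property concrete, here is a minimal Python sketch. It is not the authors' implementation: the toy MDP, the function names, and the total-value ranking criterion are all illustrative assumptions. It ranks policies by enumerating, at each step, every unseen policy that differs from an already-ranked policy on exactly one state, which is the search space the theorem implies for the complete algorithm.

```python
# Hypothetical sketch, not the paper's code: ranking policies of a small MDP
# using the structural result that the k-th best policy differs from some
# earlier-ranked policy on exactly one state.

def policy_value(mdp, policy, gamma=0.95, iters=500):
    """Evaluate a stationary policy by iterative backups; return total value."""
    V = {s: 0.0 for s in mdp["states"]}
    for _ in range(iters):
        V = {
            s: sum(p * (mdp["reward"](s, policy[s], s2) + gamma * V[s2])
                   for s2, p in mdp["trans"](s, policy[s]))
            for s in mdp["states"]
        }
    return sum(V.values())

def single_state_deviations(mdp, policy):
    """Yield every policy that differs from `policy` on exactly one state."""
    for s in mdp["states"]:
        for a in mdp["actions"](s):
            if a != policy[s]:
                yield {**policy, s: a}

def k_best_policies(mdp, optimal_policy, k):
    """Rank k policies: each next-best policy is a one-state deviation
    of some policy already ranked."""
    ranked = [optimal_policy]
    seen = {tuple(sorted(optimal_policy.items()))}
    while len(ranked) < k:
        candidates = [d for pi in ranked
                      for d in single_state_deviations(mdp, pi)
                      if tuple(sorted(d.items())) not in seen]
        if not candidates:
            break
        best = max(candidates, key=lambda pi: policy_value(mdp, pi))
        ranked.append(best)
        seen.add(tuple(sorted(best.items())))
    return ranked

# Toy 2-state MDP (purely illustrative): "move" switches state, "stay" stays;
# reward 1 whenever the successor state is state 1.
toy = {
    "states": [0, 1],
    "actions": lambda s: ["stay", "move"],
    "trans": lambda s, a: [(s, 1.0)] if a == "stay" else [(1 - s, 1.0)],
    "reward": lambda s, a, s2: 1.0 if s2 == 1 else 0.0,
}
optimal = {0: "move", 1: "stay"}
for rank, pi in enumerate(k_best_policies(toy, optimal, k=3), start=1):
    print(rank, pi, round(policy_value(toy, pi), 2))
```

The complete algorithm in the paper is more careful about which deviations it evaluates and in what order; the sketch only illustrates how the one-state-difference theorem shrinks the candidate set, in contrast to the naïve Turing reduction, which would re-solve a full planning problem for each of exponentially many queries.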

Author information

Correspondence to Peng Dai.

About this article

Cite this article

Dai, P., Goldsmith, J. Ranking policies in discrete Markov decision processes. Ann Math Artif Intell 59, 107–123 (2010). https://doi.org/10.1007/s10472-010-9216-8
