Abstract
Since computational complexity is a central concern for many researchers, numerous procedures have appeared for accelerating iterative methods and for reducing the memory required by computing machines. For solving Markov Decision Processes (MDPs), several tests have been proposed in the literature, especially to improve the standard Value Iteration Algorithm (VIA). The Bellman optimality equation has played a central role in establishing this dynamic programming tool.
In this work, we propose a new test, based on the extension of an existing test, for eliminating non-optimal decisions from the planning. To demonstrate the scientific interest of our contribution, we compare our results with those of MacQueen and Porteus on an illustrative example. In this way, we reduce the size of the state and action spaces at each stage as soon as it is possible.
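The dynamic programming tool mentioned above rests on the Bellman optimality equation, which in standard discounted-MDP notation (the symbols below are the usual ones, assumed here rather than taken from this paper) can be written as:

```latex
v^{*}(s) \;=\; \max_{a \in A(s)} \Big\{ r(s,a) \;+\; \beta \sum_{j \in S} p(j \mid s, a)\, v^{*}(j) \Big\}, \qquad s \in S,
```

where $S$ is the state space, $A(s)$ the set of admissible actions in state $s$, $r(s,a)$ the one-step reward, $p(j \mid s,a)$ the transition probability, and $\beta \in (0,1)$ the discount factor.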
References
MacQueen, J.B.: A modified dynamic programming method for Markovian decision problems. J. Math. Anal. Appl. 14, 38–43 (1965). https://doi.org/10.1016/0022-247X(66)90060-6
MacQueen, J.B.: A test for suboptimal actions in Markovian decision problems. Oper. Res. 15, 559–561 (1967). https://doi.org/10.1287/opre.15.3.559
Porteus, E.L.: Some bounds for discounted sequential decision processes. Manag. Sci. 18, 7–11 (1971). https://doi.org/10.1287/mnsc.18.1.7
Grinold, R.C.: Elimination of suboptimal actions in Markov decision problems. Oper. Res. 21, 848–851 (1973). https://doi.org/10.1287/opre.21.3.848
Puterman, M.L., Shin, M.C.: Modified policy iteration algorithms for discounted Markov decision problems. Manag. Sci. 24, 1127–1137 (1978). https://doi.org/10.1287/mnsc.24.11.1127
White, D.J.: The determination of approximately optimal policies in Markov decision processes by the use of bounds. J. Oper. Res. Soc. 33, 253–259 (1982). https://doi.org/10.1057/jors.1982.51
Sladký, K.: Identification of optimal policies in Markov decision processes. Kybernetika 46, 558–570 (2010)
Semmouri, A., Jourhmane, M.: Markov decision processes with discounted cost: the action elimination procedures. In: ICCSRE 2nd International Conference of Computer Science and Renewable Energies, pp. 1–6. IEEE Press, Agadir, Morocco (2019). https://doi.org/10.1109/ICCSRE.2019.8807578
Semmouri, A., Jourhmane, M.: Markov decision processes with discounted costs over a finite horizon: action elimination. In: Masrour, T., Cherrafi, A., El Hassani, I. (eds.) International Conference on Artificial Intelligence & Industrial Applications, pp. 199–213. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51186-9_14
Semmouri, A., Jourhmane, M., Elbaghazaoui, B.E.: Markov decision processes with discounted costs: new test of non-optimal actions. J. Adv. Res. Dyn. Control Syst. 12(05-SPECIAL ISSUE), 608–616 (2020). https://doi.org/10.5373/JARDCS/V12SP5/20201796
Semmouri, A., Jourhmane, M., Belhallaj, Z.: Discounted Markov decision processes with fuzzy costs. Ann. Oper. Res. 295(2), 769–786 (2020). https://doi.org/10.1007/s10479-020-03783-6
Howard, R.A.: Dynamic Programming and Markov Processes. Wiley, New York (1960)
Bellman, R.E.: Dynamic Programming. Princeton University Press, Princeton (1957)
Bertsekas, D.P., Shreve, S.E.: Stochastic Optimal Control. Academic Press, New York (1978)
White, D.: Markov Decision Processes. Wiley, England (1993)
Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley, New York (1994)
Piunovskiy, A.B.: Examples in Markov Decision Processes, vol. 2. World Scientific, London (2013)
Derman, C.: Finite State Markovian Decision Processes. Academic Press, New York (1970)
Acknowledgements
The authors would like to thank the following people. Firstly, Professor Dr. C. Daoui of Sultan Moulay Slimane University, Beni Mellal, Morocco, for his help and encouragement during the period of research. Secondly, Mr. Lekbir Tansaoui, ELT teacher, co-author and textbook designer at Mokhtar Essoussi High School, Oued Zem, Morocco, for proofreading this paper. We also wish to express our sincere thanks to all members of the organizing committee of the Conference CBI’21 and to the referees for their careful reading of the manuscript, valuable suggestions and a number of helpful remarks.
Author information
Authors and Affiliations
Corresponding author
Editor information
Editors and Affiliations
Appendix
Now, we give reminders about some famous tests that have played a crucial role in the action elimination approach, notably for Markov decision problems. Applying the bounds of MacQueen [1, 2], Porteus [3], and Semmouri, Jourhmane and Elbaghazaoui [10] to the standard VIA leads to the following tests for permanently eliminating an action a in A(s):
MacQueen test [1, 2];
Porteus test [3];
Semmouri, Jourhmane and Elbaghazaoui test [10].
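The general mechanism behind such tests can be sketched with the classical MacQueen-type bounds on the optimal value function: after each VIA sweep, the span of the value increments yields lower and upper offsets on v*, and any action whose optimistic Q-value falls below the pessimistic bound on v* can never be optimal. The sketch below is a generic illustration under standard assumptions (arrays `P`, `r` and the toy MDP are hypothetical), not the exact inequalities of this paper:

```python
import numpy as np

def via_with_elimination(P, r, beta, tol=1e-8, max_iter=10_000):
    """Value iteration with a MacQueen-style action-elimination test.

    P: transition probabilities, shape (S, A, S); r: rewards, shape (S, A);
    beta: discount factor in (0, 1). Returns an estimate of v* and a
    boolean mask of the actions that were never eliminated.
    """
    S, A = r.shape
    active = np.ones((S, A), dtype=bool)       # actions not yet eliminated
    v = np.zeros(S)
    for _ in range(max_iter):
        q = r + beta * (P @ v)                 # Q-values from current v
        v_new = np.where(active, q, -np.inf).max(axis=1)
        delta = v_new - v
        lo = beta / (1.0 - beta) * delta.min() # lower offset: v* >= v_new + lo
        hi = beta / (1.0 - beta) * delta.max() # upper offset: v* <= v_new + hi
        # Eliminate a at s when even an optimistic estimate of Q*(s, a)
        # cannot reach the pessimistic bound on v*(s); an optimal action
        # always survives because the bounds are valid.
        q_next = r + beta * (P @ v_new)
        active &= q_next + beta * hi >= (v_new + lo)[:, None]
        v = v_new
        if hi - lo < tol:                      # bound gap small enough
            break
    return v + 0.5 * (lo + hi), active

# Hypothetical two-state, two-action MDP: action 0 stays, action 1 switches.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[1, 0, 1] = P[0, 1, 1] = P[1, 1, 0] = 1.0
r = np.array([[1.0, 0.0], [0.5, 2.0]])
v, active = via_with_elimination(P, r, beta=0.9)
```

On this toy instance the suboptimal action is detected and removed at each state long before the value iterates converge, which is precisely the saving the tests above aim for.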
Rights and permissions
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Semmouri, A., Jourhmane, M., Elbaghazaoui, B.E. (2021). Markov Decision Processes with Discounted Rewards: New Action Elimination Procedure. In: Fakir, M., Baslam, M., El Ayachi, R. (eds) Business Intelligence. CBI 2021. Lecture Notes in Business Information Processing, vol 416. Springer, Cham. https://doi.org/10.1007/978-3-030-76508-8_16
Download citation
DOI: https://doi.org/10.1007/978-3-030-76508-8_16
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-76507-1
Online ISBN: 978-3-030-76508-8
eBook Packages: Computer Science (R0)