Abstract
The purpose of this short note is to correct some oversights in [1]. Specifically, we point out that stronger assumptions must be imposed on the decision model (in order to apply the results of [2]), and we present a counterexample to a comment on [1, Theorem 3.1].
References
R.F. Serfozo, “Monotone optimal policies for Markov decision processes”, Mathematical Programming Study 6 (1976) 202–215.
M. Schäl, “Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal”, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 32 (1975) 179–196.
M. Schäl, “On the optimality of (s, S)-policies in dynamic inventory models with finite horizon”, SIAM Journal on Applied Mathematics 30 (1976) 518–537.
A.F. Veinott, “On the optimality of (s, S) inventory policies: New conditions and a new proof”, SIAM Journal on Applied Mathematics 14 (1966) 1067–1083.
Kalin, D. A note on ‘Monotone optimal policies for Markov decision processes’. Mathematical Programming 15, 220–222 (1978). https://doi.org/10.1007/BF01609021