Abstract:
This paper continues the study of safety control for Markov chains, a notion we introduced in our recent work. In our past work we restricted our attention to Markov stationary controls and derived necessary and sufficient conditions for safety enforcement within this class of policies. As opposed to optimal control of Markov chains under complete observations, where optimality is normally achieved in the class of stationary policies, enforcement of safety can benefit from the consideration of non-stationary policies. In this work we show that, in meeting the safety control objective, it suffices to consider a class of non-stationary policies induced from the class of stationary policies of an augmented chain. Also, given a controlled Markov chain and a safety specification (describing bounds within which the probability distribution must always lie), we present an algorithm for computing the maximal set of safe initial distributions: the initial distributions from which it is possible to control the chain so that the safety specification is always satisfied.
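To make the safety notion concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of checking whether the state distribution of a finite controlled Markov chain, evolved under a fixed stationary policy, stays within component-wise bounds over a finite horizon. The names P, pi0, lower, upper and the two-state example are assumptions made for this sketch only; the paper's actual contributions (non-stationary policies via an augmented chain, and computation of the maximal set of safe initial distributions) are not reproduced here.

```python
# Illustrative sketch only: verify a safety specification of the form
# lower <= pi_t <= upper for all t, where pi_{t+1} = pi_t P and P is the
# transition matrix induced by a fixed stationary policy.
import numpy as np

def is_safe(P, pi0, lower, upper, horizon):
    """Return True if the distribution stays within the bounds for `horizon` steps."""
    pi = np.asarray(pi0, dtype=float)
    for _ in range(horizon + 1):
        if np.any(pi < lower) or np.any(pi > upper):
            return False
        pi = pi @ P  # distribution update: pi_{t+1} = pi_t P
    return True

# Hypothetical two-state example: the specification requires the
# probability of state 0 to remain at or above 0.3 at all times.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi0 = np.array([0.5, 0.5])
lower = np.array([0.3, 0.0])
upper = np.array([1.0, 1.0])
print(is_safe(P, pi0, lower, upper, horizon=50))  # True for this choice
```

In this example the distribution converges toward the stationary distribution [0.8, 0.2], so the probability of state 0 never drops below the 0.3 bound; a safe initial distribution in the paper's sense is one from which such bounds can be maintained forever under some admissible policy.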
Date of Conference: 14-17 December 2004
Date Added to IEEE Xplore: 16 May 2005
Print ISBN: 0-7803-8682-5
Print ISSN: 0191-2216