Abstract:
Consider that a remote estimator seeks to estimate the state of a non-collocated discrete-time finite-dimensional linear time-invariant plant that is persistently excited by process noise. A communication link attempts to relay the state of the plant to the estimator whenever it receives a transmission request. The link experiences packet-drops and has an (action-dependent) state that is influenced by the history of current and past requests. A controlled Markov chain models this dependence and a given function of the link's state governs the packet-drop probability. Every randomized stationary transmission policy is specified by a function that determines the probability of a transmission request in terms of the link's state. The article focuses on the design of these policies. Two theorems provide necessary and sufficient conditions for the existence of a randomized stationary policy that stabilizes the estimation error in the second-moment sense. They also show that it suffices to search for deterministic stabilizing policies and identify an important case in which the search can be further narrowed to threshold policies.
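The sketch below is a minimal illustration of the setup described in the abstract: a plant driven by process noise, a packet-dropping link whose state evolves as an action-dependent Markov chain, and a deterministic threshold transmission policy, with the empirical second moment of the estimation error as the quantity of interest. The scalar plant dynamics, the four-state link, the transition rule, the drop-probability function, and the threshold value are all illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar LTI plant x[k+1] = a*x[k] + w[k], persistently excited by process noise (assumed dynamics).
a, sigma_w = 1.2, 1.0

# Link state s in {0, ..., S-1}; higher states drop packets more often (assumed function).
S = 4
def drop_prob(s):
    return 0.1 + 0.2 * s

# Action-dependent controlled Markov chain for the link state (assumed transition rule):
# transmitting (u = 1) tends to push the link toward more congested states.
def next_link_state(s, u):
    if u == 1:
        return min(s + 1, S - 1) if rng.random() < 0.6 else max(s - 1, 0)
    return max(s - 1, 0) if rng.random() < 0.8 else s

# Deterministic threshold policy: request a transmission only when the link
# state is below a threshold (one of the policy classes mentioned in the abstract).
threshold = 2
def policy(s):
    return 1 if s < threshold else 0

# Monte Carlo estimate of the second moment of the estimation error.
T, x, xhat, s = 50_000, 0.0, 0.0, 0
err_sq = []
for k in range(T):
    u = policy(s)
    delivered = (u == 1) and (rng.random() > drop_prob(s))
    if delivered:
        xhat = x                      # estimator receives the plant state on a successful delivery
    e = x - xhat
    err_sq.append(e * e)
    # Time update: the plant evolves; the estimator propagates its estimate open loop.
    x = a * x + sigma_w * rng.standard_normal()
    xhat = a * xhat
    s = next_link_state(s, u)

print("empirical second moment of estimation error:", np.mean(err_sq))
```

Under these placeholder parameters the policy transmits often enough that the error resets frequently and the empirical second moment stays bounded; with a different threshold or a harsher drop-probability function, the same simulation can exhibit the instability that the paper's conditions are designed to rule out.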
Published in: 2018 IEEE Conference on Decision and Control (CDC)
Date of Conference: 17-19 December 2018
Date Added to IEEE Xplore: 20 January 2019