Near-Optimality of Finite-Memory Codes and Reinforcement Learning for Zero-Delay Coding of Markov Sources



Abstract:

We study the problem of zero-delay coding of a Markov source over a noisy channel with feedback. Building on and generalizing prior work, we first formulate the problem as a Markov decision process (MDP) whose state is a probability-measure-valued predictor together with a finite memory of past channel outputs and quantizers. We then approximate this state by marginalizing over all possible predictors, so that our policies use only the finite-memory term to encode the source. Under an appropriate notion of predictor stability, we show that such policies are near-optimal for the zero-delay coding problem as the memory length increases. We also give sufficient conditions under which predictor stability holds, present a reinforcement learning algorithm for computing near-optimal finite-memory policies, and establish its convergence. These theoretical results are supported by simulations.
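To make the finite-memory idea concrete, the following is a minimal sketch, not the paper's construction: a tabular Q-learning loop in which the encoder's state is the last N channel outputs, each action selects a threshold quantizer, and the reward is the negative squared distortion of the decoder's reconstruction. The Markov source, quantizer family, binary symmetric channel, fixed reconstruction levels, and discounted objective (the paper analyzes long-run average cost) are all illustrative assumptions; the paper's memory also tracks past quantizers, which this sketch omits since a deterministic policy makes them recoverable from the channel outputs.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative model; every parameter here is an assumption, not the paper's ---
xvals = np.array([0.0, 1.0, 2.0, 3.0])          # finite source alphabet
P = np.array([[0.8, 0.2, 0.0, 0.0],             # Markov transition matrix
              [0.1, 0.7, 0.2, 0.0],
              [0.0, 0.2, 0.7, 0.1],
              [0.0, 0.0, 0.2, 0.8]])
eps = 0.1                                        # BSC crossover probability
N = 2                                            # finite memory length

# Actions: threshold quantizers (send bit 1 iff source index >= t), with a
# fixed reconstruction level per (quantizer, channel output) cell.
thresholds = [1, 2, 3]
recon = {(t, y): xvals[[i for i in range(4) if (i >= t) == bool(y)]].mean()
         for t in thresholds for y in (0, 1)}

# Finite-memory state: the last N channel outputs (seen by encoder via feedback
# and by decoder directly, so both can track the same state).
states = list(itertools.product(range(2), repeat=N))
s_index = {s: k for k, s in enumerate(states)}
Q = np.zeros((len(states), len(thresholds)))

alpha, gamma, explore = 0.05, 0.95, 0.1          # learning rate, discount, exploration
x = 0                                            # current source symbol index
mem = (0,) * N                                   # initial memory contents

for step in range(200_000):
    s = s_index[mem]
    a = rng.integers(len(thresholds)) if rng.random() < explore else int(Q[s].argmax())
    t = thresholds[a]
    b = int(x >= t)                              # channel input bit
    y = b ^ 1 if rng.random() < eps else b       # noisy channel output
    xhat = recon[(t, y)]                         # decoder reconstruction
    r = -(xvals[x] - xhat) ** 2                  # negative distortion as reward
    mem2 = mem[1:] + (y,)                        # slide the finite memory window
    Q[s, a] += alpha * (r + gamma * Q[s_index[mem2]].max() - Q[s, a])
    mem = mem2
    x = rng.choice(4, p=P[x])                    # Markov source transition

print("Greedy quantizer per memory state:",
      {s: thresholds[int(Q[s_index[s]].argmax())] for s in states})
```

Because the state space is just the 2^N memory configurations, the Q-table stays small and the learned policy is directly implementable with zero delay; the paper's results concern when such finite-memory policies approach the optimum as N grows.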
Date of Conference: 10-12 July 2024
Date Added to IEEE Xplore: 05 September 2024

Conference Location: Toronto, ON, Canada

