Abstract:
In this work, lossy distributed compression of a pair of correlated sources is considered. Conventionally, Shannon's random coding arguments - using randomly generated unstructured codebooks whose blocklength is taken to be asymptotically large - are used to derive achievability results. However, in some multi-terminal communication scenarios, using random codes with constant finite blocklength in certain coding architectures leads to improved achievable regions compared to the conventional approach. In other words, in some network communication scenarios, there is a finite optimal value for the blocklength of the randomly generated code used for distributed processing of information sources. Motivated by this, a coding scheme is proposed which consists of two codebook layers: i) the primary codebook, which has constant finite blocklength, and ii) the secondary codebook, whose blocklength is taken to be asymptotically large. The achievable performance is analyzed in two steps. In the first step, a characterization of an inner bound to the achievable region is derived in terms of information measures which are functions of multi-letter probability distributions. In the next step, a computable single-letter inner bound to the achievable region is extracted. It is shown through an example that the resulting rate-distortion region is strictly larger than the Berger-Tung achievable region.
Published in: IEEE Transactions on Information Theory ( Volume: 67, Issue: 7, July 2021)