
Lossy common information of two dependent random variables


Abstract:

The two most prevalent notions of common information are due to Wyner and Gács-Körner, and both can be stated as two distinct characteristic points in the lossless Gray-Wyner region. Although these quantities are easily evaluated for random variables with infinite entropy (e.g., continuous random variables), the operational significance underlying their definitions applies only to the lossless framework. The primary objective of this paper is to generalize these two notions of common information to the lossy Gray-Wyner network, which extends the theoretical intuition underlying their definitions to general sources and distortion measures. We begin with the lossy generalization of Wyner's common information, defined as the minimum rate on the shared branch of the Gray-Wyner network at minimum sum rate when the two decoders reconstruct the sources subject to individual distortion constraints. We derive a complete single-letter information-theoretic characterization of this quantity and use it to compute the common information of symmetric bivariate Gaussian random variables. We then derive similar results generalizing Gács-Körner's definition to the lossy framework. These two characterizations allow us to carry the practical insight underlying the two notions of common information over to general sources and distortion measures.
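For context, the standard lossless definitions of the two quantities are sketched below in LaTeX. These are the textbook forms from the literature, not taken from this abstract, and the paper's single-letter characterization of the lossy versions may differ in detail. Wyner's common information minimizes over all auxiliary variables W that render X and Y conditionally independent (the Markov chain X - W - Y), while the Gács-Körner quantity maximizes over functions f and g that agree almost surely:

\[
  C_W(X;Y) \;=\; \min_{P_{W\mid X,Y}\,:\, X - W - Y} I(X,Y;W)
\]
\[
  K(X;Y) \;=\; \max_{f,\,g \,:\, f(X)=g(Y)\ \text{a.s.}} H\bigl(f(X)\bigr)
\]

For a symmetric bivariate Gaussian pair with correlation coefficient \(\rho \in [0,1)\), Wyner's common information is known to evaluate to

\[
  C_W(X;Y) \;=\; \frac{1}{2}\log\frac{1+\rho}{1-\rho},
\]

which is the value relevant to the abstract's mention of computing the common information of symmetric bivariate Gaussian random variables.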
Date of Conference: 01-06 July 2012
Date Added to IEEE Xplore: 27 August 2012
Conference Location: Cambridge, MA, USA
