
Achievable complexity-performance tradeoffs in lossy compression

  • Source Coding
  • Published in: Problems of Information Transmission

Abstract

We present several results related to the complexity-performance tradeoff in lossy compression. The first result shows that for a memoryless source with rate-distortion function R(D) and a bounded distortion measure, the rate-distortion point (R(D) + γ, D + ε) can be achieved with constant decompression time per (separable) symbol and compression time per symbol proportional to $(\lambda_1/\varepsilon)^{\lambda_2/\gamma^2}$, where λ₁ and λ₂ are source-dependent constants. The second result establishes that the same point can be achieved with constant decompression time and compression time per symbol proportional to $(\rho_1/\gamma)^{\rho_2/\varepsilon^2}$. These results imply, for any function g(n) that increases without bound arbitrarily slowly, the existence of a sequence of lossy compression schemes of blocklength n with O(ng(n)) compression complexity and O(n) decompression complexity that achieve the point (R(D), D) asymptotically with increasing blocklength. We also establish that if the reproduction alphabet is finite, then for any given R there exists a universal lossy compression scheme with O(ng(n)) compression complexity and O(n) decompression complexity that achieves the point (R, D(R)) asymptotically for any stationary ergodic source with distortion-rate function D(·).
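The tradeoff above can be made concrete with a small numerical sketch. The snippet below computes the closed-form rate-distortion function R(D) = h(p) − h(D) of a Bernoulli(p) source under Hamming distortion (a standard result, not specific to this paper) and evaluates the stated per-symbol compression-time bound $(\lambda_1/\varepsilon)^{\lambda_2/\gamma^2}$; the constants `lam1` and `lam2` are source-dependent and their values here are purely illustrative placeholders.

```python
import math

def h(p):
    """Binary entropy in bits; h(0) = h(1) = 0 by convention."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion_bss(p, D):
    """R(D) = h(p) - h(D) for a Bernoulli(p) source under Hamming
    distortion, valid for 0 <= D <= min(p, 1 - p)."""
    return max(h(p) - h(D), 0.0)

def compression_time_bound(gamma, eps, lam1=2.0, lam2=1.0):
    """Per-symbol compression-time bound (lam1/eps)**(lam2/gamma**2)
    from the first result of the abstract. lam1 and lam2 are
    source-dependent constants; the defaults are placeholders."""
    return (lam1 / eps) ** (lam2 / gamma ** 2)

# Approaching (R(D), D): shrinking the rate gap gamma makes the
# compression time per symbol blow up much faster than shrinking
# the distortion gap eps, reflecting the gamma^2 in the exponent.
for gamma in (0.5, 0.25, 0.1):
    print(gamma, compression_time_bound(gamma, eps=0.1))
```

The loop illustrates why the sequence of schemes in the abstract needs the slowly growing factor g(n): holding ε fixed and halving γ squares-and-more the per-symbol work, so γ and ε can only shrink very slowly with blocklength if total compression complexity is to stay near-linear.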



Author information


Correspondence to A. Gupta.

Additional information

Original Russian Text © A. Gupta, S. Verdú, T. Weissman, 2012, published in Problemy Peredachi Informatsii, 2012, Vol. 48, No. 4, pp. 62–87.


Cite this article

Gupta, A., Verdú, S. & Weissman, T. Achievable complexity-performance tradeoffs in lossy compression. Probl Inf Transm 48, 352–375 (2012). https://doi.org/10.1134/S0032946012040060

