Optimal reconstruction from quantized data

https://doi.org/10.1016/S0923-5965(99)00032-6

Abstract

Data quantization is an essential step before digitization, i.e. prior to representing real numbers by bits for further digital processing. In this paper we show how a statistically non-optimal quantizer (e.g. a uniform quantizer) can be improved by a simple scaling operation before reconstructing the original value. The scale factor depends on the statistics of the input and is not constant, except in the case of the optimal Lloyd–Max quantizer, where it is always equal to 1. In other cases, for example the common uniform quantizer, we report improvements of the order of 0.1–0.6 dB, depending on the bit-rate. The proposed method can be used in any application where uniform (or other non-optimal) quantizers are used; in particular, it applies to the quantization scheme of the JPEG image coding standard.

Introduction

Today's digital technology requires the representation of physical parameters that take continuous values (e.g. speech waves, image intensity, electrical signals) by a finite-length digital code. Each word in the code contains a finite number of bits and corresponds to a discrete level of the original datum. This encoding process is usually separated into two steps: (a) quantization and (b) code assignment (coding). In the first step we decide which levels of the data will be represented by digital codes, since no finite (denumerable) code can represent infinitely many, non-denumerable values. The second step, coding, decides exactly which code-word to assign to each distinct quantization level.

It is well known that, for scalar values, the Lloyd–Max quantizer [3] is statistically optimal in terms of the relationship between bit-rate and distortion. Nevertheless, many other sub-optimal quantizers are widely used (e.g. the uniform quantizer) mainly due to their simplicity and the lack of dependence on the input statistics. Dependence on the input statistics forces the encoder to spend extra bit-rate in order to make the quantization levels known to the decoder.
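For concreteness, the sketch below (our own illustration, not taken from the paper; a standard textbook construction) shows the Lloyd–Max design applied to an empirical sample. It alternates the two optimality conditions: decision boundaries at the midpoints of adjacent reconstruction levels, and each reconstruction level at the centroid of its bin.

import numpy as np

def lloyd_max(samples, n_levels, n_iter=200, tol=1e-9):
    # Alternate the two Lloyd-Max optimality conditions on an empirical sample:
    # (a) decision boundaries at midpoints of adjacent reconstruction levels,
    # (b) each reconstruction level at the centroid (mean) of its bin.
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    bounds = (levels[:-1] + levels[1:]) / 2
    for _ in range(n_iter):
        bounds = (levels[:-1] + levels[1:]) / 2
        idx = np.digitize(samples, bounds)
        new_levels = np.array([samples[idx == k].mean() if np.any(idx == k)
                               else levels[k] for k in range(n_levels)])
        if np.max(np.abs(new_levels - levels)) < tol:
            break
        levels = new_levels
    return levels, bounds

# Example: an 8-level quantizer designed for unit-variance Gaussian data.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
levels, bounds = lloyd_max(x, 8)
x_hat = levels[np.digitize(x, bounds)]
print("MSE:", np.mean((x - x_hat) ** 2))

The resulting levels depend on the sample statistics, which is exactly the dependence that a decoder-agnostic uniform quantizer avoids.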

In [1] the authors propose the optimal transform coding scheme incorporating uniform quantization. They find that the noise-free optimal Karhunen–Loeve transform fails to be optimal in the noisy case and that optimality is achieved by a simple scaling scheme. In [5] the authors design the optimal subband coder incorporating uniform quantization and based on the signal statistics. Again a pre- and post-quantization scaling scheme is proposed.

In this paper we study the design of an optimal quantizer irrespective of the particular coding method used (transform coding, subband coding, etc.). We show how a statistically non-optimal quantizer (for example the uniform quantizer, although we are not restricted to it) can be improved by a simple scaling operation. This scaling takes place in the decoder before reconstructing the original value. The scale factor α depends on the statistics of the input and is not constant, except in the case of the optimal Lloyd–Max quantizer, where α=1. The improvement comes from the fact that the data are not evenly distributed within a quantization bin; therefore, a reconstruction level closer to the peak of the bin-distribution yields a smaller reconstruction error. The problem applies to digital communications, image coding, speech coding and, in general, anywhere quantization is used. In Section 4 we show, as an example, its application to the JPEG image coding standard, where a performance improvement ranging between 0.1 and 0.6 dB is achieved depending on the coarseness of the quantization, and therefore on the bit-rate.
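The following numerical sketch (our own illustration; the gain α = E[vy]/E[y²] is the standard mean-square-optimal choice for the model y = v + e of Section 2, not a formula quoted from this snippet) shows the effect for a Gaussian source and a mid-rise uniform quantizer: within each bin the data pile up toward the distribution's peak, so the conditional mean lies closer to zero than the bin midpoint, and the MSE-optimal decoder gain typically comes out below 1.

import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(1_000_000)   # zero-mean, unit-variance Gaussian source
step = 1.0                           # assumed quantizer step size

# Mid-rise uniform quantizer: bin index and midpoint reconstruction.
idx = np.floor(v / step)
y = (idx + 0.5) * step

# Within a bin, the conditional mean is closer to zero than the midpoint.
in_bin = idx == 1                    # the bin [1, 2)
print("bin midpoint:     ", 1.5)
print("conditional mean: ", v[in_bin].mean())

# Globally, the MSE-optimal decoder gain is below 1, shrinking every
# reconstruction toward the heavier side of its bin.
alpha = np.mean(v * y) / np.mean(y * y)
print("optimal gain alpha:", alpha)
print("MSE, midpoint:     ", np.mean((v - y) ** 2))
print("MSE, scaled:       ", np.mean((v - alpha * y) ** 2))

In practice the gain would be obtained from a statistical model of the source rather than from the clean samples, which are of course unavailable at the decoder; here it is estimated empirically only to illustrate the effect.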

Section snippets

Optimal data reconstruction

Consider a system comprising a quantization module used by some symbol coding device (coder) and a reconstruction subsystem (decoder) consisting of a simple amplification module, as shown in Fig. 1. The quantizer input v and output y are related by the expression y = v + e, where e denotes the quantization error. The quantities y, v and e are modeled by real, zero-mean random variables. We shall not make any particular assumptions regarding the probability distribution of any one of
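The snippet is truncated here. As a sketch consistent with the setup above (our own reconstruction of the standard mean-square-optimal gain, not necessarily the paper's exact derivation): choosing the amplifier gain α to minimize E[(v − αy)²] and setting the derivative −2E[vy] + 2αE[y²] to zero gives α* = E[vy]/E[y²]. With y = v + e and zero means, E[vy] = σ_v² + E[ve] and E[y²] = σ_v² + 2E[ve] + σ_e², so α* = (σ_v² + E[ve]) / (σ_v² + 2E[ve] + σ_e²). For a Lloyd–Max quantizer each reconstruction level is the conditional mean of its bin, hence E[vy] = E[y²] and α* = 1, matching the statement in the Introduction; for any other quantizer α* generally differs from 1, and the scaled reconstruction αy attains a smaller mean-square error than y itself.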

Simulation experiments

We tested our scaling theory on a simple uniform quantizer with inputs distributed according to the Gaussian or Laplace distributions. The choice of distributions was made so as to match the data typically found in an image coding problem. In the next section we discuss further experiments on real images using the JPEG coding protocol. Figs. 2 and 3 show rate–distortion plots for standard and scaled uniform quantizers. Fig. 2 shows the MSE versus the entropy (in bits) for
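A sketch of this kind of experiment (our own code with assumed step sizes, not the paper's exact setup): uniform quantization of unit-variance Gaussian and Laplacian samples, comparing the MSE of midpoint reconstruction against scaled reconstruction at the empirical entropy of the quantizer output.

import numpy as np

def entropy_bits(indices):
    # Empirical first-order entropy (bits/sample) of the quantizer indices.
    _, counts = np.unique(indices, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(2)
sources = {
    "Gaussian": rng.standard_normal(500_000),
    "Laplace":  rng.laplace(scale=1 / np.sqrt(2), size=500_000),  # unit variance
}

for name, v in sources.items():
    for step in (0.5, 1.0, 2.0):                 # assumed step sizes
        idx = np.floor(v / step)                 # mid-rise uniform quantizer
        y = (idx + 0.5) * step                   # standard midpoint reconstruction
        alpha = np.mean(v * y) / np.mean(y * y)  # MSE-optimal decoder gain
        mse_plain = np.mean((v - y) ** 2)
        mse_scaled = np.mean((v - alpha * y) ** 2)
        print(f"{name:8s} step={step:.1f}  H={entropy_bits(idx):.2f} bits/sample  "
              f"scaling gain={10 * np.log10(mse_plain / mse_scaled):.2f} dB")

The coarser the quantization (larger step, lower entropy), the larger the gain from scaling, in line with the bit-rate dependence reported in the abstract.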

Application: the JPEG protocol

JPEG (an acronym for Joint Photographic Experts Group) is currently the international de jure still image coding standard [2], [4], [6] as it is supported by both the ISO and the ITU-T (formerly CCITT). It is very widely used in applications where high image quality needs to be combined with high compression rates, for example, in image archives, desktop publishing, graphic arts, newspaper wire-photo transmission, multimedia databases, etc. JPEG's scope covers both color and grayscale images.

References (6)

  • K.I. Diamantaras, M.G. Strintzis, Optimal transform coding in the presence of quantization noise, IEEE Trans. Image...
  • ISO/IEC JTC1 10918-1/ITU-T Recomm. T.81, Information technology – digital compression coding of continuous-tone still...
  • N.S. Jayant et al., Digital Coding of Waveforms (1984)
