Information Sciences

Volume 181, Issue 14, 15 July 2011, Pages 3043-3053

Geometric piecewise uniform lattice vector quantization of the memoryless Gaussian source

https://doi.org/10.1016/j.ins.2011.03.012

Abstract

The aim of this paper is to find a quantization technique that has low implementation complexity and asymptotic performance arbitrarily close to the optimum. More specifically, we develop a new vector quantizer design procedure for a memoryless Gaussian source that yields vector quantizers with excellent performance and the structure required for fast quantization. To achieve this, we combine a fast lattice-encoding algorithm with a geometric approach to obtain a model of a geometric piecewise-uniform lattice vector quantizer. Expressions for the granular distortion and the optimal number of output points in each region are derived. Both exact and approximate asymptotic analyses are carried out, under the assumption that the probability density function of the input signal vector is constant over each whole region. The analysis demonstrates the existence of a piecewise-constant approximation to the input-vector probability density function for which the proposed geometric piecewise-uniform vector quantizer is optimal. The considered quantization technique is near optimal for a memoryless Gaussian source; in other words, this paper proposes a method for near-optimum, low-complexity vector quantizer design based on probability density function discretization. The presented methodology yields a signal-to-quantization noise ratio that in some cases differs from the optimum by 0.1 dB or less. Improvements of the considered model over some existing techniques, in both performance and complexity, are also demonstrated.

Introduction

The purpose of vector quantization, in general, is to compress data into the fewest possible bits while preserving essential information [9], [10], [4], [14], [26]. To take advantage of vector quantization, the optimization of vector quantizer design has been investigated extensively. Designing an optimal vector quantizer is equivalent to finding a partition of the vector space while simultaneously assigning a representative point to each partition cell in such a way that a predefined distortion measure between input and output is minimized. For each specific probability distribution, extensive research has been conducted to find an optimal distribution of output points inside the vector space [9], [2], [1], [8], [20], [18]. Unfortunately, these proposals are based on complex non-uniform vector quantization techniques. Because of this, the objective of this paper is instead to find a quantization technique that has low implementation complexity and performance arbitrarily close to the optimum.

The quantizer presented in this paper is designed for a memoryless Gaussian source. Numerous signal sources can be modeled as Gaussian [9]. Additionally, a properly chosen filtering technique applied to non-Gaussian sources with memory produces sequences that are approximately independent and Gaussian [19]. Satisfactory quality can sometimes also be achieved by applying Gaussian quantizers to sources that are not Gaussian.

Lattice vector quantization is highly structured, but when applied to a non-uniform distribution it performs much worse than the optimum [9], [10], [6]. To overcome this problem, we approach the problem of Gaussian source quantization with a piecewise-uniform lattice vector quantizer. The first step of the quantization partitions the multidimensional space into regions; afterwards, applying a lattice grid (the second step) allows the cell sizes to adapt more successfully to the statistical characteristics of the input signal. There are two approaches to the first step of the quantization: the quantizer obtained after the first step can be either an optimal vector quantizer or a quantizer that follows the source geometry. It is intuitively expected that the cell side length of the lattice grid is better adapted to the probability density function of the input vector when the first partition follows the source geometry. As a result, the final distortion may be minimized, even though the distortion after the first step is not optimal. In this paper, we confirm this intuitive hypothesis.

The importance of source geometry and lattice quantization has been noted previously [7]. A geometric approach to designing quantizers rests on the asymptotic equipartition principle of information theory, which suggests that almost all codewords should be selected to lie in a region of high probability specified by the entropy of the source. Unfortunately, lattice codewords are not uniformly distributed on this surface. To partially overcome this difficulty, the author of [7] introduced several concentric surfaces. However, the obtained performance was not close to the optimum because the number of codewords inside the representative surfaces was not optimized. On the other hand, using the optimal number of codewords inside the representative surfaces enlarges the quantizer complexity. Recent attempts to find a satisfactory performance-complexity trade-off for geometric Gaussian source coding have resulted in a wide variety of product and unrestricted quantizers [16], [3], [11], [21], [22], [23], [24], [25].

In this paper, we design a piecewise-uniform vector quantizer for a memoryless Gaussian source, taking the surfaces of constant input-vector probability density as the boundaries between the regions. Though the equation for constant input-vector probability density describes an n-dimensional sphere, our proposed space partition takes the source geometry into consideration. The next step of the quantizer design is to partition each region using uniform lattice vector quantization. For each region, the number of output points, i.e., the side length of the lattice cells, is found from a minimum-distortion criterion for the geometric piecewise-uniform lattice vector quantization. By optimizing the granular distortion of the considered model, we can determine a closed-form solution for the number of output points inside each region. In this way, we reach a multidimensional space partition similar to the optimal one, even though the first partition was not performed in the optimal way. Because of this, these technically non-optimal systems can perform arbitrarily close to the optimum. In comparison with nearest-neighbor quantization, whose complexity grows exponentially with dimension and rate [9], the considered quantization model is far less complex. The quantization of an input vector is carried out in the following way: first, determine the region in which the input vector lies, and second, quantize the input vector using a highly structured uniform (lattice) quantizer, whose complexity grows only polynomially with dimension and rate [9]. In the subsequent discussion, it is shown that almost optimal performance can be achieved with a relatively small number of regions. Moreover, the structure introduced by the geometric approach enables fast identification of the region.
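The two-step encoding described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the regions are assumed to be concentric spherical shells bounded by hypothetical radii, the lattice is taken to be a scaled cubic (Z^n) lattice, and the radii and cell sides are arbitrary example values.

```python
import numpy as np

def encode(x, radii, steps):
    """Two-step geometric piecewise-uniform encoding (sketch).

    radii : increasing shell boundaries; region i is the shell
            radii[i-1] <= ||x|| < radii[i] (radii[-1] is the support radius).
    steps : lattice cell side Delta_i used inside region i.
    """
    r = np.linalg.norm(x)                 # step 1: locate the region by norm
    i = int(np.searchsorted(radii, r))
    if i == len(radii):                   # outside the support: clamp (overload)
        i = len(radii) - 1
    d = steps[i]
    y = d * np.round(x / d)               # step 2: cubic-lattice rounding
    return i, y

# Hypothetical radii and cell sides for an 8-dimensional input.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
region, y = encode(x, radii=[1.5, 2.5, 3.5, 5.0], steps=[0.2, 0.3, 0.4, 0.5])
```

Locating the region costs one norm computation and a binary search, and the lattice rounding is a single vectorized operation, which reflects the low complexity claimed for the two-step structure.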

There is a technique similar to piecewise-uniform vector quantization, called two-stage vector quantization. The main difference between these techniques is the manner in which the quantizers are designed. In [15], a two-stage vector quantization approach was presented, using an unstructured codebook for the first stage and a spherical lattice codebook for the second stage. The unstructured codebook is determined using the first of the two approaches mentioned above: the quantizer obtained after the first stage is an optimal vector quantizer. Consequently, the cell size can be brought into accordance with the input-vector probability density function only for an extensive first-stage codebook. Two-stage vector quantization is feasible for moderate to large encoding rates and vector dimensions, allowing the codebook size obtained in the first stage to remain reasonable. Because of this, the analysis in [15] is performed for bit rates of two to three bits/dimension, while the vector dimensions range from 8 to 32. We will show that the joint optimum two-stage codebook design algorithm, which uses a codebook size of 2^8 for the first stage, gives a signal-to-quantization noise ratio about 0.5 dB lower than that achieved with our algorithm with 16 regions. This confirms that although the first-stage codebook size is large, the joint optimum two-stage vector quantization follows the source geometry incompletely; thus, the cell sizes do not fully adapt to the input-vector probability density function.

An analysis of piecewise-uniform vector quantization for an arbitrary distribution of the source signal was given in [13]. The authors of [13] concluded that the main shortcoming of their analysis was the lack of a method for the first-step quantization: the method for deciding the regions that divide the input space was not defined. In one example of a two-dimensional Gaussian distribution, the support was chosen to be a hexagon, while the regions were heuristically selected to be trapezoids. In this paper, for a two-dimensional Gaussian distribution, the support is a circle and the regions are concentric rings inside this circle.

To prove that the proposed geometric piecewise-uniform lattice vector quantization is a near-optimum quantization for a memoryless Gaussian source, we establish the existence of a piecewise-uniform approximation to the input-vector probability density function for which the considered quantizer is optimal. It is known that a uniform quantizer is optimal for a uniform distribution, so a piecewise-uniform quantizer is optimal for a piecewise-uniform probability density function [9], [13]. Therefore, as the approximation of the non-uniform distribution by a piecewise-uniform one becomes more accurate, the distortion asymptotically approaches its minimum value. We find this piecewise-uniform approximation by performing an asymptotic analysis under the assumption that the probability density function of the input signal vector is constant inside each region. Then, we impose the necessary condition that the optimal granular distortion and the number of output points in each region equal those obtained for the quantizer proposed for a memoryless Gaussian source. This idea implies that a near-optimum vector quantizer for a memoryless Gaussian source can be designed by finding an optimum quantizer for an adequate discretization of the source. Thus, this paper proposes a geometric piecewise-uniform lattice vector quantizer design based on discretization of the input-vector probability density function. In contrast, in [17] for Laplacian sources and [18] for circularly symmetric sources, the geometric piecewise-uniform lattice vector quantizer design is based on linearization of a radial compression function.
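The piecewise-constant approximation underlying this argument can be illustrated numerically. The sketch below assumes concentric spherical shells and a standard Gaussian source, and estimates f_i = P_i/V_i for each shell; the shell radii are arbitrary example values, and P_i is a Monte Carlo estimate rather than a closed-form expression.

```python
import numpy as np
from math import pi, gamma

def piecewise_constant_pdf(n, radii, samples=200_000, seed=0):
    """Approximate the standard n-dim Gaussian pdf by f_i = P_i / V_i on
    concentric spherical shells.  P_i is a Monte Carlo estimate of the
    shell probability; V_i is the exact shell volume."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(rng.standard_normal((samples, n)), axis=1)
    v_unit = pi ** (n / 2) / gamma(n / 2 + 1)     # unit-ball volume in n dims
    edges = [0.0] + list(radii)
    f = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        P = np.mean((norms >= lo) & (norms < hi))  # shell probability P_i
        V = v_unit * (hi ** n - lo ** n)           # shell volume V_i
        f.append(P / V)
    return f

# Hypothetical shell radii for a 2-D source; f forms a decreasing staircase
# that approximates the Gaussian density.
f = piecewise_constant_pdf(2, [0.5, 1.0, 2.0, 3.5])
```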

The idea of vector quantizer design based on discretization of the input-vector probability density function also appeared in [12]. In that paper, the value of the input-vector probability density function at the lower region boundary was taken as the constant probability density inside the region. Then, it was heuristically assumed that the cubic cell length increases with radius by a constant factor. Under that constraint, the distortion minimum could be achieved only by a guided (specified) geometric support partitioning, which causes (in the case of Gaussian source quantization) the distance between region boundaries to decrease as the radius increases. Because of this, the model proposed in [12] approaches a non-uniform quantizer, in which the cell length decreases as the radius increases. In this paper, we perform a broader analysis. Without constraints on the cell length, we determine an optimal geometric piecewise-uniform lattice vector quantizer for a piecewise-constant approximation of the input-vector probability density function obtained as in [12], as well as in four other ways. We also find the best approximation with respect to distortion, i.e., the best method for near-optimum Gaussian quantizer design. In comparison with [12], we demonstrate that for low rates (3 bits/dimension) our method gives an approximately 0.15 dB better signal-to-quantization noise ratio. In addition, our method is applicable to arbitrary geometric support partitioning.

There are two quantization theories: one is exact and based on iterative algorithms (applicable for low rates and dimensions), and the other uses asymptotic analysis (applicable for higher rates or dimensions) [9]. Asymptotic analysis is simple and gives good results in most practical cases. Closed-form and simple analytical solutions are significant for both theory and practice. In this paper, we perform an asymptotic analysis, which has been shown to be acceptable for bit rates greater than 2 bits/dimension. As with almost all such analyses, ours neglects boundary effects, but the number of regions where the boundary effects are neglected is very small (fewer than 16).

The remaining part of this paper is organized as follows. In Section 2, we perform an asymptotic analysis of geometric piecewise-uniform lattice vector quantization for a memoryless Gaussian source. In Section 3, we investigate the probability density function discretization to confirm that geometric piecewise-uniform lattice vector quantization is a near-optimum technique for non-uniform distribution. In Section 4, the effectiveness of the presented quantizer design is evaluated by performance comparisons to other vector quantizers. It is shown that the signal-to-quantization noise ratio of the obtained quantizer can be within 0.1 dB of the optimum.

Section snippets

Geometric piecewise-uniform lattice vector quantization for a memoryless Gaussian source

Let us consider a multidimensional quantization with a codebook that consists of N representative n-dimensional vectors. Then the quantizer rate R, the number of bits per dimension, is

R = \frac{1}{n} \log_2 N.

The support is partitioned into N_q regions, which are quantized using uniform lattice quantizers with specified cell numbers N_i, i = 1, \ldots, N_q. In the general case, the granular distortion per dimension for the jth lattice cell inside the ith region (S_{i,j}) is [9], [12]

D_{i,j} = \frac{1}{n} \int_{S_{i,j}} \|x - y_{i,j}\|^2 f(x)\,dx, \quad j = 1, 2, \ldots, N_i, \quad i = 1, 2, \ldots, N_q,

where x
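Both relations in this snippet can be checked numerically. The sketch below verifies the rate formula R = (1/n) log2 N and, for a cubic lattice cell with side Δ and a locally flat pdf, that the per-dimension distortion (1/n) E‖x − y‖² reduces to Δ²/12; the dimension, codebook size, and cell side are arbitrary example values.

```python
import numpy as np

# Dimension, codebook size, and cell side are arbitrary example values.
n, N = 8, 2 ** 16
R = np.log2(N) / n          # R = (1/n) log2 N  ->  2.0 bits/dimension

# For a cubic lattice cell of side delta and a locally flat pdf, the
# quantization error is uniform on [-delta/2, delta/2) per coordinate,
# so the per-dimension distortion (1/n) E||x - y||^2 equals delta^2 / 12.
rng = np.random.default_rng(1)
delta = 0.1
err = rng.uniform(-delta / 2, delta / 2, size=(100_000, n))
D = np.mean(np.sum(err ** 2, axis=1)) / n   # Monte Carlo estimate of delta^2/12
```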

A near-optimum quantizer design based on probability density function discretization

As stated in Section 1, we perform a discretization of the input-vector probability density function. We now introduce the piecewise-constant characteristic that approximates the smooth curve of the probability density function of the input vector. At the very beginning of our analysis in this section, we suppose that f(x) is constant over the whole ith region and equal to f_i. As a result of this assumption, we obtain the granular distortion per dimension as

D_g = M(n) \sum_{i=1}^{N_q} \Delta_i^2 P_i = M(n) \sum_{i=1}^{N_q} f_i V_i \left( \frac{V_i}{N_i} \right)^{2/n},

where \Delta_i is
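The granular-distortion expression above can be evaluated directly once the region volumes, pdf values, and point allocations are given. The sketch below is a direct transcription of D_g = M(n) Σ f_i V_i (V_i/N_i)^{2/n}; the numerical inputs are illustrative, not taken from the paper.

```python
import numpy as np

def granular_distortion(M, f, V, N, n):
    """Evaluate D_g = M(n) * sum_i f_i * V_i * (V_i / N_i)^(2/n)."""
    f, V, N = map(np.asarray, (f, V, N))
    return M * np.sum(f * V * (V / N) ** (2.0 / n))

# Illustrative two-region example in dimension n = 2; M(2) = 1/12 is the
# normalized second moment of the cubic lattice.  f, V, N are made-up values.
Dg = granular_distortion(M=1 / 12, f=[0.12, 0.03], V=[3.0, 9.0], N=[64, 64], n=2)
```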

Results and discussion

In this section, we present the SQNR of the obtained quantizers by applying the asymptotic analysis, which uses the approximation that the input-vector pdf is constant over each whole region. The pdf approximation is performed in five different ways. We numerically determine the optimal support radius r_{N_q}^{opt} from the condition that the total distortion is minimal for this radius. The procedure is as follows: r_{N_q} is varied in sufficiently small steps, and for each value the total distortion is calculated. The iterations stop when the distortion becomes
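The search procedure for the optimal support radius can be sketched as a simple stepping loop: increase r_{N_q} in small steps, evaluate the total distortion, and stop once it ceases to decrease. The distortion function below is a stand-in with the right qualitative shape (a granular term growing with the radius plus an overload term decaying with it), not the paper's distortion expression.

```python
import numpy as np

def optimal_support_radius(total_distortion, r_start=0.5, r_max=10.0, step=1e-3):
    """Increase the support radius in small steps until the total
    distortion stops decreasing; return the radius found and its distortion."""
    r_best, d_best = r_start, total_distortion(r_start)
    r = r_start + step
    while r <= r_max:
        d = total_distortion(r)
        if d >= d_best:          # distortion turned upward: minimum passed
            break
        r_best, d_best = r, d
        r += step
    return r_best, d_best

# Stand-in total distortion: granular term that grows with the support radius
# plus an overload term that decays with it (minimum near r = 2.63).
toy = lambda r: 1e-3 * r ** 2 + np.exp(-r ** 2)
r_opt, d_opt = optimal_support_radius(toy)
```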

Conclusion

In this paper, we propose a new method for near-optimum, low-complexity vector quantizer design, based on a geometric approach, lattice quantization, and probability density function discretization. We demonstrate that as the approximation to the non-uniform distribution inside the ith region, f_i, becomes P_i/V_i, the asymptotic distortion approaches its minimum value. We derive in closed form the number of lattice cells for each of the regions (see (25)), as well as the granular distortion (see (26)). For a

References (26)

  • A. Gersho, Asymptotically optimal block quantization, IEEE Trans. Inform. Theory (1979)
  • A. Gersho et al., Vector Quantization and Signal Compression (1992)
  • R.M. Gray et al., Quantization, IEEE Trans. Inform. Theory (1998)