Elsevier

Signal Processing

Volume 75, Issue 1, 5 January 1999, Pages 89-92

Fast communication
A fast vector quantization encoding algorithm using multiple projection axes

https://doi.org/10.1016/S0165-1684(99)00035-3

Abstract

Computation of the nearest neighbor generally requires a large number of expensive distance calculations. In this paper, we present an algorithm which uses multiple projection axes to accelerate the encoding process of VQ by eliminating many of these distance calculations. Since the proposed algorithm rejects only those codewords that cannot be the nearest codeword, it produces the same output as a conventional full search algorithm. Simulation results confirm the effectiveness of the proposed algorithm.

Introduction

Vector quantization (VQ) is a very efficient approach to low-bit-rate image compression [2]. However, the utilization of VQ is severely limited by its encoding complexity, which, at a fixed bit rate, increases exponentially with the vector dimension. Thus, many researchers have investigated fast encoding algorithms to accelerate the VQ process. These works can be classified into two groups. The first group does not solve the nearest-neighbor problem exactly but instead seeks a suboptimal solution that is almost as good in the sense of mean squared error (MSE). It usually relies on data structures that facilitate fast search of the codebook, such as tree-structured VQ (TSVQ) and the K-d tree [4], [7].

The second group, in contrast, addresses the exact nearest-neighbor encoding problem. With some memory overhead, the fast nearest-neighbor search (FNNS) algorithm [3], [5] and the projection method save a great deal of computation time. The FNNS algorithm uses the triangle inequality and can reject a great many unlikely codewords. However, it requires additional memory of size N(N−1)/2 to store the distances between all pairs of codewords, where N is the codebook size. When N is large, this memory requirement can be a serious problem.
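As an illustrative sketch of this triangle-inequality rejection (not the exact FNNS procedure of [3], [5]; random data stand in for an image codebook here), the test can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)
k, N = 16, 64                       # vector dimension, codebook size
codebook = rng.random((N, k))
x = rng.random(k)

# Precompute all pairwise codeword distances: the N(N-1)/2 memory
# overhead mentioned in the text (stored here as a full matrix).
pair_dist = np.linalg.norm(codebook[:, None, :] - codebook[None, :, :], axis=2)

best = 0
best_dist = np.linalg.norm(x - codebook[best])
full_calcs = 1
for j in range(1, N):
    # Triangle inequality: d(x, c_j) >= d(c_best, c_j) - d(x, c_best),
    # so c_j cannot beat the current best when d(c_best, c_j) >= 2 * best_dist.
    if pair_dist[best, j] >= 2 * best_dist:
        continue                    # rejected without a distance calculation
    d = np.linalg.norm(x - codebook[j])
    full_calcs += 1
    if d < best_dist:
        best, best_dist = j, d

# 'best' matches an exhaustive full search, with fewer distance calculations.
```

Each precomputed pair distance trades memory for the chance to skip a full k-dimensional distance calculation.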

A projection method such as the equal-average nearest-neighbor search (ENNS) algorithm [6] uses the mean of an input vector to reject unlikely codewords. This method achieves large computation-time savings over the conventional full search algorithm with only N additional memory. An improved algorithm that uses the variance as well as the mean of an input vector achieves further savings with 2N additional memory [1].

In this paper, we review the ENNS algorithm and present a new fast encoding algorithm. The algorithm uses multiple projection points of an input vector to accelerate the encoding process. A new inequality relating the projection points to the distance is derived for the algorithm. Since the inequality reduces the codeword search area, the proposed algorithm requires less computation time than the ENNS algorithm.


The proposed algorithm

Before describing the proposed algorithm, we give a definition and review the ENNS algorithm.

Definition 1

Let l be a line in Rk. If every point p=(p1,p2,…,pk) on l satisfies the condition p1=p2=⋯=pk, then l is called the central line or central axis of Rk. If Lx is the projection point of x onto the central line l, it can be seen that Lx=(mx,mx,…,mx), where mx is the mean of the vector x.

The ENNS algorithm uses the mean of a vector to reject many unlikely codewords. The main logic of ENNS can be stated as follows.

Theorem 1

Let
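The theorem itself is truncated in this snippet. As a hedged sketch only, the ENNS rejection rule is usually based on the bound d(x,y) ≥ √k·|mx − my| for k-dimensional vectors (a consequence of the Cauchy–Schwarz inequality); a search built on that bound might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
k, N = 16, 64                       # vector dimension, codebook size
codebook = rng.random((N, k))       # random placeholder codebook
x = rng.random(k)

means = codebook.mean(axis=1)       # the N additional memory of ENNS
m_x = x.mean()
sqrt_k = np.sqrt(k)

# Start from the codeword whose mean is nearest to the input mean.
best = int(np.argmin(np.abs(means - m_x)))
best_dist = np.linalg.norm(x - codebook[best])

for j in range(N):
    # ENNS-style bound: d(x, c_j) >= sqrt(k) * |m_x - m_j|, so c_j is
    # rejected when the right-hand side already exceeds best_dist.
    if sqrt_k * abs(m_x - means[j]) >= best_dist:
        continue
    d = np.linalg.norm(x - codebook[j])
    if d < best_dist:
        best, best_dist = j, d
```

Because the bound never underestimates the true distance, the search returns the same codeword as a full search.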

Experimental results

We performed experiments on a PC using four images. The images are 512×512 monochrome with 256 gray levels. Subimages of 4×4 pixels are used as input vectors. In each experiment, the codebooks are designed from the ‘Lena’ image using the LBG algorithm.

Experiments were conducted with up to three projection axes. The axes are defined as p1=(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1), p2=(1,1,1,1,1,1,1,1,−1,−1,−1,−1,−1,−1,−1,−1) and p3=(1,1,−1,−1,1,1,−1,−1,1,1,−1,−1,1,1,−1,−1), respectively. As mentioned above,
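The paper's inequality for multiple axes is not reproduced in this snippet. As an illustrative sketch only: the three axes above are mutually orthogonal, so after normalisation the bound d²(x,c) ≥ Σa(⟨x,ua⟩ − ⟨c,ua⟩)² (Bessel's inequality) gives a valid rejection test; random data stand in for the image vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
k, N = 16, 64                       # 4x4 subimage vectors, codebook size
codebook = rng.random((N, k))       # random placeholder codebook
x = rng.random(k)

# The three axes from the experiments, normalised to unit length
# (each has squared norm 16, and they are mutually orthogonal).
p1 = np.ones(16)
p2 = np.concatenate([np.ones(8), -np.ones(8)])
p3 = np.tile([1.0, 1.0, -1.0, -1.0], 4)
axes = np.stack([p1, p2, p3]) / np.sqrt(16.0)

proj_cb = codebook @ axes.T         # N stored values per projection axis
proj_x = x @ axes.T

# Start from the codeword nearest on the first (mean-like) axis.
best = int(np.argmin(np.abs(proj_cb[:, 0] - proj_x[0])))
best_dist = np.linalg.norm(x - codebook[best])

for j in range(N):
    # For orthonormal axes, d(x, c_j)^2 >= sum of squared projection gaps,
    # so the gap sum is a valid lower bound for rejection.
    bound = np.sum((proj_x - proj_cb[j]) ** 2)
    if bound >= best_dist ** 2:
        continue
    d = np.linalg.norm(x - codebook[j])
    if d < best_dist:
        best, best_dist = j, d
```

Adding axes tightens the lower bound and rejects more codewords, at the cost of N stored projections per axis, matching the memory trade-off noted in the conclusion.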

Conclusion

In this paper, we propose a new fast VQ encoding algorithm that uses multiple projection axes. For this purpose, a new inequality between the distance and the projection points is derived. Experimental results confirm that the proposed algorithm outperforms the ENNS algorithm. However, it should be noted that the proposed algorithm requires N additional memory per projection axis. We are currently investigating the optimal method of obtaining projection axes.

References (7)

  • S. Baek et al., A fast encoding algorithm for vector quantization, IEEE Signal Process. Lett. (December 1997)
  • R.M. Gray, Vector quantization, IEEE Acoust. Speech Signal Process. Mag. (April 1984)
  • C.M. Huang, Q. Bi, G.S. Stiles, R.W. Harris, Fast full search equivalent encoding algorithms for image compression...
There are more references available in the full text version of this article.
