
KLT-based adaptive entropy-constrained quantization with universal arithmetic coding


Abstract:

For flexible speech coding, a Karhunen-Loève Transform (KLT) based adaptive entropy-constrained quantization (KLT-AECQ) method is proposed. It is composed of backward-adaptive linear predictive coding (LPC) estimation, KLT estimation based on the time-varying LPC coefficients, scalar quantization of the speech signal in the KLT domain, and superframe-based universal arithmetic coding based on the estimated KLT statistics. To minimize outliers in both rate and distortion, a new distortion criterion includes a penalty for rate increase. Gain-adaptive step-size selection and a bounded Gaussian source model also cooperate to increase the perceptual quality. KLT-AECQ requires neither an explicit codebook nor a training step, so it can operate at an unlimited number of rate-distortion points regardless of time-varying source statistics. For the speech signal, the conventional KLT-based classified vector quantization (KLT-CVQ) and the proposed KLT-AECQ yield signal-to-noise ratios of 17.86 and 26.22 dB, respectively, at around 16 kbits/s. The perceptual evaluation of speech quality (PESQ) scores for the two methods are 3.87 and 4.04, respectively.
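The core idea of the transform stage can be illustrated with a minimal sketch. This is not the authors' implementation: it substitutes a synthetic AR(1) source for LPC-modelled speech, estimates the frame covariance directly from the data rather than from backward-adaptive LPC coefficients, and omits the arithmetic coder and the rate-penalized distortion criterion. It shows only the KLT-domain scalar quantization step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(1) source as a stand-in for LPC-modelled speech (assumption)
n = 4096
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

# Frame the signal and estimate the per-frame covariance matrix
L = 8
frames = x[: n - n % L].reshape(-1, L)
C = frames.T @ frames / len(frames)

# KLT basis: eigenvectors of the covariance (decorrelates the coefficients)
eigvals, V = np.linalg.eigh(C)

# Uniform scalar quantization in the KLT domain, then inverse transform
step = 0.5  # illustrative step size; the paper adapts this to the gain
y = frames @ V
yq = step * np.round(y / step)
rec = yq @ V.T

snr_db = 10 * np.log10(np.sum(frames**2) / np.sum((frames - rec) ** 2))
```

Because the KLT basis is orthonormal, quantization noise added in the transform domain carries over unchanged to the signal domain, which is why scalar quantization of the decorrelated coefficients is effective here.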
Published in: IEEE Transactions on Consumer Electronics ( Volume: 56, Issue: 4, November 2010)
Page(s): 2601 - 2605
Date of Publication: 30 November 2010

