
Signal Processing

Volume 81, Issue 2, February 2001, Pages 345-356

Quantization errors in averaged digitized data

https://doi.org/10.1016/S0165-1684(00)00212-7

Abstract

Analytic expressions are derived that describe the average quantization error in digitized data with additive noise. The magnitude of this error depends on the noise present in the analog signal, on the bin size (the difference between neighboring quantization levels), and on the signal itself. An iterative process that corrects for these residual quantization errors after averaging is proposed and tested in simulations. Alternatively, a method is suggested for avoiding quantization errors during the digitization of signals that will later be averaged.

Introduction

Digitization of images or signals is a crucial step in computerized image and signal processing, and it deserves some consideration in order to preserve signal quality in the transition from analog to digital. In the field of microscopy, quantization takes place either during the recording process in the microscope or as an intermediate step between recording and computer processing of images. The former type occurs in all scanning microscopes and in conventional (non-scanning) microscopes equipped with an electronic array detector of some sort. An example of the latter type of quantization is the scanning of micrographs. Many applications of microscopy, such as imaging of two-dimensional crystals or single-particle reconstruction, involve averaging of data after digitization. The error or noise introduced by quantization of ideal signals has been described in statistical terms in the classic paper [2] and in more general terms in [5]. The question addressed in this paper is how quantization errors affect averages taken from digitized images or signals that contain additive noise.

Section snippets

An expression for quantization errors in averaged data with additive noise

The following analysis is based on the assumption that noisy data is digitized with relatively widely spaced quantization levels (for instance, using a 1-byte digitizer) and subsequently averaged using a much finer representation. This is a common situation, since the output of many digitizers is limited to 1 byte, whereas in subsequent processing steps it is often advantageous to convert to a finer data representation. This is all the more important if, for technical reasons, it is not possible

Normal noise

Choosing a normal distribution to describe the noise,

$$P(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right),$$

some straightforward calculations lead to the following expression for the averaged quantized signal:

$$\langle Q_N(f)\rangle = s - \langle E\rangle = s - \frac{b}{\pi}\sum_{m=1}^{\infty}\frac{(-1)^{m+1}}{m}\exp\left(-\frac{2\pi^2\sigma^2 m^2}{b^2}\right)\sin\left(\frac{2\pi s m}{b}\right)$$

(which, for $\sigma = 0$, simplifies to $\langle Q_N(f)\rangle = s - \mathrm{Saw}_b(s)$, as it should).

Fig. 2 compares an averaged quantized signal calculated by Eq. (9) to the same average produced by actually carrying out quantization and averaging of a set of noisy signals in a computer simulation.

A measure of the

A posteriori correction of quantization errors

The general expression for the averaged quantized signal, Eq. (3), immediately leads to another procedure for correcting the quantization error still present in the digitized data after averaging, without needing to choose specific quantization levels or modify the electronics of the digitizer. $\langle E\rangle$ can be treated as a correction term that has to be added to the average to obtain the ideal signal:

$$s = \langle Q(f)\rangle + E(s),$$

where the correction term is

$$E(s) = \frac{b}{\pi}\sum_{m=1}^{\infty}\frac{(-1)^{m+1}}{m}\left[\mathrm{Re}\,\Phi_n\!\left(\frac{2\pi m}{b}\right)\sin\left(\frac{2\pi s m}{b}\right) + \mathrm{Im}\,\Phi_n\!\left(\frac{2\pi m}{b}\right)\cos\left(\frac{2\pi s m}{b}\right)\right]$$

in the general case, and $E(s) = \frac{b}{\pi}$

Discussion and conclusion

Quantization errors are most severe if the quantization levels are spaced widely compared to the noise amplitude. If, however, the noise is large compared to the spacing of the quantization levels, quantization errors are very effectively smoothed. A variable offset has the same effect in averaging out quantization errors. This shows that the more stable an image or signal is, and the higher the signal-to-noise ratio at a given spacing of quantization levels, the more severe the distortion due to

References (11)

  • L.D. Marks, Wiener-filter enhancement of noisy HREM images, Ultramicroscopy (1996)
  • C. Ai et al., Removing the quantization error by repeated observation (image processing), IEEE Trans. Signal Process. (1991)
  • W.R. Bennett, Spectra of quantized signals, Bell System Tech. J. (1948)
  • G.H. Campbell, Analysis of experimental error in high resolution electron micrographs, Microsc. Microanal. (1997)
  • P. Carbone et al., Effect of additive dither on the resolution of ideal quantizers, IEEE Trans. Instrum. Meas. (1994)
