Quantization errors in averaged digitized data
Introduction
Digitization of images or signals is a crucial step in computerized image and signal processing, and it deserves some consideration in order to preserve signal quality in the transition from analog to digital. In the field of microscopy, quantization takes place either during the recording process in the microscope or as an intermediate step between recording and computer processing of images. The former type occurs in all scanning microscopes and in conventional (nonscanning) microscopes equipped with an electronic array detector of some sort. An example of the latter type of quantization is the scanning of micrographs. Many applications of microscopy, such as imaging two-dimensional crystals or single-particle reconstruction, involve averaging of data after digitization. The error or noise introduced by quantization of ideal signals has been described in statistical terms in the classic paper [2] and in more general terms in [5]. The question addressed in this paper is how quantization errors affect averages taken from digitized images or signals which contain additive noise.
An expression for quantization errors in averaged data with additive noise
The following analysis is based on the assumption that noisy data is digitized with relatively widely spaced quantization levels (for instance using a 1-byte digitizer) and subsequently averaged using a much finer representation. This is a common situation since the output of many digitizers is limited to 1 byte whereas in subsequent processing steps it is often advantageous to convert to a finer data representation. This is all the more important if, for technical reasons, it is not possible
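The situation described above can be illustrated with a short simulation. The following is a minimal sketch, assuming a 1-byte rounding digitizer with unit level spacing and Gaussian additive noise; the helper `quantize_8bit`, the value 100.3, and the noise level are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_8bit(x, step=1.0):
    """Round to the nearest of 256 evenly spaced levels, as a 1-byte digitizer would."""
    return np.clip(np.round(x / step), 0, 255) * step

true_value = 100.3   # hypothetical signal value lying between two levels
N = 100_000          # number of digitized copies to average

# Without noise every digitized copy is identical, and averaging in a finer
# (floating-point) representation cannot recover the fractional part:
print(quantize_8bit(np.full(N, true_value)).mean())   # 100.0

# With additive noise on the order of the level spacing, the float average
# of the coarse 8-bit data comes very close to the true value:
noisy = true_value + 0.5 * rng.standard_normal(N)
print(quantize_8bit(noisy).mean())                    # ~100.3
```

This also previews the paper's central point: the fractional information survives averaging only because the noise randomizes which level each copy is rounded to.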
Normal noise
Choosing a normal distribution to describe the noise, some straightforward calculations lead to the following expression for the averaged quantized signal: (which, for σ=0, simplifies to as it should).
Fig. 2 compares an averaged quantized signal calculated by Eq. (9) to the same average produced by actually carrying out quantization and averaging of a set of noisy signals in a computer simulation.
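Since Eq. (9) itself is not reproduced in this excerpt, the comparison can be sketched from first principles: for normal noise, the mean of the quantized signal is the sum over quantization levels of the level value times the probability that a noisy sample falls into that level's bin. A minimal sketch, assuming a unit-step rounding quantizer; `expected_quantized` and the test values are illustrative, not the paper's notation:

```python
import math, random

def expected_quantized(s, sigma, kmax=50):
    """Theoretical mean of round(s + n) for Gaussian noise n ~ N(0, sigma^2):
    each integer level k contributes k times the probability that the noisy
    sample falls in its rounding bin [k - 0.5, k + 0.5)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    k0 = round(s)
    return sum(k * (Phi((k + 0.5 - s) / sigma) - Phi((k - 0.5 - s) / sigma))
               for k in range(k0 - kmax, k0 + kmax + 1))

# Monte Carlo check: quantize and average many noisy copies directly,
# mimicking the computer simulation described in the text.
random.seed(1)
s, sigma = 3.3, 0.4
mc = sum(round(s + random.gauss(0.0, sigma)) for _ in range(200_000)) / 200_000
print(expected_quantized(s, sigma), mc)   # the two should agree closely
```

For σ → 0 the bin probabilities collapse to 0 or 1 and the expression reduces to plain rounding, matching the remark after the equation above.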
A measure of the
A posteriori correction of quantization errors
The general expression for the averaged quantized signal (3) immediately leads to another procedure for correcting the quantization error still present in the digitized data after averaging, without needing to choose specific quantization levels or modify the electronics of the digitizer. 〈E〉 can be treated as a correction term that has to be added to the average to obtain the ideal signal: where the correction term is, in the general case,
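The explicit correction term 〈E〉 is not reproduced in this excerpt. One hedged numerical route to the same end, assuming Gaussian noise of known σ and a unit-step rounding quantizer, is to invert the forward map s ↦ 〈Q〉 by bisection; this is a sketch of the idea, not the paper's formula, and it requires σ to be large enough relative to the level spacing for the map to be monotone (all names and values below are illustrative):

```python
import math

def expected_quantized(s, sigma, step=1.0, kmax=50):
    """Mean of round((s + n)/step)*step for Gaussian noise n ~ N(0, sigma^2):
    sum over levels of (level value) * P(noisy sample rounds to that level)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    k0 = round(s / step)
    return sum(
        k * step * (Phi(((k + 0.5) * step - s) / sigma)
                    - Phi(((k - 0.5) * step - s) / sigma))
        for k in range(k0 - kmax, k0 + kmax + 1)
    )

def correct_average(q_avg, sigma, step=1.0, tol=1e-9):
    """Recover the ideal signal value from the averaged quantized value by
    bisection on s -> <Q>; the map is monotone for sigma not too small
    relative to step (sigma = 0.25 * step below is safely in that regime)."""
    lo, hi = q_avg - step, q_avg + step
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_quantized(mid, sigma, step) < q_avg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_true, sigma = 7.3, 0.25                    # illustrative values
q_avg = expected_quantized(s_true, sigma)    # biased average left after quantization
print(q_avg, correct_average(q_avg, sigma))  # correction recovers ~7.3
```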
Discussion and conclusion
Quantization errors are most severe if the quantization levels are spaced widely compared to the noise amplitude. If, however, the noise is large compared to the spacing of quantization levels, quantization errors are very effectively smoothed. A variable offset has the same effect in averaging out quantization errors. This shows that the more stable an image or signal is, and the higher the signal-to-noise ratio at a given spacing of quantization levels, the more severe the distortion due to
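The smoothing effect described above can be demonstrated numerically: the RMS deviation between true values and float averages of their quantized noisy copies drops sharply once the noise amplitude approaches the level spacing. A sketch with illustrative parameters (the function name, grid, and sample counts are assumptions for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(2)

def avg_quantization_error(sigma, n_signals=4000, step=1.0):
    """RMS deviation between true values and the float averages of n_signals
    quantized noisy copies, for a grid of true values spanning one level."""
    truths = np.linspace(10.0, 11.0, 101)
    noisy = truths[:, None] + sigma * rng.standard_normal((truths.size, n_signals))
    averaged = np.round(noisy / step).mean(axis=1) * step
    return float(np.sqrt(np.mean((averaged - truths) ** 2)))

# RMS error vs. noise amplitude: largest at sigma = 0 (no smoothing at all),
# small once sigma is comparable to the level spacing.
for sigma in (0.0, 0.1, 0.3, 0.5):
    print(sigma, avg_quantization_error(sigma))
```

At σ = 0 the residual RMS error is set entirely by the rounding (about step/√12); by σ = 0.5·step it is dominated by the finite number of averaged copies rather than by quantization.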
References (11)
- Wiener-filter enhancement of noisy HREM images, Ultramicroscopy (1996)
- et al., Removing the quantization error by repeated observation (image processing), IEEE Trans. Signal Process. (1991)
- Spectra of quantized signals, Bell Systems Tech. J. (1948)
- Analysis of experimental error in high resolution electron micrographs, Microsc. Microanal. (1997)
- et al., Effect of additive dither on the resolution of ideal quantizers, IEEE Trans. Instrum. Meas. (1994)