Abstract
In this paper we investigate the combined effects of quantization and clipping on high-order function neural networks (HOFNNs). Statistical models are used to analyze the effects of quantization in a digital implementation. We analyze the performance degradation as a function of the number of fixed-point and floating-point quantization bits under different assumed probability distributions for the quantized variables, and we compare the training performance with and without weight clipping. Based on these statistical models, we establish and analyze, for a true nonlinear neuron, the relationships between input/output bit resolution, training and quantization methods, network order, and performance degradation, for both on-chip and off-chip training. Our simulation results verify the presented theoretical analysis.
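As an informal illustration of the two regimes the abstract contrasts, the Python sketch below uniformly quantizes Gaussian-distributed weights to a given number of fixed-point bits, with and without weight clipping. The function and parameter names are illustrative assumptions of ours, not the paper's notation, and the paper's own analysis is statistical rather than code-based.

```python
import numpy as np

def quantize_fixed_point(w, n_bits, clip_range=None):
    """Uniformly quantize weights to n_bits fixed-point levels.

    If clip_range is given, weights are first clipped to
    [-clip_range, +clip_range] and that range fixes the quantization
    step; otherwise the step is set by the largest weight magnitude.
    (Illustrative sketch only; names are not from the paper.)
    """
    if clip_range is not None:
        w = np.clip(w, -clip_range, clip_range)   # weight clipping
        scale = clip_range
    else:
        scale = np.max(np.abs(w))                 # full dynamic range
    step = scale / 2 ** (n_bits - 1)              # quantization step size
    return np.round(w / step) * step              # round to nearest level

# Example: 8-bit quantization of Gaussian-distributed weights,
# one of the probability distributions assumed for the variables.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, size=10_000)
for clip_range in (None, 2.0):
    wq = quantize_fixed_point(w, n_bits=8, clip_range=clip_range)
    print(clip_range, np.mean((w - wq) ** 2))     # mean squared error
```

With clipping, the quantization step is fixed by the clipping range rather than by the largest weight, which mirrors the trade-off between clipping distortion and quantization noise that the paper's statistical analysis quantifies.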
Cite this article
Jiang, M., Gielen, G. Analysis of quantization effects on high-order function neural networks. Appl Intell 28, 51–67 (2008). https://doi.org/10.1007/s10489-007-0041-7