
Analysis of quantization effects on high-order function neural networks

Applied Intelligence

Abstract

In this paper we investigate the combined effects of quantization and clipping on high-order function neural networks (HOFNN). Statistical models are used to analyze the effects of quantization in a digital implementation. We analyze the performance degradation as a function of the number of fixed-point and floating-point quantization bits under different assumed probability distributions for the quantized variables, and compare the training performance with and without weight clipping. Based on statistical models, and for both on-chip and off-chip training, we establish and analyze the relationships for a true nonlinear neuron between input and output bit resolution, training and quantization methods, network order, and performance degradation. Our simulation results verify the presented theoretical analysis.
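As a rough illustration of the operations analyzed in the paper, the sketch below applies uniform fixed-point quantization and clipping to a single high-order (functional-link style) neuron and checks the measured quantization-error variance against the usual statistical model of Delta^2/12. This is not the authors' implementation: the function names, the second-order feature expansion, the tanh nonlinearity, and the clipping range are all illustrative assumptions.

```python
import numpy as np

def quantize_fixed_point(x, n_bits, clip_range=1.0):
    """Round x to a uniform n_bits fixed-point grid on [-clip_range, clip_range]."""
    step = 2.0 * clip_range / (2 ** n_bits)           # quantization step size (Delta)
    x_clipped = np.clip(x, -clip_range, clip_range)   # weight/signal clipping (saturation)
    return np.round(x_clipped / step) * step          # round to the nearest level

def high_order_features(x):
    """Second-order functional-link expansion: the inputs plus all pairwise products."""
    n = len(x)
    cross = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, np.array(cross)])

def hofnn_output(x, w, n_bits=None):
    """One high-order neuron: optionally quantize inputs and weights, then apply tanh."""
    phi = high_order_features(x)
    if n_bits is not None:
        phi = quantize_fixed_point(phi, n_bits)
        w = quantize_fixed_point(w, n_bits)
    return np.tanh(phi @ w)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=4)
w = rng.normal(0.0, 0.5, size=len(high_order_features(x)))

y_ref = hofnn_output(x, w)                            # full-precision reference output
for bits in (4, 8, 12, 16):
    y_q = hofnn_output(x, w, n_bits=bits)
    print(f"{bits:2d} bits: |output error| = {abs(y_q - y_ref):.2e}")

# Empirical check of the standard statistical model of quantization noise:
# rounding error ~ uniform on [-Delta/2, Delta/2], with variance Delta^2 / 12.
bits = 8
delta = 2.0 / (2 ** bits)
samples = rng.uniform(-1.0, 1.0, size=100_000)
err = quantize_fixed_point(samples, bits) - samples
print(f"measured error variance: {err.var():.3e}, model Delta^2/12: {delta**2 / 12:.3e}")
```

Under this noise model, each additional quantization bit halves the step size Delta and therefore reduces the error variance by a factor of four, which is the kind of bits-versus-degradation relationship the paper quantifies for HOFNNs.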



Author information


Corresponding author

Correspondence to Minghu Jiang.


About this article

Cite this article

Jiang, M., Gielen, G. Analysis of quantization effects on high-order function neural networks. Appl Intell 28, 51–67 (2008). https://doi.org/10.1007/s10489-007-0041-7
