Backpropagation Analysis of the Limited Precision on High-Order Function Neural Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3173)

Abstract

Quantization analysis of limited precision is widely used in the hardware realization of neural networks. Because most of the neural computation occurs in the training phase, the effects of quantization are most significant there. We analyze backpropagation training and recall under limited precision on high-order function neural networks (HOFNNs), point out the potential problems and the performance sensitivity of low-bit quantization, and compare training performance with and without weight clipping. We derive the effects of quantization error on backpropagation for both on-chip and off-chip training. Our experimental simulation results verify the presented theoretical analysis.
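The on-chip versus off-chip distinction above can be illustrated with a minimal sketch. This is a hypothetical toy setup (a single linear neuron trained by LMS, not the paper's HOFNN architecture): on-chip training stores every weight update at limited fixed-point precision with weight clipping, while off-chip training runs in full precision and quantizes only the final weights.

```python
import numpy as np

def quantize(w, bits, clip=1.0):
    """Round w onto a fixed-point grid with `bits` fractional bits,
    saturating at +/- clip (weight clipping)."""
    step = 2.0 ** -bits
    return np.round(np.clip(w, -clip, clip) / step) * step

# Toy data: a single linear neuron y = w_true . x
# (hypothetical setup for illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
w_true = np.array([0.4, -0.7, 0.2])
y = x @ w_true

lr, bits = 0.05, 6  # 6 fractional bits -> quantization step 1/64

# "On-chip" style: every weight update is stored at limited precision,
# so quantization error enters at each training step.
w_on = np.zeros(3)
for xi, yi in zip(x, y):
    w_on = quantize(w_on + lr * (yi - xi @ w_on) * xi, bits)

# "Off-chip" style: train in full precision, quantize only the final
# weights before downloading them to hardware.
w_off = np.zeros(3)
for xi, yi in zip(x, y):
    w_off += lr * (yi - xi @ w_off) * xi
w_off = quantize(w_off, bits)

print("on-chip  residual:", np.abs(w_on - w_true).max())
print("off-chip residual:", np.abs(w_off - w_true).max())
```

In the off-chip case the residual error is bounded by the float convergence error plus half a quantization step; in the on-chip case updates smaller than half a step are rounded away, so training can stall, which is the low-bit sensitivity the abstract refers to.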




Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jiang, M., Gielen, G. (2004). Backpropagation Analysis of the Limited Precision on High-Order Function Neural Networks. In: Yin, FL., Wang, J., Guo, C. (eds) Advances in Neural Networks – ISNN 2004. ISNN 2004. Lecture Notes in Computer Science, vol 3173. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-28647-9_52

  • DOI: https://doi.org/10.1007/978-3-540-28647-9_52

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22841-7

  • Online ISBN: 978-3-540-28647-9

  • eBook Packages: Springer Book Archive
