
On the computational power of limited precision weights neural networks in classification problems: How to calculate the weight range so that a solution will exist

  • Neural Modeling (Biophysical and Structural Models)
  • Conference paper
Foundations and Tools for Neural Modeling (IWANN 1999)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1606)

Included in the following conference series: International Work-Conference on Artificial Neural Networks (IWANN)

Abstract

This paper analyzes some aspects of the computational power of neural networks that use integer weights in a very restricted range. Using integer values in a limited range opens the way to efficient VLSI implementations because (i) a limited weight range translates into reduced storage requirements and (ii) integer computation can be implemented more efficiently than floating-point computation. The paper concentrates on classification problems and shows that, if the weights are restricted drastically (in both range and precision), the existence of a solution can no longer be taken for granted. We show that, if the weight range is not chosen carefully, the network will be unable to implement a solution regardless of the number of units available in the first hidden layer. The paper presents an existence result that relates the difficulty of the problem, characterized by the minimum distance between patterns of different classes, to the weight range necessary to ensure that a solution exists. This result allows us to calculate a weight range for a given category of problems and to be confident that a network with integer weights in that range is able to solve those problems.
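
The abstract characterizes problem difficulty by the minimum distance between patterns belonging to different classes. The Python sketch below merely illustrates how that quantity can be computed for a toy two-class data set; the function name, the toy data, and the comments are illustrative assumptions, and the paper's actual existence result, which maps this distance to a sufficient integer weight range, is not reproduced here.

# Minimal sketch (not the paper's construction): compute the minimum
# Euclidean distance between patterns of different classes, the quantity
# the abstract uses to characterize problem difficulty.

import numpy as np

def min_interclass_distance(patterns, labels):
    """Smallest Euclidean distance between any two patterns of different classes."""
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    best = np.inf
    for i in range(len(patterns)):
        for j in range(i + 1, len(patterns)):
            if labels[i] != labels[j]:
                best = min(best, np.linalg.norm(patterns[i] - patterns[j]))
    return best

if __name__ == "__main__":
    # Hypothetical two-class problem in 2-D, for illustration only.
    X = [[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 0.8]]
    y = [0, 0, 1, 1]
    d_min = min_interclass_distance(X, y)
    print(f"minimum inter-class distance: {d_min:.3f}")
    # The paper's theorem relates d_min to an integer weight range that is
    # guaranteed to contain a solution; intuitively, the smaller d_min is,
    # the larger the range that must be allowed. The exact bound is given
    # in the paper and is not restated here.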




Author information

Authors

S. Draghici

Editor information

José Mira, Juan V. Sánchez-Andrés


Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Draghici, S. (1999). On the computational power of limited precision weights neural networks in classification problems: How to calculate the weight range so that a solution will exist. In: Mira, J., Sánchez-Andrés, J.V. (eds) Foundations and Tools for Neural Modeling. IWANN 1999. Lecture Notes in Computer Science, vol 1606. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0098197


  • DOI: https://doi.org/10.1007/BFb0098197

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66069-9

  • Online ISBN: 978-3-540-48771-5

  • eBook Packages: Springer Book Archive
