On the possibilities of the limited precision weights neural networks in classification problems

  • Neural Nets Simulation, Emulation and Implementation
  • Conference paper
  • First Online:
Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1240)

Abstract

Limited precision neural networks are better suited for hardware implementation. Several researchers have proposed algorithms able to train neural networks with limited precision weights, and it has been suggested that the limitations introduced by limited precision weights can be compensated for by an increased number of layers. This paper shows that, from a theoretical point of view, neural networks with integer weights in the range [-p, p] can solve classification problems in which the minimum Euclidean distance between two patterns of opposite classes is 1/p. This result can be used in an information-theoretic context to calculate a bound on the number of bits necessary for solving a problem. It is shown that the number of bits is bounded by m*n*log(2pD), where m is the number of patterns, n is the dimensionality of the space, p is the weight range and D is the radius of a sphere containing all patterns.
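The short sketch below (not part of the original paper) illustrates how the stated bound could be evaluated numerically. It assumes a base-2 logarithm, since the bound counts bits, and the helper names required_weight_range and bit_bound are hypothetical, introduced here only to make the two quantities from the abstract concrete.

import math

# Illustrative sketch, not the authors' construction: evaluate the bound
# m*n*log2(2*p*D) stated in the abstract, and the smallest integer weight
# range p compatible with a given minimum inter-class distance (the 1/p
# condition). Base-2 logarithm is an assumption, since the bound counts bits.

def required_weight_range(d_min: float) -> int:
    """Smallest integer p such that the minimum Euclidean distance d_min
    between patterns of opposite classes is at least 1/p."""
    return math.ceil(1.0 / d_min)

def bit_bound(m: int, n: int, p: int, D: float) -> float:
    """Upper bound m * n * log2(2*p*D) on the number of bits needed
    to solve the classification problem."""
    return m * n * math.log2(2 * p * D)

if __name__ == "__main__":
    # Hypothetical example: 100 patterns in 8 dimensions, enclosed in a
    # sphere of radius D = 4, with minimum inter-class distance 0.1.
    d_min = 0.1
    p = required_weight_range(d_min)                  # p = 10
    print("weight range p:", p)
    print("bit bound:", bit_bound(100, 8, p, 4.0))    # 100*8*log2(80), about 5058 bits

For instance, halving the minimum inter-class distance doubles the required weight range p, but it increases the bit bound only logarithmically, which is consistent with the form of the bound given in the abstract.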



Author information

Authors

S. Draghici, I. K. Sethi

Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Draghici, S., Sethi, I.K. (1997). On the possibilities of the limited precision weights neural networks in classification problems. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032534


  • DOI: https://doi.org/10.1007/BFb0032534

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0

  • eBook Packages: Springer Book Archive
