
VLSI Optimal Neural Network Learning Algorithm

  • Conference paper
Artificial Neural Nets and Genetic Algorithms

Abstract

In this paper we consider binary neurons having a threshold nonlinear transfer function and detail a novel direct design algorithm, an alternative to the classical learning algorithms, which determines the number of layers, the number of neurons in each layer, and the synaptic weights of a particular neural network. While the feedforward neural network is still described by m examples of n bits each, the optimisation criteria are changed: besides the classical size and depth, we also use the A and AT² complexity measures of VLSI circuits (A being the area of the chip, and T the delay for propagating the inputs to the outputs). We consider the maximum fan-in of one neuron as a parameter and show its influence on the area, obtaining a full class of solutions. Results are compared with another constructive algorithm. Further directions for research are pointed out in the conclusions, together with some open questions.
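The design style the abstract describes can be caricatured in a few lines of Python. This is a hedged sketch, not the paper's algorithm: the matched-filter construction, the fan-in parameter `delta`, and the area proxy `size * delta` are illustrative assumptions, meant only to show how bounding the maximum fan-in of a threshold gate trades depth against area.

```python
# Illustrative sketch only -- NOT the authors' algorithm. It shows the kind of
# direct (constructive) design the abstract describes: one threshold gate per
# positive example, merged by an OR tree whose shape depends on the maximum
# fan-in delta, which in turn drives the size/depth (and hence A, AT^2) cost.
import math

def threshold_gate(weights, theta, x):
    """Binary threshold neuron: fires iff the weighted input sum reaches theta."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= theta else 0

def tree_depth(k, delta):
    """Layers of gates, each with at most delta inputs, needed to merge k signals."""
    depth = 0
    while k > 1:
        k = math.ceil(k / delta)
        depth += 1
    return depth

def tree_gate_count(k, delta):
    """Number of gates in that merging tree."""
    count = 0
    while k > 1:
        k = math.ceil(k / delta)
        count += k
    return count

def direct_design(examples, delta):
    """Build a matched-filter gate for every positive example and estimate the
    network's size, depth and a crude AT^2 cost under maximum fan-in delta."""
    positives = [x for x, y in examples if y == 1]
    # Weight +1 for 1-bits, -1 for 0-bits; threshold = number of 1-bits,
    # so each hidden gate fires only on its own example.
    gates = [([1 if b else -1 for b in x], sum(x)) for x in positives]
    size = len(gates) + tree_gate_count(len(gates), delta)
    depth = 1 + tree_depth(len(gates), delta)   # hidden layer + OR tree
    area = size * delta                         # rough proxy: gates x fan-in
    return {"gates": gates, "size": size, "depth": depth, "AT2": area * depth ** 2}

def evaluate(gates, x):
    """OR of the hidden gates (the tree realisation computes the same function)."""
    return 1 if any(threshold_gate(w, t, x) for w, t in gates) else 0
```

On 2-bit XOR with `delta=2`, for instance, the sketch produces a network of 3 gates and depth 2; raising `delta` shortens the OR tree but makes each gate wider, which is the area/delay trade-off the paper parameterises by the maximum fan-in.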




Copyright information

© 1995 Springer-Verlag/Wien

About this paper

Cite this paper

Beiu, V., Taylor, J.G. (1995). VLSI Optimal Neural Network Learning Algorithm. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-7535-4_18


  • DOI: https://doi.org/10.1007/978-3-7091-7535-4_18

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-82692-8

  • Online ISBN: 978-3-7091-7535-4

  • eBook Packages: Springer Book Archive
