
On the power of networks of majority functions

  • Neural Network Theories, Neural Models
  • Conference paper
Artificial Neural Networks (IWANN 1991)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 540))


Abstract

Quantization of the synaptic weights is a central problem in the hardware implementation of neural networks using numerical technology. In this paper, a particular linear threshold Boolean function, called the majority function, is considered, whose synaptic weights are restricted to only three values: −1, 0, +1. Some results about the complexity of circuits composed of such gates are reported. They show that this simple family of functions remains powerful in terms of circuit complexity. The learning problem for this subclass of threshold functions is also studied, and numerical experiments with different algorithms are reported.
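
To make the definition concrete, here is a minimal sketch (in Python) of a single majority gate as described above: a linear threshold Boolean function whose synaptic weights take only the values −1, 0, +1. The bipolar input encoding {−1, +1} and the zero threshold are illustrative assumptions only, not details taken from the paper.

    # Sketch of a majority gate: a linear threshold Boolean function whose
    # synaptic weights are restricted to {-1, 0, +1}.  Bipolar inputs and a
    # zero threshold are assumptions made for illustration.
    def majority_gate(inputs, weights):
        """Return +1 if the weighted sum of the inputs is non-negative, else -1."""
        assert all(w in (-1, 0, 1) for w in weights), "weights must be -1, 0 or +1"
        s = sum(w * x for w, x in zip(weights, inputs))
        return 1 if s >= 0 else -1

    # Example: a 3-input majority vote (all weights +1) over bipolar inputs.
    print(majority_gate([+1, -1, +1], [1, 1, 1]))   # +1: two of the three inputs are +1
    print(majority_gate([-1, -1, +1], [1, 1, 1]))   # -1: the majority of inputs are -1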

Supported by grant 20-5637.88 of the Swiss National Science Foundation.



Editor information

Alberto Prieto


Copyright information

© 1991 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mayoraz, E. (1991). On the power of networks of majority functions. In: Prieto, A. (eds) Artificial Neural Networks. IWANN 1991. Lecture Notes in Computer Science, vol 540. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0035880


  • DOI: https://doi.org/10.1007/BFb0035880


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-54537-8

  • Online ISBN: 978-3-540-38460-1

  • eBook Packages: Springer Book Archive
