Artificial neural networks with quasipolynomial synapses and product synaptic contacts

Abstract

A neural network model with quasipolynomial synapses and product synaptic contacts is investigated. The model generalizes the sigma-pi and product unit models. Which quasipolynomial terms are used, and how many, for both individual variables and cross-product terms, is learned rather than predetermined, subject to hardware constraints. Three cases are considered. In case 1, the number of learnable parameters needed is determined during learning; this can be viewed as another method of "growing" a network for a given task, although the graph of the network is fixed. Mechanisms are designed to prevent the network from growing too many parameters. In cases 2 and 3, the number of parameters allowed or available is fixed. These two cases may offer both some control over the generalizability of learning and flexibility in functional representation, and may provide a compromise between the complexity of loading and the generalizability of learning. Gradient-descent algorithms are developed for training feedforward networks with polynomial synapses and product contacts. Hardware issues are considered, and experimental results are presented.
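
The abstract does not give the unit's input-output equation, so the sketch below is only an illustration of the general idea, under stated assumptions: each synapse computes a learnable quasipolynomial of its input, s_i(x_i) = sum_k w[i,k] * x_i**p[i,k], with real-valued weights and exponents (real exponents as in product units), and a product synaptic contact multiplies the synaptic outputs before a sigmoidal nonlinearity. The class name QuasipolynomialUnit and all parameter names are hypothetical, not the authors' notation.

import numpy as np

class QuasipolynomialUnit:
    """One unit: y = sigmoid( prod_i sum_k w[i,k] * x[i]**p[i,k] ).

    Hypothetical reading of "quasipolynomial synapses with product
    synaptic contacts"; inputs must be positive because the exponents
    p[i,k] are real-valued.
    """

    def __init__(self, n_inputs, n_terms, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.normal(scale=0.1, size=(n_inputs, n_terms))  # term weights
        self.p = rng.uniform(0.5, 1.5, size=(n_inputs, n_terms))  # real exponents

    def forward(self, x):
        xc = x[:, None]                      # shape (n_inputs, 1)
        self.terms = self.w * xc ** self.p   # individual quasipolynomial terms
        self.s = self.terms.sum(axis=1)      # synaptic outputs s_i(x_i)
        self.net = np.prod(self.s)           # product synaptic contact
        self.y = 1.0 / (1.0 + np.exp(-self.net))
        return self.y

    def grad_step(self, x, grad_y, lr=0.01):
        """One gradient-descent step on w and p, given upstream gradient grad_y."""
        g_net = grad_y * self.y * (1.0 - self.y)   # through the sigmoid
        g_s = g_net * self.net / self.s            # d net / d s_i = prod_{j != i} s_j
        xc = x[:, None]
        self.w -= lr * g_s[:, None] * xc ** self.p             # d s_i / d w[i,k]
        self.p -= lr * g_s[:, None] * self.terms * np.log(xc)  # d s_i / d p[i,k]

In this sketch the inputs must be elementwise positive (e.g. rescaled into (0, 1]), and the division by self.s assumes no synaptic output is exactly zero. The paper's case-1 term growing and the fixed parameter budgets of cases 2 and 3 would operate on top of such a unit and are not sketched here.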

Additional information

On leave from the School of Computer Science, Technical University of Nova Scotia, Canada

Cite this article

Liang, P., Jamali, N. Artificial neural networks with quasipolynomial synapses and product synaptic contacts. Biol. Cybern. 70, 163–175 (1993). https://doi.org/10.1007/BF00200830