Abstract
A neural network model with quasipolynomial synapses and product contacts is investigated. The model further generalizes the sigma-pi and product unit models. Which quasipolynomial terms are used, and how many, both for individual variables and for cross-product terms, is learned rather than predetermined, subject to hardware constraints. Three possible cases are considered. In case 1, the number of learnable parameters needed is determined during learning; this can be viewed as another method of "growing" a network for a given task, although the graph of the network is fixed. Mechanisms are designed to prevent the network from growing too many parameters. In cases 2 and 3, the number of parameters allowed or available is fixed. These cases may offer both some control over the generalizability of learning and flexibility in functional representation, and may provide a compromise between the complexity of loading and the generalizability of learning. Gradient-descent algorithms are developed for training feedforward networks with polynomial synapses and product contacts. Hardware issues are considered, and experimental results are presented.
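The abstract does not specify the model's equations, but the general idea of a unit whose synapses apply learned polynomials and whose contacts multiply (rather than sum) incoming signals can be sketched roughly as follows. This is a hypothetical illustration, not the authors' formulation: the function names, the fixed sigmoid output nonlinearity, and the particular combination of one product contact per unit are all assumptions made for the example.

```python
import math

def poly_synapse(x, coeffs):
    """A polynomial synapse: sum_k coeffs[k] * x**k.
    In the model described, the number of terms (here len(coeffs))
    would itself be determined during learning, not fixed a priori."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

def product_contact(signals):
    """A product synaptic contact multiplies its incoming signals
    instead of summing them (cf. sigma-pi and product units)."""
    out = 1.0
    for s in signals:
        out *= s
    return out

def unit_output(inputs, synapse_coeffs):
    """One unit: each input passes through its own polynomial synapse,
    the transformed signals are combined by a product contact, and the
    result is squashed by a sigmoid (an assumed output nonlinearity)."""
    transformed = [poly_synapse(x, c) for x, c in zip(inputs, synapse_coeffs)]
    net = product_contact(transformed)
    return 1.0 / (1.0 + math.exp(-net))
```

With identity synapses (coefficients `[0, 1]`) the unit reduces to a sigmoided product of its inputs, recovering a simple product unit; richer coefficient vectors add the quasipolynomial terms that the paper's learning procedure would grow or prune.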
Additional information
On leave from the School of Computer Science, Technical University of Nova Scotia, Canada
Cite this article
Liang, P., Jamali, N. Artificial neural networks with quasipolynomial synapses and product synaptic contacts. Biol. Cybern. 70, 163–175 (1993). https://doi.org/10.1007/BF00200830