Abstract
The starting points of this paper are two size-optimal solutions: (i) one for implementing arbitrary Boolean functions [1]; and (ii) another for implementing certain sub-classes of Boolean functions [2]. Because VLSI implementations do not cope well with highly interconnected nets – the area of a chip grows with the cube of the fan-in [3] – this paper analyses the influence of limited fan-in on the size optimality of the two solutions mentioned. First, we extend a result of Horne & Hush [1], valid for fan-in Δ = 2, to arbitrary fan-ins. Second, we prove that size-optimal solutions are obtained for small constant fan-ins for both constructions, while relative minimum-size solutions can be obtained for fan-ins strictly lower than linear. These results agree with similar ones proving that for small constant fan-ins (Δ = 6 … 9) there exist VLSI-optimal (i.e., AT²-minimising) solutions [4], and with the similarly small constants characterising our capacity for processing information [5].
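As an illustrative sketch (not part of the paper itself), the cubic area-versus-fan-in relation cited from [3] can be made concrete: under the assumed model A = c·Δ³, doubling the fan-in multiplies the per-gate area eightfold, which is why restricting networks to small constant fan-ins pays off in silicon. The function name and the unit constant below are hypothetical choices for the sketch.

```python
# Toy model of the cubic area growth the abstract cites from [3]:
# per-gate chip area A = c * Delta**3, with Delta the fan-in.
# 'c' is an arbitrary technology-dependent constant (assumed 1.0 here).

def gate_area(fan_in, c=1.0):
    """Per-gate area under the assumed cubic growth model A = c * Delta**3."""
    return c * fan_in ** 3

# Doubling the fan-in multiplies the area by 2**3 = 8, regardless of Delta:
ratios = [gate_area(2 * d) / gate_area(d) for d in (2, 4, 8)]
# Each ratio equals 8 under this model, so a fan-in-9 gate already costs
# roughly 91 times the area of a fan-in-2 gate (9**3 / 2**3 ≈ 91.1).
```

This is only a scaling argument; the paper's actual results concern how such area costs interact with network size and depth.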
References
B.G. Horne and D.R. Hush, “On the node complexity of neural networks”, Neural Networks, 7(9), pp. 1413–1426, 1994.
N.P. Red'kin, “Synthesis of threshold circuits for certain classes of Boolean functions”, Kibernetika, 5, pp. 6–9, 1970 [English translation in Cybernetics, 6(5), pp. 540–544, 1973].
D. Hammerstrom, “The connectivity analysis of simple association – or – how many connections do you need”, in D.Z. Anderson (ed.) Neural Information Processing Systems, pp. 338–347, AIP: New York, NY, 1988.
V. Beiu, “Constant fan-in digital neural networks are VLSI-optimal”, Tech. Rep. LA-UR–97–61, Los Alamos National Laboratory, USA, 1997; in S.W. Ellacott, J.C. Mason and I.J. Anderson (eds) Mathematics of Neural Nets: Models, Algorithms and Applications, pp. 89–94, Kluwer Academic Publishers: Boston, MA, 1997.
G.A. Miller, “The magical number seven, plus or minus two: some limits on our capacity for processing information”, Psychological Review, 63, pp. 81–97, 1956.
R.C. Williamson, “ε-entropy and the complexity of feedforward neural networks”, in R.P. Lippmann, J.E. Moody and D.S. Touretzky (eds) Advances in Neural Information Processing Systems 3, pp. 946–952, Morgan Kaufmann: San Mateo, CA, 1990.
Y.S. Abu-Mostafa, “Connectivity versus entropy”, in D.Z. Anderson (ed.) Neural Information Processing Systems, pp. 1–8, AIP: New York, NY, 1988.
D.S. Phatak and I. Koren, “Connectivity and performance tradeoffs in the cascade correlation learning architecture”, IEEE Transactions on Neural Networks, 5(6), pp. 930–935, 1994.
J. Bruck and J.W. Goodman, “On the power of neural networks for solving hard problems”, in D.Z. Anderson (ed.) Neural Information Processing Systems, pp. 137–143, AIP: New York, NY, 1988 [also in Journal of Complexity, 6, pp. 129–135, 1990].
V. Beiu, J.A. Peperstraete, J. Vandewalle and R. Lauwereins, “Area-time performances of some neural computations”, in P. Borne, T. Fukuda and S.G. Tzafestas (eds) Proceedings of the IMACS International Symposium on Signal Processing, Robotics and Neural Networks (SPRANN'94), pp. 664–668, Lille, GERF EC, France, 1994.
P.L. Bartlett, “The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network”, Tech. Rep., Dept. Sys. Eng., Australian Natl. Univ., Canberra, 1996 [ftp:syseng.anu.edu.au/pub/peter/TR96d.ps.Z; short version in M.C. Mozer, M.I. Jordan & T. Petsche (eds) Advances in Neural Information Processing Systems 9, pp. 134–140, MIT Press: Cambridge, MA, 1997].
B.-T. Zhang and H. Mühlenbein, “Genetic programming of minimal neural networks using Occam's razor”, Tech. Rep. GMD 0734, Schloß Birlinghoven, St. Augustin, Germany, 1993 [also in Complex Systems, 7(3), pp. 199–220, 1993].
V. Beiu, “On the circuit and VLSI complexity of threshold gate COMPARISON”, Tech. Rep. LA-UR–96–3591, Los Alamos National Laboratory, USA, 1996 [also in Neurocomputing, 19, pp. 77–98, 1998].
V. Beiu, VLSI Complexity of Discrete Neural Networks, Gordon & Breach: Newark, NJ, 1998.
J. Wray and G.G.R. Green, “Neural networks, approximation theory, and finite precision computation”, Neural Networks, 8(1), pp. 31–37, 1995.
M.R. Walker, S. Haghighi, A. Afghan and L.A. Akers, “Training a limited-interconnect, synthetic neural IC”, in D.S. Touretzky (ed.) Advances in Neural Information Processing Systems 1, pp. 777–784, Morgan Kaufmann: San Mateo, CA, 1989.
V. Beiu, “When constants are important”, Tech. Rep. LA-UR–97–226, Los Alamos National Laboratory, USA, 1997; in I. Dumitrache (ed.) Proceedings of the 11th International Conference on Control Systems and Computer Science (CSCS-11), Vol. 2, pp. 106–111, Bucharest, UPB, Romania, 1997.
V. Beiu and J.G. Taylor, “On the circuit complexity of sigmoid feedforward neural networks”, Neural Networks, 9(7), pp. 1155–1171, 1996.
K.-Y. Siu, V.P. Roychowdhury and T. Kailath, “Depth-size tradeoffs for neural computations”, IEEE Transactions on Computers, 40(12), pp. 1402–1412, 1991.
V.P. Roychowdhury, A. Orlitsky and K.-Y. Siu, “Lower bounds on threshold and related circuits via communication complexity”, IEEE Transactions on Information Theory, 40(2), pp. 467–474, 1994.
S. Vassiliadis, S. Cotofana and K. Berteles, “2–1 addition and related arithmetic operations with threshold logic”, IEEE Transactions on Computers, 45(9), pp. 1062–1068, 1996.
Beiu, V., Makaruk, H.E. Deeper Sparsely Nets can be Optimal. Neural Processing Letters 8, 201–210 (1998). https://doi.org/10.1023/A:1009665432594