Original Contribution
A comparison study of binary feedforward neural networks and digital circuits

https://doi.org/10.1016/S0893-6080(05)80123-6

Abstract

A comparison study was carried out between feedforward neural networks composed of binary linear threshold units and digital circuits, generated by the regular partitioning algorithm and a modified Quine-McCluskey algorithm, respectively. The size of both types of networks and their generalisation properties are compared as a function of the nearest-neighbour correlation in the binary input sets. The ratio of the number of components required by digital circuits to the number of neurons grows linearly for the input sets considered. The neural networks considered do not outperform digital circuits with respect to generalisation. Sensitivity analysis leads to a preference for digital circuits, especially for an increasing number of inputs. For analog input sets, hybrid networks of binary neurons and logic gates are of interest.
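
As a toy illustration of the size comparison described above (not code from the paper), a single binary linear threshold unit realises the 3-input majority function, for which a minimised two-level circuit, such as one produced by the Quine-McCluskey procedure, needs three AND gates and one OR gate:

```python
# Hypothetical illustration (not from the paper): one binary linear
# threshold unit versus a minimised two-level gate circuit, both
# computing 3-input majority.

def threshold_unit(inputs, weights, theta):
    """Binary linear threshold unit: outputs 1 iff the weighted sum
    of the binary inputs reaches the threshold theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def majority_neuron(a, b, c):
    # A single neuron suffices: weights (1, 1, 1), threshold 2.
    return threshold_unit((a, b, c), (1, 1, 1), 2)

def majority_gates(a, b, c):
    # Minimised sum-of-products form ab + ac + bc
    # (e.g. via Quine-McCluskey): three AND gates and one OR gate.
    return (a & b) | (a & c) | (b & c)

# Both realise the same Boolean function on all 8 input patterns.
assert all(majority_neuron(a, b, c) == majority_gates(a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

Here one component on the neural side replaces four on the digital side; the abstract's point is that across the input sets studied this component ratio grows only linearly, and that size is not the only criterion (generalisation and sensitivity favour the circuits).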

References (16)

  • Aleksander, I.

    Neural computing architectures

    (1988)
  • Armstrong, W.W. et al.

    Some results concerning adaptive logic networks
  • Barkema, G.T. et al.

    Numerical study of phase transitions in Potts models

    Physical Review

    (1991)
  • Barkema, G.T. et al.

    The patch algorithm: Fast design of binary feedforward neural networks

    Neural Networks

    (1992)
  • Booth, T.L.

    Digital networks and computer systems

    (1971)
  • Dertouzos, M.L.

    Threshold logic: A synthesis approach

    (1965)
  • Frean, M.

    The upstart algorithm: A method for constructing and training feedforward neural networks

    Neural Computation

    (1990)
  • Keibek, S.A.J. et al.

    A fast partitioning algorithm and a comparison of binary feedforward neural networks

    Europhysics Letters

    (1992)
There are more references available in the full text version of this article.

Cited by (15)

  • C-Mantec: A novel constructive neural network algorithm incorporating competition between neurons

    2012, Neural Networks
    Citation Excerpt:

    Choosing the proper neural network architecture for a given classification problem remains a difficult issue (Baum & Haussler, 1989; Gómez, Franco, & Jerez, 2009; Lawrence, Giles, & Tsoi, 1996; Rumelhart, Hinton, & Williams, 1986), and despite the existence of several proposals to solve or alleviate this problem (Haykin, 1994), there is no general agreement on the strategy to follow in order to select an optimal neural network architecture. The computationally inefficient "trial and error" method is still widely used in applications of Artificial Neural Networks (ANNs), but as an alternative, different constructive neural algorithms have been proposed in recent years (Andree, Barkema, Lourens, Taal, & Vermeulen, 1993; Fahlman & Lebiere, 1990; Frean, 1990; García-Pedrajas & Ortiz-Boyer, 2007; Keibek, Barkema, Andree, Savenlie, & Taal, 1992; Mezard & Nadal, 1989; Nicoletti & Bertini, 2007; Parekh, Yang, & Honavar, 2000; Subirats, Jerez, & Franco, 2008; Utgoff & Stracuzzi, 2002). In general, constructive methods start with a small network (normally a single neuron in a single hidden layer) and then add new units as needed until a stopping criterion is met.
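
The generic constructive scheme the excerpt describes can be sketched on a toy problem. The following is a hypothetical illustration (not the C-Mantec algorithm itself): binary threshold units over a one-dimensional input are added one at a time, each correcting the first remaining misclassification, until the stopping criterion (zero training error) is met; the network output is the parity of the active units.

```python
# Hypothetical sketch of a constructive training loop (not C-Mantec):
# grow the set of threshold units until the stopping criterion is met.

def constructive_fit(targets):
    """Fit a binary pattern over inputs x = 0 .. len(targets)-1.

    Each hidden unit is a threshold t that fires when x >= t; the
    network output is the parity of the number of firing units.
    Units are added greedily until training error is zero.
    """
    thresholds = []  # start with an empty (small) network

    def predict(x):
        active = sum(x >= t for t in thresholds)
        return (targets[0] + active) % 2

    while True:
        errors = [x for x in range(len(targets)) if predict(x) != targets[x]]
        if not errors:           # stopping criterion: zero training error
            break
        thresholds.append(errors[0])  # add one unit fixing the first error

    return predict, len(thresholds)

# Example: the pattern 0,0,1,1,0,1 is learned with three units.
predict, n_units = constructive_fit([0, 0, 1, 1, 0, 1])
assert [predict(x) for x in range(6)] == [0, 0, 1, 1, 0, 1]
assert n_units == 3
```

Each added unit flips the output for all inputs to its right, so the first error index strictly increases between rounds and the loop terminates; this mirrors the "add units as needed" growth strategy, while actual constructive algorithms differ in how each new unit is trained and in the stopping criterion used.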
