Neural Networks

Volume 5, Issue 4, July–August 1992, Pages 605-625

Original Contribution
Convergence and divergence in neural networks: Processing of chaos and biological analogy*

https://doi.org/10.1016/S0893-6080(05)80039-5

Abstract

We have used simple neural networks as models to examine two interrelated biological questions: What are the functional implications of the converging and diverging projections that profusely interconnect neurons? How do the dynamical features of the input signal affect the responses of such networks? In this paper we examine subsets of these questions by using error back propagation learning as the network response in question. The dynamics of the input signals was suggested by our previous biological findings. These signals consisted of chaotic series generated by the recursive logistic equation, x_{n+1} = 3.95(1 − x_n)x_n, random noise, and sine functions. The input signals were also sent to a variety of teacher functions that controlled the type of computations networks were required to do. Single and double hidden-layer networks were used to examine, respectively, divergence and a combination of divergence and convergence. Networks containing single and multiple input/output units were used to determine how the networks learned when they were required to perform single or multiple tasks on their input signals. Back propagation was performed “on-line” in each training trial, and all processing was analog. Thereafter, the network units were examined “neurophysiologically” by selectively removing individual synapses to determine their effect on system error. The findings show that the dynamics of input signals strongly affect the learning process. Chaotic point processes, analogous to spike trains in biological systems, provide excellent signals on which networks can perform a variety of computational tasks. Continuous functions that vary within bounds, whether chaotic or not, impose some limitations. Differences in convergence and divergence determine the relative strength of the trained network connections. Many weak synapses, and even some of the strongest ones, are multifunctional in that they have approximately equal effects in all learned tasks, as has been observed biologically. Training sets all synapses to optimal levels, and many units are automatically given task-specific assignments. But despite their optimal settings, many synapses produce relatively weak effects, particularly in networks that combine convergence and divergence within the same layer. Such findings of “lazy” synapses suggest a re-examination of the role of weak synapses in biological systems. Of equal biological importance is the finding that networks containing only trainable synapses are severely limited computationally unless trainable thresholds are included. Network capabilities are also severely limited by relatively small increases in the number of network units. Some of these findings are immediately addressable from the code of the back propagation algorithm itself. Others, such as limitations imposed by increasing network size, need to be viewed through error surfaces generated by the trial-to-trial connection changes that occur during learning. We discuss the biological implications of the findings.
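
The abstract describes three ingredients: chaotic input series generated by the logistic map x_{n+1} = 3.95(1 − x_n)x_n, “on-line” back propagation with trainable connection weights and trainable thresholds, and a “neurophysiological” probe that removes individual synapses to measure their effect on system error. The sketch below is not the authors' code; the network size, learning rate, and teacher function (one-step-ahead prediction of the chaotic series) are illustrative assumptions used only to show how these pieces fit together.

```python
# Minimal sketch of the setup described in the abstract (illustrative, not the
# authors' original code): a single-hidden-layer analog network trained by
# on-line back propagation on a chaotic logistic-map series, then probed by
# zeroing one synapse at a time and recording the change in system error.
import numpy as np

rng = np.random.default_rng(0)

def logistic_series(n, x0=0.1, r=3.95):
    """Chaotic series from the recursive logistic equation x_{n+1} = r(1 - x_n)x_n."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * (1.0 - x) * x
        xs[i] = x
    return xs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Teacher function (assumed here): predict the next value of the chaotic series.
series = logistic_series(2001)
inputs, targets = series[:-1], series[1:]

n_hidden, lr = 5, 0.5                      # illustrative size and learning rate
W1 = rng.normal(0, 0.5, size=n_hidden)     # input -> hidden synapses (divergence)
b1 = np.zeros(n_hidden)                    # trainable hidden thresholds
W2 = rng.normal(0, 0.5, size=n_hidden)     # hidden -> output synapses (convergence)
b2 = 0.0                                   # trainable output threshold

def forward(x, W1, b1, W2, b2):
    h = sigmoid(W1 * x + b1)
    y = sigmoid(W2 @ h + b2)
    return h, y

# "On-line" back propagation: weights and thresholds updated after every trial.
for x, t in zip(inputs, targets):
    h, y = forward(x, W1, b1, W2, b2)
    err_out = (y - t) * y * (1.0 - y)          # output-unit delta
    err_hid = err_out * W2 * h * (1.0 - h)     # hidden-unit deltas
    W2 -= lr * err_out * h
    b2 -= lr * err_out
    W1 -= lr * err_hid * x
    b1 -= lr * err_hid

def system_error(W1, b1, W2, b2):
    errs = [(forward(x, W1, b1, W2, b2)[1] - t) ** 2 for x, t in zip(inputs, targets)]
    return float(np.mean(errs))

base = system_error(W1, b1, W2, b2)
print(f"trained system error: {base:.5f}")

# "Neurophysiological" probing: remove each hidden -> output synapse in turn.
for j in range(n_hidden):
    W2_lesioned = W2.copy()
    W2_lesioned[j] = 0.0
    delta = system_error(W1, b1, W2_lesioned, b2) - base
    print(f"removing synapse hidden[{j}] -> output: error change {delta:+.5f}")
```

In this sketch, synapses whose removal barely changes the system error would correspond to the “lazy” synapses discussed in the abstract, while removing the thresholds (biases) altogether would reproduce the reported limitation of networks containing only trainable synapses.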

* Supported by AFOSR 89-0262 and 92J0140, and NIH Biomedical Research Support Grant RR0709, to GJM, and AFOSR 88-0105 to RMB.

1 Robert M. Burton, Jr., is at the Department of Mathematics, Oregon State University, Corvallis, OR 97331.
