
Viewing a class of neurodynamics on parameter space

  • Complex Systems Dynamics
  • Conference paper
Biological and Artificial Computation: From Neuroscience to Technology (IWANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1240)


Abstract

Nearly all models in neural networks start from the assumption that the input-output characteristic is a sigmoidal function. We present a systematic and feasible method, formulated on parameter space, for analyzing the whole spectrum of attractors (all-saturated, all-but-one-saturated, all-but-two-saturated, and so on) of a neurodynamical system whose input-output characteristic is a saturated sigmoidal function. We argue that, under a mild condition, only all-saturated or all-but-one-saturated attractors are observable for the neurodynamics. For any given all-saturated configuration ξ (all-but-one-saturated configuration η), the paper shows how to construct an exact parameter region R(ξ) (R̄(η)) such that ξ (η) is an attractor (a fixed point) of the dynamics if and only if the parameters fall within R(ξ) (R̄(η)). The parameter region for an all-saturated fixed-point attractor is independent of the specific choice of saturated sigmoidal function, whereas that for an all-but-one-saturated fixed point is sensitive to the input-output characteristic.
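To make the setting concrete, the sketch below simulates a small network of the kind the abstract refers to, x(t+1) = σ(W x(t) + h) with a saturated, piecewise-linear σ, and tests a candidate all-saturated configuration ξ against a sufficient attractor condition of the form ξ_i (W ξ + h)_i > 1 for every unit i, a condition on the parameters (W, h) only. This is a minimal illustrative sketch under stated assumptions: the limiter nonlinearity, the margin value 1, and the names limiter, step and in_region_R are assumptions for illustration, not the paper's exact construction of R(ξ).

    # Minimal sketch (assumptions noted above), not the paper's exact formulation.
    # Dynamics: x(t+1) = sigma(W x(t) + h), with sigma a saturated,
    # piecewise-linear "limiter" taking values in [-1, 1].
    import numpy as np

    def limiter(u):
        """Piecewise-linear saturated sigmoid with range [-1, 1]."""
        return np.clip(u, -1.0, 1.0)

    def step(x, W, h):
        """One update of the assumed dynamics x(t+1) = sigma(W x(t) + h)."""
        return limiter(W @ x + h)

    def in_region_R(xi, W, h, margin=1.0):
        """Check the assumed sufficient attractor condition
        xi_i * (W xi + h)_i > margin for every unit i."""
        return bool(np.all(xi * (W @ xi + h) > margin))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 5
        xi = rng.choice([-1.0, 1.0], size=n)   # candidate all-saturated configuration
        W = 0.1 * rng.standard_normal((n, n))  # weak random couplings
        h = 2.0 * xi                           # bias driving each unit past saturation
        print("parameters lie in R(xi):", in_region_R(xi, W, h))
        # Starting near xi, the iteration should return to xi when the condition holds.
        x = limiter(xi + 0.2 * rng.standard_normal(n))
        for _ in range(20):
            x = step(x, W, h)
        print("converged to xi:", np.allclose(x, xi))

Because the check involves only the sign and size of the net input at ξ, it does not depend on the detailed shape of the saturated sigmoid, which is consistent with the abstract's remark that the region for an all-saturated attractor is independent of that choice.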

Editor information

José Mira, Roberto Moreno-Díaz, Joan Cabestany

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Feng, J., Brown, D. (1997). Viewing a class of neurodynamics on parameter space. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032514

  • DOI: https://doi.org/10.1007/BFb0032514

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63047-0

  • Online ISBN: 978-3-540-69074-0
