
Attractor Neural Networks with Hypercolumns

  • Conference paper
In: Artificial Neural Networks — ICANN 2002 (ICANN 2002)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2415)

Abstract

We investigate attractor neural networks with a modular structure, in which a local winner-takes-all rule operates within each module (called a hypercolumn). We perform a signal-to-noise analysis of storage capacity and noise tolerance and compare the results with simulations. Introducing local winner-takes-all dynamics improves both storage capacity and noise tolerance; the optimal hypercolumn size depends on network size and noise level.
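
As a concrete illustration of the architecture the abstract describes, the following is a minimal NumPy sketch, not the authors' implementation: a binary attractor network whose N units are partitioned into H hypercolumns of M units each, with exactly one active unit per hypercolumn. Storage uses a plain Hebbian covariance rule as a stand-in for the paper's actual learning rule, and retrieval applies a local winner-takes-all within each hypercolumn. All names and parameter values (H, M, P, recall, and so on) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, M = 10, 5              # hypercolumns, units per hypercolumn (assumed values)
N = H * M                 # total number of units
P = 8                     # number of stored patterns

def random_pattern():
    """One active unit per hypercolumn: the coding the local WTA enforces."""
    x = np.zeros(N)
    for h in range(H):
        x[h * M + rng.integers(M)] = 1.0
    return x

patterns = np.array([random_pattern() for _ in range(P)])

# Hebbian covariance storage: outer products of deviations from the mean rate.
a = 1.0 / M                                  # mean activity per unit
W = (patterns - a).T @ (patterns - a) / N
np.fill_diagonal(W, 0.0)                     # no self-connections

def recall(x, steps=20):
    """Synchronous retrieval; a winner-takes-all acts within each hypercolumn."""
    for _ in range(steps):
        s = W @ x                            # support (net input) for every unit
        nxt = np.zeros(N)
        for h in range(H):
            lo = h * M
            nxt[lo + np.argmax(s[lo:lo + M])] = 1.0   # local WTA picks one winner
        x = nxt
    return x

# Cue: stored pattern 0 with the winners of three hypercolumns redrawn at random.
cue = patterns[0].copy()
for h in rng.choice(H, size=3, replace=False):
    cue[h * M:(h + 1) * M] = 0.0
    cue[h * M + rng.integers(M)] = 1.0

print("fraction of correct hypercolumns after recall:",
      recall(cue) @ patterns[0] / H)
```

Here the local winner-takes-all replaces the usual sign threshold of a Hopfield-style update: it normalizes activity within each module, which is the mechanism the abstract credits for the improved storage capacity and noise tolerance.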

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Johansson, C., Sandberg, A., Lansner, A. (2002). Attractor Neural Networks with Hypercolumns. In: Dorronsoro, J.R. (eds) Artificial Neural Networks — ICANN 2002. ICANN 2002. Lecture Notes in Computer Science, vol 2415. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46084-5_32

  • DOI: https://doi.org/10.1007/3-540-46084-5_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44074-1

  • Online ISBN: 978-3-540-46084-8

  • eBook Packages: Springer Book Archive
