A stochastic population approach to the problem of stable recruitment hierarchies in spiking neural networks

  • Original Paper
  • Published in Biological Cybernetics

Abstract

Synchrony-driven recruitment learning addresses the question of how arbitrary concepts, represented by synchronously active ensembles, may be acquired within a randomly connected static graph of neuron-like elements. Recruitment learning in hierarchies is an inherently unstable process. This paper presents conditions on parameters for a feedforward network to ensure stable recruitment hierarchies. The parameter analysis is conducted using a stochastic population approach to model a spiking neural network. The resulting network converges to activate a desired number of units at each stage of the hierarchy. The original recruitment method is modified first by increasing feedforward connection density to ensure sufficient activation, then by incorporating temporally distributed feedforward delays to separate inputs in time, and finally by limiting excess activation via lateral inhibition. The task of activating a desired number of units from a population is performed in a manner similar to a temporal k-winners-take-all network.
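To make the mechanism concrete, below is a minimal sketch of the stochastic population view described above: activity propagates through a randomly connected feedforward hierarchy, a downstream unit fires when enough of the currently active units project to it, and lateral inhibition is approximated by a hard k-winners-take-all cap. All parameters (layer size, connection probability, coincidence threshold, target ensemble size) are illustrative assumptions rather than values from the paper, and the hard cap only stands in for the temporal k-winners-take-all dynamics analysed there.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only -- not taken from the paper.
N = 1000         # units per layer
p_conn = 0.05    # feedforward connection probability
theta = 3        # coincidence threshold (inputs needed to fire)
k_target = 50    # desired recruited-ensemble size per layer
a = 50           # size of the active ensemble in the input layer
layers = 5

for layer in range(1, layers + 1):
    # Each downstream unit receives a Binomial(a, p_conn) number of
    # connections from the active ensemble; it fires if that count
    # reaches the coincidence threshold.
    inputs = rng.binomial(a, p_conn, size=N)
    active = np.flatnonzero(inputs >= theta)

    # Lateral inhibition approximated as a hard k-winners-take-all cap:
    # keep only the k_target units receiving the most coincident inputs.
    if active.size > k_target:
        order = np.argsort(inputs[active])[::-1]
        active = active[order[:k_target]]

    a = active.size
    print(f"layer {layer}: {a} units recruited")
```

Without the inhibitory cap the recruited ensemble typically grows or dies out within a few layers; keeping it near the target size at every stage is exactly the stability problem the paper's parameter conditions address.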

Author information

Correspondence to Cengiz Günay.

Cite this article

Günay, C., Maida, A.S. A stochastic population approach to the problem of stable recruitment hierarchies in spiking neural networks. Biol Cybern 94, 33–45 (2006). https://doi.org/10.1007/s00422-005-0023-y
