Processing and Transmission of Confidence in Recurrent Neural Hierarchies

Abstract

This article addresses the construction of hierarchies from dynamic attractor networks. We claim that such networks, e.g., dynamic neural fields (DNFs), contain a data model which is encoded in their lateral connections and which describes typical properties of afferent inputs. This makes it possible to infer the most likely interpretation of inputs, robustly expressed through the position of the attractor state. The principal problem is that the positions of attractor states alone do not reflect the quality of the match between input and data model, termed decision confidence. In hierarchies, this inevitably leads to final decisions that are not Bayes-optimal when inputs exhibit different degrees of ambiguity or conflict, since the resulting differences in confidence are ignored by downstream layers. We demonstrate a solution to this problem by showing that a correctly parametrized DNF layer can encode decision confidence into the latency of the attractor state in a well-defined way. Conversely, we show that input stimuli gain competitive advantages w.r.t. each other as a function of their relative latency, thus allowing downstream layers to decode attractor latency in an equally well-defined way. Putting these encoding and decoding mechanisms together, we construct a three-stage hierarchy of DNF layers and show that the top-level layer can take Bayes-optimal decisions when the decisions in the lowest hierarchy levels have variable degrees of confidence. In the discussion, we generalize these findings, suggesting a novel possibility of representing and manipulating probabilistic information in recurrent networks without any need for log-encoding, using only the biologically well-founded effect of response latency as an additional coding dimension.
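To make the latency-coding claim concrete, the following is a minimal sketch, in Python, of a rate-coded Amari-type dynamic neural field. It is not the parametrization used in the article: the constants, the interaction kernel, and the helper names (gauss, attractor_latency) are illustrative assumptions. The sketch only shows the qualitative effect described above, namely that weaker (less confident) localized inputs drive the field into its self-stabilized attractor state with longer latency.

```python
# Minimal rate-coded dynamic neural field (Amari-type), illustrating how
# input strength (confidence) maps onto the latency of the attractor state.
# Kernel shape, gains, and thresholds are illustrative assumptions, not the
# parametrization used in the article.
import numpy as np

N = 100                       # number of field sites
x = np.arange(N)
tau, h = 10.0, -2.0           # time constant and negative resting level
dt, T = 0.5, 400              # Euler step size and number of steps

def gauss(center, sigma, amp):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

# lateral interaction kernel: local excitation plus global inhibition,
# shifted so that index 0 corresponds to zero offset (circular convolution)
w = gauss(N // 2, 5.0, 2.0) - 0.5
w = np.roll(w, -N // 2)

def attractor_latency(input_amp, threshold=1.0):
    """Simulate the field and return the first time at which the peak
    activity crosses `threshold` (a proxy for decision latency)."""
    u = np.full(N, h)                      # membrane potential
    s = gauss(30, 5.0, input_amp)          # localized afferent input
    for t in range(T):
        f = 1.0 / (1.0 + np.exp(-5.0 * u))                  # sigmoid rate
        lateral = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(f)))
        u += dt / tau * (-u + h + lateral + s)
        if u.max() > threshold:
            return t * dt
    return np.inf                          # no attractor formed

# weaker (less confident) inputs reach the attractor state later
for amp in (4.0, 3.0, 2.5):
    print(f"input amplitude {amp}: latency {attractor_latency(amp)}")
```

In the same spirit, feeding two such activity bumps with different onset times into a second, competitive field would let the earlier (higher-confidence) bump win the competition, which is the decoding direction described in the abstract.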

Notes

  1. This article uses a rate-coded model for simplicity, but we do not wish to exclude spiking models, where the effect of response latency has been documented as well [33, 36].

  2. Python/C code implementing all simulations of this article is available at www.gepperth.net/alexander.

References

  1. Amari S-I (1990) Mathematical foundations of neurocomputing. Proc IEEE 78(9):1441–1463

  2. Bengio Y, Lamblin P, Popovici D, Larochelle H (2007) Greedy layer-wise training of deep networks. Adv Neural Inf Process Syst 19:153

  3. Bicho E, Louro L, Erlhagen W (2010) Integrating verbal and nonverbal communication in a dynamic neural field architecture for human–robot interaction. Front Neurorobot 4:5

  4. Bishop C (2006) Pattern recognition and machine learning. Springer, New York

  5. Borowsky R, Masson M (1996) Semantic ambiguity effects in word identification. J Exp Psychol 22(1):63

  6. Cisek P (2006) Integrated neural processes for defining potential actions and deciding between them: a computational model. J Neurosci 26(38):9761–9770

  7. Cisek P (2007) Cortical mechanisms of action selection: the affordance competition hypothesis. Philos Trans R Soc B 362(1485):1585–1599

  8. Cuijpers R, Erlhagen W (2008) Implementing Bayes’ rule with neural fields. In: Kurkova V, Neruda R, Koutnik J (eds) Proceedings of the international conference on artificial neural networks, ICANN 2008. Springer, Berlin, pp 228–237

  9. Deco G, Rolls ET (2004) A neurodynamical cortical model of visual attention and invariant object recognition. Vision Res 44(6):621–642

  10. Deneve S, Pouget A (2003) Basis functions for object-centered representations. Neuron 37(2):347–359

  11. Erlhagen W, Schöner G (2002) Dynamic field theory of movement preparation. Psychol Rev 109(3):545

  12. Faubel C, Schöner G (2008) Learning to recognize objects on the fly: a neurally based dynamic field approach. Neural Netw 21(4):562–576

  13. Gautrais J, Thorpe S (1998) Rate coding versus temporal order coding: a theoretical approach. Biosystems 48(1–3):57–65

  14. Gepperth A (2012) Efficient online bootstrapping of sensory representations. Neural Netw 41:39–50

  15. Gold J, Shadlen M (2001) Neural computations that underlie decisions about sensory stimuli. Trends Cogn Sci 5(1):10–16

  16. Hazeltine E, Poldrack R, Gabrieli J (2000) Neural activation during response competition. J Cogn Neurosci 2:118–129

  17. Hinton G, Osindero S, Teh Y (2006) A fast learning algorithm for deep belief nets. Neural Comput 18(7):1527–1554

  18. Johnson JS, Spencer JP, Schöner G (2008) Moving to higher ground: the dynamic field theory and the dynamics of visual cognition. New Ideas Psychol 26(2):227–251

  19. Kiani R, Esteky H, Tanaka K (2005) Differences in onset latency of macaque inferotemporal neural responses to primate and non-primate faces. J Neurophysiol 94(2):1587–1596

  20. Knill DC, Pouget A (2004) The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci 27(12):712–719

  21. Kupper R, Gewaltig M-O, Körner U, Körner E (2005) Spike-latency codes and the effect of saccades. Neurocomputing 65–66:189–194. Special issue: Computational Neuroscience: Trends in Research 2005 (edited by de Schutter E)

  22. LeCun Y, Huang F-J, Bottou L (2004) Learning methods for generic object recognition with invariance to pose and lighting. In: Proceedings of CVPR’04. IEEE Press, New York

  23. Ma W, Beck J, Latham P, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9(11):1432–1438

  24. Garcia Ortiz AM (2009) Neural self-adaptation for large-scale system building. In: First international conference on cognitive neurodynamics, Zhejiang University, Hangzhou

  25. Michelet T, Duncan G, Cisek P (2010) Response competition in the primary motor cortex: corticospinal excitability reflects response replacement during simple decisions. J Neurophysiol 101(1):119–127

  26. Mikhailova I, Goerick C (2005) Conditions of activity bubble uniqueness in dynamic neural fields. Biol Cybern 92(2):82–91

  27. Oram MW, Xiao D, Dritschel B, Payne KR (2002) The temporal resolution of neural codes: does response latency have a unique role? Philos Trans R Soc Lond B 357(1424):987–1001

  28. Pouget A, Deneve S, Duhamel J-R (2002) A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci 3(9):741–747

  29. Rao RPN (2004) Bayesian computation in recurrent neural circuits. Neural Comput 16(1):1–38

  30. Reich DS, Mechler F, Victor JD (2001) Temporal coding of contrast in primary visual cortex: when, what, and why. J Neurophysiol 85(3):1039–1050

  31. Rougier NP, Vitay J (2006) Emergence of attention within a neural population. Neural Netw 19(5):573–581

  32. Sandamirskaya Y, Lipinski J, Iossifidis I (2010) Natural human–robot interaction through spatial language: a dynamic neural fields approach. In: Proceedings of the 19th IEEE international workshop on robot and human interactive communication (ROMAN 2010), Viareggio, Italy

  33. Schrader S, Gewaltig M-O, Körner U, Körner E (2009) Cortext: a columnar model of bottom-up and top-down processing in the neocortex. Neural Netw 22(8):1055–1070

  34. Taylor J (1999) Neural ‘bubble’ dynamics in two dimensions: foundations. Biol Cybern 80:393–409

  35. Turrigiano GG, Nelson SB (2004) Homeostatic plasticity in the developing nervous system. Nat Rev Neurosci 5(2):97–107

  36. Van Rullen R, Gautrais J, Delorme A, Thorpe S (1998) Face processing using one spike per neurone. Biosystems 48(1–3):229–239

  37. Wilimzig C, Schneider S, Schöner G (2006) The time course of saccadic decision making: dynamic field theory. Neural Netw 19(8):1059–1074

  38. Zemel RS, Dayan P, Pouget A (1998) Probabilistic interpretation of population codes. Neural Comput 10(2):403–430

  39. Zibner SKU, Faubel C, Spencer JP, Iossifidis I, Schöner G (2010) Scenes and tracking with dynamic neural fields: how to update a robotic scene representation. In: Proceedings of the international conference on development and learning (ICDL10), University of Michigan, Ann Arbor

Author information

Correspondence to Alexander Gepperth.

Cite this article

Gepperth, A. Processing and Transmission of Confidence in Recurrent Neural Hierarchies. Neural Process Lett 40, 75–91 (2014). https://doi.org/10.1007/s11063-013-9311-z
