Abstract
This article addresses the construction of hierarchies from dynamic attractor networks. We claim that such networks, e.g., dynamic neural fields (DNFs), contain a data model which is encoded in their lateral connections, and which describes typical properties of afferent inputs. This makes it possible to infer the most likely interpretation of inputs, robustly expressed through the position of the attractor state. The principal problem resides in the fact that positions of attractor states alone do not reflect the quality of match between input and data model, termed decision confidence. In hierarchies, this inevitably leads to final decisions that are not Bayes-optimal when inputs exhibit different degrees of ambiguity or conflict, since the resulting differences in confidence are ignored by downstream layers. We demonstrate a solution to this problem by showing that a correctly parametrized DNF layer can encode decision confidence into the latency of the attractor state in a well-defined way. Conversely, we show that input stimuli gain competitive advantages w.r.t. each other as a function of their relative latency, thus allowing downstream layers to decode attractor latency in an equally well-defined way. Putting these encoding and decoding mechanisms together, we construct a three-stage hierarchy of DNF layers and show that the top-level layer can make Bayes-optimal decisions when the decisions in the lowest hierarchy levels have variable degrees of confidence. In the discussion, we generalize these findings, suggesting a novel way to represent and manipulate probabilistic information in recurrent networks without any need for log-encoding, using only the biologically well-founded effect of response latency as an additional coding dimension.
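The latency-coding mechanism described above can be illustrated with a minimal one-dimensional field simulation. The sketch below uses standard Amari-style field dynamics (local excitation, global inhibition); all parameter values are illustrative assumptions, not the values used in the article, and the helper `simulate_dnf` is hypothetical. The point is only the qualitative effect: a stronger (higher-confidence) afferent input drives the field across its decision threshold sooner, so confidence appears as attractor latency.

```python
import numpy as np

def simulate_dnf(input_strength, n=100, tau=10.0, h=-2.0,
                 steps=400, dt=1.0, theta=0.5):
    """Simulate a 1D Amari-style dynamic neural field and return the
    time step at which the field first crosses the threshold theta.
    All parameters are illustrative, not the article's values."""
    x = np.linspace(0.0, 1.0, n)
    # Lateral interaction kernel: local excitation, global inhibition
    d = np.abs(x[:, None] - x[None, :])
    w = 4.0 * np.exp(-d**2 / (2 * 0.05**2)) - 1.0
    # Gaussian afferent input; its amplitude models input confidence
    s = input_strength * np.exp(-(x - 0.5)**2 / (2 * 0.05**2))
    u = np.full(n, h)  # field potential starts at the resting level h
    for t in range(steps):
        f = 1.0 / (1.0 + np.exp(-5.0 * u))   # sigmoid transfer function
        u += (dt / tau) * (-u + h + s + w.dot(f) / n)
        if u.max() > theta:
            return t   # latency: first step the attractor crosses theta
    return steps       # no attractor formed within the simulated horizon

# A stronger (higher-confidence) input yields a shorter latency
lat_weak = simulate_dnf(3.0)
lat_strong = simulate_dnf(4.0)
```

In this toy setting the latency ordering follows directly from the relaxation dynamics: the field charges toward `h + s`, so a larger input amplitude reaches the threshold earlier, which is the encoding direction exploited by the article's hierarchy.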

Notes
Python/C code implementing all simulations of this article is available under www.gepperth.net/alexander.
Cite this article
Gepperth, A. Processing and Transmission of Confidence in Recurrent Neural Hierarchies. Neural Process Lett 40, 75–91 (2014). https://doi.org/10.1007/s11063-013-9311-z