Elsevier

Biosystems

Volume 89, Issues 1–3, May–June 2007, Pages 10–15

Optimal signal in sensory neurons under an extended rate coding concept

https://doi.org/10.1016/j.biosystems.2006.04.010

Abstract

We define an optimal signal in parametric neuronal models on the basis of interspike interval data and the rate coding schema. Under the classical approach, the optimal signal is located where the frequency transfer function is steepest, which coincides with the inflection point of this curve. Here we extend this concept by using Fisher information, which is the inverse of the asymptotic variance of the best estimator and whose dependence on the parameter value indicates the accuracy of estimation. We compare the signal producing maximal Fisher information with the inflection point of the sigmoidal frequency transfer function.

Introduction

Sensory neurons usually convert the intensity of external stimulation into a spike train that is used for further processing. The input–output properties of sensory neurons, as well as of their models, are commonly characterized via so-called frequency (input–output) transfer functions, in which the output frequency, or firing rate, taken as constant for a fixed signal, is plotted against the strength of the signal. The inverse of the mean interspike interval (ISI) is usually used as the measure of firing rate (for alternatives see Lansky et al., 2004). In constructing a transfer function, it is implicitly assumed that the information in the neuron under investigation is coded by the frequency of the action potentials. In the neuronal context this is called rate coding (Dayan and Abbott, 2001; Gerstner and Kistler, 2002). An optimum signal can be defined with respect to this type of coding. If the signal, s, is weak, there is little firing. The firing rate increases with growing signal strength. As s increases further, the firing rate reaches its maximum possible level and saturates. The region of signal strength which can be most accurately discriminated (estimated) on the basis of the firing rate, the so-called dynamic range, is the region where the rate increases most sharply (for methods of coding range determination see Nizami, 2002). Within this region, the signal where the transfer function is steepest is the optimum signal. One aim of this paper is to investigate and quantify this statement. As a new, alternative measure of signal optimality in the output train of spikes we propose Fisher information, which has become a common tool in computational neuroscience (Stemmler, 1996; Brunel and Nadal, 1998; Zhang and Sejnowski, 1999; Greenwood et al., 1999, 2000; Bialek et al., 2001; Wilke and Eurich, 2002; Jenison, 2001; Bethge et al., 2002; Freund et al., 2002; Wu et al., 2004; Johnson and Ray, 2004; Amari and Nakahara, 2005; Greenwood and Lansky, 2005; Lansky and Greenwood, 2005). Because the Fisher information depends on the complete distribution of a random variable, not only on its mean, this approach refines the classical rate coding concept.
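To make the classical criterion concrete, the following is a minimal numerical sketch that locates the steepest point of a sigmoidal transfer function; the logistic form and all parameter values are illustrative assumptions of this sketch, not choices taken from the paper.

```python
import numpy as np

# Illustrative logistic transfer function (an assumption of this sketch):
# maximal rate F_MAX, midpoint S0, slope scale K.
F_MAX, S0, K = 100.0, 5.0, 1.0

def transfer(s):
    """Mean firing rate (Hz) as a function of signal strength s."""
    return F_MAX / (1.0 + np.exp(-(s - S0) / K))

s = np.linspace(0.0, 10.0, 2001)
rate = transfer(s)
slope = np.gradient(rate, s)  # numerical derivative of the transfer function

# Classical optimum signal: where the transfer function is steepest,
# i.e. the inflection point of the sigmoid (analytically s = S0 here).
s_opt = s[np.argmax(slope)]
print(f"steepest point at s = {s_opt:.2f}; inflection point S0 = {S0}")
```

For a symmetric sigmoid such as the logistic, the numerically located maximum of the slope coincides with the inflection point, which is the classical optimum described above.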

It should be noted that the approach employed in this paper does not go in the direction of temporal coding as analyzed in the work of Bialek and his coworkers (Rieke et al., 1999), where the mutual (Shannon) information between stimuli and responses is evaluated and compared. The approach recently used by Wiener and Richmond (2003) also differs from the one proposed here. These authors assume a finite number of stimuli and assign a different spike-count distribution to each of them. Here, a continuous range of stimuli is considered (for example, sound intensity or odorant intensity). To each level of this stimulus is assigned not a fixed firing rate, as in the classical frequency coding schema, but a probability distribution of firing rates. The aim is to find which stimulus can be identified best from realizations of the assigned random variable, ISIs sampled from this distribution.

As indicated above, we view the spike train as depending directly on the level, s, of the driving signal. We compare the optimum signal determined by the firing rate with that determined by Fisher information. A shape of the transfer function is assumed, and the Fisher information function is computed for a family of distributions compatible with the assumed transfer function. We search for the optimum signal, its existence, and its position among all possible signal values. Calculation of the Fisher information requires knowledge of the distribution of the random variable, but we will show here that for many common descriptors of the ISIs the first two moments are sufficient. On the other hand, statistical procedures for the estimation of the optimal signal are not investigated in this paper, nor do we propose to use the information about the signal contained in the correlation structure of the spike train. Finally, the proposed method, like the original one based on the shape of the frequency transfer function, assumes that the firing has no temporal dynamics after stimulus application. This is, of course, a strong simplification of reality, neglecting adaptation and other non-stationarities which are well-known features of sensory systems. It means that the methods are usually applicable only in a short time window following the stimulus application.
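One way to see why the first two moments can suffice, sketched here under an added assumption that is not stated in the excerpt: if the ISI distribution belongs to a natural exponential family parameterized by its mean \(\mu(s) = E(T_s)\), the Fisher information about the mean is the reciprocal of the variance, and the chain rule converts it into information about the signal,

\[ J(s) = \left(\frac{\mathrm{d}\mu(s)}{\mathrm{d}s}\right)^{2} \frac{1}{\operatorname{Var}(T_s)}, \]

so only the transfer function \(\mu(s)\) and the variance of \(T_s\) enter the calculation.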

Frequency transfer function

A natural way to calculate the firing rate of a neuron is to divide the number of elicited spikes, N(t), by the length of the observation period, t. If this procedure is repeated several times for short time intervals, then the mean of the counting process, E(N(t)), can be estimated and used. In this article we assume that, instead of several short observation series, the observed spiking activity is stationary, consisting of one long series of ISIs that depend on the signal s and are denoted T_s. For each …
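The following sketch contrasts the two estimates just described, using gamma-distributed ISIs purely as a stand-in for a stationary spike train (the distribution and its parameters are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# One long stationary series of ISIs; the gamma choice is illustrative only.
mean_isi = 0.05  # seconds, i.e. a 20 Hz neuron
isis = rng.gamma(shape=2.0, scale=mean_isi / 2.0, size=10_000)

# Count-based estimate: number of spikes divided by observation time, N(t)/t.
t_total = isis.sum()
rate_count = isis.size / t_total

# ISI-based estimate: inverse of the mean interspike interval, 1/E(T_s).
rate_isi = 1.0 / isis.mean()

print(f"N(t)/t = {rate_count:.2f} Hz, 1/mean(ISI) = {rate_isi:.2f} Hz")
```

When the observation time is taken as the total duration of the recorded ISI series, the two estimates coincide exactly, which is why the inverse mean ISI is the natural rate estimator for one long stationary record.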

Fisher information

We suggest that Fisher information can be used as a measure of how well a signal, s, can be estimated from ISI data. In general, the Fisher information reflects how well a parameter can be estimated. It is the inverse of the asymptotic variance of the normalized error from an asymptotically efficient estimator (Rao, 2001). Suppose that the random variable T_s has a probability density function belonging to a parametric family g(t;s). The Fisher information with respect to the parameter s is

\[ J(s) = \int_0^{\infty} \frac{1}{g(t;s)} \left( \frac{\partial g(t;s)}{\partial s} \right)^{2} \mathrm{d}t. \tag{3.1} \]
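As a numerical check of (3.1), the sketch below estimates J(s) by Monte Carlo for exponentially distributed ISIs whose rate follows a logistic transfer function; both the exponential ISI family and the parameter values are assumptions of this sketch, and the closed form \((\lambda'(s)/\lambda(s))^2\) used for comparison follows from the standard Fisher information of the exponential density.

```python
import numpy as np

rng = np.random.default_rng(1)
F_MAX, S0, K = 100.0, 5.0, 1.0  # illustrative logistic parameters

def rate(s):
    return F_MAX / (1.0 + np.exp(-(s - S0) / K))

def log_g(t, s):
    """Log density of an exponential ISI, g(t;s) = lam(s) * exp(-lam(s) * t)."""
    lam = rate(s)
    return np.log(lam) - lam * t

def fisher_mc(s, n=200_000, h=1e-4):
    """Monte Carlo estimate of J(s) = E[(d/ds log g(T;s))^2]."""
    t = rng.exponential(1.0 / rate(s), size=n)
    score = (log_g(t, s + h) - log_g(t, s - h)) / (2.0 * h)
    return np.mean(score ** 2)

s0 = 5.0
dlam = (rate(s0 + 1e-4) - rate(s0 - 1e-4)) / 2e-4
print(f"Monte Carlo J: {fisher_mc(s0):.4f}")
print(f"closed form (lam'/lam)^2: {(dlam / rate(s0)) ** 2:.4f}")
```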

Comparison of optimality measures

The value of s where the Fisher information (3.1) takes its maximum is a strong candidate for the "optimum signal". Now we compare the values of s where J_1 is maximal with the values where J/E(T_s) is maximal for a family of exponential-class distributions. The reason for selecting the normalized Fisher information J/E(T_s) is the following: the random variable for which (3.1) gives the Fisher information is a single ISI. If n realizations are available, then J on the right-hand side of (3.2) is replaced by nJ …
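A sketch of this comparison follows, under the assumptions that the ISIs are exponentially distributed with mean 1/λ(s), that λ(s) is the logistic transfer function used in the earlier sketches, and that J_1 denotes the classical slope-based criterion (this last reading of the notation is our assumption, since the defining equation is not shown in the excerpt):

```python
import numpy as np

F_MAX, S0, K = 100.0, 5.0, 1.0  # illustrative logistic parameters

def rate(s):
    return F_MAX / (1.0 + np.exp(-(s - S0) / K))

s = np.linspace(0.0, 10.0, 4001)
lam = rate(s)
dlam = np.gradient(lam, s)

slope_crit = dlam ** 2   # classical criterion: steepest transfer function
J = (dlam / lam) ** 2    # per-ISI Fisher information for exponential ISIs
J_norm = J * lam         # J / E(T_s), since E(T_s) = 1/lam here

print(f"max slope criterion at s = {s[np.argmax(slope_crit)]:.2f}")
print(f"max per-ISI J at       s = {s[np.argmax(J)]:.2f}")
print(f"max J / E(T_s) at      s = {s[np.argmax(J_norm)]:.2f}")
# Under these assumptions the slope criterion peaks at the inflection point
# S0, the per-ISI J decreases monotonically in s, and J/E(T_s) peaks at
# s = S0 - K*ln(2), below the inflection point.
```

The shift of the maximum of J/E(T_s) away from the inflection point under these assumptions illustrates the paper's point that the Fisher information criterion need not single out the same signal as the classical rate coding criterion.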

Conclusions

Traditional rate coding in sensory neurons is associated with a sigmoidal frequency transfer function. Under this coding scheme the optimum signal is defined as the one that induces the largest change in response for the smallest change in the input signal. In this paper we write the criterion which determines this signal as J_1. However, this criterion applies only to rate coding. We have offered a criterion for finding an optimum signal beyond the simple rate coding concept. The method is based on the …

Acknowledgements

This work was supported by NSERC (Canada), AV0Z50110509, the Center for Neurosciences LC554, and the Academy of Sciences of the Czech Republic (Information Society, 1ET400110401).

References (34)

  • Brunel, N., Nadal, J.-P., 1998. Mutual information, Fisher information, and population coding. Neural Comput.

  • Chhikara, R.S., Folks, J.L., 1989. The Inverse Gaussian Distribution: Theory, Methodology, and Applications.

  • Cox, D.R., Lewis, P.A.W., 1966. The Statistical Analysis of Series of Events.

  • Cramér, H., 1946. Mathematical Methods of Statistics.

  • Dayan, P., Abbott, L.F., 2001. Theoretical Neuroscience.

  • Getz, W.M., et al., 2001. Ligand concentration coding and optimal Michaelis–Menten parameters in multivalent and heterogeneous receptor membranes. Chem. Senses.

  • Gerstner, W., Kistler, W.M., 2002. Spiking Neuron Models.