Abstract
A major problem in the search for neural substrates of learning and decision making is that the process is highly stochastic and subject-dependent, so that simple stimulus- or output-triggered averaging is inadequate. This paper presents a novel approach to characterizing neural recording or brain imaging data with reference to the internal variables of learning models (such as connection weights and learning parameters), estimated from the history of external variables within a Bayesian inference framework. We specifically focus on reinforcement learning (RL) models of decision making and derive an estimation method for these variables based on particle filtering, a recent method for dynamic Bayesian inference. We present the results of its application to decision-making experiments in monkeys and humans. The framework is applicable to a wide range of behavioral data analysis and diagnosis.
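The estimation procedure outlined in the abstract can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: it assumes a two-armed bandit task, a Q-learning agent with softmax choice and fixed inverse temperature `beta`, and uses a bootstrap particle filter to track the agent's hidden action values and learning rate `alpha` from its observed choice/reward history. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(actions, rewards, n_particles=1000, beta=3.0):
    """Estimate hidden Q-values and learning rate alpha of a Q-learning
    agent from its observed choices and rewards (toy sketch).
    Each particle carries a hypothesis (Q[0], Q[1], alpha)."""
    n_actions = 2
    Q = np.zeros((n_particles, n_actions))
    alpha = rng.uniform(0.0, 1.0, n_particles)  # flat prior on learning rate
    alpha_est, q_est = [], []
    for a, r in zip(actions, rewards):
        # likelihood of the observed choice under each particle's softmax policy
        logits = beta * Q
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        w = p[:, a]
        w /= w.sum()
        # posterior-mean estimates of the internal variables at this trial
        alpha_est.append(w @ alpha)
        q_est.append(w @ Q)
        # resample particles in proportion to their weights
        idx = rng.choice(n_particles, size=n_particles, p=w)
        Q, alpha = Q[idx].copy(), alpha[idx].copy()
        # propagate: each particle applies its own Q-learning update
        Q[:, a] += alpha * (r - Q[:, a])
        # small jitter on alpha to avoid particle degeneracy
        alpha = np.clip(alpha + rng.normal(0.0, 0.01, n_particles), 0.0, 1.0)
    return np.array(alpha_est), np.array(q_est)

def simulate(T=200, alpha_true=0.3, beta=3.0, p_reward=(0.8, 0.2)):
    """Generate synthetic behavior from a Q-learning agent with known alpha."""
    q = np.zeros(2)
    acts, rews = [], []
    for _ in range(T):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))  # softmax over 2 actions
        a = int(rng.random() < p1)
        r = float(rng.random() < p_reward[a])
        q[a] += alpha_true * (r - q[a])
        acts.append(a)
        rews.append(r)
    return acts, rews

actions, rewards = simulate()
alpha_est, q_est = particle_filter(actions, rewards)
```

The trial-by-trial estimates `q_est` are the kind of internal-variable trace that can then be regressed against neural recordings or imaging data, which is the point of the model-based approach described here.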
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Samejima, K., Doya, K. (2008). Estimating Internal Variables of a Decision Maker’s Brain: A Model-Based Approach for Neuroscience. In: Ishikawa, M., Doya, K., Miyamoto, H., Yamakawa, T. (eds) Neural Information Processing. ICONIP 2007. Lecture Notes in Computer Science, vol 4984. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69158-7_62
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-69154-9
Online ISBN: 978-3-540-69158-7