Abstract
Connectionist modeling and neuroscience currently share little common ground or mutual influence. Despite impressive algorithms and analyses within connectionism and neural networks, the field has had little influence on neuroscience, which remains primarily an empirical science. This chapter advocates two strategies for increasing the interaction between neuroscience and neural networks: (1) focus on emergent properties of neural networks that are apparently "cognitive"; (2) take neuroimaging data seriously and develop neural models of dynamics in both the spatial and temporal dimensions.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this chapter
Hanson, S.J., Negishi, M., Hanson, C. (2001). Connectionist Neuroimaging. In: Wermter, S., Austin, J., Willshaw, D. (eds) Emergent Neural Computational Architectures Based on Neuroscience. Lecture Notes in Computer Science(), vol 2036. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44597-8_40
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42363-8
Online ISBN: 978-3-540-44597-5