Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2036)

Abstract

Connectionist modeling and neuroscience have little common ground or mutual influence. Despite impressive algorithms and analysis within connectionism and neural networks, these fields have had little influence on neuroscience, which remains primarily an empirical science. This chapter advocates two strategies to increase the interaction between neuroscience and neural networks: (1) focus on emergent properties in neural networks that are apparently “cognitive”; (2) take neuroimaging data seriously and develop neural models of dynamics in both the spatial and temporal dimensions.
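
To make the second strategy concrete, the following is a minimal illustrative sketch, not taken from the chapter: the network sizes, the random symbol stream, and the double-gamma hemodynamic response function are all assumptions of this example. It shows how the temporal dynamics of an Elman-style simple recurrent network could be mapped onto a predicted BOLD time course, the kind of model-to-imaging link the abstract calls for.

    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(0)

    # An Elman-style simple recurrent network, forward pass only.
    # Sizes are arbitrary for illustration; the chapter specifies no architecture.
    n_in, n_hid = 4, 8
    W_in = rng.normal(scale=0.5, size=(n_hid, n_in))    # input -> hidden weights
    W_rec = rng.normal(scale=0.5, size=(n_hid, n_hid))  # recurrent (context) weights

    def srn_hidden_states(inputs):
        """Return the hidden-state trajectory for a sequence of input vectors."""
        h = np.zeros(n_hid)
        states = []
        for x in inputs:
            h = np.tanh(W_in @ x + W_rec @ h)  # state carries history forward
            states.append(h)
        return np.asarray(states)

    # A random one-hot sequence standing in for a symbol stream (e.g., from an
    # artificial grammar); a trained network would receive structured input.
    seq = np.eye(n_in)[rng.integers(0, n_in, size=60)]
    H = srn_hidden_states(seq)          # shape (60, n_hid)
    activity = np.abs(H).mean(axis=1)   # scalar "neural activity" per time step

    # Canonical double-gamma HRF (a standard fMRI modeling assumption, not a
    # model from the chapter), sampled at 1 s resolution over 30 s.
    t = np.arange(0.0, 30.0)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
    hrf /= hrf.sum()

    # Convolve the network dynamics with the HRF to predict a BOLD time course.
    bold_prediction = np.convolve(activity, hrf)[: len(activity)]
    print(bold_prediction[:5])

Here the hidden-state trajectory plays the role of the network's emergent temporal representation, and the convolution with a hemodynamic response function stands in for the mapping from model dynamics to a measurable imaging signal; an actual connectionist-neuroimaging model would fit such predictions against recorded fMRI data in both space and time.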

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Hanson, S.J., Negishi, M., Hanson, C. (2001). Connectionist Neuroimaging. In: Wermter, S., Austin, J., Willshaw, D. (eds) Emergent Neural Computational Architectures Based on Neuroscience. Lecture Notes in Computer Science (LNAI), vol 2036. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44597-8_40

  • DOI: https://doi.org/10.1007/3-540-44597-8_40

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42363-8

  • Online ISBN: 978-3-540-44597-5
