
A Paradox of Neural Encoders and Decoders or Why Don’t We Talk Backwards?

Conference paper

Simulated Evolution and Learning (SEAL 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1585)


Abstract

We present a framework for studying the biases that recurrent neural networks bring to language processing tasks. A semantic concept represented by a point in Euclidean space is translated into a symbol sequence by an encoder network. This sequence is then presented to a decoder network which attempts to translate it back to the original concept. We show how a pair of recurrent networks acting as encoder and decoder can develop their own symbolic language that is serially transmitted between them either forwards or backwards. The encoder and decoder bring different constraints to the task, and these early results indicate that the conflicting nature of these constraints may be reflected in the language that ultimately emerges, providing clues to the structure of human languages.

We thank Tony Plate and Elizabeth Sklar for helpful discussions. The research was supported by an APA to BT, a UQ Postdoctoral Fellowship to AB and an ARC grant to JW.
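As a rough illustration of the setup described in the abstract, the sketch below builds an untrained encoder/decoder pair in NumPy: the encoder recurrent network reads a point in a low-dimensional concept space and emits a discrete symbol sequence, and the decoder recurrent network reads that sequence either forwards or in reverse and maps its final hidden state back to a point. The dimensions, alphabet size, sequence length, and random weights are assumptions for illustration only; the paper's actual architectures and training procedure are not reproduced here.

# A minimal sketch (not the authors' implementation) of the encoder/decoder
# setup the abstract describes: an encoder RNN turns a point in Euclidean
# space into a discrete symbol sequence, and a decoder RNN reads that
# sequence (forwards or reversed) and tries to reconstruct the point.
# All sizes and the weight initialisation below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

CONCEPT_DIM = 2     # dimensionality of the "semantic" space (assumed)
N_SYMBOLS = 4       # size of the emergent alphabet (assumed)
SEQ_LEN = 5         # length of each transmitted sequence (assumed)
HIDDEN = 8          # hidden units in each recurrent network (assumed)

class SimpleRNN:
    """A plain recurrent layer: h_t = tanh(W_in x_t + W_h h_{t-1} + b)."""
    def __init__(self, in_dim, hidden):
        self.W_in = rng.normal(0, 0.5, (hidden, in_dim))
        self.W_h = rng.normal(0, 0.5, (hidden, hidden))
        self.b = np.zeros(hidden)

    def run(self, inputs):
        h = np.zeros(self.b.shape)
        states = []
        for x in inputs:
            h = np.tanh(self.W_in @ x + self.W_h @ h + self.b)
            states.append(h)
        return states

# Encoder: at each step it re-reads the concept vector and emits the symbol
# whose output unit is most active (a hard argmax, so the channel between
# the two networks is genuinely discrete).
enc_rnn = SimpleRNN(CONCEPT_DIM, HIDDEN)
enc_out = rng.normal(0, 0.5, (N_SYMBOLS, HIDDEN))

def encode(concept):
    states = enc_rnn.run([concept] * SEQ_LEN)
    return [int(np.argmax(enc_out @ h)) for h in states]

# Decoder: reads the one-hot coded symbols, optionally in reverse order,
# and maps its final hidden state back to a point in the concept space.
dec_rnn = SimpleRNN(N_SYMBOLS, HIDDEN)
dec_out = rng.normal(0, 0.5, (CONCEPT_DIM, HIDDEN))

def decode(symbols, backwards=False):
    order = reversed(symbols) if backwards else symbols
    one_hots = [np.eye(N_SYMBOLS)[s] for s in order]
    final_h = dec_rnn.run(one_hots)[-1]
    return dec_out @ final_h

if __name__ == "__main__":
    concept = rng.uniform(-1, 1, CONCEPT_DIM)
    seq = encode(concept)
    print("concept:        ", np.round(concept, 3))
    print("symbol sequence:", seq)
    print("forward decode: ", np.round(decode(seq), 3))
    print("backward decode:", np.round(decode(seq, backwards=True), 3))

With untrained weights the reconstructions are of course poor; in the paper's framework the encoder and decoder weights would be adapted (jointly) so that the decoded point approaches the original concept, and the forward versus backward transmission conditions can then be compared.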




Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tonkes, B., Blair, A., Wiles, J. (1999). A Paradox of Neural Encoders and Decoders or Why Don’t We Talk Backwards? In: McKay, B., Yao, X., Newton, C.S., Kim, J.H., Furuhashi, T. (eds) Simulated Evolution and Learning. SEAL 1998. Lecture Notes in Computer Science, vol 1585. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48873-1_46

  • DOI: https://doi.org/10.1007/3-540-48873-1_46

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65907-5

  • Online ISBN: 978-3-540-48873-6
