Recurrent neural networks

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1000)

Abstract

Neural networks have attracted much attention lately as a powerful tool for automatic learning. Of particular interest is the class of recurrent networks, which allow loops and cycles and thus give rise to dynamical systems, to flexible behavior, and to computation. This paper reviews recent findings that mathematically quantify the computational power and dynamic capabilities of recurrent neural networks. The appeal of such networks as a possible standard model of analog computation is also discussed.
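
The dynamical-system view mentioned in the abstract can be made concrete with a small sketch. The following Python example (illustrative only, not taken from the chapter; the matrix sizes, random weights, and input stream are assumptions) iterates a state update of the form x(t+1) = sigma(W x(t) + U u(t) + b), where sigma is a saturated-linear activation, the kind of update studied in the Siegelmann-Sontag model of analog recurrent networks.

    # A minimal sketch of a recurrent network viewed as a discrete-time
    # dynamical system: the state evolves as
    #   x(t+1) = sigma(W x(t) + U u(t) + b),
    # where sigma is the saturated-linear activation sigma(z) = min(max(z, 0), 1).
    # W, U, b, and the input stream below are arbitrary placeholders chosen only
    # to make the sketch runnable.
    import numpy as np

    def saturated_linear(z):
        # 0 for z < 0, z for 0 <= z <= 1, 1 for z > 1
        return np.clip(z, 0.0, 1.0)

    def run_recurrent_net(W, U, b, inputs, x0):
        # Iterate the network dynamics over an input sequence and return the
        # trajectory of states; the loops/cycles of the network live in W.
        x = x0
        trajectory = [x]
        for u in inputs:
            x = saturated_linear(W @ x + U @ u + b)
            trajectory.append(x)
        return trajectory

    rng = np.random.default_rng(0)
    n, m = 4, 1                                   # 4 neurons, 1 input line
    W = rng.normal(size=(n, n))                   # recurrent (feedback) weights
    U = rng.normal(size=(n, m))                   # input weights
    b = rng.normal(size=n)                        # biases
    inputs = [np.array([bit]) for bit in (1.0, 0.0, 1.0, 1.0)]  # binary input stream
    for t, x in enumerate(run_recurrent_net(W, U, b, inputs, x0=np.zeros(n))):
        print(t, np.round(x, 3))

Reading off a designated output neuron along such a trajectory is what allows the dynamics to be interpreted as computation; the chapter reviews how the resulting computational power depends on properties of the weights.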

Author information

H. T. Siegelmann

Editor information

Jan van Leeuwen

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Siegelmann, H.T. (1995). Recurrent neural networks. In: van Leeuwen, J. (ed.) Computer Science Today. Lecture Notes in Computer Science, vol 1000. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0015235

  • DOI: https://doi.org/10.1007/BFb0015235

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60105-0

  • Online ISBN: 978-3-540-49435-5

  • eBook Packages: Springer Book Archive
