
Structural Properties of Recurrent Neural Networks

Published in Neural Processing Letters

Abstract

In this article we investigate the impact of the adaptive learning process of recurrent neural networks (RNNs) on the structural properties of the derived graphs. A trained fully connected RNN can be converted to a graph by defining edges between pairs of nodes with significant weights. We measured structural properties of the derived graphs, such as characteristic path lengths, clustering coefficients, and degree distributions. The results imply that a trained RNN has a significantly larger clustering coefficient than a random network with comparable connectivity. In addition, the degree distributions show the existence of nodes with a large degree, or hubs, which is typical for scale-free networks. We also show, analytically and experimentally, that this type of degree distribution has increased entropy.
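The construction described in the abstract can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not the authors' implementation): the toy weight matrix and the threshold value are assumptions chosen only for demonstration. It thresholds the absolute weights of a trained RNN to obtain an undirected graph, then computes the average clustering coefficient and the Shannon entropy of the degree distribution, two of the structural measures discussed in the article.

```python
import math
from collections import Counter

def weights_to_graph(W, threshold):
    """Build an undirected graph: connect i and j if either weight
    |W[i][j]| or |W[j][i]| exceeds the (assumed) significance threshold."""
    n = len(W)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(W[i][j]) > threshold or abs(W[j][i]) > threshold:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
    return 2.0 * links / (k * (k - 1))

def average_clustering(adj):
    """Mean clustering coefficient over all nodes."""
    return sum(clustering_coefficient(adj, v) for v in range(len(adj))) / len(adj)

def degree_entropy(adj):
    """Shannon entropy (bits) of the empirical degree distribution."""
    counts = Counter(len(nbrs) for nbrs in adj)
    n = len(adj)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy 3-node weight matrix (an assumption for illustration): all mutual
# weights are significant, so thresholding yields a triangle.
W = [[0.0, 0.9, 0.8],
     [0.9, 0.0, 0.7],
     [0.8, 0.7, 0.0]]
adj = weights_to_graph(W, threshold=0.5)
```

For the triangle above, every node's two neighbours are connected to each other, so the average clustering coefficient is 1.0, and since all degrees are equal the degree-distribution entropy is 0 bits; a heavy-tailed (hub-dominated) degree sequence would instead yield a higher entropy, as the article argues.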



Author information

Corresponding author

Correspondence to Branko Šter.

About this article

Cite this article

Dobnikar, A., Šter, B. Structural Properties of Recurrent Neural Networks. Neural Process Lett 29, 75–88 (2009). https://doi.org/10.1007/s11063-009-9096-2
