Abstract
In this article we investigate the impact of the adaptive learning process of recurrent neural networks (RNNs) on the structural properties of the derived graphs. A trained fully connected RNN can be converted to a graph by defining edges between pairs of nodes having significant weights. We measured structural properties of the derived graphs, such as characteristic path lengths, clustering coefficients and degree distributions. The results imply that a trained RNN has a significantly larger clustering coefficient than a random network with comparable connectivity. In addition, the degree distributions show the existence of nodes with a large degree, or hubs, which is typical of scale-free networks. We also show analytically and experimentally that this type of degree distribution has increased entropy.
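The graph construction and measurements described above can be illustrated with a minimal sketch, assuming a simple magnitude threshold as the significance criterion and using networkx for the graph measures; the weight matrix W, the threshold value, and the random initialization below are placeholders for illustration, not the trained weights or the exact criterion used in the article.

```python
import numpy as np
import networkx as nx

# Placeholder "trained" recurrent weight matrix (n x n); in the article this
# would come from training a fully connected RNN.
rng = np.random.default_rng(0)
n = 50
W = rng.normal(0.0, 1.0, size=(n, n))

# Keep an edge between nodes i and j if the weight magnitude is "significant",
# i.e. exceeds a chosen threshold (an assumption; the article's criterion may differ).
threshold = 1.5
adjacency = np.abs(W) > threshold
np.fill_diagonal(adjacency, False)      # ignore self-loops
adjacency = adjacency | adjacency.T     # symmetrize to obtain an undirected graph

G = nx.from_numpy_array(adjacency.astype(int))

# Structural properties analogous to those measured in the article.
clustering = nx.average_clustering(G)
degrees = [d for _, d in G.degree()]
if nx.is_connected(G):
    path_length = nx.average_shortest_path_length(G)
else:
    path_length = float("nan")          # undefined for disconnected graphs

# Shannon entropy of the empirical degree distribution (in bits).
values, counts = np.unique(degrees, return_counts=True)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()

print(f"average clustering coefficient: {clustering:.3f}")
print(f"characteristic path length:     {path_length:.3f}")
print(f"degree-distribution entropy:    {entropy:.3f} bits")
```

Comparing these quantities against those of an Erdős–Rényi random graph with the same number of nodes and edges (e.g. via nx.gnm_random_graph) reproduces the kind of comparison the abstract refers to: a higher clustering coefficient and a broader, higher-entropy degree distribution for the trained network.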
Cite this article
Dobnikar, A., Šter, B. Structural Properties of Recurrent Neural Networks. Neural Process Lett 29, 75–88 (2009). https://doi.org/10.1007/s11063-009-9096-2