2015 Special Issue
Hodge–Kodaira decomposition of evolving neural networks
Introduction
Although network topology can be very important for communicating neurons, conventional network analyses are often limited to locally defined variables such as degrees. Such intrinsically local variables cannot capture the global recurrent structures ubiquitously observed in neural networks. In fact, a network that alters its coupling strengths under an STDP learning rule has a tendency to form long paths of sequential firings (Aoki and Aoyagi, 2007, Aoki and Aoyagi, 2009, Aoki and Aoyagi, 2011, Aoki and Aoyagi, 2012, Buonomano, 2005, Edelman et al., 2004, Liu and Buonomano, 2009, Magnasco et al., 2009, Masuda et al., 2009, Morrison et al., 2007, Takahashi et al., 2009, Tsubo et al., 2007).
To characterize global loop structures, approaches based on algebraic topology are needed (Bossavit, 1997, Curto et al., 2013, Fulton, 1995, Hatcher, 2002). Recent advances in computational topology have made it possible to compute topological invariants, such as the number of “holes” in proteins (Gameiro et al., 2012), in an accessible way (Arai et al., 2009, Edelsbrunner and Harer, 2009, Kaczynski et al., 2010). For example, topological methods can count the number of “marbles” in an image irrespective of their shapes, which can be far more informative for detecting cancers than raw pixel values. For discrete graphs, “graph invariants” that are independent of node labeling are desirable in the same vein (Chandrasekaran, Parrilo, & Willsky, 2012), especially when, as in this paper, initial conditions are randomized and specific labels therefore carry no meaning.
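To make the “marbles” intuition concrete, counting blobs irrespective of shape amounts to counting connected components (the zeroth Betti number); the paper’s cited tools compute far more general invariants, but a minimal pure-Python flood fill already illustrates the shape-independence:

```python
from collections import deque

def count_marbles(image):
    """Count connected components ("marbles") of 1-pixels in a
    binary image, irrespective of their shape (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                count += 1                      # new component found
                queue = deque([(r, c)])         # flood-fill it
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and image[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Two "marbles" of different shapes give the same count of 2.
img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 1]]
print(count_marbles(img))  # → 2
```

The count is invariant under any deformation of the blobs that does not merge or split them, which is exactly the kind of label- and shape-independent quantity the topological approach seeks.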
Here we apply the Hodge–Kodaira decomposition of graph flows (de Rham, 1984, Hodge, 1941, Jiang et al., 2011, Kodaira, 1949, Warner, 1983) to evolving neural network models (Aoki & Aoyagi, 2009) in order to count the number of global loops as a topological measure of network structure. In particular, it is interesting to see whether this measure reflects the bifurcation diagrams and even subdivides chaotic parameter regions, which are usually considered intractable.
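As a sketch of the decomposition itself (in the combinatorial style of Jiang et al., 2011; the toy graph, edge orientations, and least-squares projection below are our own illustrative choices, not the paper’s setup), an edge flow splits orthogonally into a gradient part, a curl part supported on filled triangles, and a harmonic part that captures global loops:

```python
import numpy as np

# Nodes 0..4; directed edges and one filled triangle (3-clique {0,1,2}).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 0)]
triangles = [(0, 1, 2)]
n, m = 5, len(edges)

# d0: edge-vertex incidence (discrete gradient), shape (m, n).
d0 = np.zeros((m, n))
for e, (i, j) in enumerate(edges):
    d0[e, i], d0[e, j] = -1.0, 1.0

# d1: triangle-edge incidence (discrete curl), shape (#triangles, m).
edge_index = {frozenset(e): k for k, e in enumerate(edges)}
d1 = np.zeros((len(triangles), m))
for t, (a, b, c) in enumerate(triangles):
    for (u, v) in ((a, b), (b, c), (c, a)):
        k = edge_index[frozenset((u, v))]
        d1[t, k] = 1.0 if edges[k] == (u, v) else -1.0

def project(A, y):
    """Orthogonal projection of y onto the column space of A."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# An edge flow circulating around the unfilled square 0->2->3->4->0.
Y = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 1.0])

grad = project(d0, Y)    # gradient (acyclic) component
curl = project(d1.T, Y)  # curl component: loops bounded by triangles
harm = Y - grad - curl   # harmonic component: the global loops

print(np.round(harm, 3))
```

Because d1 @ d0 = 0, the three subspaces are orthogonal; the harmonic part is simultaneously divergence-free and curl-free, and its nonzero norm here flags the one global (triangle-free) loop in the toy graph.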
In Section 2, we explain the evolving neural network model that we simulated and describe the method of Hodge–Kodaira decomposition. In Section 3, we show the results of applying the Hodge–Kodaira decomposition to the evolving neural networks. Finally, Section 4 presents a summary and discussion.
Simulations
We simulated the following model of phase oscillators whose couplings evolve over time (Aoki & Aoyagi, 2009):
\[
\dot{\phi}_i = \omega + \frac{1}{N}\sum_{j} k_{ij}\,\sin(\phi_j - \phi_i - \alpha),
\qquad
\dot{k}_{ij} = \varepsilon\,\sin(\phi_j - \phi_i - \beta),
\]
where \(\phi_i\) and \(k_{ij}\) denote the phase of the \(i\)-th neuron and the coupling strength from the \(j\)-th to the \(i\)-th neuron, respectively, and \(\alpha\) and \(\varepsilon\) were held fixed. The learning scheme can be controlled by \(\beta\): depending on its value, the rule becomes a Hebb rule, an STDP rule, or an anti-Hebb rule (Fig. 1, Fig. 2). The same parameter setting was used throughout, although the result did not change
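A minimal numerical sketch of this class of co-evolving oscillator models follows; all parameter values below (N, α, β, ε, ω, dt, step count) are illustrative placeholders rather than the paper’s settings, and the sinusoidal coupling/plasticity form is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (placeholders, not the paper's values).
N = 50
alpha, beta = 0.3 * np.pi, 0.5 * np.pi  # beta selects the learning rule
eps, omega, dt = 0.005, 1.0, 0.01

phi = rng.uniform(0, 2 * np.pi, N)      # oscillator phases
k = rng.uniform(-1, 1, (N, N))          # coupling strengths k[i, j]
np.fill_diagonal(k, 0.0)                # no self-coupling

for _ in range(1000):
    dphi = phi[None, :] - phi[:, None]  # dphi[i, j] = phi_j - phi_i
    # Phase dynamics: each neuron is driven by its incoming couplings.
    phi += dt * (omega + (k * np.sin(dphi - alpha)).sum(axis=1) / N)
    # Coupling plasticity: the rule's character is set by beta.
    k += dt * eps * np.sin(dphi - beta)
    np.clip(k, -1.0, 1.0, out=k)        # keep |k_ij| bounded
    np.fill_diagonal(k, 0.0)

print(k.shape, float(np.abs(k).max()))
```

The resulting coupling matrix k can then be thresholded into a directed graph and fed to the Hodge–Kodaira decomposition described above.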
Results
Here we consider an evolving neural network model (Aoki & Aoyagi, 2009) with a parameter that controls its learning rule (see Materials and Methods). This lets us compare the dynamics of the network structure under various learning rules such as Hebbian, STDP, and anti-Hebbian, as in Fig. 1. For example, the STDP rule is expected to generate more paths coincident with causal firing orders. In fact, as shown in Fig. 2, three distinct attractor states have been observed, one for each rule (Aoki & Aoyagi, 2009
Discussions
We applied the Hodge–Kodaira decomposition, a topological method, to an evolving neural network model in order to characterize its loop structure. The decomposition splits a graph flow into three components (gradient, curl, and harmonic flows) and thereby characterizes the global loop structure of a directed graph topologically. By controlling the learning rule parametrically, we found that a model with an STDP rule tends to form paths coincident with causal firing
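The number of independent global loops measured by the harmonic component has a simple combinatorial reading: when no triangles (3-cliques) are filled in, it equals the first Betti number m − n + c of the underlying undirected graph (filled triangles are absorbed by the curl part and lower this count). A sketch, with `betti1` a hypothetical helper name using union-find:

```python
def betti1(n, edges):
    """Number of independent loops of an undirected graph:
    beta_1 = m - n + c (edges - nodes + connected components).
    With no filled triangles, this equals the dimension of the
    harmonic space in the Hodge-Kodaira decomposition."""
    parent = list(range(n))

    def find(x):
        # Find the component representative with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)   # union the two endpoints
    c = sum(1 for x in range(n) if find(x) == x)
    return len(edges) - n + c

# A square plus one dangling edge: exactly one independent loop.
print(betti1(5, [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]))  # → 1
```

This label-independent count is the kind of graph invariant the decomposition extracts from the evolved coupling networks.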
Acknowledgment
This work was supported by JSPS KAKENHI Grant Nos. 24120701 and 24120708.
References (30)
Topology of random clique complexes. Discrete Mathematics (2009).
Synchrony-induced switching behavior of spike pattern attractors created by spike-timing-dependent plasticity. Neural Computation (2007).
Co-evolution of phases and connection strengths in a network of phase oscillators. Physical Review Letters (2009).
Self-organized network of phase oscillators coupled by activity-dependent interactions. Physical Review E (2011).
Scale-free structures emerging from co-evolution of a network and the distribution of a diffusive resource on it. Physical Review Letters (2012).
Recent development in rigorous computational methods in dynamical systems. Japan Journal of Industrial and Applied Mathematics (2009).
Random graphs (2001).
Computational electromagnetism: variational formulations, complementarity, edge elements (1997).
A learning rule for the emergence of stable dynamics and timing in recurrent networks. Journal of Neurophysiology (2005).
Convex graph invariants. SIAM Review (2012).
The neural ring: an algebraic tool for analyzing the intrinsic structure of neural codes. Bulletin of Mathematical Biology.
Differentiable manifolds: forms, currents, harmonic forms.
Spike-timing dynamics of neuronal groups. Cerebral Cortex.
Computational topology.
Algebraic topology: a first course.