Neural waves and computation in a neural net model II: Data-like structures and the dynamics of episodic memory

Research · Journal of Computational Neuroscience

Abstract

The computational resources of a neuromorphic network model introduced earlier were investigated in the first paper of this series. It was argued that a form of ubiquitous spontaneous local convolution enabled logical gate-like neural motifs to form into hierarchical feed-forward structures of the Hubel-Wiesel type. Here we investigate concomitant data-like structures and their dynamic rôle in memory formation, retrieval, and replay. The mechanisms give rise to the need for general inhibitory sculpting, and the simulation of the replay of episodic memories, well known in humans and recently observed in rats. Other consequences include explanations of such findings as the directional flows of neural waves in memory formation and retrieval, visual anomalies and memory deficits in schizophrenia, and the operation of GABA agonist drugs in suppressing episodic memories. We put forward the hypothesis that all neural logical operations and feature extractions are of the convolutional hierarchical type described here and in the earlier paper, and exemplified by the Hubel-Wiesel model of the visual cortex, but that in more general cases the precise geometric layering might be obscured and so far undetected.


Fig. 1


Data Availability

No datasets were generated or analysed during the current study.

References


Acknowledgements

For valuable input on various relevant topics many thanks are owed to: Wallace Arthur, Felicitas Ehlen, Bryan Emmerson, the late James Hartle, Alex Kim, Ronald Munson, Gualtiero Piccinini, Piers Rawling and Ivan Selesnick. Grateful thanks also to referees for careful readings and cogent remarks which much improved an earlier version.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Authors

Contributions

SS is the sole author.

Corresponding author

Correspondence to Stephen Selesnick.

Ethics declarations

Informed Consent

The author declares that there were no human or animal participants involved in this purely theoretical study.

Conflict of interest

The author declares no conflict of interest.

Additional information

Action Editor: Pulin Gong.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: brief outline of the model

The model has two facets: a physical one, describing the internal structure of a network and its dynamics, and a structural one, describing how these networks may be combined and the structures so formed. This is achieved through the use of a logical scheme known as Gentzen sequent calculus. These two facets interact and the model requires them both. The use of the sequent calculus is rudimentary and no deep theorems from logic are needed or invoked.

1.1 The network structure

Our model quantum-like networks have nodes comprising a bipartite system which is supposed to model “standard” biological neurons in general morphology. That is to say, we assume two chamber-like nodes, one corresponding to the input part of the cell’s somatic membrane receiving fanned in dendritic signals, and the other to the output part of the cell’s somatic membrane including the hillock/trigger zone/axon initial segment (or possibly the whole axon). A single axon/output chamber fans out to the input chambers of other such bicameral neurons or b-neurons.

The real Hilbert space of states of a single b-neuron is thus of the form

$$\begin{aligned} \mathbb {R}e_0 \oplus \mathbb {R}e_1 \end{aligned}$$
(A.1)

where \(\mathbb {R}\) denotes the field of real numbers, \(e_0\) denotes the state representing the input node, \(e_1\) the state representing the output node, and we have explicitly written in the generators of the two subspaces. In the notation of the qubit of quantum computing fame, \(e_0 = |0 \rangle\) and \(e_1= | 1 \rangle\). In addition we argued that this space is in fact the exterior algebra (aka the Fermi-Dirac-Fock space) of the space \(\mathbb {R}e_1\), the state space of the output node, so that \(e_1\wedge e_1=0\). This latter equation reflects the fact that a biological neuron cannot enter a state of firing while it is already firing. It is this exterior product structure that gives b-neurons their fermionic character, although all vector spaces are over the field of real numbers. Consequently, collections of them obey Fermi-Dirac statistics.
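As a concrete numerical illustration of the two-dimensional state space (A.1) and of Born's Law restricted to a single b-neuron, consider the following minimal Python sketch (ours, not code from the model; the function name is illustrative):

```python
# Sketch: a single b-neuron state c0*e0 + c1*e1 in the real span of {e0, e1},
# cf. Eq. (A.1).  Born's Law gives the probability of finding the neuron in
# its output (ON) state e1.

def born_probability_on(c0: float, c1: float) -> float:
    """P(ON) for the state c0*e0 + c1*e1, by Born's Law over the reals."""
    norm_sq = c0 * c0 + c1 * c1
    if norm_sq == 0.0:
        raise ValueError("the zero vector is not a state")
    return (c1 * c1) / norm_sq

# An equal superposition of e0 and e1 is found ON with probability 1/2:
p = born_probability_on(1.0, 1.0)  # -> 0.5
```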

Networks of such b-neurons are then defined and give rise to the following structures.

  1.

    A directed finite graph, denoted \(\mathscr {N}_A^b\) say, whose vertices are assigned b-neurons, denoted by \(n_i^A\), for \(i=1,\ldots , N\) where N is the number of vertices, with links/edges fanning in and out;

  2.

    A real finite dimensional Hilbert space, denoted \(\mathfrak {H}_A\), of dimension N, with an orthonormal basis \(\{e^A_i\}\), \(i=1,\ldots , N\), where \(e_k^A\) denotes the state corresponding to the output node of the b-neuron \(n_k^A\). Thus, an element of \(\mathfrak {H}_A\), such as

    $$\begin{aligned} \mathbf {\text {v}}^A = \sum _i v_i^A e_i^A \end{aligned}$$
    (A.2)

    represents a (superpositional) state in which the b-network may be found to have the b-neuron \(n^A_{i_0}\) in its firing state, or ON, with probability

    $$\begin{aligned} \frac{(v_{i_0}^A)^2}{\sum _i (v_{i}^A)^2}. \end{aligned}$$
    (A.3)

    (but please see the proviso in item 4 below). This follows from Born’s Law, which we may adopt for reasons given earlier. Our model b-neurons have no internal structure, or rather such structure has been hidden by the logic of our approach, which is aimed at considering only the relevant structural aspects. Thus the value \(v^A_i\) in the output node of \(n^A_i\) is not causally connected to whatever firing mechanism is involved until a dynamics is imposed to drive this mechanism. Only then does a recognizable form of action potential emerge from our model b-neuron. Such dynamics is discussed in Sect. 2 above.

  3.

    A real \(N\times N\) matrix \((J_{ij})\) whose entries are the product of a factor representing a synaptic weight or scaling factor associated with a single synaptic connection \(n^A_j\) to \(n^A_i\), and the corresponding adjacency matrix for the network, namely the number of edges or links from \(n^A_j\) to \(n^A_i\). We shall sometimes refer to the J functions (generally of time) as the exchange terms for the network at hand.

  4.

    The (real) space of states of the ensemble of fermionic b-neurons of a b-network \(\mathscr {N}_A^b\) is the usual (but real) Fermi-Dirac-Fock space of the occupied states of the individual b-neurons, namely the exterior algebra of the space \(\mathfrak {H}_A\) which we denote by \(E(\mathfrak {H}_A)\). Thus

    $$\begin{aligned} E(\mathfrak {H}_A):= \mathbb {R} \oplus \mathfrak {H}_A \oplus \bigwedge ^2 \mathfrak {H}_A\oplus \dots \oplus \bigwedge ^N \mathfrak {H}_A. \end{aligned}$$
    (A.4)

    This is a graded Hopf algebra and also a real Hilbert space of dimension \(2^N\). Its structure is well known and described at length in the references cited. Its properties are fundamental to our model. The singleton state \(e_j^A\) may be regarded as the ON state of the respective b-neuron, which we also identify with the firing state of that b-neuron, with the proviso, laid out in Selesnick (2024), that being in the firing state does not necessarily mean that the b-neuron is firing, but merely that it is in a state of preparation for firing, possibly below the firing threshold for that b-neuron. These thresholds are not part of the model. Thus the basic states \(e_{i_0} \wedge e_{i_1} \wedge \ldots \wedge e_{i_p}\) are states of coacting groups of b-neurons, and we have dubbed the general elements of \(E(\mathfrak {H}_A)\) firing patterns, though they are generally superpositions of the basic states.

There is a dynamics controlling the inner structure of a network as it moves from one firing pattern to another and another dynamics controlling the network’s interaction with external inputs. These are described in Sect. 2.
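The probability rule (A.3) of item 2 and the dimension count of item 4 can be transcribed in a few lines. The following Python fragment is an illustrative sketch under our reading of Eqs. (A.3) and (A.4); the function names are ours, not the paper's:

```python
# Firing probabilities for a superpositional network state, Eq. (A.3),
# and the dimension of the Fermi-Dirac-Fock space E(H_A), Eq. (A.4).

def firing_probabilities(v):
    """P(b-neuron i is ON) for the state sum_i v[i] * e_i, per Eq. (A.3)."""
    norm_sq = sum(x * x for x in v)
    return [x * x / norm_sq for x in v]

def fock_dimension(n_neurons: int) -> int:
    """dim E(H_A) = sum_p C(N, p) = 2**N, per Eq. (A.4)."""
    return 2 ** n_neurons

probs = firing_probabilities([1.0, 2.0, 2.0])  # -> [1/9, 4/9, 4/9]
dim = fock_dimension(10)                       # -> 1024
```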

1.2 A note on the logic

As noted, an integral facet of the model is the explicit use of a scheme familiar to computational logicians, namely a so-called Gentzen sequent calculus. While the physics-like attributes of the model described above deal with the internal structure of an individual network, the sequent calculus is supposed to encapsulate the possible outer structures of many networks and their combinations. The specific rules of our calculus render it a fragment of a well-known such calculus, introduced by J.-Y. Girard in the 1980s and known as Intuitionistic Linear Logic (ILL). It may be considered a shorthand used to express relationships within the category of vector spaces, specifically the finite dimensional real Hilbert spaces that are the state spaces of our networks. The basic object is the sequent, which is of the form \(A \vdash B\) and may, for our purposes, be thought of as an (un-named) linear map between the Hilbert spaces A and B. Such a map between state spaces of networks may be pictured (at a particular moment) as a wiring diagram, showing how one network is connected to another. In the cases of interest to us, this “wiring” will also include the disposition of certain non-synaptic connectivities. The inference that one such collection may be replaced by, or reduced to, a second such collection, via the rules set out in a list of basic inferences regarded as axiomatic, is signified by a fraction-like construct, with the hypothesis (a collection of sequents) as numerator and the consequent (usually another collection of sequents) as denominator. Details and full references may be found in Selesnick and Piccinini (2019) and Selesnick (2022). Here we use only a few of the multiplicative rules of GN, which are reproduced below.

GN

Structural Rules

Exchange

$$\begin{aligned} \frac{\varGamma , A, B, {\varGamma }' \vdash D}{\varGamma , B, A, {\varGamma }' \vdash D} \text {LE} \end{aligned}$$
(A.5)

Weakening

$$\begin{aligned} \frac{\varGamma \vdash D}{\varGamma ,!A\vdash D} \text{ LW } \end{aligned}$$
(A.6)

Contraction

$$\begin{aligned} \frac{!A,!A,\varGamma \vdash D}{!A,\varGamma \vdash D}\text {LC} \end{aligned}$$
(A.7)

The Identity Group

Axiom

$$\begin{aligned} A\vdash A\quad \text {Ax} \end{aligned}$$
(A.8)

Cut

$$\begin{aligned} \frac{\varGamma \vdash A \quad A, \varGamma ' \vdash D}{ \varGamma , \varGamma ' \vdash D} \text {CUT} \end{aligned}$$
(A.9)

Multiplicative Logical Rules

Conjunctive (Multiplicative) Connective

$$\begin{aligned} \frac{\varGamma , A, B \vdash D}{\varGamma , A\otimes B\vdash D}\text {L}\otimes \end{aligned}$$
(A.10)
$$\begin{aligned} \frac{\varGamma \vdash A\quad \varGamma '\vdash B}{\varGamma , \varGamma '\vdash A\otimes B}\text{ R }\otimes \end{aligned}$$
(A.11)

Of Course operator

!

$$\begin{aligned} \frac{\varGamma , A \vdash D}{\varGamma , !A \vdash D}\text {L!} \end{aligned}$$
(A.12)
$$\begin{aligned} \frac{!\varGamma \vdash A}{!\varGamma \vdash \,\,!A}\text {R!} \end{aligned}$$
(A.13)

Capital Greek letters stand for finite sequences of formulas, possibly including the empty one, and D stands for either a single formula or no formula, i.e. the empty sequence; when it appears in the form \(\otimes D\), the \(\otimes\) symbol is presumed to be absent when D is empty. If \(\varGamma\) denotes the sequence \(A_1,A_2, \ldots , A_n\) then \(!\varGamma\) will denote the sequence \(!A_1,!A_2, \ldots , !A_n\).

We shall ride roughshod over logical niceties, and simply identify the types in our sequent formulas as the spaces of states of corresponding b-networks, taking the combinator \(\otimes\) to have its usual connotation. Thus A will now stand for the Hilbert space we had previously denoted by \(\mathfrak {H}_A\), et sim. A blank space, i.e. the absence of a formula, is interpreted as \(\mathbb {R}\), the unoccupied state or vacuum.
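The sequent machinery above can be made concrete in a toy transcription (entirely illustrative; the model itself attaches no code to GN). Here a sequent \(\varGamma \vdash D\) becomes a pair of a tuple of formula names and a consequent, and CUT (A.9) rewrites its two hypothesis sequents into their conclusion:

```python
from typing import Optional, Tuple

# A sequent Gamma |- D: (context formulas, consequent or None for the empty
# sequence).  Formula names are plain strings.
Sequent = Tuple[Tuple[str, ...], Optional[str]]

def cut(left: Sequent, right: Sequent) -> Sequent:
    """CUT, Eq. (A.9): from Gamma |- A and A, Gamma' |- D infer Gamma, Gamma' |- D."""
    gamma, a = left
    ctx, d = right
    if not ctx or ctx[0] != a:
        raise ValueError("cut formula must head the right-hand context")
    return (gamma + ctx[1:], d)

# From  B |- A  and  A, C |- D  we infer  B, C |- D:
assert cut((("B",), "A"), (("A", "C"), "D")) == (("B", "C"), "D")
```

The other rules admit equally mechanical readings; CUT is shown because it is the one that composes two wiring diagrams end to end.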

Nota bene

The crucial fact that drives the efficacy of the model is the remarkable coincidence (if it is one) that the exterior algebra (aka the Fermi-Dirac-Fock space) perfectly interprets the of course operator, thereby forming a nexus between the physics-like paradigm and the structural logic paradigm.

Accordingly we uniformly denote \(E(\mathfrak {H}_A)\) by !A.

Appendix B: The coproduct on \(E(\mathbb {R}^3)\)

For a basis \(\{e_1,e_2,e_3\}\) for \(\mathbb {R}^3\) and 1 the vacuum or unoccupied state or algebra unit, we have

$$\begin{aligned} \psi (1)&= 1\otimes 1;\end{aligned}$$
(B.1)
$$\begin{aligned} \psi (e_i)&= 1\otimes e_i +e_i \otimes 1;\end{aligned}$$
(B.2)
$$\begin{aligned} \psi (e_i\wedge e_j)&= \psi (e_i) \psi (e_j),\nonumber \\&\text { juxtaposition implying graded product,} \end{aligned}$$
(B.3)
$$\begin{aligned} =1\otimes e_i\wedge e_j + e_i \otimes e_j - e_j \otimes e_i + e_i \wedge e_j \otimes 1,\quad i <j; \end{aligned}$$
(B.4)
$$\begin{aligned} \psi (e_1 \wedge e_2 \wedge e_3)= \psi (e_1)\psi (e_2) \psi (e_3) \end{aligned}$$
(B.5)
$$\begin{aligned} =\;&1\otimes e_1\wedge e_2\wedge e_3 +e_1\otimes e_2\wedge e_3 - e_2 \otimes e_1 \wedge e_3 + e_3\otimes e_1 \wedge e_2 \nonumber \\&\quad +e_1 \wedge e_2 \otimes e_3 +e_2 \wedge e_3 \otimes e_1 - e_1 \wedge e_3 \otimes e_2 + e_1 \wedge e_2 \wedge e_3\otimes 1. \end{aligned}$$
(B.6)
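These identities can be checked mechanically. The following Python sketch (ours; the encodings and names are illustrative) represents a basis monomial \(e_{i_1}\wedge \cdots \wedge e_{i_p}\) of \(E(\mathbb {R}^3)\) as a sorted tuple of indices, implements the graded (Koszul-signed) product on \(E(\mathbb {R}^3)\otimes E(\mathbb {R}^3)\), and verifies Eq. (B.4):

```python
# Illustrative verification of the coproduct formulas above.  A basis
# monomial of E(R^3) is a sorted tuple of indices; the empty tuple is the
# unit 1 (the vacuum).

def wedge(a, b):
    """Wedge two basis monomials; return (sign, monomial), or None if zero."""
    if set(a) & set(b):
        return None  # e_i ^ e_i = 0
    merged = a + b
    # sign of the permutation that sorts the concatenation (count inversions)
    inv = sum(1 for i in range(len(merged))
                for j in range(i + 1, len(merged)) if merged[i] > merged[j])
    return ((-1) ** inv, tuple(sorted(merged)))

def graded_tensor_mul(x, y):
    """Graded product on E(R^3) (x) E(R^3); elements are {(left, right): coeff}."""
    out = {}
    for (a, b), cx in x.items():
        for (c, d), cy in y.items():
            sign = (-1) ** (len(b) * len(c))  # Koszul sign: move c past b
            wl, wr = wedge(a, c), wedge(b, d)
            if wl is None or wr is None:
                continue
            key = (wl[1], wr[1])
            out[key] = out.get(key, 0) + sign * wl[0] * wr[0] * cx * cy
    return {k: v for k, v in out.items() if v}

def psi_generator(i):
    """psi(e_i) = 1 (x) e_i + e_i (x) 1, Eq. (B.2)."""
    return {((), (i,)): 1, ((i,), ()): 1}

# Check Eq. (B.4): psi(e_1 ^ e_2) = 1(x)e_1^e_2 + e_1(x)e_2 - e_2(x)e_1 + e_1^e_2(x)1
assert graded_tensor_mul(psi_generator(1), psi_generator(2)) == {
    ((), (1, 2)): 1, ((1,), (2,)): 1, ((2,), (1,)): -1, ((1, 2), ()): 1}
```

Multiplying the result by \(\psi(e_3)\) reproduces the eight terms of (B.6), signs included.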

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Selesnick, S. Neural waves and computation in a neural net model II: Data-like structures and the dynamics of episodic memory. J Comput Neurosci 52, 223–243 (2024). https://doi.org/10.1007/s10827-024-00876-0
