
ORIGINAL RESEARCH article

Front. Comput. Neurosci., 08 July 2021
Volume 15 - 2021 | https://doi.org/10.3389/fncom.2021.699658

Discrete Dynamics of Dynamic Neural Fields

  • Department of Mathematics, Trinity University, San Antonio, TX, United States

Large and small cortices of the brain are known to contain vast numbers of neurons that interact with one another. They thus form a continuum of active neural networks whose dynamics are yet to be fully understood. One way to model these activities is to use dynamic neural fields, which are mathematical models that approximately describe the behavior of these congregations of neurons. These models have been used in neuroinformatics, neuroscience, robotics, and network analysis to understand not only brain functions or brain diseases, but also learning and brain plasticity. In their theoretical forms, they are given as ordinary or partial differential equations with or without diffusion. Many of their mathematical properties are still under-studied. In this paper, we propose to analyze discrete versions of dynamic neural fields based on nearly exact discretization techniques. In particular, we discuss conditions for the stability of nontrivial solutions of these models for various types of kernels and corresponding parameters. Monte Carlo simulations are given for illustration.

1. Introduction

To model a large number of cells randomly placed together with a uniform volume density and capable of supporting various simple forms of activity, including plane waves, spherical and circular waves, and vortex effects, Beurle (1956) proposed a continuous model consisting of a partial differential equation that can capture some neuronal behaviors, such as excitability at the cortical level. Subsequent modifications by Wilson and Cowan (1972) and Amari (1977) allowed for new features such as inhibition. This essentially gave birth to the theory of dynamic neural fields (DNFs), with far-reaching applications. The literature on DNFs is rather rich, and the trends nowadays are toward cognition, plasticity, and robotics. In cognitive neuroscience for instance, DNFs have enabled analyses of electroencephalograms (Nunez and Srinivasan, 2006), short-term memory (Camperi and Wang, 1998), visual hallucinations (Ermentrout and Cowan, 1979; Tass, 1995), and visual memory (Schmidt et al., 2002). Durstewitz et al. (2000) proposed neurocomputational models of working memory using single-layer DNFs and showed that the temporal dynamics they obtained compare well with in vivo observations. Simmering et al. (2008) and Perone and Simmering (2019) used DNFs to study spatial cognition by successfully simulating infants' performance in some tasks. Quinton and Goffart (2017) used nonlinearities in DNFs to propose a unifying model for goal-directed eye movements that allows for the implementation of qualitatively different mechanisms, including memory formation and sensorimotor coupling. Plasticity in neuroscience can be thought of as the ability of the brain to adapt to external activities by modifying some of its structure. Connections between plasticity and DNFs have been made in studies on intrinsic plasticity, see Strub et al. (2017), or in parallel works such as Neumann and Steil (2011), Pozo and Goda (2010), and the references therein. Robotics has been a great niche for DNFs, with the works of Bicho et al. (2000), Erlhagen and Schöner (2001), Erlhagen and Bicho (2006), and Bicho et al. (2010).

To recall the theoretical aspects of DNFs, let Ω ⊆ ℝ^d be a manifold, where d is a positive integer. In the presence of neurons located on Ω at time t and arranged on L layers, the average potential function $V_k(x_k, t)$ is often used to describe the continuous field on the kth layer: $V_k(x_k, t)$ is the average membrane potential of the neuron located at position $x_k$ at time t on the kth layer. When L = 1, $V(x_k, t)$ can also be understood as the synaptic input or activation at time t of a neuron at position or direction $x_k$. It satisfies the following equation (see Amari, 1977):

$$\frac{\partial V_k(x_k, t)}{\partial t} = -V_k(x_k, t) + \sum_{\ell=1}^{L}\int_{\Omega} W_{k\ell}(x_k, x_\ell)\,G\big(V_\ell(x_\ell, t)\big)\,dx_\ell + S_k(x_k, t), \qquad (1)$$

where $W_{k\ell}(x_k, x_\ell)$ is the intensity of the connection between a neuron located at position $x_k$ on the kth layer and a neuron at position $x_\ell$ on the ℓth layer, and $G[V_\ell(x_\ell, t)]$ is the pulse emission rate (or activity) at time t of the neuron located at position $x_\ell$ on the ℓth layer. G is often chosen as a monotone increasing function. $S_k(x_k, t)$ represents the intensity of the external stimulus at time t arriving on the neuron at position $x_k$ on the kth layer, see Figure 1 below.


Figure 1. A representation of the DNFs with four layers and 34 neurons. The green dots are the neurons with intra-layer connections represented by the red lines, and inter-layer connections represented by the dashed lines.

Most mentions of discrete systems involving DNFs rely on an Euler forward type discretization—see Erlhagen and Bicho (2006), Quinton and Goffart (2017), and Beim and Hutt (2014)—with simulations as the main objective. Stability analysis of solutions of DNFs in the sense of discrete dynamical systems is either understudied or often ignored. Beim and Hutt (2014) studied a heterogeneous aspect of DNFs and found existence of attractors and saddle nodes for solutions of (1). There are also discussions of DNFs as neural networks. For instance, Sussilo and Barack (2013) discussed continuous-time recurrent neural networks (RNNs) as differential equations of the form

$$\frac{dV_k}{dt} = -V_k + \sum_{j=1}^{M}L_{kj}\,G(V_j) + \eta_k, \qquad (2)$$

where $V_k$ is the kth component of an M-dimensional state of activations, $G(V_j)$ are the firing rates, and $\eta_k = \sum_{j=1}^{I}B_{kj}u_j$ represents the external stimulus received from I inputs $u_j$ with synaptic weights $B_{kj}$. The weights $L_{kj}$ are selected from a normal distribution. In a sense, Equation (1) can be considered a continuous-time RNN. Authors such as Elman (1990) and Williams and Zipser (1990), and most recently Durstewitz (2017)—see Equation (9.8) therein—have discussed discrete-time RNNs formulated as difference equations of the form

$$V_{n+1}(i) = G\left(W_{i0} + \sum_{j=1}^{M}W_{ij}\,V_n(j) + \eta_n(i)\right), \qquad (3)$$

where G is a monotone increasing function, $W_{i0}$ is a basic activity level for the neuron at position $x_i$, the $W_{ij}$'s are weights representing the connection strength between the neuron at position $x_i$ and the neuron at position $x_j$, and $\eta_n(i)$ is some external stimulus at time n arriving on the neuron at position $x_i$. So essentially, after discretization, (1) may yield not only a discrete-time dynamical system, but also an RNN. In fact, Jin et al. (2021) considered a discrete equivalent in space of Equation (1) for L = 2, while maintaining time as continuous, and discussed it in the context of unsupervised learning, which could help understand plasticity in the brain. The two main goals of this paper are:

1. to propose a discrete dynamic neural field model that is both numerically convergent and dynamically consistent. The model is based on nearly exact discretization techniques proposed in Kwessi et al. (2018) that ensure that the discrete system obtained preserves the dynamics of the continuous system it originated from.

2. to propose a rigorous and holistic theoretical framework to study and understand the stability of dynamic neural fields.

The model we propose has the advantage of being general and simple enough to implement. It also captures different generalizations of DNFs such as predictive neural fields (PNFs) and active neural fields (ANFs). We further show that the proposed model is in effect a Mann-type (see Mann, 1953) local iterative process which admits weak-type solutions under appropriate conditions. The model can also be written as a recurrent neural network. Stability analysis shows that in some cases, the proposed model can be studied using graph theory.

The remainder of the paper is organized as follows: in section 2, we propose our nearly exact discretization scheme for DNFs. In section 3.1, we give a brief overview of the essential notions for the stability analysis of discrete dynamical systems; in section 3.2, we discuss the existence of nontrivial fixed solutions for DDNFs; in sections 3.3 and 3.4, we discuss stability analysis for one-layer and multiple-layer discrete DNFs. In section 3.5, we propose simulations and their interpretation for neural activity, and in section 4, we make our concluding remarks.

2. Materials and Methods

2.1. Discrete Schemes for DNFs

For the sake of simplicity, we will use the following notations: given a positive integer L, integers 1 ≤ k, ℓ ≤ L, and layer sizes $M_k$, $M_\ell$, we consider indices 1 ≤ i ≤ $M_k$ and 1 ≤ j ≤ $M_\ell$. We put

$$V_{i,n}^{(k)} := V_k(x_i, t_n), \qquad S_{i,n}^{(k)} := S_k(x_i, t_n), \qquad \alpha_n := 1 - \phi(h_n), \qquad \beta_k := \frac{|\Omega|}{M_k}, \qquad W_{ij}^{(k,\ell)} := W_{k\ell}(x_i, y_j), \qquad (4)$$

where $h_n = t_{n+1} - t_n$, and $\phi(h_n) = 1 - e^{-h_n}$ is the time scale of the field. We obviously assume that all the layers of the domain Ω have the same time scale. |Ω| represents the size of the field Ω, which henceforth we assume to be finite. $x_i$ represents a point on the kth layer and $y_j$ a point on the ℓth layer. $M_k$ represents the number of interconnected neurons on the kth layer of the domain Ω, and $\beta_k = \frac{|\Omega|}{M_k}$ represents the spatial discretization step on the kth layer of the domain Ω. When L = 1, we will adopt the notations $V_{i,n}^{(1)} := V_{i,n}$, $S_{i,n}^{(1)} := S_{i,n}$, and $W_{ij}^{(1,1)} := W_{ij}$. Henceforth, for the sake of clarity and when needed, we will represent vectors and matrices in bold.

To obtain a discrete model for DNFs, we are going to use the nearly exact discretization schemes (NEDS) of Kwessi et al. (2018). We note that according to this reference, Equation (1) is of type T1; therefore, the left-hand side will be discretized as

$$\frac{V_{i,n+1}^{(k)} - V_{i,n}^{(k)}}{\phi(h_n)}.$$

For the right-hand side, we write a Riemann sum for the integral in Equation (1) over the domain Ω as

$$\sum_{\ell=1}^{L}\frac{|\Omega|}{M_\ell}\sum_{j=1}^{M_\ell}W_{ij}^{(k,\ell)}\cdot G\big(V_{j,n}^{(\ell)}\big) + S_{i,n}^{(k)}.$$

This means that, for a given L ∈ ℕ, for integers 1 ≤ k ≤ L, $M_k$, and 1 ≤ i ≤ $M_k$, a discrete dynamic neural field (DDNFs) model can be written as

$$V_{i,n+1}^{(k)} = \alpha_n V_{i,n}^{(k)} + (1-\alpha_n)\sum_{\ell=1}^{L}\sum_{j=1}^{M_\ell}\beta_\ell\cdot W_{ij}^{(k,\ell)}\cdot G\big(V_{j,n}^{(\ell)}\big) + (1-\alpha_n)S_{i,n}^{(k)}. \qquad (5)$$

Staying within the confines of the layers' architecture proposed by Amari (1977), in a DDNFs with L ≥ 3, we observe that the activity on neurons located on the bottom (or top) layer depends on the interconnected neurons on that layer and their connections to the layer above (or below). First, we define

$$\mathcal{L}_k = \begin{cases}\{1, 2\} & \text{if } k = 1\\ \{k-1, k, k+1\} & \text{if } 1 < k < L\\ \{L-1, L\} & \text{if } k = L.\end{cases}$$

Middle layers are connected to two layers, one above and one below. This means that at time $t_n$, a neuron at position $x_i$ on the first layer receives external input $S_{i,n}^{(1)}$, intra-layer signals $W_{ij}^{(1,1)}$ from $M_1$ neurons on the first layer, and inter-layer signals $W_{ij}^{(1,2)}$ from $M_2$ neurons on the second layer. Likewise, a neuron at position $x_i$ on the Lth layer receives external input $S_{i,n}^{(L)}$, inter-layer signals $W_{ij}^{(L-1,L)}$ from $M_{L-1}$ neurons on the (L − 1)th layer, and intra-layer signals $W_{ij}^{(L,L)}$ from $M_L$ neurons on the Lth layer. For 1 < k < L, we have $W_{ij}^{(k,\ell)} = 0$ if ℓ ∉ $\mathcal{L}_k$ and $W_{ij}^{(k,\ell)} \neq 0$ if ℓ ∈ $\mathcal{L}_k$. This means that at time $t = t_n$, a neuron at position $x_i$ on the kth layer receives external input $S_{i,n}^{(k)}$, inter-layer signals $W_{ij}^{(k-1,k)}$ from $M_{k-1}$ neurons on the (k − 1)th layer, intra-layer signals $W_{ij}^{(k,k)}$ from $M_k$ neurons on the kth layer, and inter-layer signals $W_{ij}^{(k,k+1)}$ from $M_{k+1}$ neurons on the (k + 1)th layer.

Further, Equation (5) has a matrix notation which helps with the study of stability analysis. Indeed, let $M = \max_{1\le k\le L}M_k$ and d = L × M, and let $X = (X_1, X_2, \cdots, X_L) \in \Omega$, where for 1 ≤ k ≤ L, $X_k = \{(x_{k1}, x_{k2}, \cdots, x_{kM})\}$ represents the positions of $M_k$ neurons on the kth layer. We assume without loss of generality that $x_{ki} = 0$ if $M_k < i \le M$, so as to have a full L × M matrix. Now consider the matrix

$$\mathbf{V}_n(X) = \big[\mathbf{V}_n^{(1)}(X), \mathbf{V}_n^{(2)}(X), \cdots, \mathbf{V}_n^{(L)}(X)\big],$$

formed by the column vectors $\mathbf{V}_n^{(k)}(X) = \big(V_{1,n}^{(k)}, V_{2,n}^{(k)}, \cdots, V_{M,n}^{(k)}\big)^T$. In this case, Equation (5) can be written in matrix form as

$$\mathbf{V}_{n+1}(X) = F(\mathbf{V}_n(X)) = \big[f_1(\mathbf{V}_n(X)), f_2(\mathbf{V}_n(X)), \cdots, f_L(\mathbf{V}_n(X))\big], \qquad (6)$$

with an element-wise notation

$$[f_k(\mathbf{V}_n(X))]_i = \alpha_n V_{i,n}^{(k)} + (1-\alpha_n)\sum_{\ell\in\mathcal{L}_k}\sum_{j=1}^{M_\ell}\beta_\ell\cdot W_{ij}^{(k,\ell)}\cdot G\big(V_{j,n}^{(\ell)}\big) + (1-\alpha_n)S_{i,n}^{(k)},$$

for 1 ≤ i ≤ M and 1 ≤ k ≤ L. For more complex layer architectures, the interested reader can refer to De Domenico et al. (2013) and the references therein, where tensor products are used instead of matrices.
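To make the update rule concrete, the following is a minimal computational sketch of one iteration of Equation (5) in the element-wise form above, written in Python with NumPy. The function and variable names (ddnf_step, neighbors, etc.) are illustrative assumptions and are not part of the original formulation.

```python
import numpy as np

def ddnf_step(V, W, S, G, alpha, beta, neighbors):
    """One step of Equation (5).

    V         : array of shape (L, M), current potentials V_{i,n}^{(k)}
    W         : dict mapping (k, l) -> (M, M) array of weights W_{ij}^{(k,l)}
    S         : array of shape (L, M), external stimuli S_{i,n}^{(k)}
    G         : callable, pulse emission rate function applied element-wise
    alpha     : scalar in (0, 1), alpha = 1 - phi(h)
    beta      : array of length L, spatial steps beta_l = |Omega| / M_l
    neighbors : dict mapping k -> list of layers l in L_k
    """
    L, M = V.shape
    V_new = np.empty_like(V)
    for k in range(L):
        recurrent = np.zeros(M)
        for l in neighbors[k]:
            # sum over j of beta_l * W_{ij}^{(k,l)} * G(V_{j,n}^{(l)})
            recurrent += beta[l] * W[(k, l)] @ G(V[l])
        V_new[k] = alpha * V[k] + (1.0 - alpha) * (recurrent + S[k])
    return V_new
```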

2.2. Pulse Emission Rate Functions

Amari (1977) proved the existence of nontrivial solutions for DNFs when G is the Heaviside function. Further studies have shown that this is in fact true for a large class of functions G. As we will see in section 3.2, functions G satisfying the Hammerstein condition G(v) ≤ μ₁v + μ₂ are good candidates. However, most practitioners use either the Heaviside function or the sigmoid function. For some θ > 0 and v₀ ≥ 0, the sigmoid function is defined as $G_1(v, v_0) = \big(1 + e^{-\theta(v - v_0)}\big)^{-1}$, and the Heaviside function is defined as $G_2(v, v_0) = \mathbb{I}_{(v_0, \infty)}(v)$, see Figure 2 below. Here $\mathbb{I}_A$ is the indicator function, given as $\mathbb{I}_A(x) = 1$ if x ∈ A and $\mathbb{I}_A(x) = 0$ otherwise. This shows that the assumption that G be nonnegative and increasing is not unrealistic. In fact, we can transform many functions to bear properties similar to those of the Heaviside and sigmoid functions, see for instance Kwessi and Edwards (2020).

FIGURE 2
www.frontiersin.org

Figure 2. An illustration of the pulse emission rate functions G1(v) and G2(v) for v0 = 0.

REMARK 1. We observe that the sigmoid activation function $G_1(v) = \big(1 + e^{-\theta(v - v_0)}\big)^{-1}$ is widely preferred in the literature for its bounded nature. Another reason is that it is also suitable when the $X_n(i)$ in (3) are binary, that is, they take values 0 or 1, where 0 and 1 represent, respectively, non-active and active neurons at time n. In this case, $G_1\big(W_{i0} + \sum_{j=1}^{M}W_{ij}X_n(j) + \eta_n(i)\big) = \Pr(X_{n+1}(i) = 1)$ would represent the probability that there is activity on the neuron at position $x_i$ at time n + 1.
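For concreteness, the two pulse emission rate functions can be written as in the following short Python sketch; the default parameter values are arbitrary choices for illustration, not values prescribed in the text.

```python
import numpy as np

def G1(v, v0=0.0, theta=1.0):
    """Sigmoid rate: G1(v, v0) = 1 / (1 + exp(-theta * (v - v0)))."""
    return 1.0 / (1.0 + np.exp(-theta * (v - v0)))

def G2(v, v0=0.0):
    """Heaviside rate: G2(v, v0) = 1 if v > v0, else 0."""
    return (v > v0).astype(float)

v = np.linspace(-5, 5, 11)
print(G1(v), G2(v))
```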

2.3. Connection Intensity Functions

There are a variety of connection intensity functions (or kernels) that one can choose from. These include the Gaussian kernel $\mathrm{Gau}(x, y, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2\sigma^2}\|x - y\|^2}$, the Laplacian kernel defined as $\mathrm{Lap}(x, y, \sigma) = \frac{1}{2\sigma}e^{-\frac{1}{\sigma}\|x - y\|}$, or the hyperbolic tangent kernel $T_{hyp}(x, y, \beta, r) = 1 + \tanh(\beta x^T\cdot y + r)$, where ∥·∥ is a norm in ℝ^d. Note that as defined, the hyperbolic tangent kernel is not symmetric; however, for a suitable choice of β and r, one may obtain a nearly symmetric kernel. Indeed, if r < 0 and β is close to zero, we have $T_{hyp}(x, y, \beta, r) \approx (1 + \tanh(r))e^{-\beta\|x - y\|^2}$, see for instance Lin and Lin (2003). This kernel is very useful in non-convex problems and support vector machines.

In practice, however, it is very common to select the function W(x, y) as the difference of two of the functions above, which results in a “Mexican hat” shaped function, that is, either $W(x, y) = \sigma_+\mathrm{Gau}(x, y, \sigma_1) - \sigma_-\mathrm{Gau}(x, y, \sigma_2)$, or $W(x, y) = \sigma_+\mathrm{Lap}(x, y, \sigma_1) - \sigma_-\mathrm{Lap}(x, y, \sigma_2)$, or $W(x, y) = \sigma_+T_{hyp}(x, y, \beta_1, r) - \sigma_-T_{hyp}(x, y, \beta_2, r)$. Figure 3 below is an illustration of these kernels for selected parameters. This choice ensures that neurons within the same cell assembly are connected reciprocally by high synaptic weights $\sigma_+$, whereas neurons not belonging to the same assembly are connected by low synaptic weights $\sigma_-$ (see for instance Durstewitz et al., 2000; Quinton and Goffart, 2017; Wijeakumar et al., 2017; Jin et al., 2021).
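These kernels are straightforward to implement; the sketch below assumes a one-dimensional field, so that $x^T\cdot y$ reduces to the product xy, and the default parameters loosely follow the Gaussian case of Figure 3. It is an illustrative sketch, not code from the original paper.

```python
import numpy as np

def gau(x, y, sigma):
    return np.exp(-0.5 * np.abs(x - y)**2 / sigma**2) / (sigma * np.sqrt(2 * np.pi))

def lap(x, y, sigma):
    return np.exp(-np.abs(x - y) / sigma) / (2 * sigma)

def thyp(x, y, beta, r):
    return 1.0 + np.tanh(beta * x * y + r)

def mexican_hat(x, y, sig_p=4.5, sig_m=1.5, s1=1.0, s2=4.5):
    # W(x, y) = sigma_+ Gau(x, y, sigma_1) - sigma_- Gau(x, y, sigma_2)
    return sig_p * gau(x, y, s1) - sig_m * gau(x, y, s2)

x = np.linspace(-20, 20, 401)
print(mexican_hat(x, 0.0).max())
```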


Figure 3. An illustration of the connection intensity functions defined above for σ+=4.5,σ-=1.5,σ1=1,σ2=4.5,β1=0.1,β2=0.5, and r = − 0.1.

3. Existence and Stability of Nontrivial Solutions of the DDNFs

3.1. Stability Analysis

We recall the essential notions for the stability analysis of a discrete dynamical system $V_{n+1} = F(V_n)$, where F : ℝ^d → ℝ^d is a continuously differentiable map.

DEFINITION 2.

(i) A point V is said to be a fixed point for F if F(V) = V.

(ii) The orbit $\mathcal{O}(V)$ of a point V is defined as $\mathcal{O}(V) = \{F^n(V),\ n \in \mathbb{Z}^+\}$, with $F^n = \underbrace{F\circ F\circ\cdots\circ F}_{n\ \text{times}}$.

(iii) The spectral radius ρ(A) of a matrix A is the maximum of the absolute values of its eigenvalues.

(iv) A fixed point $V_*$ of F is said to be stable if trajectories starting in a sufficiently small neighborhood of $V_*$ remain arbitrarily close to $V_*$, that is, for all ϵ > 0, there exists δ > 0 such that $|V - V_*| < \delta \implies |F^n(V) - V_*| < \epsilon$ for all n.

(v) A fixed point $V_*$ of F is said to be attracting if there exists a neighborhood of $V_*$ whose trajectories converge to $V_*$, that is, there exists δ > 0 such that $|V - V_*| < \delta \implies \lim_{n\to\infty}F^n(V) = V_*$.

(vi) A fixed point V for F is said to be asymptotically stable if it is both stable and attracting.

(vii) A fixed point $V_*$ of F is said to be unstable if it is not stable.

THEOREM 3. Let $V_{n+1} = F(V_n)$ be a discrete dynamical system with a fixed point $V_*$ and let $A = J_F(V_*)$ be its Jacobian matrix at $V_*$.

1. If ρ(A) < 1, then $V_*$ is asymptotically stable.

2. If ρ(A) > 1, then $V_*$ is unstable.

3. If ρ(A) = 1, then $V_*$ may be stable or unstable.
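As a practical illustration of Theorem 3 (not part of the original text), one can estimate the Jacobian of F at a fixed point by finite differences and classify the fixed point by its spectral radius. The map used in the example is an arbitrary one-dimensional toy map.

```python
import numpy as np

def spectral_radius(A):
    return np.max(np.abs(np.linalg.eigvals(A)))

def jacobian(F, V_star, eps=1e-6):
    # central finite-difference approximation of J_F(V_star)
    d = V_star.size
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d); e[j] = eps
        J[:, j] = (F(V_star + e) - F(V_star - e)) / (2 * eps)
    return J

def classify(F, V_star):
    rho = spectral_radius(jacobian(F, V_star))
    if rho < 1:
        return "asymptotically stable"
    if rho > 1:
        return "unstable"
    return "inconclusive (rho = 1)"

# toy example: fixed point 0 of v -> 0.5 v (1 - v)
F = lambda v: np.array([0.5 * v[0] * (1 - v[0])])
print(classify(F, np.array([0.0])))
```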

3.2. Existence of Nontrivial Fixed Solutions for the DDNFs Model

A nontrivial solution for the DNFs is a nonconstant solution for which the left-hand side of Equation (1) is equal to zero. For the DDNFs, this can be formulated in the following way: for a given $x_i$ on the kth layer, the DDNFs in (5) has a nontrivial fixed point $V_{i,*}^{(k)}$ if for all n ≥ 1, we have $V_{i,n+1}^{(k)} = F(V_{i,n}^{(k)}) = V_{i,n}^{(k)} := V_{i,*}^{(k)}$. This is satisfied when

$$V_{i,n}^{(k)} = \sum_{\ell\in\mathcal{L}_k}\sum_{j=1}^{M_\ell}\beta_\ell\cdot W_{i,j}^{(k,\ell)}\cdot G\big(V_{j,n}^{(\ell)}\big) + S_i^{(k)}. \qquad (7)$$

The right-hand side of Equation (7) is a Riemann sum for

$$\sum_{\ell=1}^{L}\int_{\Omega}W_{k\ell}(x, y)\,G\big(V_\ell(y, t)\big)\,dy + S_k(x).$$

So the existence of a nontrivial fixed point for the continuous DNFs Equation (1) implies the existence of a nontrivial fixed solution for the DDNFs. According to Hammerstein (1930), such a nontrivial solution exists if W(x, y) is symmetric positive definite and G satisfies G(v) ≤ μ₁v + μ₂, where 0 < μ₁ < 1 and μ₂ > 0 are constants.

We will use a different approach to prove the existence of nontrivial solutions of DNFs. Let $(S, \mathcal{A}, \mu)$ be a measure space. We will consider the space $L^1(S)$ of equivalence classes of integrable functions on S, endowed with the norm $\|f\| = \int_S|f(x)|\,dx$, where dx = μ(dx).

DEFINITION 4. A stochastic kernel W is a measurable function W : S × S → ℝ such that

1. W(x, y) ≥ 0, ∀x, yS,

2. $\int_S W(x, y)\,dy = 1, \quad \forall x \in S$.

DEFINITION 5. A density function f is an element of $L^1(S)$, with f : S → ℝ, such that $\int_S f\,d\mu = 1$.

DEFINITION 6. A Markov operator P on L1(S) is a positive linear operator that preserves the norm, that is, P : L1(S) → L1(S) is linear such that

1. Pf ≥ 0 if f ≥ 0 and f ∈ $L^1(S)$,

2. ∥Pf∥ = ∥f∥.

REMARK 7.

We note that f ≥ 0 (and Pf ≥ 0) refers to equivalence classes of integrable functions on S that are greater than or equal to 0 except on a set of measure zero; thus the inequality should be understood almost everywhere.

We also note that when an operator P satisfies the second condition above, it is sometimes referred to as a non-expansive operator.

THEOREM 8. (Corollary 5.2.1–Lasota and Mackey, 2004). Let $(S, \mathcal{A}, \mu)$ be a measure space and P : $L^1(S)$ → $L^1(S)$ be a Markov operator. If for some density function f there is g ∈ $L^1(S)$ such that $P^nf \le g$ for all n, then

there is a density $f_*$ such that $Pf_* = f_*$.

Now we use Definitions 4, 5, and 6 and Theorem 8 above to state our result on the existence of nontrivial fixed solutions of DNFs.

THEOREM 9. Let $S := \Omega = \prod_{i=1}^{d}[a_i, b_i] \subseteq \mathbb{R}^d$. Consider a DNFs with no external stimulus [S(x) = 0]. If W(x, y) is positive and bounded, then the DNFs (and consequently the DDNFs) has at least one nontrivial fixed solution $V_*$.

PROOF. Let $\mathcal{A} = \mathcal{B}(S)$ be the Borel σ-algebra on S generated by the family $\big\{\prod_{i=1}^{d}(l_i, u_i) : a_i < l_i < u_i < b_i\big\}$ and let μ be the associated Borel measure. A nontrivial solution $V_*(x)$ for the DNFs satisfies the equation

$$V_*(x) = \int_S W(x, y)\,G\big(V_*(y)\big)\,dy, \quad \text{for all } x \in S.$$

Consider the linear operator P defined on S as

$$Pf(x) = \int_S W(x, y)\,f(y)\,dy, \quad \text{where } f(x) = G(V(x)).$$

Clearly, P is a Markov operator and we have

$$\forall n \ge 1, \qquad P^nf(x) = \int_S W_n(x, y)\,f(y)\,dy,$$

with $W_1(x, y) = W(x, y)$ and $W_{n+1}(x, y) = \int_S W(x, z)\,W_n(z, y)\,dz$ for n ≥ 1. Moreover, by the positivity and boundedness of W, there exists M > 0 such that for a density function f, we have $P^nf(x) \le M$, ∀n ≥ 1. By the above theorem, there exists a density $f_*$ such that $Pf_* = f_*$. We conclude by observing that $f_*(x) = G(V_*(x)) = V_*(x)$ for some function $V_*(x)$ defined on Ω, and thus a nontrivial solution for the DNFs and DDNFs exists. That is, $V_*$ is a fixed point of G.

One major limitation of the above theorem is that it only applies to positive connectivity functions W(x, y), which excludes a wide range of DNFs; for example, competing DNFs as well as DNFs for working memory use almost exclusively negative weight functions, see for instance Zibner et al. (2011). One remedy could be to prove the existence of weak fixed solutions of (5) using iterative processes without a positivity restriction. Indeed, let Ω be a nonempty, closed, and convex subset of $l^2$, the space of infinite sequences $v = (v_1, v_2, \cdots)$ such that $\|v\|_2^2 = \sum_{n=1}^{\infty}|v_n|^2 < \infty$. We recall that $l^2$ is a Hilbert space with the inner product $\langle v, w\rangle = \sum_{n=1}^{\infty}v_n\overline{w_n}$.

DEFINITION 10. Consider a map T : Ω → $l^2$. Suppose there is a constant C such that $\|T(v_1) - T(v_2)\|_2 \le C\|v_1 - v_2\|_2$. Then

1. T is called a Lipschitz map if C > 0.

2. T is called a contraction map if 0 < C < 1.

3. T is called a non-expansive map if C = 1.

DEFINITION 11. Let T : Ω → l2 be either a contraction or a non-expansive map. A Mann-type iterative process is a difference equation on Ω defined as

$$\begin{cases}V_0 \in \Omega\\ V_{n+1} = \alpha_n V_n + (1-\alpha_n)T(V_n).\end{cases} \qquad (8)$$

THEOREM 12. Suppose Ω is a subset of $l^2$ and W(x, y) is bounded by $W_0$. If G(v) is Lipschitz with Lipschitz constant C such that $W_0C \le 1$, then the DDNFs (5) is a Mann-type iterative process with at least one weak solution.

PROOF. Here, we write $\mathbf{V}_n$ to mean $\mathbf{V}_n(X)$ introduced earlier. We start by rewriting certain quantities in vector and matrix form. Let d = L × M where, as above, $M = \max_{1\le k\le L}M_k$, and let $\mathbf{S}_n = \{S_{i,n}^{(k)},\ 1\le k\le L,\ 1\le i\le M\}$. Let $G(\mathbf{V}_n^{(\ell)}) = \big(G(V_{j,n}^{(\ell)})\big)_{1\le j\le M}$ and $\Gamma_i^{(k,\ell)} = \big(\beta_\ell W_{i,j}^{(k,\ell)}\big)_{1\le j\le M}$ where, as above, we complete the vectors with zeros for $M_\ell < j \le M$. Then we have the dot product

$$\Gamma_i^{(k,\ell)}\cdot G\big(\mathbf{V}_n^{(\ell)}\big) = \sum_{j=1}^{M_\ell}\beta_\ell W_{ij}^{(k,\ell)}\,G\big(V_{j,n}^{(\ell)}\big), \quad 1\le i\le M,\ \ell\in\mathcal{L}_k.$$

Similarly, let $\Gamma_i^{(k)} = \big(\Gamma_i^{(k,\ell)}\big)_{\ell\in\mathcal{L}_k}$ and $G(\mathbf{V}_n) = \big(G(\mathbf{V}_n^{(\ell)})\big)_{\ell\in\mathcal{L}_k}$. We also have the dot product

$$\Gamma_i^{(k)}\cdot G(\mathbf{V}_n) = \sum_{\ell\in\mathcal{L}_k}\Gamma_i^{(k,\ell)}\cdot G\big(\mathbf{V}_n^{(\ell)}\big), \quad 1\le i\le M.$$

Now we consider the matrix $\Gamma = \big[\Gamma^{(1)}, \Gamma^{(2)}, \cdots, \Gamma^{(L)}\big]^T$ formed by the block matrices $\Gamma^{(k)} = \big(\Gamma_1^{(k)}, \Gamma_2^{(k)}, \cdots, \Gamma_M^{(k)}\big)$ for 1 ≤ k ≤ L. We observe that (5) is a Mann-type process written as

$$\begin{cases}V_0 \in \Omega\\ V_{n+1} = \alpha_n V_n + (1-\alpha_n)T(V_n),\end{cases} \qquad (9)$$

where T is the linear map on Ω defined as T(Vn) = Γ · G(Vn)+Sn. It follows that for Un, Vn ∈ Ω

$$\|T(\mathbf{V}_n) - T(\mathbf{U}_n)\|_2 = \big\|\Gamma\,[G(\mathbf{U}_n) - G(\mathbf{V}_n)]\big\|_2 \le W_0\,\|G(\mathbf{U}_n) - G(\mathbf{V}_n)\|_2 \le C\,W_0\,\|\mathbf{U}_n - \mathbf{V}_n\|_2, \quad \text{since } G \text{ is Lipschitz}.$$

Since by hypothesis $CW_0 \le 1$, T is either a contraction or a non-expansive map. It is known (see Mann, 1953) that in this case, the Mann-type iterative process (9) has at least one weak solution $V_*$, that is, there exists $V_* \in \Omega$ such that $\lim_{n\to\infty}\langle V_n, V\rangle = \langle V_*, V\rangle$ for all $V \in l^2$.

The second equation in (9) is a generalization of Equation (7) in Quinton and Goffart (2017) and, according to their description, it can be considered a predictive neural field (PNF), where $T(\mathbf{V}_n)$ is an internal projection corresponding to the activity required to cancel the lag of the neural field $\mathbf{V}_n$ for a specific hypothetically observed trajectory, by exciting the field in the direction of the motion, ahead of the current peak location, and inhibiting the field behind the peak.
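As an illustration (not from the original text), the sketch below runs the Mann-type iteration (9) for a toy single-layer field with a sigmoid rate function and numerically checks the contraction condition $C\,W_0 \le 1$ of Theorem 12; the weight matrix, constants, and names used here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 50
Gamma = 0.01 * rng.standard_normal((M, M))   # stand-in for the weights beta * W
S = np.zeros(M)                              # no external stimulus
alpha = np.exp(-0.8)                         # alpha = e^{-h}, h = 0.8

def G(v, theta=1.0):
    return 1.0 / (1.0 + np.exp(-theta * v))  # sigmoid, Lipschitz with C = theta / 4

W0 = np.linalg.norm(Gamma, 2)                # operator-norm bound on Gamma
C = 0.25                                     # Lipschitz constant of the sigmoid
print("contraction condition C*W0 <= 1:", C * W0 <= 1)

def T(V):
    return Gamma @ G(V) + S

V = -1.5 * np.ones(M)
for _ in range(200):
    V = alpha * V + (1 - alpha) * T(V)       # Mann-type iteration (9)
print("converged potential range:", V.min(), V.max())
```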

3.3. Stability Analysis of Hyperbolic Fixed Solutions

To discuss stability analysis for the DDNFs in (5), we separate the analysis into a single-layer system and a multiple-layer system. For simplification purposes, we consider the following notations when necessary:

$$K_{ij}^{(k,\ell)} := W_{ij}^{(k,\ell)}\,G'\big(V_{j,*}^{(\ell)}\big), \qquad \bar{K}_{i,\cdot}^{(k,\ell)} := \frac{1}{M_\ell}\sum_{j=1}^{M_\ell}K_{ij}^{(k,\ell)}.$$

If L = 1 and for any $M_1 > 0$, we will write $K_{ij} := K_{ij}^{(1,1)}$ and $\bar{K}_{i,\cdot} := \bar{K}_{i,\cdot}^{(1,1)}$.

If L > 1 and $M_k = M_\ell = 1$ for all 1 ≤ k, ℓ ≤ L, we will write $K^{(k,\ell)} := K_{11}^{(k,\ell)}$.

3.3.1. Single-Layer DDNFs Model

Let xi ∈ Ω. Suppose that there are M neurons interconnected and connected to the neuron located at xi. Let Wij be the intensity of the connection between the neuron at xi and the neuron at xj. Let

$$\mu_i = \frac{|\Omega|}{M}\sum_{j=1}^{M}W_{ij}\,G'(V_{j,*}) = |\Omega|\,\bar{K}_{i,\cdot}$$

be the weighted average rate of change of pulse emissions received or emitted by the neuron at $x_i$, where $V_{i,*} := V_{i,*}^{(1)}$ is a nontrivial solution of Equation (5) with L = 1. Some authors, like Lin et al. (2009), specify the type of connection by dividing the connectivities or synaptic strengths $W_{ij}$ into a group of receptive signals called “feed backward” and a group of emission signals called “feed forward.” Here we make no such specification. Recall that $\alpha_n = 1 - \phi(h_n) = e^{-h_n}$. Henceforth, we fix $h_n = h > 0$ for all n so that $0 < \alpha := \alpha_n = e^{-h} < 1$.

THEOREM 13. Let Ω ⊆ ℝ be a bounded region. Let M be a positive integer. Let $\mathbf{x} = (x_i)_{1\le i\le M}$ be a vector of M points on Ω interconnected via a connectivity function W(·, ·).

Let $\mathbf{V}_*(\mathbf{x}) = (V_{i,*})_{1\le i\le M}$ be a nontrivial fixed point of the DDNFs on Ω.

If $\max_{1\le i\le M}|\mu_i| < 1$, then $\mathbf{V}_*(\mathbf{x})$ is asymptotically stable.

PROOF. We have $\beta_\ell = \beta = \frac{|\Omega|}{M}$. Let $x_i \in \Omega$ and let $V_{i,*}$ be a nontrivial fixed point of the DDNFs with one layer. Let 1 ≤ j ≤ M and $x_j \in \Omega$. The DDNFs with one layer in Equation (5) can be written as

$$\mathbf{V}_{n+1}(\mathbf{x}) = f(\mathbf{V}_n(\mathbf{x})) = (V_{1,n+1}, V_{2,n+1}, \cdots, V_{M,n+1}),$$

and element-wise as

$$V_{i,n+1} = \alpha V_{i,n} + (1-\alpha)\frac{|\Omega|}{M}\sum_{j=1}^{M}W_{ij}\,G(V_{j,n}) + (1-\alpha)S_{i,n}.$$

The function f : ℝ^M → ℝ^M is continuously differentiable since the function G : ℝ → ℝ is, and given 1 ≤ i, p ≤ M, we have that

$$\frac{\partial[f(\mathbf{V}_n(\mathbf{x}))]_i}{\partial V_{p,n}} = \frac{\partial V_{i,n+1}}{\partial V_{p,n}} = \alpha\delta_{ip} + (1-\alpha)\frac{|\Omega|}{M}\sum_{j=1}^{M}W_{ij}\,G'(V_{j,n})\,\delta_{jp} = \alpha\delta_{ip} + (1-\alpha)\frac{|\Omega|}{M}W_{ip}\,G'(V_{p,n}),$$

where $\delta_{ij}$ is the Kronecker symbol with $\delta_{ij} = 1$ if i = j and $\delta_{ij} = 0$ if i ≠ j. It follows that

$$\frac{\partial[f(\mathbf{V}_n(\mathbf{x}))]_i}{\partial V_{p,n}} = \begin{cases}\alpha + (1-\alpha)\frac{|\Omega|}{M}W_{ii}\,G'(V_{i,n}) & \text{when } p = i\\ (1-\alpha)\frac{|\Omega|}{M}W_{ip}\,G'(V_{p,n}) & \text{when } p \ne i.\end{cases}$$

We define the adjacency matrix as the Jacobian matrix $A = J_f[\mathbf{V}_*(\mathbf{x})]$, where $A = (a_{ij})$ with $a_{ij} = \frac{\partial[f(\mathbf{V}_*(\mathbf{x}))]_i}{\partial V_{j,n}}$. A is an M × M matrix; let ρ(A) be its spectral radius. Therefore, we have

$$\begin{aligned}\rho(A) \le \|A\|_{\infty} &= \max_{1\le i\le M}\sum_{j=1}^{M}|a_{ij}|\\ &\le \max_{1\le i\le M}\sum_{j=1}^{M}\left[\alpha\delta_{ij} + (1-\alpha)\left|\frac{|\Omega|}{M}W_{ij}\,G'(V_{j,*})\right|\right]\\ &= \max_{1\le i\le M}\left[\alpha + (1-\alpha)\left|\frac{|\Omega|}{M}\sum_{j=1}^{M}W_{ij}\,G'(V_{j,*})\right|\right]\\ &= \max_{1\le i\le M}\big[\alpha + (1-\alpha)|\mu_i|\big]\\ &= \alpha + (1-\alpha)\max_{1\le i\le M}|\mu_i|.\end{aligned}$$

Since 0 < α < 1, the last equality is equivalent to

$$\min\Big\{1, \max_{1\le i\le M}|\mu_i|\Big\} < \|A\|_{\infty} < \max\Big\{1, \max_{1\le i\le M}|\mu_i|\Big\}.$$

It follows that if $\max_{1\le i\le M}|\mu_i| < 1$, then $\rho(A) \le \|A\|_{\infty} < \max\{1, \max_{1\le i\le M}|\mu_i|\} = 1$ and consequently, $\mathbf{V}_*(\mathbf{x})$ is asymptotically stable.

REMARK 14.

The condition $\max_{1\le i\le M}|\mu_i| < 1$ links the size of the domain Ω with other parameters such as the rate of change of the pulse emission function G, the connectivity function W(x, y), and the number of connected neurons M. Indeed, $\mu_i = |\Omega|\,\bar{K}_{i,\cdot}$. Therefore, achieving $\max_{1\le i\le M}|\mu_i| < 1$ requires $\max_{1\le i\le M}\bar{K}_{i,\cdot} < \frac{1}{|\Omega|}$. This means that to ensure stability of the fixed solution $\mathbf{V}_*(\mathbf{x})$, it is enough to have the weighted rates of change of pulse emission $K_{ij} = W_{ij}\,G'(V_*(x_j))$ be bounded on average by $\frac{1}{|\Omega|}$ for each neuron. This condition is obviously true if the neuron at $x_i$ is not connected with the neuron at $x_j$, since $W_{ij}$ would then be zero. In fact, the latter corresponds to the trivial fixed solution $\mathbf{V}_*(\mathbf{x}) = 0$.
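A hedged numerical sketch of the stability check of Theorem 13 is given below: it evaluates $\mu_i = |\Omega|\,\bar{K}_{i,\cdot}$ at a supplied fixed point and tests $\max_i|\mu_i| < 1$. The fixed point V_star, the kernel, and the use of the sigmoid derivative are illustrative assumptions, not values from the paper.

```python
import numpy as np

def G1_prime(v, theta=1.0):
    # derivative of the sigmoid G1
    g = 1.0 / (1.0 + np.exp(-theta * v))
    return theta * g * (1.0 - g)

def single_layer_stable(W, V_star, omega_size):
    M = V_star.size
    # mu_i = (|Omega| / M) * sum_j W_ij * G'(V_{j,*})
    mu = (omega_size / M) * (W @ G1_prime(V_star))
    return np.max(np.abs(mu)) < 1.0, mu

# toy example with a Gaussian kernel and an assumed fixed point V_star = 0
M, omega_size = 100, 40.0
x = np.linspace(-20, 20, M)
W = np.exp(-0.5 * (x[:, None] - x[None, :])**2)
stable, mu = single_layer_stable(W, np.zeros(M), omega_size)
print(stable, np.max(np.abs(mu)))
```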

3.3.2. Special Case of Single Layer Model With a Heaviside Pulse Emission Function

Suppose the pulse emission function G(v) is the Heaviside function $G_2(v, v_0)$. There are two possible approaches to discuss the stability of the nontrivial solution. First, we observe that the derivative $G_2'(v, v_0)$ of $G_2(v, v_0)$ exists in the sense of distributions and is given as $G_2'(v, v_0) = \delta_{v_0}$, where $\delta_{v_0}$ is the Dirac measure at $v_0$. It can be written as $G_2'(v, v_0) = 0$ if $v \ne v_0$ and $G_2'(v, v_0) = \infty$ if $v = v_0$. This means that for a nontrivial solution, the adjacency matrix A defined above reduces to the diagonal matrix A = diag(α). It follows that ρ(A) = α < 1 and the nontrivial solution $\mathbf{V}_*(\mathbf{x})$ is asymptotically stable. Another approach is to note that Equation (5) with one layer (L = 1) can be written as

$$V_{i,n+1} = \alpha V_{i,n} + Y_{i,n+1}, \quad 1\le i\le M, \qquad (10)$$

where

$$Y_{i,n+1} := (1-\alpha)\left[S_{i,n} + \frac{|\Omega|}{M}\sum_{j=1}^{M}W_{ij}\,\lambda_{j,n}\right],$$

with

$$\lambda_{j,n} = \begin{cases}1 & \text{if } V_{j,n} > 0\\ 0 & \text{if } V_{j,n} < 0.\end{cases}$$

Moreover, we have

$$V_{i,n+1} = \alpha^{n+1}V_{i,0} + \sum_{k=0}^{n}\alpha^k\,Y_{i,n-k}.$$

Suppose that for all n ∈ ℕ and for all x ∈ Ω, $S_n(x)$ and W(x, y) are bounded by, say, $S_\infty$ and $W_\infty$, respectively. Let $Y_m = (1-\alpha)\big[S_\infty + |\Omega|W_\infty\big]$. For all 1 ≤ i ≤ M, for $x_i \in \Omega$, and n ∈ ℕ, we have $|Y_{i,n+1}| \le Y_m$, and

$$|V_{i,n+1}| \le \alpha^{n}|V_{i,0}| + Y_m\cdot\frac{1-\alpha^{n+1}}{1-\alpha}.$$

This shows that $\lim_{n\to\infty}V_{i,n}$ exists and $V_\infty(x) = \lim_{n\to\infty}V_n(x)$ defines the state of the field over time. If $\lim_{n\to\infty}V_n(x) < 0$, then the field settles into an inhibitory phase over time, and if $\lim_{n\to\infty}V_n(x) > 0$, the field settles into an excitatory phase over time. In general, $S_{i,n} = \nu + U_{i,n}$, where ν is a negative real number referred to as the resting level, which ensures that the DNFs produce no output in the absence of external input $U_{i,n}$. If $U_{i,n} \equiv 0$ and $V_0(x) < 0$ for all x ∈ Ω, then $V_{i,n+1} = \alpha^{n+1}V_{i,0} + (1-\alpha^{n+1})\nu$, since $\lambda_{i,n-k} = 0$ for all k = 0, ⋯, n and for all 1 ≤ i ≤ M. Hence we will have $\lim_{n\to\infty}V_{i,n} = \nu$, see section 3.5.1 below for an illustration.
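The following sketch illustrates this special case numerically: with a Heaviside output, no external input, a negative resting level ν, and a negative initial field, the recursion (10) drives the field to ν, as described above. The kernel and parameter values loosely follow those used later in section 3.5 and are illustrative only.

```python
import numpy as np

M, N = 200, 100
omega = np.linspace(-20, 20, M)
alpha = np.exp(-0.8)
nu = -0.5                                    # resting level
D = omega[:, None] - omega[None, :]
W = 4.0 * np.exp(-0.5 * D**2) - 1.5 * np.exp(-0.5 * D**2 / 4.5**2)

V = -1.5 * np.ones(M)
for n in range(N):
    lam = (V > 0).astype(float)              # Heaviside output lambda_{j,n}
    Y = (1 - alpha) * (nu + (40.0 / M) * (W @ lam))
    V = alpha * V + Y                        # Equation (10)
print("final potentials close to nu:", np.allclose(V, nu, atol=1e-6))
```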

3.4. Multiple-Layers DDNFs Model

Let 1 ≤ k, m ≤ L. Consider the DDNFs with L layers given above and let $x_i$, $x_p$ be points on the kth layer. Then

$$V_{i,n+1}^{(k)} = F(V_{i,n}^{(k)}) = \alpha V_{i,n}^{(k)} + (1-\alpha)\sum_{\ell\in\mathcal{L}_k}\sum_{j=1}^{M_\ell}\beta_\ell\cdot W_{ij}^{(k,\ell)}\cdot G\big(V_{j,n}^{(\ell)}\big) + (1-\alpha)S_{i,n}^{(k)}.$$

Consider a fixed point solution $V_{i,*}^{(k)}$. Given a layer 1 ≤ k ≤ L and a neuron at position $x_i$ on it, there are two sources of variability for the potential $V_{i,n+1}^{(k)} = [f_k(\mathbf{V}_n(X))]_i$: firstly, the variability due to the $M_k$ neurons on the kth layer connected to the neuron at position $x_i$, and secondly, the variability due to the neurons located on possibly two layers, (k − 1) and (k + 1), connected to the kth layer. Therefore, let 1 ≤ m ≤ L and let $x_q$ be a point on the mth layer. By the chain rule, we have

$$\frac{\partial[f_k(\mathbf{V}_n(X))]_i}{\partial V_{q,n}^{(m)}} = \frac{\partial[f_k(\mathbf{V}_n(X))]_i}{\partial V_{p,n}^{(k)}}\cdot\frac{\partial V_{p,n}^{(k)}}{\partial V_{q,n}^{(m)}} = \Phi_{i,p,n}^{(k,k)}\cdot\Delta_{p,q,n}^{(k,m)},$$

where the variability on the kth layer is

$$\Phi_{i,p,n}^{(k,k)} = \alpha\delta_{ip} + (1-\alpha)\beta_k W_{ip}^{(k,k)}\,G'\big(V_{p,n}^{(k)}\big).$$

From Equation (7), we obtain the variability due to layers k − 1 and k + 1:

$$\Delta_{p,q,n}^{(k,m)} = \frac{\partial V_{p,n}^{(k)}}{\partial V_{q,n}^{(m)}} = \begin{cases}1 & \text{if } m = k\\ \beta_m W_{pq}^{(k,m)}\,G'\big(V_{q,n}^{(m)}\big) & \text{if } m\in\mathcal{L}_k\setminus\{k\}\\ 0 & \text{if } m\notin\mathcal{L}_k.\end{cases}$$

Let $\mathcal{M} = \sum_{\ell=1}^{L}M_\ell$. We recall that $K_{ij}^{(k,\ell)} := W_{ij}^{(k,\ell)}\,G'\big(V_{j,*}^{(\ell)}\big)$. Now we consider an $\mathcal{M}\times\mathcal{M}$ super-adjacency matrix $\mathbf{A} = J_F(\mathbf{V}_*(X)) = \big(A^{(k,m)}\big)_{1\le k,m\le L}$, where $A^{(k,m)}$ is an $M_k\times M_m$ block matrix defined as $A^{(k,m)} = \big(a_{ij}^{(k,m)}\big)$, with $a_{ij}^{(k,m)} = \Phi_{i,j}^{(k,k)}\cdot\Delta_{i,j}^{(k,m)}$, where

$$\Phi_{i,j}^{(k,k)} = \alpha\delta_{ij} + (1-\alpha)\beta_k W_{ij}^{(k,k)}\,G'\big(V_{j,*}^{(k)}\big) = \alpha\delta_{ij} + (1-\alpha)\beta_k K_{ij}^{(k,k)}, \qquad (11)$$

and

$$\Delta_{i,j}^{(k,m)} = \begin{cases}1 & \text{if } m = k\\ \beta_m W_{ij}^{(k,m)}\,G'\big(V_{j,*}^{(m)}\big) = \beta_m K_{ij}^{(k,m)} & \text{if } m\in\mathcal{L}_k\setminus\{k\}\\ 0 & \text{if } m\notin\mathcal{L}_k.\end{cases} \qquad (12)$$

Consequently, $A^{(k,m)} = \mathbf{0}$ if $m\notin\mathcal{L}_k$, where $\mathbf{0}$ represents the zero matrix. Therefore, the super-adjacency matrix can be written as

$$\mathbf{A} = \begin{pmatrix} A^{(1,1)} & A^{(1,2)} & 0 & \cdots & 0 & 0 & 0\\ A^{(2,1)} & A^{(2,2)} & A^{(2,3)} & \cdots & 0 & 0 & 0\\ 0 & A^{(3,2)} & A^{(3,3)} & A^{(3,4)} & \cdots & 0 & 0\\ \vdots & & \ddots & \ddots & \ddots & & \vdots\\ 0 & 0 & \cdots & 0 & A^{(L-1,L-2)} & A^{(L-1,L-1)} & A^{(L-1,L)}\\ 0 & 0 & \cdots & 0 & 0 & A^{(L,L-1)} & A^{(L,L)}\end{pmatrix}.$$

Clearly, A is a block tridiagonal matrix; it reduces to a block diagonal matrix if the pulse emission rate function G is the Heaviside function, and to a single M × M matrix if L = 1. There is a rich literature on the subject of tridiagonal matrices and the research is still ongoing. There is no closed-form formula for the eigenvalues of A, except in some special cases.
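As a computational aside (an assumption-laden sketch rather than part of the original analysis), the super-adjacency matrix can be assembled from its blocks and its spectral radius evaluated numerically when no closed form is available; the block values below are random placeholders.

```python
import numpy as np

def assemble_super_adjacency(blocks, L, M):
    """blocks: dict (k, m) -> (M, M) array, nonzero only for |k - m| <= 1."""
    A = np.zeros((L * M, L * M))
    for (k, m), B in blocks.items():
        if abs(k - m) <= 1:
            A[k*M:(k+1)*M, m*M:(m+1)*M] = B
    return A

# toy example with L = 3 layers and M = 2 neurons per layer
L, M = 3, 2
rng = np.random.default_rng(1)
blocks = {(k, m): 0.1 * rng.standard_normal((M, M))
          for k in range(L) for m in range(L) if abs(k - m) <= 1}
A = assemble_super_adjacency(blocks, L, M)
print("spectral radius:", np.max(np.abs(np.linalg.eigvals(A))))
```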

3.4.1. Special Case: Single Neuron Multiple-Layers DDNFs

This special case is worth mentioning since all L layers have exactly one neuron and they are interconnected. In this case, $M_k = 1$ and $\beta_k = |\Omega|$ for 1 ≤ k ≤ L, and the super-adjacency matrix reduces to an L × L tridiagonal matrix

$$A = \begin{pmatrix} a_1 & b_1 & 0 & \cdots & 0 & 0\\ c_1 & a_2 & b_2 & \cdots & 0 & 0\\ 0 & c_2 & a_3 & b_3 & \cdots & 0\\ \vdots & & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & c_{L-2} & a_{L-1} & b_{L-1}\\ 0 & 0 & \cdots & 0 & c_{L-1} & a_L\end{pmatrix},$$

where from Equations (11) and (12), one has

$$\begin{aligned} a_k &= \Phi_{11}^{(k,k)}\cdot\Delta_{11}^{(k,k)} = \alpha + (1-\alpha)|\Omega|K^{(k,k)}, && \text{for } 1\le k\le L,\\ b_k &= \Phi_{11}^{(k,k)}\cdot\Delta_{11}^{(k,k+1)} = \big[\alpha + (1-\alpha)|\Omega|K^{(k,k)}\big]\,|\Omega|K^{(k,k+1)}, && \text{for } 1\le k\le L-1,\\ c_{k-1} &= \Phi_{11}^{(k,k)}\cdot\Delta_{11}^{(k-1,k)} = \big[\alpha + (1-\alpha)|\Omega|K^{(k,k)}\big]\,|\Omega|K^{(k-1,k)}, && \text{for } 2\le k\le L.\end{aligned}$$

We observe that $K^{(k,k)}$ is the weighted rate of change of the pulse emission function on the kth layer, and $K^{(k,k+1)}$ can be regarded as the transitional weighted rate of change of the pulse emission function from layer k to layer k + 1. Likewise, $K^{(k-1,k)}$ is the transitional weighted rate of change of the pulse emission function from layer k − 1 to layer k. From Molinari (2008), the eigenvalues of A are given by the equation

$$\det(A - \lambda I_L) = \mathrm{tr}\left(\prod_{k=L}^{2}\begin{pmatrix} a_k - \lambda & -b_{k-1}c_{k-1}\\ 1 & 0\end{pmatrix}\cdot\begin{pmatrix} a_1 - \lambda & 0\\ 1 & 0\end{pmatrix}\right) = 0. \qquad (13)$$

Suppose further that $a_1 = a_2 = \cdots = a_L = a$, $b_1 = b_2 = \cdots = b_{L-1} = b$, and $c_1 = c_2 = \cdots = c_{L-1} = c$. This means that the weighted rates of change of the pulse emission function satisfy $K^{(k,k)} = K_0$ for 1 ≤ k ≤ L, $K^{(k-1,k)} = K_1$ for 1 ≤ k ≤ L − 1, and $K^{(k,k+1)} = K_2$ for 2 ≤ k ≤ L. From Kulkarni et al. (1999), the eigenvalues of A will be given by

$$\lambda_k = a - 2\sqrt{bc}\,\cos\!\left(\frac{\pi k}{L+1}\right), \quad 1\le k\le L.$$

Now for 1 ≤ k ≤ L, put

$$\mu_k = \big[\alpha + (1-\alpha)|\Omega|K_0\big]\left[1 - 2|\Omega|\sqrt{K_1K_2}\,\cos\!\left(\frac{\pi k}{L+1}\right)\right].$$

We therefore have the following result.

THEOREM 15. Suppose we are in the presence of a DDNFs with L layers of $M_k = 1$ neuron each, interconnected via a positive connectivity function W and a pulse emission rate function G whose rate of change is identical on all layers. Then

1. The fixed point $\mathbf{V}_*(X)$ is asymptotically stable if and only if $\max_{1\le k\le L}|\mu_k| < 1$.

2. The fixed point $\mathbf{V}_*(X)$ is unstable if and only if $\max_{1\le k\le L}|\mu_k| > 1$.

3. The fixed point $\mathbf{V}_*(X)$ may be stable or unstable if $\max_{1\le k\le L}|\mu_k| = 1$.

PROOF. By hypothesis, $a = \alpha + (1-\alpha)|\Omega|K_0$, $b = \big[\alpha + (1-\alpha)|\Omega|K_0\big]|\Omega|K_1$, and $c = \big[\alpha + (1-\alpha)|\Omega|K_0\big]|\Omega|K_2$, for some $K_0, K_1, K_2 > 0$. For 1 ≤ k ≤ L, the eigenvalues are given as

$$\begin{aligned}\lambda_k &= a - 2\sqrt{bc}\,\cos\!\left(\frac{\pi k}{L+1}\right) = a - 2\sqrt{a^2|\Omega|^2K_1K_2}\,\cos\!\left(\frac{\pi k}{L+1}\right)\\ &= a\left[1 - 2|\Omega|\sqrt{K_1K_2}\,\cos\!\left(\frac{\pi k}{L+1}\right)\right] = \big[\alpha + (1-\alpha)|\Omega|K_0\big]\left[1 - 2|\Omega|\sqrt{K_1K_2}\,\cos\!\left(\frac{\pi k}{L+1}\right)\right] = \mu_k.\end{aligned}$$

From Theorem 3, the proof is complete.
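The eigenvalue formula of Theorem 15 can be checked numerically; the sketch below builds the L × L tridiagonal matrix with constant a, b, c, compares its eigenvalues with the μ_k formula, and evaluates the stability condition. All numerical values are illustrative assumptions.

```python
import numpy as np

L, omega, alpha = 10, 2.0, np.exp(-0.8)
K0, K1, K2 = 0.1, 0.02, 0.03
a = alpha + (1 - alpha) * omega * K0
b = a * omega * K1
c = a * omega * K2

A = np.diag(a * np.ones(L)) + np.diag(b * np.ones(L - 1), 1) \
    + np.diag(c * np.ones(L - 1), -1)
eig = np.sort(np.linalg.eigvals(A).real)

k = np.arange(1, L + 1)
mu = a * (1 - 2 * omega * np.sqrt(K1 * K2) * np.cos(np.pi * k / (L + 1)))
print(np.allclose(eig, np.sort(mu)))        # eigenvalues match the mu_k formula
print("stable:", np.max(np.abs(mu)) < 1)
```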

REMARK 16.

1. It is worth observing that in the case L = 1 and M = 1, we have $K_1 = K_2 = 0$ and thus $\mu_1 = \alpha + (1-\alpha)|\Omega|K_0$. We see that $|\mu_1| < 1$ if $|\Omega|K_0 < 1$, yielding the condition obtained in the one-layer model.

2. We also note that in fact $\max_{1\le k\le L}|\mu_k| = \big[\alpha + (1-\alpha)|\Omega|K_0\big]\big[1 + 2|\Omega|\sqrt{K_1K_2}\big]$ and therefore the stability condition $\big[\alpha + (1-\alpha)|\Omega|K_0\big]\big[1 + 2|\Omega|\sqrt{K_1K_2}\big] < 1$ links all the parameters of the system.

3. This special case is also important in that we can imagine a large number of neurons placed on individual layers, interconnected in a forward or backward manner only, and each receiving external input. If the first and last neurons are connected, so that the system forms a weighted closed graph as in Figure 4 (where the thickness of the edges represents the weights), then the super-adjacency matrix is given as

$$\mathbf{A} = \begin{pmatrix} a_1 & b_1 & 0 & \cdots & 0 & c_0\\ c_1 & a_2 & b_2 & \cdots & 0 & 0\\ 0 & c_2 & a_3 & b_3 & \cdots & 0\\ \vdots & & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & c_{L-2} & a_{L-1} & b_{L-1}\\ b_L & 0 & \cdots & 0 & c_{L-1} & a_L\end{pmatrix}.$$

According to Molinari (2008), the eigenvalues are given by

$$\det(\mathbf{A} - \lambda I_L) = (-1)^{L+1}\left(\prod_{i=1}^{L}b_i + \prod_{i=0}^{L-1}c_i\right) + \mathrm{tr}\left(\prod_{i=L}^{2}\begin{pmatrix} a_i - \lambda & -b_{i-1}c_{i-1}\\ 1 & 0\end{pmatrix}\cdot\begin{pmatrix} a_1 - \lambda & b_Lc_0\\ 1 & 0\end{pmatrix}\right) = 0. \qquad (14)$$

Figure 4. Closed graph with 10 layers for a = 4, b = 3, c = 5, c0 = 7, b10 = 10.

3.4.2. Special Case: Single Neuron Two-Layer DDNFs

We now discuss a single-neuron two-layer DDNFs. In this case, we still have $M_k = 1$ for k = 1, 2 and L = 2. It follows that the super-adjacency matrix is given as $\mathbf{A} = \begin{pmatrix} a_1 & b_1\\ c_1 & a_2\end{pmatrix}$, where from Equations (11) and (12), we have

$$\begin{aligned} a_1 &= \alpha + (1-\alpha)|\Omega|K^{(1,1)}, & b_1 &= \big[\alpha + (1-\alpha)|\Omega|K^{(1,1)}\big]\,|\Omega|K^{(1,2)},\\ a_2 &= \alpha + (1-\alpha)|\Omega|K^{(2,2)}, & c_1 &= \big[\alpha + (1-\alpha)|\Omega|K^{(2,2)}\big]\,|\Omega|K^{(2,1)}.\end{aligned}$$

Let

$$\theta = 1 - |\Omega|^2K^{(1,2)}K^{(2,1)}, \qquad \mu_0 = |\Omega|\big[K^{(1,1)} + K^{(2,2)}\big], \qquad \mu = \big[\alpha\big(K^{(1,1)} + K^{(2,2)}\big) + (1-\alpha)|\Omega|K^{(1,1)}K^{(2,2)}\big].$$

We have the following stability result.

THEOREM 17. Consider a two-layer DDNFs with one neuron on each layer. Then the nontrivial solution $\mathbf{V}_*(X)$ is asymptotically stable if and only if the following conditions are satisfied:

1. 0 < θ < 1,

2. μ < 1,

3. $(\mu\theta + 2 - \mu_0)^2 < 4\theta(\mu\theta + 1 - \mu_0)$.

PROOF. By the determinant-trace analysis, see for instance Elaydi (2007), it is enough to prove that the above conditions are equivalent to

$$|\mathrm{tr}(\mathbf{A})| - 1 < \det(\mathbf{A}) < 1.$$

The determinant of the super-adjacency matrix is

$$\begin{aligned}\det(\mathbf{A}) &= a_1a_2 - b_1c_1 = a_1a_2\big[1 - |\Omega|^2K^{(1,2)}K^{(2,1)}\big]\\ &= \Big[\alpha^2 + \alpha(1-\alpha)|\Omega|\big(K^{(1,1)} + K^{(2,2)}\big) + \big((1-\alpha)|\Omega|\big)^2K^{(1,1)}K^{(2,2)}\Big]\big[1 - |\Omega|^2K^{(1,2)}K^{(2,1)}\big]\\ &= \Big[\alpha^2 + (1-\alpha)|\Omega|\big[\alpha\big(K^{(1,1)} + K^{(2,2)}\big) + (1-\alpha)|\Omega|K^{(1,1)}K^{(2,2)}\big]\Big]\big[1 - |\Omega|^2K^{(1,2)}K^{(2,1)}\big]\\ &= \big[\alpha^2 + (1-\alpha)\mu\big]\,\theta.\end{aligned}$$

Also

$$\mathrm{tr}(\mathbf{A}) = |\mathrm{tr}(\mathbf{A})| = a_1 + a_2 = 2\alpha + (1-\alpha)|\Omega|\big[K^{(1,1)} + K^{(2,2)}\big] = 2\alpha + (1-\alpha)\mu_0.$$

Therefore, since θ < 1 and μ < 1, we have that $\det(\mathbf{A}) < 1$. Let $P(\alpha) := \det(\mathbf{A}) - \mathrm{tr}(\mathbf{A}) + 1 = \theta\alpha^2 + (\mu\theta + 2 - \mu_0)\alpha + \mu\theta + 1 - \mu_0$. The discriminant of P(α) is $\Delta = (\mu\theta + 2 - \mu_0)^2 - 4\theta(\mu\theta + 1 - \mu_0)$. Therefore, P(α) is positive if Δ < 0 with θ > 0, that is, if $(\mu\theta + 2 - \mu_0)^2 < 4\theta(\mu\theta + 1 - \mu_0)$ with θ > 0.
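The conditions of Theorem 17 can be verified numerically for given parameters; the sketch below forms the 2 × 2 super-adjacency matrix from illustrative values of |Ω|, α, and the weighted rates $K^{(k,\ell)}$ (all assumptions, not values from the paper), and checks the determinant-trace conditions used in the proof.

```python
import numpy as np

def two_layer_stable(omega, alpha, K11, K22, K12, K21):
    a1 = alpha + (1 - alpha) * omega * K11
    a2 = alpha + (1 - alpha) * omega * K22
    b1 = a1 * omega * K12
    c1 = a2 * omega * K21
    A = np.array([[a1, b1], [c1, a2]])
    # determinant-trace conditions: |tr A| - 1 < det A < 1
    tr, det = np.trace(A), np.linalg.det(A)
    return abs(tr) - 1 < det < 1

print(two_layer_stable(omega=2.0, alpha=np.exp(-0.8),
                       K11=0.1, K22=0.15, K12=0.05, K21=0.05))
```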

REMARK 18.

1. We note that the inequality θ < 1 must be strict, otherwise the third condition in the theorem would be false. To see why, observe that θ = 1 ⇒ L = 1, and thus $K^{(2,2)} = 0$. Hence $\mu_0 = |\Omega|K^{(1,1)}$ and $\mu = \alpha K^{(1,1)}$. Therefore, $(\mu\theta + 2 - \mu_0)^2 < 4\theta(\mu\theta + 1 - \mu_0) \Leftrightarrow \big(\alpha K^{(1,1)} + 2 - |\Omega|K^{(1,1)}\big)^2 < 4\big(\alpha K^{(1,1)} + 1 - |\Omega|K^{(1,1)}\big)$, that is, $\big((\alpha - |\Omega|)K^{(1,1)} + 2\big)^2 < 4\big((\alpha - |\Omega|)K^{(1,1)} + 1\big)$, which is impossible since the latter would be equivalent to saying $\big((\alpha - |\Omega|)K^{(1,1)}\big)^2 < 0$.

2. We observe that the condition 0 < θ < 1 requires $|\Omega|^2K^{(1,2)}K^{(2,1)} < 1$. What is striking about it is that it is achieved if $\max\{K^{(1,2)}, K^{(2,1)}\} < \frac{1}{|\Omega|}$, similarly to Remark 14 above.

3.5. Simulations

There are multiple parameters that affect the stability of a fixed point solution of the DDNFs: the choice of the kernel function W(x, y), the pulse emission rate function G, parameters such as the size of the field |Ω|, the time scale function φ(h), the number of points per layer M, and the length of time N. In these simulations, we consider Ω = [−20, 20], h = 0.8, N = 100, M = 200, and $W(x, y) = \sigma_+e^{-0.5(x-y)^2/\sigma_1^2} - \sigma_-e^{-0.5(x-y)^2/\sigma_2^2}$, with $\sigma_+ = 4$, $\sigma_- = 1.5$, $\sigma_1 = 1$, and $\sigma_2 = 4.5$. We select a resting level ν = −0.5. To initialize our dynamical system, we choose $V_{i,0} = -1.5$ for i = 1, 2, ⋯, M. In Figures 5–8 below, (a) is the three-dimensional representation of $V_n(x) := V(x, n)$, (b) represents $V_n(x)$ as a function of n for a given x value, (c) represents $V_n(x)$ as a function of x for a given n value, and (d) represents the cobweb diagram for a given x value, namely a plot of $V_{n+1}(x)$ vs. $V_n(x)$, in which the green dot represents the starting point $V_0(x) = -1.5$ and the red arrows connect the points with coordinates $[V_n(x), V_n(x)]$ and $[V_n(x), V_{n+1}(x)]$ to show the evolution of the dynamical system over time. Convergence occurs when the arrows stop evolving, and oscillations occur when they circle around indefinitely. In the multi-layer case, we let L = 250.
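For reproducibility, here is a condensed sketch of the single-layer simulation loop with the parameters listed above (a sigmoid rate and a Gaussian external input, as in section 3.5.4); the plotting used to produce panels (a)–(d) is omitted, and the helper names are illustrative.

```python
import numpy as np

M, N, h, nu = 200, 100, 0.8, -0.5
alpha = np.exp(-h)
x = np.linspace(-20, 20, M)
omega_size = 40.0

sp, sm, s1, s2 = 4.0, 1.5, 1.0, 4.5
D = x[:, None] - x[None, :]
W = sp * np.exp(-0.5 * D**2 / s1**2) - sm * np.exp(-0.5 * D**2 / s2**2)

U = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)      # Gaussian external input
S = nu + U                                        # S_{i,n} = nu + U_{i,n}
G = lambda v: 1.0 / (1.0 + np.exp(-v))            # sigmoid rate

V = -1.5 * np.ones(M)                             # V_{i,0}
for n in range(N):
    V = alpha * V + (1 - alpha) * ((omega_size / M) * (W @ G(V)) + S)
print("field range after N steps:", V.min(), V.max())
```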


Figure 5. Here Ui,n = 0 and G(v) = G1(v). (A–D) We observe from all graphs that Vn(x) quickly settles to the resting level ν = −0.5 confirming the theoretical results in section 3.3.2.

3.5.1. Single-Layer With Heaviside Function and No External Input

In this simulation, we consider a single-layer DDNFs with $U_{i,n} = 0$ and a Heaviside pulse emission function, see Figure 5 below.

3.5.2. Single-Layer With Sigmoid Function and No External Input

In this simulation, we consider single-layer DDNFs with Ui,n = 0 but with a sigmoid pulse emission function, see Figure 6 below.


Figure 6. Here $U_{i,n} = 0$ and $G(v) = G_2(v)$. From (A), we observe that the system alternates between inhibitory and excitatory phases. (B,D) Show that for fixed x = 9.548, the system converges over time. In (C), we clearly see the oscillatory regime of the system in the space domain for fixed n = 98.

3.5.3. Single-Layer With Heaviside Function With Gaussian External Input

In this simulation, we consider a single-layer DDNFs with a Heaviside pulse emission function and a Gaussian external input $U_{i,n} = (2\pi)^{-1/2}e^{-\frac{1}{2}x_i^2}$, see Figure 7 below.


Figure 7. Here $U_{i,n} = (2\pi)^{-1/2}e^{-\frac{1}{2}x_i^2}$ and $G(v) = G_1(v)$. From (A), we observe that the system has an excitatory phase around 0 and an inhibitory phase farther away. (B,D) Show that for fixed x = 2.111, the system converges over time. In (C), we clearly observe a high level of excitation occurring around 0, and the system settling back to its resting level farther away from 0, for fixed n = 38.

3.5.4. Single-Layer With Sigmoid Function With Gaussian External Input

In this simulation, we consider a single-layer DDNFs with a sigmoid pulse emission function and a Gaussian external input $U_{i,n} = (2\pi)^{-1/2}e^{-\frac{1}{2}x_i^2}$, see Figure 8 below.


Figure 8. Here $U_{i,n} = (2\pi)^{-1/2}e^{-\frac{1}{2}x_i^2}$ and $G(v) = G_2(v)$. This case is similar to the second case above. From (A), we observe that the system also alternates between inhibitory and excitatory phases. (B,D) Show that for fixed x = −18.79, the system converges over time. In (C), we clearly see the oscillatory regime of the system in the space domain for fixed n = 86. The system in this case is more frequently inhibitory than excitatory.

3.5.5. Multiple-Layer With Sigmoid Function With No External Input

In this simulation, we consider a multiple-layer DDNFs with a sigmoid pulse emission rate function and no external input, $U_{i,n} = 0$, see Figure 9 below.


Figure 9. Here $U_{i,n} = 0$ and $G(v) = G_2(v)$ with L = 200 layers and M = 1 neuron per layer. From (A), we observe that the system is quickly excitatory and settles into a constant state over time. (B,D) Show that for fixed k = 293, the system converges over time. In (C), we clearly see the constant state for n = 76.

3.5.6. Multiple-Layer With Sigmoid Function With Gaussian External Input

In this simulation, we consider a multiple-layer DDNFs with a sigmoid pulse emission rate function and a Gaussian external input, see Figure 10 below.


Figure 10. Here $U_{i,n} = (2\pi)^{-1/2}e^{-\frac{1}{2}x_i^2}$ and $G(v) = G_2(v)$ with L = 200 layers and M = 1 neuron per layer. From (A), we observe that the system is quickly excitatory and settles into a constant “Gaussian” state over time. (B,D) Show that for fixed k = 114, the system converges over time. In (C), we clearly see the “Gaussian” state for n = 45, with peaks located at around L = 250.

3.5.7. Effect of α

In this simulation, we consider $V_N(x, \alpha)$, where N = 100, for different values of α. Since the convergence to the fixed solution is quick, we expect $V_N(x, \alpha)$ to be independent of α at n = N = 100, in that any cross-section of the three-dimensional plot of $V_N(x, \alpha)$ should be the same. This is shown in Figure 11 below for $G(v) = G_1(v)$ and $U_{i,n} = (2\pi)^{-1/2}e^{-\frac{1}{2}x_i^2}$.


Figure 11. Here $U_{i,n} = (2\pi)^{-1/2}e^{-\frac{1}{2}x_i^2}$ and $G(v) = G_2(v)$ with L = 1 and M = 200 neurons per layer. We see that the cross-sections are as in Figure 7 above, and the “flaps” at the end represent the case α = 1, which amounts to the constant case.

4. Discussion

In this paper, we have proposed a discrete model for dynamic neural fields. We have proposed another proof of the existence of nontrivial solutions for dynamic neural fields. We have discussed the stability of nontrivial solutions of dynamic neural fields in some particular cases. The simulations we propose capture very well the notion of stability of dynamic neural fields. Below are some observations drawn from this analysis.

1. We observe that the cases of no external input $U_{i,n}$ with one layer and multiple neurons, and multiple layers with one neuron each, are very similar, apart from the shape of the potential function $V_{i,n}^{(k)}$. In fact, this is not surprising since multiple layers with one neuron each amount to one layer with neurons connected as in a graph.

2. An important observation is that the stability conditions depend on the knowledge of a nontrivial solution $\mathbf{V}_*$ of the DDNFs. Two things to note about $\mathbf{V}_*(X)$ are:

(a) We cannot obtain $\mathbf{V}_*(X)$ in closed form. We may however rely on estimates like the one proposed in Kwessi (2021). Even in the latter case, using an estimate may prove delicate because of limited accuracy.

(b) If one cannot use a true value for $\mathbf{V}_*(X)$, one may use the properties of pulse emission functions such as the Heaviside or sigmoid functions, which are bounded in most applications.

3. We note that the size of the domain Ω is an important parameter for the shape of $V_{i,n}^{(k)}$.

4. As for the types of kernels, we note that Gaussian and Laplacian kernels tend to produce similar results that are sometimes very different from those produced by hyperbolic tangent kernels.

5. We note that the choice of $\alpha = e^{-h}$ is similar to an Euler forward discretization with $e^{-h} = \frac{dt}{\tau}$, where dt ≪ τ. This choice is made for simplification purposes and also out of an abundance of caution, due to the fact that for certain one-dimensional systems, the Euler forward discretization has been shown not to always preserve the true dynamics of the ordinary differential equation, see for instance Kwessi et al. (2018).

6. Another important note is that the parameter α was not found to be a bifurcation parameter, even at α = 1; thus the bifurcation diagram will not be given, for the sake of simplicity.

7. We acknowledge that in the two special cases discussed for multiple-layer DDNFs, so far, at least theoretically, the results apply only to positive connectivity functions W(x, y), because we rely on results previously established in the current literature. Simulations, however, suggest that they may also extend to negative connectivity functions, and it would be a worthwhile exercise to see how to establish these results theoretically.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Amari, S.-I. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern., 27, 77–87. doi: 10.1007/BF00337259


Beim, P. G., and Hutt, A. (2014). Attractor and saddle node dynamics in heterogeneous neural fields. EPJ Nonlin. Biomed. Phys. EDP Sci. 2:2014. doi: 10.1140/epjnbp17


Beurle, R. L. (1956). Properties of a mass of cells capable of regenerating pulses. Philos. Trans. R. Soc. Lond. B 240. 55–94.


Bicho, E., Louro, L., and Erlhagen, W. (2010). Integrating verbal and non-verbal communication in a dynamic neural field for human-robot interaction. Front. Neurorobot. 4:5. doi: 10.3389/fnbot.2010.00005


Bicho, E., Mallet, P., and Schöner, G. (2000). Target representation on an autonomous vehicle with low-level sensors. Int. J. Robot. Res. 19, 424–447. doi: 10.1177/02783640022066950


Camperi, M., and Wang, X.-J. (1998). A model of visuospatial short-term memory in prefrontal cortex: recurrent network and cellular bistability. J. Comput. Neurosci. 4, 383–405. doi: 10.1023/A:1008837311948


De Domenico, M., Solé-Ribalta, A., Cozzo, E., Kivelä, M., Moreno, Y., Porter, M. A., et al. (2013). Mathematical formulation of multilayer networks. Phys. Rev. X 3:041022. doi: 10.1103/PhysRevX.3.041022


Durstewitz, D. (2017). Advanced Data Analysis in Neuroscience. Bernstein Series in Computational Neuroscience. Cham: Springer.


Durstewitz, D., Seamans, J. K., and Sejnowski, T. J. (2000). Neurocomputational models of working memory. Nat. Neurosci. 3(Suppl.), 1184–1191. doi: 10.1038/81460


Elaydi, S. N. (2007). Discrete Chaos. Chapman & Hall; CRC.


Elman, J. L. (1990). Finding structure in time. Cogn. Sci. 14, 179–211.


Erlhagen, W., and Bicho, E. (2006). The dynamics neural field approach to cognitive robotics. J. Neural Eng. 3, R36–R54. doi: 10.1088/1741-2560/3/3/R02


Erlhagen, W., and Schöner, G. (2001). Dynamic field theory of movement preparation. Psychol. Rev. 109, 545–572. doi: 10.1037/0033-295x.109.3.545


Ermentrout, G. B., and Cowan, J. D. (1979). A mathematical theory of visual hallucination patterns. Biol. Cybern. 34, 137–150. doi: 10.1007/BF00336965


Hammerstein, A. (1930). Nichtlineare integralgleichungen nebst anwendungen. Acta Math. 54, 117–176. doi: 10.1007/BF02547519


Jin, D., Qin, Z., Yang, M., and Chen, P. (2021). A novel neural model with lateral interaction for learning tasks. Neural Comput. 33, 528–551. doi: 10.1162/neco_a_01345


Kulkarni, D., Schmidt, D., and Tsui, S. K. (1999). Eigenvalues of tridiagonal pseudo-Toeplitz matrices. Linear Algeb. Appl. 297, 63–80.


Kwessi, E. (2021). A consistent estimator of nontrivial stationary solutions of dynamic neural fields. Stats 4, 122–137. doi: 10.3390/stats4010010


Kwessi, E., and Edwards, L. (2020). “Artificial neural networks with a signed-rank objective function and applications,” in Communication in Statistics–Simulation and Computation. 1–26. doi: 10.1080/03610918.2020.1714659


Kwessi, E., Elaydi, S., Dennis, B., and Livadiotis, G. (2018). Nearly exact discretization of single species population models. Nat. Resour. Model. 31:e12167. doi: 10.1111/nrm.12167


Lasota, A., and Mackey, M. C. (2004). Chaos, Fractals, and Noise. Applied Mathematical Sciences. New York, NY: Springer-Verlag.

Lin, H.-T., and Lin, C.-J. (2003). A study on sigmoid kernels for SVM and the training of non-PSD kernels by SMO-type methods. Neural Comput. 13, 2119–2147.

Lin, K. K., Shea-Brown, E., and Young, L.-S. (2009). Spike-time reliability of layered neural oscillator networks. J. Comput. Neurosci. 27, 135–160. doi: 10.1007/s10827-008-0133-3


Mann, W. R. (1953). Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510.


Molinari, L. G. (2008). Determinant of block tridiagonal matrices. Linear Algeb. Appl. 429, 2221–2226. doi: 10.1016/j.laa.2008.06.015


Neumann, K., and Steil, J. J. (2011). “Batch intrinsic plasticity for extreme learning machines,” in International Conference on Artificial Neural Networks (Berlin: Springer), 339–346.


Nunez, P. L., and Srinivasan, R. (2006). Electric Fields of the Brain: The Neurophysics of EEG, 2nd Edn. Oxford University Press. doi: 10.1093/acprof:oso/9780195050387.001.0001


Perone, A., and Simmering, V. R. (2019). Connecting the dots: finding continuity across visuospatial tasks and development. Front. Psychol. 2019:1685. doi: 10.3389/fpsyg.2019.01685


Pozo, K., and Goda, Y. (2010). Unraveling mechanisms of homeostatic synaptic plasticity. Neuron 66, 337–351. doi: 10.1016/j.neuron.2010.04.028


Quinton, J. C., and Goffart, L. (2017). A unified dynamic neural field model of goal directed eye-movements. Connect. Sci. 30, 20–52. doi: 10.1080/09540091.2017.1351421


Schmidt, B. K., Vogel, E. K., Woodman, G. F., and Luck, S. J. (2002). Voluntary and automatic attentional control of visual working memory. Percept. Psychophys. 64, 754–763. doi: 10.3758/bf03194742


Simmering, A. R., Schutte, V. R., and Spencer, J. P. (2008). Generalizing the dynamic field theory of spatial cognition across real and developmental time scales. Brain Res. 1202, 68–86. doi: 10.1016/j.brainres.2007.06.081


Strub, C., Schöner, G., Wörgötter, F., and Sandamirskaya, Y. (2017). Dynamic neural fields with intrinsic plasticity. Front. Comput. Neurosci. 11:74. doi: 10.3389/fncom.2017.00074


Sussilo, D., and Barack, O. (2013). Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Comput. 25, 626–649. doi: 10.1162/NECO_a_00409


Tass, P. (1995). Cortical pattern formation during visual hallucinations. J. Biol. Phys. 21, 177–210.


Wijeakumar, S., Ambrose, J. P., Spencer, J. P., and Curtu, R. (2017). Model-based functional neuroimaging using dynamic neural fields: an integrative cognitive neuroscience approach. J. Math. Psychol. 76(Pt B), 212–235. doi: 10.1016/j.jmp.2016.11.002


Williams, R. J., and Zipser, D. (1990). A learning algorithm for continually running fully recurrent neural networks. Neural Comput. 1, 256–263.


Wilson, H. R., and Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1–24. doi: 10.1016/S0006-3495(72)86068-5


Zibner, S. K. U., Faubel, C., Iossifidis, I., and Shöner, G. (2011). Dynamic neural fields as building blocks of a cortex-inspired architecture for robotic scene representation. IEEE Trans. Auton. Ment. Dev. 3, 74–91. doi: 10.1109/TAMD.2011.2109714


Keywords: dynamic neural fields, discrete, stability, simulations, neurons

Citation: Kwessi E (2021) Discrete Dynamics of Dynamic Neural Fields. Front. Comput. Neurosci. 15:699658. doi: 10.3389/fncom.2021.699658

Received: 23 April 2021; Accepted: 02 June 2021;
Published: 08 July 2021.

Edited by:

Feng Shi, Amazon, United States

Reviewed by:

Jeremy Fix, CentraleSupélec, France
Flora Ferreira, School of Sciences, University of Minho, Portugal

Copyright © 2021 Kwessi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Eddy Kwessi, ekwessi@trinity.edu
