Fluctuations for spatially extended Hawkes processes

https://doi.org/10.1016/j.spa.2020.03.015

Abstract

In a previous paper (Chevallier et al., 2018), it was shown that the mean-field limit of spatially extended Hawkes processes is characterized as the unique solution $u(t,x)$ of a neural field equation (NFE). The value $u(t,x)$ represents the membrane potential at time $t$ of a typical neuron located in position $x$, embedded in an infinite network of neurons. In the present paper, we complement this result by studying the fluctuations of such a stochastic system around its mean field limit $u(t,x)$. Our first main result is a central limit theorem stating that the spatial distribution associated with these fluctuations converges to the unique solution of a stochastic differential equation driven by Gaussian noise. In our second main result, we show that the solutions of this stochastic differential equation can be well approximated by a stochastic version of the neural field equation satisfied by $u(t,x)$. To the best of our knowledge, this result is new in the literature.

Introduction

We consider multivariate point processes $(N^1,\dots,N^n)$ on $[0,\infty)$ representing the time occurrences of action potentials (often called spikes) of a network of $n$ neurons. We assume that the intensity process of $N^i$ is of the form
$$\lambda_t^i = f(U_t^i),\qquad U_t^i = e^{-\alpha t}u_0(x_i) + \frac{1}{n}\sum_{j=1}^n w(x_j,x_i)\int_0^t e^{-\alpha(t-s)}\,dN_s^j.$$
In the above formula, $U_t^i$ describes the membrane potential of neuron $i$ at time $t\geq 0$ and $x_i=i/n$ represents the position of neuron $i$ in the network. The function $f:\mathbb{R}\to\mathbb{R}_+$ is the firing rate of each neuron. The function $w:[0,1]\times[0,1]\to\mathbb{R}$ is the matrix of synaptic strengths. It introduces a spatial structure in the model; the value $w(x_j,x_i)$ models the influence of a spike of neuron $j$ on neuron $i$, as a function of their positions. When $w(x_j,x_i)$ is positive, neuron $j$ excites neuron $i$; when it is negative, neuron $j$ inhibits neuron $i$. The leakage rate is modeled by the parameter $\alpha\geq 0$. The function $u_0:[0,1]\to\mathbb{R}$ describes the membrane potential of all neurons in the network at time $t=0$. We refer to $f$, $w$, $u_0$ and $\alpha$ as the parameters of the multivariate point process $(N^1,\dots,N^n)$.
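As an illustration of dynamics (1), the network can be simulated by a thinning (Ogata-style) scheme. The sketch below is ours, not from the paper (the function name `simulate_hawkes` is hypothetical), and it assumes $u_0\geq 0$, $w\geq 0$ and $f$ nondecreasing, so that potentials stay nonnegative and the total intensity is nonincreasing between spikes, making the thinning bound valid:

```python
import numpy as np

def simulate_hawkes(n, T, f, w, u0, alpha, rng=None):
    """Thinning sketch for the n-neuron network (1):
    U_t^i = exp(-alpha*t)*u0(x_i)
            + (1/n) * sum_j w(x_j, x_i) * int_0^t exp(-alpha(t-s)) dN_s^j.
    f, w, u0 must accept NumPy arrays (vectorized)."""
    rng = rng if isinstance(rng, np.random.Generator) else np.random.default_rng(rng)
    x = np.arange(1, n + 1) / n          # positions x_i = i/n
    U = u0(x).astype(float)              # membrane potentials at current time
    t = 0.0
    spikes = [[] for _ in range(n)]
    while True:
        lam_bar = f(U).sum()             # dominating rate: valid only because, under the
                                         # assumptions above, f(U) decays between spikes
        if lam_bar <= 0:
            break
        t_next = t + rng.exponential(1.0 / lam_bar)
        if t_next > T:
            break
        U *= np.exp(-alpha * (t_next - t))   # exponential decay up to the candidate time
        t = t_next
        lam = f(U)
        if rng.uniform() * lam_bar < lam.sum():   # accept the candidate as a real spike
            j = rng.choice(n, p=lam / lam.sum())  # attribute it to a neuron
            spikes[j].append(t)
            U += w(x[j], x) / n                   # each U^i jumps by w(x_j, x_i)/n
    return spikes
```

For instance, with a ReLU rate `f(u) = max(u, 0)`, constant weights and `u0 = 1`, the returned `spikes` is a list of `n` increasing spike-time lists on `[0, T]`.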

Such point processes are known as nonlinear Hawkes processes, named after the pioneering work of A. G. Hawkes [20] where the model was introduced in the linear case (i.e., for $f$ linear). Their defining characteristic is that past events (spikes in our framework) can affect the probability that future events occur. The literature on neuronal modeling via Hawkes processes is vast. To cite just a few articles, see for instance [7], [9], [11], [15], [19], [22], [25], [31], [34] and the references therein.

Recently, in [10], the authors established a connection between solutions of (scalar) neural field equations (NFE) and mean field limits of nonlinear Hawkes processes. Specifically, it was proved that the multivariate process $(U_t^1,\dots,U_t^n)_t$ defined in (1) converges as $n\to\infty$, under some assumptions on the parameters of the model, to a deterministic function $u(t,x)$ which solves the neural field equation:
$$\partial_t u(t,x) = -\alpha u(t,x) + \int_0^1 w(y,x) f(u(t,y))\,dy,\quad t>0 \text{ and } x\in[0,1],\qquad u(0,x)=u_0(x).$$
Here, $u(t,x)$ represents the membrane potential at time $t$ of a typical neuron located in position $x$, embedded in an infinite network of neurons. Neural field equations have been widely studied in the literature since the pioneering works of Wilson and Cowan [38], [39] and Amari [1] in the 1970s. Such models have attracted great interest from the scientific community, due to their wide range of applications and mathematical tractability; see [6] for a recent and comprehensive review.
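The NFE (2) is a nonlocal ODE in time and is easy to integrate numerically. The following sketch (our discretization, not from the paper; the name `solve_nfe` is hypothetical) uses explicit Euler in time and a midpoint Riemann sum in space:

```python
import numpy as np

def solve_nfe(f, w, u0, alpha, T, m=200, dt=1e-3):
    """Explicit Euler scheme for
       d/dt u(t,x) = -alpha*u(t,x) + int_0^1 w(y,x) f(u(t,y)) dy
    on an m-point midpoint grid. Returns the grid and u(T, .)."""
    y = (np.arange(m) + 0.5) / m          # midpoint quadrature nodes on [0,1]
    W = w(y[:, None], y[None, :])         # W[j, i] = w(y_j, y_i)
    u = u0(y).astype(float)
    for _ in range(int(round(T / dt))):
        interaction = W.T @ f(u) / m      # Riemann sum for int_0^1 w(y, x_i) f(u(t,y)) dy
        u += dt * (-alpha * u + interaction)
    return y, u
```

As a sanity check, with $w\equiv 0$, $\alpha=1$ and $u_0\equiv 1$ the scheme reduces to $\dot u=-u$, so $u(1,\cdot)\approx e^{-1}$.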

The goal of the present paper is to complement the results in [10] by describing the fluctuations of the process $(U_t^1,\dots,U_t^n)_t$ around its mean field limit $u(t,x)$. More precisely, writing $\eta_t^i = n^{1/2}(U_t^i - u(t,x_i))$ for the individual fluctuations, the purpose of this paper is to study the convergence of the sequence of stochastic processes $(\Gamma_t^n)_t$ as $n\to\infty$, where $\Gamma_t^n$ is the random signed measure on $S'$ (representing the spatial fluctuations) defined as
$$\Gamma_t^n(dx) = \frac{1}{n}\sum_{i=1}^n \eta_t^i\,\delta_{x_i}(dx).$$
Here, the set $S'$ denotes the dual space of the Fréchet space $S = C^\infty([0,1])$, the space of all real-valued functions on $[0,1]$ with continuous derivatives of all orders. Fix $T\geq 0$, denote $\Gamma^n = (\Gamma_t^n)_{0\leq t\leq T}$ and observe that $\Gamma^n \in D([0,T],S')$, the space of càdlàg functions from $[0,T]$ to $S'$. Our first main result, namely Theorem 1, is a Central Limit Theorem saying that under some assumptions on the parameters of the model, the sequence of processes $(\Gamma^n)_{n\geq 1}$ converges in law to a limit process $\Gamma = (\Gamma_t)_{0\leq t\leq T}$ as $n\to\infty$. Moreover, the limit process $\Gamma$ belongs to $C([0,T],S')$, the set of continuous functions from $[0,T]$ to $S'$, and for each $t\geq 0$, the measure $\Gamma_t\in S'$ is characterized by the following identity: for all $\varphi\in S$,
$$\Gamma_t(\varphi) = e^{-\alpha t} M_t(\varphi) + \int_0^t e^{-\alpha(t-s)}\,\Gamma_s\Big(\int_0^1 \varphi(x)\,w(\cdot,x)\,f'(u(s,\cdot))\,dx\Big)\,ds,$$
where $M=(M_t)_{t\geq 0}$ is a continuous centered Gaussian process taking values in $S'$ with covariance function given, for all $t_1,t_2\geq 0$ and $\varphi_1,\varphi_2\in S$, by
$$\mathbb{E}\big(M_{t_1}(\varphi_1)M_{t_2}(\varphi_2)\big) = \int_0^{t_1\wedge t_2}\int_0^1 e^{2\alpha s}\,I[\varphi_1](y)\,I[\varphi_2](y)\,f(u(s,y))\,dy\,ds,\qquad I[\varphi](y) = \int_0^1 \varphi(x)\,w(y,x)\,dx,\ y\in[0,1],$$
and $u(t,x)$ is the solution of (2). The interested reader is referred to [26, $\Phi$-Wiener processes] for details on such Gaussian processes.

Let us give some intuition about Eq. (4). The first term on the RHS of (4), namely $e^{-\alpha t}M_t(\varphi)$, comes from the error one makes when replacing the point measure $dN_t^i$ by the intensity measure $f(U_t^i)\,dt$. It is the diffusion approximation for point processes: formally taking $\varphi_1=\varphi_2=\delta_x$, the Dirac mass at position $x$, one obtains in Eq. (5) the product $w(y,x)^2 f(u(s,y))$, which is the limit variance of the jumps induced by spiking neurons in position $y$ onto neurons in position $x$, at time $s$. The second term on the RHS of (4) comes from the error one makes when replacing the intensity $f(U_t^i)$ by the limit one $f(u(t,x_i))$: the linearization of $f$ gives the product of the derivative $f'$ times the difference between $U_t^i$ and $u(t,x_i)$ (which is encapsulated in $\eta_t^i$ and so in the spatial fluctuation $\Gamma_t^n$).

The study of the fluctuations is a natural follow-up to the study of mean-field limits for interacting particle systems (see for instance [5], [8], [13], [14], [21], [27], [28], [29], [36]). These results are not only interesting per se, they are also relevant from an applied point of view. Indeed, in the mean-field limit, one can typically show that the so-called propagation of chaos property holds, meaning that the evolutions of any finite number of particles (the neurons in our framework) become independent (see for instance [2], [4]). In other words, mean field limits neglect the correlations between particles which are present in finite (but large) systems. In contrast, these correlations do appear in the fluctuations, in particular in the covariance kernel (5).

With a slight abuse of terminology, the mean field limit $u_t = u(t,\cdot)$, which can be seen as an element of $S'$ given by $u_t(\varphi) = \int_0^1 \varphi(x)\,u(t,x)\,dx$, can be thought of as a zeroth-order approximation of the finite size system $(U_t^1,\dots,U_t^n)_t$. In that respect, we say that the following process with values in $S'$,
$$(u_t + n^{-1/2}\Gamma_t)_t,$$
is a first-order approximation of the finite size system, this definition being justified by our Central Limit Theorem. In addition to the Central Limit Theorem, we also investigate here the link between the first-order approximation and the solution of the following stochastic neural field equation
$$dV_t^n(x) = \Big(-\alpha V_t^n(x) + \int_0^1 w(y,x)\,f(V_t^n(y))\,dy\Big)\,dt + \frac{1}{\sqrt{n}}\int_0^1 w(y,x)\sqrt{f(V_t^n(y))}\,W(dt,dy),\qquad V_0^n(x) = u_0(x),$$
where $W$ is a Gaussian white noise on $\mathbb{R}_+\times[0,1]$. Loosely speaking, in our second main result, namely Theorem 6, we show that the process $(u_t + n^{-1/2}\Gamma_t)_t$ is an "almost" solution of (7). To the best of our knowledge, this result appears to be new in the literature and is of independent interest. To some extent, the solutions of (7) can be interpreted as an intermediate modeling scale, sometimes called the mesoscopic scale, between the microscopic scale given by the Hawkes process (1) and the macroscopic scale given by the neural field equation (2). In order to give sense to solutions of (7) we follow the approach developed by Walsh (see for instance [12], [17] and the seminal lecture notes [37]). Some heuristic arguments leading to the stochastic neural field equation (7) are provided in Section 8.1. Let us mention the article [8] which discusses similar results in a non-rigorous way in the context of nonlinear stochastic partial differential equations.
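A mesoscopic equation of the form (7) can be simulated by an Euler-Maruyama scheme in which the space-time white noise is approximated by one independent Gaussian increment of variance $dt\cdot\Delta y$ per spatial cell per time step. The sketch below is our discretization, not from the paper (the name `solve_stochastic_nfe` is hypothetical):

```python
import numpy as np

def solve_stochastic_nfe(f, w, u0, alpha, T, n, m=100, dt=1e-3, rng=0):
    """Euler-Maruyama sketch for
    dV(x) = (-alpha*V(x) + int_0^1 w(y,x) f(V(y)) dy) dt
            + n^{-1/2} int_0^1 w(y,x) sqrt(f(V(y))) W(dt, dy),
    with W(dt,dy) ~ N(0, dt * (1/m)) per spatial cell. Returns the grid and V(T, .)."""
    rng = np.random.default_rng(rng)
    y = (np.arange(m) + 0.5) / m
    W = w(y[:, None], y[None, :])          # W[j, i] = w(y_j, y_i)
    V = u0(y).astype(float)
    for _ in range(int(round(T / dt))):
        fV = f(V)
        drift = -alpha * V + W.T @ fV / m
        dW = rng.normal(0.0, np.sqrt(dt / m), size=m)   # white-noise cell increments
        diffusion = W.T @ (np.sqrt(np.clip(fV, 0.0, None)) * dW)
        V += drift * dt + diffusion / np.sqrt(n)
    return y, V
```

The $n^{-1/2}$ factor makes the noise vanish as $n\to\infty$, so for large $n$ the trajectories concentrate around the deterministic solution of (2).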

The literature devoted to mean-field limits is usually concerned with the convergence of an empirical measure towards a probability measure which is characterized as the solution of some partial differential equation. It is worth mentioning that this is not the case here: the mean-field equation (2) is not satisfied by a probability density of the potential but by the value of the potential itself. This difference makes the study of (7) simpler: the square root term, namely $\sqrt{f(V_t(y))}$, is trivially well-defined, which is not the case when the mean field limit concerns an empirical measure (see [8] for instance).

The results of the present paper are stated in the distribution space $S'$, so the parameters of the model ($f$, $w$ and $u_0$) are assumed to be smooth. Concerning the rate function $f$, we also assume that its first and second derivatives are bounded (in particular, $f$ is Lipschitz) and that it is lower-bounded by a positive constant (only in the last section). No additional assumptions on the model are needed and, in particular, the function $f$ may be unbounded.

The present paper is organized as follows. In Section 2, the notation used throughout the paper is introduced, the model is described and our first main result, Theorem 1, is stated. In Section 3, some regularity properties of solutions of the neural field equation are derived. Uniform estimates on the second moment of the individual fluctuations (used throughout the paper) are provided in Section 4. Section 5 is devoted to the proof of the tightness of the sequence $(\Gamma^n)_n$ defined in (3). In Section 6, we show that the limit of any converging subsequence of $(\Gamma^n)_n$ solves the limit equation (4). In Section 7, the uniqueness of solutions of the limit equation (4) is proved, which concludes the proof of the Central Limit Theorem (Theorem 1). In Section 8, we first develop the mathematical framework required to study the stochastic neural field equation (7) and then prove our second main result, Theorem 6, which makes the link between the first-order approximation (6) and the stochastic neural field equation (7). Some technical results used in the previous sections are collected in Appendix A. We include in Appendix B some basic definitions about Fréchet spaces.

Section snippets

General notation

Let $E$ and $F$ be metric spaces. The space of continuous (respectively càdlàg) functions from $E$ to $F$ is denoted by $C(E,F)$ (resp. $D(E,F)$). When $F=\mathbb{R}$, we write $C(E)$ (resp. $D(E)$) instead of $C(E,\mathbb{R})$ (resp. $D(E,\mathbb{R})$). For each integer $n\geq 1$, let $[n]=\{1,\dots,n\}$. We write $C^\infty([0,1])$ (resp. $C^\infty(\mathbb{R})$) to denote the set of all functions $\varphi:[0,1]\to\mathbb{R}$ (resp. $\varphi:\mathbb{R}\to\mathbb{R}$) with continuous derivatives of all orders. Similarly, we write $C^\infty([0,1]\times[0,1])$ to denote the set of all functions $\psi:[0,1]\times[0,1]\to\mathbb{R}$ with continuous partial

Solutions of the neural field equation

The purpose of this section is to show regularity properties for the solution $u(t,x)$ of the NFE involved in the definition of the individual fluctuations $(\eta_t^i)_{0\leq t\leq T}$. In the preliminary study made in [10], some regularity properties of $u(t,x)$ were shown. Using this a priori regularity, we are able to show that $u(t,x)$ is in fact smooth.

In [10], the function of interest is not the limit potential u(t,x) but the limit intensity λ(t,x) which is proven to be continuous and uniquely characterized

First estimates

In the sequel, for each $t\geq 0$ and $i\in[n]$, we write
$$M_t^i = N_t^i - \int_0^t f(U_s^i)\,ds,\qquad g(s,x_i) = \frac{1}{n}\sum_{j=1}^n w(x_j,x_i)\,f(u(s,x_j)).$$
Recall (see Section 2.1) that $(M_t^i)_{t\geq 0}$ is the local martingale associated with neuron $i$. With this notation, by using (10) and (2), we can rewrite $\eta_t^i = n^{1/2}(U_t^i - u(t,x_i))$ as follows:
$$\eta_t^i = A_t^i + B_t^i + C_t^i,$$
where $A_t^i$, $B_t^i$ and $C_t^i$ are given respectively by
$$A_t^i = e^{-\alpha t}\,n^{-1/2}\sum_{j=1}^n\int_0^t e^{\alpha s}\,w(x_j,x_i)\,dM_s^j,$$
$$B_t^i = n^{-1/2}\sum_{j=1}^n\int_0^t e^{-\alpha(t-s)}\,w(x_j,x_i)\big(f(U_s^j) - f(u(s,x_j))\big)\,ds,$$
$$C_t^i = n^{1/2}\int_0^t e^{-\alpha(t-s)}\Big(g(s,x_i) - \int_0^1 w(y,x_i)\,f(u(s,y))\,dy\Big)\,ds.$$
Note that (C
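To retrace where the three terms come from, substitute $dN_s^j = dM_s^j + f(U_s^j)\,ds$ into the definition of $U_t^i$, subtract the mild form of (2), and insert $\pm f(u(s,x_j))$:

```latex
\eta_t^i = \sqrt{n}\,\big(U_t^i - u(t,x_i)\big)
  = \underbrace{\frac{1}{\sqrt{n}}\sum_{j=1}^n \int_0^t e^{-\alpha(t-s)} w(x_j,x_i)\,dM_s^j}_{=\,A_t^i}
  + \underbrace{\frac{1}{\sqrt{n}}\sum_{j=1}^n \int_0^t e^{-\alpha(t-s)} w(x_j,x_i)
      \big(f(U_s^j)-f(u(s,x_j))\big)\,ds}_{=\,B_t^i}
  + \underbrace{\sqrt{n}\int_0^t e^{-\alpha(t-s)}
      \Big(g(s,x_i)-\int_0^1 w(y,x_i)\,f(u(s,y))\,dy\Big)\,ds}_{=\,C_t^i}
```

The $e^{-\alpha t}u_0(x_i)$ terms cancel because $u$ satisfies the mild form of (2) with the same initial condition, and $C_t^i$ is small because $g(s,x_i)$ is a Riemann-sum approximation of the integral at rate $1/n$.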

Tightness

The goal of this section is to prove that the sequence of $S'$-valued stochastic processes $(\Gamma^n)_{n\geq 1}$ is tight in $D([0,T],S')$. According to Mitoma [30, Theorem 4.1], it suffices to show that the sequence of stochastic processes $(\Gamma^n(\varphi))_{n\geq 1}$ is tight in $D([0,T],\mathbb{R})$, for each fixed $\varphi\in S$.

In what follows, we fix $\varphi\in S$ and consider the sequence of stochastic processes $(\Gamma^n(\varphi))_{n\geq 1}$. Our goal is to show that this sequence is tight in $D([0,T],\mathbb{R})$. To show this, we use Aldous' tightness criterion. According to Aldous

Limit equation

In this section we first show the convergence of the local martingales $(M^n)_{n\geq 1}$ in order to state the limit equation (38) satisfied by the limit points of $(\Gamma^n)_{n\geq 1}$.

Definition 2

Let $M$ be a continuous centered Gaussian process with values in $S'$ with covariance given, for all $\varphi_1$ and $\varphi_2$ in $S$ and all $t_1, t_2\geq 0$, by
$$\mathbb{E}\big(M_{t_1}(\varphi_1)M_{t_2}(\varphi_2)\big) = \int_0^{t_1\wedge t_2}\int_0^1 e^{2\alpha s}\,I[\varphi_1](y)\,I[\varphi_2](y)\,f(u(s,y))\,dy\,ds,$$
where for each $y\in[0,1]$, $I[\varphi](y) = \int_0^1 w(y,x)\,\varphi(x)\,dx$.

Proposition 5

Under the assumptions of Proposition 3, the sequence $(M^n)_{n\geq 1}$ of processes in $D(\mathbb{R}_+,S')$ converges

Convergence

Proposition 7

Under Assumption 1, there is pathwise uniqueness of the solutions of the limit equation (38): if $\Gamma$ and $\tilde\Gamma$ are two solutions in $C(\mathbb{R}_+,S')$ constructed on the same probability space as $M$, then $\Gamma$ and $\tilde\Gamma$ are indistinguishable.

Proof

Let $\Gamma$ and $\tilde\Gamma$ be two solutions and take $T>0$. In the following, consider the restrictions of $\Gamma$ and $\tilde\Gamma$ to $[0,T]$. For almost every $\omega\in\Omega$, $\Gamma(\omega)$ and $\tilde\Gamma(\omega)$ are continuous and $F_\varphi(\Gamma(\omega)-\tilde\Gamma(\omega), M)=0$ for all $\varphi\in S$, i.e.
$$(\Gamma(\omega)-\tilde\Gamma(\omega))_t(\varphi) = \int_0^t e^{-\alpha(t-s)}\,(\Gamma(\omega)-\tilde\Gamma(\omega))_s\Big(\int_0^1 \varphi(x)\,w(\cdot,x)\,f'(u(s,\cdot))\,dx\Big)\,ds.$$
In the

Connection with a stochastic NFE

Let us begin this section with some discussion of the standard central limit theorem. Let $\bar X_n$ be the empirical mean of some i.i.d. square integrable, centered and normalized random variables $X_1,\dots,X_n$. The law of large numbers and the central limit theorem respectively tell us that $\bar X_n = 0 + o(1)$ and $\bar X_n = 0 + n^{-1/2} Z + o(n^{-1/2})$, where $Z$ is a standard Gaussian random variable. Of course, the second statement is purely informal but gives the flavor of the result.
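This informal statement is easy to check by Monte Carlo (our illustration, not from the paper): the rescaled means $\sqrt{n}\,\bar X_n$ of centered, unit-variance i.i.d. variables behave like a standard Gaussian.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 2_000, 1_000
# Uniform(-sqrt(3), sqrt(3)) has mean 0 and variance 1 (centered, normalized)
samples = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, n))
z = np.sqrt(n) * samples.mean(axis=1)   # rescaled fluctuations sqrt(n) * Xbar_n
# z.mean() is close to 0 and z.std() is close to 1, as the CLT predicts
```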

With this description in mind, we provide here an

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research has been conducted as part of FAPESP, Brazil project Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0). We also acknowledge the support of CNRS, France under the grant PEPS JCJC MaNHawkes.

References (39)

  • S. Chen, A. Shojaie, E. Shea-Brown, D. Witten, The multivariate Hawkes process in high dimensions: Beyond mutual...
  • J. Chevallier, Fluctuations for mean-field interacting age-dependent Hawkes processes, Electron. J. Probab. (2017)
  • J. Chevallier et al., Microscopic approach of a time elapsed neural model, Math. Models Methods Appl. Sci. (2015)
  • J. Chevallier et al., Mean field limits for nonlinear spatially extended Hawkes processes with exponential memory kernels, Stochastic Process. Appl. (2018)
  • E. Chornoboy et al., Maximum likelihood identification of neural point process systems, Biol. Cybernet. (1988)
  • R.C. Dalang et al., A Minicourse on Stochastic Partial Differential Equations, Vol. 1962 (2009)
  • F. Delarue et al., From the master equation to mean field game limit theory: a central limit theorem, Electron. J. Probab. (2019)
  • B. Ermentrout, Neural networks as spatio-temporal pattern-forming systems, Rep. Progr. Phys. (1998)
  • O. Faugeras et al., Stochastic neural field equations: a rigorous footing, J. Math. Biol. (2015)