
Mathematical Frameworks for Oscillatory Network Dynamics in Neuroscience

Abstract

The tools of weakly coupled phase oscillator theory have had a profound impact on the neuroscience community, providing insight into a variety of network behaviours ranging from central pattern generation to synchronisation, as well as predicting novel network states such as chimeras. However, there are many instances where this theory is expected to break down, say in the presence of strong coupling, or must be carefully interpreted, as in the presence of stochastic forcing. There are also surprises in the dynamical complexity of the attractors that can robustly appear—for example, heteroclinic network attractors. In this review we present a set of mathematical tools that are suitable for addressing the dynamics of oscillatory neural networks, broadening from a standard phase oscillator perspective to provide a practical framework for further successful applications of mathematics to understanding network dynamics in neuroscience.

1 Introduction

Coupled oscillator theory is now a pervasive part of the theoretical neuroscientist’s toolkit for studying the dynamics of models of biological neural networks. Undoubtedly this technique originally arose in the broader scientific community through a fascination with understanding synchronisation in networks of interacting heterogeneous oscillators, and can be traced back to the work of Huygens on “an odd kind of sympathy” between coupled pendulum clocks [1]. Subsequently the theory has been developed and applied to the interaction between organ pipes [2], phase-locking phenomena in electronic circuits [3], the analysis of brain rhythms [4], chemical oscillations [5], cardiac pacemakers [6], circadian rhythms [7], flashing fireflies [8], coupled Josephson junctions [9], rhythmic applause [10], animal flocking [11], fish schooling [12], and behaviours in social networks [13]. For a recent overview of the application of coupled phase oscillator theory to areas as diverse as vehicle coordination, electric power networks, and clock synchronisation in decentralised networks see the recent survey article by Dörfler and Bullo [14].

Given the widespread nature of oscillations in neural systems it is no surprise that the science of oscillators has found such ready application in neuroscience [15]. This has proven especially fruitful for shedding light on the functional role that oscillations can play in feature binding [16, 17], cognition [18], memory processes [19], odour perception [20, 21], information transfer mechanisms [22], inter-limb coordination [23, 24], and the generation of rhythmic motor output [25]. Neural oscillations also play an important role in many neurological disorders, such as excessive synchronisation during seizure activity in epilepsy [26, 27], tremor in patients with Parkinson’s disease [28] or disruption of cortical phase synchronisation in schizophrenia [29]. As such it has proven highly beneficial to develop methods for the control of (de)synchronisation in oscillatory networks, as exemplified by the work of Tass et al. [30, 31] for therapeutic brain stimulation techniques. From a transformative technology perspective, oscillatory activity is increasingly being used to control external devices in brain–computer interfaces, in which subjects can control an external device by changing the amplitude of a particular brain rhythm [32].

Neural oscillations can emerge in a variety of ways, including intrinsic mechanisms within individual neurons or by interactions between neurons. At the single neuron level, sub-threshold oscillations can be seen in membrane voltage as well as rhythmic patterns of action potentials. Both can be modelled using the Hodgkin–Huxley conductance formalism, and analysed mathematically with dynamical systems techniques to shed light on the mechanisms that underlie various forms of rhythmic behaviour, including tonic spiking and bursting (see e.g. [33]). The high dimensionality of biophysically realistic single neuron models has also encouraged the use of reduction techniques, such as the separation of time scales recently reviewed in [34, 35], or the use of phenomenological models, such as FitzHugh–Nagumo (FHN) [36], to regain some level of mathematical tractability. This has proven especially useful when studying the response of single neurons to forcing [37], itself a precursor to understanding how networks of interacting neurons can behave. When mediated by synaptic interactions, the repetitive firing of presynaptic neurons can cause oscillatory activation of postsynaptic neurons. At the level of neural ensembles, synchronised activity of large numbers of neurons gives rise to macroscopic oscillations, which can be recorded with a micro-electrode embedded within neuronal tissue as a voltage change referred to as a local field potential (LFP). These oscillations were first observed outside the brain by Hans Berger in 1924 [38] in electroencephalogram (EEG) recordings, and have given rise to the modern classification of brain rhythms into frequency bands for alpha activity (8–13 Hz, recorded from the occipital lobe during relaxed wakefulness), delta (1–4 Hz), theta (4–8 Hz), beta (13–30 Hz) and gamma (30–70 Hz). The latter rhythm is often associated with cognitive processing, and it is now common to link large scale neural oscillations with cognitive states, such as awareness and consciousness. For example, from a practical perspective the monitoring of brain states via EEG is used to determine depth of anaesthesia [39]. Such macroscopic signals can also arise from interactions between different brain areas, the thalamo-cortical loop being a classic example [40]. Neural mass models (describing the coarse grained activity of large populations of neurons and synapses) have proven especially useful in understanding EEG rhythms [41], as well as in augmenting the dynamic causal modelling framework (driven by large scale neuroimaging data) for understanding how event-related responses result from the dynamics of coupled neural populations [42].

One very influential mathematical technique for analysing networks of neural oscillators, whether they be built from single neuron or neural mass models, has been that of weakly coupled oscillator theory, as comprehensively described by Hoppensteadt and Izhikevich [43]. In the limit of weak coupling between limit-cycle oscillators, invariant manifold theory [44] and averaging theory [45] can be used to reduce the dynamics to a set of phase equations in which the relative phase between oscillators is the relevant dynamical variable. This approach has been applied to neural behaviour ranging from that seen in small rhythmic networks [46] up to the whole brain [47]. Despite the powerful tools and widespread use afforded by this formalism, it does have a number of limitations (such as assuming the persistence of the limit cycle under coupling), and it is worth remembering that there are other tools from the mathematical sciences relevant to understanding network behaviour. In this review, we embed the weakly coupled oscillator formalism within a variety of other techniques, ranging from symmetric bifurcation theory and the groupoid formalism through to more “physics-based” approaches for obtaining reduced models of large networks. This highlights the regimes where the standard formalism is applicable, and provides a set of complementary tools for when it is not. These are especially useful when investigating systems with strong coupling, or ones for which the rate of attraction to a limit cycle is slow.

In Sect. 2 we review some of the key mathematical models of oscillators in neuroscience, ranging from single neuron to neural mass, as well as introduce the standard machinery for describing synaptic and gap-junction coupling. We then present in Sect. 3 an overview of some of the more powerful mathematical approaches to understanding the collective behaviour in coupled oscillator networks, mainly drawn from the theory of symmetric dynamics. We touch upon the master stability function approach and the groupoid formalism for handling coupled cell systems. In Sect. 4 we review some special cases where it is either possible to say something about the stability of the globally synchronous state in a general setting, or that of phase-locked states for strongly coupled networks of integrate-and-fire neurons. The challenge of the general case is laid out in Sect. 5, where we advocate the use of phase–amplitude coordinates as a starting point for either direct network analysis or network reduction. To highlight the importance of dynamics off cycle we discuss the phenomenon of shear-induced chaos. In the same section we review the reduction to the standard phase-only description of an oscillator, covering the well-known notions of isochrons and phase response curves. The construction of phase interaction functions for weakly coupled phase oscillator networks is covered in Sect. 6, together with tools for analysing phase-locked states. Moreover, we go beyond standard approaches and describe the emergence of turbulent states in continuum models with non-local coupling. Another example of something more complicated than a periodic attractor is that of a heteroclinic attractor, and these are the subject of Sect. 7. The subtleties of phase reduction in the presence of stochastic forcing are outlined in Sect. 8. The search for reduced descriptions of very large networks is the topic of Sect. 9, where we cover recent results for Winfree networks that provide an exact mean-field description in terms of a complex order parameter. This approach makes use of the Ott–Antonsen ansatz that has also found application to chimera states, and which we discuss in a neural context. In Sect. 10 we briefly review some examples where the mathematics of this review have been applied, and finally in Sect. 11 we discuss some of the many open challenges in the field of neural network dynamics.

We will assume the reader has familiarity with the following:

  • The basics of nonlinear differential equation descriptions of dynamical systems such as linear stability and phase-plane analysis.

  • Ideas from the qualitative theory of differential equations/dynamical systems such as asymptotic stability, attractors and limit cycles.

  • Generic codimension-one bifurcation theory of equilibria (saddle node, Hopf) and of periodic orbits (saddle node of limit cycles, heteroclinic, torus, flip).

There are a number of texts that cover this material very well in the context of neuroscience modelling, for example [48, 49]. We include a glossary of some abbreviations that are introduced in the text.

2 Neurons and Neural Populations as Oscillators

Nonlinear ionic currents, mediated by voltage-gated ion channels, play a key role in generating membrane potential oscillations and action potentials. There are many ordinary differential equation (ODE) models for voltage oscillations, ranging from biophysically detailed conductance-based models through to simple integrate-and-fire (IF) caricatures. This style of modelling has also proved fruitful at the population level, for tracking the averaged activity of a near synchronous state. In all these cases bifurcation analysis is especially useful for classifying the types of oscillatory (and possibly resonant) behaviour that are possible. Here we give a brief overview of some of the key oscillator models encountered in computational neuroscience, as well as models for electrical and chemical coupling necessary to build networks.

2.1 The Hodgkin–Huxley Model and Its Planar Reduction

The work of Hodgkin and Huxley in elucidating the mechanism of action potentials in the squid giant axon is one of the major breakthroughs of dynamical modelling in physiology [50], and see [51] for a review. Their work underpins all modern electrophysiological models, exploiting the observation that cell membranes behave much like electrical circuits. The basic circuit elements are (1) the phospholipid bilayer, which is analogous to a capacitor in that it accumulates ionic charge as the electrical potential across the membrane changes; (2) the ionic permeabilities of the membrane, which are analogous to resistors in an electronic circuit; and (3) the electrochemical driving forces, which are analogous to batteries driving the ionic currents. These ionic currents are arranged in a parallel circuit. Thus the electrical behaviour of cells is based upon the transfer and storage of ions such as potassium (K+) and sodium (Na+).

Our goal here is to illustrate, by exploiting specific models of excitable membrane, some of the concepts and techniques which can be used to understand, predict, and interpret the excitable and oscillatory behaviours that are commonly observed in single cell electrophysiological recordings. We begin with the mathematical description of the Hodgkin–Huxley model.

The standard dynamical system for describing a neuron as a spatially isopotential cell with membrane potential V is based upon conservation of electric charge, so that

$$ C \frac{\mathrm{d}}{\mathrm{d}t} V = I_{\mathrm{ion}} +I , $$

where C is the cell capacitance, I the applied current and \(I_{\mathrm{ion}}\) represents the sum of individual ionic currents:

$$ I_{\mathrm{ion}} = -g_{\mathrm{K}} (V-V_{\mathrm{K}}) - g_{\mathrm {Na}} (V-V_{\mathrm{Na}}) - g_{L} (V-V_{L}). $$

In the Hodgkin–Huxley (HH) model the membrane current arises mainly through the conduction of sodium and potassium ions through voltage-dependent channels in the membrane. The contribution from other ionic currents is assumed to obey Ohm’s law (and is called the leak current). The \(g_{\mathrm{K}}\), \(g_{\mathrm{Na}}\) and \(g_{L}\) are conductances (reciprocal resistances), the first two of which vary in time through gating variables. The great insight of Hodgkin and Huxley was to realise that \(g_{\mathrm {K}}\) depends upon four activation gates: \(g_{\mathrm{K}} = \overline{g}_{\mathrm {K}} n^{4}\), whereas \(g_{\mathrm{Na}}\) depends upon three activation gates and one inactivation gate: \(g_{\mathrm{Na}} = \overline{g}_{\mathrm{Na}} m^{3} h\). Here the gating variables all obey equations of the form

$$ \frac{\mathrm{d}}{\mathrm{d}t} {X} = \frac{X_{\infty}(V)-X}{\tau _{X}(V)},\quad X \in\{ m,n,h\}. $$

The gating variables X take values between 0 and 1 and approach the asymptotic values \(X_{\infty}(V)\) with time constants \(\tau_{X}(V)\). These six functions are obtained from fits with experimental data. It is common practice to write \(\tau_{X}(V) = 1/(\alpha_{X}(V) + \beta _{X}(V))\), \(X_{\infty}(V) =\alpha_{X}(V) \tau_{X}(V)\), where α and β have the interpretation of opening and closing channel transition rates, respectively. The details of the HH model are provided in the Appendix for completeness. A numerical bifurcation diagram of the model in response to constant current injection is shown in Fig. 1, illustrating that oscillations can emerge in a Hopf bifurcation with increasing drive.

Fig. 1

Bifurcation diagram of the Hodgkin–Huxley model as a function of the external drive I. The green lines show the amplitude of a stable limit cycle and the blue lines indicate unstable limit-cycle behaviour, both born from Hopf bifurcations. The solid red line shows the stable fixed point and the black line shows the unstable fixed point. Details of the model are provided in the Appendix
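To make the gating formalism concrete, the following minimal sketch integrates the HH equations in Python. The precise rate functions used in this review are those given in the Appendix; the code instead assumes one common textbook parameterisation of the squid-axon fits (conductances in mS cm−2, potentials in mV), and the drive \(I=10\) is chosen to sit in the oscillatory regime of Fig. 1.

import numpy as np
from scipy.integrate import solve_ivp

C = 1.0                               # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances (mS/cm^2)
V_Na, V_K, V_L = 50.0, -77.0, -54.4   # reversal potentials (mV)

def rates(V):
    # opening (alpha) and closing (beta) rates for X in {m, n, h};
    # an assumed, commonly used parameterisation of the experimental fits
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return a_m, b_m, a_n, b_n, a_h, b_h

def hh(t, y, I):
    V, m, n, h = y
    a_m, b_m, a_n, b_n, a_h, b_h = rates(V)
    # I_ion as in the text: gated K+ and Na+ currents plus the leak
    I_ion = (-g_K * n**4 * (V - V_K) - g_Na * m**3 * h * (V - V_Na)
             - g_L * (V - V_L))
    # dX/dt = alpha_X*(1 - X) - beta_X*X, equivalent to (X_inf - X)/tau_X
    return [(I_ion + I) / C,
            a_m * (1.0 - m) - b_m * m,
            a_n * (1.0 - n) - b_n * n,
            a_h * (1.0 - h) - b_h * h]

sol = solve_ivp(hh, (0.0, 100.0), [-65.0, 0.05, 0.32, 0.6],
                args=(10.0,), max_step=0.05)  # I = 10: tonic spiking
print("max V (mV):", sol.y[0].max())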

The mathematical forms chosen by Hodgkin and Huxley for the functions \(\tau_{X}\) and \(X_{\infty}\), \(X \in\{m,n,h\}\), are all transcendental functions (i.e. involve exponentials). Both this and the high dimensionality of the model make analysis difficult. However, considerable simplification is attained with the following observations: (i) \(\tau_{m}(V)\) is small for all V so that the variable m rapidly approaches its equilibrium value \(m_{\infty}(V)\), and (ii) the equations for h and n have similar time-courses and may be slaved together. This has been put on a more formal footing by Abbott and Kepler [52] by expressing both n and h as functions of a scalar quantity \(U(t)\). They obtain a planar reduction of the full Hodgkin–Huxley model in \((V,U)\) co-ordinates under the replacement \(m \rightarrow m_{\infty}(V)\) and \(X = X_{\infty}(U)\) for \(X \in\{ n , h \}\) with a prescribed choice of dynamics for U (such that \(\mathrm{d}^{2} V/ \mathrm{d} t^{2}\) is similar in the two models for a fixed V). The phase plane and nullclines for this system are shown in Fig. 2.

Fig. 2

Nullclines (red for V and green for U) of the reduced HH neuron model, obtained using the reduction technique of Abbott and Kepler [52], for the oscillatory regime (\(I=10\)) capable of supporting a periodic train of spikes. The periodic orbit is shown in blue

For zero external input the fixed point is stable and the neuron is said to be excitable. When a positive external current is applied the low-voltage portion of the V nullcline moves up whilst the high-voltage part remains relatively unchanged. For sufficiently large constant external input the intersection of the two nullclines falls within the portion of the V nullcline with positive slope. In this case the fixed point is unstable and the system may support a limit cycle. If an emergent limit cycle is stable then a train of action potentials will be produced and the system is referred to as being oscillatory. Action potentials may also be induced in the absence of an external current for synaptic stimuli of sufficient strength and duration. This simple planar model captures all of the essential features of the original HH model yet is much easier to understand from a geometric perspective. Indeed the model is highly reminiscent of the famous FHN model, in which the voltage nullcline is taken to be a cubic function. Both models show the onset of repetitive firing at a nonzero frequency as observed in the HH model (when an excitable state loses stability via a subcritical Hopf bifurcation). However, unlike real cortical neurons they cannot fire at arbitrarily low frequency. This brings us to consider modifications of the original HH formalism to accommodate bifurcation mechanisms from excitable to oscillatory behaviours that can respect this experimental observation.

2.2 The Cortical Model of Wilson

Many of the properties of real cortical neurons can be captured by making the equation for the recovery variable of the FHN equations quadratic (instead of linear). We are thus led to the cortical model of Wilson [53]:

$$\begin{aligned} C \frac{\mathrm{d}}{\mathrm{d}t} {v} & = f(v) - w +I,\quad f(v) = v(a-v) (v-1) , \\ \frac{\mathrm{d}}{\mathrm{d}t} {w} & = \beta(v-v_{1}) (v-v_{2})-\gamma w, \end{aligned}$$

where \(0< a<1\) and \(\beta,\gamma, C>0\). Here v is like the membrane potential V, and w plays the role of a gating variable. In addition to the single fixed point of the FHN model it is possible to have another pair of fixed points, as shown in Fig. 3(left). As I increases two fixed points can annihilate at a saddle node on an invariant circle (SNIC) bifurcation at \(I=I_{c}\) [48]. In the neighbourhood of this global bifurcation the firing frequency scales like \(\sqrt{I-I_{c}}\). For large enough I there is only one fixed point on the middle branch of the cubic, as illustrated in Fig. 3(right). In this instance an oscillatory solution occurs via the same mechanism as for the FHN model.

Fig. 3

Phase portrait for the cortical neuron model with quadratic recovery variable, \(a=0.1\), \(\beta=\gamma=0.5\), \(v_{1}=0\), \(v_{2}=0.2\). The voltage nullcline is shown in red and that of the recovery variable in green. Left: \(I=0\), showing a stable fixed point (black filled circle), a saddle (grey filled circle) and an unstable fixed point (white filled circle). Right: \(I=0.1\), where there is an unstable fixed point (white filled circle) with a stable limit cycle (in blue) for \(C=0.01\)
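The \(\sqrt{I-I_{c}}\) scaling can be checked numerically by estimating the firing frequency from direct simulation. The sketch below uses the parameters of Fig. 3; the spike-detection level \(v=0.5\) and the particular values of I are illustrative assumptions, and \(f^{2}\) plotted against I should be roughly linear near onset.

import numpy as np
from scipy.integrate import solve_ivp

a, beta, gamma, C = 0.1, 0.5, 0.5, 0.01
v1, v2 = 0.0, 0.2

def wilson(t, y, I):
    v, w = y
    dv = (v * (a - v) * (v - 1.0) - w + I) / C
    dw = beta * (v - v1) * (v - v2) - gamma * w
    return [dv, dw]

def frequency(I, t_end=200.0):
    # estimate frequency from upward crossings of v = 0.5 (assumed level)
    sol = solve_ivp(wilson, (0.0, t_end), [0.0, 0.0], args=(I,), max_step=0.01)
    v = sol.y[0]
    crossings = sol.t[1:][(v[:-1] < 0.5) & (v[1:] >= 0.5)]
    crossings = crossings[crossings > t_end / 2.0]   # discard the transient
    return 1.0 / np.mean(np.diff(crossings)) if len(crossings) > 2 else 0.0

for I in [0.06, 0.08, 0.10, 0.12]:   # illustrative drives
    print(f"I = {I:.2f}: f = {frequency(I):.3f}")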

2.3 Morris–Lecar with Homoclinic Bifurcation

A SNIC bifurcation is not the only way to achieve a low firing rate in a single neuron model. It is also possible to achieve this via a homoclinic bifurcation, as in the Morris–Lecar (ML) model [54]. This was originally developed as a model for barnacle muscle fibre. Morris and Lecar introduced a set of two coupled ODEs incorporating two ionic currents: an outward, non-inactivating potassium current and an inward, non-inactivating calcium current. Assuming that the calcium currents operate on a much faster time scale than the potassium current, they formulated the following two-dimensional system:

$$\begin{aligned} C\frac{\mathrm{d}}{\mathrm{d}t}V &= g_{L}(V_{L}-V) +g_{\mathrm{K}} w (V_{\mathrm{K}}-V) + g_{\mathrm{Ca}}m_{\infty}(V) (V_{\mathrm{Ca}}-V) + I , \\ \frac{\mathrm{d}}{\mathrm{d}t}w &= \lambda(V) \bigl(w_{\infty}(V)-w\bigr), \end{aligned}$$

with \(m_{\infty}(V) = 0.5(1+\tanh[(V-V_{1})/V_{2}])\), \(w_{\infty}(V) = 0.5(1+ \tanh[(V-V_{3})/V_{4}])\), \(\lambda(V) = \phi\cosh[(V-V_{3})/(2V_{4})]\), where \(V_{1}, \ldots, V_{4} \) and ϕ are constants. Here w represents the fraction of K+ channels open, and the Ca2+ channels respond to V so rapidly that we assume instantaneous activation. In these equations \(g_{L}\) is the leakage conductance, \(g_{\mathrm{K}}\), \(g_{\mathrm{Ca}}\) are the potassium and calcium conductances, \(V_{L}\), \(V_{\mathrm{K}}\), \(V_{\mathrm{Ca}}\) are corresponding reversal potentials, \(m_{\infty}(V)\), \(w_{\infty}(V)\) are voltage-dependent gating functions and \(\lambda(V)\) is a voltage-dependent rate. Referring to Fig. 4, as I decreases the periodic orbit grows in amplitude and approaches a saddle point, with the period increasing such that near the homoclinic bifurcation, where the orbit collides with the saddle at \(I=I_{c}\), the frequency of oscillation scales as \(-1/\log(I-I_{c})\).

Fig. 4

Phase portrait of the Morris–Lecar model at \(I=0.075\) with \(C=1\), \(V_{\mathrm{K}}=-0.7\), \(V_{L}=-0.5\), \(V_{\mathrm{Ca}}=1\), \(g_{\mathrm {K}}=2\), \(g_{L}=0.5\), \(V_{1}=-0.01\), \(V_{2}=0.15\), \(g_{\mathrm{Ca}}=1.33\), \(V_{3}=0.1\), \(V_{4}=0.145\) and \(\phi=1/3\). The voltage nullcline is shown in red and that of the gating variable in green. The filled black circle indicates a stable fixed point, the grey filled circle a saddle and the filled white circle an unstable fixed point. The periodic orbit is shown in blue
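The logarithmic period scaling can likewise be observed by direct simulation. The sketch below integrates the ML equations with the parameters of Fig. 4; since the stable fixed point coexists with the periodic orbit, the initial condition must be placed in the basin of the orbit, and the initial state, crossing level and sequence of I values here are illustrative assumptions (\(I_{c}\) itself is not computed).

import numpy as np
from scipy.integrate import solve_ivp

C, phi = 1.0, 1.0 / 3.0
V_K, V_L, V_Ca = -0.7, -0.5, 1.0
g_K, g_L, g_Ca = 2.0, 0.5, 1.33
V1, V2, V3, V4 = -0.01, 0.15, 0.1, 0.145

def ml(t, y, I):
    V, w = y
    m_inf = 0.5 * (1.0 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1.0 + np.tanh((V - V3) / V4))
    lam = phi * np.cosh((V - V3) / (2.0 * V4))
    dV = (g_L * (V_L - V) + g_K * w * (V_K - V)
          + g_Ca * m_inf * (V_Ca - V) + I) / C
    return [dV, lam * (w_inf - w)]

def period(I, t_end=2000.0):
    # period from upward crossings of V = 0 on the attracting orbit
    sol = solve_ivp(ml, (0.0, t_end), [0.0, 0.1], args=(I,), max_step=0.5)
    V = sol.y[0]
    tc = sol.t[1:][(V[:-1] < 0.0) & (V[1:] >= 0.0)]
    tc = tc[tc > t_end / 2.0]
    return np.mean(np.diff(tc)) if len(tc) > 2 else np.inf

for I in [0.075, 0.073, 0.071]:      # illustrative values approaching I_c
    print(f"I = {I:.3f}: period = {period(I):.1f}")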

2.4 Integrate-and-Fire

Although conductance-based models like that of Hodgkin and Huxley provide a level of detail that helps us understand how the kinetics of channels (with averaged activation and inactivation variables) can underlie action-potential generation, their high dimensionality is a barrier to studies at the network level. The goal of a network-level analysis is to predict emergent computational properties in populations and recurrent networks of neurons from the properties of their component cells. Thus simpler (lower-dimensional and hopefully mathematically tractable) models are more appealing—especially if they fit single neuron data.

A one-dimensional nonlinear IF model takes the form

$$ \tau\frac{\mathrm{d}}{\mathrm{d}t} v = f(v) + I\quad \mbox{if }v< v_{\mathrm{th}} , $$
(1)

such that \(v(t)\) is reset to \(v_{{R}}\) just after reaching the value \(v_{\mathrm{th}}\). In other words, we seek a piecewise-smooth, discontinuous solution \(v(t)\) of (1) such that

$$\lim_{t\rightarrow t_{0}^{-}}v(t)=v_{\mathrm{th}} \quad \mbox{implies that}\quad\lim_{t\rightarrow t_{0}^{+}} v(t)=v_{{R}}. $$

Firing times are defined iteratively according to

$$ T_{n} = \inf\bigl\{ t | v(t) \geq v_{\mathrm{th}}; t \geq T_{n-1} \bigr\} . $$

Real cortical data can be very accurately fit using

$$ f(v)= v_{{L}}-v + \kappa \mathrm {e}^{(v-v_{\kappa})/\kappa}, $$

with \(v_{{L}}=-68.5\mbox{ mV}\), \(\tau=3.3\mbox{ ms}\), \(v_{\kappa}= -61.5\mbox{ mV}\) and \(\kappa=4\mbox{ mV}\) [55]. There are many varieties of nonlinear IF model, with the quadratic one [56] being well known as a precursor for the planar Izhikevich spiking model [57], itself capable of generating a wide variety of firing patterns, including bursting and chattering as well as regular spiking. For a more thorough discussion of IF models and the challenges of analysing non-smooth dynamical systems we refer the reader to [58].
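A minimal sketch of this exponential IF model with the fitted parameters above is given here; the threshold \(v_{\mathrm{th}}\), the reset \(v_{R}\) and the drive I are not specified in the text, so the values used are illustrative assumptions.

import numpy as np

tau, v_L, v_kappa, kappa = 3.3, -68.5, -61.5, 4.0   # ms, mV
v_th, v_R = -30.0, -65.0                # assumed threshold and reset (mV)

def simulate(I, t_end=500.0, dt=0.01):
    # Euler integration of (1) with reset; returns the firing times T_n
    v, spikes = v_L, []
    for step in range(int(t_end / dt)):
        f = v_L - v + kappa * np.exp((v - v_kappa) / kappa)
        v += dt * (f + I) / tau
        if v >= v_th:                   # threshold crossing: record T_n
            spikes.append(step * dt)
            v = v_R                     # instantaneous reset to v_R
    return np.array(spikes)

T = simulate(I=12.0)   # assumed drive above rheobase (min of -f is 3 at v_kappa)
if len(T) > 1:
    print("mean firing rate (Hz):", 1e3 * (len(T) - 1) / (T[-1] - T[0]))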

2.5 Neuronal Coupling

At a synapse presynaptic firing results in the release of neurotransmitters that cause a change in the membrane conductance of the postsynaptic neuron. This postsynaptic current may be written \(I_{s}(t) = g_{s} s(t) (V_{s} - V(t))\) where \(V(t)\) is the voltage of the postsynaptic neuron, \(V_{s}\) is the membrane reversal potential and \(g_{s}\) is a constant conductance. The variable s corresponds to the probability that a synaptic receptor channel is in an open conducting state. This probability depends on the presence and concentration of neurotransmitter released by the presynaptic neuron. The sign of \(V_{s}\) relative to the resting potential (which without loss of generality we may set to zero) determines whether the synapse is excitatory (\(V_{s} >0\)) or inhibitory (\(V_{s} < 0\)).

The effect of some synapses can be described with a function that fits the shape of the postsynaptic response due to the arrival of an action potential at the presynaptic release site. The postsynaptic response \(s(t)\) would then be given by \(s(t) = \eta(t-T)\), \(t \geq T\) where T is the arrival time of a presynaptic action potential and \(\eta(t)\) fits the shape of a realistic postsynaptic response (with \(\eta(t)=0\) for \(t<0\)). A common (normalised) choice for \(\eta(t)\) is a difference of exponentials:

$$ \eta(t) = \biggl(\frac{1}{\alpha} - \frac{1}{\beta} \biggr)^{-1} \bigl(\mathrm {e}^{-\alpha t} - \mathrm {e}^{-\beta t}\bigr) , $$
(2)

or the alpha function \(\alpha^{2} t \mathrm {e}^{-\alpha t}\) obtained from (2) in the limit \(\beta\rightarrow\alpha\). The postsynaptic response arising from a train of action potentials is given by

$$ s(t) = \sum_{m \in \mathbb {Z}} \eta(t-T_{m}), $$
(3)

where \(T_{m}\) denotes the arrival time of the mth action potential at a synapse.
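The filter (2), its alpha-function limit, and the summed response (3) translate directly into code; in the sketch below the values of α, β and the presynaptic firing times \(T_{m}\) are illustrative.

import numpy as np

def eta(t, alpha, beta):
    # normalised difference of exponentials (2); zero for t < 0
    norm = 1.0 / (1.0 / alpha - 1.0 / beta)
    return np.where(t >= 0.0, norm * (np.exp(-alpha * t) - np.exp(-beta * t)), 0.0)

def eta_alpha(t, alpha):
    # the alpha function, recovered from (2) in the limit beta -> alpha
    return np.where(t >= 0.0, alpha**2 * t * np.exp(-alpha * t), 0.0)

def s(t, spike_times, alpha, beta):
    # postsynaptic response (3): a sum of filtered presynaptic spikes
    return sum(eta(t - T, alpha, beta) for T in spike_times)

t = np.linspace(0.0, 50.0, 2001)
T_m = [5.0, 15.0, 18.0, 30.0]            # illustrative firing times
print("peak response:", s(t, T_m, alpha=1.0, beta=2.0).max())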

Interestingly, even purely inhibitory synaptic interactions between non-oscillatory neurons can create oscillations at the network level, and can give rise to central pattern generators of “half-centre” type [59]. To see this we need only consider a pair of (reduced) Hodgkin–Huxley neurons with mutual reciprocal inhibition mediated by an α-function synapse with a negative reversal potential. The phenomenon of anode break excitation (whereby a neuron fires an action potential in response to the termination of a hyperpolarising current) can underlie a natural anti-phase rhythm, and is best understood in terms of the phase plane shown in Fig. 2. In this case inhibition will effectively move the voltage nullcline down, and the system will equilibrate to a new hyperpolarised state. Upon release of inhibition the fixed point will move to a higher value, though to reach this new state the trajectory must jump to the right-hand branch (since now \({\mathrm {d}V}/{\mathrm {d}t}>0\)). An example is shown in Fig. 5.

Fig. 5

A half-centre oscillator built from two (reduced) Hodgkin–Huxley neurons with mutual reciprocal inhibition modelled by an α-function synapse and a negative reversal potential. The inset shows the voltage traces (solid and dashed lines) for the two neurons. The solid blue line in the \((V,U)\) space shows the common orbit of the two neurons (though each neuron travels half a period out of phase with the other). Parameters: \(\alpha=1\mbox{ ms}^{-1}\), \(V_{s}=-100\mbox{ mV}\), \(g_{s}=100\mbox{ mS}\,\mbox{cm}^{-2}\). Spike times determined by a voltage threshold at \(V=-35\mbox{ mV}\)

Gap junctions differ from chemical synapses in that they allow for direct communication between cells. They are typically formed from the juxtaposition of two hemichannels (connexin proteins) and allow the free movement of ions or molecules across the intercellular space separating the plasma membrane of one cell from another. As well as being found in the neocortex, they occur in many other brain regions, including the hippocampus, inferior olivary nucleus in the brain stem, the spinal cord, and the thalamus [60]. Without the need for receptors to recognise chemical messengers, gap junctions are much faster than chemical synapses at relaying signals. The synaptic delay for a chemical synapse is typically in the range 1–100 ms, while the synaptic delay for an electrical synapse may be only about 0.2 ms.

It is common to view the gap junction as nothing more than a channel that conducts current according to a simple ohmic model. For two neurons with voltages \(v_{i}\) and \(v_{j}\) the current flowing into cell i from cell j is given by \(I_{\mathrm{gap}}(v_{i},v_{j})=g (v_{j}-v_{i})\), where g is the constant strength of the gap-junction conductance. Gap junctions are believed to promote synchrony between oscillators (e.g. see [61]), though the story is more subtle than this, as we shall discuss in Sect. 4.
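Since \(I_{\mathrm{gap}}\) enters the voltage balance in exactly the same way as the applied current, gap-junction coupling is simple to add to any of the single cell models above. The sketch below couples two copies of the Morris–Lecar right-hand side ml from the sketch in Sect. 2.3 (with \(C=1\) there, adding \(I_{\mathrm{gap}}\) to I is exact); the conductance \(g=0.05\) and the initial conditions are illustrative assumptions.

from scipy.integrate import solve_ivp

def pair(t, y, I, g):
    # two ML cells, each receiving I_gap(v_i, v_j) = g*(v_j - v_i)
    va, wa, vb, wb = y
    dva, dwa = ml(t, [va, wa], I + g * (vb - va))
    dvb, dwb = ml(t, [vb, wb], I + g * (va - vb))
    return [dva, dwa, dvb, dwb]

sol = solve_ivp(pair, (0.0, 1000.0), [0.0, 0.1, 0.2, 0.05],
                args=(0.075, 0.05), max_step=0.5)
print("final voltage difference:", abs(sol.y[0, -1] - sol.y[2, -1]))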

2.6 Neural Mass Models

As well as supporting oscillations at the single neuron level, brain tissue can also generate oscillations at the tissue level. Rather than model this using networks built from single neuron models, it has proven especially useful to develop low-dimensional models to mimic the collection of thousands of near identical interconnected neurons with a preference to operate in synchrony. These are often referred to as neural mass models, with state variables that track coarse grained measures of the average membrane potential, firing rates or synaptic activity. They have proven especially useful in the description of human EEG power spectra [62], as well as resting brain state activity [63] and mesoscopic brain oscillations [64].

In many neural population models, such as the well-known Wilson–Cowan model [65], it is assumed that the interactions are mediated by firing rates rather than action potentials (spikes) per se. To see how this might arise we rewrite (3) in the equivalent form using a sum of Dirac δ-functions,

$$ \biggl(1 + \frac{1}{\alpha} \frac{\mathrm{d}}{\mathrm{d}t} \biggr) \biggl(1 + \frac {1}{\beta } \frac{\mathrm{d}}{\mathrm{d}t} \biggr) s = \sum_{m \in \mathbb {Z}} \delta(t-T_{m}). $$
(4)

Identifying the right hand side of (4) as a train of presynaptic spikes motivates a phenomenological rate model of the form

$$ Q s = f , $$
(5)

with f identified as a firing rate and Q identified as the differential operator \((1+\alpha^{-1} \mathrm{d}/\mathrm {d}t)(1+\beta^{-1} \mathrm{d}/\mathrm{d} t)\). At the network level it is then common practice to close this system of equations by specifying f to be a function of presynaptic activity. A classic example is the Jansen–Rit model [66], which describes a network of interacting pyramidal neurons (P), inhibitory interneurons (I) and excitatory interneurons (E), and has been used to model both normal and epileptic patterns of cortical activity [67]. This can be written in the form

$$Q_{E} s_{P} = f(s_{E} -s_{I}) , \qquad Q_{E} s_{E} = C_{2}f(C_{1} s_{P}) +A ,\qquad Q_{I}s_{I} = C_{4} f(C_{3} s_{P}) , $$

which is a realisation of the structure suggested by (5), with the choice

$$ f(v) = \frac{\nu}{1+\mathrm {e}^{-r(v-v_{0})}}, $$

and \(Q_{a} = (1+\beta_{a}^{-1}\mathrm{d}/\mathrm{d}t)^{2} \beta_{a}/A_{a}\) for \(a \in \{E,I\}\). Here A is an external input. When this is a constant we obtain the bifurcation diagram shown in Fig. 6. Oscillations emerge via Hopf bifurcations and it is possible for a pair of stable periodic orbits to coexist. One of these has a frequency in the alpha band and the other is characterised by a lower frequency and higher amplitude. Recently a network of such modules, operating in the alpha range and with additive noise, has been used to investigate mechanisms of cross-frequency coupling between brain areas [68]. Neural mass models have also previously been used to model brain resonance phenomena [69], for modelling of epileptic seizures [70–72], and they are very popular in the neuroimaging community for model driven EEG/fMRI fusion [73].

Fig. 6

Bifurcation diagram on varying the input A for the Jansen–Rit model with \(\beta_{E}=100\mbox{ s}^{-1}\), \(\beta_{I}=50\mbox{ s}^{-1}\), \(A_{E}=3.25\mbox{ mV}\), \(A_{I}=22\mbox{ mV}\), \(\nu= 5\mbox{ s}^{-1}\), \(v_{0}=6\mbox{ mV}\), \(r=0.56\mbox{ mV}^{-1}\), \(C_{1}=135\), \(C_{2}=0.8 C_{1}\), \(C_{3}=0.25 C_{1}= C_{4}\). Solid red (black) lines represent stable (unstable) fixed points. Green (blue) points denote the amplitude of stable (unstable) periodic orbits that emerge via Hopf bifurcations. Note that a SNIC bifurcation occurs at \(A \simeq110\mbox{ Hz}\). The inset shows the coexistence of two stable periodic orbits at \(A=125\mbox{ Hz}\)
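For simulation each operator \(Q_{a}\) can be expanded into first-order form: \(Q_{a}s=u\) is equivalent to \(\ddot{s} = A_{a}\beta_{a}u - 2\beta_{a}\dot{s} - \beta_{a}^{2}s\). The sketch below integrates the resulting six-dimensional Jansen–Rit system with the parameters of Fig. 6; the input \(A=125\) is taken from the bistable regime shown in the inset, while the initial condition is an illustrative assumption.

import numpy as np
from scipy.integrate import solve_ivp

beta_E, beta_I = 100.0, 50.0       # 1/s
A_E, A_I = 3.25, 22.0              # mV
nu, v0, r = 5.0, 6.0, 0.56
C1 = 135.0
C2, C3, C4 = 0.8 * C1, 0.25 * C1, 0.25 * C1

def f(v):                          # sigmoidal firing rate
    return nu / (1.0 + np.exp(-r * (v - v0)))

def jansen_rit(t, y, A):
    sP, dsP, sE, dsE, sI, dsI = y
    return [dsP,
            A_E * beta_E * f(sE - sI) - 2.0 * beta_E * dsP - beta_E**2 * sP,
            dsE,
            A_E * beta_E * (C2 * f(C1 * sP) + A) - 2.0 * beta_E * dsE - beta_E**2 * sE,
            dsI,
            A_I * beta_I * C4 * f(C3 * sP) - 2.0 * beta_I * dsI - beta_I**2 * sI]

sol = solve_ivp(jansen_rit, (0.0, 5.0), np.zeros(6), args=(125.0,), max_step=1e-3)
v = sol.y[2] - sol.y[4]            # EEG-like output s_E - s_I
print("output range (mV):", v.min(), v.max())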

Now that we have introduced some oscillator models for neurons and neural populations it is appropriate to consider the set of tools for analysing their behaviour at the network level.

3 Dynamical Systems Approaches to Collective Behaviour

We give a brief overview of some dynamical systems approaches, concepts and techniques that can be used to understand collective behaviour that spontaneously appears in coupled dynamical system models used for neuroscience modelling. We do not give a complete review of this area but try to highlight some of the approaches and how they interact; some examples of applications of these to neural systems are given in later chapters.

In the artificial neural network literature, a distinction is made between recurrent and feedforward networks; see for example [74]. A feedforward network is one that is coupled but contains no feedback loops—i.e. there are no directed loops, while a recurrent network does contain feedback loops. We note that the methodologies discussed in this section (including constraints from symmetries and groupoid structures) may be applied to networks regardless of whether they are feedforward or recurrent. In this review we will later mostly discuss examples that are recurrent, though there are many interesting and relevant questions for feedforward networks as these often appear as models for “input–output” processes in neural systems.

3.1 Synchrony and Asynchrony

A set of N coupled nonlinear systems represented by ODEs can be written

$$ \frac{\mathrm{d}}{\mathrm{d}t}x_{i} = F_{i}(x_{i};x_{1}, \ldots,x_{N}) , $$
(6)

where \(x_{i}\in \mathbb {R}^{d}\) represents the state of the ith system, whose evolution is affected both by its own current state and by the states of those systems coupled to it. In the mathematics literature, the \(x_{i}\) are often called “cells”, though we note that the \(x_{i}\) may include degrees of freedom in the coupling as well as variables such as membrane potential that reflect the state of the “biological cell”. Note there is potential for notational confusion here: to clarify this we write

$$x\in \mathbb {R}^{Nd},\qquad x_{i}\in \mathbb {R}^{d},\qquad x_{i}^{(j)}\in \mathbb {R}. $$

One of the most important observations concerning the collective dynamics of coupled nonlinear systems relates to whether the collection behaves as one or not—whether there is an attracting synchronous state, or whether more complex spatio-temporal patterns such as generalised synchrony (also called clustering) appear. There is a very large literature on synchrony, even restricting to its applications, and we cannot hope to do the whole area justice here. We refer in particular to [75, 76]. Various applications of synchrony of neural models are discussed, for example, in [77–85], while there is a large literature (e.g. [17]) discussing the role of synchrony in neural function. Other work looks for example at synchronisation of groups of networks [86], and indeed synchrony can be measured experimentally [87] in groups of neurons using dynamic patch clamping.

We discuss some of the types of behaviour that can emerge in the collective dynamics and the response of partially synchronised states to external forcing. In many cases a system of N coupled dynamical systems can be written in the form

$$ \frac{\mathrm{d}}{\mathrm{d}t}x_{i} = f_{i}(x_{i})+\epsilon g_{i}(x_{i};x_{1},\ldots ,x_{N}) , $$
(7)

where each system is parametrised by \(x_{i}\in \mathbb {R}^{d}\), with intrinsic dynamics determined by \(f_{i}\); \(\epsilon=0\) corresponds to decoupling the systems and the functions \(g_{i}\) represent drive from other systems on the ith system. Many approaches start with the less general case of additive interactions

$$ \frac{\mathrm{d}}{\mathrm{d}t} x_{i} = f_{i}(x_{i})+ \epsilon\sum_{j=1}^{N} w_{ij} G_{ij}(x_{i},x_{j}), $$
(8)

which can be justified for systems where the collective interaction between systems can be broken into “pairwise” interactions \(G_{ij}\) that are summed according to some linear weights \(w_{ij}\) (some of which may be zero) that represent the strengths of the couplings, with ϵ the overall coupling strength of the network. Note that there is clearly no unique way to write the system in this form; more specifically, one can without loss of generality choose \(\epsilon=1\), \(w_{ij}=1\) by suitable choice of \(G_{ij}\). Nevertheless, it can be useful to be able to, for example, modulate the strength of the coupling across the whole network independently of changing individual coupling strengths. Finally, the special, fully symmetric case \(G_{ij}=G\) is of particular interest as an example where there is full connectivity.
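A direct implementation of (8) keeps the intrinsic dynamics f and the pairwise interaction G as interchangeable components. In the sketch below f is a FitzHugh–Nagumo-type unit and G is linear diffusive coupling in the first (voltage-like) component; these choices, together with the all-to-all weights, are purely illustrative.

import numpy as np
from scipy.integrate import solve_ivp

def f(x):                          # intrinsic dynamics of one cell, x = (v, w)
    v, w = x
    return np.array([v - v**3 / 3.0 - w + 0.5,
                     0.08 * (v + 0.7 - 0.8 * w)])

def G(xi, xj):                     # pairwise interaction G(x_i, x_j)
    return np.array([xj[0] - xi[0], 0.0])

def network(t, y, w_mat, eps):
    N = len(w_mat)
    x = y.reshape(N, -1)           # x[i] is the state of cell i
    dx = [f(x[i]) + eps * sum(w_mat[i, j] * G(x[i], x[j]) for j in range(N))
          for i in range(N)]
    return np.concatenate(dx)

N = 4
w_mat = np.ones((N, N)) - np.eye(N)    # all-to-all, w_ij = 1, no self-coupling
y0 = np.random.default_rng(0).uniform(-1.0, 1.0, 2 * N)
sol = solve_ivp(network, (0.0, 200.0), y0, args=(w_mat, 0.1), max_step=0.1)
print("spread of voltages at t = 200:", np.ptp(sol.y[::2, -1]))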

3.2 Clusters, Exact and Generalised Synchrony

If one has a notion of synchrony between the systems of (7), it is possible to discuss certain generalised forms of synchrony, including clustering according to mutual synchrony. Caution needs to be exercised whenever discussing synchrony—there are many distinct notions of synchrony that may be appropriate in different contexts and, in particular, synchrony is typically a property of a particular solution at a particular point in time rather than a property of the system as a whole.

An important case of synchrony is exact synchrony: we say \(x_{i}(t)\) and \(x_{j}(t)\) are exactly synchronised if \(x_{i}(t)=x_{j}(t)\) for all t. Generalised synchrony is, as the name suggests, much more general and corresponds to there simply being a functional relationship of the form \(x_{i}(t)=F(x_{j}(t))\). Another related notion is that of clustering, where different groups of oscillators are exactly synchronised but there is no exact synchrony between the groups. For oscillators, phases can be used to define additional notions such as phase and frequency synchrony: see Sect. 6.1.

Although we focus mostly on individual units with simple (periodic) dynamics, if the units have more complex dynamics (such as “chaotic oscillators”) one can understand synchrony of the cells by analysis of the linearised differences between oscillators, and there is a sizeable literature on this; see the review [88], or [89] for clusters in a system of globally coupled bistable oscillators. In the case of two linearly coupled identical chaotic oscillators

$$\frac{\mathrm{d}}{\mathrm{d}t} x_{1} = f(x_{1})+\epsilon (x_{2}-x_{1}),\qquad \frac{\mathrm{d}}{\mathrm{d}t} x_{2} = f(x_{2})+\epsilon(x_{1}-x_{2}) , $$

where \((x_{1},x_{2})\in \mathbb {R}^{2d}\), if the individual oscillator \({\mathrm {d}x}/{\mathrm {d}t}=f(x)\) has a chaotic attractor \(A\subset \mathbb {R}^{d}\) then the coupled system will have an attracting, exactly synchronised invariant set \(\tilde{A}=\{(x,x) : x\in A\}\) only if the coupling ϵ is sufficiently large in relation to the maximal Lyapunov exponent of the synchronous state [88].
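This threshold is easy to probe numerically: for the all-component coupling above, the difference \(e=x_{1}-x_{2}\) obeys \(\dot{e} = (Df(x)-2\epsilon \mathbb {I})e\) along the synchronous trajectory, so synchrony is expected once ϵ exceeds half the maximal Lyapunov exponent. The sketch below uses two Lorenz systems (\(\lambda_{\max}\approx0.9\) at the standard parameters) as illustrative chaotic units.

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(x):                     # standard chaotic Lorenz vector field
    return np.array([10.0 * (x[1] - x[0]),
                     x[0] * (28.0 - x[2]) - x[1],
                     x[0] * x[1] - (8.0 / 3.0) * x[2]])

def coupled(t, y, eps):
    x1, x2 = y[:3], y[3:]
    return np.concatenate([lorenz(x1) + eps * (x2 - x1),
                           lorenz(x2) + eps * (x1 - x2)])

for eps in [0.1, 1.0]:             # below and above the expected threshold ~0.45
    sol = solve_ivp(coupled, (0.0, 100.0), [1.0, 1.0, 1.0, 1.1, 0.9, 1.0],
                    args=(eps,), max_step=0.01)
    err = np.linalg.norm(sol.y[:3, -1] - sol.y[3:, -1])
    print(f"eps = {eps}: |x1 - x2| at t = 100 is {err:.2e}")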

3.3 Networks, Motifs and Coupled Cell Dynamics

We focus now on the dynamics of pairwise coupled networks such as (8) as this form is assumed in most cases. Under the additional assumption that the coupling between the oscillators is of the same type and either present or absent, one can consider uniformly weighted coupling of the form

$$ w_{ij}= g A_{ij} , $$

where g is fixed and \(A_{ij}\) is an adjacency matrix of the graph of interactions, i.e. \(A_{ij}=1\) if there is a link from j to i and 0 otherwise, or more generally

$$ w_{ij}= g_{ij} A_{ij} , $$

where \(g_{ij}>0\) represents the strength of coupling and \(A_{ij}\) the adjacency matrix. It is clear that the coupling structure as represented in \(A_{ij}\) will influence the possible dynamics on the network and to make further progress it is useful to restrict to particular network structures. Some important literature on network structures is reviewed for example in [90], while [75] reviews work on synchrony in complex networks up to the time of its publication; recent work includes for example [91]. For a recent review of the application of complex networks to neurological disorders in the brain, see [92].

It is interesting to try to understand the effect of network structure on synchrony, so we briefly outline some basic graph theoretic measures of network structure. The in-degree of node i is the number of incoming connections (i.e. \(d_{\mathrm{in}}(i)=\sum_{j} A_{ij}\)), while the out-degree is the number of outgoing connections (i.e. \(d_{\mathrm{out}}(i)=\sum_{j} A_{ji}\)), and the distribution of these degrees is often used to characterise a large graph. A scale-free network is a large network where the distribution of in- (or out-) degrees scales as a power of the degree. This can be contrasted with highly structured homogeneous networks (for example on a lattice) where the degree may be the same at each node. Other properties commonly examined include the clustering properties and path lengths within the graph. There are also various measures of centrality that help one to determine the most important nodes in a graph—for example, the betweenness centrality of a node is the probability that it lies on the shortest path between two uniformly randomly chosen nodes [90]. As expected, the more central nodes are typically most important if one wishes to achieve synchrony in a network.
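The sketch below computes these degree and centrality measures for a random directed graph, following the convention above that \(A_{ij}=1\) denotes a link from j to i; the graph size, the connection probability and the use of the networkx library are illustrative choices.

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
N = 50
A = (rng.random((N, N)) < 0.1).astype(int)   # A[i, j] = 1: link from j to i
np.fill_diagonal(A, 0)                       # no self-connections

d_in = A.sum(axis=1)                         # d_in(i)  = sum_j A_ij
d_out = A.sum(axis=0)                        # d_out(i) = sum_j A_ji
print("mean in/out degree:", d_in.mean(), d_out.mean())

# networkx edges (u, v) mean u -> v, so transpose our j -> i convention
G = nx.from_numpy_array(A.T, create_using=nx.DiGraph)
bc = nx.betweenness_centrality(G)
print("most central node:", max(bc, key=bc.get))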

Other basic topological properties of networks that are relevant to their dynamics include, for example, the following, most of which are mentioned in [75, 90]: The network is undirected if \(A_{ij}=A_{ji}\) for all i, j, otherwise it is directed. We say nodes j and i in the network \(A_{ij}\) are path-connected if for some n there is a path from j to i, i.e. \((A^{n})_{ij}\neq0\) for some n. The network is strongly connected if each pair i, j is path-connected in both directions, while it is weakly connected if the network obtained by replacing \(A_{ij}\) with \(\max (A_{ij},A_{ji})\) (i.e. making the network undirected) is strongly connected. In the terminology of artificial neural networks, a strongly connected network is recurrent while one that is not strongly connected must have some feedforward connections between groups of nodes. There is a potential source of confusion in that strong and weak connectivity are properties of a directed network—while strong and weak coupling are properties of the coupling strengths for a given network.

The diameter of a network is the maximal length of a shortest path between two points on varying the endpoints. Other properties of the adjacency matrix are discussed for example in [93] where spectral properties of graph Laplacians are linked to the problem of determining stability of synchronised states. Other work we mention is that of Pecora et al. [94, 95] on synchronisation in coupled oscillator arrays (and see Sect. 4.1), while [96] explores the recurrent appearance of synchrony in networks of pulse-coupled oscillators (and see Sect. 4.2).

Finally, we mention network motifs—these are subgraphs that are “more prevalent” than others within some class of graphs. More precisely, given a network one can look at the frequency with which a small subgraph appears relative to some standard class of graphs (for example Erdös–Rényi random graphs) and if a certain subgraph appears more often than expected, this characterises an important property of the graph [97]. Such analysis has been used in systems biology (such as transcription or protein interaction networks) and has been applied to study the structure in neural systems (see for example [98, 99]) and the implications of this for the dynamics. They have also been used to organise the analysis of the dynamics of small assemblies of coupled cells; see for example [100, 101].

3.4 Weak and Strong Coupling

Continuing with systems of the form (7) or (8), if the coupling parameter ϵ is, in some sense, small, we refer to the system as “weakly coupled”. Mathematically, the weak-coupling approximation is very helpful because it allows one to use various types of perturbation theory to investigate the dynamics [43]. For coupling of limit-cycle oscillators it allows one to greatly reduce the dimension of phase space. Nonetheless, many dynamical effects (e.g. “oscillator death” where the oscillations in one or more oscillators are completely suppressed by the action of the network [102]) cannot occur in the weak-coupling limit, and, moreover, real biological systems often have “strong coupling”. We will return to this topic to discuss oscillator behaviour in Sect. 4.3. One can sometimes use additional structure such as weak dissipation and weak coupling of the oscillators to perform a semi-analytic reduction to phase oscillators; see for example [103, 104].

3.5 Synchrony, Dynamics and Time Delay

An area of intense interest is the role of time delay in the collective dynamics of coupled systems. In the neural context it is natural to include propagation delay between neurons explicitly, for example in models such as

$$ \frac{\mathrm{d}}{\mathrm{d}t}x_{i}(t) = f_{i}\bigl(x_{i}(t) \bigr)+ \epsilon\sum_{j=1}^{N} w_{ij} G_{ij}\bigl(x_{i}(t),x_{j}(t- \tau)\bigr) , $$

where the delay time \(\tau>0\) represents the time of propagation of the signal from one neuron to another. This presents a number of serious mathematical challenges, not least due to the fact that delayed dynamical systems are infinite-dimensional: one must specify the initial condition over a time interval \(t\in[t_{0}-\tau,t_{0}]\) in order to have any chance of uniquely defining the future dynamics; this means that for a state to be linearly stable, an infinite number of eigenvalues need to have real part less than zero.
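Numerically, the infinite-dimensional history can be handled by storing the solution over the most recent delay interval. The sketch below applies a simple Euler scheme with a history buffer to two delay-coupled scalar units; the intrinsic dynamics, delay and coupling strength are all illustrative assumptions (for serious analysis, dedicated tools such as those discussed below are preferable).

import numpy as np

tau, dt, eps = 1.0, 0.001, 0.3
lag = int(tau / dt)                # number of stored history steps

def f(x):                          # illustrative bistable intrinsic dynamics
    return -x + np.tanh(3.0 * x)

T, n = 50.0, int(50.0 / dt)
x = np.zeros((n + lag, 2))
x[:lag] = [0.1, -0.2]              # initial condition over [t0 - tau, t0]
for k in range(lag, n + lag - 1):
    delayed = x[k - lag]           # x_j(t - tau), read from the buffer
    x[k + 1, 0] = x[k, 0] + dt * (f(x[k, 0]) + eps * (delayed[1] - x[k, 0]))
    x[k + 1, 1] = x[k, 1] + dt * (f(x[k, 1]) + eps * (delayed[0] - x[k, 1]))
print("final states:", x[-1])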

Nonetheless, much can be learned about stability, control and bifurcation of dynamically synchronous states in the presence of delay; see for example [84, 105–108], and the volume [109], which includes a number of contributions by authors working in this area. There are also well-developed numerical tools such as DDE-BIFTOOL [110, 111] that allow continuation, stability and bifurcation analysis of coupled systems with delays. For an application of these techniques to the study of a Wilson–Cowan neural population model with two delays we refer the reader to [112].

3.6 A Short Introduction to Symmetric Dynamics

Although no system is ever truly symmetric, in practice many models have a high degree of symmetry. Indeed many real-world networks that have grown (e.g. giving rise to tree-like structures) are expected to be well approximated by models that have large symmetry groups [113].

Symmetric (more precisely, equivariant) dynamics provides a number of powerful mathematical tools that one can use to understand emergent properties of systems of the form

$$ \frac{\mathrm{d}}{\mathrm{d}t} {x}=f(x) , $$
(9)

with \(x\in \mathbb {R}^{N}\). We say (9) is equivariant under the action of a group Γ if and only if \(f(gx)=gf(x)\) for any \(g\in \varGamma\) and \(x\in \mathbb {R}^{N}\). There is a well-developed theory of dynamics with symmetry; in particular see [114–116]. These give methods that help in a number of ways:

  • Description: one can identify symmetries of networks and dynamic states to help classify and differentiate between them.

  • Bifurcation: there is a well-developed theory of bifurcation with symmetry to help understand the emergence of dynamically interesting (symmetry broken) states from higher symmetry states.

  • Stability: bifurcation with symmetry often gives predictions about possible bifurcation scenarios that includes information as regards stability.

  • Generic dynamics: symmetries and invariant subspaces can provide a powerful structure with which one can understand more complex attractors such as heteroclinic cycles.

  • Design: one can use symmetries to systematically build models and test hypotheses.

The types of symmetries that are often most relevant for mathematical modelling of finite networks of neurons are the permutation groups, i.e. the symmetric groups and their subgroups. Nonetheless, continuum models of neural systems may have continuous symmetries that influence the dynamics and can be used as a tool to understand the dynamics; see for example [117].

3.7 Permutation Symmetries and Oscillator Networks

We review some aspects of the equivariant dynamics that have proven useful in coupled systems that are relevant to neural dynamics—see for example [118, 119]. In doing so we mostly discuss dynamics that respects some symmetry group of permutations of the systems. The full permutation symmetry group (or simply, the symmetric group) on N objects, \(S_{N}\), is defined to be the set of all possible permutations of N objects. Formally it is the set of permutations \(\sigma:\{1,\ldots,N\}\rightarrow\{1,\ldots,N\}\) (invertible maps of this set). To determine the effects of the symmetry, not only must the group be known but also its action on phase space. If this action is linear then it is a representation of the group. The representation of the symmetry group is critical to the structure of the stability, bifurcations and generic dynamics that are equivariant with the symmetry.

For example, if each system is characterised by a single real variable, one can view the action of the permutations on \(\mathbb {R}^{N}\) as a permutation matrix

$$[M_{\sigma} ]_{ij} = \textstyle\begin{cases} 1 & \mbox{if }i=\sigma(j),\\ 0 & \mbox{otherwise,} \end{cases} $$

for each \(\sigma\in\varGamma\); note that \(M_{\sigma}M_{\rho }=M_{\sigma\rho }\) for any \(\sigma,\rho\in\varGamma\). Table 1 lists some commonly considered examples of symmetry groups used in coupled oscillator network models.

Table 1 Some permutation symmetry groups that have been considered as examples of symmetries of coupled oscillator networks
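The sketch below constructs the matrices \(M_{\sigma}\) for the natural action of \(S_{3}\) (the case \(d=1\)) and checks the representation property \(M_{\sigma}M_{\rho}=M_{\sigma\rho}\); the particular permutations are chosen purely for illustration.

import numpy as np

def perm_matrix(sigma):
    # [M_sigma]_ij = 1 iff i = sigma(j); sigma given as a tuple of images
    N = len(sigma)
    M = np.zeros((N, N), dtype=int)
    for j in range(N):
        M[sigma[j], j] = 1
    return M

sigma = (1, 2, 0)                  # a 3-cycle
rho = (1, 0, 2)                    # transposition of cells 0 and 1
compose = tuple(sigma[rho[j]] for j in range(3))   # (sigma o rho)(j)
assert np.array_equal(perm_matrix(sigma) @ perm_matrix(rho),
                      perm_matrix(compose))
print("M_sigma =\n", perm_matrix(sigma))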

More generally, for (7) equivariance under the action of Γ means that for all \(\sigma\in\varGamma\), \(x\in \mathbb {R}^{Nd}\) and \(i=1,\ldots,N\) we have

$$f_{\sigma(i)}(x_{\sigma(i)})+\epsilon g_{\sigma(i)}(x_{\sigma (i)};x_{1}, \ldots,x_{N})=f_{i}(x_{\sigma(i)})+\epsilon g_{i}(x_{\sigma (i)};x_{\sigma(1)},\ldots,x_{\sigma(N)}). $$

A simple consequence of this is: if Γ acts transitively on \(\{1,\ldots,N\}\) (i.e. if for any i and j there is a \(\sigma\in \varGamma\) such that \(\sigma(j)=i\)) then all oscillators are identical, i.e. \(f_{i}(x_{i})=F(x_{i})\) for some function F.

The presence of symmetries means that solutions can be grouped together into families—given any x, the set \(\varGamma x:= \{ gx : g\in\varGamma\}\) is the group orbit of x, and all points on this group orbit behave in the same way dynamically.

3.8 Invariant Subspaces, Solutions and Symmetries

Given a point \(x\in \mathbb {R}^{N}\) (for simplicity we consider the case \(d=1\) below) we define the isotropy subgroup (or simply the symmetry) of x under the action of Γ on \(\mathbb {R}^{N}\) to be

$$\varSigma_{x}:=\{ g\in\varGamma: gx=x\}. $$

This is a subgroup of Γ, and the set of these groups forms a lattice (the isotropy lattice) by inclusion of subgroups. They are dynamically important in that for any trajectory \(x(t)\) we have \(\varSigma_{x(0)}=\varSigma_{x(t)}\) for all t. A converse problem is to characterise the set of all points with a certain symmetry. If H is a subgroup (or more generally, a subset) of Γ then the fixed-point space of H is defined to be

$$\operatorname {Fix}(H):=\bigl\{ x\in \mathbb {R}^{N} : gx=x \mbox{ for all }g\in H\bigr\} . $$

Because all \(x\in \operatorname {Fix}(H)\) have symmetry H these subspaces are dynamically invariant; see Fig. 7, which illustrates this principle. Typical points \(x\in \operatorname {Fix}(H)\) have \(\varSigma_{x}=H\); however, some points may have more symmetry. More precisely, if \(H\subset K\) are isotropy subgroups then \(\operatorname {Fix}(H)\supset \operatorname {Fix}(K)\), and the partial ordering of the isotropy subgroups corresponds to a partial ordering of the fixed-point subspaces.

Fig. 7

Suppose the six-element group \(\varGamma=\mathbb {D}_{3}\) of symmetries of the equilateral triangle acts on \(\mathbb {R}^{2}\), generated by a rotation \(g_{2}\) and a reflection \(g_{1}\) in the x-axis. The group orbit of the point u that is not fixed by any symmetries also has six elements (shown by filled circles), while any group orbit of a point v that is fixed by a symmetry (e.g. \(g_{1}\)) has correspondingly fewer points (shown by open circles) in the group orbit. Bifurcation of equilibria with more symmetry typically leads to several equilibria with less (“broken”) symmetry
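Fixed-point subspaces can also be computed directly: \(x\in \operatorname {Fix}(H)\) exactly when \((M_{g}-\mathbb {I})x=0\) for a set of generators g of H. The sketch below does this for two subgroups of \(S_{3}\) acting on \(\mathbb {R}^{3}\), reusing the perm_matrix helper from the previous sketch; the choice of subgroups is illustrative.

import numpy as np
from scipy.linalg import null_space

def fix(generators, N):
    # orthonormal basis for Fix(H), H generated by the given permutations
    blocks = [perm_matrix(g) - np.eye(N) for g in generators]
    return null_space(np.vstack(blocks))

# H = S_3 (generated by a 3-cycle and a swap): Fix is the synchrony
# subspace x_1 = x_2 = x_3
print(fix([(1, 2, 0), (1, 0, 2)], 3).round(3))
# H generated by the swap of cells 0 and 1 alone: the cluster state x_1 = x_2
print(fix([(1, 0, 2)], 3).round(3))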

One can identify similar types of isotropy subgroup as those that are conjugate in the group, i.e. we say \(H_{1}\) and \(H_{2}\) are conjugate subgroups of Γ if there is a \(g\in\varGamma\) such that \(gH_{1}=H_{2}g\). If this is the case, note that

$$\begin{aligned} g\operatorname {Fix}(H_{1}) =& g\bigl\{ x\in \mathbb {R}^{N} : hx=x \mbox{ for all }h\in H_{1}\bigr\} \\ =& \bigl\{ x\in \mathbb {R}^{N} : hg^{-1}x=g^{-1}x \mbox{ for all }h\in H_{1}\bigr\} \\ =& \bigl\{ x\in \mathbb {R}^{N} : ghg^{-1}x=x \mbox{ for all }h\in H_{1}\bigr\} \\ =& \bigl\{ x\in \mathbb {R}^{N} : hx=x \mbox{ for all }h\in H_{2} \bigr\} = \operatorname {Fix}(H_{2}) , \end{aligned}$$

meaning that the fixed-point spaces of conjugate subgroups (and the dynamics on them) are in some sense equivalent.

Identifying symmetries up to conjugacy allows for a considerable reduction of the number of cases one needs to consider; note that conjugate subgroups must have fixed-point subspaces of the same dimension where essentially the same dynamics will occur.

The fixed-point subspaces are often used (implicitly or explicitly) to enable one to reduce the dimension of the system and thus to make it more tractable. As an example, to determine the existence of an exactly synchronised solution one only needs to suppose \(x_{i}(t)=x(t)\) and determine whether there is such a solution \(x(t)\) for the system (7).

For periodic orbits, the relationship between the symmetries of points on the orbit and the symmetries of the orbit as an invariant set is as follows. Suppose P is a periodic orbit (which can be viewed as a “loop” in phase space \(\mathbb {R}^{N}\)). Let K denote the symmetries that fix all points on P (the “spatial symmetries”) and H denote the symmetries that fix P as a set (the “spatio-temporal symmetries”); note that K will be a subgroup of H. Finally, let

$$L_{K} := \bigcup_{g\notin K} \operatorname {Fix}\bigl(\{g\} \bigr)\cap \operatorname {Fix}(K), $$

be the set of points in phase space that have strictly more symmetry than K.

Theorem 3.1

(Theorem 3.4 in [116])

Consider ODEs on \(\mathbb {R}^{N}\) with a given finite symmetry group Γ. There is a periodic orbit P with spatial symmetries K and spatio-temporal symmetries H if and only if all of the following hold:

  • \(H/K\) is cyclic,

  • K is an isotropy subgroup,

  • \(\dim \operatorname {Fix}(K)\geq2\),

  • H fixes a connected component of \(\operatorname {Fix}(K)\setminus L_{K}\), where \(L_{K}\) is defined as above.

One way of saying this is that the only possible spatio-temporal symmetries of periodic orbits are cyclic extensions of isotropy subgroups. Further theory, outlined in [116], shows that one can characterise possible symmetries of chaotic attractors; these may include a much wider range of spatio-temporal symmetries \((H,K)\) including some that do not satisfy the hypotheses of Theorem 3.1. This means that the symmetries of attractors may contain dynamical information about the attractor.

3.9 Symmetries and Linearisation

The presence of symmetries for an ODE will clearly constrain the system in many ways; one of the most important of these being the effects on linear stabilities of solutions. For a given equivariant ODE of the form (9), suppose we have an equilibrium \(x\in \mathbb {R}^{N}\) with isotropy subgroup \(\varSigma_{x}\) under the action of Γ on \(\mathbb {R}^{N}\). Then \(\varSigma_{x}\) constrains the Jacobian \(D f_{x}\) of the ODE because this must commute with the action of \(\varSigma_{x}\); more precisely

$$(g Df_{x})v= (Df_{x}) (gv), $$

for any \(g\in\varSigma_{x}\) and \(v\in \mathbb {R}^{N}\). This means that eigenvalues of the linearisation will have multiplicity greater than one whenever the corresponding eigenspaces are mapped around in a nontrivial way by the action of the isotropy subgroup.

Fortunately, there is a well-developed theory that enables one to characterise exactly the structure of the Jacobians of such maps—this involves splitting the action into a number of isotypic components according to irreducible representations, the minimal invariant subspaces under the action of the group. We do not have space here to go into this in detail, but refer the reader to [116] and references therein. This characterisation can be extended to nonlinear terms of vector fields, and to more general invariant sets (such as periodic orbits) in addition to equilibria.
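As a concrete illustration of the commuting constraint (a minimal sketch; the choice of group action and random matrix is ours, not taken from [116]), one can group-average a matrix over the permutation action of \(S_{3}\) on \(\mathbb {R}^{3}\) and observe the symmetry-forced eigenvalue multiplicity:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
n = 3

# All six permutation matrices of the symmetric group S_3 acting on R^3.
perms = []
for p in itertools.permutations(range(n)):
    P = np.zeros((n, n))
    P[np.arange(n), p] = 1.0
    perms.append(P)

# Group-averaging a random matrix gives a matrix that commutes with every
# group element, i.e. an admissible Jacobian at a fully symmetric equilibrium.
M = rng.standard_normal((n, n))
J = sum(P @ M @ P.T for P in perms) / len(perms)

for P in perms:
    assert np.allclose(P @ J, J @ P)   # equivariance: P J = J P

print(np.round(np.linalg.eigvals(J), 6))
# One simple eigenvalue (eigenvector (1,1,1): the trivial isotypic
# component) and one eigenvalue of multiplicity two carried by the standard
# two-dimensional irreducible representation; the double eigenvalue is
# forced by the symmetry, whatever random matrix M was chosen.
```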

3.10 Bifurcations with Symmetry and Genericity

Bifurcations of ODEs can be classified and analysed by codimension according to methods for example in texts [45, 123]. This involves the following steps:

  (a) Identification of the marginally unstable modes (the directions that are losing stability: for equilibria, this corresponds to the eigenspace of the Jacobian where the eigenvalues have zero real part).

  (b) Reduction to a centre manifold parametrised by the marginally unstable modes (generically this is one- or two-dimensional when only one parameter is varied).

  (c) Study of the dynamics of the normal form for the bifurcation under generic assumptions on the normal form coefficients.

Indeed, the only generic codimension-one local bifurcations (i.e. the only one-parameter bifurcations of equilibria that will not split into a number of simpler bifurcations for some small perturbation on the system) are the saddle-node (also called fold, or limit point) and the Hopf (also called Andronov–Hopf) bifurcation. Additional more complicated bifurcations can appear generically at higher codimension. This classification by codimension has enabled development of a powerful set of numerical tools to help the analysis of such systems, not just for local bifurcations of equilibria but also some global bifurcations (in particular, periodic orbit and homoclinic bifurcations). Particular packages to do this include AUTO [124], MatCont [125], CONTENT [126] and XPPAUT [127] (which includes an implementation of AUTO).
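The packages above implement far more robust versions of the following basic idea. As a toy sketch (with hypothetical step sizes, and the saddle-node normal form \(f(x,\lambda)=\lambda-x^{2}\) as test problem), pseudo-arclength continuation combines a tangent predictor with a Newton corrector on an augmented system, which allows a branch of equilibria to be followed through a fold:

```python
import numpy as np

# Toy pseudo-arclength continuation for the fold normal form
# f(x, lam) = lam - x**2; the equilibrium branch x**2 = lam turns around
# at the saddle-node (lam, x) = (0, 0), where continuation in lam alone fails.

def f(x, lam):
    return lam - x**2

def fx(x, lam):
    return -2.0 * x

def flam(x, lam):
    return 1.0

def continue_branch(x0, lam0, ds=0.05, steps=80):
    u = np.array([x0, lam0])
    t = np.array([-flam(*u), fx(*u)])          # tangent: orthogonal to grad f
    t /= np.linalg.norm(t)
    branch = [u.copy()]
    for _ in range(steps):
        v = u + ds * t                         # predictor step
        for _ in range(20):                    # Newton corrector on the
            F = np.array([f(*v),               # augmented system with an
                          t @ (v - u) - ds])   # arclength constraint
            J = np.array([[fx(*v), flam(*v)], t])
            v = v - np.linalg.solve(J, F)
            if abs(f(*v)) < 1e-12:
                break
        if fx(*u) * fx(*v) < 0:                # df/dx changes sign: a fold
            print(f"fold detected near lam = {v[1]:.4f}")
        t = (v - u) / np.linalg.norm(v - u)    # secant tangent update
        u = v.copy()
        branch.append(u.copy())
    return np.array(branch)

branch = continue_branch(x0=1.0, lam0=1.0)
print(branch[::20])   # (x, lam) pairs tracing the branch through the fold
```

Naive continuation in λ alone would fail at the fold, where \(\partial f/\partial x=0\); the arclength constraint keeps the augmented Jacobian nonsingular there.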

If we restrict to symmetry preserving perturbations, a much wider range of bifurcations can appear at low codimension—this is because, as described above, the symmetry can cause a marginal mode of instability in one direction to appear simultaneously in many other directions meaning that (a), (b) and (c) above must be replaced by

  (a′) Identification of the marginally unstable modes (as discussed in Sect. 3.9, symmetry means there can generally be several of these that will become unstable at the same time).

  (b′) Reduction to a centre manifold parametrised by the marginally unstable modes (these are preserved by the action of the symmetries and may be of dimension greater than two even for one-parameter bifurcations).

  (c′) Study of the dynamics of the normal form for the symmetric bifurcation under generic assumptions on the normal form coefficients (the symmetries mean that some coefficients may be zero, some are constrained to be equal while others may be forced to satisfy nontrivial and sometimes obscure algebraic relationships).

These factors conspire to make symmetric bifurcations rich and interesting in behaviour—even in codimension one it is possible for heteroclinic cycles or chaos to bifurcate directly from high symmetry solutions. However, the same factors mean that many features cannot be caught by numerical path-following packages such as those listed above—the degeneracies mean that many branches may emanate from the bifurcation; it is generally a challenge to identify all of these. Essentially, bifurcation theory needs to be developed in the context of the particular group action. Examples of some consequences of this for weakly coupled oscillator networks with symmetries are considered in Sect. 6.

3.11 Robust Heteroclinic Attractors, Cycles and Networks

The presence of symmetries in a dynamical system can cause highly nontrivial dynamics even away from bifurcation points. Of particular interest are robust invariant sets that consist of networks of equilibria (or periodic orbits, or more general invariant sets) connected via heteroclinic connections that are preserved under small enough perturbations that respect the symmetries [128]. These structures may be cycles or, more generally, networks. They can be robust to perturbations that preserve the symmetries, and indeed they can be attracting [116, 129]. We are particularly interested in the attracting case, in which we call these invariant sets heteroclinic attractors; trajectories approaching such attractors show a typical intermittent behaviour, with long periods close to the dynamics of an unstable saddle-type invariant set punctuated by rapid switches between different behaviours.

In higher-dimensional systems, heteroclinic attractors may have subtle structures such as “depth two connections” [130], “cycling chaos” where there are connections between chaotic saddles [116, 131, 132] and “winnerless competition” [133, 134]. Related dynamical structures are found in the literature in attractors that show “chaotic itinerancy” or “slow switching”. Such complex attractors can readily appear in neural oscillator models in the presence of symmetries and have been used to model various dynamics that contribute to the function of neural systems; we consider this, along with some examples, in Sect. 7.
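A standard explicit example, simulated below as a sketch (the parameter values are illustrative, chosen so that the cycle attracts), is the Guckenheimer–Holmes system, in which a robust heteroclinic cycle connects symmetry-related equilibria on the coordinate axes and the residence time near each saddle grows as the attractor is approached:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The Guckenheimer--Holmes system: a robust heteroclinic cycle between the
# equilibria (+/-1,0,0), (0,+/-1,0), (0,0,+/-1). With 0 < a < 1 < b and
# a + b > 2 the cycle is attracting.
a, b = 0.8, 1.5

def gh(t, u):
    x, y, z = u
    return [x * (1 - x**2 - a * y**2 - b * z**2),
            y * (1 - y**2 - a * z**2 - b * x**2),
            z * (1 - z**2 - a * x**2 - b * y**2)]

sol = solve_ivp(gh, (0, 300), [0.3, 0.2, 0.1],
                max_step=0.05, rtol=1e-9, atol=1e-12)

# Track which coordinate currently dominates; the residence time near each
# saddle grows roughly geometrically -- the intermittent signature of an
# attracting heteroclinic cycle.
dom = np.argmax(np.abs(sol.y), axis=0)
switch_times = sol.t[1:][np.diff(dom) != 0]
print(np.round(switch_times, 1))
print(np.round(np.diff(switch_times), 1))   # increasing residence times
```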

3.12 Groupoid and Related Formalisms

Some less restrictive structures found in coupled dynamical networks also give rise to many of the features of symmetric networks (including invariant subspaces, bifurcations that appear to be degenerate, and heteroclinic attractors) without the networks necessarily having symmetries.

One approach [135] has been to use a structure of groupoids—these are mathematical structures that satisfy some, but not all, of the axioms of a group and can be useful in understanding the constraints on the dynamics of coupled cell systems of the form (6). A groupoid is similar to a group except that the composition of two elements in a groupoid is not always defined, and the inverse of a groupoid element may only be locally defined. This formalism can be used to describe the permutations of inputs of cells as in [135, 136].

Suppose that we have (6) with cells \(\mathcal {C}=\{1,\ldots,N\}\) and suppose that there are connections \(\mathcal {E}\), i.e. there are pairs of cells \((i,j)\) in \(\mathcal{C}\) such that cell i appears in the argument of the dynamics of cell j. We say

$$I(j)=\bigl\{ i\in\mathcal{C} : (i,j)\in\mathcal{E}\bigr\} , $$

is the input set of cell j and there is a natural equivalence relation \(\sim_{I}\) defined by \(j\sim_{I} k\) if there is a bijection (an input automorphism)

$$\beta:I(j)\rightarrow I(k) , $$

with \(\beta(j)=k\) such that for all \(i\in I(j)\) we have \((i,j)\sim_{E} (\beta(i),k)\). The set \(B(j,k)\) of input automorphisms of this type is defined for each such pair of cells, and the collection of all input automorphisms has the structure of a groupoid [135].

Given a coupling structure of this type, an admissible vector field is a vector field on the product space of all cells that respects the coupling structure, and this generalises the idea of an equivariant vector field in the presence of a symmetry group acting on the set of cells. The dynamical consequences of this have a similar flavour to the consequences one can find in symmetric systems except that fewer cases have been worked out in detail, and there are many open questions.

To illustrate, consider the system of three cells,

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}x_{1} &= g(x_{1},x_{2},x_{3}) , \\ \frac{\mathrm{d}}{\mathrm{d}t}x_{2} &= g(x_{2},x_{1},x_{3}) , \\ \frac{\mathrm{d}}{\mathrm{d}t}x_{3} &= h(x_{3},x_{1}) , \end{aligned}$$

where \(g(x,y,z)=g(x,z,y)\); this is discussed in Sect. 5 in [136] in some detail. This has a coupling structure as shown in Fig. 8. In spite of there being no exact symmetries in the system there is a nontrivial invariant subspace where \(x_{1}=x_{2}\). In the approach of [136] the dynamically invariant subspaces can be understood in terms of the balanced colourings of the graph, in which the cells are grouped in such a way that the inputs are respected; each balanced colouring corresponds to an admissible pattern of synchrony.

Fig. 8

Left: a system of three coupled cells with two cell types (indicated by the circle and square) coupled in a way that there is no permutation symmetry but there is an invariant subspace corresponding to cells 1 and 2 being synchronised. The different line styles show coupling types that can potentially be permuted (after Fig. 3 in [135]). Middle: the quotient two-cell network corresponding to cells 1 and 2 being synchronised. Right: the same network structure shown using the notation of [137]
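The flow-invariance of the polydiagonal \(x_{1}=x_{2}\) is easy to verify in simulation. The sketch below uses illustrative (hypothetical) choices of g and h satisfying only the structural constraint \(g(x,y,z)=g(x,z,y)\); it also illustrates the remark below that an invariant subspace need not contain attracting solutions, since here the polydiagonal is transversely unstable:

```python
import numpy as np
from scipy.integrate import solve_ivp

# A sketch of the three-cell system above with illustrative choices of g
# and h; the structural requirement g(x,y,z) = g(x,z,y) holds.
def g(x, y, z):
    return -x - 2.0 * (np.tanh(y) + np.tanh(z))

def h(x, y):
    return -2.0 * x + np.sin(y)

def rhs(t, u):
    x1, x2, x3 = u
    return [g(x1, x2, x3), g(x2, x1, x3), h(x3, x1)]

# Starting on the polydiagonal x1 = x2: the synchrony subspace is
# flow-invariant despite the absence of any permutation symmetry.
sol = solve_ivp(rhs, (0, 50), [0.7, 0.7, -0.3], rtol=1e-10, atol=1e-12)
print(np.max(np.abs(sol.y[0] - sol.y[1])))    # zero to machine precision

# For this choice of g the polydiagonal is transversely unstable, so a
# generic initial condition leaves any neighbourhood of synchrony:
sol2 = solve_ivp(rhs, (0, 50), [0.7, 0.1, -0.3], rtol=1e-10, atol=1e-12)
print(np.abs(sol2.y[0, -1] - sol2.y[1, -1]))  # order-one desynchronisation
```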

The invariant subspaces that are forced to exist by this form of coupling structure have been called polydiagonals in this formalism; they correspond to clusterings of the states. For every polydiagonal one can associate a quotient network by identifying cells that are synchronised, to give a smaller network. As in the symmetric case, the existence of an invariant subspace does not guarantee that it contains any attracting solutions. Some work has been done to understand generic symmetry breaking bifurcations in such networks—see for example [138], or spatially periodic patterns in lattice networks [139]. Variants of this formalism have been developed to enable different coupling types between the same cells to be included.

Periodic orbits in such networks can also have interesting structures associated with the presence of invariant subspaces. The so-called rigid phase conjecture [136, 140], recently proved in [141], states that if there is a periodic orbit in the network such that two cells have a rigid phase relation between them (i.e. one that is preserved for all small enough structure-preserving perturbations) then this must be forced by a \(\mathbb {Z}_{n}\) symmetry, either of the cells in the network or of some quotient network.

An alternative formalism for discussing asymmetric coupled cell networks has been developed in [137, 142–144] that also allows one to identify invariant subspaces. Each cell has one output and several inputs that may be of different types. These papers concentrate on the questions: (a) When are two coupled cell networks formally equivalent (i.e. when can the dynamics of one network be found in the other, under a suitable choice of cells)? (b) How can one construct larger coupled cell systems with desired properties by “inflating” a smaller system S, such that the larger system has S as a quotient? (c) What robust heteroclinic attractors exist in such systems?

4 Coupled Limit-Cycle Oscillators

In addition to variants on the systems in Sect. 2, we mention that several nonlinear “conceptual” planar limit-cycle oscillators are studied as archetypes of nonlinear oscillators. These include:

$$\begin{aligned} &\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} x+ \bigl(a_{0}+a_{1} x^{2} \bigr)\frac{\mathrm {d}}{\mathrm{d}t}x +\omega^{2}x=0 \quad (\mbox{van der Pol [3]}), \\ &\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}} x+ \bigl(a_{0}+a_{1} x^{2} \bigr)\frac{\mathrm {d}}{\mathrm{d}t}x +\omega^{2}x+a_{2}x^{3}=0 \quad (\mbox{van der Pol--Duffing [145]}), \\ &\frac{\mathrm{d}}{\mathrm{d}t} z = (\lambda+i\omega) z + (a_{0}+ i a_{1}) \vert z\vert ^{2}z \quad (\mbox{Stuart--Landau [5]}), \end{aligned}$$

where x is real, z is complex and the coefficients λ, ω and \(a_{j}\) are real constants.

If the coupling between two or more limit-cycle oscillators is relatively large, it can affect not only the phases but also the amplitudes, and a general theory of strongly interacting oscillators is likely to be no less complicated than a general theory of nonlinear dynamical systems. However, the theory of weak coupling is relatively well developed (see Sect. 5 and Sect. 6), and specific progress for strong coupling can sometimes be made for special choices of neuron model. Examples where one can do this include IF (see Sect. 4.3), piecewise linear models such as McKean [146], caricatures of FHN and ML [147], and singularly perturbed relaxation oscillators with linear [148] or fast threshold modulation coupling [149].

For linear coupling of planar oscillators, much is known about the general case [150, 151]. If the linear coupling is proportional to the difference between two state variables it is referred to as “diffusive”, and otherwise it is called “direct”. The difference between the two cases is most strongly manifest when considering the mechanism of oscillator death (see Sect. 3.4). The diffusive case is more natural in a neuroscience context as it can be used to model electrical gap-junction coupling (which depends on voltage differences). For diffusive coupling the existence of synchronous states in networks of identical units is inherited from the properties of the underlying single neuron model, since the coupling vanishes on the synchronous solution, though the stability of this solution will depend upon the pattern of gap-junction connectivity.

Gap junctions are primarily believed to promote synchrony, though this is not always the case and they can also lead to robust stable asynchronous states [152], as well as “bursting” generated by cyclic transitions between coherent and incoherent network states [153]. For work on gap junctions and their role in determining network dynamics see for example [147, 154158].

4.1 Stability of the Synchronised State for Complex Networks of Identical Systems

There is one technique, specific to the analysis of the synchronous state in a quite large class of network models, that is valid for strongly coupled identical systems, namely the master stability function (MSF) approach. For networks of coupled systems or oscillators with identical components the MSF approach of Pecora and Carroll [159] can be used to determine the stability of the synchronous state in terms of the eigenstructure of the network connectivity matrix. To introduce the MSF formalism it is convenient to consider N nodes (oscillators) and let \(x_{i} \in \mathbb {R}^{d}\) be the d-dimensional vector of dynamical variables of the ith node with isolated (uncoupled) dynamics \({\mathrm {d}x_{i}}/{\mathrm {d}t} = {F}({x}_{i})\), with \(i=1,\ldots, N\). The output of each node is described by a vector function \({H}:\mathbb {R}^{d}\to \mathbb {R}^{d}\) (which is not necessarily linear). For example, for a three-dimensional system with \({x}=(x^{(1)},x^{(2)},x^{(3)})\) and linear coupling only occurring through the \(x^{(3)}\)-component we would set \({H}({x}) = (0,0,x^{(3)})\). For a given adjacency matrix \(A_{ij}\), an associated set of connection strengths \(g_{ij}\) and a global coupling strength σ, the network dynamics of N coupled identical systems, to which the MSF formalism applies, is specified by

$$ \frac{\mathrm{d}}{\mathrm{d}t} {{x}}_{i} = {F}({x}_{i}) + \sigma\sum _{j=1}^{N} A_{ij} g_{ij} \bigl[ {H}({x}_{j})-{H}({x}_{i}) \bigr] \equiv{F}({x}_{i}) - \sigma\sum_{j=1}^{N} \mathcal{G}_{ij} {H}({x}_{j}) . $$

Here the matrix \(\mathcal{G}\) with blocks \(\mathcal{G}_{ij}\) has the graph-Laplacian structure \(\mathcal{G}_{ij} = -A_{ij}g_{ij} + \delta _{ij} \sum_{k}A_{ik} g_{ik}\). The \(N-1\) constraints \({x}_{1}(t)={x}_{2}(t)=\cdots={x}_{N}(t) = s(t)\) define the (invariant) synchronisation manifold, with \({s}(t)\) a solution in \(\mathbb {R}^{d}\) of the uncoupled system, namely \({\mathrm {d}s}/{\mathrm {d}t}={F}({s})\). Although we will not discuss this in detail here, we assume that the asymptotic behaviour of \(s(t)\) is such that averages along trajectories converge, i.e. the behaviour of \(s(t)\) for the uncoupled system is governed by a natural ergodic (Sinai–Ruelle–Bowen) measure for the dynamics.

To assess the stability of this state we perform a linear stability analysis expanding a solution as \({x}_{i}(t) = {s}(t) +\delta{x}_{i}(t)\) to obtain the variational equation

$$ \frac{\mathrm{d}}{\mathrm{d}t} { \delta{x}_{i}} = D {F}({s}) \delta {x}_{i} - \sigma D {H}({s}) \sum_{j=1}^{N} \mathcal{G}_{ij} \delta{x}_{j} . $$

Here \(D {F}({s})\) and \(D {H}({s})\) denote the Jacobian of \({F}({s})\) and \({H}({s})\) around the synchronous solution, respectively. The variational equation has a block form that can be simplified by projecting δx into the eigenspace spanned by the (right) eigenvectors of the matrix \(\mathcal{G}\). This yields a set of N decoupled equations in the block form

$$ \frac{\mathrm{d}}{\mathrm{d}t} {{\xi}_{l}} = \bigl[ D {F}({s}) - \sigma \lambda_{l} D {H}({s}) \bigr] {\xi}_{l},\quad l=1, \ldots,N, $$

where \({\xi}_{l}\) is the lth (right) eigenmode associated with the eigenvalue \(\lambda_{l}\) of \(\mathcal{G}\) (and \(D {F}({s})\) and \(D {H}({s})\) are independent of the block label l). Since \(\sum_{j} \mathcal{G}_{ij}=0\) there is always a zero eigenvalue, say \(\lambda_{1}=0\), with corresponding eigenvector \((1,1,\ldots,1)\), describing a perturbation parallel to the synchronisation manifold. The other \(N-1\) transverse eigenmodes must damp out for synchrony to be stable. For a general matrix \(\mathcal{G}\) the eigenvalues \(\lambda_{l}\) may be complex, which brings us to consideration of the system

$$ \frac{\mathrm{d}}{\mathrm{d}t} {{\xi}} = \bigl[ D {F}({s}) - \alpha D {H}({s}) \bigr] {\xi}, \quad\alpha= \sigma\lambda_{l} \in \mathbb {C}. $$
(10)

For given \(s(t)\), the MSF is defined as the function which maps the complex number α to the greatest Lyapunov exponent of (10). The synchronous state of the system of coupled oscillators is stable if the MSF is negative at \(\alpha=\sigma\lambda_{l}\) where \(\lambda_{l}\) ranges over the eigenvalues of the matrix \(\mathcal{G}\) (excluding \(\lambda_{1}=0\)).
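Numerically, the MSF can be estimated by integrating (10) along a trajectory \(s(t)\) and extracting the largest Lyapunov exponent. The following sketch (an illustration, not taken from [159]; the system, parameters and coupling component are our choices) does this for the chaotic Rössler system with coupling through the first component, \(H(x)=(x^{(1)},0,0)\), restricting to real α:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a master stability function computation for the Roessler system
# with coupling through the x-component, so DH = diag(1, 0, 0). For real
# alpha we integrate (10) along s(t) and estimate the largest Lyapunov
# exponent by repeated renormalisation of the perturbation xi.
a, b, c = 0.2, 0.2, 9.0

def F(s):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def DF(s):
    x, y, z = s
    return np.array([[0.0, -1.0, -1.0],
                     [1.0, a, 0.0],
                     [z, 0.0, x - c]])

DH = np.diag([1.0, 0.0, 0.0])

def msf(alpha, T=500.0, dt=0.5):
    def rhs(t, u):
        s, xi = u[:3], u[3:]
        return np.concatenate([F(s), (DF(s) - alpha * DH) @ xi])
    # discard a transient so that s(t) is close to the chaotic attractor
    s = solve_ivp(lambda t, s: F(s), (0, 100), [1.0, 1.0, 1.0],
                  rtol=1e-9, atol=1e-11).y[:, -1]
    xi = np.ones(3) / np.sqrt(3.0)
    lyap = 0.0
    for _ in range(int(T / dt)):
        u = solve_ivp(rhs, (0, dt), np.concatenate([s, xi]),
                      rtol=1e-9, atol=1e-11).y[:, -1]
        s, xi = u[:3], u[3:]
        norm = np.linalg.norm(xi)
        lyap += np.log(norm)
        xi /= norm
    return lyap / T

for alpha in [0.0, 0.5, 2.0, 5.0]:
    print(alpha, round(msf(alpha), 3))
# msf(0) recovers the positive Lyapunov exponent of the chaotic orbit;
# synchrony is stable when the MSF is negative at every sigma*lambda_l.
```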

For a ring of identical (or near identical) coupled periodic oscillators in which the connections have randomly heterogeneous strength, Restrepo et al. [160] have used the MSF method to determine the possible patterns at the desynchronisation transition that occurs as the coupling strengths are increased. Interestingly, they demonstrate Anderson localisation of the modes of instability, and show that this could organise waves of desynchronisation that would spread to the whole network. For further discussion of the use of the MSF formalism in the analysis of synchronisation of oscillators on complex networks we refer the reader to [75, 161], and for the use of this formalism in a non-smooth setting see [162]. This approach has recently been extended to cover the case of cluster states in unweighted networks by making extensive use of tools from computational group theory to determine admissible patterns of synchrony [163] (and see also Sect. 3.12).

4.2 Pulse-Coupled Oscillators

Another example of a situation in which analysis of network dynamics can be carried out without the need for any reduction or assumption is that of pulse-coupled oscillators, in which interactions between neurons are mediated by instantaneous “kicks” of the voltage variable.

Networks of N identical oscillators with global (all-to-all) strong pulse coupling were first studied by Mirollo and Strogatz [164]. They assumed that each oscillator is defined by a state variable v and is of integrate-and-fire type with threshold \(v_{\mathrm{th}}=1\) and reset value \(v_{R}=0\). When oscillator i in the network fires, the instantaneous pulsatile coupling pulls all other oscillators \(j \neq i\) up by a fixed amount ϵ or to firing, whichever is less, i.e.

$$\text{if } v_{i}(t)=1 \quad\mbox{then } v_{j}\bigl(t^{+} \bigr)= \min\bigl(1, v_{j}(t) +\epsilon\bigr) \quad\text{for all $j \neq i$.} $$

Mirollo and Strogatz assume that the coupling is excitatory (\(\epsilon >0\)). If m oscillators fire simultaneously then the remaining \(N-m\) oscillators are pulled up by \(m\epsilon\), or to firing threshold, whichever is less.

In the absence of coupling each oscillator has period Δ and there is a natural phase variable \(\phi(t) =t/\Delta\mod1\) such that \(\phi=0\) when \(v=0\) and \(\phi=1\) when \(v=1\). Mirollo and Strogatz further assume that the dynamics of each (uncoupled) oscillator is governed by \(v(t)=f(\phi)\) where f is a smooth function satisfying \(f(0)=0\), \(f(1)=1\), \(f^{\prime}(\phi)>0\) and \(f^{\prime\prime}(\phi)<0\) for all \(\phi\in[0,1]\). Because of these hypotheses on f, it is invertible with inverse \(\phi=g(v)\).

Note that for the leaky (linear) integrate-and-fire model (LIF) where

$$\tau\frac{\mathrm{d}}{\mathrm{d}t} v = -v +I , $$

we have \(f(\phi) = I(1-\mathrm {e}^{-\Delta\phi/\tau})\) where \(\Delta= \tau\ln (I/(I-1))\), for \(I>1\), which satisfies the above conditions. However, quadratic IF models do not satisfy the concavity assumption.

If an oscillator is pulled up to firing threshold due to the coupling and firing of a group of m oscillators which have already synchronised then the oscillator is ‘absorbed’ into the group and remains synchronised with the group for all time. (Here synchrony means firing at the same time.) Since there are now more oscillators in the synchronised group, the effect of the coupling on the remaining oscillators is increased and this acts to rapidly pull more oscillators into synchronisation. Mirollo and Strogatz [164] proved that for pulsatile coupling and f satisfying the conditions above, the set of initial conditions for which the oscillators do not all become synchronised has zero measure. Here we briefly outline the proof for two pulse-coupled oscillators. See Mirollo and Strogatz [164] for the generalisation of this proof to populations of size N.

Consider two oscillators labelled A and B with \(\phi_{A}\) and \(v_{A}\) denoting, respectively, the phase and state of oscillator A and similarly for oscillator B. Suppose that oscillator A has just fired so that \(\phi_{A}=0\) and \(\phi_{B}=\phi\) as in Fig. 9(a). The return map \(R(\phi)\) is the phase of B immediately after A next fires. It can be shown that the return map has a unique, repelling fixed point:

Fig. 9

A system of two oscillators governed by \(v = f(\phi)\), and interacting by pulse coupling. a The state of the system immediately after oscillator A has fired. b The state of the system just before oscillator B reaches the firing threshold. c The state of the system just after B has fired. B has jumped back to zero, and the state of A is now \(\min(1, \epsilon+ f(1 - \phi))\)

Oscillator B reaches threshold when \(\phi_{A}=1-\phi\) and an instant later B fires and the pulsatile coupling makes \(v_{A}= \min(1, \epsilon + f(1-\phi))\). If \(v_{A}=1\) then the oscillators have synchronised. Assuming that \(v_{A}= \epsilon+ f(1-\phi)<1\) then \(\phi_{A} = g(\epsilon+ f(1-\phi))\) and after one firing the system has moved from \((\phi_{A}, \phi_{B})= (0, \phi)\) to \((h(\phi), 0)\) where \(h(\phi) = g(\epsilon+ f(1-\phi))\) is the firing map (see Fig. 9(c)). The return map is given by a further application of the firing map so that \(R(\phi)=h(h(\phi))\). The assumption that \(\epsilon+ f(1-\phi)<1\) is satisfied when \(\epsilon\in[0,1)\) and \(\phi\in(\delta, 1)\) where \(\delta=1- g(1-\epsilon)\). Thus the domain of h is \((\delta,1)\) and the domain of R is \((\delta, h^{-1}(\delta ))\). Since h is monotonically decreasing, \(\delta< h^{-1}(\delta)\) for \(\epsilon<1\) and the interval is nonempty. Thus on the whole of \([0,1)\) the return map is defined as

$$R(\phi) = \textstyle\begin{cases} 1, & \phi>h^{-1}(\delta),\\ h(h(\phi)), & \phi\in(\delta, h^{-1}(\delta)), \\ 0, & \phi< \delta. \end{cases} $$

Since the points 0 and 1 are identified, if \(\phi<\delta\) or \(\phi >h^{-1}(\delta)\) then the two oscillators will become synchronised.

It can be shown that almost all initial conditions eventually become synchronised since (i) R has a unique fixed point \(\overline{\phi} \in(\delta, h^{-1}(\delta))\) and (ii) this fixed point is unstable (i.e. \(\vert R^{\prime}(\overline{\phi})\vert >1\)). To see that R has a unique fixed point, observe that fixed points ϕ̅ are roots of \(F(\phi)\equiv\phi-h(\phi)\). Now \(F(\delta)=\delta-1<0\) and \(F(h^{-1}(\delta))= h^{-1}(\delta )-\delta>0\) so F has a root in \((\delta, h^{-1}(\delta))\) and this root is unique since \(F^{\prime}(\phi)=1-h^{\prime}(\phi)>2\).
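These quantities are easily checked numerically. The sketch below (with illustrative values \(I=1.3\) and \(\epsilon=0.2\)) constructs the firing map h and the return map R for the LIF curve \(f(\phi)=I(1-\mathrm {e}^{-\Delta\phi})\), locates the fixed point by bisection and confirms that it is repelling:

```python
import numpy as np

# Numerical check of the two-oscillator argument for the LIF curve
# f(phi) = I(1 - e^{-Delta phi}) (tau = 1), with illustrative I and eps.
I, eps = 1.3, 0.2
Delta = np.log(I / (I - 1.0))

f = lambda phi: I * (1.0 - np.exp(-Delta * phi))
g = lambda v: -np.log(1.0 - v / I) / Delta    # inverse of f
h = lambda phi: g(eps + f(1.0 - phi))         # firing map
delta = 1.0 - g(1.0 - eps)                    # lower edge of the domain of h

# The fixed point of R = h o h coincides with that of h; find it by
# bisection on F(phi) = phi - h(phi), which is increasing on (delta, 1).
lo, hi = delta + 1e-9, 1.0 - 1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mid - h(mid) < 0 else (lo, mid)
phi_bar = 0.5 * (lo + hi)

# Central-difference estimate of R' at the fixed point.
dR = (h(h(phi_bar + 1e-6)) - h(h(phi_bar - 1e-6))) / 2e-6
print(f"fixed point {phi_bar:.6f}, R' = {dR:.3f} (> 1, so repelling)")
```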

Extensions to the framework of Mirollo and Strogatz include the introduction of a time delay in the transmission of pulses and the consideration of inhibitory coupling. It has been observed that delays have a dramatic effect on the dynamics in the case of excitatory coupling. Considering first a pair of oscillators, Ernst et al. [165] demonstrate analytically that inhibitory coupling with delays gives stable in-phase synchronisation while for excitatory coupling, synchronisation with phase lag occurs. As the number of globally coupled oscillators increases, so does the number of attractors which can exist for both excitatory and inhibitory coupling.

In the presence of delays many different cluster state attractors can coexist. The dynamics settle down onto a periodic orbit with clusters reaching threshold and sending pulses alternately [165–167]. Under the addition of weak noise when the coupling is inhibitory, the dynamics stay near this periodic orbit indicating that all cluster state attractors are stable [167]. However, the collective behaviour shows a marked difference when the coupling is excitatory. In this case, weak noise is sufficient to drive the system away from the periodic orbit and results in persistent switching between unstable (Milnor) attractors.

These dynamics are somewhat akin to heteroclinic switching, and the relationship between networks of unstable attractors and robust heteroclinic cycles has been addressed by a number of authors [168–170]. In particular, Broer et al. [170] highlight a situation in which there is a bifurcation from a network of unstable attractors to a heteroclinic cycle within a network of pulse-coupled oscillators with delays and inhibitory coupling. They note that the model used in previous work [165–167] is locally noninvertible, since the original phase of an oscillator cannot be recovered once it has received an input which takes it over threshold, causing the phase to be reset. Kirst and Timme [170] employ a framework in which supra-threshold activity is partially reset, such that \(v_{j}(t^{+})= \mathcal{R}(v_{j}(t)-1)\) if \(v_{j}>1\), with a reset function \(\mathcal {R}(v)=cv\), \(c \in[0,1]\), which ensures that the flow becomes locally time invertible when \(c>0\). They demonstrate that for \(c=0\) (where the locally noninvertible dynamics is recovered), the system has a pair of periodic orbits \(A_{1}\) and \(A_{2}\), which are unstable attractors enclosed by the basin of each other. When \(c>0\), \(A_{1}\) and \(A_{2}\) are non-attracting saddles with a heteroclinic connection from \(A_{1}\) to \(A_{2}\). Furthermore, there is a continuous bifurcation from the network of two unstable attractors when \(c=0\) to a heteroclinic two cycle when \(c>0\).

For an interesting dynamical systems perspective on the differences between “kick” synchronisation (in pulsatile coupled systems) and “diffusive” synchronisation [171], and on the lack of mathematical work on the former problem, see [172]. For example, restrictions on the dynamics of symmetrically coupled systems of oscillators when the coupling is time-continuous can be circumvented for pulsatile coupling, leading to more complex network dynamics [173].

In the real world of synaptic interactions, however, pulsatile kicks are probably the exception rather than the rule, and the biology of neurotransmitter release and uptake is better modelled with a distributed delay process, giving rise to a postsynaptic potential with a finite rise and fall time. For spike-time event driven synaptic models, described in Sect. 2.5, analysis at the network level is hard for a general conductance-based model (given the usual expectation that the single neuron model will be high-dimensional and nonlinear), though far more tractable for LIF networks, especially when the focus is on phase-locked states [174176]. Indeed in this instance many results can be obtained in the strongly coupled regime [177], without recourse to any approximation or reduction.

4.3 Synaptic Coupling in Networks of IF Neurons

The instantaneous reaction of one neuron to the firing of another, as in the pulse-coupled neurons above, does not account for the role of synapses in the transmission of currents. Bressloff and Coombes [177] consider a network of N identical LIF neurons that interact via synapses by transmitting spike trains to one another. Let \(v_{i}(t)\), the state of neuron i at time t, evolve according to

$$ \frac{\mathrm{d}}{\mathrm{d}t} v_{i} = -v_{i} +I_{i} + X_{i}(t), $$
(11)

where \(I_{i}\) is a constant external bias and \(X_{i}(t)\) is the total synaptic current into the cell. As before, we supplement this with the reset condition that whenever \(v_{i}=1\) neuron i fires and is reset to \(v_{i}=0\). The synaptic current \(X_{i}(t)\) is generated by the arrival of spikes from other neurons j and can be taken to have the form

$$X_{i}(t) = \epsilon\sum_{j=1}^{N} w_{ij} \sum_{m \in \mathbb {Z}} J\bigl(t-T_{j}^{m} \bigr), $$

where \(\epsilon w_{ij}\) represents the weight of the connection from the jth neuron to the ith neuron with ϵ characterising the overall strength of synaptic interactions, \(T_{j}^{m}\) denotes the sequence of firing times of the jth neuron and \(J(t)\) determines the time course of the postsynaptic response to a single spike. A biologically motivated choice for \(J(t)\) is

$$J(t) = \eta(t)\varTheta(t), \quad\eta(t)=\alpha^{2} t\mathrm {e}^{-\alpha t}, $$

where \(\varTheta(t)=1\) if \(t>0\) and zero otherwise. Here η is an alpha function (see also Sect. 2.5) and the maximum synaptic response occurs at a nonzero delay \(t=\alpha^{-1}\). Note that in the limit of large inverse rise time α, \(J(t)\) approximates a delta function (more like pulse coupling).

Bressloff and Coombes [177] show that the behaviour of the network of oscillators differs depending on the strength of the coupling. This is another instance in which information is lost through making weak-coupling assumptions. To see this one may integrate (11) between \(T_{i}^{n}\) and \(T_{i}^{n+1}\), exploiting the linearity of the equations and solving with variation of parameters, to obtain a map of the firing times. Since the drive \(X_{i}(t)\) depends upon all previous firing events of the other neurons this is an implicit map that relates all the network firing events to one another. It is convenient to introduce the set of inter-spike intervals (ISIs) \(\Delta_{ij}^{n,m}= T_{i}^{n}-T_{j}^{m}\), so that we may write the firing map in the form

$$ I_{i} \bigl(1-\mathrm {e}^{-\Delta_{ii}^{n+1,n}}\bigr)-1 + \sum _{j=1}^{N} \sum_{m \in \mathbb {Z}} F_{ij} \bigl(\Delta_{ii}^{n+1,n}, \Delta_{ij}^{n,m}\bigr) = 0 , $$
(12)

where \(T_{i}^{n+1}> T_{j}^{m}\) for all j, m, and

$$ F_{ij}(x,y) = \epsilon w_{ij} \mathrm {e}^{-x} \int _{0}^{x} \mathrm {e}^{s} J(s+y)\, \mathrm{d}s. $$

Phase-locked solutions may be found, for an arbitrary coupling strength ϵ, using the ansatz \(T_{j}^{m}=(m-\phi_{j}) \Delta\) for some self-consistent ISI Δ and constant phases \(\phi_{j}\). Substitution into (12) yields

$$ 1= I_{i} \bigl(1-\mathrm {e}^{-\Delta}\bigr) + \epsilon\sum _{j=1}^{N} w_{ij} K(\phi _{j}- \phi_{i}), $$
(13)

where

$$ K(\phi) = \mathrm {e}^{-\Delta} \int_{0}^{\Delta} \mathrm {e}^{s} P(s+\phi\Delta)\, \mathrm{d}s, \quad P(t) = \sum _{m \in \mathbb {Z}} J(t-m \Delta) , $$

and \(P(t)=P(t+\Delta)\). Choosing one of the phases, say \(\phi_{1}\), as a reference, (13) provides a set of N equations for the unknown period Δ and the \(N-1\) remaining relative phases \(\phi_{j}-\phi_{1}\).
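The interaction function K is straightforward to evaluate numerically. As a sketch, for a reciprocally coupled pair (\(N=2\), \(w_{12}=w_{21}=1\)) subtracting the two copies of (13) shows that the relative phase ψ must satisfy \(K(\psi)=K(-\psi)\); the code below locates the candidate locked states, using illustrative values of α and treating Δ as given rather than solving for it self-consistently:

```python
import numpy as np
from scipy.integrate import quad

# Candidate phase-locked states of a reciprocally coupled LIF pair with
# alpha-function synapses: roots of the odd part K(psi) - K(-psi).
# alpha and Delta are illustrative values, not fitted to anything.
alpha, Delta = 2.0, 1.0

def J(t):
    return alpha**2 * t * np.exp(-alpha * t) if t > 0 else 0.0

def P(t, m_max=50):
    # truncated periodic sum P(t) = sum_m J(t - m Delta)
    return sum(J(t - m * Delta) for m in range(-m_max, m_max + 1))

def K(phi):
    val, _ = quad(lambda s: np.exp(s) * P(s + phi * Delta), 0.0, Delta,
                  limit=200)
    return np.exp(-Delta) * val

psi = np.linspace(0.0, 1.0, 201)
odd = np.array([K(x) - K(-x) for x in psi])
roots = psi[np.nonzero(np.diff(np.sign(odd)))[0]]
print(roots)   # sign changes near psi = 0 and psi = 1/2: the synchronous
               # and anti-phase candidates (stability requires Eq. (14))
```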

In order to investigate the linear stability of phase-locked solutions of Eq. (13), we consider perturbations of the firing times which we write in the form \(T_{j}^{m} =(m-\phi_{j}) \Delta+ \delta_{j}^{m}\). Linearisation of the firing map (12) gives an explicit map for these perturbations that can be written as

$$ I_{i}\mathrm {e}^{-\Delta}\bigl(\delta_{i}^{n+1}- \delta_{i}^{n}\bigr) + \sum_{j=1}^{N} \sum_{m \in \mathbb {Z}} \frac{\partial F_{ij}}{\partial x}\bigl( \delta_{i}^{n+1}-\delta _{i}^{n}\bigr) + \frac{\partial F_{ij}}{\partial y} \bigl(\delta_{i}^{n}- \delta_{j}^{m}\bigr) = 0, $$

where the partial derivatives of \(F_{ij}\) are evaluated at the phase-locked state \((\Delta_{ii}^{n+1,n}, \Delta _{ij}^{n,m})=(\Delta, (n-m)\Delta+(\phi_{j}-\phi_{i})\Delta)\). For solutions of the form \(\delta _{j}^{m} = \lambda^{m}\delta_{j}\) this reduces to

$$ (\lambda-1) P_{i} \delta_{i} = \epsilon\sum _{j=1}^{N} H_{ij}(\lambda) \delta_{j} , $$
(14)

where \(P_{i}=I_{i}-1+\epsilon\sum_{j} W_{ij} P((\phi_{j}-\phi_{i})\Delta)\), \(H_{ij}(\lambda) = W_{ij} G((\phi_{j}-\phi_{i})\Delta,\lambda) - \delta _{ij} \sum_{k} W_{ik}G((\phi_{k}-\phi_{i})\Delta,1)\) and

$$ G(t,\lambda) = \sum_{m \in \mathbb {Z}} \lambda^{-m} \mathrm {e}^{-\Delta} \int_{0}^{\Delta} \mathrm {e}^{s} J'(s+t+m\Delta) \,\mathrm{d}s . $$

One solution to Eq. (14) is \(\lambda=1\) with \(\delta _{i} = \delta\) for all \(i=1,\ldots,N\). This reflects the invariance of the dynamics with respect to uniform phase shifts in the firing times. Thus the condition for linear stability of a phase-locked state is that all remaining solutions λ of Eq. (14) are within the unit disc. For \(\lambda-1 \sim O(\epsilon)\), and ϵ small, we may expand (14) as

$$ (\lambda-1) (I_{i}-1)\delta_{i} = \epsilon\sum _{j=1}^{N} \mathcal{H}_{ij} (\varPhi) \delta_{j} + O\bigl(\epsilon^{2}\bigr) , $$

where \(\mathcal{H}_{ij} (\varPhi) = H_{ij}(1) = [W_{ij} K'(\phi_{j}-\phi_{i}) - \delta_{ij} \sum_{k} W_{ik}K'(\phi_{k}-\phi_{i})]/\Delta\) and we have used the result that \(G(\phi,1)=K'(\phi)/\Delta\). Suppose for simplicity that \(I_{i}=I>1\) for all i, so that \(\Delta= \ln [I/(I-1)] + O(\epsilon)\). Then the weak-coupling stability condition is that \(\operatorname {Re}\lambda_{p} <0\), where \(\lambda_{p}\), \(p=1,\ldots,N\), are the eigenvalues of the \(N \times N\) matrix with elements \(\epsilon \mathcal{H}_{ij}(\varPhi)\).

As an explicit example let us consider the synchronous state (\(\phi _{i}=\phi\) for all i). From (13) we see that this is guaranteed to exist for the choice \(\sum_{j} W_{ij}=\gamma\) and \(I_{i}=I[1-\epsilon K(0) \gamma]\) for some constant \(I>1\), which sets the period as \(\Delta= \ln[I/(I-1)]\). Using the result that \(K'(\phi )=-\Delta K(\phi) + \Delta P(\phi\Delta)/I\) the spectral problem for the synchronous state then takes the form

$$ \bigl[ (\lambda-1) \bigl(I-1 + \epsilon I \gamma K'(0)/\Delta \bigr) + \epsilon\gamma K'(0)/\Delta \bigr] \delta_{i}= \epsilon G(0,\lambda) \sum_{j=1}^{N} W_{ij} \delta_{j} . $$

We may diagonalise this equation in terms of the eigenvalues of the weight matrix, denoted by \(\nu_{p}\) with \(p=1,\ldots, N\) (and we note that γ is the eigenvalue with eigenvector \((1,1,\ldots,1)\) corresponding to a uniform phase shift). Looking for nontrivial solutions then gives the set of spectral equations \(\mathcal{E}_{p}(\lambda)=0\), where

$$ \mathcal{E}_{p}(\lambda) = (\lambda-1) \bigl(I-1+\epsilon I \gamma K'(0)/\Delta \bigr) + \epsilon\gamma K'(0)/\Delta- \epsilon\nu_{p} G(0,\lambda). $$
(15)

We may use (15) to determine bifurcation points defined by \(\vert \lambda \vert =1\). For sufficiently small ϵ, solutions to (15) will either be in a neighbourhood of the real solution \(\lambda=1\) or in a neighbourhood of one of the poles of \(G(0,\lambda)\). Since the latter lie in the unit disc, the stability of the synchronous solution (for weak coupling) is determined by \(\epsilon K'(0)[\operatorname {Re}\nu_{p}-\gamma] <0\). For strong coupling the characteristic equation has been solved numerically in [177] to illustrate the possibility of Hopf bifurcations (\(\lambda=\mathrm {e}^{i \theta}\), \(\theta\neq0\), \(\theta \neq\pi\)) with increasing \(\vert \epsilon \vert \), leading to oscillator death in a globally coupled inhibitory network for a sufficiently slow synapse and bursting behaviour in asymmetrically coupled networks. Bifurcation diagrams illustrating these phenomena are shown in Fig. 10. To see how these phenomena can occur from a more analytical perspective it is useful to consider the Fourier representation \(J(t) = (2 \pi )^{-1} \int_{-\infty}^{\infty}\mathrm{d}\omega\widetilde{J}(\omega ) \mathrm {e}^{i \omega t}\), where \(\widetilde{J}(\omega)=\alpha^{2}/(\alpha+i \omega )^{2}\), so that G may easily be evaluated with \(\lambda=\mathrm {e}^{z}\) as

$$ G\bigl(0,\mathrm {e}^{z}\bigr) = \frac{1}{\Delta} \sum _{n \in \mathbb {Z}} \frac{\widetilde{J}(\omega _{n}-iz/\Delta) (i \omega_{n} + z/\Delta) (\mathrm {e}^{z}-\mathrm {e}^{-\Delta}) }{1+i \omega _{n} + z/\Delta},\quad \omega_{n} = 2 \pi n/\Delta, $$
(16)

from which it is easy to see a pole at \(z=-\alpha\Delta\). This suggests writing z in the form \(z=-\alpha(1+\kappa_{p}) \Delta\) and expanding the spectral equation in powers of α to find a solution. For small α we find from (16) that \(G(0,\mathrm {e}^{z}) = -\alpha(1+\kappa_{p}) (1-\mathrm {e}^{-\Delta })/(\kappa_{p}^{2} \Delta)\). Balancing terms of order α then gives \(\kappa^{2}_{p}=\epsilon\nu_{p} (1-\mathrm {e}^{-\Delta})/(\Delta^{2}(I-1))\), where we use the result that \(G(0,\mathrm {e}^{0}) = O(\alpha^{2})\). Thus for small α, namely slow synaptic currents, we have \(K'(0)=0\), so that a weak-coupling analysis would predict neutral stability (consistent with the notion that a set of IF neurons with common constant input would frequency lock with an arbitrary set of phases). However, our strong coupling analysis predicts that the synchronous solution will only be stable if \(\operatorname {Re}z^{\pm}_{p}<0\) with \(z^{\pm}_{p}=[-1\pm\kappa_{p}] \alpha \Delta\). Introducing the firing rate function \(f=1/\Delta\) then \(z^{\pm}_{p}\) can be written succinctly as

$$ z^{\pm}_{p}= \bigl[-1 \pm\sqrt{\epsilon\nu_{p} f'(I)} \bigr] \alpha \Delta. $$

Thus for an asymmetric network (with at least one complex conjugate pair of eigenvalues) it may occur that as \(\vert \epsilon \vert \) is increased a pair of eigenvalues determined by \(z^{\pm}_{p}\) may cross the imaginary axis to the right hand complex plane signalling a discrete Hopf bifurcation in the firing times. For a symmetric network with real eigenvalues an instability may occur as some \(\kappa_{p} \in \mathbb {R}\) increases through 1, signalling a breakdown of frequency locking. The above results (valid for slow synapses) can also be obtained using a firing rate approach, as described in [177].

Fig. 10

Left: region of stability for a synchronised pair of identical IF neurons with inhibitory coupling and collective period \(T=\ln2\). The solid curve \(\vert \epsilon \vert = \epsilon _{c}(\alpha)\) denotes the boundary of the stability region. Crossing the boundary from below signals excitation of the linear mode \((1,-1)\), leading to a stable state in which one neuron becomes quiescent (oscillator death). For \(\alpha> \alpha_{0}\) the synchronous state is stable for all ϵ. The dashed curve corresponds to the eigenvalue with \(\nu=1\). Right: plot of critical coupling \(\epsilon_{c}\) as a function of α for various network sizes N. The critical inverse rise time \(\alpha_{0}(N)\) is seen to be a decreasing function of N with \(\alpha _{0}(N)\rightarrow0\) as \(N\rightarrow\infty\)

The results above, albeit valid for strong coupling, are only valid for LIF networks. To obtain more general results for networks of limit-cycle oscillators it is useful to consider a reduction to phase models.

5 Reduction of Limit-Cycle Oscillators to Phase–Amplitude and Phase Models

Consider a system of the form

$$ \frac{\mathrm{d}}{\mathrm{d}t} {x} = f(x) + \epsilon g(x,t) ,\quad x\in \mathbb {R}^{n} , $$
(17)

such that for \(\epsilon=0\) the system possesses a periodic orbit

$$\varGamma=\bigl\{ u(t) : t\in \mathbb {R}\bigr\} , $$

with minimal period \(T>0\) (such that \(u(t)=u(t+T)\) for all \(t\in \mathbb {R}\) but \(u(t)\neq u(t+s)\) for \(0< s< T\)). We will assume that Γ is attracting and hyperbolic, i.e. linearly stable, so that there is one zero Floquet exponent and the others have negative real part. Then we say that (17) is a limit-cycle oscillator. We will reduce this to a description in terms of a phase that lives on a topological circle, which can be thought of as an interval \([0,T)\) with the topology that comes from identifying the ends of the interval: the points s and \(T-s\) are close for s small. This circle is sometimes called the one-torus \(\mathbb {T}\).

There are a number of conventions used in the literature to represent this phase. We discuss these conventions before looking at general reduction methods. One convention is to define a phase ϑ modulo T such that

$$\frac{\mathrm{d}}{\mathrm{d}t} {\vartheta}=1 , $$

where \(\vartheta(u(t))=t\) interpreted modulo T and such that \(\vartheta(u_{0})=0\) for some chosen point \(u_{0}\in\varGamma\). Another convention is to define a phase θ modulo 2π such that

$$\frac{\mathrm{d}}{\mathrm{d}t} \theta= \omega, $$

where \(\omega=2\pi/T\) is the angular frequency of the oscillator. Again we pick a point \(u_{0}\in\varGamma\) and require that \(\theta=0\) at \(u=u_{0}\).

In the remainder of the article we will use ϑ and θ as an indication of the convention we are using for phase; in the special case \(T=2\pi\) both conventions coincide and we use θ. Typically one of these descriptions is more convenient than the others, but all can in principle be adapted to any phase reduction. Table 2 summarises the conventions commonly used for phase variables.

Table 2 Comparison of the three conventions for a phase variable that we use in this review; the phase ϕ is used for IF models:

  • ϑ, defined modulo the period T, with \(\mathrm{d}\vartheta/\mathrm{d}t=1\) on the cycle;

  • θ, defined modulo 2π, with \(\mathrm{d}\theta/\mathrm{d}t=\omega=2\pi/T\) on the cycle;

  • ϕ, defined modulo 1, with \(\mathrm{d}\phi/\mathrm{d}t=1/T\) on the cycle.

We now review some techniques of reduction which can be employed to study the dynamics of (17) when \(\epsilon\neq0\), so that the perturbations may take the dynamics away from the limit cycle. In doing so we will reduce, for example, to an ODE for \(\vartheta(t)\) taken modulo T. Clearly any solution of an ODE must be continuous in t, and typically \(\vartheta(t)\) will be unbounded in t, growing at a rate that corresponds to the frequency of the oscillator. Strictly speaking, the coordinate we are referring to in this case is on the lift of the circle \(\mathbb {T}\) to a covering space \(\mathbb {R}\), and for any phase \(\vartheta\in[0,T)\) there are infinitely many lifts to \(\mathbb {R}\) given by \(\vartheta+ kT\) for \(k\in \mathbb {Z}\). However, in common with most literature in this area we will not make a notational distinction between the phase understood on the unit cell, e.g. \(\theta \in [0,2\pi)\), and on the lift, e.g. \(\theta\in \mathbb {R}\) taken modulo 2π.

5.1 Isochronal Coordinates

Consider (17) with \(\epsilon=0\). The asymptotic (or latent) phase of a point \(x_{0}\) in the basin of attraction of the limit cycle Γ of period T is the value of \(\vartheta(x_{0})\) such that

$$ \lim_{t\to\infty} \bigl\vert x(t)- u\bigl(t+ \vartheta(x_{0})\bigr)\bigr\vert =0 , $$

where \(x(t)\) is a trajectory starting at \(x_{0}\). Thus if \(u(t)\) and \(x(t)\) are trajectories on and off the limit cycle, respectively, they have the same asymptotic phase if the distance between \(u(t)\) and \(x(t)\) vanishes as \(t\to\infty\). The locus of all points with the same asymptotic phase is called an isochron. Thus an isochron extends the notion of phase off the cycle (within its basin of attraction). Isochrons can also be interpreted as the leaves of the stable manifold of a hyperbolic limit cycle. They fully specify the dynamics in the absence of perturbations [178].

There are very few instances where the isochrons can be computed in closed form (though see the examples in [179] for plane-polar models where the radial variable decouples from the angular one). Computing the isochron foliation of the basin of attraction of a limit cycle is a major challenge, since it requires knowledge of the limit cycle itself, and so in general the foliation can only be determined in special cases or numerically.

One computationally efficient method for numerically determining the isochrons is backward integration; however, this is unstable, and in particular for strongly attracting limit cycles the trajectories determined by backwards integration may quickly diverge to infinity. See Izhikevich [48] for a MATLAB code which determines smooth curves approximating isochrons. Other methods include the continuation-based algorithm introduced by Osinga and Moehlis [180], the geometric approach of Guillamon and Huguet to find high-order approximations to isochrons in planar systems [181], quadratic- and higher-order approximations [182, 183], and the forward integration method using the Koopman operator and Fourier averages as introduced by Mauroy and Mezić [184]. This latter method is particularly appealing and given its novelty we describe the technique below.

The Koopman operator approach for constructing isochrons for a T-periodic orbit focuses on tracking observables (or measures on a state space) rather than the identification of invariant sets. The Koopman operator \(K^{t}\) is defined by its action on observables, \(K^{t} z = z \circ\varPhi_{t}\), where \(z: \mathbb {R}^{n} \rightarrow \mathbb {R}\) is some observable of the state space and \(\varPhi_{t}(x)\) denotes the flow evolved for a time t, starting at a point x. The Fourier average of an observable z is defined as

$$ \widehat{z}(x;\omega) = \lim_{T \rightarrow\infty} \frac {1}{T}\int _{0}^{T} (z \circ\varPhi_{t}) (x) \mathrm {e}^{-i \omega t} \,\mathrm{d}t . $$
(18)

For a fixed x, (18) is equivalent to a Fourier transform of the (time-varying) observable computed along a trajectory. Hence, for a dynamics with a stable limit cycle (of frequency \(2\pi/T\)), it is clear that the Fourier average can be nonzero only for the frequencies \(\omega_{n}= 2 \pi n/ T\), \(n \in \mathbb {Z}\). The Fourier averages are eigenfunctions of the Koopman operator, so that

$$ K^{t} \widehat{z}(x;\omega_{n}) = \mathrm {e}^{i \omega_{n} t} \widehat{z}(x;\omega _{n}),\quad n \in \mathbb {Z}. $$

Perhaps rather remarkably, the isochrons are level sets of \(\widehat {z}(x;\omega_{1})\) for almost all observables, the only restriction being that the first Fourier coefficient of the observable evaluated along the limit cycle over one period is nonzero. An example of the use of this approach is shown in Fig. 11, where we plot the isochrons of a Stuart–Landau oscillator.
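The construction lends itself to a simple numerical sketch: for the Stuart–Landau system of Fig. 11 (parameters as in the caption) the code below estimates the Fourier average (18) of the observable \(z(x,y)=x\) over a finite time window; the argument of the result gives the asymptotic phase, and contouring it over a grid of initial conditions reproduces the isochrons of Fig. 11:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fourier-average (Koopman) sketch for the Stuart--Landau oscillator of
# Fig. 11 (lambda = 2, c = 1, omega = 1, so T = 2*pi and omega_1 = 1),
# using the observable z(x, y) = x.
lam, c, om = 2.0, 1.0, 1.0

def rhs(t, u):
    z = u[0] + 1j * u[1]
    dz = z * (lam * (1 + 1j * c) / 2 + 1j * om) \
        - z * abs(z)**2 * (1 + 1j * c) / 2
    return [dz.real, dz.imag]

def fourier_average(x0, y0, T_avg=200.0, dt=0.01):
    t = np.arange(0.0, T_avg, dt)
    sol = solve_ivp(rhs, (0, T_avg), [x0, y0], t_eval=t,
                    rtol=1e-9, atol=1e-11)
    # Riemann approximation of (1/T) int_0^T x(t) e^{-i omega_1 t} dt
    return np.sum(sol.y[0] * np.exp(-1j * om * t)) * dt / T_avg

# Points at the same polar angle but different radii return different
# asymptotic phases here: the isochrons bend, as in Fig. 11, because the
# shear parameter c is nonzero.
for x0, y0 in [(1.5, 0.0), (2.5, 0.0), (0.5, 0.0)]:
    print((x0, y0), round(float(np.angle(fourier_average(x0, y0))), 3))
```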

Fig. 11

Isochrons found using the method of Fourier averages for the Stuart–Landau oscillator: \(\mathrm{d}{z} / \mathrm{d}t =z[\lambda(1+ic)/2+i \omega] - z \vert z\vert ^{2} (1+i c)/2\), \(z=x+iy\), with \(\lambda=2\), \(c=1\) and \(\omega=1\) so that \(T=2\pi\); see [185]. The black curve represents the periodic orbit of the system; in this case, we have \(\theta =\vartheta\). The background colour represents the Fourier average, whilst the coloured lines are the isochrons, given as level sets of the Fourier average. The white dots are the actual isochrons, computed analytically as \((x,y)=((1+r) \cos(\theta+ c \ln(1+r)),(1+r) \sin(\theta+ c \ln (1+r)))\), \(r \in(-1,\infty)\)

5.2 Phase–Amplitude Models

An alternative (non-isochronal) framework for studying oscillators with an attracting limit cycle is to make a transformation to a moving orthonormal coordinate system around the limit cycle, where one coordinate gives the phase on the limit cycle while the other coordinates give a notion of distance from the limit cycle. It has long been known in the dynamical systems community how to construct such a coordinate transformation; see [186] for a discussion. The importance of considering the effects of both the phase and the amplitude interactions of neural oscillators has been highlighted by several authors, including Ermentrout and Kopell [187] and Medvedev [188], and is especially pertinent when considering phenomena such as oscillator death (see Sect. 3.4). Phase–amplitude descriptions have already successfully been used to find equations for the evolution of the energies (amplitudes) and phases of weakly coupled, weakly dissipative networks of nonlinear planar oscillators (modelled by small dissipative perturbations of a Hamiltonian oscillator) [103, 189, 190]. Lee et al. [191] use the notion of phases and amplitudes of large networks of globally coupled Stuart–Landau oscillators to investigate the effects of a spread in the amplitude growth parameter (units oscillating with different amplitudes, and some not oscillating at all) and the effect of a homogeneous shift in the nonlinear frequency parameter.

We now discuss the phase–amplitude coordinate transformation detailed by Hale [186], some of the situations where it has been employed and other areas in which a phase–amplitude description of oscillators is necessary to reveal the dynamics of the system. Consider again the system (17) with \(\epsilon=0\), which has an attracting hyperbolic periodic orbit. Make a transformation to a moving orthonormal coordinate system as follows. Choose one axis of the coordinate system to be in the direction of unit tangent vector along the periodic orbit, ξ, given by

$$ \xi(\vartheta) = \frac{\mathrm{d}u}{\mathrm{d}\vartheta}\Big/ \biggl\vert \frac{\mathrm{d}u}{\mathrm{d}\vartheta }\biggr\vert . $$

The remaining coordinate axes can be expressed as the columns of an \(n\times(n-1)\) matrix ζ. We can then write an arbitrary point x in terms of its phase \(\vartheta\in[0,T)\) and its amplitude ρ:

$$ x(\vartheta,\rho)=u(\vartheta)+\zeta(\vartheta)\rho. $$

Here, \(|\rho|\) represents the Euclidean distance from the limit cycle. The vector of amplitudes \(\rho\in \mathbb {R}^{n-1}\) allows us to consider points away from the limit cycle.

Upon projecting the dynamics onto the moving orthonormal system, we obtain the dynamics of the transformed system for \(\vartheta\in[0,T)\) and \(\rho\in \mathbb {R}^{n-1}\)

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {\vartheta}=1+f_{1}(\vartheta,\rho) ,\qquad \frac {\mathrm{d}}{\mathrm{d}t} {\rho}=A(\vartheta)\rho+f_{2}(\vartheta,\rho) , \end{aligned}$$

where

$$\begin{aligned} f_{1}(\vartheta,\rho)&=-h^{T}(\vartheta, \rho)\frac{\mathrm{d}\zeta }{\mathrm{d}\vartheta }\rho +h^{T}(\vartheta,\rho) \bigl[f(u+\zeta \rho)-f(u) \bigr] , \\ f_{2}(\vartheta,\rho)&=-\zeta^{T}\frac{\mathrm{d}\zeta}{\mathrm {d}\vartheta}\rho f_{1}+\zeta ^{T} \bigl[f(u+\zeta\rho)-f(u)-Df\zeta\rho \bigr] , \\ h(\vartheta,\rho) &= \biggl[\biggl\vert \frac{\mathrm{d}u}{\mathrm {d}\vartheta}\biggr\vert + \xi ^{T} \frac{\mathrm{d}\zeta}{\mathrm{d}\vartheta}\rho \biggr]^{-1} \xi, \qquad A( \vartheta)=\zeta ^{T} \biggl[-\frac{\mathrm{d}\zeta}{\mathrm{d}\vartheta}+Df\zeta \biggr] , \end{aligned}$$
(19)

and Df is the Jacobian of the vector field f, evaluated along the periodic orbit u. The technical details to specify the orthonormal coordinates forming ζ can be found in the appendix of [192]. It is straightforward to show that \(f_{1}(\vartheta,\rho) \rightarrow0\) as \(|\rho|\rightarrow0\), \(f_{2}(\vartheta,0) = 0\) and that \(\partial f_{2} (\vartheta,0)/\partial\rho= 0\). In the above, \(A(\vartheta)\) describes the ϑ-dependent rate of attraction to or repulsion from the cycle, and \(f_{1}\) captures the shear present in the system, that is, whether the speed of ϑ increases or decreases depending on the distance from the cycle. A precise definition of shear is given in [193] and will be discussed further in Sect. 5.3.

Some caution must be exercised when applying this transformation as it will break down when the determinant of the Jacobian of the transformation vanishes. This never occurs on cycle (where \(\rho=0\)) but it may do so for some \(|\rho|=k>0\), setting an upper bound on how far from the limit cycle these phase–amplitude coordinates can be used to describe the system. In [192] it is noted that for the planar ML model the value of k can be relatively small for some values of ϑ, but that breakdown occurs where the orbit has high curvature. In higher-dimensional systems this issue would be less problematic.

Similarly, the coordinate transformation can be applied to driven systems (i.e. (17) with \(\epsilon\neq0\)) where ϵ is not necessarily small. In this case the dynamics in \((\vartheta, \rho)\) coordinates, where \(\vartheta\in[0,T)\) and \(\rho\in \mathbb {R}^{n-1}\), are

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {\vartheta}&=1+f_{1}(\vartheta,\rho )+\epsilon h^{T}(\vartheta,\rho)g\bigl(u(\vartheta)+\zeta(\vartheta)\rho,t\bigr), \end{aligned}$$
(20)
$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {\rho}&=A(\vartheta)\rho +f_{2}(\vartheta ,\rho)+ \epsilon \zeta^{T} B(\vartheta,\rho) g\bigl(u(\vartheta)+\zeta( \vartheta)\rho ,t\bigr), \end{aligned}$$
(21)

with

$$ B(\vartheta,\rho) = \mathrm{I}_{n}-\frac{\mathrm{d}\zeta}{\mathrm {d}\vartheta}\rho h^{T}(\vartheta,\rho) , $$
(22)

and \(\mathrm{I}_{n}\) is the \(n\times n\) identity matrix. Here, h and B describe how the perturbation g acts on ϑ and ρ, respectively. For planar models, \(B=\mathrm{I}_{2}\).
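For a circular limit cycle the moving orthonormal frame can be written down explicitly and the transformation reduces to polar coordinates: for the Stuart–Landau oscillator of Fig. 11 the cycle is the circle \(r=\sqrt{2}\), so with \(u(\vartheta)=\sqrt{2}(\cos\vartheta,\sin\vartheta)\) and normal \(\zeta(\vartheta)=(\cos\vartheta,\sin\vartheta)\) one has ϑ equal to the polar angle and \(\rho=r-\sqrt{2}\). A minimal sketch under these assumptions shows the amplitude decaying while the phase speed deviates from 1 off the cycle (the shear captured by \(f_{1}\)):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Phase--amplitude coordinates for the Stuart--Landau oscillator of
# Fig. 11; the limit cycle is the circle r = sqrt(2), so the moving frame
# is explicit: theta is the polar angle and rho = r - sqrt(2).
lam, c, om = 2.0, 1.0, 1.0

def rhs(t, u):
    z = u[0] + 1j * u[1]
    dz = z * (lam * (1 + 1j * c) / 2 + 1j * om) \
        - z * abs(z)**2 * (1 + 1j * c) / 2
    return [dz.real, dz.imag]

sol = solve_ivp(rhs, (0, 20), [3.0, 0.0], max_step=0.01, rtol=1e-9)
theta = np.unwrap(np.arctan2(sol.y[1], sol.y[0]))   # phase on the lift
rho = np.hypot(sol.y[0], sol.y[1]) - np.sqrt(2.0)   # distance from cycle

print(rho[0], rho[-1])            # the amplitude decays to zero
rate = np.gradient(theta, sol.t)
print(rate[0], rate[-1])          # the phase speed differs markedly from 1
# off the cycle (here it is even negative initially) and tends to 1 as
# rho -> 0: this velocity differential is the shear term f1.
```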

Medvedev [188] has employed this phase–amplitude description to determine conditions for stability of the synchronised state in a network of identical oscillators with separable linear coupling. Medvedev [194] has also used the framework to consider the effects of white noise on the synchronous state, identifying the types of linear coupling operators which lead to synchrony in a network of oscillators provided that the interactions are sufficiently strong.

5.3 Dynamics of Forced Oscillators: Shear-Induced Chaos

Since phase–amplitude coordinates can capture dynamics a finite distance away from the limit cycle (and additionally have the advantage over isochronal coordinates of being defined outside of the basin of attraction of the limit cycle), they can be used to model dynamical phenomena in driven systems where the perturbations necessarily push the dynamics away from the limit cycle. There is no need to make any assumptions about the strength of the forcing ϵ.

The phase–amplitude description of a forced oscillator is able to detect the presence of other structures in the phase space. For example if the system were multistable, phase–amplitude coordinates would track trajectories near these other structures and back again, should another perturbation return the dynamics to the basin of attraction of the limit cycle. These coordinates would also detect the presence of other non-attracting invariant structures such as saddles in the unperturbed flow. Orbits passing near the saddle will remain there for some time and forcing may act to move trajectories near this saddle before returning to the limit cycle. It may also be the case that the forcing acts to create trapping regions if the forcing is strong compared to the attraction to the limit cycle.

Another dynamical phenomenon which phase–amplitude coordinates are able to capture is the occurrence of shear-induced chaos. Shear in a system near a limit cycle Γ is the differential in components of the velocity tangent to the limit cycle as one moves further from Γ in phase space. Shear forces within the system act to speed up (or slow down) trajectories further away from the limit cycle compared to those closer to it. This phenomenon is illustrated in Fig. 12.

Fig. 12

The stretch-and-fold action of a kick followed by relaxation in the presence of shear. Left: a non-constant kick moves data away from the limit cycle (horizontal grey line). As the image of the cycle under the kick relaxes back to the cycle (under the unperturbed flow) the action of shear causes folds to appear. Right: the geometry of folding in relation to the isochron foliation (black lines). An initial segment \(\gamma_{0}\) is kicked to the blue curve and is then allowed to relax back to cycle, passing through the red and green curves, which are images of the blue curve under the unperturbed flow

As we show below, when an impulsive force is applied (the system is kicked) a ‘bump’ in the image of Γ is produced. If there is sufficient shear in the system then the bump is folded and stretched as it is attracted back to the limit cycle. Such folding can potentially lead to the formation of horseshoes and strange attractors. However, if the attraction to the limit cycle is large compared to the shear strength or size of the kick then the bumps will dissipate before any significant stretching occurs.

Shear-induced chaos is most commonly discussed in the context of discrete time kicking of limit cycles. Wang and Young [195–197] prove rigorous results in the case of periodically kicked limit cycles with long relaxation times. Their results provide details of the geometric mechanism for producing chaos. Here we briefly review some of these results. More detailed summaries can be found in [198] and [199].

Let \(\varPhi_{t}\) be the flow of the unperturbed system, which has an attracting hyperbolic limit cycle Γ. We can think of a kick as a mapping κ. The dynamics of the system with periodic kicking every τ units of time can be obtained by iterating the map \(F_{\tau}=\varPhi_{\tau}\circ\kappa\). Provided that there is a neighbourhood \(\mathcal{U}\) of Γ such that points in \(\mathcal{U}\) remain in the basin of attraction of Γ under kicks and τ is long enough that the kicked points can return to \(\mathcal{U}\), then \(\varGamma _{\tau}= \bigcap_{n\geq0} F_{\tau}^{n}(\mathcal{U})\) is an attractor for the periodically kicked system. If the kicks are weak then \(\varGamma_{\tau}\) is generally expected to be a slightly perturbed version of Γ and we should expect fairly regular behaviour. In this case \(\varGamma_{\tau}\) is an invariant circle. To obtain more complicated dynamics it is necessary to break the invariant circle. This idea is illustrated by the following linear shear model, considered in [196]:

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\vartheta} = 1+\sigma y ,\qquad \frac{\mathrm{d}}{\mathrm{d}t} {y} =- \lambda y+A H(\vartheta) \sum_{n=0}^{\infty}\delta (t-n\tau), $$
(23)

where \((\vartheta, y) \in[0,T)\times\mathbb{R} = S^{1} \times\mathbb {R}\) are coordinates in the phase space, \(\lambda, \sigma, A>0\) are constants and \(H:S^{1} \to\mathbb{R}\) is a non-constant smooth function. The unforced system (\(A=0\)) has a limit cycle \(\varGamma=S^{1} \times\{0\} \). If the quantity

$$\frac{\sigma}{\lambda} A \equiv\frac{\mbox{shear}}{\mbox{contraction rate}}\times\bigl(\mbox{kick `amplitude'}\bigr) , $$

is sufficiently large, then it is possible to show that there is a positive measure set \(\mathcal{T}\subset\mathbb{R}^{+}\) such that for all \(\tau\in\mathcal{T}\), \(\varGamma_{\tau}\) is a strange attractor of \(F_{\tau}\) [197]. How large this quantity must be depends on the function \(H(\vartheta)\). Since \(H(\vartheta)\) is non-constant (to create the bumps), the larger the shear σ and the kick amplitude A, the more folding occurs as the bump is attracted back to the limit cycle. Note also that weaker limit cycles (smaller λ) create more favourable conditions for shear-induced chaos, as the shear has longer to act before the bump is attracted back to the limit cycle.
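As a concrete illustration, the following minimal sketch simulates the kicked map \(F_{\tau}=\varPhi_{\tau}\circ\kappa\) for the linear shear model (23), for which the inter-kick flow is available in closed form, and estimates the largest Lyapunov exponent by propagating a tangent vector through the map's Jacobian. The kick profile \(H(\vartheta)=\sin(2\pi\vartheta/T)\) and all parameter values are illustrative choices, not taken from [196, 197].

```python
import numpy as np

# Kicked linear shear model, Eq. (23): between kicks the flow
# d(theta)/dt = 1 + sigma*y, dy/dt = -lambda*y is exactly solvable.
# All parameter values below are illustrative only.
T = 1.0                                      # period of the unforced cycle
lam, sigma, A, tau = 0.5, 2.0, 0.6, 10.0     # contraction, shear, kick size, kick period
H  = lambda th: np.sin(2 * np.pi * th / T)   # non-constant kick profile (assumed)
dH = lambda th: (2 * np.pi / T) * np.cos(2 * np.pi * th / T)

def F(th, y):
    """One iterate of F_tau = Phi_tau o kappa (kick, then relax for time tau)."""
    y = y + A * H(th)                                            # the kick kappa
    th = (th + tau + (sigma / lam) * y * (1 - np.exp(-lam * tau))) % T
    return th, y * np.exp(-lam * tau)

# Largest Lyapunov exponent of the kicked map, via the tangent dynamics.
th, y, v, lyap, n_iter = 0.3, 0.0, np.array([1.0, 0.0]), 0.0, 20000
Jflow = np.array([[1.0, (sigma / lam) * (1 - np.exp(-lam * tau))],
                  [0.0, np.exp(-lam * tau)]])                    # Jacobian of Phi_tau
for _ in range(n_iter):
    Jkick = np.array([[1.0, 0.0], [A * dH(th), 1.0]])            # Jacobian of kappa
    th, y = F(th, y)
    v = Jflow @ (Jkick @ v)
    lyap += np.log(np.linalg.norm(v))
    v /= np.linalg.norm(v)
print("largest Lyapunov exponent estimate:", lyap / n_iter)
```

A positive estimate for some kick periods τ (and a negative one when the contraction λ dominates the shear) is consistent with the positive measure set \(\mathcal{T}\) described above.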

For a more general system with periodic forcing the shear may not appear explicitly as a parameter. To elucidate what kind of kicks may cause shear-induced chaos in this case we appeal to the isochrons of the system. Suppose, as illustrated in Fig. 12, that a segment \(\gamma_{0}\) of the limit cycle is kicked upwards with the end points held fixed, and assume \(\tau= nT\) for some \(n \in\mathbb{Z}^{+}\). Since the isochrons are invariant under the action of \(\varPhi_{T}\), during relaxation the flow moves each point of the kicked curve \(\kappa(\gamma_{0})\) back towards Γ along the isochrons. In Fig. 12 we can clearly see the effect of the shear with a fold forming.

From Fig. 12 one can see that kicks along isochrons or in directions roughly parallel to the isochrons will not produce strange attractors, nor will kicks that map each isochron onto another isochron (a uniform shift of asymptotic phase). The cause of the stretching and folding is the variation in how far points \(x \in\varGamma\) are moved by κ in the direction transverse to the isochrons (i.e. the ordering of points in terms of asymptotic phase is altered by the action of the kick). Lin and Young [198] emphasise that the occurrence of shear-induced chaos depends on the interplay between the geometries of the kicks and the dynamical structures of the unforced system.

In the case of the linear shear model above, given by Eq. (23), the isochrons of the unforced system are simply the lines with slope \(-\lambda/\sigma\) in \((\vartheta, y)\) coordinates. Variation in kick distances in directions transverse to these isochrons is guaranteed with any non-constant function H, with greater variation given by larger values of \(\sigma/\lambda\) and A.

Beyond the rigorous results proved by Wang and Young [195–198] concerning periodically kicked limit cycles of the linear shear model and for supercritical Hopf bifurcations, Ott and Stenlund [193] prove that shear-induced chaos may exist near general limit cycles. In addition, Lin and Young [198] have carried out numerical studies including random kicks at times given by a Poisson process and systems driven by white noise. They also consider forcing of a pair of coupled oscillators. In all cases, shear-induced chaos occurs when the shearing and amplitude of the forcing are large enough to overcome the effects of damping.

Lin et al. [200] demonstrate that the ML model can exhibit shear-induced chaos near the homoclinic bifurcation when periodically forced, by plotting images of the periodic orbit under successive applications of the kick map and calculating the maximum Lyapunov exponent. They also emphasise that the phenomenon of shear-induced chaos cannot be detected by perturbative techniques such as those outlined in Sect. 5.4 and Sect. 5.5 below.

Wedgwood et al. [192] go on to show that the phase–amplitude description is well suited to understanding the behaviour of neural oscillators in response to external stimuli. They consider a number of neural oscillator models in addition to a generic planar model and examine the response of the system to periodic pulsatile forcing by taking \(x\in \mathbb {R}^{2}\),

$$g(x, t) = \sum_{n\in\mathbb{Z}} \bigl(\delta(t-n\tau),0 \bigr)^{T} , $$

and a variety of choices for \(f(x)\) in (17). Their numerical simulations indicate that for both the FHN and the ML models the behaviour remains qualitatively similar when the relevant functions \(f_{1}(\vartheta, \rho)\) are replaced by σρ, \(f_{2}(\vartheta, \rho)\) is dropped and \(A(\vartheta)\) is replaced with −λ (for a wide range of \(\sigma, \lambda>0\)). They then focus on the effect of the form of the forcing function in the phase–amplitude coordinates. Evaluating the functions \(h(\vartheta, \rho)\) and \(B(\vartheta, \rho)\) of Eqs. (19) and (22), respectively, for the particular model, and denoting the first component of h by \(P_{1}\) and the first component of ξ by \(P_{2}\), they find that forcing each system at the same ratio of the natural frequency of its underlying periodic orbit, and implementing the choices above, gives the following ODE on \(\vartheta \in[0,T)\), \(\rho\in \mathbb {R}^{n-1}\):

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\vartheta} = 1 +\sigma\rho+ \epsilon P_{1}( \vartheta ,\rho) \sum_{n\in\mathbb{Z}} \delta(t-n\tau), \qquad \frac {\mathrm{d}}{\mathrm{d}t} {\rho} = -\lambda\rho+\epsilon P_{2}(\vartheta)\sum _{n\in\mathbb{Z}} \delta (t-n\tau). $$

By developing a stroboscopic map for this system, Wedgwood et al. [192] numerically evaluate the largest Lyapunov exponent, which is found to be positive for the Morris–Lecar model but negative for the FHN model.

The analysis of the behaviour of generic networks of oscillators within a phase–amplitude framework is a challenging open problem, but such a description would allow for greater accuracy (compared to the phase-only methods traditionally used and described below) in elucidating the rich variety of complex dynamics that oscillator networks can support.

5.4 Phase Oscillator Models

To obtain a phase description, one can consider the limit of strong attraction [102] in Eqs. (20)–(21). However, it is more appealing to consider a phase variable with a uniform rotation rate that assigns a phase coordinate \(\vartheta \in[0,T)\) to each point \(x \in\varGamma\) according to

$$ \frac{\mathrm{d}}{\mathrm{d}t}\vartheta\bigl(x(t)\bigr) = 1,\quad x \in \varGamma. $$
(24)

This reduction to a phase description gives a simple dynamical system, albeit one that cannot describe the evolution of trajectories in phase space away from the limit cycle. However, the phase-reduction formalism is useful in quantifying how a system (on or close to a cycle) responds to weak forcing, via the construction of the infinitesimal phase response curve (iPRC). This can be written, for a given ODE model, as the solution to an adjoint equation. For a given high-dimensional conductance-based model this can be solved numerically, though for some normal form descriptions closed form solutions are also known [201].

The iPRC at a point on cycle is equal to the gradient of the (isochronal) phase at that point. Writing this vector quantity as Q means that weak forcing \(\epsilon g(t)\) in the original high-dimensional models transforms as \({\mathrm {d}\vartheta}/{\mathrm {d}t}=1+\epsilon \langle Q, g \rangle\), where \(\langle\cdot,\cdot\rangle\) defines the standard inner product and ϵ is a small parameter. For periodic forcing such equations can be readily analysed, and questions relating to synchronisation, mode-locking and Arnol’d tongues can be thoroughly explored [76]. Moreover, this approach forms the basis for constructing models of weakly interacting oscillators, where the external forcing is pictured as a function of the phase of a firing neuron. This has led to a great deal of work on phase-locking and central pattern generation in neural circuitry; see for example [43].

However, the assumption that phase alone is enough to capture the essentials of neural response is one made more for mathematical convenience than on physiological grounds. Indeed, for the popular type I ML firing model with standard parameters, direct numerical simulations with pulsatile forcing show responses that cannot be explained solely with a phase model [200], as just highlighted in Sect. 5.3 (since strong interactions will necessarily take one away from the neighbourhood of a cycle where a phase description is expected to hold).

5.5 Phase Response

It is common practice in neuroscience to characterise a neuronal oscillator in terms of its phase response to a perturbation [202]. This gives rise to the notion of a so-called phase response curve (PRC), which for a real neuron can be determined experimentally [203–205], and can also be related to the poststimulus time histogram [206]. Following [201], consider a dynamical system \({\mathrm {d}x}/{\mathrm {d}t}=f(x)\), \(x\in \mathbb {R}^{N}\) with a T-periodic solution \(u(t)=u(t+T)\) and introduce an infinitesimal perturbation \(\Delta x_{0}\) to the trajectory \(u(t)\) at time \(t=0\). This perturbation evolves according to the linearised equation of motion:

$$ \frac{\mathrm{d}}{\mathrm{d}t} \Delta x = Df\bigl(u(t)\bigr) \Delta x ,\quad \Delta x(0) = \Delta x_{0}. $$

Here \(Df(u)\) denotes the Jacobian of f evaluated along u. Introducing a time-independent isochronal phase shift Δϑ as \(\vartheta(u(t)+\Delta x(t))-\vartheta(u(t))\), we have to first order in Δx that

$$ \Delta\vartheta= \bigl\langle Q(t), \Delta x(t) \bigr\rangle , $$
(25)

where \(\langle\cdot,\cdot\rangle\) defines the standard inner product, and \(Q=\nabla_{u} \vartheta\) is the gradient of ϑ evaluated at \(u(t)\). Taking the time-derivative of (25) gives

$$ \biggl\langle \frac{\mathrm{d}}{\mathrm{d}t} Q, \Delta x \biggr\rangle = - \biggl\langle Q, \frac{\mathrm{d}}{\mathrm{d}t} \Delta x \biggr\rangle = - \bigl\langle Q, Df(u) \Delta x \bigr\rangle = - \bigl\langle Df^{T}(u)Q, \Delta x \bigr\rangle . $$

Since the above equation must hold for arbitrary perturbations, we see that the gradient \(Q=\nabla_{u} \vartheta\) satisfies the linear equation

$$ \frac{\mathrm{d}}{\mathrm{d}t} Q = D(t) Q ,\quad D(t) = - Df^{T}\bigl(u(t)\bigr) , $$
(26)

subject to the boundary conditions

$$\bigl\langle \nabla_{u(0)} \vartheta, f\bigl(u(0)\bigr) \bigr\rangle =1 \quad\mbox{and}\quad Q(t)=Q(t+T). $$

The first condition simply guarantees that \({\mathrm {d}\vartheta}/{\mathrm {d}t}=1\) (at any point on the periodic orbit), and the second enforces periodicity. Note that the notion of phase response can also be extended to time-delayed systems [207, 208].

In general, Eq. (26) must be solved numerically to obtain the iPRC, say, using the adjoint routine in XPPAUT [127] or MatCont [209]. However, for the case of a nonlinear IF model, defined by (1), the PRC is given explicitly by

$$ Q(\vartheta) = \frac{1}{I+f \circ\varPsi^{-1}(\vartheta)},\quad \varPsi (v) = \tau\int_{v_{{R}}}^{v} \frac{\mathrm{d}v'}{I+f(v')} , $$
(27)

where the period of oscillation is found by solving \(\varPsi(v_{\mathrm{th}})=T\) (and the response function is valid for finite perturbations). For example, for the quadratic IF (QIF) model, obtained from (1) with the choice \(f(v)=v^{2}\), and taking the limit \(v_{\mathrm{th}} \rightarrow\infty\) and \(v_{{R}} \rightarrow-\infty\) we find \(Q(\vartheta) = \sin^{2}(\vartheta\pi/T)/I\) with \(T=\pi\tau/\sqrt{I}\), recovering the iPRC expected of an oscillator close to a SNIC bifurcation [210, 211]. For a comprehensive discussion of iPRCs for various common oscillator models see the excellent survey article by Brown et al. [201]. The iPRC for planar piece-wise linear IF models can also be computed explicitly [58], although we do not discuss this further here. In Fig. 13 we show the numerically computed iPRCs for the Hodgkin–Huxley model, the Wilson cortical model and the Morris–Lecar model previously discussed in Sect. 2. For completeness we note that the iPRC for a planar oscillator close to a supercritical Hopf bifurcation has the shape \((-\sin(2 \pi t/T),\cos(2 \pi t/T))\) for an orbit of shape \((\cos (2 \pi t/T), \sin(2 \pi t/T))\).
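As a concrete illustration of solving the adjoint problem, the sketch below implements the standard backward-integration approach for Eq. (26), using the FitzHugh–Nagumo model as a stand-in; the parameter values are illustrative and scipy is assumed to be available. The periodic adjoint solution is attracting in backward time, so integrating \({\mathrm{d}Q}/{\mathrm{d}t}=-Df^{T}(u(t))Q\) backwards over several periods and then imposing the normalisation \(\langle Q, f(u)\rangle=1\) yields the iPRC.

```python
import numpy as np
from scipy.integrate import solve_ivp

# FitzHugh-Nagumo with illustrative parameters in the oscillatory regime.
I, a, b, mu = 1.0, 0.7, 0.8, 0.08

def f(t, x):
    v, w = x
    return [v - v**3 / 3 - w + I, mu * (v + a - b * w)]

def Df(x):
    v, w = x
    return np.array([[1 - v**2, -1.0], [mu, -mu * b]])

# 1. Relax onto the limit cycle and estimate the period T from
#    upward crossings of the section w = <w>.
sol = solve_ivp(f, (0, 500), [0.1, 0.1], dense_output=True, rtol=1e-9, atol=1e-9)
t = np.linspace(400, 500, 200001)
w = sol.sol(t)[1]
sec = np.mean(w)
cross = t[:-1][(w[:-1] < sec) & (w[1:] >= sec)]
T = np.mean(np.diff(cross))
u = lambda s: sol.sol(cross[0] + (s % T))      # one period of the orbit

# 2. Integrate the adjoint equation (26) backwards over several periods;
#    the periodic solution Q is attracting in backward time.
adj = solve_ivp(lambda s, Q: -Df(u(s)).T @ Q, (5 * T, 0), [1.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-9)

# 3. Normalise so that <Q(0), f(u(0))> = 1.
Q0 = adj.sol(0.0)
Q0 /= Q0 @ np.array(f(0, u(0.0)))
print("period T =", round(T, 3), "; normalised iPRC at phase 0:", Q0)
```

The same recipe works for any smooth model for which the Jacobian is available; only `f` and `Df` need to be swapped out.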

Fig. 13

From left to right: phase response curves for the Hodgkin–Huxley model, the Wilson model and the Morris–Lecar model corresponding to the discussion in Sect. 2. Orbits are in blue and iPRCs in red

Having defined the phase in some neighbourhood of the limit cycle, we can write (24) as

$$\frac{\mathrm{d}}{\mathrm{d}t} \vartheta(x) = \biggl\langle \nabla _{x} \vartheta, \frac {\mathrm{d} }{\mathrm{d}t} {x} \biggr\rangle . $$

For \({\mathrm {d}x}/{\mathrm {d}t}=f(x)\) this gives \(\langle\nabla_{x} \vartheta, f(x) \rangle = 1\). We now consider the effect of a small external periodic force on the self-sustained oscillations as in (17), with \(g(x,t)=g(x,t+\Delta)\), where in general Δ is different from T. For weak perturbations (\(\epsilon\ll1\)) the state point will leave Γ, though it will stay in some neighbourhood, which we denote by \(\mathcal{U}\). We extend the phase off cycle using isochronal coordinates so that \(\langle\nabla_{x} \vartheta, f(x) \rangle= 1\) holds for any point \(x \in\mathcal{U}\). For a perturbed oscillator, in these coordinates we have

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\vartheta} = \biggl\langle \nabla_{x} \vartheta, \frac {\mathrm{d}}{\mathrm{d}t} {x} \biggr\rangle = \bigl\langle \nabla_{x} \vartheta, f(x) + \epsilon g(x,t) \bigr\rangle = 1 + \epsilon \bigl\langle \nabla_{x} \vartheta, g(x,t) \bigr\rangle . $$

As a first approximation we evaluate the right hand side of the above on the limit cycle to get an equation for the phase dynamics:

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\vartheta} = 1 + \epsilon I(\vartheta,t),\quad I(\vartheta ,t) = \bigl\langle Q\bigl(u(\vartheta)\bigr) , g\bigl(u(\vartheta),t\bigr) \bigr\rangle . $$
(28)

The phase dynamics (28) is still very hard to analyse, and as such it is common to resort to a further simplification, namely averaging. First let us introduce a rotating phase \(\psi=\vartheta- Tt/\Delta\), in which case (28) becomes

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\psi} = -\delta+ \epsilon I(\psi+T t/\Delta,t),\quad \delta= T/\Delta-1. $$

If both ϵ and δ are small then \({\mathrm {d}\psi}/{\mathrm {d}t} \simeq0\) and ψ evolves slowly. Thus we may set \(\psi(s) \simeq\psi (t)\) for \(s \in[t, t+T]\). Averaging the above over one period gives

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\psi} \simeq-\delta+ \epsilon H(\psi),\quad H(\psi) = \frac{1}{T} \int_{0}^{T} \bigl\langle Q \bigl(u(\psi+ s)\bigr), g \bigl(u(\psi+ s),s\bigr) \bigr\rangle \,\mathrm{d}s , $$
(29)

where we have used the result that \(I(\psi+s,t)=I(\psi+s+T,t+T)\). The function \(H(\psi)\) is T-periodic and can be written as a Fourier series \(H(\psi) = \sum_{n} H_{n} \mathrm {e}^{2 \pi i n \psi/T}\), with the simplest example of an averaged phase dynamics being

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\psi} = -\delta+ \epsilon\sin \psi, $$

which is called the Adler equation [212]. If we denote the minimum and maximum of H by \(H_{\mathrm{min}}\) and \(H_{\mathrm{max}}\), respectively, then for a phase-locked \(1:1\) state defined by \(\frac{\mathrm{d}}{\mathrm{d}t} {\psi}=0\) we require \(\epsilon H_{\mathrm{min}} < \delta< \epsilon H_{\mathrm{max}}\). In this case there are two fixed points \(\psi_{\pm}\) defined by \(H(\psi_{\pm}) = \delta\). One of these is unstable (say \(\psi_{-}\), so that \(\epsilon H'(\psi _{-})>0\)) and the other stable (\(\psi_{+}\), with \(\epsilon H'(\psi_{+})<0\)). This gives rise to a rotating solution with constant rotation frequency so that \(\vartheta=(1+\delta)t+ \psi_{+}\). The two solutions coalesce in a saddle-node bifurcation when \(\delta= \epsilon H_{\mathrm{min}}\) or \(\delta= \epsilon H_{\mathrm{max}}\) (or equivalently when \(H'(\psi _{\pm})=0\)). In the case of the Adler model the parameter region for phase locking is a triangular wedge bounded by the lines \(\epsilon= |\delta|\), a so-called Arnol’d tongue. Outside of this tongue solutions drift (they cannot lock to the forcing period) according to (29), and the system evolves quasi-periodically. We treat weakly coupled phase oscillators in Sect. 6.
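The locking boundary of the Adler equation is easy to verify numerically: inside the tongue \(|\delta|<\epsilon\) the rotating phase settles to a fixed point and the mean drift vanishes, while outside it drifts at a nonzero rate. A minimal sketch (parameter values are illustrative, and scipy is assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Locking test for the Adler equation dpsi/dt = -delta + eps*sin(psi):
# measure the mean drift of psi; it vanishes inside the tongue |delta| < eps.
eps = 0.3
for delta in [-0.5, -0.2, 0.0, 0.2, 0.5]:
    sol = solve_ivp(lambda t, p: -delta + eps * np.sin(p), (0, 10000), [0.1],
                    rtol=1e-8, atol=1e-8)
    drift = (sol.y[0, -1] - sol.y[0, 0]) / sol.t[-1]
    print(f"delta={delta:+.1f}: mean drift {drift:+.4f}",
          "(locked)" if abs(drift) < 1e-3 else "(drifting)")
```

For \(|\delta|>\epsilon\) the measured drift approaches the known rate \(-\operatorname{sgn}(\delta)\sqrt{\delta^{2}-\epsilon^{2}}\).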

6 Weakly Coupled Phase Oscillator Networks

The theory of weakly coupled oscillators [5, 213] is now a standard tool of dynamical systems theory and has been used by many authors to study oscillatory neural networks; see for example [213–217]. The book by Hoppensteadt and Izhikevich provides a very comprehensive review of this framework [43], which can also be adapted to study networks of relaxation oscillators (in some singular limit) [146, 218].

Consider, for illustration, a system of interacting limit-cycle oscillators (8). Following the method in Sect. 5.5, similar to (29) we obtain the network’s phase dynamics in the form

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\theta_{i}} = \omega_{i} + \epsilon \sum_{j=1}^{N} w_{ij} \bigl\langle Q_{i}\bigl(u_{i}(\theta_{i})\bigr) , G \bigl(u_{j}(\theta_{j})\bigr) \bigr\rangle , $$
(30)

where the frequency \(\omega_{i}\) allows for the fact that oscillators are not identical and, for this reason, we will assume that \(\theta_{i}\in [0,2\pi)\). Precisely this form of network model was originally suggested by Winfree to describe populations of coupled oscillators. The Winfree model [219] assumes a separation of time scales so that an oscillator can be solely characterised by its phase on cycle (fast attraction to cycle) and is described by the network equations,

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\theta}_{i} = \omega_{i} + \epsilon R( \theta _{i}) \frac {1}{N} \sum_{j=1}^{N} P(\theta_{j}) , $$
(31)

describing a globally coupled network with a biologically realistic PRC R and pulsatile interaction function P. Using a mixture of analysis and numerics Winfree found that with large N there was a transition to macroscopic synchrony at a critical value of the heterogeneity of the population. Following this, Kuramoto [5] introduced a simpler model with interactions mediated by phase differences, and showed how the transition to collective synchronisation could be understood from a more mathematical perspective. For an excellent review of the Kuramoto model see [220] and [221].
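The transition can be reproduced in a few lines. The sketch below simulates the Kuramoto model (the sinusoidally coupled, phase-difference counterpart of (31)) with Lorentzian-distributed frequencies, for which the critical coupling is known to be twice the distribution's half-width; the population size, time step and coupling values are illustrative.

```python
import numpy as np

# Kuramoto model with global coupling K and Lorentzian frequencies
# (half-width 0.5, so the onset of synchrony is expected near K_c = 1).
rng = np.random.default_rng(0)
N, dt, steps = 1000, 0.05, 4000
omega = 0.5 * rng.standard_cauchy(N)

for K in [0.5, 1.0, 1.5, 2.0]:
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))      # complex order parameter
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    print(f"K={K:.1f}: order parameter r = {np.abs(np.mean(np.exp(1j * theta))):.3f}")
```

Below \(K_{c}\) the order parameter fluctuates at the \(O(N^{-1/2})\) level, while above it r grows towards the partially synchronised branch.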

The natural way to obtain a phase-difference model from (30) is, as in Sect. 5.5, to average over one period of oscillation. For simplicity let us assume that all the oscillators are identical, and \(\omega_{i}=\omega\) for all i, in which case we find that

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\theta}_{i} = \omega+ \epsilon\sum _{j=1}^{N} w_{ij} H(\theta_{j}- \theta_{i}), $$
(32)

where

$$ H(\psi) = \frac{1}{2\pi}\int_{0}^{2\pi} \bigl\langle Q \bigl(u(s)\bigr), G \bigl(u(\psi + s)\bigr) \bigr\rangle \,\mathrm{d}s . $$
(33)

The 2π-periodic function H is referred to as the phase interaction function (or coupling function). If we write complex Fourier series for Q and G as

$$Q(t)=\sum_{n\in \mathbb {Z}} Q_{n} \mathrm {e}^{i n t} \quad \mbox{and}\quad G(t)=\sum_{n\in \mathbb {Z}} G_{n} \mathrm {e}^{ i n t}, $$

respectively, then

$$ H(\psi) = \sum_{n\in \mathbb {Z}} H_{n} \mathrm {e}^{i n \psi} $$
(34)

with \(H_{n} = \langle Q_{-n}, G_{n} \rangle\). Note that a certain caution has to be exercised in applying averaging theory. In general, one can only establish that a solution of the unaveraged equations is ϵ-close to a corresponding solution of the averaged system for times of \(O(\epsilon^{-1})\). No such problem arises in the case of hyperbolic fixed points corresponding to phase-locked states.

When describing a piece of cortex or a central pattern generator circuit with a set of oscillators, the biological realism of the model typically resides in the phase interaction function. The simplest example is \(H(\psi)=\sin(\psi)\), which when combined with a choice of global coupling defines the well-known Kuramoto model [5]. However, to model realistic neural networks one should calculate (33) directly, using knowledge of the single neuron iPRC and the form of interaction. As an example consider a synaptic coupling, described in Sect. 2.5, that can be written in the form \(G(u(\psi)) = \sum_{m} \eta(\psi+m 2\pi)\), and a single neuron model for which the iPRC in the voltage variable v is given by R (say experimentally or from numerical investigation). In this case

$$ H(\psi) = \int_{0}^{\infty}R (2 \pi s-\psi) \eta(2 \pi s) \,\mathrm {d}s . $$
(35)

If instead we were interested in diffusive (gap junction) coupling then we would have

$$ H(\psi) = \frac{1}{2 \pi}\int_{0}^{2 \pi} R(s) \bigl[v(s+\psi)-v(s)\bigr] \,\mathrm{d}s .$$

For the HH model \(R(t)\) is known to have a shape like \(-\sin(t)\) for a spike centred on the origin (see Fig. 13). Making the further choice that \(\eta(t)=\alpha^{2} t \mathrm {e}^{-\alpha t}\) then (35) can be evaluated as

$$ H(\psi) = \frac{[1-(1/\alpha)^{2}] \sin(\psi)-2/\alpha\cos(\psi)}{2 \pi[1+(1/\alpha)^{2}]^{2}}. $$
(36)

In the particular case of two oscillators with reciprocal coupling and \(\omega=1\) then

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {\theta}_{1} &= 1 + \epsilon H(\theta _{2}-\theta_{1}) , \\ \frac{\mathrm{d}}{\mathrm{d}t} {\theta}_{2} &= 1 + \epsilon H(\theta _{1}-\theta_{2}) , \end{aligned}$$

and we define \(\varphi:=\theta_{2}(t)-\theta_{1}(t)\). A phase-locked solution has a constant phase difference \(\varphi\) that is a zero of the (odd) function

$$K(\varphi)=\epsilon\bigl[H(-\varphi)-H(\varphi)\bigr]. $$

A given phase-locked state is then stable provided that \(K'(\varphi)<0\). Note that by symmetry both the in-phase (\(\varphi=0\)) and the anti-phase (\(\varphi =\pi\)) states are guaranteed to exist. For the form of the phase interaction function given by (36), the stability of the synchronous solution is governed by the sign of \(K'(0)\):

$$ \operatorname {sgn}K'(0) = \operatorname {sgn}\bigl\{ -\epsilon H'(0)\bigr\} = \operatorname {sgn}\bigl\{ -\epsilon\bigl[1-(1/\alpha)^{2}\bigr] \bigr\} . $$

Thus for inhibitory coupling (\(\epsilon< 0\)) synchronisation will occur if \(1/\alpha> 1\), namely when the synapse is slow (\(\alpha\rightarrow0\)). It is also a simple matter to show that the anti-synchronous solution (\(\varphi =\pi \)) is stable for a sufficiently fast synapse (\(\alpha \rightarrow\infty\)). It is also possible to develop a general theory for the existence and stability of phase-locked states in larger networks than just a pair.
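These stability statements are straightforward to check numerically. The sketch below evaluates the integral (35) with \(R(t)=-\sin(t)\) and \(\eta(t)=\alpha^{2} t \mathrm{e}^{-\alpha t}\) by quadrature, compares it against the closed form (36), and reads off the stability of synchrony from the sign of \(H'(0)\); scipy is assumed and the values of α are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# H(psi) from Eq. (35) with R(t) = -sin(t) and eta(t) = alpha^2 t e^{-alpha t},
# compared with the closed form (36). Synchrony is stable when eps*H'(0) > 0.
def H_numeric(psi, alpha):
    g = lambda s: -np.sin(2 * np.pi * s - psi) * \
                  alpha**2 * (2 * np.pi * s) * np.exp(-2 * np.pi * alpha * s)
    return quad(g, 0, 30, limit=300)[0]      # the integrand decays rapidly

def H_closed(psi, alpha):
    ia = 1 / alpha
    return ((1 - ia**2) * np.sin(psi) - 2 * ia * np.cos(psi)) \
           / (2 * np.pi * (1 + ia**2)**2)

for alpha in [0.5, 2.0]:
    print(f"alpha={alpha}: H(1) numeric {H_numeric(1.0, alpha):+.6f},",
          f"closed form {H_closed(1.0, alpha):+.6f}")
    dH0 = (H_closed(1e-6, alpha) - H_closed(-1e-6, alpha)) / 2e-6
    print(f"  H'(0)={dH0:+.4f}: synchrony stable for",
          "inhibition (eps<0)" if dH0 < 0 else "excitation (eps>0)")
```

For the slow synapse (\(\alpha=0.5\)) the sign of \(H'(0)\) is negative, so synchrony is stabilised by inhibitory coupling, in line with the discussion above.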

6.1 Phase, Frequency and Mode Locking

Now suppose we have a general population of \(N\geq2\) coupled phase oscillators

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\theta}_{j}=f_{j}(\theta_{1}, \ldots ,\theta_{N}), $$

described by phases \(\theta_{j}\in \mathbb {R}/2\pi \mathbb {Z}\). For a particular continuous choice of phases \(\theta(t)\) for the trajectory one can define the frequency of the jth oscillator as

$$\varOmega_{j}=\lim_{T\rightarrow\infty} \frac{1}{T} \bigl[ \theta _{j}(T)-\theta_{j}(0)\bigr]. $$

This limit will converge under fairly weak assumptions on the dynamics [222], though it may vary for different attractors in the same system, for different oscillators j and in some cases it may vary even for different trajectories within the same attractor.

We say two oscillators j and k are phase locked with ratio \((n:m)\) for \((n,m)\in \mathbb {Z}^{2}\setminus(0,0)\) with no common factors of n and m, if there is an \(M>0\) such that

$$\bigl\vert n\theta_{j}(t)-m\theta_{k}(t)\bigr\vert < M , $$

for all \(t>0\). The oscillators are frequency locked with ratio \((n:m)\) if

$$n\varOmega_{j}-m\varOmega_{k}=0. $$

If we say they are simply phase (or frequency locked) without explicit mention of the \((n:m)\) ratio, we are using the convention that they are \((1:1)\) phase (or frequency) locked. The definition of \(\varOmega_{j}\) means that if two oscillators are phase locked then they are frequency locked. The converse is not necessarily the case: two oscillators may be frequency locked but not phase locked if the phase difference \(n\theta_{j}(t)-m\theta_{k}(t)\) grows sublinearly with t.

For the special case of globally coupled networks (\(w_{ij}=1/N\) for the system (32)), the system is \(S_{N} \times \mathbb {T}\) equivariant. By topological arguments, maximally symmetric solutions describing synchronous, splay, and a variety of cluster states exist generically for weak coupling [118]. The system (32) with global coupling is in itself an interesting subject of study in that it is of arbitrarily high dimension N but is effectively determined by the single function \(H(\varphi)\) that is computable from a single pair of oscillators. The system (and variants thereof) has been productively studied in thousands of papers since the seminal work of Kuramoto [5].

6.2 Dynamics of General Networks of Identical Phase Oscillators

The collective dynamics of phase oscillators have been investigated for a range of regular network structures including linear arrays and rings with uni- or bi-directional coupling, e.g. [118, 120, 213, 223], and hierarchical networks [224]. In some cases the systems can be usefully investigated in terms of the permutation symmetries of (32), for example \(\mathbb {Z}_{N}\) or \(\mathbb {D}_{N}\) for uni- or bi-directionally coupled rings. In other cases a variety of approaches have been developed and adapted to particular structures, though these have not in all cases been specifically applied to oscillator networks; some of these approaches are discussed in Sect. 3.3.

We recall that the form of the coupling in (32) is special in the sense that it assumes the interactions between two oscillators are independent of any third—this is called pairwise coupling [118, 120]. If there are degeneracies such as

$$ \sum_{k=0}^{m-1} H \biggl( \varphi+\frac{2\pi k}{m} \biggr)=0 , $$
(37)

which can appear when some of the Fourier components of H are zero, then the dynamics itself can be degenerate. For example [225], Theorem 7.1 in [118] shows that if H satisfies (37) for some \(m\geq2\) and N is a multiple of m then (32), with global coupling, will have m-dimensional invariant tori in phase space that are foliated by neutrally stable periodic orbits. This degeneracy will disappear on including either non-pairwise coupling or small but nonzero Fourier components in the expansion of H; as noted in [226], the latter will typically be present for interacting oscillators even close to a Hopf bifurcation.

We examine in more detail some of the phase-locked states that can arise in weakly coupled networks of identical phase oscillators described by (32). We define a \(1:1\) phase-locked solution to be of the form \(\theta_{i}(t)=\varphi_{i}+ \varOmega t\), where \(\varphi_{i}\) is a constant phase and Ω is the collective frequency of the coupled oscillators. Substitution into the averaged system (32) gives

$$ \varOmega= \omega+{\epsilon}\sum_{j=1}^{N} w_{ij} H(\varphi_{j} - \varphi_{i}) $$
(38)

for \(i=1,\ldots,N\). These N equations determine the collective frequency Ω and \(N-1\) relative phases with the latter independent of ϵ.

It is interesting to compare the weak-coupling theory for phase-locked states with the analysis of LIF networks from Sect. 4.3. Equation (13) has an identical structure to that of Eq. (38) (for \(I_{i} = I\) for all i), so that the classification of solutions using group theoretic methods is the same in both situations. There are, however, a number of significant differences between phase-locking equations (38) and (13). First, Eq. (13) is exact, whereas Eq. (38) is valid only to \(O(\epsilon)\) since it is derived under the assumption of weak coupling. Second, the collective period of oscillations Δ must be determined self-consistently in Eq. (13).

In order to analyse the local stability of a phase-locked solution \(\varPhi=(\varphi_{1},\ldots,\varphi_{N})\), we linearise the system by setting \(\theta_{i}(t)={\varphi}_{i}+\varOmega t+ \widetilde{\theta}_{i}(t)\) and expand to first-order in \(\widetilde{\theta}_{i}\):

$$ \frac{\mathrm{d}}{\mathrm{d}t} \widetilde{\theta}_{i}= {\epsilon} \sum _{j=1}^{N}\widehat {\mathcal {H}}_{ij}(\varPhi) \widetilde{\theta}_{j} , $$

where

$$ \widehat{\mathcal{H}}_{ij}(\varPhi)=w_{ij}H'( \varphi_{j}-\varphi_{i}) -\delta_{i,j}\sum _{k=1}^{N} w_{ik}H'( \varphi_{k}-\varphi_{i}) , $$

and \(H'(\varphi) = \mathrm{d}H(\varphi)/\mathrm{d}\varphi\). One of the eigenvalues of the Jacobian \(\widehat{\mathcal {H}}\) is always zero, and the corresponding eigenvector points in the direction of the uncoupled flow, that is, \((1,1,\ldots,1)\). The phase-locked solution will be stable provided that all other eigenvalues have a negative real part. We note that the Jacobian has the form of a graph-Laplacian mixing both anatomy and dynamics, namely it is the graph-Laplacian of the matrix with components \(-w_{ij}H'(\varphi_{j}-\varphi_{i})\).
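The stability test is easily mechanised for any given network: build \(\widehat{\mathcal{H}}\) from the weights and \(H'\), and inspect the nontrivial eigenvalues. A minimal sketch (the choice \(H(\varphi)=\sin\varphi\), the global coupling and the tested state are illustrative):

```python
import numpy as np

# Linear stability of a phase-locked state of (32): assemble the Jacobian
# H_hat_ij = w_ij H'(phi_j - phi_i) - delta_ij sum_k w_ik H'(phi_k - phi_i)
# and check that all eigenvalues except the trivial zero mode are negative.
eps, N = 1.0, 6
w = np.ones((N, N)) / N                       # global coupling (illustrative)
Hp = lambda phi: np.cos(phi)                  # H(phi) = sin(phi), so H' = cos
phi = np.zeros(N)                             # test the synchronous state

J = w * Hp(phi[None, :] - phi[:, None])       # w_ij H'(phi_j - phi_i)
J = eps * (J - np.diag(J.sum(axis=1)))        # subtract row sums on the diagonal
evals = np.sort(np.linalg.eigvals(J).real)
print("real parts of the eigenvalues:", np.round(evals, 4))
print("stable (excluding the zero mode):", bool(np.all(evals[:-1] < -1e-10)))
```

For this synchronous example the nontrivial eigenvalues all equal \(-\epsilon H'(0)\), consistent with the result quoted in Sect. 6.2.1 below.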

6.2.1 Synchrony

Synchrony (more precisely, exact phase synchrony), where \(\theta _{1}=\theta_{2}= \cdots= \theta_{N-1}=\theta_{N} = \varOmega t+ t_{0}\) for some fixed frequency Ω, is a classic example of a phase-locked state. Substitution into (32), describing a network of identical oscillators, shows that this solution (which has symmetry \(S_{N}\)) requires

$$ \varOmega= \omega+ \epsilon H(0)\sum_{j=1}^{N} w_{ij} \quad \forall i . $$

One way for this to be true for all i is if \(H(0)=0\), which is the case, say, for \(H(\theta)=\sin(\theta)\) or for diffusive coupling (which is linear in the difference between the two state variables, and so vanishes at zero phase difference). The existence of synchronous solutions is also guaranteed if \(\sum_{j=1}^{N} w_{ij}\) is independent of i. This would be the case for global coupling where \(w_{ij}=1/N\), so that the system has permutation symmetry.

If the synchronous solution exists then the Jacobian is given by \(-\epsilon H'(0)\mathcal{L}\) where \(\mathcal{L}\) is the graph-Laplacian with components \(\mathcal{L}_{ij}=\delta_{ij}\sum_{k} w_{ik}-w_{ij}\). We note that \(\mathcal{L}\) has one zero eigenvalue, with eigenvector \((1,1,\ldots,1,1)\). Hence if all the other eigenvalues of \(\mathcal{L}\) lie on one side of the imaginary axis then stability is solely determined by the sign of \(\epsilon H'(0)\). This would be the case for a weighted connectivity matrix with all positive entries since the graph-Laplacian in this instance would be positive semi-definite. For example, for global coupling we have \(\mathcal{L}_{ij} = \delta_{ij}-N^{-1}\), and the (\(N-1\) degenerate) eigenvalue is +1. Hence the synchronous solution will be stable provided \(\lambda= -\epsilon H'(0)<0\).

6.2.2 Asynchrony

Another example of a phase-locked state is the purely asynchronous solution whereby all phases are uniformly distributed around the unit circle. This is sometimes referred to as a splay state, a discrete rotating wave with \(\mathbb{Z}_{N}\) symmetry, or a splay-phase state, and can be written \({\mathrm {d}\theta_{i}}/{\mathrm {d}t}=\varOmega\) with \(\theta_{i+1}-\theta_{i}=2\pi/N\) for all i. Like the synchronous solution it will be present but not necessarily stable in networks with global coupling, with an emergent frequency that depends on H:

$$ \varOmega= \omega+ \epsilon\frac{1}{N}\sum_{j=1}^{N} H \biggl(\frac {2\pi j}{N} \biggr). $$

In this case the Jacobian takes the form

$$ \widehat{\mathcal{H}}_{nm}(\varPhi)= \frac{\epsilon}{N} [A_{n-m} -\delta_{nm} \varGamma ] , $$

where \(\varGamma=\sum_{k} H'(2\pi k/N)\) and \(A_{n}=H'(2\pi n/N)\). Hence the eigenvalues are given by \(\lambda_{p} = \epsilon[\nu_{p} -\varGamma]/N\), \(p=0,\ldots,N-1\) where \(\nu_{p}\) are the eigenvalues of \(A_{n-m}\): \(\sum_{m} A_{n-m} a _{m}^{p} = \nu_{p} a_{n}^{p}\), where \(a_{n}^{p}\) denote the components of the pth eigenvector. This has solutions of the form \(a_{n}^{p} = \mathrm {e}^{-2 \pi i np/N}\) so that \(\nu_{p} = \sum_{m} A_{m} \mathrm {e}^{2 \pi i m p/N}\), giving

$$ \lambda_{p} = \frac{\epsilon}{N} \sum_{m=1}^{N} H' \biggl(\frac{2\pi m}{N} \biggr) \bigl[\mathrm {e}^{2 \pi i m p/N}-1 \bigr] , $$

and the splay state will be stable if \(\operatorname {Re}(\lambda_{p}) < 0\) \(\forall p \neq0\).

In the large N limit \(N \rightarrow\infty\) we have the useful result that (for global coupling) network averages may be replaced by time averages:

$$ \lim_{N \rightarrow\infty} \frac{1}{N} \sum _{j=1}^{N} F \biggl(\frac {2\pi j}{N} \biggr) = \frac{1}{2\pi} \int_{0}^{2\pi} F(t) \,\mathrm{d}t = F_{0} , $$

for some 2π-periodic function \(F(t)=F(t+2\pi)\) (which can be established using a simple Riemann sum argument), with a Fourier series \(F(t) = \sum_{n} F_{n} \mathrm {e}^{i n t}\). Hence in the large N limit the collective frequency of a splay state (global coupling) is given by \(\varOmega= \omega+ \epsilon H_{0}\), with eigenvalues

$$ \lambda_{p} = \frac{\epsilon}{2\pi} \int _{0}^{2\pi} H'(t) \mathrm {e}^{ i p t} \, \mathrm{d}t = - i p \epsilon H_{-p} . $$

Hence a splay state is stable if \(-p \epsilon \operatorname {Im}H_{p} < 0\), where we have used the fact that since \(H(\theta)\) is real, then \(\operatorname {Im}H_{-p}=- \operatorname {Im}H_{p}\). As an example consider the case \(H(\theta) = \theta(\pi-\theta )(\theta -2 \pi)\) for \(\theta\in[0,2\pi)\) and \(\epsilon>0\) (where H is periodically extended outside \([0,2\pi)\)). It is straightforward to calculate the Fourier coefficients (34) as \(H_{n}=6i/n^{3}\), so that \(-p \epsilon \operatorname {Im}H_{p} = - 6 \epsilon/p^{2} < 0\) \(\forall p \neq0\). Hence the asynchronous state is stable. If we flip any one of the coefficients \(H_{m} \rightarrow- H_{m}\) then \(\operatorname {Re}\lambda_{m} >0\) and the splay state will develop an instability to an eigenmode that will initially destabilise the system in favour of an m-cluster; see Fig. 14 and the numerical check below.

Fig. 14

a A phase interaction given by \(\epsilon H(\theta) = \theta (\pi-\theta)(\theta-2 \pi)\) for \(\theta\in[0,2\pi)\) with complex Fourier series coefficients given by \(H_{n}=6i/n^{3}\). The remaining panels show the effect of flipping the sign of just one of the coefficients, namely \(H_{m} \rightarrow-H_{m}\). b \(m=1\), and the asynchronous solution will destabilise in favour of the synchronous solution since \(H'(0)>0\). c \(m=2\), and the asynchronous solution will destabilise in favour of a two-cluster. d \(m=3\), and the asynchronous solution will destabilise in favour of a three-cluster. Note that only small changes in the shape of H, as seen in panels c–d, can lead to a large change in the emergent network dynamics
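The eigenvalue formula above, and the effect of flipping a single Fourier coefficient, can be checked directly; note that flipping \(H_{m}\rightarrow -H_{m}\) (together with \(H_{-m}\), to keep H real) amounts to adding \((24/m^{3})\sin(m\theta)\) to the cubic H. A minimal sketch with illustrative N and ϵ:

```python
import numpy as np

# Splay-state eigenvalues lambda_p = (eps/N) sum_m H'(2 pi m/N)(e^{2 pi i m p/N}-1)
# for H(theta) = theta(pi-theta)(theta-2 pi), whose Fourier coefficients are
# H_n = 6i/n^3. Flipping H_m (and H_{-m}) adds (24/m^3) sin(m theta) to H.
N, eps = 100, 1.0
grid = 2 * np.pi * np.arange(1, N + 1) / N
Hp = lambda th: -3 * th**2 + 6 * np.pi * th - 2 * np.pi**2     # H'(theta)

def splay_eigs(Hprime):
    m = np.arange(1, N + 1)
    return np.array([eps / N * np.sum(Hprime(grid) *
                     (np.exp(2j * np.pi * m * p / N) - 1)) for p in range(N)])

lam = splay_eigs(Hp)
print("original H: max Re lambda_p (p != 0) =", f"{lam.real[1:].max():+.4f}")
for mflip in (1, 2, 3):
    Hp_flip = lambda th, mflip=mflip: Hp(th) + (24 / mflip**2) * np.cos(mflip * th)
    lam = splay_eigs(Hp_flip)
    print(f"flip H_{mflip}: max Re lambda_p = {lam.real[1:].max():+.4f}",
          f"at p = {1 + int(np.argmax(lam.real[1:]))}")
```

The original interaction gives \(\operatorname{Re}\lambda_{p}<0\) for all \(p\neq0\), while flipping \(H_{m}\) produces a positive eigenvalue at \(p=m\), matching the m-cluster instabilities shown in Fig. 14.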

6.2.3 Clusters for Globally Coupled Phase Oscillators

For reviews of the stability of cluster states (in which subsets of the oscillator population synchronise, with oscillators belonging to different clusters behaving differently) we refer the reader to [118, 120, 227]; here we use the notation of [228]. If a group of N oscillators is neither fully synchronised nor desynchronised it may be clustered. We say \(\mathcal {A}=\{ A_{1},\ldots,A_{M}\}\) is an M-cluster partition of \(\{1,\ldots,N\}\) if

$$ \{1,\ldots,N\}=\bigcup_{p=1}^{M} A_{p} , $$

where \(A_{p}\) are pairwise disjoint sets (\(A_{p}\cap A_{q}=\emptyset\) if \(p \neq q\)). Note that if \(a_{p}=\vert A_{p}\vert \) then

$$ \sum_{p=1}^{M} a_{p} = N . $$

One can refer to this as a cluster of type \((a_{1},\ldots,a_{M})\). It is possible to show that any clustering can be realised as a stable periodic orbit of the globally coupled phase oscillator system [228] for a suitable choice of phase interaction function; more precisely, for any N and any given M-cluster partition \(\mathcal {A}\) of \(\{1,\ldots, N\}\) there is a coupling function H for the system (32), with global coupling, such that there is a linearly stable periodic orbit realising that partition (and all permutations of it). See also [229], where the authors consider clustering in this system where \(H(\varphi)=\sin M\varphi\). More generally, there are very many invariant subspaces corresponding to spatio-temporal clustering that we can characterise in the following theorem.
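The special case \(H(\varphi)=\sin M\varphi\) of [229] makes for a compact numerical illustration: writing \(\phi_{i}=M\theta_{i}\) turns (32) with global coupling into a Kuramoto-type system for \(\phi_{i}\), so the phases \(\theta_{i}\) settle onto M equally spaced clusters selected by the initial condition. A sketch with illustrative parameter values:

```python
import numpy as np

# Cluster formation in (32) with global coupling and H(phi) = sin(M*phi):
# random initial phases settle onto M equally spaced clusters.
rng = np.random.default_rng(3)
N, M, eps, omega, dt, steps = 40, 3, 1.0, 1.0, 0.01, 30000
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    z = np.mean(np.exp(1j * M * theta))          # order parameter of M*theta
    theta += dt * (omega + eps * np.abs(z) * np.sin(np.angle(z) - M * theta))

rel = np.sort((theta - theta[0]) % (2 * np.pi))  # phases relative to oscillator 0
labels = np.round(rel * M / (2 * np.pi)).astype(int) % M
print("cluster sizes:", np.bincount(labels, minlength=M))
print("relative phases (multiples of 2*pi/M expected):", np.round(rel, 2))
```

Which partition is realised, and hence the cluster sizes, depends on the random initial condition, in line with the multiplicity of stable cluster states described above.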

Theorem 6.1

(Theorem 3.1 in [118])

The subsets of \(\mathbb {T}^{N}\) that are invariant for (32), with global coupling, because of symmetries of \(S_{N}\times \mathbb {T}\) correspond to isotropy subgroups in the conjugacy class of

$$\varSigma_{k,m}:=(S_{k_{1}}\times\cdots\times S_{k_{\ell}})^{m} \times_{s} \mathbb {Z}_{m}, $$

where \(N=mk\), \(k=k_{1}+\cdots+k_{\ell}\) and \(\times_{s}\) denotes the semidirect product. The points with this isotropy have ℓm clusters that are split into groups of m clusters of size \(k_{i}\). The clusters within these groups are cyclically permuted by a phase shift of \(2\pi/m\). The number of isotropy subgroups in this conjugacy class is \(N!/[m(k_{1}!\cdots k_{\ell}!)]\).

It is a nontrivial problem to discover which of these subspaces contain periodic solutions. Note that the in-phase case corresponds to \(\ell =m=1\), \(k_{1}=N\) while splay phase corresponds to \(\ell=k_{1}=1\), \(m=N\). The stability of several classes of these solutions can be computed in terms of properties of \(H(\varphi)\); see for example Sect. 6.2.1 and Sect. 6.2.2 and for other classes of solution [118, 120, 228].

6.2.4 Generic Loss of Synchrony in Globally Coupled Identical Phase Oscillators

Bifurcation properties of the globally coupled oscillator system (32) on varying a parameter that affects the coupling \(H(\varphi)\) are surprisingly complicated because of the symmetries present in the system; see Sect. 3.6. In particular, the high multiplicity of the eigenvalues for loss of stability of the synchronous solution means:

  • Path-following numerical bifurcation programs such as AUTO, CONTENT, MatCont or XPPAUT need to be used with great care when applied to problems with \(N\geq3\) identical oscillators; these typically will not be able to find all solutions branching from one that loses stability.

  • A large number of branches with a range of symmetries may generically be involved in the bifurcation; indeed, there are branches with symmetries corresponding to all possible two-cluster states \(S_{k} \times S_{N-k}\).

  • Local bifurcations may have global bifurcation consequences owing to the presence of connections that are facilitated by the nontrivial topology of the torus [118, 230].

  • Branches of degenerate attractors such as heteroclinic attractors may appear at such bifurcations for \(N\geq4\) oscillators.

Hansel et al. [231] consider the system (32) with global coupling and phase interaction function of the form

$$ H(\varphi)=\sin(\varphi-\alpha)-r\sin(2\varphi) , $$
(39)

for fixed parameters \((r,\alpha)\); detailed bifurcation scenarios in the cases \(N=3,4\) are shown in [232]. As an example, Fig. 15 shows regions of stability of synchrony, splay-phase solutions and robust heteroclinic attractors, as discussed later in Sect. 7.

Fig. 15

Regions of stability for the globally coupled \(N=4\)-oscillator system (32) with phase interaction function (39) and parameters in the region \(r>0\), \(0<\alpha<\pi\) [232]. The narrow stripes show the region of stability of synchrony, while the wide stripes show the region of stability of the splay-phase solution. The pink shaded area shows a region of existence of a robust heteroclinic network, which is an attractor within the checkerboard region; the boundaries are described in [232]

6.3 Phase Waves

The phase-reduction method has been applied to a number of important biological systems, including the study of travelling waves in chains of weakly coupled oscillators that model processes such as the generation and control of rhythmic activity in central pattern generators (CPGs) underlying locomotion [233, 234] and peristalsis in vascular and intestinal smooth muscle [213]. Related phase models have been motivated by the observation that synchronisation and waves of excitation can occur during sensory processing in the cortex [235]. In the former case the focus has been on dynamics on a lattice and in the latter continuum models have been preferred. We now present examples of both these types of model, focusing on phase wave solutions [236].

Phase Waves: A Lattice Model

The lamprey is an eel-like vertebrate which swims by generating travelling waves of neural activity that pass down its spinal cord. The spinal cord contains about 100 segments, each of which is a simple half-centre neural circuit capable of generating alternating contraction and relaxation of the body muscles on either side of the body during swimming. In a seminal series of papers, Ermentrout and Kopell carried out a detailed study of the dynamics of a chain of weakly coupled limit-cycle oscillators [213, 237, 238], motivated by the known physiology of the lamprey spinal cord. They considered N phase oscillators arranged on a chain with nearest-neighbour anisotropic interactions, as illustrated in Fig. 16, and identified a travelling wave as a phase-locked state with a constant phase difference between adjacent segments. The intersegmental phase differences are defined as \(\varphi_{i} = \theta_{i+1} - \theta_{i}\). If \(\varphi_{i} < 0\) then the wave travels from head to tail whilst for \(\varphi_{i}>0\) the wave travels from the tail to the head. For a chain we set \(W_{ij} = \delta_{i-1,j}W_{-} + \delta_{i+1,j}W_{+}\) to obtain

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {\theta}_{1}&= \omega_{1} + W_{+} H( \theta_{2} -\theta_{1}) , \\ \frac{\mathrm{d}}{\mathrm{d}t} {\theta}_{i} &= \omega_{i} + W_{+} H( \theta _{i+1} -\theta_{i}) + W_{-} H (\theta_{i-1}- \theta_{i}),\quad i=2,\ldots,N-1 , \\ \frac{\mathrm{d}}{\mathrm{d}t} {\theta}_{N} &= \omega_{N} + W_{-} H(\theta _{N-1} -\theta_{N}) , \end{aligned}$$

where \(\theta_{i}\in \mathbb {R}/2\pi \mathbb {Z}\). Pairwise subtraction and substitution of \(\varphi_{i} = \theta_{i+1} - \theta_{i}\) leads to an \((N-1)\)-dimensional system for the phase differences,

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\varphi}_{i} = \Delta\omega_{i} + W_{+} \bigl[H(\varphi_{i+1})- H(\varphi_{i})\bigr] + W_{-} \bigl[H (- \varphi_{i})-H (-\varphi_{i-1})\bigr], $$

for \(i=1, \ldots, N-1\), with boundary conditions \(H(-\varphi_{0}) = 0 = H(\varphi_{N})\), where \(\Delta\omega_{i} = \omega_{i+1} - \omega_{i}\). There are at least two different mechanisms that can generate travelling wave solutions.

Fig. 16

A chain of N phase oscillators with phase differences \(\varphi_{i}=\theta _{i+1}-\theta _{i}\) and coupling functions \(H_{\pm}(\varphi) =W_{\pm}H(\varphi)\)

The first is based on the presence of a gradient of frequencies along the chain, that is, \(\Delta\omega_{i}\) has the same sign for all i, with the wave propagating from the high frequency region to the low frequency region. This can be established explicitly in the case of an isotropic, odd interaction function, \(W_{\pm}=1\) and \(H(\varphi) = -H(-\varphi)\), where we have

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\varphi}_{i} = \Delta\omega_{i} + H( \varphi _{i+1}) + H (\varphi_{i-1}) - 2 H( \varphi_{i}) . $$

The fixed points \(\varPhi=(\varphi_{1},\ldots,\varphi_{N-1})\) satisfy the matrix equation \({ H} ( \varPhi) = -{ A}^{-1} { D}\), where \({ H} (\varPhi) = (H(\varphi_{1}), \ldots, H(\varphi_{N-1}))^{\top}\), \({ D} = (\Delta\omega_{1}, \ldots, \Delta\omega_{N-1})^{\top}\), and A is a tridiagonal matrix with elements \(A_{ii}=-2\), \(A_{i,i+1}=A_{i+1,i}=1\). For the sake of illustration suppose that \(H(\varphi) = \sin(\varphi+\sigma)\). Then a solution Φ will exist if every component of \({ A}^{-1} { D}\) lies between ±1. Let \(a_{0} = \max\{ | ({ A}^{-1} { D})_{i} | \}\). If \(a_{0} < 1\) then for each \(i=1,\ldots,N-1\) there are two distinct solutions \(\varphi_{i}^{\pm}\) in the interval \([0, 2\pi]\) with \(H'(\varphi_{i}^{-} ) > 0\) and \(H'(\varphi_{i}^{+} ) < 0\). In other words, there are \(2^{N-1}\) phase-locked solutions. Linearising about a phase-locked solution and exploiting the structure of the matrix A, it can be proven that only the solution \(\varPhi^{-}=(\varphi_{1}^{-}, \ldots,\varphi_{N-1}^{-})\) is stable. Assuming that the frequency gradient is monotonic, this solution corresponds to a stable travelling wave. When the gradient becomes too steep to allow phase locking (i.e. \(a_{0} > 1\)), two or more clusters of oscillators (frequency plateaus) tend to form that oscillate at different frequencies. Waves produced by a frequency gradient do not have a constant speed or, equivalently, constant phase lags along the chain.

Constant speed waves in a chain of identical oscillators can be generated by considering phase-locked solutions defined by \(\varphi_{i} = \varphi\) for all i, with a collective frequency Ω determined using \({\mathrm {d}\theta_{1}}/{\mathrm {d}t} = \varOmega\) to give \(\varOmega=\omega_{1} + W_{+} H(\varphi)\). The steady state equations are then \(\Delta\omega_{1} +W_{-} H(-\varphi) =0\), \(\Delta\omega_{N-1} -W_{+}H(\varphi) =0 \) and \(\Delta\omega_{i} =0\), for \(i=2,\ldots, N-2\). Thus, a travelling wave solution requires that all frequencies must be the same except at the ends of the chain. One travelling solution is given by \(\Delta \omega_{N-1}=0\) with \(\Delta\omega_{1} = -W_{-} H(-\varphi)\) and \(H(\varphi) = 0\). For the choice \(H(\varphi)=\sin(\varphi+\sigma)\) we have \(\varphi =-\sigma\) and \(\Delta\omega_{1} = -W_{-} \sin(2 \sigma)\). If \(2 \sigma< \pi\) then \(\Delta\omega_{1} =\omega_{2} - \omega_{1}<0\), and \(\omega_{1}\) must be larger than \(\omega_{2}\), and hence than all the remaining \(\omega_{i}\), for a forward travelling wave to exist. Backward swimming can be generated by setting \(\Delta\omega_{1}=0\) and solving in a similar fashion.
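A direct simulation of the chain confirms this construction. In the sketch below (with illustrative values for N, \(W_{\pm}\) and σ, and scipy assumed), the head frequency is detuned by \(W_{-}\sin(2\sigma)\) so that the locked state with \(\varphi_{i}=-\sigma\) for all i, a constant-speed forward wave, is expected to emerge from small random initial data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constant-speed travelling wave in the chain model with H(phi) = sin(phi+sigma):
# all frequencies equal except the head oscillator, detuned so that the locked
# state has phase differences phi_i = -sigma. Values are illustrative.
N, Wp, Wm, sigma, omega = 20, 1.0, 1.0, 0.4, 1.0
w = np.full(N, omega)
w[0] += Wm * np.sin(2 * sigma)               # detune the head of the chain
H = lambda phi: np.sin(phi + sigma)

def rhs(t, th):
    d = np.empty(N)
    d[0] = w[0] + Wp * H(th[1] - th[0])
    d[1:-1] = w[1:-1] + Wp * H(th[2:] - th[1:-1]) + Wm * H(th[:-2] - th[1:-1])
    d[-1] = w[-1] + Wm * H(th[-2] - th[-1])
    return d

th0 = np.random.default_rng(5).uniform(-0.2, 0.2, N)
sol = solve_ivp(rhs, (0, 400), th0, rtol=1e-9, atol=1e-9)
print(f"expect phase differences near -sigma = {-sigma:.2f}:")
print(np.round(np.diff(sol.y[:, -1]), 3))
```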

Phase Waves: A Continuum Model

There is solid experimental evidence for electrical waves in awake and aroused vertebrate preparations, as well as in semi-intact and active invertebrate preparations, as nicely described by Ermentrout and Kleinfeld [235]. Moreover, these authors argue convincingly for the use of phase models in understanding waves seen in cortex and speculate that they may serve to label simultaneously perceived features in the stimulus stream with a unique phase. More recently it has been found that cortical oscillations can propagate as travelling waves across the surface of the motor cortex of monkeys (Macaca mulatta) [239]. Given that to a first approximation the cortex is often viewed as being built from a dense reciprocally interconnected network of corticocortical axonal pathways, of which there are typically \(10^{10}\) in a human brain, it is natural to develop a continuum phase model, along the lines described by Crook et al. [240]. These authors further incorporated axonal delays into their model to explain the seemingly contradictory result that synchrony is stable for short range excitatory coupling, but unstable for long range. To see how a delay-induced instability may arise we consider a continuum model of identical phase oscillators with space-dependent delays

$$ \frac{\partial}{\partial t} \theta(x,t) = \omega+ \epsilon\int_{D} W(x,y) H\bigl(\theta (y,t)-\theta(x,t) - \tau(x,y)\bigr) \,\mathrm{d}y , $$
(40)

where \(x \in D \subseteq \mathbb {R}\) and \(\theta\in \mathbb {R}/2\pi \mathbb {Z}\). This model is naturally obtained as the continuum limit of (32) for a network arranged along a line, with communication delays \(\tau(x,y)\) set by the speed v of the action potentials that mediate the long range interactions between points x and y in the tissue. Here we have used the result that for weak coupling delays manifest themselves as phase shifts. The function W sets the anatomical connectivity pattern. It is convenient to assume that interactions are homogeneous and translation invariant, so that \(W(x,y) = W(|x-y|)\) with \(\tau(x,y)=|x-y|/v\), and we either assume periodic boundary conditions or take \(D=\mathbb {R}\).

Following [240] we construct travelling wave solutions of Eq. (40) for \(D=\mathbb {R}\) of the form \(\theta(x,t) = \varOmega t + \beta x\), with the frequency Ω satisfying the dispersion relation

$$ \varOmega= \omega+\epsilon\int_{-\infty}^{\infty}\, \mathrm{d}y W\bigl(\vert y\vert \bigr) H \bigl(\beta y - \vert y\vert /v \bigr) . $$

When \(\beta=0\), the solution is synchronous. To explore the stability of the travelling wave we linearise (40) about \(\varOmega t + \beta x\) and consider perturbations of the form \(\mathrm {e}^{\lambda t} \mathrm {e}^{ipx}\), to find that travelling phase wave solutions are stable if \(\operatorname {Re}\lambda(p) <0\) for all \(p \neq0\), where

$$ \lambda(p)= \epsilon\int_{-\infty}^{\infty}W\bigl(\vert y\vert \bigr) H'\bigl(\beta y - \vert y\vert /v\bigr) \bigl[ \mathrm {e}^{i p y} -1\bigr] \,\mathrm{d}y . $$

Note that the neutrally stable mode \(\lambda(0) =0\) represents constant phase shifts. For example, for the case \(H(\theta)=\sin\theta\) we have

$$ \operatorname {Re}\lambda(p)=\pi\epsilon\bigl[\varLambda(p, \beta_{+}) + \varLambda(-p, \beta_{-}) \bigr] , $$

where

$$ \varLambda(p,\beta) = W_{c}(p+\beta) +W_{c}(p-\beta)-2 W_{c}(\beta),\quad W_{c}(p) = \int_{0}^{\infty}W(y) \cos(p y) \,\mathrm{d}y , $$

and \(\beta_{\pm}= \pm\beta- 1/v\). A plot of the region of wave stability for the choice \(W(y)=\exp(-\vert y\vert )/2\) and \(\epsilon>0\) in the \((\beta,v)\) plane is shown in Fig. 17. Note that the synchronous solution \(\beta=0\) is unstable for small values of v. For a discussion of more realistic forms of phase interaction, describing synaptic interactions, see [241].
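Since \(W_{c}\) is available in closed form for the exponential kernel, \(W_{c}(p)=1/(2(1+p^{2}))\), the stability region of Fig. 17 can be scanned with a few lines of code; the grids below are illustrative, and the overall positive prefactor in \(\operatorname{Re}\lambda(p)\) is irrelevant for the sign test:

```python
import numpy as np

# Wave stability for H(theta) = sin(theta) and W(y) = exp(-|y|)/2, for which
# W_c(p) = 1/(2(1+p^2)). A wave with wavenumber beta is stable when
# Re lambda(p) < 0 for all p != 0 (up to a positive prefactor).
Wc = lambda p: 0.5 / (1 + p**2)
Lam = lambda p, b: Wc(p + b) + Wc(p - b) - 2 * Wc(b)
p = np.linspace(0.01, 30, 3000)

def wave_stable(beta, v):
    bp, bm = beta - 1 / v, -beta - 1 / v
    return bool(np.all(Lam(p, bp) + Lam(-p, bm) < 0))

for v in [0.3, 1.0, 3.0]:
    stable = [round(b, 1) for b in np.linspace(0, 2, 21) if wave_stable(b, v)]
    print(f"v={v}: synchrony (beta=0) stable? {wave_stable(0.0, v)};",
          "stable beta in [0,2]:", stable)
```

In particular the synchronous solution \(\beta=0\) loses stability as v decreases, as noted above.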

Fig. 17

Stability region (black) for a phase wave \(\theta (x,t) = \varOmega t + \beta x\) in the \((\beta,v)\) plane for the choice \(H(\theta) = \sin\theta\), \(W(y)=\exp(-|y|)/2\) and \(\epsilon>0\)

6.3.1 Phase Turbulence

For appropriate choice of the anatomical kernel and the phase interaction function, continuum models of the form (40) can also support weakly turbulent solutions reminiscent of those seen in the Kuramoto–Sivashinsky (KS) equation. The KS equation generically describes the dynamics near long wavelength primary instabilities in the presence of appropriate (translational, parity and Galilean) symmetries, and it is of the form

$$ \theta_{t} = -\alpha\theta_{xx}+\beta( \theta_{x})^{2} -\gamma\theta _{xxxx} , $$
(41)

where \(\alpha,\beta,\gamma>0\). For a further discussion of this model see [76]. For the model (40) with decaying excitatory coupling and a purely sinusoidal phase interaction, simulations on a large domain show a marked tendency to generate phase slips and spatio-temporal pattern shedding, resulting in a loss of spatial continuity of \(\theta(x,t)\). However, Battogtokh [242] has shown that a mixture of excitation and inhibition with higher harmonics in the phase interaction can counteract this tendency and allow the formation of turbulent states. To see how this can arise consider an extension of (40) to allow for a mixing of spatial scales and nonlinearities in the form

$$ \frac{\partial}{\partial t} \theta(x,t) = \omega+ \sum_{\mu=1}^{M} \epsilon_{\mu}\int_{\mathbb {R}} W_{\mu}(x-y) H_{\mu}\bigl(\theta(y,t)-\theta(x,t)\bigr) \,\mathrm{d}y , $$
(42)

where we drop the consideration of axonal delays. Using the analysis of Sect. 6.3 the synchronous wave solution will be stable provided \(\lambda(p) < 0\) for all \(p \neq0\) where

$$ \lambda(p) = \sum_{\mu=1}^{M} \epsilon_{\mu}H_{\mu}'(0) \bigl[ \widehat {W}_{\mu}(p)-\widehat{W}_{\mu}(0) \bigr],\quad \widehat{W}_{\mu}(p) = \int_{\mathbb {R}} W_{\mu}\bigl(\vert y\vert \bigr) \mathrm {e}^{i p y} \,\mathrm{d}y . $$

After introducing the complex function \(z(x,t) = \exp(i \theta(x,t))\) and writing the phase interaction functions as Fourier series of the form \(H_{\mu}(\theta) = \sum_{n} H_{n}^{\mu} \mathrm {e}^{i n \theta}\) then we can rewrite (42) as

$$ z_{t} = i z \Biggl\{ \omega+ \sum_{\mu=1}^{M} \sum_{n \in \mathbb {Z}} \epsilon _{\mu}H_{n}^{\mu}z^{-n} \psi_{n}^{\mu}\Biggr\} , $$
(43)

where

$$ \psi_{n}^{\mu}(x,t) = \int_{\mathbb {R}} W_{\mu}(x-y) z^{n} (y,t) \,\mathrm{d}y \equiv W_{\mu}\otimes z^{n} . $$

The form above is useful for computational purposes, since \(\psi_{n}^{\mu}\) can easily be computed using a fast Fourier transform (exploiting its convolution structure). Battogtokh [242] considered the choice \(M=3\) with \(H_{1}(\theta)=H_{2}(\theta)=\sin(\theta+ \alpha)\), \(H_{3}=\sin(2\theta+ \alpha)\) and \(W_{\mu}(x)=\gamma_{\mu}\exp(-\gamma_{\mu}|x|)/2\) with \(\gamma _{2}=\gamma_{3}\). In this case \(\widehat{W}_{\mu}(p) = \gamma_{\mu}^{2}/(\gamma _{\mu}^{2}+p^{2})\), so that

$$ \lambda(p) = -p^{2} \cos(\alpha) \biggl(\frac{\epsilon_{1}}{\gamma _{1}^{2}+p^{2}}+ \frac{\epsilon_{2}+2 \epsilon_{3}}{\gamma_{2}^{2}+p^{2}} \biggr). $$

By choosing a mixture of positive and negative coupling strengths the spectrum can easily show a band of unstable wave-numbers from zero up to some maximum, as shown in Fig. 18. Indeed this shape of spectrum is guaranteed when \(\sum_{\mu}\epsilon _{\mu}H_{\mu}'(0)>0\) and \(\sum_{\mu}\epsilon_{\mu}H_{\mu}'(0)/\gamma_{\mu}^{2}<0\). Similarly the KS equation (41) has a band of unstable wave-numbers between zero and one (with the most unstable wave-number at \(1/\sqrt{2}\)). When all the spatial scales \(\gamma _{\mu}^{-1}\) are small compared to the system size we may use a long wavelength argument to derive local models for \(\psi _{n}^{\mu}\). To explain this we first construct the Fourier transform \(\widehat{\psi}_{n}^{\mu}(p,t) = \widehat{W}_{\mu}(p) f_{n}(p,t)\), where \(f_{n}\) is the Fourier transform of \(z^{n}\), and use the expansion \(\widehat {W}_{\mu}(p) \simeq1-\gamma_{\mu}^{-2} p^{2} +\gamma_{\mu}^{-4} p^{4} + \cdots\). After inverse Fourier transforming we find

$$ \psi_{n}^{\mu}\simeq \bigl[1 + \gamma_{\mu}^{-2} \partial_{xx} - \gamma _{\mu}^{-4} \partial_{xxxx} + \cdots \bigr] z^{n} . $$

Noting that \(H_{1}^{1}=H_{1}^{2}=H_{2}^{3}=\exp(i \alpha)/(2i) \equiv\varGamma\), with all other Fourier coefficients for \(n\geq0\) zero (those for negative n being fixed by the requirement that each \(H_{\mu}\) is real), then (43) yields

$$\begin{aligned} \theta_{t} ={}& \varOmega+ 2 \operatorname {Re}\varGamma \Biggl\{ \sum _{\mu=1,2} \epsilon _{\mu} \mathrm {e}^{-i \theta} \bigl( \gamma_{\mu}^{-2}\partial_{xx} - \gamma_{\mu}^{-4}\partial _{xxxx} + \cdots \bigr) \mathrm {e}^{i \theta} \\ &{}+ \epsilon_{3} \mathrm {e}^{-2 i \theta} \bigl(\gamma_{3}^{-2} \partial_{xx} - \gamma_{3}^{-4} \partial_{xxxx} + \cdots \bigr) \mathrm {e}^{2 i \theta} \Biggr\} , \end{aligned}$$
(44)

where \(\varOmega=\omega+ \sum_{\mu} \epsilon_{\mu}H_{\mu}(0)\). Expanding (44) to second order gives \(\theta_{t} = \varOmega -\alpha\theta_{xx} +\beta(\theta_{x})^{2}\), where \(\alpha=-\sum_{\mu}\epsilon_{\mu}H_{\mu}'(0) \gamma_{\mu}^{-2}\) and \(\beta =-\sum_{\mu}\epsilon_{\mu}H_{\mu}''(0) \gamma_{\mu}^{-2}\). Going to higher order yields fourth-order terms and we recover an equation of KS type with the coefficient of \(-\theta_{xxxx}\) given by \(\gamma=\sum_{\mu}\epsilon_{\mu}H_{\mu}'(0) \gamma_{\mu}^{-4}\). To generate phase turbulence we thus require \(\alpha>0\), which is also a condition required to generate a band of unstable wave-numbers, and \(\beta,\gamma>0\). A direct simulation of the model with the parameters for Fig. 18, for which \(\alpha,\beta,\gamma>0\), shows the development of a phase turbulent state. This is represented in Fig. 19 where we plot the absolute value of the complex function \(\varPsi= (\epsilon_{1} W_{1}+\epsilon_{2} W_{2} )\otimes z + \epsilon_{3} W_{3} \otimes z^{2}\).
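The convolution structure makes a pseudospectral simulation of (43) straightforward: each kernel acts as the Fourier multiplier \(\gamma_{\mu}^{2}/(\gamma_{\mu}^{2}+k^{2})\) on a periodic domain. The sketch below evolves the phase directly, using the parameters of Fig. 18 but a coarser mesh and shorter integration time than those quoted in Fig. 19; scipy is assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pseudospectral evolution of the nonlocal phase model (42)/(43) with
# H_1 = H_2 = sin(theta + alpha), H_3 = sin(2 theta + alpha): the kernels
# act as Fourier multipliers gamma^2/(gamma^2 + k^2) on a periodic domain.
L, n = 2**7, 1024                            # domain size; reduced mesh
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
e1, e2, e3, g1, g2 = 0.5, 0.15, -0.3, 0.5, 0.25
alpha, omega = -1.45, 0.0
W1h, W2h = g1**2 / (g1**2 + k**2), g2**2 / (g2**2 + k**2)   # gamma_3 = gamma_2

conv = lambda Wh, u: np.fft.ifft(Wh * np.fft.fft(u))

def rhs(t, theta):
    z = np.exp(1j * theta)
    drive = (e1 * conv(W1h, z) + e2 * conv(W2h, z)) * np.conj(z) \
            + e3 * conv(W2h, z**2) * np.conj(z)**2
    return omega + np.imag(np.exp(1j * alpha) * drive)

theta0 = 0.1 * np.random.default_rng(7).standard_normal(n)
sol = solve_ivp(rhs, (0, 200), theta0, rtol=1e-6, atol=1e-8)
spread = np.std(np.angle(np.exp(1j * (sol.y[:, -1] - sol.y[0, -1]))))
print("final spatial phase spread (O(1) indicates loss of synchrony):", spread)
```

With these parameters the unstable band of Fig. 18 contains many modes of the periodic domain, so small random initial data develop the spatially disordered phase patterns described above.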

Fig. 18

Spectrum for the phase oscillator continuum model (42) with a mixture of spatial scales and nonlinearities. Here \(H_{1}(\theta)=H_{2}(\theta)=\sin(\theta+ \alpha)\), \(H_{3}=\sin (2\theta + \alpha)\) and \(W_{\mu}(x)=\gamma_{\mu}\exp(-\gamma_{\mu}|x|)/2\) with \(\gamma _{2}=\gamma_{3}\). Parameters are \(\epsilon_{1}=0.5\), \(\epsilon_{2}=0.15\), \(\epsilon_{3}=-0.3\), \(\gamma_{1}=1/2\), \(\gamma_{2}=1/4\), and \(\alpha=-1.45\). There is a band of unstable wave-numbers with \(p\in(0,p_{c})\), where \(p_{c} \simeq1.25\)

Fig. 19

The emergence of a turbulent phase state in a phase oscillator continuum model. The parameters are those of Fig. 18 with \(\omega=0\), for which \(\alpha=0.63\), \(\beta =5.16\) and \(\gamma=0.096\). The physical domain size is \(2^{7}\) and we have used a numerical mesh of \(2^{12}\) points with Matlab ode45 to evolve Eq. (43), with convolutions computed using fast Fourier transforms. As an order parameter describing the system we have chosen \(|\varPsi|\), where \(\varPsi= (\epsilon_{1} W_{1}+\epsilon_{2} W_{2} )\otimes z + \epsilon_{3} W_{3} \otimes z^{2}\)
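For readers wanting to reproduce this kind of computation, the following is a minimal Python sketch of the pseudospectral scheme (a hypothetical re-implementation, not the authors' original code): it evolves (43) on a periodic domain, evaluating each convolution in Fourier space where \(\widehat{W}_{\mu}(p)=\gamma_{\mu}^{2}/(\gamma_{\mu}^{2}+p^{2})\), with SciPy's solve_ivp standing in for Matlab's ode45.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pseudospectral evolution of (43): each convolution is diagonal in Fourier
# space. Parameter values follow Figs. 18 and 19; grid size and integration
# time are illustrative.
L, N = 2**7, 2**12
x = np.linspace(0, L, N, endpoint=False)
p = 2*np.pi*np.fft.fftfreq(N, d=L/N)                 # wave-numbers
eps = np.array([0.5, 0.15, -0.3])
gam = np.array([0.5, 0.25, 0.25])                    # gamma_2 = gamma_3
alpha, omega = -1.45, 0.0
What = gam[:, None]**2/(gam[:, None]**2 + p**2)      # hat{W}_mu(p)

conv = lambda Wm, f: np.fft.ifft(Wm*np.fft.fft(f))   # W_mu (x) f on the ring

def rhs(t, theta):
    z = np.exp(1j*theta)
    # resummed n = +/-1 and +/-2 Fourier contributions of (43)
    drive = (eps[0]*conv(What[0], z) + eps[1]*conv(What[1], z))*np.conj(z) \
            + eps[2]*conv(What[2], z**2)*np.conj(z)**2
    return omega + np.imag(np.exp(1j*alpha)*drive)

rng = np.random.default_rng(0)
sol = solve_ivp(rhs, (0, 200), 0.1*rng.standard_normal(N),
                t_eval=np.linspace(0, 200, 5), rtol=1e-6)
z = np.exp(1j*sol.y[:, -1])
Psi = (eps[0]*conv(What[0], z) + eps[1]*conv(What[1], z)
       + eps[2]*conv(What[2], z**2))                 # order parameter
```

Plotting \(|\varPsi|\) over space and time from such a run should reveal the spatio-temporal disorder of the phase turbulent state shown in Fig. 19.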

7 Heteroclinic Attractors

In addition to dynamically simple periodic attractors with varying degrees of clustering, the emergent dynamics of coupled phase oscillator systems such as (32) can be remarkably complex, even in the case of global coupling, and similar effects can appear in a wide range of coupled systems. In the case of globally coupled phase oscillators, the dynamical complexity depends only on the phase interaction function H and the number of oscillators N. Chaotic dynamics [243] can appear in four or more globally coupled phase oscillators for phase interaction functions of sufficient complexity. We focus now on attractors that are robust and apparently prevalent in many such systems: robust heteroclinic attractors.

In a neuroscience context such attractors have been investigated under several related names, including slow switching [231, 244–246], where the system evolves towards an attractor displaying switching between cluster states on a time scale determined by the noise, heteroclinic networks [137, 232, 247], and winnerless competition [248–250]. The effects can be seen in “microscale” models of small numbers of neurons or in “macroscale” models of cognitive function. In all these cases there are a number of possible patterns of activity that compete with each other, but such that each pattern is unstable to some perturbations that take it to another pattern—this can be contrasted with winner-takes-all competition, where there is attraction to an asymptotically stable pattern.

These attractors obtain their dynamical structure from the presence of invariant subspaces for the dynamics that allow the possibility of robust connections between saddles. These subspaces may be symmetry-induced fixed-point subspaces, spaces with a given clustering, subspaces forced by multiplicative coupling or subspaces forced by assumptions on the nature of the coupling. In all cases, there will be a number of dynamically simple nodes, usually equilibria or periodic orbits, say \(x_{i}\), \(i=1,\ldots,k\), each of which is of saddle type. These nodes have unstable manifolds that, within some invariant subspace, limit to other nodes within the network—usually because there is a robust (transverse) saddle-to-sink connection between the nodes within some invariant subspace; see [251]. Such heteroclinic networks can be attracting if, in some sense, the rate of expansion away from the nodes is not as strong as the rate of contraction towards the nodes—see [129] for some precise results in this direction. Figure 20 illustrates some of the ingredients for a robust heteroclinic attractor.

Fig. 20

Schematic diagram showing a trajectory \(x(t)\) (solid line) approaching part of a robust heteroclinic network in phase space (bold dashed lines). The nodes \(x_{i}\) represent equilibria or periodic orbits of saddle type, the invariant subspaces \(P_{i}\) are forced to exist by model assumptions, and the connecting (heteroclinic) orbits \(c_{i}\) link the nodes within the \(P_{i}\) in a robust manner. A neighbourhood of the connecting orbits \(c_{i}\) is an absorbing stable heteroclinic channel that can be used to describe various aspects of neural system function in systems with this dynamics; see for example [250]

7.1 Robust Heteroclinic Attractors for Phase Oscillator Networks

Hansel et al. [231] considered the dynamics of (32) with global coupling and a phase interaction function of the form (39) for fixed parameters \((r,\alpha)\). For large N, they find an open region in parameter space where typical attractors are heteroclinic cycles that show slow switching between states in which the clustering is into two clusters of macroscopic size. This dynamics is examined in more depth in [244], where simulations for typical initial conditions show a long and intermittent transient to a two-cluster state that, surprisingly, is unstable. This is a paradox because only a low-dimensional subset of initial conditions (the stable manifold) should converge to a saddle. The resolution of this paradox is a numerical effect: as the dynamics approaches the heteroclinic cycle where the connection is in a clustered subspace, there can be numerical rounding into the subspace. For weak perturbation of the system by additive noise, [244] find that the heteroclinic cycle is approximated by a roughly periodic transition around the cycle whose approximate period scales as the logarithm of the noise amplitude.

The bifurcations that give rise to heteroclinic attractors in this system on varying \((r, \alpha)\) are quite complex even for small N. As discussed in [232] one can only find attracting robust heteroclinic attractors in (32), (39) for \(N\geq4\): in this case Fig. 15 shows a region where robust heteroclinic attractors of the type illustrated in Fig. 21 appear. A trajectory approaching such a network will spend much of its time near a cluster state with two groups of two oscillators each. Each time there is a connection between the states, one of the groups will break clustering for a short time, and over a longer period the clusters will alternate between breaking up and keeping together. Qualitatively similar heteroclinic attractors can be found for example in coupled Hodgkin–Huxley type limit-cycle oscillators with delayed synaptic coupling as detailed in [251] and illustrated in Fig. 22.

Fig. 21

A robust heteroclinic cycle attractor for the all-to-all coupled 4-oscillator system (32) with phase interaction function (39), in an open region of parameter space, as in [232]. The figure shows the cycle in the three-dimensional space of phase differences; the vertices R all represent the fully symmetric (in-phase) oscillations \((\varphi ,\varphi ,\varphi,\varphi)\), varying by 2π in one of the arguments. The point Q represents solutions \((\varphi,\varphi,\varphi+\pi ,\varphi+\pi )\) with symmetry \((S_{2}\times S_{2})\times_{s} \mathbb {Z}_{2}\). The heteroclinic cycle links two saddle equilibria \(P_{1}=(\varphi,\varphi,\varphi+\alpha ,\varphi +\alpha)\) and \(P_{2}=(\varphi,\varphi,\varphi+2\pi-\alpha,\varphi +2\pi -\alpha)\) with \(S_{2}\times S_{2}\) isotropy. The robust connections \(G_{1}\) and \(G_{2}\) shown in red lie within two-dimensional invariant subspaces with isotropy \(S_{2}\), while the equilibria S have isotropy \(S_{3}\). This structure is an attractor for parameters in the region indicated in Fig. 15

Fig. 22

Robust heteroclinic cycle attractor for all-to-all coupled oscillatory Hodgkin–Huxley systems with delayed synaptic coupling. The top panel shows synchronisation indices \(r_{ij}\) that equal one only when systems i and j are synchronised, while the bottom panel shows the action potentials \(v_{i}\) for the four oscillators; see [251] for more details. Observe that the mechanism of symmetry breaking and re-synchronisation of pairs is the same as in Fig. 21

The heteroclinic attractors that appear for \(N>4\) can have more complex structures. For \(N=5\) this is investigated in [253, 254] for (32), (39) and in [252] for the phase interaction function

$$ H(\varphi)=\sin(\varphi+\alpha)-r\sin(2\varphi+\beta) , $$
(45)

where α, β and r are parameters. Figure 23 illustrates a trajectory of (32) with global coupling and phase interaction function (45) as a raster plot for five phase oscillators with parameters \(r=0.2\), \(\alpha=1.8\), \(\beta=-2.0\) and \(\omega=1.0\) and with addition of noise of amplitude \(10^{-5}\). Observe there is a dynamic change of clustering over the course of the time-series with a number of switches taking place between cluster states of the type

$$(\theta_{1},\ldots,\theta_{5})=\varOmega t (1,1,1,1,1)+(y,y,g,b,b), $$

where y, g and b represent different relative phases (coloured “yellow”, “green” and “blue” in Fig. 24), to other symmetrically related cluster states. One can check that the group orbit of states with this clustering gives 30 symmetrically related cluster states for the system; details of the connections are shown in Fig. 24. All cluster states connect together to form a single large heteroclinic network that is an attractor for typical initial conditions [252]. Figure 23 illustrates the possible switchings between phase differences near one particular cluster state for the heteroclinic network in this case. The dynamics of this system can be used for encoding a variety of inputs, as discussed in [255], where it is shown that a constant heterogeneity of the natural frequencies between oscillators leads to a spatio-temporal encoding of the heterogeneities. The sequences produced by the system can be used by a similar system with adaptive dynamics to learn a spatio-temporally encoded state [228].
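Such switching is straightforward to explore numerically. Below is a minimal Euler–Maruyama sketch in Python; note that we assume the global coupling convention \(\dot{\theta}_{i}=\omega+N^{-1}\sum_{j}H(\theta_{j}-\theta_{i})\), which may differ from (32) by a coupling-constant convention, with the parameter values quoted above.

```python
import numpy as np

# Five globally coupled phase oscillators with interaction function (45);
# the coupling convention and the noise amplitude are assumptions (see text).
N, r, alpha, beta, omega = 5, 0.2, 1.8, -2.0, 1.0
H = lambda phi: np.sin(phi + alpha) - r*np.sin(2*phi + beta)

dt, T, D = 0.01, 2000.0, 1e-5
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2*np.pi, N)
history = np.empty((int(T/dt), N))
for k in range(int(T/dt)):
    coupling = np.array([H(theta - th).mean() for th in theta])
    theta += dt*(omega + coupling) + np.sqrt(2*D*dt)*rng.standard_normal(N)
    history[k] = theta
# Plotting the phase differences theta_i - theta_1 (mod 2*pi) against time
# reveals slow switching between (2,1,2)-type cluster states, as in Fig. 23.
```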

Fig. 23

Raster plot showing a robust heteroclinic attractor in a system of five globally coupled phase oscillators (32) with phase interaction function (45) and a particular choice of parameters (see text). The vertical dark lines mark times (t on the horizontal axis) when the oscillator represented by \(\theta_{k}(t)\) (k on the vertical axis) passes through zero. Observe that most of the time there are three clusters. Occasionally the clustering splits and reforms into three different clusters, at times indicated by the grey bars, approximately every 70 time units. (Adapted from [252])

Fig. 24

a Graph of heteroclinic connections between three-cluster states for a robust heteroclinic attractor in a system of \(N=5\) globally coupled phase oscillators. Phase interaction function and parameters as in Fig. 23; see [252] for more details of the dynamics. Each vertex represents a particular saddle periodic cluster state that is a permutation of the states shown in b–d. Note that there are precisely two incoming and two outgoing connections at each vertex. b–d show the relative phases of the five oscillators, indicated by the angles of the “pendula” at the vertices of a regular pentagon, for a sequence of three consecutive saddle cluster states visited on a longer trajectory that randomly explores the graph in a in the presence of noise. Inequivalent clusters are characterised by differently coloured “pendula” and numbers, where yellow corresponds to 1, green to 2 and blue to 3. The yellow cluster is stable to cluster-splitting perturbations while the blue cluster is unstable to such perturbations—observe that after the connection the yellow cluster becomes the blue cluster while the blue cluster splits up in one of two possible ways

Robust heteroclinic attractors also appear in a range of coupled phase oscillator models where the coupling is not global (all-to-all) but such that it still preserves enough invariant subspaces for the connections to remain robust. For example, [222] study the dynamics of a network “motif” of four coupled phase oscillators and find heteroclinic attractors that are “ratchets”, i.e. they are robust heteroclinic networks that wind preferentially around the phase space in one direction—this means that under the influence of small perturbations, phase slips in only one direction can appear.

7.2 Winnerless Competition and Other Types of Heteroclinic Attractor

Heteroclinic connections play a key role in describing winnerless competition dynamics [248, 249]. This dynamics is usually associated with a firing rate model of coupled neurons of “Lotka–Volterra” form

$$ \frac{\mathrm{d}}{\mathrm{d}t} {x}_{i} = x_{i} \Biggl( \lambda_{i} + \sum_{j=1}^{N} A_{ij} x_{j} \Biggr) , $$
(46)

where \(x_{i}(t)\) is an instantaneous firing rate of a neuron or group of neurons, \(\lambda_{i}\) is a self-excitation term and \(A_{ij}\) represents coupling between the \(x_{i}\), though it is also possible to consider generalisations [134]. These models were originally developed for predator–prey interactions of species in an ecosystem. Having found wide use in population ecology, they have more recently been applied to evolutionary game dynamics [256], including applications in economics. Since the seminal paper of May [257] it has been recognised that (46) can have robust heteroclinic attractors for an open set of parameter choices \(\lambda_{i}\) and \(A_{ij}\), as long as at least three species are involved. Indeed, the system can show a wide range of dynamics such as “winner-takes-all” equilibria where there is a stable equilibrium with \(x_{i}>0\) and \(x_{j}=0\) for \(j\neq i\) (i is the “winner”), “cooperative” equilibria where several of the \(x_{i}\) are nonzero, as well as nontrivial periodic or chaotic dynamics. The particular case of “winnerless competition” gives attractors consisting of chains or networks of saddles joined by connections that are robust because the absence of a species (or in our case, the lack of firing of one neuron or group of neurons) \(x_{i}=0\) is preserved by the dynamics of the model (46).

The simplest case of this appears in a ring of \(N=3\) coupled neurons with dynamics

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {x}_{1} & = x_{1}(1 -x_{1} - \alpha x_{2}-\beta x_{3} ), \\ \frac{\mathrm{d}}{\mathrm{d}t} {x}_{2} & = x_{2}(1 -x_{2} - \alpha x_{3}-\beta x_{1} ), \\ \frac{\mathrm{d}}{\mathrm{d}t} {x}_{3} & = x_{3}(1 -x_{3} - \alpha x_{1}-\beta x_{2} ), \end{aligned}$$

for \(\alpha+\beta>2\) and \(0<\alpha<1\) [257], corresponding to cyclic inhibition of the neurons in one direction around the ring and cyclic excitation in the other direction. This “rock–paper–scissors” type of dynamics leads to winnerless competition that has been applied in a variety of more complex models of neural systems. The local behaviour near connections in such heteroclinic attractors has been called “stable heteroclinic channels” [250] and used to model a variety of low-level and high-level neural behaviours including random sequence generation, information dynamics, encoding of odours and working memory. We refer the reader to the reviews [133, 134, 250].
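This ring is easy to simulate directly. The following minimal Python sketch (with illustrative parameter values satisfying \(\alpha+\beta>2\) and \(0<\alpha<1\)) exhibits the characteristic behaviour of a trajectory cycling past the three single-species saddles, lingering longer on each pass:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-neuron "rock-paper-scissors" ring of Lotka-Volterra form (46);
# alpha + beta > 2 and 0 < alpha < 1 give an attracting heteroclinic cycle.
alpha, beta = 0.8, 1.5
A = np.array([[1.0, alpha, beta],
              [beta, 1.0, alpha],
              [alpha, beta, 1.0]])

def rhs(t, x):
    return x*(1.0 - A @ x)   # here all self-excitation terms lambda_i = 1

sol = solve_ivp(rhs, (0, 300), [0.5, 0.3, 0.2], rtol=1e-10, atol=1e-12)
# The firing rates visit the saddles (1,0,0), (0,1,0), (0,0,1) in turn,
# with residence times growing on each circuit (in the absence of noise).
```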

Analogous behaviour has been found in a range of other coupled systems, for example in [137] or in delayed pulse-coupled oscillators [167–170, 245, 258]. Recent work has also considered an explicit constructive approach to heteroclinic networks, realising arbitrary directed networks as a heteroclinic attractor of a coupled cell system [247] or as a closely related excitable network attractor that appears at a bifurcation of the heteroclinic attractor [259].

More complex but related dynamical behaviour has been studied under the names of “chaotic itinerancy” (see for example [260]), “cycling chaos” [131], “networks of Milnor attractors” [261] and “heteroclinic cycles for chaotic oscillators” [262]. It has been suggested that these and similar models are useful for modelling of the functional behaviour of neural systems [263].

Because heteroclinic attractors are quite singular in their dynamical behaviour (averages of observables need not converge, there is a great deal of sensitivity of the long-term dynamics to noise and system heterogeneity), it is important to consider the effect of noise and/or heterogeneities in the dynamics. This leads to a finite average transition time between states determined by the level of noise and/or heterogeneity (which may be due to inputs to the system) and the local dynamics—see for example [264]. Another useful feature of heteroclinic attractors is that they allow one to model “input–output” response of the system to a variety of inputs.

8 Stochastic Oscillator Models

Noise is well known to play a constructive role in the neural encoding of natural stimuli, leading to increased reliability or regularity of neuronal firing in single neurons [265, 266] and across populations [267]. From a mathematical perspective it is natural to consider how noise may affect the reduction to a phase oscillator description. Naively one may simply add noise to a deterministic phase oscillator model to generate a stochastic differential equation. Indeed models of this type have been studied extensively at the network level to understand noise-induced first- and second-order phase transitions, and new phenomena such as noise-induced synchrony [268–270] or asynchrony [271], and noise-induced turbulence [272]. We refer the reader to the review by Lindner [273] for a comprehensive discussion. More recently Schwabedal and Pikovsky have extended the foundations of deterministic phase descriptions to irregular, noisy oscillators (based on the constancy of the mean first return times) [274], Ly and Ermentrout [275] and Nakao et al. [276] have built analytical techniques for studying weak noise forcing, and Moehlis has developed techniques to understand the effect of white noise on the period of an oscillator [277].

At the network level (global coupling) a classic paper examining the role of external noise in IF populations, using a phase description, is that of Kuramoto [278], who analysed the onset of collective oscillations. Without recourse to a phase reduction, it is worth mentioning that Medvedev has pioneered a phase–amplitude approach to studying the effects of noise on the synchronisation of coupled stochastic limit-cycle oscillators [194, 279], and that Newhall et al. have developed a Fokker–Planck approach to understanding cascade-induced synchrony in stochastically driven IF networks with pulsatile coupling and Poisson spike-train external drive [280]. More recent work on pairwise synchrony in networks of heterogeneous coupled noisy phase oscillators receiving correlated and independent noise can be found in [281]. However, note that even in the absence of synaptic coupling, two or more neural oscillators may become synchronised by virtue of the statistical correlations in their noisy input streams [282–284].

8.1 Phase Reduction of a Planar System with State-Dependent Gaussian White Noise

For clarity of exposition let us consider the phase reduction of a planar system described by \({\mathrm {d}x}/{\mathrm {d}t}=F(x)+p(x) \xi(t)\), where \(\xi(t)\) is white Gaussian noise such that \(\langle\xi(t) \rangle=0\) and \(\langle\xi(t) \xi(s) \rangle= 2D \delta(t-s)\), where \(\langle \cdot \rangle\) denotes averaging over the realisation of ξ, and \(D>0\) scales the noise intensity. We employ a Stratonovich interpretation of the noise (such that the chain rule of ordinary calculus holds). In the absence of noise we shall assume that the system has a stable \(2 \pi/\omega\)-periodic limit-cycle solution, with a phase that satisfies \(\mathrm{d}{\theta }/{\mathrm{d}t} = \omega\). For weak noise perturbations the state point will leave the cycle, though will stay in some neighbourhood, which we denote by \(\mathcal{U}\). To extend the notion of a phase off the cycle we let θ be a smooth function of x such that \(\langle\nabla_{x} \theta , F(x) \rangle=\omega\) holds for any point \(x \in\mathcal{U}\). We shall denote the other coordinate in \(\mathcal{U}\) by ρ, and assume that there exists a smooth coordinate transformation \(x \rightarrow(\theta, \rho)\). For a noise perturbed oscillator, in the new coordinates we have

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\theta} = \omega+ h(\theta,\rho) \xi (t), \qquad \frac{\mathrm{d}}{\mathrm{d} t} {\rho}= f(\theta,\rho)+ g(\theta,\rho) \xi(t) , $$
(47)

where we have introduced \(h=\langle\nabla_{x} \theta, p \rangle\), \(f=\langle\nabla_{x} \rho, F \rangle\) and \(g=\langle\nabla_{x} \rho, p \rangle\). Note that the full coordinate transformation \((\theta,\rho)=(\theta (x),\rho(x))\) is not prescribed here, though it is common to use one such that ρ can be interpreted as some distance from the cycle. Thus, although results may be formally developed for the system (47) they cannot be directly interpreted in terms of a given model until the full coordinate transformation taking one from \(x \in \mathbb {R}^{2}\) to \((\theta,\rho)\) is given.

One can transform (47) into a stochastic phase–amplitude equation in the Itō sense, where it reads

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {\theta} &= \omega+ D\bigl[h_{\theta}(\theta ,\rho) h( \theta ,\rho)+ h_{\rho}(\theta,\rho) g(\theta,\rho) \bigr]+ h(\theta,\rho) \xi(t), \\ \frac{\mathrm{d}}{\mathrm{d}t} {\rho} &= f(\theta,\rho) + D\bigl[g_{\theta}(\theta, \rho) h(\theta,\rho)+ g_{\rho}(\theta,\rho) g(\theta,\rho) \bigr] +g(\theta, \rho )\xi(t) , \end{aligned}$$

where the subscripts θ and ρ denote partial derivatives with respect to θ and ρ, respectively. Using the Itō form we may construct a Fokker–Planck equation for the time-dependent probability distribution \(Q(t,\theta,\rho)\) as

$$\begin{aligned} \frac{\partial}{\partial t} Q ={}& {-}\frac{\partial}{\partial \theta} \bigl[ \bigl\{ \omega+ D(h_{\theta}h +h_{\rho}g) \bigr\} Q \bigr] + D \frac{\partial^{2} [h^{2} Q]}{\partial\theta^{2}} \\ &{}-\frac{\partial}{\partial\rho} \bigl[ \bigl\{ f + D(g_{\theta}h +g_{\rho}g) \bigr\} Q \bigr] + 2 D \frac{\partial^{2} [h g Q]}{\partial\theta\partial\rho} +D \frac{\partial^{2} [g^{2} Q]}{\partial\rho^{2}} , \end{aligned}$$
(48)

with periodic boundary condition \(Q(t,0,\rho)=Q(t,2 \pi,\rho)\).

When \(D=0\) the steady state distribution is given by \(Q_{0}(\theta,\rho )=\delta(\rho)/(2 \pi)\). For small noise amplitude D we expect \(Q_{0}\) to still localise near \(\rho=0\) [285], say over a region \(-\rho_{c} < \rho< \rho _{c}\). In this case it is natural to make the approximation that for large t, \(Q=0=\partial Q /\partial\rho\) at \(\rho=\pm\rho_{c}\). Now introduce the marginal distribution

$$ P(t,\theta) \equiv\int_{-\rho_{c}}^{\rho_{c}} Q(t,\theta,\rho) \, \mathrm{d}\rho. $$

Following [286] we can integrate (48) over the interval \(I=[-\rho_{c},\rho_{c}]\) and generate a Fokker–Planck equation for P. The last three terms in (48) vanish after integration due to the boundary conditions, so that we are left with

$$ \frac{\partial}{\partial t} P = -\frac{\partial}{\partial\theta} \int_{I} D (h_{\theta}h + h_{\rho}g) Q \,\mathrm{d}\rho + D \frac{\partial^{2} }{\partial\theta^{2}} \int_{I} h^{2} Q \,\mathrm {d} \rho. $$

We now expand about \(\rho=0\) to give \(h(\theta,\rho)=Z(\theta)+\rho h_{\rho}(\theta,0)+\cdots\) and \(g(\theta,\rho)=g(\theta,0) +\rho g_{\rho}(\theta,0)+\cdots\), where \(Z(\theta)\) is identified as the phase response \(\langle\nabla_{x} \theta, p \vert _{\rho=0} \rangle\). In the limit of small D where \(Q_{0} \simeq\delta(\rho)/(2 \pi)\) we note that, for an arbitrary function \(R(\theta)\),

$$ \lim_{D \rightarrow0} \frac{\partial^{n}}{\partial\theta^{n}} \int_{I} \rho R(\theta) Q \,\mathrm{d}\rho = \frac{\partial^{n}}{\partial\theta^{n}}\lim_{D \rightarrow0} \int_{I} \rho R(\theta) Q \,\mathrm{d} \rho=0 . $$

Making use of the above gives the Fokker–Planck equation:

$$ \frac{\partial}{\partial t} P = -\frac{\partial}{\partial\theta} \bigl[ \bigl\{ 1 + D \bigl(Z Z' + Y \bigr)\bigr\} P \bigr] +D \frac{\partial^{2}[Z^{2}P]}{\partial\theta^{2}}, $$
(49)

where \(Y(\theta) = h_{\rho}(\theta,0)g(\theta,0)\). Hence, the corresponding Itō equation is

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\theta} = \omega+ D\bigl[Z(\theta )Z'(\theta)+ Y(\theta) \bigr]+ Z(\theta) \xi(t), $$
(50)

while the Stratonovich version is

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\theta} = \omega+ DY(\theta)+ Z(\theta) \xi(t). $$
(51)

Equations (50) and (51) are the stochastic phase oscillator descriptions for a limit cycle driven by weak white noise. These make it clear that naively adding noise to the phase description misses not only the multiplication by the iPRC but also a further term \(Y(\theta)\) that carries information about the amplitude response of the underlying limit-cycle oscillator.
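As a concrete illustration, here is a minimal Euler–Maruyama sketch in Python for the Itō equation (50); the choices of Z and Y are hypothetical placeholders rather than quantities derived from a particular limit cycle, and the increments have variance \(2D\,\mathrm{d}t\) since \(\langle\xi(t)\xi(s)\rangle=2D\delta(t-s)\).

```python
import numpy as np

# Euler-Maruyama for the Ito phase equation (50). Z and Y are illustrative
# placeholders, not derived from a specific limit cycle; note that
# <xi(t)xi(s)> = 2D delta(t-s) gives Brownian increments of variance 2D*dt.
omega, D, dt, T, M = 1.0, 0.01, 1e-2, 200.0, 1000
Z  = lambda th: np.sin(th)
Zp = lambda th: np.cos(th)              # Z'(theta)
Y  = lambda th: 0.5*(1 + np.cos(th))    # so that Y_0 = 0.5

rng = np.random.default_rng(1)
theta = np.zeros(M)                     # an ensemble of M realisations
for _ in range(int(T/dt)):
    drift = omega + D*(Z(theta)*Zp(theta) + Y(theta))
    theta += drift*dt + Z(theta)*np.sqrt(2*D*dt)*rng.standard_normal(M)

Omega = theta.mean()/T   # ensemble estimate; theory gives omega + D*Y_0
```

For these placeholder choices the estimate should approach \(\omega + D Y_{0} = \omega + 0.005\), consistent with the mean-frequency formula derived below.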

We are now in a position to calculate the steady state probability distribution \(P_{0}(\theta)\) and use this to calculate the moments of the phase dynamics. Consider (49) with the boundary condition \(P(t,0)=P(t,2 \pi)\), and set \(P_{t}=0\). Adopting a Fourier representation for \(P_{0}\), Z and Y as \(P_{0}(\theta ) = \sum_{n} P_{n} \mathrm {e}^{in \theta}\), \(Z(\theta) = \sum_{n} Z_{n} \mathrm {e}^{in \theta }\), \(Y(\theta) = \sum_{n} Y_{n} \mathrm {e}^{in \theta}\), allows us to obtain a set of equations for the unknown amplitudes \(P_{l}\) as

$$ -P_{l} + D \sum_{n,m \in \mathbb {Z}} Z_{n} Z_{m} i(l-n) P_{l-(n+m)} -D \sum_{n \in \mathbb {Z}} Y_{n} P_{l-n} = K \delta_{l,0},\quad l \in \mathbb {Z}, $$
(52)

for some constant K. For \(D=0\) we have \(P_{0}=K\), and after enforcing normalisation we may set \(K=1/(2 \pi)\). For small D we may then substitute \(P_{0}\) into (52) and work to next order in D to obtain an approximation for the remaining amplitudes, \(l \neq0\), in the form

$$ P_{l} =\frac{D}{2 \pi} \biggl\{ \sum _{\{n,m |n+m=l \}} Z_{n} Z_{m} i m -Y_{l} \biggr\} . $$

Using this we may reconstruct the distribution \(P_{0}(\theta)\) for small D as

$$ P_{0}(\theta) = \frac{1}{2\pi} + \frac{D}{2 \pi} \bigl( Z( \theta) Z'(\theta) -Y(\theta) +Y_{0} \bigr),\quad Y_{0} = \frac{1}{2 \pi} \int_{0}^{2 \pi} Y(\theta) \, \mathrm{d}\theta. $$
(53)

The mean frequency of the oscillator is defined by \({\varOmega} = \lim_{T \rightarrow\infty} T^{-1} \int_{0}^{T} \theta'(t) \,\mathrm{d}t\). This can be calculated by replacing the time average with the ensemble average. For an arbitrary 2π-periodic function R we set \(\lim_{T \rightarrow \infty} T^{-1} \int_{0}^{T} R(t) \,\mathrm{d}t = \int_{0}^{2 \pi} R(\theta) P_{0}(\theta ) \,\mathrm{d}\theta\). Using (53) and (50) we obtain

$$ {\varOmega} = \omega+ D Y_{0} + O\bigl(D^{2}\bigr) , $$

where we have used the fact that \(\langle Z(\theta) \xi(t) \rangle= \langle Z(\theta) \rangle\langle\xi(t) \rangle=0\). We may also calculate the phase diffusion as

$$\begin{aligned} \widetilde{D}&=\int_{-\infty}^{\infty}\biggl\langle \biggl[\frac{\mathrm{d}}{\mathrm{d}t} {\theta}(t+\tau ) - \biggl\langle \frac{\mathrm{d}}{\mathrm{d}t} { \theta} \biggr\rangle \biggr] \biggl[\frac {\mathrm{d}}{\mathrm{d}t} {\theta}(t) - \biggl\langle \frac{\mathrm{d}}{\mathrm{d}t} {\theta } \biggr\rangle \biggr] \biggr\rangle \,\mathrm{d} \tau \\ &=\frac{D}{\pi} \int_{0}^{2 \pi} Z^{2}(\theta) \,\mathrm{d}\theta +O\bigl(D^{2}\bigr) , \end{aligned}$$

where we use the fact that \(\langle\xi(t) \rangle=0\) and \(\langle \xi (t) \xi(s) \rangle= 2D \delta(t-s)\). This recovers a well-known result of Kuramoto [5].

8.2 Phase Reduction for Noise with Temporal Correlation

A recent paper by Teramae et al. [287] shows that when one considers noise described by an Ornstein–Uhlenbeck (OU) process with a finite correlation time then this can interact with the attraction time scale of the limit cycle and give fundamentally different results when compared to Gaussian white noise (which has a zero correlation time). This observation has also been independently made in [286]. Both approaches assume weak noise, though [286] makes no assumptions about relative time scales, and is thus a slightly more general approach than that of [287]. Related work by Goldobin et al. [288] for noise \(\eta(t)\) with zero-mean \(\langle\eta(t) \rangle= 0\) and prescribed auto-correlation function \(C(\tau)=\langle\eta(\tau) \eta (0) \rangle\) yields the reduced Stratonovich phase equation,

$$ \frac{\mathrm{d}}{\mathrm{d}t} {\theta} = \omega+ D \widetilde {Y}(\theta ) +Z(\theta) \eta(t) , $$
(54)

where

$$ \widetilde{Y}(\theta) = \frac{1}{2 D} h_{\rho}(\theta,0) \int _{0}^{\infty}g(\theta-\psi,0) C(\psi) \mathrm {e}^{-\lambda\psi} \,\mathrm{d}\psi, $$
(55)

where λ is the average (linearised) rate of attraction to the limit cycle. Note that for \(C(\tau)=2 D \delta(\tau)\), \(\widetilde{Y}(\theta) = {Y}(\theta)\) and (54) reduces to (51) as expected. To lowest order in the noise strength the steady state probability distribution will simply be \(P_{0}(\theta)=1/(2 \pi)\). Therefore to lowest noise order the mean frequency is determined from an ensemble average as

$$ \widetilde{\omega}=\omega+D \widetilde{Y}_{0} + \frac{1}{4 \pi} \int_{0}^{2 \pi}\mathrm{d}\theta Z'( \theta) \int_{0}^{\infty}\mathrm{d}\psi Z(\theta-\psi) C(\psi) , $$

where the last term comes from using the Itō form of (54) and the subscript 0 notation is defined as in (53). The phase-diffusion coefficient at lowest noise order is given by

$$ \widetilde{D} = \frac{1}{2\pi} \int_{0}^{2 \pi} \mathrm{d}\theta \int_{-\infty }^{\infty}\mathrm{d}\tau Z( \theta) Z(\theta+ \tau) C(\tau) . $$

Let us now consider the example of OU noise so that \(C(\tau)=D \gamma \exp(-\gamma \vert \tau \vert )\). Furthermore let us take the simultaneous limit \(\gamma\rightarrow\infty\) (zero correlation time scale) and \(\lambda \rightarrow\infty\) (infinitely fast attraction), such that the ratio \(\alpha=\lambda/\gamma\) is constant. In this case we have from (55) that

$$ \widetilde{Y}(\theta) = \frac{Y(\theta)}{1+\alpha} . $$
(56)

Hence, when the correlation time of the noise is much smaller than the decay time constant we have \(\alpha=0\) and we recover the result for white Gaussian noise. In the other extreme, \(\alpha\rightarrow\infty\), where the amplitude of the limit cycle decays much faster than the noise decorrelates, \(\widetilde{Y}\) vanishes and the reduced phase equation is simply \({\mathrm {d}\theta}/{\mathrm {d}t}=\omega+ Z(\theta) \xi (t)\), as would be obtained using the standard phase-reduction technique without paying attention to the stochastic nature of the perturbation.

9 Low-Dimensional Macroscopic Dynamics and Chimera States

The self-organisation of large networks of coupled neurons into macroscopic coherent states, such as observed in phase locking, has inspired a search for equivalent low-dimensional dynamical descriptions. However, the mathematical step from microscopic to macroscopic dynamics has proved elusive in all but a few special cases. For example, neural mass models of the type described in Sect. 2.6 only track mean activity levels and not the higher-order correlations of an underlying spiking model. Only in the thermodynamic limit of a large number of neurons firing asynchronously (producing null correlations) are such rate models expected to provide a reduction of the microscopic dynamics. Even here the link from spike to rate is often phenomenological rather than rigorous. Unfortunately only in some rare instances has it been possible to analyse spiking networks directly (usually under some restrictive assumption such as global coupling), as in the spike-density approach [289], which makes heavy use of the numerical solution of coupled PDEs. Recently however, exact results for globally pulse-coupled oscillators described by the Winfree model [219] have been obtained by Pazó and Montbrió [290], making use of the Ott–Antonsen (OA) ansatz. The OA ansatz was originally used to find solutions on a reduced invariant manifold of the Kuramoto model [278], and essentially assumes that the distribution of phases has a simple unimodal shape, capable of describing synchronous (peaked) and asynchronous (flat) distributions, though it is not capable of describing clustered states (multi-peaked phase distributions); see below for a more detailed discussion. The major difference between the Winfree and Kuramoto phase oscillator models is that the former has interactions described by a phase product structure and the latter a phase-difference structure.

9.1 Ott–Antonsen Reduction for the Winfree Model

The Winfree model is described in Sect. 6 as a model for weakly globally pulse-coupled biological oscillators, and can support incoherence, frequency locking, and oscillator death when \(P(\theta)=1+\cos\theta\) and \(R(\theta)=-\sin\theta\) [291]. We note, however, that the same model is exact for nonlinear IF neurons described by a single voltage equation, and that in this case we do not have to restrict attention to weak coupling. Indeed the OA ansatz has proved equally successful in describing both the Winfree network with a sinusoidal PRC [290] and a network of theta neurons [292]. A theta neuron is formally equivalent to a QIF neuron with infinite positive threshold and infinite negative reset values. The PRC of a QIF neuron can be computed using (27); for the case described by (1) with \(\tau=1\), \(f(v)=v^{2}\), and infinite threshold and reset values, \(R(\theta)=a (1-\cos \theta)\) with \(a=1/\sqrt{I}\) for \(\theta\in[0,2 \pi)\) (which is the shape expected for an oscillator near a SNIC bifurcation). We shall now focus on this choice of PRC and a pulsatile coupling that we write in the form

$$ P(\theta) = 2 \pi\sum_{n \in \mathbb {Z}} \delta(\theta- 2 \pi n) \equiv \sum_{m \in \mathbb {Z}} \mathrm {e}^{i m \theta} , $$

where we have introduced a convenient Fourier representation for the periodic function P. We now consider the large N limit in (31) and let \(\rho (\theta|\omega,t) \,\mathrm{d}\theta\) be the fraction of oscillators with phases between θ and \(\theta+ \mathrm{d}\theta\) and natural frequency ω at time t. The dynamics of the density ρ is governed by the continuity equation (expressing the conservation of oscillators)

$$ \frac{\partial\rho}{\partial t} + \frac{\partial(\rho v)}{\partial \theta} = 0 ,\quad v(\theta,t)= \omega+ \epsilon a h(t)\bigl[1 - \bigl(\mathrm {e}^{i \theta}+\mathrm {e}^{-i\theta}\bigr)/2\bigr] , $$
(57)

where the mean-field drive is \(h(t)= \lim_{N \rightarrow\infty} \sum_{j} N^{-1}P(\theta_{j})\). Boundary conditions are periodic in the probability flux, namely \(\rho (0|\omega,t) v (0,t) = \lim_{\theta\rightarrow2 \pi} \rho(\theta |\omega,t) v(\theta,t)\). A further reduction in dimensionality is obtained for the choice that the distribution of frequencies is described by a Lorentzian \(g(\omega )\) with

$$ g(\omega) = \frac{1}{\pi}\frac{\Delta}{(\omega-\omega _{0})^{2}+\Delta^{2}}, $$

for fixed Δ and \(\omega_{0}\) (controlling the width and mean of the distribution, respectively), which has simple poles at \(\omega_{\pm}=\omega_{0} \pm i \Delta\).

A generalised set of order parameters is defined as

$$ Z_{m}(t) = \int_{-\infty}^{\infty}\mathrm{d} \omega g(\omega) \int_{0}^{2 \pi}\mathrm{d} \theta \rho(\theta| \omega, t) \mathrm {e}^{i m \theta},\quad m \in \mathbb {N}. $$

The integration over ω can be done using a contour in the lower half complex plane, so that \(Z_{m}(t) = \langle \mathrm {e}^{-i m \theta}, \rho(\theta| \omega_{-}, t) \rangle\), where we have introduced the inner product \(\langle u(\theta), v (\theta) \rangle= (2 \pi)^{-1}\int_{0}^{2 \pi }\overline{u(\theta)} v(\theta) \,\mathrm{d}\theta\). The OA ansatz assumes that the density ρ can be written in a restricted Fourier representation as

$$ 2 \pi\rho(\theta|\omega,t) = 1 + \sum_{m=1}^{\infty}\alpha(\omega,t)^{m} \mathrm {e}^{i m \theta} + \mathrm{cc}, $$
(58)

where cc stands for complex conjugate. Substitution into the continuity equation (57) and balancing terms in \(\mathrm {e}^{i m \theta}\) show that α must obey

$$ \frac{\partial}{\partial t} \alpha= -i(\omega+ \epsilon a h) \alpha +i \epsilon a \frac {h}{2} \bigl(1+\alpha^{2}\bigr) . $$
(59)

Moreover, using the inner product structure of \(Z_{m}\) we easily see that \(Z_{m}(t) = [\alpha(\omega_{-},t)]^{m}\). Thus the Kuramoto order parameter \(Z_{1} \equiv R \mathrm {e}^{-i \varPsi}\) is governed by (59) with \(\omega =\omega_{-}\), yielding

$$ \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {R} &= -\Delta R - \epsilon a \frac{h}{2} \bigl(1-R^{2}\bigr) \sin \varPsi, \\ \frac{\mathrm{d}}{\mathrm{d}t} {\varPsi} &= \omega_{0} + \epsilon a h \biggl[ 1 - \frac {1+R^{2}}{2 R} \cos\varPsi \biggr]. \end{aligned} $$
(60)

To calculate the mean-field drive h we note that it can be written as

$$\begin{aligned} h &= \int_{0}^{2 \pi} \mathrm{d}\theta\,\rho(\theta| \omega_{-},t) P(\theta) = \sum_{m} Z_{m} \\ &= 1+ \sum_{m=1}^{\infty}\bigl[(Z_{1})^{m} +(\overline{Z_{1}})^{m}\bigr] = 1 + \frac {Z_{1}}{1-Z_{1}}+ \frac{\overline{Z_{1}}}{1-\overline{Z_{1}}},\quad \vert Z_{1}\vert < 1. \end{aligned}$$

Hence, we have explicitly that \(h=h(R,\varPsi)\) with

$$ h(R,\varPsi) = \frac{1-R^{2}}{1-2 R \cos\varPsi+ R^{2}} , \quad0 \leq R < 1 . $$
(61)

The planar system of equations defined by (60) and (61) can be readily analysed using numerical bifurcation analysis.
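For instance, the following minimal Python sketch integrates (60)–(61) directly; the parameter values are illustrative assumptions, and a bifurcation study would instead sweep, say, Δ or ε (or use a continuation package such as AUTO or XPPAUT).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct integration of the reduced dynamics (60)-(61); the values of
# eps, a, Delta and omega0 are illustrative assumptions.
eps, a, Delta, omega0 = 1.0, 1.0, 0.1, 1.0

def h(R, Psi):
    # mean-field drive (61), valid for 0 <= R < 1
    return (1 - R**2)/(1 - 2*R*np.cos(Psi) + R**2)

def rhs(t, u):
    R, Psi = u
    H = h(R, Psi)
    dR = -Delta*R - eps*a*(H/2)*(1 - R**2)*np.sin(Psi)
    dPsi = omega0 + eps*a*H*(1 - (1 + R**2)/(2*R)*np.cos(Psi))
    return [dR, dPsi]

sol = solve_ivp(rhs, (0, 100), [0.5, 0.0], max_step=0.01)
print(sol.y[0, -1])   # long-time value of the synchrony measure R
```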

Recent work by Montbrió et al. [293] has shown that QIF networks can also be analysed without having to invoke the OA ansatz. Indeed they show that choosing a Lorentzian distribution for the voltages also leads to a reduced mean-field description that tracks the population firing rate and the mean membrane voltage. Interestingly they also relate these variables back to the complex Kuramoto order parameter via a conformal mapping. A treatment of QIF networks with gap-junction coupling has also recently been achieved by Laing [294], using the OA ansatz, showing how this can destroy certain spatio-temporal patterns, such as localised “bumps”, and create others, such as travelling waves and spatio-temporal chaos. The OA ansatz has also proved remarkably useful in understanding nontrivial solutions such as chimera states (where a sub-population of oscillators synchronises in an otherwise incoherent sea).

9.2 Chimera States

Phase or cluster synchronised states in systems of identical coupled oscillators have distinct limitations as descriptions of neural systems, where not just phase but also frequency clearly plays a part in the processing, computation and output of information. Indeed, one might expect that for any coupled oscillator system that is homogeneous (in the sense that any oscillator can essentially be replaced by any other via a suitable permutation of the oscillators), the only possible dynamical states are homogeneous in the sense that the oscillators behave in either a coherent or an incoherent way. This expectation, however, is not justified—there can be many dynamical states that cannot easily be classified as coherent or incoherent, but that seem to have a mixture of coherent and incoherent regions. Such states have been given the name “chimera states” by Abrams and Strogatz [295, 296] and have been the subject of intensive research over the past five years. For reviews of chimera state dynamics we refer the reader to [297, 298].

Kuramoto and Battogtokh [299, 300] investigated continuum systems of oscillators of the form

$$ \frac{\partial}{\partial t}\theta(x,t) = \omega- \epsilon\int_{D} G \bigl(x-x'\bigr) \sin\bigl(\theta(x,t)-\theta\bigl(x',t \bigr) +\alpha\bigr) \,\mathrm{d}x' , $$
(62)

where \(\theta(x,t)\) represents the phase at location \(x \in D \subseteq \mathbb {R}\), the kernel \(G(u)=\kappa\exp(-\kappa \vert u\vert )/2\) represents a non-local coupling, and ω, α, κ are constants. Interestingly this model is precisely in the form presented in Sect. 6.3 as Eq. (40) for an oscillatory model of cortex, although here there are no space-dependent delays and the interaction function is \(H(\theta) = \sin(\theta-\alpha)\). Kuramoto and Battogtokh found, for a range of parameters near \(\alpha=\pi/2\) and carefully selected initial conditions, that the oscillators can split into two regions in x: one region which is frequency synchronised (or coherent), while the other shows a nontrivial dependence of frequency on location x. An example of a chimera state is shown in Fig. 25.

Fig. 25

A snapshot of a chimera state for the model (62) in a system of length 4 using \(2^{9}\) numerical grid points and periodic boundary conditions. Here \(\omega=0\), \(\epsilon =0.1\), \(\alpha=1.45\) and \(\kappa=1\)

Note that a discretisation of (62) to a finite set of N coupled oscillators is

$$ \frac{\mathrm{d}}{\mathrm{d}t}\theta(x_{i},t) = \omega- \epsilon \sum _{j=1}^{N} K_{ij} \sin( \theta_{i}-\theta_{j} +\alpha) , $$
(63)

where \(\theta_{i}\in[0,2\pi)\) represents the phase at location \(i=1,\ldots ,N\) and the coupling matrix \(K_{ij}= G(\vert i-j\vert /N)/N\) is the discretised interaction kernel (assuming a domain of length 1). Using different kernels, \(G(u) = \exp(-\kappa\cos(2\pi u))\) and an approximation \(G(u) = 1 -\kappa\cos(2\pi u)\) for small κ, Abrams and Strogatz [295] identified similar behaviour and [296] discussed a limiting case of parameters such that the continuum system provably has chimera solutions. The OA reduction discussed in Sect. 9.1 allows an exact reduction of oscillator networks of this form and in the continuum limit this can give a solvable low-order system whose solutions include a variety of chimera states [297]. It is useful to note that when \(\alpha=\pi/2\), pure cosine coupling results in an integrable system [301], such that disordered initial states will remain disordered. Thus α determines a balance between spontaneous order and permanent disorder.
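A brute-force simulation of such a ring is straightforward. The sketch below, in Python, uses assumed parameter values matching Fig. 25 and a commonly used seeding in which the centre of the domain starts incoherent; it is illustrative only, and chimera states are sensitive to the choice of initial data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Finite-N ring (63) with the exponential kernel G; parameters follow Fig. 25.
N, L = 512, 4.0
omega, eps, alpha, kappa = 0.0, 0.1, 1.45, 1.0
x = np.arange(N)*L/N
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)                      # periodic distance on the ring
K = (kappa/2)*np.exp(-kappa*d)*(L/N)          # discretised kernel, weight dx

def rhs(t, theta):
    diff = theta[:, None] - theta[None, :]    # theta_i - theta_j
    return omega - eps*np.sum(K*np.sin(diff + alpha), axis=1)

rng = np.random.default_rng(2)
theta0 = 6*np.exp(-0.76*(x - L/2)**2)*(rng.random(N) - 0.5)
sol = solve_ivp(rhs, (0, 200), theta0, t_eval=[0, 200], rtol=1e-6)
snapshot = np.mod(sol.y[:, -1], 2*np.pi)      # coherent flanks, incoherent core
```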

However, it seems that chimera states are much more “slippery” in finite oscillator systems than in the continuum limit. In particular, Wolfrum and Omel’chenko [302] note that for finite approximations of the ring (62) by N oscillators, with a mixture of local and nearest R-neighbour coupling corresponding to (63) with a particular choice of coupling matrix \(K_{ij}\), chimera states apparently only exist as transients, although the lifetime of the typical transient grows exponentially with N. Thus, at least for some systems of the form (63), chimeras appear to be a type of chaotic saddle. This corresponds to the fact that the boundaries between the regions of coherent and incoherent oscillation fluctuate apparently randomly over a long time scale. These fluctuations lead to wandering of the incoherent region as well as changes in the size of the region. Eventually these fluctuations appear to result in typical collapse to a fully coherent oscillation [302].

Although this appears to be the case for chimeras for (63), there are networks, such as coupled groups of oscillators [303] or two-dimensional lattices [304], where chimera attractors can appear. It is not clear what will cause a chimera to be transient or not, or indeed exactly what types of chimera-like states can appear in finite oscillator networks. A suggestion of [305] is that robust neutrally stable chimeras may be due to the special type of single-harmonic phase interaction function used in (62), (63).

More recent work includes investigations of chimeras (or chimera-like states) in chemical [306] or mechanical [307] oscillator networks; chimeras in systems of coupled oscillators other than phase oscillators have been investigated in many papers, for example in Stuart–Landau oscillators [299, 308, 309], Winfree oscillators [290] and models with inertia [310]. Other recent work includes discussion of feedback control to stabilise chimeras [311], investigations of chimeras with multiple patches of incoherence [312], and multicluster and travelling chimera states [313].

In a neural context chimeras have also been found in pulse-coupled LIF networks [314], and hypothesised to underlie coordinated oscillations in unihemispheric slow-wave sleep, whereby one brain hemisphere appears to be inactive while the other remains active [315].

10 Applications

We briefly review a few examples where mathematical frameworks are being applied to neural modelling questions. These cover functional and structural connectivity in neuroimaging, central pattern generators (CPGs) and perceptual rivalry. There are many other applications we do not review, for example deep brain stimulation protocols [316] or modelling of epileptic seizures where network structures play a key role [71].

10.1 Functional and Structural Connectivity in Neuroimaging

Functional connectivity (FC) refers to the temporal synchronisation of neural activity in spatially remote areas. It is widely believed to be significant for the integrative processes in brain function. Anatomical or structural connectivity (SC) plays an important role in determining the observed spatial patterns of FC. However, there is clearly a role to be played by the dynamics of the neural tissue. Even in a globally connected network we would expect this to be the case, given our understanding of how synchronised solutions can lose stability for weak coupling. Thus it becomes useful to study models of brain-like systems built from neural mass models (such as the Jansen–Rit model of Sect. 2.6), and ascertain how the local behaviour of the oscillatory node dynamics can contribute to global patterns of activity. For simplicity, consider a network of N globally coupled identical Wilson–Cowan [65] neural mass models:

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {x}_{i} &= -x_{i}+P+c_{1} f(x_{i})-c_{2} f(y_{i}) +\frac {\epsilon }{N} \sum _{j=1}^{N} f(x_{j}) , \\ \frac{\mathrm{d}}{\mathrm{d}t} {y}_{i} &= -y_{i}+Q+c_{3} f(x_{i})-c_{4} f(y_{i}) , \end{aligned}$$

for \(i=1,\ldots, N\). Here \((x_{i},y_{i}) \in \mathbb {R}^{2}\) represents the activity in each of a pair of coupled neuronal population models, f is a sigmoidal firing rate given by \(f(x) = 1/(1+\mathrm {e}^{-x})\) and \((P,Q)\) represent external drives. The strength of connections within a local population is prescribed by the coefficients \(c_{1},\ldots, c_{4}\), which we choose as \(c_{1} = c_{2} = c_{3} = 10\) and \(c_{4} = -2\) as in [43]. For \(\epsilon=0\) it is straightforward to analyse the dynamics of a local node and find the bifurcation diagram in the \((P,Q)\) plane as shown in Fig. 26. Moreover, for \(\epsilon\ll1\) we may invoke weak-coupling theory to describe the dynamics of the full network within the oscillatory regime bounded by the two Hopf curves shown in Fig. 26. From the theory of Sect. 6.2.1 we would expect the synchronous solution to be stable if \(\epsilon H'(0) >0\). Taking \(\epsilon>0\) we can consider \(H'(0)\) as a proxy for the robustness of synchrony. The numerical construction of this quantity, as in [317], predicts that there will be regions in the \((P,Q)\) plane associated with a breakdown of FC (where \(H'(0) <0\)), as indicated by points a and c in Fig. 26. This highlights the role that local node dynamics can have on emergent network dynamics. Moreover, we see that simply by tuning the local dynamics to be deeper within the existence region for oscillatory solutions we can, at least for this model, enhance the degree of FC. It would be interesting to explore this simple model further for more realistic brain-like connectivities, along the lines described in [318]. Moreover, given that this would preclude the existence of the synchronous state by default (since we would neither have \(H(0)=0\) nor would \(\sum_{j} w_{ij}\) be independent of i), it would be opportune to explore the use of the recent ideas in [163, 319] to understand how the system could organise into a regime of remote synchronisation whereby pairs of nodes with the same network symmetry could synchronise. For related work on Wilson–Cowan networks with some small dynamic noise see [320], though here the authors construct a phase oscillator network by linearising around an unstable fixed point, rather than use the notion of phase response.
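A minimal Python sketch of this globally coupled network follows; the \((P,Q)\) values and coupling strength are illustrative assumptions (in practice one would pick them from the oscillatory region of Fig. 26), and the across-node variance at the end of the run serves only as a crude proxy for loss of synchrony.

```python
import numpy as np
from scipy.integrate import solve_ivp

# N globally coupled Wilson-Cowan nodes; the c-values are those quoted in
# the text, while P, Q and eps are illustrative assumptions (see Fig. 26).
N, eps = 20, 0.05
c1 = c2 = c3 = 10.0; c4 = -2.0
P, Q = 1.0, -3.0
f = lambda u: 1.0/(1.0 + np.exp(-u))

def rhs(t, u):
    x, y = u[:N], u[N:]
    coupling = eps*np.mean(f(x))          # (eps/N) * sum_j f(x_j)
    dx = -x + P + c1*f(x) - c2*f(y) + coupling
    dy = -y + Q + c3*f(x) - c4*f(y)
    return np.concatenate([dx, dy])

u0 = np.random.default_rng(3).standard_normal(2*N)
sol = solve_ivp(rhs, (0, 200), u0, rtol=1e-8)
print(np.var(sol.y[:N, -1]))   # near zero if the x_i have synchronised
```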

Fig. 26

Bifurcation diagram for an isolated Wilson–Cowan node in the \((P,Q)\) plane. Here HB denotes Hopf bifurcation and SN a saddle node of fixed-points bifurcation. At points a and c we find \(H'(0) < 0\) and at point b \(H'(0) > 0\). A breakdown of FC (loss of global synchrony) within a globally coupled network is predicted at points a and c

10.2 Central Pattern Generators

CPGs are (real or notional) neural subsystems that are implicated in the generation of spatio-temporal patterns of activity [321], in particular for driving relatively autonomous activities such as locomotion [322–324] or involuntary activities such as heartbeat, respiration or digestion [325]. These systems are assumed to be behind the creation of the range of walking or running patterns (gaits) that appear in different animals [326]. The analysis of phase locking provides a basis for understanding the behaviour of many CPGs; for a nice overview see the review articles by Marder and Bucher [327] and Hooper [328].

In some cases, such as the leech (Hirudo medicinalis) heart or Caenorhabditis elegans locomotion, the neural circuitry is well studied. For more complex neural systems and in more general cases CPGs are still a powerful conceptual tool to construct notional minimal neural circuitry needed to undertake a simple task. In this notional sense they have been extensively investigated to design control circuits for actuators for robots; see for example the review [329]. Recent work in this area includes robots that can reproduce salamander walking and swimming patterns [330]. Since the control of motion of autonomous “legged” robots is still a very challenging problem in real-time control, one hope of this research is that nature’s solutions (for example, how to walk stably on two legs) will help inspire robotic ways of doing this.

CPGs are called upon to produce one or more rhythmic patterns of actuation; in the particular problem of locomotion, a likely CPG is one that will produce the range of observed rhythms of muscle actuation, and ideally the observed transitions between them. For an early discussion of design principles for modelling CPGs, see [46]. This is an area of modelling where consideration of symmetries as in Sect. 3.6 has been usefully applied to constrain the models. For example [331] examine models for generating the gaits in a range of vertebrate animals, from those with two legs (humans), through those with four (quadrupeds such as horses, which may use a wide range of gaits: walk, trot, pace, canter, gallop), to larger numbers of legs (myriapods such as centipedes). Insects make use of six legs for locomotion, while other invertebrates such as centipedes and millipedes have a large number of legs that are to some extent independently actuated. As an example, [332] consider a schematic CPG of 2n oscillators for animals with n legs, as shown in Fig. 27(a). The authors use symmetry arguments and Theorem 3.1 to draw a number of model-independent conclusions from the CPG structure.

Fig. 27

Schematic diagram showing (a) a central pattern generator with 4n coupled cells used to model gait patterns in animals with 2n legs [332] and (b) a three-cell motif of bursters with varying coupling strengths, as considered in [333, 334]

One can also view CPGs as a window into more fundamental problems of how small groups of neurons coordinate to produce a range of spatio-temporal patterns. In particular, it is interesting to see how the observable structure of the connections influences the range and type of dynamical patterns that can be produced. For example, [333] consider simple three-cell “motif” networks of bursters and classify a range of emergent spatio-temporal patterns in terms of the coupling parameters. Detailed studies [334] investigate properties such as multistability and bifurcation of different patterns and the influence of inhomogeneities in the system. This is done by investigating return maps for the burst timings relative to each other.

The approach of [135, 136] discussed in Sect. 3.12 provides an interesting framework to discuss CPG dynamics in cases where the connection structure is given but not purely related to symmetries of the network. For example, [141] use that formalism to understand possible spatio-temporal patterns that arise in lattices, while [100] relates synchrony properties of small motif networks to spectral properties of the adjacency matrix.

10.3 Perceptual Rivalry

Many neural systems process information—they need to produce outputs that depend on inputs. If the system effectively has no internal degrees of freedom then this gives a functional relationship between output and input, so that any temporal variation in the output corresponds to a temporal variation of the input. However, this is not the case for all but the simplest systems, and often outputs can vary temporally in a way unrelated to the input. A particularly important and well-studied system that is a model for autonomous temporal output is perceptual rivalry, where conflicting information input to a neural system results, not in a rejection or merging of the information, but in an apparently random “flipping” between possible “rival” states (or percepts) of perception. This nontrivial temporal dynamics of the perception appears even in the absence of a temporally varying input. The best studied example of this type is binocular rivalry, where conflicting inputs are simultaneously made to each eye. It is widely reported by subjects that perception switches from one eye to the other, with a mean frequency that depends on a number of factors such as the contrast of the image [335]. More general perceptual rivalry, often seen in “optical illusions” such as ambiguous figures—the Rubin vase, the Necker cube—shows similar behaviour, with percepts shifting temporally between possible interpretations.

Various approaches [336] have been taken to construct nonlinear dynamical models of the temporal shifting between possible percepts, including competition models [337], bifurcation models, models based on neural circuitry [338], and conceptual models [339] based on network structures [340] or heteroclinic attractors [341].

11 Discussion

As with any review we have had to leave out many topics that will be of interest to the reader. In particular we have confined ourselves to “cell” and “system-level” dynamics rather than “sub-cellular” behaviour of neurons. We briefly mention some other active areas of mathematical research relevant to the science of rhythmic neural networks. Perhaps the most obvious area that we have not covered in any depth is that of single unit (cell or population) forcing, which itself is a rather natural starting point for gaining insights into network behaviour and how best to develop mathematical tools for understanding response [342, 343]. For a general perspective on mode-locked responses to periodic forcing see [344] and [76], and for the role of spatially correlated input in generating oscillations in feedforward neuronal networks see [345]. Other interesting recent work includes uncovering some surprising nonlinear dynamical properties of feedforward networks [346, 347].

For a more recent discussion of the importance of mode-locking in auditory neuroscience see [348, 349], and in motor systems see [350]. However, it is well to note that not much is known about nonlinear systems with three or more interacting frequencies [351], as opposed to periodically forced systems, where the notions of the Farey tree and the devil’s staircase have proven especially useful. We have also painted the notion of synchrony with a broad mathematical brush, and not discussed more subtle notions of envelope locking that may arise between coupled bursting neurons (where the within-burst patterns may desynchronise) [352]. This is especially relevant to studies of synchronised bursting [353] and the emergence of chaotic phenomena [354]. Indeed, we have said very little about coupling between systems that are chaotic, such as described in [355], the emergence of chaos in networks [356, 357] or chaos in symmetric networks [243].

The issue of chaos is also relevant to notions of reliability, where one is interested in the stability of spike trains against fluctuations. This has often been discussed in relation to stochastic oscillator forcing rather than fluctuations arising deterministically in a high-dimensional setting [267, 358–360]. Of course, the sparsity of firing in cortex means that it may not even be appropriate to treat neurons as oscillators. However, some of the ideas developed for oscillators can be extended to excitable systems, as described in [361–363]. It is also important to point out that neurons are not point processors: they have an extensive dendritic tree, which can contribute significantly to emergent rhythms as described in [241, 364], and can couple strongly to glial cells. Although the latter do not fire spikes, they do show oscillations of membrane potential [365]. At the macroscopic level it is also important to acknowledge that the amplitude of different brain waves can be significantly affected by neuromodulation [366], say, through release of norepinephrine, serotonin and acetylcholine, and the latter is also thought to be able to modulate the PRC of a single neuron [367].

This review has focussed mainly on the embedding of weakly coupled oscillator theory within a slightly wider framework. This is useful in setting out some of the neuroscience-driven challenges for the mathematical community in establishing inroads into a more general theory of coupled oscillators. Heterogeneity is one issue that we have mainly side-stepped, and we recall that the weakly coupled oscillator approach requires the frequencies of the individual oscillators to be close. Heterogeneity can have a strong effect on emergent network dynamics [368], and it is highly likely that even a theory with heterogeneous phase response curves [369] will have little bearing on real networks. The equation-free coarse-graining approach may have merit in this regard, though it is a numerically intensive technique [370].

We suggest that a good project for the future is to develop a theory of strongly coupled heterogeneous networks based upon the phase–amplitude coordinate system described in Sect. 5.2. The challenge is to develop a reduced network description in terms of a set of phase–amplitude interaction functions, with an emphasis on understanding the new and generic phenomena associated with nontrivial amplitude dynamics (such as clustered phase–amplitude chaos and multiple attractors). To achieve this one might further tap into recent ideas for classifying emergent dynamics based upon the group of structural symmetries of the network, which can be computed as the group of automorphisms of the graph describing the network. For many real-world networks this group can be decomposed into direct and wreath products of symmetric groups [113]. This would allow the use of tools from computational group theory [371], and would open up a way to classify the generic forms of behaviour that a given network may exhibit using the techniques of equivariant bifurcation theory; a sketch of the first computational step follows.
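
As a small illustration of that first step, the sketch below enumerates the automorphisms of a toy network using the VF2 matcher in the networkx library (treating self-isomorphisms of a graph as its automorphisms); for a ring of five cells one recovers the dihedral group of order 10, which computational group theory software could then decompose for an equivariant analysis. The choice of network is of course illustrative.

```python
# Sketch: enumerating the structural symmetries (graph automorphisms) of a
# small network, the starting point for a symmetry-based classification of
# its dynamics. A 5-cycle has automorphism group D_5, of order 10.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

G = nx.cycle_graph(5)                    # toy network: a ring of 5 cells
gm = GraphMatcher(G, G)                  # self-isomorphisms = automorphisms
automorphisms = list(gm.isomorphisms_iter())
print("order of the automorphism group:", len(automorphisms))  # expect 10
for sigma in automorphisms[:3]:          # a few example node permutations
    print(sigma)
```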

Notes

  1. Indeed, the human brain consists of the order of \(10^{11}\) neurons, but only of the order of 100–1000 cell types (see http://neuromorpho.org), meaning that there is a very high replication of cells that differ only in their location and exact morphology.

  2. In this section we assume little about the dynamics of the nodes—they may be “chaotic oscillators”.

Abbreviations

DDE: Delay differential equation

IF: Integrate and fire (model for neural oscillator)

iPRC: Infinitesimal phase response curve

ISI: Inter-spike interval

FHN: FitzHugh–Nagumo equation (model for neural oscillator)

HH: Hodgkin–Huxley equation (model for neural oscillator)

LIF: Leaky integrate and fire (model for neural oscillator)

ML: Morris–Lecar equation (model for neural oscillator)

MSF: Master stability function

ODE: Ordinary differential equation

PDE: Partial differential equation

PRC: Phase response curve

QIF: Quadratic integrate and fire (model for neural oscillator)

SDE: Stochastic differential equation

SHC: Stable heteroclinic channel

SNIC: Saddle node on an invariant circle (bifurcation)

WLC: Winnerless competition

References

  1. Huygens C. Oeuvres complétes de Christiaan Huygens. La Haye: Martinus Nijhoff; 1888. doi:10.5962/bhl.title.21031.

  2. Rayleigh JWSB. The theory of sound. Vol. 2. London: Macmillan; 1896.

  3. Van Der Pol B. Forced oscillations in a circuit with non-linear resistance. Lond Edinb Dublin Philos Mag J Sci, Ser 7. 1927;3:65–80.

  4. Wiener N. Cybernetics; or control and communication in the animal and the machine; 1948.

  5. Kuramoto Y. Chemical oscillations, waves and turbulence. Heidelberg: Springer; 1984.

  6. Michaels DC, Matyas EP, Jalife J. Mechanisms of sinoatrial pacemaker synchronization: a new hypothesis. Circ Res. 1987;61:704–14.

  7. Liu C, Weaver DR, Strogatz SH, Reppert SM. Cellular construction of a circadian clock: period determination in the suprachiasmatic nuclei. Cell. 1997;91:855–60.

  8. Ermentrout GB. An adaptive model for synchrony in the firefly Pteroptyx malaccae. J Math Biol. 1991;29:571–85.

  9. Wiesenfeld K, Colet P, Strogatz SH. Frequency locking in Josephson arrays: connection with the Kuramoto model. Phys Rev E. 1998;57:1563–9.

  10. Néda Z, Ravasz E, Vicsek T, Brechet Y, Barabási A-L. Physics of the rhythmic applause. Phys Rev E. 2000;61:6987–92.

  11. Ha SY, Jeong E, Kang MJ. Emergent behaviour of a generalized Viscek-type flocking model. Nonlinearity. 2010;23:3139–56.

  12. Paley DA, Leonard NE, Sepulchre R, Grunbaum D, Parrish JK. Oscillator models and collective motion. IEEE Control Syst Mag. 2007;27:89–105.

  13. Assenza S, Gutiérrez R, Gómez-Gardenes J, Latora V, Boccaletti S. Emergence of structural patterns out of synchronization in networks with competitive interactions. Sci Rep. 2011;1:99.

  14. Dörfler F, Bullo F. Synchronization in complex networks of phase oscillators: a survey. Automatica. 2014;50:1539–64.

  15. Velazquez JLP. Brain research: a perspective from the coupled oscillators field. NeuroQuantology. 2006;4:55–165.

  16. Singer W. Synchronization of cortical activity and its putative role in information processing and learning. Annu Rev Physiol. 1993;55:349–74.

  17. Uhlhaas PJ, Pipa G, Lima B, Melloni L, Neuenschwander S, Nikolic D, Lin S-C. Neural synchrony in cortical networks: history, concept and current status. Neuroscience. 2009;3:1–19.

  18. Wang XJ. Neurophysiological and computational principles of cortical rhythms in cognition. Physiol Rev. 2010;90:1195–268.

  19. Fell J, Axmacher N. The role of phase synchronization in memory processes. Nat Rev Neurosci. 2011;12:105–18.

  20. Wehr M, Laurent G. Odour encoding by temporal sequences of firing in oscillating neural assemblies. Nature. 1996;384:162–6.

  21. Laurent G, Stopfer M, Friedrich RW, Rabinovich MI, Volkovskii A, Abarbanel HD. Odor encoding as an active, dynamical process: experiments, computation, and theory. Annu Rev Neurosci. 2001;24:263–97.

  22. Buzsáki G, Draguhn A. Neuronal oscillations in cortical networks. Science. 2004;304:1926–9.

  23. Haken H, Kelso JAS, Bunz H. A theoretical model of phase transitions in human hand movements. Biol Cybern. 1985;51:347–56.

  24. Kelso JAS. Dynamic patterns: the self-organization of brain and behaviour. Cambridge: MIT Press; 1995.

  25. Stein PSG, Grillner S, Selverston AI, Stuart DG, editors. Neurons, networks and motor behavior. Cambridge: MIT Press; 1999.

  26. Milton J, Jung P, editors. Epilepsy as a dynamic disease. Berlin: Springer; 2003.

  27. Coombes S, Terry JR. The dynamics of neurological disease: integrating computational, experimental and clinical neuroscience. Eur J Neurosci. 2012;36:2118–20.

  28. Titcombe MS, Edwards R, Beuter A. Mathematical modelling of Parkinsonian tremor. Nonlinear Stud. 2004;11(3):363–84.

  29. Bressler SL. Cortical coordination dynamics and the disorganization syndrome in schizophrenia. Neuropsychopharmacology. 2003;28:35–9.

  30. Tass PA. Phase resetting in medicine and biology: stochastic modelling and data analysis. Berlin: Springer; 1999.

  31. Tass PA, Hauptmann C, Popovych OV. Development of therapeutic brain stimulation techniques with methods from nonlinear dynamics and statistical physics. Int J Bifurc Chaos. 2006;16:1889–911.

  32. Rao RPN. Brain–computer interfacing: an introduction. Cambridge: Cambridge University Press; 2013.

  33. Coombes S, Bressloff PC, editors. Bursting: the genesis of rhythm in the nervous system. Singapore: World Scientific; 2005.

  34. Rinzel J, Huguet G. Nonlinear dynamics of neuronal excitability, oscillations, and coincidence detection. Commun Pure Appl Math. 2013;66:1464–94.

  35. Kuehn C. Multiple time scale dynamics. Cham: Springer; 2015.

  36. Fitzhugh R. Impulses and physiological states in theoretical models of nerve membrane. Biophys J. 1961;1:445–66.

  37. Coombes S, Osbaldestin AH. Period adding bifurcations and chaos in a periodically stimulated excitable neural relaxation oscillator. Phys Rev E. 2000;62:4057–66.

  38. Berger H. Über das Elektrenkephalogramm des Menschen. Arch Psychiatr Nervenkrankh. 1929;87:527–70.

  39. Voss L, Sleigh J. Monitoring consciousness: the current status of EEG-based depth of anaesthesia monitors. Best Pract Res Clin Anaesthesiol. 2007;21:313–25.

  40. Sherman M, Guillery RW. Functional connections of cortical areas: a new view from the thalamus. Cambridge: MIT Press; 2013.

  41. Wright JJ, Liley DTJ. Dynamics of the brain at global and microscopic scales: neural networks and the EEG. Behav Brain Sci. 1996;19:285–320.

  42. David O, Friston KJ. A neural mass model for MEG/EEG: coupling and neuronal dynamics. NeuroImage. 2003;20:1743–55.

  43. Hoppensteadt FC, Izhikevich EM. Weakly connected neural networks. New York: Springer; 1997.

  44. Fenichel N. Persistence and smoothness of invariant manifolds for flows. Indiana Univ Math J. 1971;21:193–226.

  45. Guckenheimer J, Holmes P. Nonlinear oscillations, dynamical systems, and bifurcations of vector fields. New York: Springer; 1990. (Applied mathematical sciences; vol. 42).

  46. Kopell N. Toward a theory of modelling central pattern generators. In: Neural control of rhythmic movements in vertebrates. New York: Wiley; 1988.

  47. Cabral J, Hugues E, Sporns O, Deco G. Role of local network oscillations in resting-state functional connectivity. NeuroImage. 2011;57:130–9.

  48. Izhikevich E. Dynamical systems in neuroscience: the geometry of excitability and bursting. Cambridge: MIT Press; 2006.

  49. Ermentrout GB, Terman DH. Mathematical foundations of neuroscience. New York: Springer; 2010.

  50. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–44.

  51. Rinzel J. Electrical excitability of cells, theory and experiment: review of the Hodgkin–Huxley foundation and an update. Bull Math Biol. 1990;52:3–23.

  52. Abbott LF, Kepler TB. Model neurons: from Hodgkin–Huxley to Hopfield. In: Garrido L, editor. Statistical mechanics of neural networks. Berlin: Springer; 1990. p. 5–18. (Lecture notes in physics; vol. 368).

  53. Wilson HR. Simplified dynamics of human and mammalian neocortical neurons. J Theor Biol. 1999;200:375–88.

  54. Morris C, Lecar H. Voltage oscillations in the barnacle giant muscle fiber. Biophys J. 1981;35:193–213.

  55. Badel L, Lefort S, Berger TK, Petersen CCH, Gerstner W, Richardson MJE. Extracting nonlinear integrate-and-fire models from experimental data using dynamic I–V curves. Biol Cybern. 2008;99:361–70.

  56. Latham PE, Richmond BJ, Nelson PG, Nirenberg S. Intrinsic dynamics in neuronal networks I. Theory. J Neurophysiol. 2000;83:808–27.

  57. Izhikevich EM. Simple model of spiking neurons. IEEE Trans Neural Netw. 2003;14:1569–72.

  58. Coombes S, Thul R, Wedgwood KCA. Nonsmooth dynamics in spiking neuron models. Physica D. 2012;241:2042–57.

  59. Brown TG. On the nature of the fundamental activity of the nervous centres; together with an analysis of the conditioning of rhythmic activity in progression and a theory of the evolution of function in the nervous system. J Physiol (Lond). 1914;48:18–46.

  60. Connors BW, editor. Gap junctions in the mammalian brain. Boca Raton: CRC Press; 2005.

  61. Torben-Nielsen B, Segev I, Yarom Y. The generation of phase differences and frequency changes in a network model of inferior olive subthreshold oscillations. PLoS Comput Biol. 2012;8:1002580.

  62. Liley DTJ, Cadusch PJ, Dafilis MP. A spatially continuous mean field theory of electrocortical activity. Netw Comput Neural Syst. 2002;13:67–113.

  63. Deco G, Jirsa VK, McIntosh AR. Emerging concepts for the dynamical organization of resting-state activity in the brain. Nat Rev Neurosci. 2011;12:43–56.

  64. Young CK, Eggermont JJ. Coupling of mesoscopic brain oscillations: recent advances in analytical and theoretical perspectives. Prog Neurobiol. 2009;89:61–78.

  65. Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24.

  66. Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern. 1995;73:357–66.

  67. Touboul J, Wendling F, Chauvel P, Faugeras O. Neural mass activity, bifurcations, and epilepsy. Neural Comput. 2011;23:3232–86.

  68. Jedynak M, Pons AJ, Garcia-Ojalvo J. Cross-frequency transfer in a stochastically driven mesoscopic neuronal model. Front Comput Neurosci. 2015;9:14.

  69. Spiegler A, Knösche TR, Schwab K, Haueisen J, Atay FM. Modeling brain resonance phenomena using a neural mass model. PLoS Comput Biol. 2011;7(12):1002298.

  70. Breakspear M, Roberts JA, Terry JR, Rodrigues S, Mahant N, Robinson PA. A unifying explanation of primary generalized seizures through nonlinear brain modeling and bifurcation analysis. Cereb Cortex. 2006;16:1296–313.

  71. Terry JR, Benjamin O, Richardson MP. Seizure generation: the role of nodes and networks. Epilepsia. 2012;53(9):166–9.

  72. Chowdhury FA, Woldman W, FitzGerald TH, Elwes RD, Nashef L, Terry JR, Richardson MP. Revealing a brain network endophenotype in families with idiopathic generalised epilepsy. PLoS ONE. 2014;9:110136.

  73. Valdes-Sosa PA, Sanchez-Bornot JM, Sotero RC, Iturria-Medina Y, Aleman-Gomez Y, Bosch-Bayard J, Carbonell F, Ozaki T. Model driven EEG/fMRI fusion of brain oscillations. Hum Brain Mapp. 2009;30:2701–21.

  74. Gençay R, Liu T. Nonlinear modelling and prediction with feedforward and recurrent networks. Physica D. 1997;108:119–34.

  75. Arenas A, Díaz-Guilera A, Kurths J, Moreno Y, Zhou C. Synchronization in complex networks. Phys Rep. 2008;469:93–153.

  76. Pikovsky A, Rosenblum M, Kurths J. Synchronization. Cambridge: Cambridge University Press; 2001. (Cambridge nonlinear science series; vol. 12).

  77. Brette R. Computing with neural synchrony. PLoS Comput Biol. 2012;8:1002561.

  78. Brunel N, Hakim V. Sparsely synchronized neuronal oscillations. Chaos. 2008;18:015113.

  79. Casagrande V, Mikhailov A. Birhythmicity, synchronization, and turbulence in an oscillatory system with nonlocal inertial coupling. Physica D. 2005;205:154–69.

  80. Deville REL, Peskin CS. Synchrony and asynchrony for neuronal dynamics defined on complex networks. Bull Math Biol. 2012;74:769–802.

  81. Nowotny T, Huerta R, Rabinovich MI. Neuronal synchrony: peculiarity and generality. Chaos. 2008;18:037119.

  82. Parga N, Abbott LF. Network model of spontaneous activity exhibiting synchronous transitions between up and down states. Front Neurosci. 2007;1:57–66.

  83. Pinsky PF, Rinzel J. Synchrony measures for biological neural networks. Biol Cybern. 1995;137:129–37.

  84. Popovych OV, Hauptmann C, Tass P. Control of neuronal synchrony by nonlinear delayed feedback. Biol Cybern. 2006;95:69–85.

  85. Rubin J, Terman D. Synchronized activity and loss of synchrony among heterogeneous conditional oscillators. SIAM J Appl Dyn Syst. 2002;1:146–74.

  86. Sorrentino F, Ott E. Network synchronization of groups. Phys Rev E. 2007;76:056114.

  87. Reyes AD. Synchrony-dependent propagation of firing rate in iteratively constructed networks in vitro. Nat Neurosci. 2003;6:593–9.

  88. Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang D-U. Complex networks: structure and dynamics. Phys Rep. 2006;424:175–308.

  89. Nekorkin VI, Voronin ML, Velarde MG. Clusters in an assembly of globally coupled bistable oscillators. Eur Phys J B. 1999;9:533–43.

  90. Newman MEJ. Networks: an introduction. Oxford: Oxford University Press; 2010.

  91. Do A-L, Gross T. Self-organization in continuous adaptive networks. Copenhagen: River; 2012.

  92. Stam CJ. Modern network science of neurological disorders. Nat Rev Neurosci. 2014;15:683–95.

  93. Atay FM, Biyikoglu T, Jost J. Network synchronization: spectral versus statistical properties. Physica D. 2006;224:35–41.

  94. Pecora L, Carroll T, Johnson G, Mar D, Fink KS. Synchronization stability in coupled oscillator arrays: solution for arbitrary configurations. Int J Bifurc Chaos. 2000;10:273–90.

  95. Pecora L, Barahona M. Synchronization of oscillators in complex networks. Chaos Complex Lett. 2005;1:61–91.

  96. Rothkegel A, Lehnertz K. Recurrent events of synchrony in complex networks of pulse-coupled oscillators. Europhys Lett. 2011;95:38001.

  97. Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U. Network motifs: simple building blocks of complex networks. Science. 2002;298:824–7.

  98. Sporns O, Kötter R. Motifs in brain networks. PLoS Biol. 2004;2:369.

  99. Sporns O, Honey CJ, Kötter R. Identification and classification of hubs in brain networks. PLoS ONE. 2007;2(10):1049.

  100. Kamei H. The existence and classification of synchrony-breaking bifurcations in regular homogeneous networks using lattice structures. Int J Bifurc Chaos. 2009;19:3707–32.

  101. Benjamin O, Fitzgerald THB, Ashwin P, Tsaneva-Atanasova K, Chowdhury F, Richardson MP, Terry JR. A phenomenological model of seizure initiation suggests network structure may explain seizure frequency in idiopathic generalised epilepsy. J Math Neurosci. 2012;2:1.

  102. Ermentrout GB, Kopell N. Oscillator death in systems of coupled neural oscillators. SIAM J Appl Math. 1990;50:125–46.

  103. Ashwin P, Dangelmayr G. Isochronicity-induced bifurcations in systems of weakly dissipative coupled oscillators. Dyn Stab Syst. 2000;15:263–86.

  104. Sepulchre J-A, MacKay RS. Localized oscillations in conservative or dissipative networks of weakly coupled autonomous oscillators. Nonlinearity. 1997;10:679–713.

  105. Campbell SA. Time delays in neural systems. In: Handbook of brain connectivity. Berlin: Springer; 2007. (Understanding complex systems).

  106. Hoevel P, Dahlem MA, Schoell E. Control of synchronization in coupled neural systems by time-delayed feedback. Int J Bifurc Chaos. 2010;20:813–5.

  107. Dhamala M, Jirsa V, Ding M. Enhancement of neural synchrony by time delay. Phys Rev Lett. 2004;92:074104.

  108. Levnajić Z, Pikovsky A. Phase resetting of collective rhythm in ensembles of oscillators. Phys Rev E. 2010;82:056202.

  109. Atay FM. Complex time-delay systems. Berlin: Springer; 2010. (Understanding complex systems).

  110. Engelborghs K, Luzyanina T, Roose D. Numerical bifurcation analysis of delay differential equations using DDE-BIFTOOL. ACM Trans Math Softw. 2002;28:1–21.

  111. Sieber J, Engelborghs K, Luzyanina T, Samaey G, Roose D. DDE-BIFTOOL manual—bifurcation analysis of delay differential equations. arXiv:1406.7144 (2014).

  112. Coombes S, Laing C. Delays in activity-based neural networks. Philos Trans R Soc A. 2009;367:1117–29.

  113. MacArthur BD, Sanchez-Garcia RJ, Anderson JW. Symmetry in complex networks. Discrete Appl Math. 2008;156:3525–31.

  114. Golubitsky M, Schaeffer DG. Singularities and groups in bifurcation theory. Vol. I. New York: Springer; 1985. (Applied mathematical sciences; vol. 51).

  115. Golubitsky M, Stewart I, Schaeffer DG. Singularities and groups in bifurcation theory. Vol. II. New York: Springer; 1988. (Applied mathematical sciences; vol. 69).

  116. Golubitsky M, Stewart I. The symmetry perspective: from equilibrium to chaos in phase space and physical space. Basel: Birkhäuser Verlag; 2002. (Progress in mathematics; vol. 200).

  117. Bressloff PC, Cowan JD, Golubitsky M, Thomas PJ, Wiener MC. What geometric visual hallucinations tell us about the visual cortex. Neural Comput. 2002;14(3):473–91.

  118. Ashwin P, Swift JW. The dynamics of n weakly coupled identical oscillators. J Nonlinear Sci. 1992;2:69–108.

  119. Bressloff PC, Coombes S. Symmetry and phase-locking in a ring of pulse-coupled oscillators with distributed delays. Physica D. 1999;126:99–122.

  120. Brown E, Holmes P, Moehlis J. Globally coupled oscillator networks. In: Kaplan E, Marsden J, Sreenivasan K, editors. Perspectives and problems in nonlinear science: a celebratory volume in honor of Larry Sirovich. New York: Springer; 2003. p. 183–215.

  121. Othmer HG, Scriven LE. Instability and dynamic pattern in cellular networks. J Theor Biol. 1971;32:507–37.

  122. Dionne B, Golubitsky M, Stewart I. Coupled cells with internal symmetry. I. Wreath products. Nonlinearity. 1996;9:559–74.

  123. Kuznetsov YA. Elements of applied bifurcation theory. 2nd ed. New York: Springer; 1998. (Applied mathematical sciences; vol. 112).

  124. Krauskopf B, Osinga HM, Galan-Vioque J, editors. Numerical continuation methods for dynamical systems path following and boundary value problems. Heidelberg: Springer; 2007.

  125. Dhooge A, Govaerts W, Kuznetsov YA. MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs. ACM Trans Math Softw. 2003;29:141–64.

  126. Kuznetsov YA, Levitin VV, Skovoroda AR. Continuation of stationary solutions to evolution problems in CONTENT. Amsterdam, The Netherlands: Centrum voor Wiskunde en Informatica; 1996. Report No.: AM-R9611.

  127. Ermentrout GB. Simulating, analyzing, and animating dynamical systems: a guide to XPPAUT for researchers and students. Philadelphia: SIAM; 2002.

  128. Krupa M. Robust heteroclinic cycles. J Nonlinear Sci. 1997;7:129–76.

  129. Krupa M, Melbourne I. Asymptotic stability of heteroclinic cycles in systems with symmetry. II. Proc R Soc Edinb, Sect A. 2004;134:1177–97.

  130. Ashwin P, Field M. Heteroclinic networks in coupled cell systems. Arch Ration Mech Anal. 1999;148:107–43.

  131. Dellnitz M, Field M, Golubitsky M, Ma J, Hohmann A. Cycling chaos. Int J Bifurc Chaos. 1995;5:1243–7.

  132. Ashwin P, Rucklidge AM. Cycling chaos: its creation, persistence and loss of stability in a model of nonlinear magnetoconvection. Physica D. 1998;122:134–54.

  133. Rabinovich M, Varona P, Selverston A, Abarbanel H. Dynamical principles in neuroscience. Rev Mod Phys. 2006;78:1213–65.

  134. Rabinovich MI, Varona P, Tristan I, Afraimovich VS. Chunking dynamics: heteroclinics in mind. Front Comput Neurosci. 2014;8:22.

  135. Stewart I, Golubitsky M, Pivato M. Symmetry groupoids and patterns of synchrony in coupled cell networks. SIAM J Appl Dyn Syst. 2003;2:609–46.

  136. Golubitsky M, Stewart I. Nonlinear dynamics of networks: the groupoid formalism. Bull Am Math Soc. 2006;43:305–64.

  137. Aguiar M, Ashwin P, Dias A, Field M. Dynamics of coupled cell networks: synchrony, heteroclinic cycles and inflation. J Nonlinear Sci. 2011;21:271–323.

  138. Aguiar MAD, Dias APS, Golubitsky M, Leite MD-CA. Bifurcations from regular quotient networks: a first insight. Physica D. 2009;238:137–55.

  139. Dias APS, Pinho EM. Spatially periodic patterns of synchrony in lattice networks. SIAM J Appl Dyn Syst. 2009;8:641–75.

  140. Stewart I, Parker M. Periodic dynamics of coupled cell networks I: rigid patterns of synchrony and phase relations. Dyn Syst. 2007;22:389–450.

  141. Golubitsky M, Romano D, Wang Y. Network periodic solutions: patterns of phase-shift synchrony. Nonlinearity. 2012;25:1045–74.

  142. Field MJ. Combinatorial dynamics. Dyn Syst. 2004;19:217–43.

  143. Agarwal N, Field MJ. Dynamical equivalence of network architecture for coupled dynamical systems I: asymmetric inputs. Nonlinearity. 2010;23:1245–68.

  144. Agarwal N, Field MJ. Dynamical equivalence of network architecture for coupled dynamical systems II: general case. Nonlinearity. 2010;23:1269–89.

  145. Holmes P, Rand D. Phase portraits and bifurcations of the non-linear oscillator \(\ddot{x}+(\alpha+\gamma x^{2})\dot{x}+\beta x+\delta x^{3}=0\). Int J Non-Linear Mech. 1980;15:449–58.

  146. Coombes S. Phase locking in networks of synaptically coupled McKean relaxation oscillators. Physica D. 2001;160:173–88.

  147. Coombes S. Neuronal networks with gap junctions: a study of piece-wise linear planar neuron models. SIAM J Appl Dyn Syst. 2008;7:1101–29.

  148. Storti DW, Rand RH. Dynamics of two strongly coupled relaxation oscillators. SIAM J Appl Math. 1986;46:56–67.

  149. Somers D, Kopell N. Waves and synchrony in networks of oscillators of relaxation and non-relaxation type. Physica D. 1995;89:169–83.

  150. Aronson DG, Ermentrout GB, Kopell N. Amplitude response of coupled oscillators. Physica D. 1990;41:403–49.

  151. Kurrer C. Synchronization and desynchronization of weakly coupled oscillators. Phys Rev E. 1997;56:3799–802.

  152. Sherman A, Rinzel J. Rhythmogenic effects of weak electrotonic coupling in neuronal models. Proc Natl Acad Sci USA. 1992;89:2471–4.

  153. Han SK, Kurrer C, Kuramoto Y. Dephasing and bursting in coupled neural oscillators. Phys Rev Lett. 1995;75:3190–3.

  154. Postnov D, Han SK, Kook H. Synchronization of diffusively coupled oscillators near the homoclinic bifurcation. Phys Rev E. 1999;60:2799–807.

  155. Pfeuty B, Mato G, Golomb D, Hansel D. Electrical synapses and synchrony: the role of intrinsic currents. J Neurosci. 2003;23:6280–94.

  156. Ermentrout B. Gap junctions destroy persistent states in excitatory networks. Phys Rev E. 2006;74:031918.

  157. Daido H, Nakanishi K. Diffusion-induced inhomogeneity in globally coupled oscillators: swing-by mechanism. Phys Rev Lett. 2006;96:054101.

  158. Mancilla JG, Lewis TJ, Pinto DJ, Rinzel J, Connors BW. Synchronization of electrically coupled pairs of inhibitory interneurons in neocortex. J Neurosci. 2007;27:2058–73.

  159. Pecora LM, Carroll TL. Master stability functions for synchronized coupled systems. Phys Rev Lett. 1998;80:2109–12.

  160. Restrepo J, Ott E, Hunt B. Desynchronization waves and localized instabilities in oscillator arrays. Phys Rev Lett. 2004;93:114101.

  161. Porter MA, Gleeson JP. Dynamical systems on networks: a tutorial. arXiv:1403.7663 (2014).

  162. Thul R, Coombes S. Understanding cardiac alternans: a piecewise linear modelling framework. Chaos. 2010;20:045102.

  163. Pecora LM, Sorrentino F, Hagerstrom AM, Murphy TE, Roy R. Cluster synchronization and isolated desynchronization in complex networks with symmetries. Nat Commun. 2014;5:4079.

  164. Mirollo RE, Strogatz SH. Synchronization of pulse-coupled biological oscillators. SIAM J Appl Math. 1990;50:1645–62.

  165. Ernst U, Pawelzik K, Geisel T. Synchronization induced by temporal delays in pulse-coupled oscillators. Phys Rev Lett. 1995;74:1570–3.

  166. Ernst U, Pawelzik K, Geisel T. Delay-induced multistable synchronization of biological oscillators. Phys Rev E. 1998;57:2150–62.

  167. Timme M, Wolf F, Geisel T. Unstable attractors induce perpetual synchronization and desynchronization. Chaos. 2003;13:377–87.

  168. Ashwin P, Timme M. Unstable attractors: existence and robustness in networks of oscillators with delayed pulse coupling. Nonlinearity. 2005;18:2035–60.

  169. Broer H, Efstathiou K, Subramanian E. Heteroclinic cycles between unstable attractors. Nonlinearity. 2008;21:1385–410.

  170. Kirst C, Timme M. From networks of unstable attractors to heteroclinic switching. Phys Rev E. 2008;78:065201.

  171. Steur E, Tyukin I, Nijmeijer H. Semi-passivity and synchronization of diffusively coupled neuronal oscillators. Physica D. 2009;238:2119–28.

  172. Mauroy A, Sacr P, Sepulchre R. Kick synchronization versus diffusive synchronization. In: Proceedings of the IEEE conference on decision and control. 2012. p. 7171–83.

  173. Kielblock H, Kirst C, Timme M. Breakdown of order preservation in symmetric oscillator networks with pulse-coupling. Chaos. 2011;21:025113.

  174. Vreeswijk C. Partial synchronization in populations of pulse-coupled oscillators. Phys Rev E. 1996;54:5522–37.

  175. van Vreeswijk C. Analysis of the asynchronous state in networks of strongly coupled oscillators. Phys Rev Lett. 2000;84:5110–3.

  176. van Vreeswijk C, Hansel D. Patterns of synchrony in neural networks with spike adaptation. Neural Comput. 2001;13:959–92.

  177. Bressloff PC, Coombes S. Dynamics of strongly-coupled spiking neurons. Neural Comput. 2000;12:91–129.

  178. Guckenheimer J. Isochrons and phaseless sets. J Math Biol. 1975;1:259–73.

  179. Winfree A. The geometry of biological time. 2nd ed. New York: Springer; 2001.

  180. Osinga HM, Moehlis J. A continuation method for computing global isochrons. SIAM J Appl Dyn Syst. 2010;9:1201–28.

  181. Guillamon A, Huguet G. A computational and geometric approach to phase resetting curves and surfaces. SIAM J Appl Dyn Syst. 2009;8:1005–42.

  182. Şuvak Ö, Demir A. Quadratic approximations for the isochrons of oscillators: a general theory, advanced numerical methods, and accurate phase computations. Comput Aided Des. 2010;29:1215–28.

  183. Takeshita D, Feres R. Higher order approximation of isochrons. Nonlinearity. 2010;23:1303–23.

  184. Mauroy A, Mezić I. On the use of Fourier averages to compute the global isochrons of (quasi)periodic dynamics. Chaos. 2012;22:033112.

  185. Wedgwood K. Dynamical systems techniques in the analysis of neural systems. PhD thesis, University of Nottingham; 2013.

  186. Hale JK. Ordinary differential equations. New York: Wiley; 1969.

  187. Ermentrout GB, Kopell N. Multiple pulse interactions and averaging in systems of coupled neural oscillators. J Math Biol. 1991;29:195–217.

  188. Medvedev GS. Synchronization of coupled limit cycles. J Nonlinear Sci. 2011;21:441–64.

  189. Ashwin P. Weak coupling of strongly nonlinear, weakly dissipative identical oscillators. Dyn Syst. 1989;10:2471–4.

  190. Ashwin P, Dangelmayr G. Reduced dynamics and symmetric solutions for globally coupled weakly dissipative oscillators. Dyn Syst. 2005;20:333–67.

  191. Lee WS, Ott E, Antonsen TM. Phase and amplitude dynamics in large systems of coupled oscillators: growth heterogeneity, nonlinear frequency shifts, and cluster states. Chaos. 2013;23:033116.

  192. Wedgwood KCA, Lin KK, Thul R, Coombes S. Phase-amplitude descriptions of neural oscillator models. J Math Neurosci. 2013;3:2.

  193. Ott W, Stenlund M. From limit cycles to strange attractors. Commun Math Phys. 2010;296:215–49.

  194. Medvedev GS. Synchronization of coupled stochastic limit cycle oscillators. Phys Lett A. 2010;374:1712–20.

  195. Wang Q, Young L-S. Strange attractors with one direction of instability. Commun Math Phys. 2001;218:1–97.

  196. Wang Q, Young L-S. From invariant curves to strange attractors. Commun Math Phys. 2002;225:275–304.

  197. Wang Q. Strange attractors in periodically-kicked limit cycles and Hopf bifurcations. Commun Math Phys. 2003;240:509–29.

  198. Lin KK, Young L-S. Shear-induced chaos. Nonlinearity. 2008;21:899–922.

  199. Lin KK, Young L-S. Dynamics of periodically kicked oscillators. J Fixed Point Theory Appl. 2010;7:291–312.

  200. Lin KK, Wedgwood KCA, Coombes S, Young L-S. Limitations of perturbative techniques in the analysis of rhythms and oscillations. J Math Biol. 2013;66:139–61.

  201. Brown E, Moehlis J, Holmes P. On the phase reduction and response dynamics of neural oscillator populations. Neural Comput. 2004;16:671–715.

  202. Rinzel J, Ermentrout GB. Analysis of neural excitability and oscillations. In: Koch C, Segev I, editors. Methods in neuronal modeling: from synapses to networks. 1st ed. Cambridge: MIT Press; 1989. p. 135–69.

  203. Galán RF, Ermentrout GB, Urban NN. Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. Phys Rev Lett. 2005;94:158101.

  204. Tateno T, Robinson HPC. Phase resetting curves and oscillatory stability in interneurons of rat somatosensory cortex. Biophys J. 2007;92:683–95.

  205. Netoff T, Schwemmer MA, Lewis TJ. Experimentally estimating phase response curves of neurons: theoretical and practical issues. In: Schultheiss NW, Prinz AA, Butera RJ, editors. Phase response curves in neuroscience: theory, experiment, and analysis. Berlin: Springer; 2012. p. 95–129.

  206. Gutkin BS, Ermentrout GB, Reyes AD. Phase-response curves give the responses of neurons to transient inputs. J Neurophysiol. 2005;94:1623–35.

  207. Kotani K, Yamaguchi I, Ogawa Y, Jimbo Y, Nakao H, Ermentrout GB. Adjoint method provides phase response functions for delay-induced oscillations. Phys Rev Lett. 2012;109:044101.

  208. Novičenko V, Pyragas K. Phase reduction of weakly perturbed limit cycle oscillations in time-delay systems. Physica D. 2012;241:1090–8.

  209. Govaerts W, Sautois B. Computation of the phase response curve: a direct numerical approach. Neural Comput. 2006;18:817–47.

  210. Ermentrout B. Type I membranes, phase resetting curves, and synchrony. Neural Comput. 1996;8:979–1001.

  211. Ermentrout GB, Glass L, Oldeman BE. The shape of phase-resetting curves in oscillators with a saddle node on an invariant circle bifurcation. Neural Comput. 2012;24:3111–25.

  212. Adler R. A study of locking phenomena in oscillators. Proc IRE. 1946;34:351–7.

  213. Ermentrout GB, Kopell N. Frequency plateaus in a chain of weakly coupled oscillators. SIAM J Appl Math. 1984;15:215–37.

  214. van Vreeswijk C, Abbott LF, Ermentrout GB. When inhibition not excitation synchronizes neural firing. J Comput Neurosci. 1994;1:313–21.

  215. Chow CC. Phase-locking in weakly heterogeneous neuronal networks. Physica D. 1998;118(3–4):343–70.

  216. Lewis TJ, Rinzel J. Dynamics of spiking neurons connected by both inhibitory and electrical coupling. J Comput Neurosci. 2003;14:283–309.

  217. Corinto F, Bonnin M, Gilli M. Weakly connected oscillatory network models for associative and dynamic memories. Int J Bifurc Chaos. 2007;17:4365–79.

  218. Izhikevich EM. Phase equations for relaxation oscillators. SIAM J Appl Math. 2000;60:1789–805.

  219. Winfree AT. Biological rhythms and the behaviour of populations of coupled oscillators. J Theor Biol. 1967;16:15–42.

  220. Strogatz S. From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D. 2000;143:1–20.

  221. Acebrón JA, Bonilla LL, Vicente CJP, Ritort F, Spigler R. The Kuramoto model: a simple paradigm for synchronization phenomena. Rev Mod Phys. 2005;77:137–85.

  222. Karabacak Ö, Ashwin P. Heteroclinic ratchets in networks of coupled oscillators. J Nonlinear Sci. 2009;20:105–29.

  223. Ermentrout GB. The behaviour of rings of coupled oscillators. J Math Biol. 1985;23:55–74.

  224. Skardal PS, Restrepo JG. Hierarchical synchrony of phase oscillators in modular networks. Phys Rev E. 2012;85:016208.

  225. Watanabe S, Strogatz SH. Constants of motion for superconducting Josephson arrays. Physica D. 1994;74:197–253.

  226. Kori H, Kuramoto Y, Jain S, Kiss IZ, Hudson JL. Clustering in globally coupled oscillators near a Hopf bifurcation: theory and experiments. Phys Rev E. 2014;89:062906.

  227. Golomb D, Wang XJ, Rinzel J. Synchronization properties of spindle oscillations in a thalamic reticular nucleus model. J Neurophysiol. 1994;72:1109–26.

  228. Orosz G, Moehlis J, Ashwin P. Designing the dynamics of globally coupled oscillators. Prog Theor Phys. 2009;122:611–30.

  229. Skardal PS, Ott E, Restrepo JG. Cluster synchrony in systems of coupled phase oscillators with higher-order coupling. Phys Rev E. 2011;84:036208.

  230. Ashwin P, King GP, Swift JW. Three identical oscillators with symmetric coupling. Nonlinearity. 1990;3:585–601.

  231. Hansel D, Mato G, Meunier C. Clustering and slow switching in globally coupled phase oscillators. Phys Rev E. 1993;48:3470–7.

  232. Ashwin P, Burylko O, Maistrenko Y. Bifurcation to heteroclinic cycles and sensitivity in three and four coupled phase oscillators. Physica D. 2008;237:454–66.

  233. Cohen AH, Holmes PJ, Rand RH. The nature of the coupling between segmental oscillators of the spinal lamprey generator. J Math Biol. 1982;13:345–69.

  234. Kopell N. Chains of coupled oscillators. In: Arbib MA, editor. The handbook of brain theory and neural networks. Cambridge: MIT Press; 1995. p. 178–83.

  235. Ermentrout GB, Kleinfeld D. Traveling electrical waves in cortex: insights from phase dynamics and speculation on a computational role. Neuron. 2001;29:33–44.

  236. Ermentrout GB. Stable periodic solutions to discrete and continuum arrays of weakly coupled nonlinear oscillators. SIAM J Appl Math. 1992;52:1665–87.

  237. Kopell N, Ermentrout GB. Phase transitions and other phenomena in chains of coupled oscillators. SIAM J Appl Math. 1990;50:1014–52.

  238. Kopell N, Ermentrout GB. Symmetry and phaselocking in chains of weakly coupled oscillators. Commun Pure Appl Math. 1986;39:623–60.

  239. Rubino D, Robbins KA, Hatsopoulos NG. Propagating waves mediate information transfer in the motor cortex. Nat Neurosci. 2006;9:1549–57.

  240. Crook SM, Ermentrout GB, Vanier MC, Bower JM. The role of axonal delay in the synchronization of networks of coupled cortical oscillators. J Comput Neurosci. 1997;4:161–72.

  241. Bressloff PC, Coombes S. Physics of the extended neuron. Int J Mod Phys B. 1997;11:2343–92.

  242. Battogtokh D. Phase turbulence in the nonlocally coupled phase equation. Phys Lett A. 2002;299:558–64.

  243. Bick C, Timme M, Paulikat D, Rathlev D, Ashwin P. Chaos in symmetric phase oscillator networks. Phys Rev Lett. 2011;107:244101.

  244. Kori H, Kuramoto Y. Slow switching in globally coupled oscillators: robustness and occurrence through delayed coupling. Phys Rev E. 2001;63:046214.

  245. Kori H, Kuramoto Y. Slow switching in a population of delayed pulse-coupled oscillators. Phys Rev E. 2003;68:021919.

  246. Kiss IZ, Rusin CG, Kori H, Hudson JL. Engineering complex dynamical structures: sequential patterns and desynchronization. Science. 2007;316:1886.

  247. Ashwin P, Postlethwaite C. On designing heteroclinic networks from graphs. Physica D. 2013;265:26–39.

  248. Rabinovich M, Huerta R, Varona P. Heteroclinic synchronization: ultrasubharmonic locking. Phys Rev Lett. 2006;96:014101.

  249. Rabinovich MI, Muezzinoglu MK. Nonlinear dynamics of the brain: emotion and cognition. Phys Usp. 2010;53:357–72.

  250. Rabinovich MI, Afraimovich VS, Bick C, Varona P. Information flow dynamics in the brain. Phys Life Rev. 2012;9:51–73.

  251. Ashwin P, Karabacak Ö, Nowotny T. Criteria for robustness of heteroclinic cycles in neural microcircuits. J Math Neurosci. 2011;1:13.

  252. Ashwin P, Orosz G, Wordsworth J, Townley S. Dynamics on networks of cluster states for globally coupled phase oscillators. SIAM J Appl Dyn Syst. 2007;6:728–58.

  253. Ashwin P, Borresen J. Encoding via conjugate symmetries of slow oscillations for globally coupled oscillators. Phys Rev E. 2004;70:026203.

  254. Ashwin P, Borresen J. Discrete computation using a perturbed heteroclinic network. Phys Lett A. 2005;347:208–14.

  255. Wordsworth J, Ashwin P. Spatiotemporal coding of inputs for a system of globally coupled phase oscillators. Phys Rev E. 2008;78:066203.

  256. Hofbauer J, Sigmund K. Evolutionary game dynamics. Bull Am Math Soc. 2003;40:479–519.

  257. May RM, Leonard WJ. Nonlinear aspects of competition between three species. SIAM J Appl Math. 1975;29:243–53.

  258. Neves FS, Timme M. Computation by switching in complex networks of states. Phys Rev Lett. 2012;109:018701.

  259. Ashwin P, Postlethwaite C. Designing heteroclinic and excitable networks in phase space using two populations of coupled cells. J Nonlinear Sci. 2015. doi:10.1007/s00332-015-9277-2.

  260. Kaneko K, Tsuda I. Focus issue on chaotic itinerancy. Chaos. 2003;13:926–1164.

  261. Kaneko K. On the strength of attractors in a high-dimensional system: Milnor attractor network, robust global attraction, and noise-induced selection. Physica D. 1998;124:322–44.

  262. Kuznetsov AS, Kurths J. Stable heteroclinic cycles for ensembles of chaotic oscillators. Phys Rev E. 2002;66:026201.

  263. Tsuda I. Hypotheses on the functional roles of chaotic transitory dynamics. Chaos. 2009;19:015113.

  264. Stone E, Holmes P. Random perturbations of heteroclinic attractors. SIAM J Appl Math. 1990;50:726–43.

  265. Mainen Z, Sejnowski T. Reliability of spike timing in neocortical neurons. Science. 1995;268:1503–6.

  266. Taillefumier T, Magnasco M. A transition to sharp timing in stochastic leaky integrate-and-fire neurons driven by frozen noisy input. Neural Comput. 2014;26:819–59.

  267. Ermentrout GB, Galán RF, Urban NN. Reliability, synchrony and noise. Trends Neurosci. 2008;31:428–34.

  268. Goldobin DS, Pikovsky A. Synchronization and desynchronization of self-sustained oscillators by common noise. Phys Rev E. 2005;71:045201.

  269. Nakao H, Arai K, Kawamura Y. Noise-induced synchronization and clustering in ensembles of uncoupled limit-cycle oscillators. Phys Rev Lett. 2007;98:184101.

  270. Lai YM, Porter MA. Noise-induced synchronization, desynchronization, and clustering in globally coupled nonidentical oscillators. Phys Rev E. 2013;88:012905.

  271. Kawamura Y. Collective phase dynamics of globally coupled oscillators: noise-induced anti-phase synchronization. Physica D. 2014;270:20–9.

  272. Kawamura Y, Nakao H, Kuramoto Y. Noise-induced turbulence in nonlocally coupled oscillators. Phys Rev E. 2007;75:036209.

  273. Lindner B. Effects of noise in excitable systems. Phys Rep. 2004;392:321–424.

  274. Schwabedal JTC, Pikovsky A. Phase description of stochastic oscillations. Phys Rev Lett. 2013;110:204102.

  275. Ly C, Ermentrout GB. Analytic approximations of statistical quantities and response of noisy oscillators. Physica D. 2011;240:719–31.

  276. Nakao H, Teramae J-N, Goldobin DS, Kuramoto Y. Effective long-time phase dynamics of limit-cycle oscillators driven by weak colored noise. Chaos. 2010;20:033126.

  277. Moehlis J. Improving the precision of noisy oscillators. Physica D. 2014;272:8–17.

  278. Kuramoto Y. Collective synchronization of pulse-coupled oscillators and excitable units. Physica D. 1991;50:15–30.

  279. Medvedev GS, Zhuravytska S. The geometry of spontaneous spiking in neuronal networks. J Nonlinear Sci. 2012;22:689–725.

  280. Newhall K, Kovačič G, Kramer P, Cai D. Cascade-induced synchrony in stochastically driven neuronal networks. Phys Rev E. 2010;82:041903.

  281. Ly C. Dynamics of coupled noisy neural oscillators with heterogeneous phase resetting curves. SIAM J Appl Dyn Syst. 2014;13:1733–55.

  282. Abouzeid A, Ermentrout B. Correlation transfer in stochastically driven neural oscillators over long and short time scales. Phys Rev E. 2011;84:061914.

  283. Bressloff PC, Lai Y-M. Stochastic synchronization of neuronal populations with intrinsic and extrinsic noise. J Math Neurosci. 2011;1:2.

  284. Burton SD, Ermentrout GB, Urban NN. Intrinsic heterogeneity in oscillatory dynamics limits correlation-induced neural synchronization. J Neurophysiol. 2012;108:2115–33.

  285. Yoshimura K, Arai K. Phase reduction of stochastic limit cycle oscillators. Phys Rev Lett. 2008;101:154101.

  286. Yoshimura K. Phase reduction of stochastic limit-cycle oscillators. In: Schuster HG, editor. Reviews of nonlinear dynamics and complexity. Weinheim: Wiley; 2010. p. 59–90.

  287. Teramae J, Nakao H, Ermentrout GB. Stochastic phase reduction for a general class of noisy limit cycle oscillators. Phys Rev Lett. 2009;102:194102.

  288. Goldobin DS, Teramae J-N, Nakao H, Ermentrout GB. Dynamics of limit-cycle oscillators subject to general noise. Phys Rev Lett. 2010;105:154101.

  289. Nykamp DQ, Tranchina D. A population density approach that facilitates large-scale modeling of neural networks: extension to slow inhibitory synapses. Neural Comput. 2001;13:511–46.

  290. Pazó D, Montbrió E. Low-dimensional dynamics of populations of pulse-coupled oscillators. Phys Rev X. 2014;4:011009.

  291. Ariaratnam J, Strogatz S. Phase diagram for the Winfree model of coupled nonlinear oscillators. Phys Rev Lett. 2001;86:4278–81.

  292. Luke TB, Barreto E, So P. Complete classification of the macroscopic behaviour of a heterogeneous network of theta neurons. Neural Comput. 2013;25:3207–34.

  293. Montbrió E, Pazó D, Roxin A. Macroscopic description for networks of spiking neurons. Phys Rev X. 2015;5:021028.

  294. Laing CR. Exact neural fields incorporating gap junctions. SIAM J Appl Dyn Syst. 2015;14(4):1899–929.

  295. Abrams D, Strogatz S. Chimera states for coupled oscillators. Phys Rev Lett. 2004;93:174102.

  296. Abrams D, Mirollo R, Strogatz S, Wiley D. Solvable model for chimera states of coupled oscillators. Phys Rev Lett. 2008;101:084103.

  297. Laing CR. The dynamics of chimera states in heterogeneous Kuramoto networks. Physica D. 2009;238:1569–88.

  298. Panaggio MJ, Abrams DM. Chimera states: coexistence of coherence and incoherence in networks of coupled oscillators. Nonlinearity. 2015;28:67–87.

  299. Kuramoto Y, Battogtokh D. Coexistence of coherence and incoherence in nonlocally coupled phase oscillators. Nonlinear Phenom Complex Syst. 2002;5:380–5.

  300. Kuramoto Y. Reduction methods applied to non-locally coupled oscillator systems. In: Nonlinear dynamics and chaos: where do we go from here?. Bristol: IOP; 2003. p. 209–27.

  301. Watanabe S, Strogatz SH. Constants of motion for superconducting Josephson arrays. Physica D. 1994;74:197–253.

  302. Wolfrum M, Omel’chenko OE. Chimera states are chaotic transients. Phys Rev E. 2011;84:015201.

  303. Martens EA. Bistable chimera attractors on a triangular network of oscillator populations. Phys Rev E. 2010;82:016216.

  304. Shima S-I, Kuramoto Y. Rotating spiral waves with phase-randomized core in nonlocally coupled oscillators. Phys Rev E. 2004;69:036213.

  305. Ashwin P, Burylko O. Weak chimeras in minimal networks of coupled phase oscillators. Chaos. 2015;25:013106.

  306. Tinsley MR, Nkomo S, Showalter K. Chimera and phase-cluster states in populations of coupled chemical oscillators. Nat Phys. 2012;8:662–5.

  307. Martens EA, Thutupalli S, Fourrière A, Hallatschek O. Chimera states in mechanical oscillator networks. Proc Natl Acad Sci USA. 2013;110:10563–7.

  308. Omel’chenko I, Maistrenko Y, Hövel P, Schöll E. Loss of coherence in dynamical networks: spatial chaos and chimera states. Phys Rev Lett. 2012;109:234102.

  309. Sethia GC, Sen A. Chimera states: the existence criteria revisited. Phys Rev Lett. 2014;112:144101.

    Article  Google Scholar 

  310. Olmi S, Navas A, Boccaletti S, Torcini A. Hysteretic transitions in the Kuramoto model with inertia. Phys Rev E. 2014;90:042905.

    Article  Google Scholar 

  311. Sieber J, Omel’chenko O, Wolfrum M. Controlling unstable chaos: stabilizing chimera states by feedback. Phys Rev Lett. 2014;112:054102.

    Article  Google Scholar 

  312. Maistrenko Y, Vasylenko A, Sudakov O, Levchenko R, Maistrenko V. Cascades of multi-headed chimera states for coupled phase oscillators. Int J Bifurc Chaos. 2014;24:1440014.

    Article  MathSciNet  Google Scholar 

  313. Xie J, Knobloch E. Multicluster and traveling chimera states in nonlocal phase-coupled oscillators. Phys Rev E. 2014;90:022919.

    Article  Google Scholar 

  314. Olmi S, Politi A, Torcini A. Collective chaos in pulse-coupled neural networks. Europhys Lett. 2010;92:60007.

    Article  Google Scholar 

  315. Ma R, Wang J, Liu Z. Robust features of chimera states and the implementation of alternating chimera states. Europhys Lett. 2010;91:40006.

    Article  Google Scholar 

  316. Lysyansky B, Popovych OV, Tass P. Multi-frequency activation of neuronal networks by coordinated reset stimulation. Interface Focus. 2010;1:75–85.

    Article  Google Scholar 

  317. Hlinka J, Coombes S. Using computational models to relate structural and functional brain connectivity. Eur J Neurosci. 2012;36:2137–45.

    Article  Google Scholar 

  318. Arsiwalla XD, Zucca R, Betella A, Martinez E, Dalmazzo D, Omedas P, Deco G, Verschure PFMJ. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction. Front Neuroinform. 2015;9:2.

    Article  Google Scholar 

  319. Nicosia V, Nicosia V, Valencia M, Valencia M, Chavez M, Chavez M, Díaz-Guilera A, Díaz-Guilera A, Latora V. Remote synchronization reveals network symmetries and functional modules. Phys Rev Lett. 2013;110:174102.

    Article  Google Scholar 

  320. Daffertshofer A, van Wijk BCM. On the influence of amplitude on the connectivity between phases. Front Neuroinform. 2011;5:6.

    Article  Google Scholar 

  321. Nadim F. Invertebrate pattern generation: overview. In: Jaeger D, Jung R, editors. Encyclopedia of computational neuroscience. Heidelberg: Springer; 2014.

    Google Scholar 

  322. Collins JJ, Richmond SA. Hard-wired central pattern generators for quadrupedal locomotion. Biol Cybern. 1994;71:375–85.

    Article  MATH  Google Scholar 

  323. Sherwood WE, Harris-Warrick R, Guckenheimer J. Synaptic patterning of left-right alternation in a computational model of the rodent hindlimb central pattern generator. J Comput Neurosci. 2010;30:323–60.

    Article  MathSciNet  Google Scholar 

  324. Sakurai A, Newcomb J, Lillvis J, Katz P. Different roles for homologous interneurons in species exhibiting similar rhythmic behaviors. Curr Biol. 2011;21:1036–43.

    Article  Google Scholar 

  325. Bal T, Nagy F, Moulins M. The pyloric central pattern generator in Crustacea: a set of conditional neural oscillators. J Comp Physiol. 1988;163:715–27.

    Article  Google Scholar 

  326. Cohen AH, Rossignol S, Grillner S, editors. Neural control of rhythmic movements in vertebrates. New York: Wiley-Blackwell; 1988.

    Google Scholar 

  327. Marder E, Bucher D. Central pattern generators and the control of rhythmic movements. Curr Biol. 2001;11:986–96.

    Article  Google Scholar 

  328. Hooper SL. Central pattern generators. In: eLS. New York: Wiley; 2001.

    Google Scholar 

  329. Ljspeert AJ. Central pattern generators for locomotion control in animals and robots: a review. Neural Netw. 2008;21:642–53.

    Article  Google Scholar 

  330. Crespi A, Karakasiliotis K, Guignard A, Ljspeert AJ. Salamandra robotica II: an amphibious robot to study salamander-like swimming and walking gaits. IEEE Trans Robot. 2013;29:308–20.

    Article  Google Scholar 

  331. Collins JJ, Stewart IN. Coupled nonlinear oscillators and the symmetries of animal gaits. J Nonlinear Sci. 1993;3:349–92.

    Article  MathSciNet  MATH  Google Scholar 

  332. Golubitsky M, Stewart I, Buono P-L, Collins JJ. Symmetry in locomotor central pattern generators and animal gaits. Nature. 1999;401:693–5.

    Article  Google Scholar 

  333. Shilnikov A, Gordon R, Belykh I. Polyrhythmic synchronization in bursting networking motifs. Chaos. 2008;18:037120.

    Article  MathSciNet  Google Scholar 

  334. Wojcik J, Schwabedal J, Clewley R, Shilnikov AL. Key bifurcations of bursting polyrhythms in 3-cell central pattern generators. PLoS ONE. 2014;9:92918.

    Article  Google Scholar 

  335. Brascamp JW, Klink PC, Levelt WJM. The ‘laws’ of binocular rivalry: 50 years of Levelt’s propositions. Vis Res. 2015;109:20–37.

    Article  Google Scholar 

  336. Shpiro A, Curtu R, Rinzel J, Rubin N. Dynamical characteristics common to neuronal competition models. J Neurophysiol. 2007;97:462–73.

    Article  Google Scholar 

  337. Blake R. A neural theory of binocular rivalry. Psychol Rev. 1989;96:145–67.

    Article  Google Scholar 

  338. Laing C, Chow C. A spiking neuron model for binocular rivalry. J Comput Neurosci. 2002;12:39–53.

    Article  Google Scholar 

  339. Wilson HR. Minimal physiological conditions for binocular rivalry and rivalry memory. Vis Res. 2007;47:2741–50.

    Article  Google Scholar 

  340. Diekman CO, Golubitsky M. Network symmetry and binocular rivalry experiments. J Math Neurosci. 2014;4:12.

    Article  MathSciNet  Google Scholar 

  341. Ashwin P, Lavric A. A low-dimensional model of binocular rivalry using winnerless competition. Physica D. 2010;239:529–36.

    Article  MathSciNet  MATH  Google Scholar 

  342. Mintchev SM, Young L-S. Self-organization in predominantly feedforward oscillator chains. Chaos. 2009;19:043131.

    Article  MathSciNet  Google Scholar 

  343. Lanford OE, Mintchev SM. Stability of a family of travelling wave solutions in a feedforward chain of phase oscillators. Nonlinearity. 2015;28:237–62.

    Article  MathSciNet  MATH  Google Scholar 

  344. Glass L, Mackey MC. From clocks to chaos: the rythms of life. Princeton: Princeton University Press; 1988.

    Google Scholar 

  345. Payeur A, Maler L, Longtin A. Oscillatorylike behavior in feedforward neuronal networks. Phys Rev E. 2015;92:012703.

    Article  Google Scholar 

  346. McCullen NJ, Mullin T, Golubitsky M. Sensitive signal detection using a feed-forward oscillator network. Phys Rev Lett. 2007;98:254101.

    Article  Google Scholar 

  347. Golubitsky M, Postlethwaite C. Feed-forward networks, center manifolds, and forcing. Discrete Contin Dyn Syst. 2012;32(8):2913–35.

    Article  MathSciNet  MATH  Google Scholar 

  348. Laudanski J, Coombes S, Palmer AR, Summer CJ. Mode-locked spike trains in responses of ventral cochlear nucleus chopper and onset neurons to periodic stimuli. J Neurophysiol. 2010;103:1226–37.

    Article  Google Scholar 

  349. Lerud KD, Almonte FV, Kim JC, Large EW. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals. Hear Res. 2014;308:41–9.

    Article  Google Scholar 

  350. Daffertshofer A, van Wijk BCM. Transient motor behavior and synchronization in the cortex. In: Rabinovich MI, Friston KJ, Varona P, editors. Principles of brain dynamics. Cambridge: MIT Press; 2012. p. 233–59.

    Google Scholar 

  351. Cartwright JHE, González DL, Piro O. Universality in three-frequency resonances. Phys Rev E. 1999;59:2902–6.

    Article  Google Scholar 

  352. al Azad AK, Ashwin P. Within-burst synchrony changes for coupled elliptic bursters. SIAM J Appl Dyn Syst. 2010;9:261–81.

    Article  MathSciNet  MATH  Google Scholar 

  353. Segev R, Shapira Y, Benveniste M, Ben-Jacob E. Observations and modeling of synchronized bursting in two-dimensional neural networks. Phys Rev E. 2001;64:011920.

    Article  Google Scholar 

  354. Nakada K, Miura K, Hayashi H. Burst synchronization and chaotic phenomena in two strongly coupled resonate-and-fire neurons. Int J Bifurc Chaos. 2008;18:1249.

    Article  MathSciNet  MATH  Google Scholar 

  355. Zhou C, Kurths J. Spatiotemporal coherence resonance of phase synchronization in weakly coupled chaotic oscillators. Phys Rev E. 2002;65:040101.

    Article  MathSciNet  Google Scholar 

  356. So P, Barreto E. Generating macroscopic chaos in a network of globally coupled phase oscillators. Chaos. 2011;21:033127.

    Article  MathSciNet  Google Scholar 

  357. So P, Luke TB, Barreto E. Networks of theta neurons with time-varying excitability: macroscopic chaos, multistability, and final-state uncertainty. Physica D. 2014;267:16–26.

    Article  MathSciNet  MATH  Google Scholar 

  358. Tiesinga PHE, Fellous J-M, Sejnowski TJ. Attractor reliability reveals deterministic structure in neuronal spike trains. Neural Comput. 2002;14(7):1629–50.

    Article  MATH  Google Scholar 

  359. Goldobin D, Pikovsky A. Antireliability of noise-driven neurons. Phys Rev E. 2006;73:061906.

    Article  MathSciNet  Google Scholar 

  360. Lin KK, Shea-Brown E, Young L-S. Reliability of coupled oscillators. J Nonlinear Sci. 2009;19:497–545.

    Article  MathSciNet  MATH  Google Scholar 

  361. Ichinose N, Aihara K, Judd K. Extending the concept of isochrons from oscillatory to excitable systems for modelling an excitable neuron. Int J Bifurc Chaos. 1998;8:2375–85.

    Article  MATH  Google Scholar 

  362. Rabinovitch A, Rogachevskii I. Threshold, excitability and isochrons in the Bonhoeffer–van der Pol system. Chaos. 1999;9:880–6.

    Article  MathSciNet  MATH  Google Scholar 

  363. Wilson D, Moehlis J. Extending phase reduction to excitable media: theory and applications. SIAM Rev. 2015;57:201–22.

    Article  MathSciNet  Google Scholar 

  364. Schwemmer M, Lewis TJ. The robustness of phase-locking in neurons with dendro-dendritic electrical coupling. J Math Biol. 2014;68:303–40.

    Article  MathSciNet  MATH  Google Scholar 

  365. Amzica F, Steriade M. Neuronal and glial membrane potentials during sleep and paroxysmal oscillations in the neocortex. J Neurosci. 2000;20:6648–65.

    Google Scholar 

  366. Lee S-H, Dan Y. Neuromodulation of brain states. Neuron. 2012;76:209–22.

    Article  Google Scholar 

  367. Stiefel KM, Gutkin BS, Sejnowski TJ. The effects of cholinergic neuromodulation on neuronal phase-response curves of modeled cortical neurons. J Comput Neurosci. 2009;26:289–301.

    Article  MathSciNet  Google Scholar 

  368. Vladimirski BB, Tabak J, O’Donovan MJ, Rinzel J. Episodic activity in a heterogeneous excitatory network, from spiking neurons to mean field. J Comput Neurosci. 2008;25:39–63.

    Article  MathSciNet  Google Scholar 

  369. Tsubo Y, Teramae J-N, Fukai T. Synchronization of excitatory neurons with strongly heterogeneous phase responses. Phys Rev Lett. 2007;99:228101.

    Article  Google Scholar 

  370. Moon S, Ghanem R, Kevrekidis I. Coarse graining the dynamics of coupled oscillators. Phys Rev Lett. 2006;96:144101.

    Article  Google Scholar 

  371. Holt DF. Handbook of computational group theory. London: Chapman & Hall; 2005. (Discrete mathematics and its applications).

    Book  MATH  Google Scholar 

Download references

Acknowledgements

We would like to thank the many people who commented on draft versions of the manuscript, in particular Chris Bick, Áine Byrne, Kurtis Gibson, Diego Pazó, Mason Porter and Kyle Wedgwood. We thank Kyle Wedgwood for providing Fig. 11. SC was supported by the European Commission through the FP7 Marie Curie Initial Training Network 289146, NETT: Neural Engineering Transformative Technologies.

Author information

Corresponding author

Correspondence to Peter Ashwin.

Additional information

Competing Interests

The authors declare that they have no competing interests.

Authors’ Contributions

PA, SC and RN contributed equally. All authors read and approved the final manuscript.

Appendix

The Hodgkin–Huxley description of nerve tissue is completed with

$$\begin{aligned} \alpha_{m}(V) &= \frac{0.1(V+40)}{1-\exp[-0.1(V+40)]}, \qquad \beta_{m}(V) = 4.0 \exp\bigl[-0.0556(V+65)\bigr], \\ \alpha_{h}(V) &= 0.07 \exp\bigl[-0.05(V+65)\bigr], \qquad \beta_{h}(V) = \frac{1}{1+\exp[-0.1(V+35)]}, \\ \alpha_{n}(V) &= \frac{0.01(V+55)}{1-\exp[-0.1(V+55)]}, \qquad \beta_{n}(V) = 0.125 \exp\bigl[-0.0125(V+65)\bigr], \end{aligned}$$

and \(C=1\mbox{ $\mu$F}\,\mbox{cm}^{-2}\), \(g_{L} = 0.3\mbox{ mmho}\,\mbox{cm}^{-2}\), \(g_{\mathrm{K}}=36\mbox{ mmho}\,\mbox{cm}^{-2}\), \(g_{\mathrm{Na}}= 120\mbox{ mmho}\,\mbox{cm}^{-2}\) (where \(1\mbox{ mmho} = 1\mbox{ mS}\)), \(V_{L}=-54.402\mbox{ mV}\), \(V_{\mathrm{K}}=-77\mbox{ mV}\) and \(V_{\mathrm{Na}}=50\mbox{ mV}\). (All potentials are measured in mV, all times in ms and all currents in \(\mu\mbox{A}\,\mbox{cm}^{-2}\).)
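These rate functions and parameter values are all that is needed to simulate the model numerically. The following is a minimal Python sketch that couples them to the standard Hodgkin–Huxley current-balance and gating equations stated earlier in the text; the drive current I_ext, the forward-Euler scheme, the step size and the initial conditions are illustrative choices of ours, not part of the original specification.

```python
import numpy as np

# Voltage-dependent rate functions from the appendix (V in mV, rates in 1/ms).
# Note: alpha_m and alpha_n have removable singularities (at V = -40 and
# V = -55 respectively); a fixed-step integrator almost never lands on them.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-0.1 * (V + 40.0)))
def beta_m(V):  return 4.0 * np.exp(-0.0556 * (V + 65.0))
def alpha_h(V): return 0.07 * np.exp(-0.05 * (V + 65.0))
def beta_h(V):  return 1.0 / (1.0 + np.exp(-0.1 * (V + 35.0)))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-0.1 * (V + 55.0)))
def beta_n(V):  return 0.125 * np.exp(-0.0125 * (V + 65.0))

# Parameter values from the appendix.
C = 1.0                                   # membrane capacitance, uF/cm^2
g_L, g_K, g_Na = 0.3, 36.0, 120.0         # conductances, mmho/cm^2 (= mS/cm^2)
V_L, V_K, V_Na = -54.402, -77.0, 50.0     # reversal potentials, mV

def hh_rhs(state, I_ext):
    """Right-hand side of the Hodgkin-Huxley equations for state (V, m, h, n)."""
    V, m, h, n = state
    I_ion = (g_Na * m**3 * h * (V - V_Na)
             + g_K * n**4 * (V - V_K)
             + g_L * (V - V_L))
    dV = (I_ext - I_ion) / C
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    return np.array([dV, dm, dh, dn])

# Forward-Euler integration; a constant drive of 10 uA/cm^2 is enough to
# elicit repetitive (tonic) spiking in this parameter regime.
dt, T, I_ext = 0.01, 100.0, 10.0          # ms, ms, uA/cm^2
state = np.array([-65.0, 0.05, 0.6, 0.32])  # near the resting equilibrium
V_trace = np.empty(int(T / dt))
for k in range(len(V_trace)):
    state = state + dt * hh_rhs(state, I_ext)
    V_trace[k] = state[0]
```

Plotting V_trace against time yields a periodic train of action potentials; this single-cell limit cycle is the starting point for the phase-reduction machinery discussed in the main text.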

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Ashwin, P., Coombes, S. & Nicks, R. Mathematical Frameworks for Oscillatory Network Dynamics in Neuroscience. J. Math. Neurosc. 6, 2 (2016). https://doi.org/10.1186/s13408-015-0033-6

