1 Introduction

Transport in Dynamical Systems

Instrumental to understanding the essential behavior of complicated non-autonomous flows is grasping how transport happens in them. On a qualitative level, this leads to objects that prohibit transport, commonly named transport barriers; these often originate from the geometric picture for autonomous systems, in which trajectories are unable to cross codimension-1 invariant manifolds (Haller 2000, 2001; Haller and Beron-Vera 2012, 2013). For periodically forced systems, invariant manifolds enclose regions called “lobes” that get transported across these periodically varying manifolds (MacKay et al. 1984; Rom-Kedar and Wiggins 1990).

On a quantitative level, one searches for surfaces of small flux (Balasuriya et al. 2014; Karrasch 2016; Froyland and Koltai 2017), so-called partial barriers (Wigner 1937; Meiss 1992). Instead of characterizing regions that do not mix with one another by enclosing them with boundaries of low flux, there are approaches that aim to describe these sets directly. Such set-oriented concepts are strongly interwoven with the theory of transfer operators (Perron–Frobenius and Koopman operators) and comprise almost-invariant sets (Dellnitz and Junge 1999) and ergodic partitions (Mezić and Wiggins 1999) in the autonomous case, and coherent sets (Froyland et al. 2010; Froyland 2013) in the non-autonomous case.

Distinctive attention has been given to coherent sets, which are a (possibly time-dependent) family of sets having little or no exchange with their surroundings in terms of transport, and which are robust to small diffusion over a finite time of consideration (Froyland 2013, 2015). Natural examples include moving vortices in atmospheric (Rypina et al. 2007; Froyland et al. 2010), oceanographic (Treguier et al. 2003; Dellnitz et al. 2009; Froyland et al. 2015), and plasma flows (Padberg et al. 2007). In such applications, one would like to be able to find coherent sets even when a dynamical model that can be evaluated arbitrarily often is not available, but only a finite set of Lagrangian trajectory data (passive tracers moving with the flow, with positions sampled at discrete time instances). This problem has received a lot of attention in recent years, and a diverse collection of tools has been developed to tackle it (Budišić and Mezić 2012; Allshouse and Thiffeault 2012; Ser-Giacomi et al. 2015; Froyland and Padberg-Gehle 2015; Allshouse and Peacock 2015; Williams et al. 2015; Hadjighasem et al. 2016; Banisch and Koltai 2017; Schlueter-Kuck and Dabiri 2017; Padberg-Gehle and Schneide 2017; Rypina and Pratt 2017; Fabregat et al. 2016; Froyland and Junge 2017).

While other current methods aim at collecting trajectories into coherent sets, in Banisch and Koltai (2017) it has been proposed to go one step further and analyze the connectivity structure of the state space under transport and mixing with “transport coordinates” and the “skeleton of transport”. Very similar observations have been made earlier in Budišić and Mezić (2012) in the infinite-time limit for periodically forced systems. While coherent sets (and transport barriers) aim at partitioning the state space, the skeleton aims at “spanning” the state space with respect to transport. In this respect, coherent sets can be associated with distinct “extremal regions” of the skeleton. Here we will only use this idea of extremality, more precisely that coherent sets are “maximally far” from one another, as measured by transport. To this end, we will need to measure the “farness” of dynamical trajectories.

Several dynamical distance measures have been put forward already to measure the “distance” or “dissimilarity” of trajectories or initial states in dynamical systems (Mezić and Banaszuk 2004; Budišić and Mezić 2012; Froyland and Padberg-Gehle 2015; Hadjighasem et al. 2016; Fabregat et al. 2016; Karrasch and Keller 2016). The majority of them are shown to serve their purpose well in revealing coherent structures efficiently and reliably. However, they are either heuristic in the sense that they are not derived from the physical notion of transport and mixing, or no discretizations to finite scattered trajectory data have been developed.

The purpose of this paper is thus twofold. On the one hand, we develop a distance measure (a semidistance) between trajectories that is derived from the physical notion of transport and mixing subject to diffusion of vanishing strength, and we also derive a discretized distance measure for finite (also possibly sparse and incomplete) Lagrangian data that is consistent with its continuous counterpart in the limit of infinite data. On the other hand, we construct a tool to analyze with such distances the structure of the state space under transport, especially to find coherent sets. This tool makes use of the idea that coherent sets are some sort of extremal regions on a spanning structure with respect to transport, although in this work we will not investigate this “skeleton” in its entirety.

Finite Time Coherent Sets

Let us consider the ordinary differential equation (ODE)

$$\begin{aligned} \dot{x}_t = v(t,x_t) \end{aligned}$$
(1)

on some bounded \(X\subset \mathbb {R}^d\) and on a finite time interval [0, T] for some \(T>0\). Throughout the paper, we will assume that \(v:[0,T]\times X\rightarrow \mathbb {R}^d\) is a continuous velocity field tangential at the boundary, such that the flow of (1), denoted by \(\phi _{s,t}[\cdot ]\), \(0\le s, t\le T\), is a diffeomorphism on appropriate subsets of X. For \(t<s\) we flow backward in time: \(\phi _{s,t}= \phi _{t,s}^{-1}\).
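For concreteness, the flow map \(\phi _{s,t}\) can be approximated numerically from any given velocity field; a minimal sketch with a classical fourth-order Runge–Kutta integrator (the rotating velocity field is purely illustrative and not from the text):

```python
import math

def flow(v, x, s, t, steps=1000):
    """Approximate the flow map phi_{s,t}[x] of dx/dt = v(t, x)
    with classical fourth-order Runge-Kutta steps; a negative step
    size (t < s) flows backward in time."""
    h = (t - s) / steps
    x = list(x)
    time = s
    for _ in range(steps):
        k1 = v(time, x)
        k2 = v(time + h/2, [xi + h/2*ki for xi, ki in zip(x, k1)])
        k3 = v(time + h/2, [xi + h/2*ki for xi, ki in zip(x, k2)])
        k4 = v(time + h, [xi + h*ki for xi, ki in zip(x, k3)])
        x = [xi + h/6*(a + 2*b + 2*c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        time += h
    return x

# Illustrative autonomous rotation: solutions circle the origin,
# so flowing (1, 0) for a quarter period gives approximately (0, 1).
rotation = lambda t, x: [-x[1], x[0]]
y = flow(rotation, [1.0, 0.0], 0.0, math.pi / 2)
```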

Many different notions to characterize coherent sets have been proposed in the literature. Central to all of these notions is the idea that coherent sets should be robust under noise; without such a requirement, the (non-intersecting) characteristic of any singleton could be considered a coherent set. To this end, one typically perturbs the ODE (1) by a random noise (Denner et al. 2016; Froyland and Koltai 2017; Karrasch and Keller 2016), leading to the Itô stochastic differential equation (SDE)Footnote 1

$$\begin{aligned} \mathrm{d}\varvec{x}_t^{\scriptscriptstyle {(\varepsilon )}} = v(t,\varvec{x}_t^{\scriptscriptstyle {(\varepsilon )}})\,\mathrm{d}t + \sqrt{\varepsilon }\,\mathrm{d}\varvec{w}_t\,, \end{aligned}$$
(2)

where \(\{\varvec{w}_t\}_{t\in [0,T]}\) is a Wiener process (Brownian motion) with generator \(\phi \mapsto \frac{1}{2}\Delta \phi \), reflecting boundaries, and starting from \(\varvec{w}_0=0\) (deterministically), and \(\varepsilon >0\) is, at least for now, a given small constant. In fact, the rigorous mathematical formulation of an SDE with reflecting boundaries can be quite subtle, see Andres (2009). We ignore this issue, as it does not affect our analysis.
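A simple way to sample paths of such an SDE is the Euler–Maruyama scheme; a minimal sketch that, for brevity, ignores the boundary reflection discussed above:

```python
import math, random

def euler_maruyama(v, x0, T, eps, steps=1000, rng=random):
    """Sample one endpoint of dx = v(t, x) dt + sqrt(eps) dw with the
    Euler-Maruyama scheme (boundary reflection is ignored in this sketch)."""
    h = T / steps
    x = list(x0)
    t = 0.0
    for _ in range(steps):
        drift = v(t, x)
        x = [xi + h * di + math.sqrt(eps * h) * rng.gauss(0.0, 1.0)
             for xi, di in zip(x, drift)]
        t += h
    return x

# With zero drift and small eps, the endpoint stays close to the start:
# the endpoint has standard deviation sqrt(eps * T) = 1e-3 here.
rng = random.Random(0)
end = euler_maruyama(lambda t, x: [0.0], [0.0], T=1.0, eps=1e-6, rng=rng)
```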

According to the definition of finite time coherent pairs (Froyland et al. 2010; Froyland 2013; Koltai et al. 2016), two sets \(A,B\subset X\) are coherent for times 0 and T if most mass from set A is likely to end up in set B, and most mass ending up in set B is likely to originate from set A, that is,

$$\begin{aligned} \mathbb {P}\big [\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_T\in B \mid \varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_0\in A\big ] \approx 1, \quad \text { and }\quad \mathbb {P}\big [\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_0\in A \mid \varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_T\in B\big ]\approx 1. \end{aligned}$$
(3)

Naturally, for practical purposes one would need to choose how small \(\varepsilon \) and how large these probabilities should be. As the systems we are dealing with are often deterministic by nature, and there is no “physically straightforward” choice of the diffusion strength \(\varepsilon \), our first aim is to remove some of this indeterminacy by quantifying what it means for probabilities to be close to 1 for small \(\varepsilon \), in terms of large deviations as we explain below.Footnote 2 However, it turns out that the forward and backward conditions (3) are essentially equivalent in the large-deviation regime, and even worse, the large-deviation limits of (3) hardly give any quantitative information about how coherent two sets might be, as discussed in Appendix A. To conclude, the large deviations of conditions (3) do not yield sensible conditions for coherence.
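For intuition, conditional probabilities of the kind appearing in (3) can be estimated by Monte Carlo simulation; a minimal one-dimensional sketch (the drift-free velocity field and the interval sets are illustrative assumptions, and boundary reflection is again ignored):

```python
import math, random

def coherence_probability(v, A, B, T, eps, n=1000, steps=100, seed=1):
    """Monte Carlo estimate of P[x_T in B | x_0 in A] for a 1-d SDE,
    with x_0 sampled uniformly from the interval A."""
    rng = random.Random(seed)
    h = T / steps
    hits = 0
    for _ in range(n):
        x = rng.uniform(*A)
        t = 0.0
        for _ in range(steps):
            x += h * v(t, x) + math.sqrt(eps * h) * rng.gauss(0.0, 1.0)
            t += h
        if B[0] <= x <= B[1]:
            hits += 1
    return hits / n

# No drift, tiny noise: mass started in [0, 1] mostly stays in [0, 1],
# so the pair (A, B) = ([0,1], [0,1]) behaves coherently.
p = coherence_probability(lambda t, x: 0.0, (0.0, 1.0), (0.0, 1.0),
                          T=1.0, eps=1e-4)
```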

Large-Deviation-Based Semidistances

In the current paper, we take a different approach. We study semidistances that quantify how unlikely it is for mass to flow from one point to another. These are semidistances in the sense that they satisfy all properties of a metric except for the triangle inequality. In the first part of this paper, in Sects. 2 and 3, we show how such semidistances can arise naturally from probabilistic arguments via large-deviation principles, as we explain below. In the second part, in Sects. 4 and 5, we discuss how (general) semidistances can be used to analyze coherent sets, and we apply the concepts of this paper to a number of examples.

In Sect. 2, we derive two different semidistances from the large deviations of two probabilities. The first one is related to the probability that the endpoint \(\varvec{x}_T^{\scriptscriptstyle {(\varepsilon )}}\) of the random path is \(\phi _{0,T}[y]\), given that it starts in \(\varvec{x}_0^{\scriptscriptstyle {(\varepsilon )}}=x\), for any two initial positions \(x,y\in X\). As \(\varepsilon \rightarrow 0\), the process can no longer deviate from the deterministic flow of (1), and hence this probability converges to 0 whenever \(x\ne y\). In fact, it converges exponentially fast (Freidlin and Wentzell 1998), i.e.,

$$\begin{aligned} \mathbb {P}\big [ \varvec{x}_T^{\scriptscriptstyle {(\varepsilon )}}\asymp \phi _{0,T}[y]\mid \varvec{x}_0^{\scriptscriptstyle {(\varepsilon )}}=x\big ] \sim \mathrm {e}^{-\frac{1}{\varepsilon }\mu _T(x{\scriptstyle \rightarrow }y)} \end{aligned}$$
(4)

for some function \(\mu _T(x{\scriptstyle \rightarrow }y)\ge 0\), where this statement and notation are made precise in Sect. 2.1. Such exponential convergence results are called large-deviation principles, and \(\mu _T(x\rightarrow y)\) is the large-deviation rate. The less probable it is to reach one point from another, the larger the rate between them. The first semidistance is then obtained via symmetrization:

$$\begin{aligned} \mu _T^\mathrm {cross}(x,y):=\mu _T(x{\scriptstyle \rightarrow }y) + \mu _T(y{\scriptstyle \rightarrow }x). \end{aligned}$$
(5)

We call this the cross semidistance, since it arises from mass flowing from x to \(\phi _{0,T}[y]\) and mass flowing from y to \(\phi _{0,T}[x]\) simultaneously and independently, see Fig. 1.

The second semidistance arises as the large deviations of the probability for two independent random trajectories \(\varvec{x}^{\scriptscriptstyle {(\varepsilon )}},\varvec{y}^{\scriptscriptstyle {(\varepsilon )}}\), starting at x and y, respectively, to meet at or before time T (Fig. 2):

$$\begin{aligned} \mathbb {P}\big [ \varvec{x}_T^{\scriptscriptstyle {(\varepsilon )}}\asymp \varvec{y}_T^{\scriptscriptstyle {(\varepsilon )}} \mid \varvec{x}_0^{\scriptscriptstyle {(\varepsilon )}}=x, \varvec{y}_0^{\scriptscriptstyle {(\varepsilon )}}=y\big ] \sim \mathrm {e}^{-\frac{1}{\varepsilon }\mu _T^\mathrm {meet}(x,y)}, \end{aligned}$$
(6)

where the meeting semidistance is given by

$$\begin{aligned} \mu _T^\mathrm {meet}(x,y):=\inf _{z\in X} \mu _T(x{\scriptstyle \rightarrow }z) + \mu _T(y{\scriptstyle \rightarrow }z). \end{aligned}$$
Fig. 1: \(\mu _T^\mathrm {cross}(x,y)\) is the cost to move from x to \(\phi _{0,T}[y]\) and from y to \(\phi _{0,T}[x]\)

Fig. 2: \(\mu _T^\mathrm {meet}(x,y)\) is the cost for two trajectories to meet

By this procedure, we find two semidistances \(\mu _T^\mathrm {cross}\) and \(\mu _T^\mathrm {meet}\) that can be used as measures of “farness” of points \(x,y\): they are low for points in the same coherent set, and high otherwise. Since both arise from large-deviation principles, they have a nice additional interpretation as a probabilistic cost or free energy that needs to be paid in order to deviate from the expected flows; such an interpretation is common in statistical physics, see for example Onsager and Machlup (1953).

Nevertheless, we will see that in order to calculate these costs explicitly, the velocity field v needs to be known. As discussed above, this is in practice seldom the case; mostly one can only assume to have discrete-time snapshots of the positions of a limited number of floaters. With this in mind, we derive cost functions similar to those above that are based only on such a finite data set. This will be the content of Sect. 3. First, the dynamics is discretized in time and space by conditioning a usual time-stepping method for the SDE (2) on the event that the random continuous trajectories are to be found in the set of known floater positions at the \(K\in \mathbb {N}\) given time instances. As above, we then derive two large-deviation semidistances \(\nu _K^\mathrm {cross}(x,y)\) and \(\nu _K^\mathrm {meet}(x,y)\) that have a clear probabilistic interpretation, that can be used to characterize coherent sets, and that are based on the finite data set rather than on the explicit velocity field. In fact, we will show that these discrete-space–time semidistances are really specific discretizations of the continuous-space–time semidistances \(\mu _T^\mathrm {cross}\) and \(\mu _T^\mathrm {meet}\). As shown in Sect. 3.4, they can be computed as shortest path lengths in a time-dependent weighted graph. We give an algorithm to compute these shortest paths in Appendix B.
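To illustrate the shortest-path idea only (Appendix B gives the actual algorithm), one can run Dijkstra's algorithm on a time-expanded graph whose nodes are (time step, trajectory label) pairs; the edge weights below are placeholders, not the rates derived later:

```python
import heapq

def shortest_path_cost(weights, K, I, start, goal):
    """Dijkstra on a time-expanded graph with nodes (k, i), where
    weights[k][i][j] is the cost of hopping from trajectory i to
    trajectory j during time step k -> k+1."""
    dist = {(0, start): 0.0}
    heap = [(0.0, 0, start)]
    while heap:
        d, k, i = heapq.heappop(heap)
        if d > dist.get((k, i), float("inf")) or k == K:
            continue  # stale entry, or final time layer reached
        for j in range(I):
            nd = d + weights[k][i][j]
            if nd < dist.get((k + 1, j), float("inf")):
                dist[(k + 1, j)] = nd
                heapq.heappush(heap, (nd, k + 1, j))
    return dist.get((K, goal), float("inf"))

# Two trajectories over K = 2 steps: staying is free, hopping costs 1.
weights = [[[0.0, 1.0], [1.0, 0.0]], [[0.0, 1.0], [1.0, 0.0]]]
cost = shortest_path_cost(weights, K=2, I=2, start=0, goal=1)  # 1.0
```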

Let us stress that these semidistances are defined for deterministic dynamical systems. The random perturbation that is factored out by the large-deviation principle is merely acting as a catalyst to help quantify how strongly distinct trajectories mix—or, we should rather say how poorly, as the transport from one trajectory to another is inversely proportional to their semidistance.

Coherence Analysis with Semidistances

In Sect. 4, we describe how, in general, a semidistance on finite Lagrangian data can be used to analyze coherence. Key to our method is the notion of a cornerstone: a point that is furthest away from all other points. Cornerstones are thought of as “endpoints” of a spanning structure, and ideally each cornerstone is in some sense the center of a coherent set. As a next step, trajectories can be clustered around cornerstones to yield coherent sets. Of course, this approach is very close to the k-means and fuzzy c-means clustering of trajectories with respect to dynamical distances in Hadjighasem et al. (2016); Froyland and Padberg-Gehle (2015), with the important difference that the centers are not chosen by the heuristics of these clustering approaches, but with regard to the properties of coherent sets in the light of transport and mixing.

To exemplify the usefulness of the theory put forth in this paper, in Sects. 4 and 5 we test our approach on a number of standard test cases. Finally, Sect. 6 discusses possible combinations of this work with other concepts.

2 Large-Deviation Semidistances in Continuous Time and Space

In this section, we study large deviations of the forms (4) and (6). In large-deviation theory, it is often easier to first study large deviations in a larger space. In our setting, we first study the large deviations of paths in Sect. 2.1 before transforming to the large deviations of the endpoints in Sect. 2.2. We end with a discussion of the resulting semidistances \(\mu _T^\mathrm {cross},\mu _T^\mathrm {meet}\) in Sect. 2.3.

2.1 Large Deviations of Paths

We denote paths by \(w_{\scriptscriptstyle {(\cdot )}}\) to distinguish them from points w. Let \(\mathbb {P}\) be the Wiener measure, i.e., the probability that a Brownian path lies in a set \(U\subset C(0,T;\mathbb {R}^d)\) is \(\mathbb {P}[\varvec{w}_{(\cdot )}\in U]\). Recall that there does not exist a canonical probability measure on the space of paths, and so the Wiener measure cannot be identified with a meaningful density. This means that one always needs to consider sets rather than particular realizations of the Brownian path. Nevertheless, large-deviation rates are always local, in the sense that they depend on one realization only (the most likely one in the set U under consideration). This motivates writing \(\varvec{w}_{\scriptscriptstyle {(\cdot )}}\asymp f_{\scriptscriptstyle {(\cdot )}}\) if \(\varvec{w}_{\scriptscriptstyle {(\cdot )}}\) lies in an infinitesimal neighborhood U of the path \(f_{\scriptscriptstyle {(\cdot )}}\). We will make this more precise below.

The large deviations for the SDE (2) are a standard result by Freidlin and Wentzell (1998). This result can be derived via a combination of Schilder’s theorem and a contraction principle as we now explain.

We first consider the noise part \(\sqrt{\varepsilon }\varvec{w}_t\), which clearly converges (almost surely uniformly) to the constant path 0 as \(\varepsilon \rightarrow 0\). The corresponding large-deviation principle is given by Schilder’s theorem (Dembo and Zeitouni 1987, Th. 5.2.3):

$$\begin{aligned} -\varepsilon \log \mathbb {P}\left[ \sqrt{\varepsilon }\varvec{w}_{(\cdot )} \asymp w_{(\cdot )}\right] \xrightarrow [\varepsilon \rightarrow 0]{} \frac{1}{2}\int _0^T\!|\dot{w}_t|^2 \,\mathrm{d}t, \end{aligned}$$
(7)

for differentiable paths \(w_{\scriptscriptstyle {(\cdot )}}\) starting from \(w_0=0\) (otherwise the limit will be \(\infty \)).Footnote 3

Let us assume that the velocity field \(v(t,\cdot )\) is Lipschitz, so that to each realization of the Brownian path \(\varvec{w}_{\scriptscriptstyle {(\cdot )}}=w_{\scriptscriptstyle {(\cdot )}}\) there corresponds a unique solution \(\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_{\scriptscriptstyle {(\cdot )}}\) of the SDE, starting from some given \(\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_0=x\), see (Øksendal 2003, Th. 5.2.1). The contraction principle (Dembo and Zeitouni 1987, Th. 4.2.1) then states that the large-deviation rate of a path \(x_{\scriptscriptstyle {(\cdot )}}\) is given by the minimum of (7) over all realizations of the noise that give rise to that path, i.e.,

$$\begin{aligned} -\varepsilon \log \mathbb {P}\left[ \varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_{(\cdot )} \asymp x_{(\cdot )}\right] \xrightarrow [\varepsilon \rightarrow 0]{}&\inf _{w_{\scriptscriptstyle {(\cdot )}} \,:\, \dot{x}_t=v(t,x_t) + \dot{w}_t} \, \frac{1}{2}\int _0^T\!|\dot{w}_t|^2\,\mathrm{d}t \nonumber \\&= \frac{1}{2}\int _0^T\!|\dot{x}_t-v(t,x_t)|^2\,\mathrm{d}t, \end{aligned}$$
(8)

for differentiable paths \(x_{\scriptscriptstyle {(\cdot )}}\) starting from \(x_0=x\).
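For intuition, the rate functional on the right-hand side of (8) can be evaluated for a time-discretized path by a simple Riemann sum; a one-dimensional sketch (the zero velocity field and straight path are illustrative assumptions):

```python
def action(path, v, T):
    """Approximate (1/2) * int_0^T |x'(t) - v(t, x(t))|^2 dt for a path
    given as a list of scalar positions at equally spaced times, using
    forward differences and a left-endpoint Riemann sum."""
    n = len(path) - 1
    h = T / n
    total = 0.0
    for k in range(n):
        t = k * h
        xdot = (path[k + 1] - path[k]) / h
        total += 0.5 * (xdot - v(t, path[k])) ** 2 * h
    return total

# With v = 0, the straight path from 0 to L has action L^2 / (2T);
# here L = 2, T = 1, so the value is 2.0.
L, T, n = 2.0, 1.0, 100
straight = [L * k / n for k in range(n + 1)]
a = action(straight, lambda t, x: 0.0, T)
```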

2.2 Large Deviations of Endpoints

We now derive the large-deviation principle of the type (4) as discussed in the Introduction. In a sense, the pathwise large deviations (8) encode more information than is needed if we are only interested in the endpoint \(\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_T\asymp \phi _{0,T}[y]\) of the random path. Another application of the contraction principle then states that the large-deviation rate for the endpoint is the minimum of (8) over all paths starting from x and ending in that given endpoint \(\phi _{0,T}[y]\), that is:

$$\begin{aligned}&-\varepsilon \log \mathbb {P}\big [ \varvec{x}_T^{\scriptscriptstyle {(\varepsilon )}}\asymp \phi _{0,T}[y]\mid \varvec{x}_0^{\scriptscriptstyle {(\varepsilon )}}=x\big ] \nonumber \\&\quad \xrightarrow [\varepsilon \rightarrow 0]{} \inf _{x_{\scriptscriptstyle {(\cdot )}}\,:\, x_0=x,\,x_T=\phi _{0,T}[y]}\, \frac{1}{2}\int _0^T\!|\dot{x}_t-v(t,x_t)|^2\,\mathrm{d}t =: \mu _T(x{\scriptstyle \rightarrow }y). \end{aligned}$$
(9)

This defines the “one-way” rate that we are after.

The sum (5) then defines the cross-semidistance \(\mu _T^\mathrm {cross}(x,y)\) and has a natural interpretation in terms of large deviations: As mentioned in the Introduction, it arises from two independent and simultaneous copies \(\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_t,\varvec{y}^{\scriptscriptstyle {(\varepsilon )}}_t\). By independence, the probability that \((\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_T,\varvec{y}^{\scriptscriptstyle {(\varepsilon )}}_T)\asymp (\phi _{0,T}[y],\phi _{0,T}[x])\) given \((\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_0,\varvec{y}^{\scriptscriptstyle {(\varepsilon )}}_0)=(x,y)\) is a product of one-way probabilities, yielding the sum of two one-way rates in the large deviations, see Fig. 1.

A similar argument can be used to derive the meeting large deviations (6). Let \(\varvec{x}^{\scriptscriptstyle {(\varepsilon )}}_t\) and \(\varvec{y}^{\scriptscriptstyle {(\varepsilon )}}_t\) be two independent solutions of the SDE (2), starting from given x and y, respectively. We consider the probability that both trajectories end in a given point, say \(\phi _{0,T}[z]\) for some \(z\in \mathbb {R}^d\), see Fig. 2. By the independence of the two trajectories, we immediately get

$$\begin{aligned}&-\varepsilon \log \mathbb {P}\big [ \varvec{x}_T^{\scriptscriptstyle {(\varepsilon )}}\asymp \phi _{0,T}[z], \varvec{y}_T^{\scriptscriptstyle {(\varepsilon )}} \asymp \phi _{0,T}[z]\mid \varvec{x}_0^{\scriptscriptstyle {(\varepsilon )}}=x, \varvec{y}_0^{\scriptscriptstyle {(\varepsilon )}}=y\big ] \\&\qquad = -\varepsilon \log \mathbb {P}\big [ \varvec{x}_T^{\scriptscriptstyle {(\varepsilon )}}\asymp \phi _{0,T}[z]\mid \varvec{x}_0^{\scriptscriptstyle {(\varepsilon )}}=x \big ] -\varepsilon \log \mathbb {P}\big [ \varvec{y}_T^{\scriptscriptstyle {(\varepsilon )}}\asymp \phi _{0,T}[z]\mid \varvec{y}_0^{\scriptscriptstyle {(\varepsilon )}}=y \big ]\\&\qquad \!\xrightarrow [\varepsilon \rightarrow 0]{(9)} \mu _T(x{\scriptstyle \rightarrow }z) + \mu _T(y{\scriptstyle \rightarrow }z). \end{aligned}$$

However, we are only interested in the probability that the two trajectories meet, and not in the point where they meet. A final contraction principle thus yields:

$$\begin{aligned}&-\varepsilon \log \mathbb {P}\big [ \varvec{x}_T^{\scriptscriptstyle {(\varepsilon )}}\asymp \varvec{y}_T^{\scriptscriptstyle {(\varepsilon )}} \mid \varvec{x}_0^{\scriptscriptstyle {(\varepsilon )}}=x, \varvec{y}_0^{\scriptscriptstyle {(\varepsilon )}}=y\big ]\nonumber \\&\quad \xrightarrow [\varepsilon \rightarrow 0]{} \inf _{z\in X} \mu _T\big (x{\scriptstyle \rightarrow }z \big ) +\mu _T\big (y{\scriptstyle \rightarrow }z\big ) =: \mu _T^\mathrm {meet}(x,y). \end{aligned}$$
(10)

Observe that the two paths could also meet earlier and subsequently follow the same trajectory up until time T with zero cost; the time T thus acts as a maximum time at which the paths should meet.

2.3 The Semidistances

We now discuss some metric properties of the rate functionals. Recall from the Introduction that we assumed the flow to be a diffeomorphism. Therefore \(\mu _T(x{\scriptstyle \rightarrow }y)=0\) if and only if \(x=y\). It is then easy to see that, for any \(x,y\),

(i) \(\mu _T^{\mathrm {cross}}(x,y)\ge 0\) and \(\mu _T^{\mathrm {meet}}(x,y)\ge 0\),

(ii) \(\mu _T^{\mathrm {cross}}(x,y)=0 \iff x=y\) and \(\mu _T^{\mathrm {meet}}(x,y)=0 \iff x=y\),

(iii) \(\mu _T^{\mathrm {cross}}(x,y)=\mu _T^{\mathrm {cross}}(y,x)\) and \(\mu _T^{\mathrm {meet}}(x,y)=\mu _T^{\mathrm {meet}}(y,x)\).

However, the triangle inequality can fail, and so \(\mu _T^{\mathrm {cross}}\) and \(\mu _T^{\mathrm {meet}}\) are semidistances only.

We point out the following useful relation between the two. Observe that, by definition, \(\mu _T^{\mathrm {meet}}(x,y)\le \mu _T(x{\scriptstyle \rightarrow }y) + \mu _T(y{\scriptstyle \rightarrow }y)=\mu _T(x{\scriptstyle \rightarrow }y)\), and similarly \(\mu _T^{\mathrm {meet}}(x,y)\le \mu _T(y{\scriptstyle \rightarrow }x)\). Therefore,

$$\begin{aligned} \mu _T^{\mathrm {meet}}(x,y)\le & {} \min \big \{ \mu _T(x{\scriptstyle \rightarrow }y) , \mu _T(y{\scriptstyle \rightarrow }x)\big \} \le \max \big \{ \mu _T(x{\scriptstyle \rightarrow }y) , \mu _T(y{\scriptstyle \rightarrow }x)\big \}\\\le & {} \mu _T^{\mathrm {cross}}(x,y). \end{aligned}$$

In order to investigate which semidistance is more suitable to study coherence, one would need to study in which setting the gap \(\mu _T^{\mathrm {cross}}(x,y)-\mu _T^{\mathrm {meet}}(x,y)\) becomes large. This is beyond the scope of this paper, but we will show for several examples that both work as they should.
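This inequality chain is easy to check numerically once a one-way rate is available; a sketch assuming the drift-free one-way rate \(\mu _T(x{\scriptstyle \rightarrow }y)=(x-y)^2/(2T)\), with the infimum in the meeting semidistance taken over a finite grid:

```python
def cross_and_meet(mu, points):
    """Build the cross (5) and meeting (10) semidistances from a
    one-way rate mu(x, y); the infimum over z is approximated by a
    minimum over the finite grid `points`."""
    cross = lambda x, y: mu(x, y) + mu(y, x)
    meet = lambda x, y: min(mu(x, z) + mu(y, z) for z in points)
    return cross, meet

# One-way rate of a drift-free flow on the line: mu_T(x->y) = (x-y)^2/(2T).
T = 1.0
mu = lambda x, y: (x - y) ** 2 / (2 * T)
grid = [i / 100 for i in range(101)]   # candidate meeting points in [0, 1]
cross, meet = cross_and_meet(mu, grid)
x, y = 0.0, 1.0
c = cross(x, y)   # 1.0
m = meet(x, y)    # 0.25, attained at the midpoint z = 0.5
```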

Remark 2.1

(Invariance under time reversal) Note the following invariance property of \(\mu _T\) under time reversal:

$$\begin{aligned} \mu _T(x{\scriptstyle \rightarrow }y)= \inf _{ \begin{array}{c} y_{\scriptscriptstyle {(\cdot )}}\,: y_0=\phi _{0,T}[y]\\ y_T=\phi _{0,T}^{-1}[\phi _{0,T}[x]] \end{array} }\, \frac{1}{2}\int _0^T\!|\dot{y}_t+v(T-t,y_t)|^2\,\mathrm{d}t =:{\overleftarrow{\mu }}\!_T\big ( \phi _{0,T}[y]{\scriptstyle \rightarrow }\phi _{0,T}[x ]\big ), \end{aligned}$$

where \(\overleftarrow{\mu }\!_T\) is the one-way rate associated with the backward system \(\dot{y}_t = - v(T-t, y_t)\). This time-reversal property is retained for the cross-semidistance: \(\mu _T^\mathrm {cross}(x,y)=\overleftarrow{\mu }\!_T^\mathrm {cross}\big ( \phi _{0,T}[x],\phi _{0,T}[y]\big )\). However, the meeting semidistance \(\mu _T^\mathrm {meet}(x,y)=\inf _z \overleftarrow{\mu }\!_T\big (z{\scriptstyle \rightarrow }\phi _{0,T}[x]\big )+\overleftarrow{\mu }\!_T\big (z{\scriptstyle \rightarrow }\phi _{0,T}[y]\big )\) is the same as the cost for the backward trajectories to start in a joint position and end in \(\phi _{0,T}[x]\) and \(\phi _{0,T}[y]\).

2.4 A Simple Example

Let us consider a very simple example, where the domain of interest is the interval [0, L], and there is no dynamics, i.e., \(v\equiv 0\). Its primary purpose is to form our intuition and expectations about how the semidistances work in more complicated settings. In particular, we shall see how the semidistances scale in time and system size.

The system is considered on the time interval [0, T]. One can then easily see that \({\dot{x}}_t \equiv L/T\) is an optimal path in (9), thus giving

$$\begin{aligned} \mu _T(0{\scriptstyle \rightarrow }L) = \mu _T(L{\scriptstyle \rightarrow }0) = \frac{1}{2}\int _0^T\!\left( \frac{L}{T}\right) ^2\,\mathrm{d}t = \frac{L^2}{2T}\,, \end{aligned}$$

and so \(\mu ^\mathrm {cross}_T(0,L)= \frac{L^2}{T}\). Thus, also  \(\mu _T(0{\scriptstyle \rightarrow }L/2) = \mu _T(L{\scriptstyle \rightarrow }L/2) = \frac{L^2}{8T}\). In general, the one-way cost is proportional to the squared distance and inversely proportional to time. This also gives

$$\begin{aligned} \mu _T^{\mathrm {meet}}(0,L) = \frac{L^2}{4T}\,, \end{aligned}$$

so, in this symmetric situation, the meeting semidistance is half of the one-way cost and a quarter of the cross-semidistance.
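These closed-form values and scalings can be checked numerically; a minimal sketch:

```python
def one_way(L, T):
    """mu_T(0 -> L) = L^2 / (2T) for v = 0, cf. the display above."""
    return L ** 2 / (2 * T)

def meet_cost(L, T, n=1000):
    """mu_T^meet(0, L): minimize the sum of two one-way costs over a
    grid of candidate meeting points z in [0, L]."""
    return min(one_way(z, T) + one_way(L - z, T)
               for z in (L * k / n for k in range(n + 1)))

L, T = 3.0, 2.0
m = meet_cost(L, T)        # L^2/(4T) = 1.125, attained at z = L/2
c = 2 * one_way(L, T)      # cross cost L^2/T = 4.5
```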

We will revisit this example in the next section and will realize that the behavior of the discrete semidistances deviates from the one observed here for continuous space and time.

3 Large-Deviation Semidistances in Discrete Time and Space

As mentioned in the Introduction, the cost functions \(\mu _T^\mathrm {cross}\) and \(\mu _T^\mathrm {meet}\) are difficult to calculate explicitly, and impossible if the velocity or flow field is not explicitly known. In this section, we take a more practical approach. We will assume that the only information at hand is the positions of a finite number I of floaters at finitely many times.

To be more specific, let \(\{x_k^{\scriptscriptstyle {(i)}}\}_{k=0,\ldots ,K;\, i=1,\ldots ,I} \subset \mathbb {R}^d\) be the given positions of floaters \(i=1,\ldots ,I\) at times \(k\tau \) for \(k=0,\ldots ,K\) and some \(\tau >0\). Assuming that the floaters sample the deterministic flow field \(\phi _{s,t}\), we know that for each floater i,

$$\begin{aligned} x^{\scriptscriptstyle {(i)}}_{k+1} = \phi _{k\tau ,(k+1)\tau }[x^{\scriptscriptstyle {(i)}}_k]\,. \end{aligned}$$
(11)

If we added noise to the system, we would find random particles described by the set of SDEs

$$\begin{aligned} \mathrm{d}\varvec{x}_t^{\scriptscriptstyle {(i,\varepsilon )}} = v(t,\varvec{x}_t^{\scriptscriptstyle {(i,\varepsilon )}})\,\mathrm{d}t + \sqrt{\varepsilon }\,\mathrm{d}\varvec{w}^{\scriptscriptstyle {(i)}}_t,&\varvec{x}_0^{\scriptscriptstyle {(i,\varepsilon )}}=x_0^{\scriptscriptstyle {(i)}},&\text {for } i=1,\ldots , I, \end{aligned}$$
(12)

where the \(\varvec{w}^{\scriptscriptstyle {(i)}}\) are independent standard Brownian motions.

Our strategy is to study the probability that random particles described by the SDEs (12) deviate from the given floater trajectories (11), conditioned on the fact that all our knowledge about the otherwise unknown flow field \(\phi _{s,t}\) comes from the time- and space-discrete set of trajectory data (11). We first approximate the SDEs (12) by discrete-time, continuous-space Markov processes in Sect. 3.1, as is done in standard time-stepping methods for SDEs (Kloeden and Platen 2010). Next, in Sect. 3.2 we condition these discrete-time processes on the given floater positions. Then we calculate the large-deviation rate for trajectories in Sect. 3.3, and for endpoints in Sect. 3.4. Finally, we end the section with a discussion of the metric properties of the resulting large-deviation rates in Sect. 3.5.

3.1 Discrete-Time Approximation

We first focus our attention on one time step \(k\tau \rightarrow (k+1)\tau \) of one trajectory i, and temporarily drop the superscript for brevity. Since the noise process is a standard Brownian motion, we know its density,

$$\begin{aligned} \frac{\mathbb {P}[\sqrt{\varepsilon } \varvec{w}_{(k+1)\tau }\in \mathrm{d}y\,\big \vert \, \sqrt{\varepsilon } \varvec{w}_{k\tau }=x]}{\mathrm{d}y} = \left( 2\pi \varepsilon \tau \right) ^{-d/2} \exp \left( -\frac{|x-y|^2}{2\varepsilon \tau }\right) \,. \end{aligned}$$
(13)

Hence, we have exact information on the purely deterministic part of the SDE by (11) and on the purely noise part by (13). We combine this information by using the following time-stepping approximation for the SDE (2).

Fix an \(\alpha \in [0,1]\), and let \((\varvec{\xi }_k^+,\varvec{\xi }_k^-)_{k=0,\ldots ,K}\) be independent standard normally distributed \(\mathbb {R}^d\)-valued random variables. Given the approximated random position \(\tilde{\varvec{x}}_k\) at time \(k\tau \), we iterate

$$\begin{aligned} \begin{aligned} \tilde{\varvec{x}}^+_{k}&:= \tilde{\varvec{x}}_{k} + \sqrt{\alpha \tau \varepsilon }\, \varvec{\xi }^+_k \\ \tilde{\varvec{x}}^-_{k+1}&:= \phi _{k\tau ,(k+1)\tau }[\tilde{\varvec{x}}^+_k]\\ \tilde{\varvec{x}}_{k+1}&:= \tilde{\varvec{x}}^-_{k+1} + \sqrt{(1-\alpha )\tau \varepsilon }\, \varvec{\xi }^-_{k+1} \end{aligned} \end{aligned}$$
(14)

Here, \(\tilde{\varvec{x}}^+_k\) and \(\tilde{\varvec{x}}^-_{k+1}\) are only auxiliary (intermediate) steps. The method (14) is a special case of a splitting method, since the deterministic evolution and pure noise parts of the SDE (2) are handled separately in distinct steps; here the order is “noise–flow–noise”.

We would like to stress that our choice of discretization is made on the basis that we can use the available information on the flow given by (11). In the realm of one-step methods for SDEs, we are bound to choices of the form (14), because there is no information on the drift other than the flow generated by it on the prescribed time intervals \([k\tau , (k+1)\tau )\). Given the form of the time-stepping (14), the optimal approximation of the SDE (in the sense of highest weak consistency order, see Kloeden and Platen 2010) is obtained by choosing \(\alpha =1/2\). This is the so-called Strang splitting (Strang 1968), which has weak order two, while for \(\alpha \ne 1/2\) we only get order one.Footnote 4
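A single step of the "noise–flow–noise" scheme (14) is straightforward to implement once the flow map over one interval is available; a sketch (the uniform-translation flow map used for illustration is an assumption, not from the text):

```python
import math, random

def splitting_step(x, flow_step, tau, eps, alpha, rng):
    """One 'noise-flow-noise' step of the splitting scheme: a Gaussian
    kick of variance alpha*tau*eps per coordinate, the deterministic
    flow over one interval of length tau, then a kick of variance
    (1 - alpha)*tau*eps."""
    x_plus = [xi + math.sqrt(alpha * tau * eps) * rng.gauss(0.0, 1.0)
              for xi in x]
    x_minus = flow_step(x_plus)          # deterministic flow substep
    return [xi + math.sqrt((1 - alpha) * tau * eps) * rng.gauss(0.0, 1.0)
            for xi in x_minus]

# Illustration with a known flow map (uniform translation by 0.1):
# with tiny eps, the result stays close to the noiseless value 0.1.
rng = random.Random(0)
x = splitting_step([0.0], lambda p: [p[0] + 0.1], tau=0.1, eps=1e-8,
                   alpha=0.5, rng=rng)
```

The value alpha=0.5 corresponds to the Strang splitting discussed above.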

Having performed the discretization in time, in the next section we derive a discrete-time, discrete-space, \(\alpha \)-dependent Markov chain from which we will build the discrete semidistances.

3.2 Conditioning on Finite Data

Recall that we considered one discrete-time process \((\tilde{\varvec{x}}_k)_{k=1,\ldots ,K}\) with initial condition \(\tilde{\varvec{x}}_0=x_0^{(i)}\), and that we suppressed the dependency on i. For each \(k=0,\ldots ,K\), we introduce the set

$$\begin{aligned} \mathcal {A}_k:=\{ x^{\scriptscriptstyle {(j)}}_k\}_{j=1,\ldots ,I} \end{aligned}$$

of available points at time \(t=k\tau \). We now condition the random process on the event that, for each k, both \(\tilde{\varvec{x}}_k\in \mathcal {A}_k\) and the intermediate point \(\tilde{\varvec{x}}_k^+\in \mathcal {A}_k\). This automatically implies conditioning of the other intermediate points, \(\tilde{\varvec{x}}_{k+1}^-\in \mathcal {A}_{k+1}\), due to (11). We choose to condition on the intermediate points for practical reasons; otherwise, we would not be able to perform the second step in (14), since the discrete trajectories are our only information about the flow, cf. Remark 3.1 below.

The conditioning on the finite data set results in replacing the discrete-time, continuous-space process by a discrete-time, discrete-space Markov chain that hops between the given trajectories. Therefore, the state of the new Markov chain can be represented by the labels \(j=1,\ldots ,I\); this is particularly useful, since the deterministic flow (11) changes the positions but not the labels. Since the resulting process is still Markovian, we can fully characterize its behavior through its transition probabilities for one time step \(k\rightarrow k+1\). We now calculate these transition probabilities, dealing with each step in (14) separately. See Fig. 3 for a sketch.

For the transition from \(\tilde{\varvec{x}}_k\) to \(\tilde{\varvec{x}}_k^+\), where we know the increment distribution (13), note that we are in fact conditioning on a null set, so that the conditional probabilities are sensibly defined as limits over balls \(B_r(\cdot )\) of small radii \(r\rightarrow 0\) around these points. We thus obtain, for any \(j,\ell =1,\ldots ,I\):

$$\begin{aligned} p_{k}^+(j,\ell )&:= \mathbb {P}\left[ \tilde{\varvec{x}}^+_k = x^{\scriptscriptstyle {(\ell )}}_k\,\big \vert \, \tilde{\varvec{x}}_k = x^{\scriptscriptstyle {(j)}}_k\ \text {and}\ \tilde{\varvec{x}}^+_k\in \mathcal {A}_k \right] \nonumber \\&= \lim _{r\rightarrow 0} \frac{\mathbb {P}\left[ \tilde{\varvec{x}}^+_k\in B_r(x^{\scriptscriptstyle {(\ell )}}_k)\, \big \vert \, \tilde{\varvec{x}}_k = x^{\scriptscriptstyle {(j)}}_k\right] }{\mathbb {P}\left[ \tilde{\varvec{x}}^+_k\in B_r(\mathcal {A}_k) \big \vert \, \tilde{\varvec{x}}_k = x^{\scriptscriptstyle {(j)}}_k\right] } \nonumber \\&= \frac{\exp \left( -\left| x^{\scriptscriptstyle {(\ell )}}_k-x^{\scriptscriptstyle {(j)}}_k\right| ^2/(2\alpha \varepsilon \tau )\right) }{\sum _{{\hat{\ell }}=1}^I \exp \left( -\left| x^{\scriptscriptstyle {({\hat{\ell }})}}_k-x^{\scriptscriptstyle {(j)}}_k\right| ^2/(2\alpha \varepsilon \tau )\right) }\,, \end{aligned}$$
(15)

and similarly for the transition from \(\tilde{\varvec{x}}_{k+1}^-\) to \(\tilde{\varvec{x}}_{k+1}\):

$$\begin{aligned} p_{k+1}^-(\ell ,m)&:= \mathbb {P}\left[ \tilde{\varvec{x}}_{k+1} = x^{\scriptscriptstyle {(m)}}_{k+1}\,\big \vert \, \tilde{\varvec{x}}^-_{k+1} = x^{\scriptscriptstyle {(\ell )}}_{k+1}\ \text {and}\ \tilde{\varvec{x}}_{k+1}\in \mathcal {A}_{k+1} \right] \,\nonumber \\&= \frac{\exp \left( -\left| x^{\scriptscriptstyle {(m)}}_{k+1}-x^{\scriptscriptstyle {(\ell )}}_{k+1}\right| ^2/\left( 2(1-\alpha )\varepsilon \tau \right) \right) }{\sum _{\hat{m}=1}^I \exp \left( -\left| x^{\scriptscriptstyle {(\hat{m})}}_{k+1}-x^{\scriptscriptstyle {(\ell )}}_{k+1}\right| ^2/\left( 2(1-\alpha )\varepsilon \tau \right) \right) }\,. \end{aligned}$$
(16)

Since the transition from \(\tilde{\varvec{x}}^+_k\) to \(\tilde{\varvec{x}}^-_{k+1}\) is deterministic (middle equation in (14)), we have

$$\begin{aligned} P_k(j,m)&:= \mathbb {P}\left[ \tilde{\varvec{x}}_{k+1} = x^{\scriptscriptstyle {(m)}}_{k+1}\,\big \vert \, \tilde{\varvec{x}}_k=x^{\scriptscriptstyle {(j)}}_k\text { and } \tilde{\varvec{x}}^+_{k}\in \mathcal {A}_k, \tilde{\varvec{x}}^-_{k+1} \in \mathcal {A}_{k+1} \right] \nonumber \\&= \sum _{\ell =1}^I p_k^+(j,\ell )p_{k+1}^-(\ell ,m)\,. \end{aligned}$$
(17)

In words, the process performs the following three subsequent steps for one time step (see Fig. 3):

  1. Start in \(x^{\scriptscriptstyle {(j)}}_k\), and perform a jump to some \(x^{\scriptscriptstyle {(\ell )}}_k\) with probability \(p_k^+(j,\ell )\),

  2. Perform a deterministic jump from \(x^{\scriptscriptstyle {(\ell )}}_k\) to \(x^{\scriptscriptstyle {(\ell )}}_{k+1}\),

  3. Perform a jump from \(x^{\scriptscriptstyle {(\ell )}}_{k+1}\) to \(x^{\scriptscriptstyle {(m)}}_{k+1}\) with probability \(p_{k+1}^-(\ell ,m)\).
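The three steps above can be sketched as a matrix computation: the two Gaussian jump kernels (15) and (16) are row-normalized Gram matrices, the deterministic middle step is the identity on the labels, and (17) becomes a matrix product. The following is a minimal sketch on synthetic data; the array and function names are our own.

```python
import numpy as np

def transition_matrix(X0, X1, tau, eps, alpha=0.5):
    """P_k(j, m) from snapshots X0 (positions x_k^{(j)}, shape (I, d)) and
    X1 (positions x_{k+1}^{(j)} of the same trajectories), following (15)-(17)."""
    def gauss_rows(X, var):
        D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
        W = np.exp(-D2 / (2.0 * var))
        return W / W.sum(axis=1, keepdims=True)                     # row-normalization as in (15)/(16)
    P_plus = gauss_rows(X0, alpha * eps * tau)          # p_k^+(j, l): jump within time slice k
    P_minus = gauss_rows(X1, (1 - alpha) * eps * tau)   # p_{k+1}^-(l, m): jump within slice k+1
    # The deterministic flow preserves the label l, so (17) is a matrix product.
    return P_plus @ P_minus

rng = np.random.default_rng(1)
X0 = rng.random((4, 2))             # 4 synthetic floaters in 2D at time k*tau
X1 = X0 + 0.1 * rng.random((4, 2))  # their (hypothetical) flow images at (k+1)*tau
P = transition_matrix(X0, X1, tau=0.2, eps=0.05)
assert np.allclose(P.sum(axis=1), 1.0) and np.all(P >= 0)  # P_k is a stochastic matrix
```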

The transition probabilities \(P_k(j,m)\) define our new, discrete-time Markov chain \((\varvec{i}_k)_{k=0,\ldots ,K}\) on the discrete space \(\{1,\ldots ,I\}\). To shorten notation, we will write \(i_{\scriptscriptstyle {(\cdot )}}:=(i_k)_{k=0,\ldots ,K}\) for a discrete path, analogous to the continuous-time setting. By the Markov property, the probability that the Markov chain realizes such a path is simply

$$\begin{aligned} \mathbb {P}\big [\varvec{i}_{\scriptscriptstyle {(\cdot )}} = i_{\scriptscriptstyle {(\cdot )}} \big ]= \prod _{k=0}^{K-1} P_k(i_k,i_{k+1}), \end{aligned}$$
(18)

where we assumed that the chain starts (deterministically) from \(i_0\).

Fig. 3

One time step of the discrete-time discrete-space Markov chain \(\varvec{i}_k\)

3.3 Large Deviations of Discrete Trajectories

We now study the large deviations of the discrete Markov chain \(\varvec{i}_k\). Similarly to the continuous setting from Sect. 2, we start from the large deviations of paths. First we calculate the large deviations for \(p_k^+(j,\ell )\) and \(p_{k+1}^-(\ell ,m)\). By the Laplace principle (27),

$$\begin{aligned} -\varepsilon \log p_k^+(j,\ell )&{\mathop {=}\limits ^{(15)}}\varepsilon \log \sum _{{\hat{\ell }}=1}^I \exp \left( \frac{\left|x^{\scriptscriptstyle {(\ell )}}_k-x^{\scriptscriptstyle {(j)}}_k\right|^2 - \left|x^{\scriptscriptstyle {({\hat{\ell }})}}_k-x^{\scriptscriptstyle {(j)}}_k\right|^2}{2\alpha \varepsilon \tau }\right) \\&\xrightarrow [\varepsilon \rightarrow 0]{} \max _{{\hat{\ell }}=1,\ldots ,I} \frac{\left|x^{\scriptscriptstyle {(\ell )}}_k-x^{\scriptscriptstyle {(j)}}_k\right|^2 - \left|x^{\scriptscriptstyle {({\hat{\ell }})}}_k-x^{\scriptscriptstyle {(j)}}_k\right|^2}{2\alpha \tau } =\frac{|x^{\scriptscriptstyle {(\ell )}}_k-x^{\scriptscriptstyle {(j)}}_k|^2}{2\alpha \tau }. \end{aligned}$$

We will make this simplification again below. Similarly, we obtain

$$\begin{aligned} -\varepsilon \log p_{k+1}^-(\ell ,m) \xrightarrow [\varepsilon \rightarrow 0]{(16)}\frac{|x^{\scriptscriptstyle {(m)}}_{k+1}-x^{\scriptscriptstyle {(\ell )}}_{k+1}|^2}{2(1-\alpha )\tau }. \end{aligned}$$

Using these two exponential approximations, we can again use the Laplace principle (27) to find for the jump probability of one time step:

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}-\varepsilon \log P_k(j,m)&{\mathop {=}\limits ^{(17)}} \lim _{\varepsilon \rightarrow 0}-\varepsilon \log \sum _{\ell =1}^I p_k^+(j,\ell )p_{k+1}^-(\ell ,m) \\&\,\,= \min _{\ell =1,\ldots , I} \lim _{\varepsilon \rightarrow 0} \left( -\varepsilon \log p_k^+(j,\ell ) -\varepsilon \log p_{k+1}^-(\ell ,m) \right) \\&\,\,= \min _{\ell =1,\ldots ,I} \frac{|x^{\scriptscriptstyle {(\ell )}}_k-x^{\scriptscriptstyle {(j)}}_k|^2}{2\alpha \tau } + \frac{|x^{\scriptscriptstyle {(m)}}_{k+1}-x^{\scriptscriptstyle {(\ell )}}_{k+1}|^2}{2(1-\alpha )\tau }. \end{aligned}$$

Finally, the large-deviation rate of a discrete path is

$$\begin{aligned} -\varepsilon \log \mathbb {P}\left[\varvec{i}_{\scriptscriptstyle {(\cdot )}} = i_{\scriptscriptstyle {(\cdot )}} \right]&{\mathop {=}\limits ^{(18)}} -\varepsilon \log \prod _{k=0}^{K-1} P_k(i_k,i_{k+1}) \nonumber \\&\xrightarrow [\varepsilon \rightarrow 0]{} \sum _{k=0}^{K-1} \min _{\ell =1,\ldots ,I} \frac{|x^{\scriptscriptstyle {(\ell )}}_k-x^{\scriptscriptstyle {(i_k)}}_k|^2}{2\alpha \tau } + \frac{|x^{\scriptscriptstyle {(i_{k+1})}}_{k+1}-x^{\scriptscriptstyle {(\ell )}}_{k+1}|^2}{2(1-\alpha )\tau } =: \mathcal {J}( i_{\scriptscriptstyle {(\cdot )}}). \end{aligned}$$
(19)

Remark 3.1

Recall that we conditioned on the event that all \(\tilde{\varvec{x}}_k\) as well as the intermediate points \(\tilde{\varvec{x}}^+_{k}\) lie in the set \(\mathcal {A}_k\) of available points. One might argue that in practice only the points \(\tilde{\varvec{x}}_{k}\) are measured to lie in \(\mathcal {A}_k\), while the other two are mathematical constructs that may lie anywhere. However, if we relaxed this conditioning and followed the calculations above, we would find:

$$\begin{aligned} -\varepsilon \log P_k(j,m)&\xrightarrow [\varepsilon \rightarrow 0]{} \min _{x\in \mathbb {R}^d}\Big \{ \frac{|x-x^{\scriptscriptstyle {(j)}}_k|^2}{2\alpha \tau } + \frac{\left| x^{\scriptscriptstyle {(m)}}_{k+1}-\phi _{t_k,t_{k+1}}[x]\right| ^2}{2(1-\alpha )\tau } \\&- \min _{{\hat{m}}=1,\ldots ,I} \frac{\left| x^{\scriptscriptstyle {({\hat{m}})}}_{k+1}-\phi _{t_k,t_{k+1}}[x]\right| ^2}{2(1-\alpha )\tau } \Big \}. \end{aligned}$$

Since this large-deviation rate still depends on the unknown flow field \(\phi \), it cannot be used if only the data of a finite number of floaters is available.

Remark 3.2

(Missing data and non-uniform time-sampling)

Note that the construction works exactly as described above even if information about trajectories is partially missing. The conditioning on the sets \(\mathcal {A}_k\) works identically, but these sets may now have different cardinalities, smaller than or equal to I. Observe that our only information about the deterministic flow for times in \([k\tau ,(k+1)\tau )\) comes from those trajectories that are available in both \(\mathcal {A}_k\) and \(\mathcal {A}_{k+1}\). If no trajectory is available in both, we have to skip that time slice completely. This is not a problem, since our choice of uniform sampling times with step size \(\tau \) was made solely to ease presentation. As the reader has probably observed, the extension to varying time steps \(\tau _k\) is straightforward.

3.4 Large Deviations of Endpoints

Analogously to the continuous setting, we study the large deviations of the one-way probability to hop from i to j in discrete time K, and the meeting probability that two independent chains, starting from i and j, respectively, meet by discrete time K or earlier. Since the paths (19) encode more information than the endpoints, we can now easily derive the large deviations of the one-way probability by a contraction principle. Indeed, for any two indices \(i,j=1,\ldots ,I\),

$$\begin{aligned} -\varepsilon \log \mathbb {P}[\varvec{i}_K=j \mid \varvec{i}_0=i]\xrightarrow [\varepsilon \rightarrow 0]{} \min _{i_{\scriptscriptstyle {(\cdot )}}:\,i_0=i,i_K=j} \mathcal {J}(i_{\scriptscriptstyle {(\cdot )}}) =: \nu _K(i{\scriptstyle \rightarrow }j), \end{aligned}$$
(20)

where \(\mathcal {J}\) is the discrete-path large-deviation rate (19). Note that \(\nu _K(i{\scriptstyle \rightarrow }j)\) is thus the length of a shortest path in a graph with time-dependent edge weights

$$\begin{aligned} w_k(i,j) = \min _{\ell =1,\ldots ,I} \frac{|x^{\scriptscriptstyle {(\ell )}}_k-x^{\scriptscriptstyle {(i)}}_k|^2}{2\alpha \tau } + \frac{|x^{\scriptscriptstyle {(j)}}_{k+1}-x^{\scriptscriptstyle {(\ell )}}_{k+1}|^2}{2(1-\alpha )\tau }\,. \end{aligned}$$
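Algorithm 1 (Appendix B) computes these shortest paths; purely as an illustration, a minimal dynamic-programming sketch over the time-layered graph might look as follows, assuming uniform time steps \(\tau \) and \(\alpha \in (0,1)\) (the function and array names are ours).

```python
import numpy as np

def one_way_costs(X, tau, alpha=0.5):
    """nu_K(i -> j) for all pairs i, j, where X has shape (K+1, I, d)
    and X[k, i] holds the position x_k^{(i)}."""
    K1, I, _ = X.shape
    V = np.full((I, I), np.inf)
    np.fill_diagonal(V, 0.0)  # zero-step cost: nu_0(i -> j) = 0 iff i == j
    for k in range(K1 - 1):
        A = np.sum((X[k][:, None] - X[k][None, :]) ** 2, -1) / (2 * alpha * tau)            # A[j, l]
        B = np.sum((X[k+1][:, None] - X[k+1][None, :]) ** 2, -1) / (2 * (1 - alpha) * tau)  # B[l, m]
        w = np.min(A[:, :, None] + B[None, :, :], axis=1)  # w_k(j, m) = min_l A[j, l] + B[l, m]
        V = np.min(V[:, :, None] + w[None, :, :], axis=1)  # Bellman update over layer k
    return V

rng = np.random.default_rng(2)
X = np.cumsum(0.1 * rng.standard_normal((6, 3, 2)), axis=0)  # 3 synthetic trajectories, K = 5
nu = one_way_costs(X, tau=0.2)
# Staying on one's own trajectory is free, so the diagonal vanishes.
assert np.allclose(np.diag(nu), 0.0) and np.all(nu >= 0)
```

This naive sketch costs \(\mathcal {O}(KI^3)\) operations; Algorithm 1 may be organized differently.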

Again, the sum \(\nu _K^\mathrm {cross}(i,j):=\nu _K(i{\scriptstyle \rightarrow }j)+\nu _K(j{\scriptstyle \rightarrow }i)\) can be given an interpretation in terms of large deviations as in Sect. 2.2. Moreover, following the same argument as in (10), if we take two independent trajectories \(\varvec{i}_{\scriptscriptstyle {(\cdot )}}\) and \(\varvec{j}_{\scriptscriptstyle {(\cdot )}}\), then

$$\begin{aligned} -\varepsilon \log \mathbb {P}[\varvec{i}_K=\varvec{j}_K \mid \varvec{i}_0=i, \varvec{j}_0= j ]\xrightarrow [\varepsilon \rightarrow 0]{} \min _{\ell =1,\ldots ,I} \nu _K(i{\scriptstyle \rightarrow }\ell ) + \nu _K(j{\scriptstyle \rightarrow }\ell ) =: \nu _K^\mathrm {meet}(i,j). \end{aligned}$$

3.5 The Semidistances

It is easily checked that in the discrete setting the properties of a semidistance are also satisfied:

  (i) \(\nu _K^{\mathrm {cross}}(i,j)\ge 0\) and \(\nu _K^{\mathrm {meet}}(i,j)\ge 0\),

  (ii) \(\nu _K^{\mathrm {cross}}(i,j)=0 \iff i=j\) and \(\nu _K^{\mathrm {meet}}(i,j)=0 \iff i=j\),

  (iii) \(\nu _K^{\mathrm {cross}}(i,j)=\nu _K^{\mathrm {cross}}(j,i)\) and \(\nu _K^{\mathrm {meet}}(i,j)=\nu _K^{\mathrm {meet}}(j,i)\).

Furthermore, the triangle inequality fails, but we again have the following estimate:

$$\begin{aligned} \nu _K^{\mathrm {meet}}(i,j) \le \min \big \{ \nu _K(i{\scriptstyle \rightarrow }j) , \nu _K(j{\scriptstyle \rightarrow }i)\big \} \le \max \big \{ \nu _K(i{\scriptstyle \rightarrow }j) , \nu _K(j{\scriptstyle \rightarrow }i)\big \} \le \nu _K^{\mathrm {cross}}(i,j). \end{aligned}$$

Both semidistances can be computed from shortest path costs, where the cost of a path is given by (19). We stress that this expression is fairly simple and depends on the flow field only through the known positions of the floaters \(x_k^{\scriptscriptstyle {(\ell )}}\). Because of this, (1) these costs can be used in practice when the velocity field is unknown (Sects. 4 and 5), and (2) they can even be applied to cases where there may not be an underlying velocity field, as for example in a discrete-time dynamical system (Sect. 4.1).

In practice, one first computes the one-way rates \(\nu _K(i\rightarrow j)\) using Algorithm 1, see Appendix B. From these rates, one readily obtains the semidistances via

$$\begin{aligned} \nu _K^\mathrm {cross}(i,j)&=\nu _K(i\rightarrow j) + \nu _K(j\rightarrow i) \quad \text {and}\quad \nu _K^\mathrm {meet}(i,j)\\&=\min _{\ell =1,\ldots ,I} \nu _K(i\rightarrow \ell ) + \nu _K(j\rightarrow \ell ). \end{aligned}$$
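Given the matrix of one-way rates, both semidistances are cheap postprocessing steps. A minimal sketch on synthetic rates (the names are ours), which also checks the estimate displayed above:

```python
import numpy as np

def semidistances(nu):
    """Cross- and meeting-semidistance matrices from one-way rates:
    nu[i, j] = nu_K(i -> j), nonnegative with zero diagonal."""
    cross = nu + nu.T
    meet = np.min(nu[:, None, :] + nu[None, :, :], axis=2)  # min_l nu(i->l) + nu(j->l)
    return cross, meet

rng = np.random.default_rng(3)
nu = rng.random((5, 5))
np.fill_diagonal(nu, 0.0)   # synthetic nonnegative one-way rates
cross, meet = semidistances(nu)
# Sandwich estimate: meet <= min one-way <= max one-way <= cross.
assert np.all(meet <= np.minimum(nu, nu.T) + 1e-12)
assert np.all(np.maximum(nu, nu.T) <= cross + 1e-12)
```

The first inequality holds because choosing \(\ell =j\) (or \(\ell =i\)) in the minimum gives the one-way rate, using the zero diagonal.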

Remark 3.3

(Time reversal for discrete semidistances) Similarly to Remark 2.1, the one-way cost satisfies the time-reversal property \(\nu _K(i\rightarrow j) = {\overleftarrow{\nu }}\!\!_K(j\rightarrow i)\), provided \(\alpha =1/2\), where \(\overleftarrow{\nu }\!\!_K\) is the cost associated with the backward dynamics. Moreover, this time-reversal property also holds for the cross-semidistance, whereas for the meeting semidistance \(\nu _K^\mathrm {meet}(i,j)=\min _{\ell =1,\ldots ,I} \overleftarrow{\nu }\!\!_K(\ell \rightarrow i) + \overleftarrow{\nu }\!\!_K(\ell \rightarrow j)\). Apart from the superior consistency order discussed in Sect. 3.1, the invariance of semidistances under time reversal is another reason for choosing \(\alpha =1/2\).

Remark 3.4

Other large-deviation-based semidistances are also possible. If one considers the “noise-flow” (i.e., \(\alpha =1\)) time-stepping scheme for the SDE rather than “noise-flow-noise”, expression (19) simplifies a bit. As another example of a large-deviation-based semidistance between two given discrete trajectories \(\{x_k^{\scriptscriptstyle {(i)}},x_k^{\scriptscriptstyle {(j)}}\}_{k=0,\dots ,K}\), one could consider the probability of hopping back and forth between the two trajectories, see Fig. 4. In that case, we find in the large-deviation scaling for \(\alpha =1\):

$$\begin{aligned} -\varepsilon \log \mathbb {P}[\varvec{i}_1=j,\varvec{i}_2=i,\dots \mid \varvec{i}_0=i]\xrightarrow [\varepsilon \rightarrow 0]{} \sum _{k=0}^{K-1} \frac{|x_k^{\scriptscriptstyle {(i)}}-x_k^{\scriptscriptstyle {(j)}}|^2}{2\tau }, \end{aligned}$$
(21)

For \(\alpha =0\), the sum would instead run from \(k=1\) to K. Naturally, this is simply the (squared) \(L^2\)-distance between the two trajectories, as considered earlier in Froyland and Padberg-Gehle (2015). Although this construction is very easy to calculate and its square root is a genuine metric, it is less interpretable as a cost for transport and mixing.
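For concreteness, the \(L^2\)-type cost (21) between two sampled trajectories can be evaluated directly; the example trajectories below are our own toy data.

```python
import numpy as np

def l2_cost(xi, xj, tau):
    """Back-and-forth hopping rate (21) for alpha = 1: sum over k = 0..K-1
    of |x_k^(i) - x_k^(j)|^2 / (2 tau). Inputs have shape (K+1, d)."""
    return np.sum((xi[:-1] - xj[:-1]) ** 2) / (2.0 * tau)

t = np.linspace(0.0, 1.0, 11)                  # K = 10 steps, tau = 0.1
xi = np.stack([t, np.zeros_like(t)], axis=1)   # trajectory along the horizontal axis
xj = xi + np.array([0.0, 0.3])                 # parallel trajectory at constant distance 0.3
cost = l2_cost(xi, xj, tau=0.1)
assert np.isclose(cost, 10 * 0.3**2 / 0.2)     # 10 equal terms of 0.45
```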

Fig. 4

Hopping back and forth (solid line) between two given trajectories (dashed lines)

Remark 3.5

It should be noted that the semidistances \(\nu _K^\mathrm {meet},\nu _K^\mathrm {cross}\) scale quadratically in space; this becomes even more apparent in the example considered in Sect. 3.7. In the case of the \(L^2\)-distance (21), the cost becomes a genuine distance after taking the square root. However, if we take the square roots of \(\nu _K^\mathrm {meet}\) and \(\nu _K^\mathrm {cross}\), the triangle inequality still fails. We therefore stick to the quadratic scaling as this has the most direct interpretation as large-deviation costs.

Remark 3.6

(Eulerian transport vs Lagrangian mixing) When speaking of transport in this paper, we mean “transport (of probability) from one trajectory to another”, expressing how the dynamics mixes the regions that these two trajectories come into contact with. This can be seen as a Lagrangian perspective. We express with large-deviation rates the unlikeliness of transitions between trajectories, and these are then computed as shortest paths, cf. Sect. 3.4. Deceptively similar mathematical constructions show up in Ser-Giacomi et al. (2015), where the authors consider “highly probable paths” of non-homogeneous Markov chains, which also leads to a time-dependent shortest path problem. Note, however, that this is orthogonal to our concept, as it quantifies likeliness rather than unlikeliness. A further important distinction is that their Markov chain is constructed in an Eulerian manner (as opposed to our Lagrangian setting), meaning that it describes transport between fixed regions of state space, serving as a discretization of the flow field (Froyland et al. 2007, 2010).

3.6 Discretization of the Continuous Semidistances

We now show that the one-way discrete-space–time cost \(\nu _K\) can also be obtained by discretizing the continuous-space–time cost \(\mu _T\). This means that discretization and derivation of the large-deviation principle are interchangeable operations (if done the right way). We will not be precise about the discretization error; of course one needs to assume that the number of floaters is sufficiently large.

We first divide the time interval into subintervals \([0,T)=\bigcup _{k=0}^{K-1} [k\tau , (k+\alpha )\tau )\cup [(k+\alpha )\tau ,(k+1)\tau )\). Recall that \(\phi _{t_0,t}\) is the flow associated with \(v(t,\cdot )\), that is, for any \(t_0,x\),

$$\begin{aligned} \partial _t \phi _{t_0,t}[x]= v\big (t,\phi _{t_0,t}[x]\big ). \end{aligned}$$

Note in what follows that \(x_{(\cdot )}\) is some path, not necessarily a trajectory of the flow. In each interval \([k\tau , (k+\alpha )\tau )\), we approximate by finite differences:

$$\begin{aligned} \dot{x}_t \approx \frac{x_{(k+\alpha )\tau }-x_{k\tau }}{\alpha \tau }&\text {and}&v(t,x_t) \approx \frac{x_{(k+\alpha )\tau } - \phi _{(k+\alpha )\tau ,k\tau }[x_{(k+\alpha )\tau }]}{\alpha \tau }. \end{aligned}$$

In each interval \([(k+\alpha )\tau ,(k+1)\tau )\), we approximate:

$$\begin{aligned} \dot{x}_t \approx \frac{x_{(k+1)\tau }-x_{(k+\alpha )\tau }}{(1-\alpha )\tau }&\text {and}&v(t,x_t) \approx \frac{\phi _{(k+\alpha )\tau ,(k+1)\tau }[x_{(k+\alpha )\tau }]- x_{(k+\alpha )\tau }}{(1-\alpha )\tau }. \end{aligned}$$

Because of the assumption that the flow is one-to-one, we can always write \(x_{(k+\alpha )\tau }=\phi _{k\tau ,(k+\alpha )\tau }[\hat{x}_k ]\) for some \({\hat{x}}_k\). We thus obtain:

$$\begin{aligned} \frac{1}{2}\int _0^T\!\big |\dot{x}_t - v(t,x_t)\big |^2\,\mathrm{d}t \approx \sum _{k=0}^{K-1} \frac{|x_{k\tau } - \hat{x}_k|^2}{2\alpha \tau } + \frac{|x_{(k+1)\tau } - \phi _{k\tau ,(k+1)\tau }[{\hat{x}}_k]|^2}{2(1-\alpha )\tau }. \end{aligned}$$

Since the number of floaters \(\{x_k^{\scriptscriptstyle {(i)}}\}_{k=0,\ldots ,K; i=1,\ldots ,I}\) is large, we can find an \(x^{\scriptscriptstyle {(i)}}_k\) close to \({\hat{x}}_k\), giving

$$\begin{aligned}&\mu _T(x_0^{\scriptscriptstyle {(i)}}\rightarrow x_K^{\scriptscriptstyle {(j)}}) \approx \inf \Big \{ \frac{1}{2}\int _0^T\!|\dot{x}_t - v(t,x_t)|^2\,\mathrm{d}t : x_0 = x_0^{\scriptscriptstyle {(i)}}, x_T = x_K^{\scriptscriptstyle {(j)}}, x_{k\tau }\in \mathcal {A}_k,\\& \qquad\qquad\qquad\qquad\qquad\qquad\quad {\hat{x}}_k:=\phi _{(k+\alpha )\tau ,k\tau } [x_{(k+\alpha )\tau }]\in \mathcal {A}_k \Big \} \\&\approx \min _{i_{(\cdot )}:i_0=i,i_K=j} \sum _{k=0}^{K-1} \min _{\ell =1,\dots ,I} \frac{|x_k^{\scriptscriptstyle {(i_k)}} - x_k^{\scriptscriptstyle {(\ell )}}|^2}{2\alpha \tau } + \frac{|x_{k+1}^{\scriptscriptstyle {(i_{k+1})}} - x_{k+1}^{\scriptscriptstyle {(\ell )}}|^2}{2(1-\alpha )\tau } \\&= \nu _K(i{\scriptstyle \rightarrow }j). \end{aligned}$$

This shows that we can either derive the large-deviation rate function in continuous space and discretize this to finite trajectories (as done here), or we can restrict the continuous dynamics to finite trajectory data and derive a large-deviation rate function for that (as done above); we obtain consistent results whichever route we take.

3.7 The Simple Example Revisited

Let us now demonstrate how the results of this section apply to the example of Sect. 2.4.

Discrete Time and Continuous Space

Let us first suppose we are given infinitely many “trajectories” of the system, one starting at each point \(x\in [0,L]\), and they are sampled at discrete time points \(k\tau \), \(k=0,1,\ldots ,K\), with \(\tau = \frac{T}{K}\). From Sect. 3.3 with \(\alpha =1/2\) we obtain, by writing \(\Delta x = \frac{L}{K}\), that

$$\begin{aligned} \nu _K(0{\scriptstyle \rightarrow }L) = \frac{1}{2} \sum _{k=1}^K \frac{(\Delta x)^2}{\tau } = \frac{1}{2} K\cdot \frac{(L/K)^2}{T/K} = \frac{L^2}{2T}\,, \end{aligned}$$

where we used that the optimal discrete path in (20) is the one making jumps of equal lengths \(\Delta x\). Note that the rate function is identical to that in the fully continuous case. Analogously, \(\nu _K(0{\scriptstyle \rightarrow }L/2) = \nu _K(L/2 {\scriptstyle \rightarrow }L) = \frac{L^2}{8T}\), and generally, if \(|x-y|=\delta \), then \(\nu _K(x{\scriptstyle \rightarrow }y)=\frac{\delta ^2}{2T}\). The derived semidistances scale similarly. Note that the semidistances converge to zero as \(T\rightarrow \infty \).

Discrete Time and Space

If we are given a finite number I of equispaced trajectories of this system sampled at the same times as in the previous paragraph, the virtual random walker cannot make arbitrarily small jumps as in the continuous state case, thus

$$\begin{aligned} \nu _K(0{\scriptstyle \rightarrow }L) = \frac{1}{2}\sum _{k=1}^K \frac{(\Delta x_k)^2}{\tau } \approx \frac{L^2}{2T}\,,\qquad \text {if }K\le I\,, \end{aligned}$$

since we can take \(\Delta x_k \approx L/K\) with error \(\mathcal {O}(I^{-1})\) as I grows. However, if \(K>I\), the smallest jumps are \(\Delta x_k = \frac{L}{I}\), thus

$$\begin{aligned} \nu _K(0{\scriptstyle \rightarrow }L) = \frac{1}{2}I\cdot \frac{(L/I)^2}{\tau } = \frac{L^2\,K}{2I\,T}\,. \end{aligned}$$

Thus, if the observation time of the trajectories grows while they are still observed at the same rate (i.e., \(\tau \) stays constant), the semidistances saturate at \(\frac{L^2}{I \tau }\) and do not converge to zero as in the continuous-space case. Moreover, to reach \(y=L/2\) from \(x=0\), we still cannot make jumps smaller than \(\Delta x = \frac{L}{I}\), but now we require only I/2 of them, such that we obtain \(\nu _K(0{\scriptstyle \rightarrow }L/2) = \nu _K(L/2{\scriptstyle \rightarrow }L) = \frac{1}{2}\cdot \frac{I}{2}\cdot \frac{(L/I)^2}{\tau } = \frac{L^2}{4I\tau }\) (for even I, and vanishing error for odd I as I grows).

The main lesson is that while in the continuous-space case halving the Euclidean distance makes the semidistance scale by \(\frac{1}{4}\), if the spatial resolution of trajectories is coarse, the discrete semidistance scales only by \(\frac{1}{2}\). In general, if \(|x-y|=\delta \), then on a coarse-resolution grid it takes about \(\frac{\delta }{\Delta x}\) jumps to travel between these two points, and we obtain \(\nu _K(x{\scriptstyle \rightarrow }y) \approx \frac{\delta }{\Delta x}\cdot \frac{(\Delta x)^2}{2\tau } = \delta \cdot \frac{\Delta x}{2\tau }\). Note that \(\Delta x\) and \(\tau \) are constant quantities, and thus the one-way discrete cost scales linearly in the Euclidean distance between the two points, as opposed to the quadratic scaling in the continuous-space case.

4 Coherence Analysis with Semidistances

Let us assume that we are given a set of trajectories \(\{x_k^{\scriptscriptstyle {(j)}}\}_{j=1,\ldots ,I,\,k=0,\ldots ,K}\), discrete in time and space, and a (semi)distance d. We now describe how such semidistances can be used to distinguish and analyze coherent sets from the finite data. We shall work with an unspecified semidistance d, but of course the semidistances we have in mind are \(\nu _K^\mathrm {cross}\) and \(\nu _K^\mathrm {meet}\), derived in the previous section. Other distance measures, not necessarily large-deviation based, could be used just as well, as we discuss below. Nevertheless, the semidistances should not be completely arbitrary; we assume that they share the behavior of \(\nu _K^\mathrm {meet}\) and \(\nu _K^\mathrm {cross}\) discussed in Sects. 4.1 and 4.2.

To illustrate the ideas, we first analyze the behavior of two prototypical one-dimensional examples. These examples show the difference between two types of regions: “mixing” and “static” (also known as “regular”). In these simple one-dimensional examples, one can easily determine the regions, and whether they are mixing or static, from the semidistances. One example has two invariant sets under the dynamics, which is (measure-theoretically and topologically) mixing on both of them. The other has two static regions, where the mutual physical distance of trajectories does not change under the dynamics; these regions are separated by a third, mixing region.

After this we proceed with a more involved model: a two-dimensional periodically forced double-gyre flow, where the boundaries of the separate regions are no longer as clear-cut as in the one-dimensional example. Nevertheless, we will show that one can identify the separate regions via the tools that we present next.

To this end, we introduce the notion of cornerstones, representing possible coherent sets or mixing regions, and then discuss how to find them and when to stop searching for them. Finally, to obtain coherent sets, we assign the trajectories to cornerstones. The notion of fuzzy affiliations will be used to express the uncertainty whether a trajectory close to the boundary of a set belongs to it or not. Note that in the case of finite data such an uncertainty is always present.

4.1 Two Illustrative Model Cases

Two Invariant, Mixing Subdomains

As mentioned in Sect. 3.5, we may also apply the techniques developed in this paper to a discrete-time dynamical system. To gain some intuition for the behavior of the semidistances at mixing regions, we consider the discrete-time system on the unit interval \(X = [0,1]\) and one-step flow map, see Fig. 5 (left),

$$\begin{aligned} { \phi (x) = \left\{ \begin{array}{ll} 4x \mod \frac{1}{2}, &{} x< \tfrac{1}{2} \\ \left( 4(x-\tfrac{1}{2}) \mod \tfrac{1}{2}\right) + \tfrac{1}{2}, &{} x \ge \tfrac{1}{2}\,. \end{array} \right. } \end{aligned}$$
Fig. 5

Left: the time-discrete flow map with two invariant mixing subdomains. Right: the time-discrete flow map with two static regions and an invariant mixing subdomain between them

The sets \(X_1 = [0,\tfrac{1}{2}]\), \(X_2 = (\tfrac{1}{2},1]\) are invariant, i.e., \(\phi ^{-1}(X_1) = X_1\) and \(\phi ^{-1}(X_2) = X_2\), and \(\phi \) is simply the circle-quadrupling map on each of these sets, i.e., it is mixing on each single component. Consequently,

$$\begin{aligned} \liminf _{t\in \mathbb {N},\, t\rightarrow \infty } \left| \phi ^t(x) - \phi ^t(y)\right| = 0 \end{aligned}$$
(22)

for (Lebesgue-)almost every pair \(x,y\in X_i\), \(i=1,2\).

Thus, \(\nu _K(i{\scriptstyle \rightarrow }j)\rightarrow 0\) as \(K\rightarrow \infty \), because if \(x_i,x_j\) are both in \(X_1\) or both in \(X_2\), then (22) shows that their trajectories eventually get arbitrarily close. If the trajectories start in different halves of [0, 1], then

$$\begin{aligned} \liminf _{t\in \mathbb {N},\, t\rightarrow \infty } \left| \phi ^t(x_i) - \tfrac{1}{2} \right| + \left| \phi ^t(x_j) - \tfrac{1}{2} \right| = 0\,, \end{aligned}$$
(23)

thus the jump from one trajectory to another gets arbitrarily cheap. See Fig. 6.
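The map \(\phi \) and the invariance of \(X_1\) and \(X_2\) can be checked directly. We use exact rational arithmetic, since in floating point repeated multiplication by 4 exhausts the mantissa after a few dozen iterations; the sample points below are arbitrary choices of ours.

```python
from fractions import Fraction

def phi(x):
    """Circle-quadrupling on each half of [0, 1]: two invariant mixing subdomains."""
    half = Fraction(1, 2)
    if x < half:
        return (4 * x) % half
    return (4 * (x - half)) % half + half

# Orbits never leave their half of the interval (invariance of X_1 and X_2).
for k in range(1, 101):
    x = Fraction(k, 101)   # 101 is odd, so no sample point ever hits 1/2 under iteration
    y = x
    for _ in range(50):
        y = phi(y)
    assert (y < Fraction(1, 2)) == (x < Fraction(1, 2))
```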

Fig. 6

Two trajectories of the map \(\phi \), 200 steps long, starting in \(X_1\) and \(X_2\), respectively. Theory shows that they eventually come arbitrarily close. Here they are closest at time step 193, marked by a circle

The transport semidistances between any two points within the same region are very small, at least if the time window is large enough. This behavior is typical for mixing regions. In fact, since the two mixing regions are separated only by one point, it is relatively cheap to move from one region to the other, and so the semidistances between two points in separate regions also converge to zero with increasing time. Nevertheless, the semidistances still detect a difference between the two invariant sets: the semidistance between two trajectories in the same invariant component generally goes to zero faster than that between trajectories from different components, as shown in Fig. 7 for \(I=100\) initially equispaced trajectories. Thus, it is the relative difference between the semidistances that is relevant for the transport structure of the state space, and not their absolute values.

Fig. 7

Semidistances \(\nu _K^\mathrm {cross}(i,j)\) (left) and \(\nu _K^\mathrm {meet}(i,j)\) (right) for increasing maximal time K, averaged over \(x_i,x_j\in X_1\) (downward-pointing triangles), \(x_i,x_j\in X_2\) (upward-pointing triangles), and \(x_i\in X_1,\,x_j\in X_2\) (circles), respectively. Note that the decrease in the distance is much slower for trajectories taken from different invariant sets

Two Static Regions Divided by a Mixing One

To gain some intuition about static regions, let us now consider the discrete-time system on \(X=[0,1]\) given by

$$\begin{aligned} { \phi (x) = \left\{ \begin{array}{ll} x, &{} x \in [0,\tfrac{1}{4}) \cup (\tfrac{3}{4},1] \\ \left( 2(x-\tfrac{1}{4}) \mod \tfrac{1}{2}\right) + \tfrac{1}{4}, &{} x \in [\tfrac{1}{4},\tfrac{3}{4}]\,, \end{array} \right. } \end{aligned}$$

see Fig. 5 (right). This map has three invariant sets. On the left and right ones the map is the identity; they are meant to model regions of the state space in complicated flows that are “static” in the sense that the mutual distance of points is not changed (or just barely) by the dynamics. We will consider these as one kind of prototype for coherent sets. The third region is mixing and physically separates the other two.
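Again a direct check, with exact rational arithmetic and arbitrary sample points of our own choosing: the two outer regions are pointwise fixed, and the middle region is invariant under the doubling dynamics.

```python
from fractions import Fraction

def phi(x):
    """Identity on [0, 1/4) and (3/4, 1]; rescaled circle-doubling on [1/4, 3/4]."""
    q, half = Fraction(1, 4), Fraction(1, 2)
    if x < q or x > 3 * q:
        return x
    return (2 * (x - q)) % half + q

# Static regions: every point is fixed, so mutual distances never change.
assert phi(Fraction(1, 10)) == Fraction(1, 10)
assert phi(Fraction(9, 10)) == Fraction(9, 10)

# Mixing region: [1/4, 3/4] is invariant.
x = Fraction(27, 100)
for _ in range(30):
    x = phi(x)
    assert Fraction(1, 4) <= x <= Fraction(3, 4)
```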

We take \(I=100\) initially equispaced trajectories and compute the one-way costs \(\nu _K(i{\scriptstyle \rightarrow }\cdot )\) with \(K=50\) for \(i=1\) and \(i=51\), respectively, shown in Fig. 8.

Fig. 8

One-way cost \(\nu _K(i{\scriptstyle \rightarrow }j)\) for \(i=1\) (left) and \(i=51\) (right) for the map with two static and one mixing region

From our analysis in Sect. 3.7, we would have expected to see quadratic growth of the one-way cost with respect to physical distance in the static regions, but we only observe linear growth. This is due to the finite number of considered trajectories, as also explained in the second paragraph of Sect. 3.7. All points of the mixing region have almost the same cost from any one point in the static regions, and approximately zero cost from one another. To obtain the cost between two points of different static regions, one has to consider the cost to go to the boundary of the static and mixing regions (linear in the Euclidean distance), travel on a trajectory from there to the boundary of the other static region (at zero cost), and then go from there to the desired point (again linear in the Euclidean distance that needs to be covered). Thus, the cost (and semidistance) between these two points is the sum of their one-way costs (and semidistances) to the mixing region, provided the time of consideration is sufficiently large for the mixing to take place. Since our fictive random walker uses trajectories of the mixing region to travel from one static region to the other, we will henceforth also call it the transition region.

To conclude, from Fig. 8 we can easily identify three separate regions, and from the steepness of the slopes (linear/quadratic or flat) we can determine whether a region is static or mixing. As the next example shows, this distinction is usually not as clear-cut as in these constructed examples, but the main ideas will be based on this observation.

4.2 The Periodically Forced Double Gyre

Let us now consider the non-autonomous system \(\dot{x}_t = v(t,x_t)\) on \(X = [0,2]\times [0,1]\) with (Froyland and Padberg-Gehle 2014)

$$\begin{aligned} v(t,x):= \begin{bmatrix} -\pi A \sin \left( \pi f(t,x_1)\right) \cos (\pi x_2) \\ \pi A \cos \left( \pi f(t,x_1)\right) \sin (\pi x_2)\frac{\hbox {d}f}{\hbox {d}z}(t,x_1) \end{bmatrix}, \end{aligned}$$
(24)

where \(f(t,z)=\beta \sin (\omega t)z^2+(1-2\beta \sin (\omega t))z\). We fix the parameter values \(A=0.25\), \(\beta =0.25\) and \(\omega =2\pi \); hence, the vector field has time period 1. The system preserves the Lebesgue measure on X. Equation (24) describes two counter-rotating gyres next to each other (the left one rotates clockwise), with the vertical boundary between the gyres oscillating periodically, see Fig. 9.
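The velocity field (24) with the stated parameter values can be written down directly; the following sketch also includes a hypothetical helper `sample_trajectories` (our naming, not from the paper) illustrating how floater positions could be recorded at the sampling times \(t_k = k\tau\) using a classical Runge–Kutta integrator.

```python
import numpy as np

# Parameter values from the text.
A, beta, omega = 0.25, 0.25, 2 * np.pi

def f(t, z):
    b = beta * np.sin(omega * t)
    return b * z ** 2 + (1 - 2 * b) * z

def df_dz(t, z):
    b = beta * np.sin(omega * t)
    return 2 * b * z + 1 - 2 * b

def v(t, x):
    """Velocity field (24) at time t and position x = (x1, x2)."""
    x1, x2 = x
    return np.array([
        -np.pi * A * np.sin(np.pi * f(t, x1)) * np.cos(np.pi * x2),
        np.pi * A * np.cos(np.pi * f(t, x1)) * np.sin(np.pi * x2) * df_dz(t, x1),
    ])

def sample_trajectories(x0, K=100, tau=0.2, substeps=10):
    """Integrate I floaters with classical RK4 and record their
    positions at t_k = k*tau, k = 0, ..., K.
    Returns an array of shape (K+1, I, 2)."""
    X = np.empty((K + 1, len(x0), 2))
    X[0] = x0
    h = tau / substeps
    for k in range(K):
        x = X[k].copy()
        t = k * tau
        for s in range(substeps):
            ts = t + s * h
            k1 = np.array([v(ts, xi) for xi in x])
            k2 = np.array([v(ts + h / 2, xi) for xi in x + h / 2 * k1])
            k3 = np.array([v(ts + h / 2, xi) for xi in x + h / 2 * k2])
            k4 = np.array([v(ts + h, xi) for xi in x + h * k3])
            x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        X[k + 1] = x
    return X
```

Note that the field is tangential to the boundary of \(X=[0,2]\times[0,1]\) (both components vanish on the respective boundary segments), so trajectories stay inside the domain.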

Fig. 9

Sketch of the velocity field of the periodically forced double-gyre flow at two different times. The horizontal axis is \(x_1\), and the vertical is \(x_2\)

We choose a uniform \(50\times 25\) grid as initial conditions for the floaters at time \(t=0\); i.e., \(I=1250\). We sample the trajectories of these floaters at times \(t_k = k\tau \), \(k=0,1,\ldots ,K\), where \(K=100\) and \(\tau = 0.2\). That is, the time span under consideration is 20 times the period of the forcing.

Employing our large-deviation-based distance computations on this data set using Algorithm 1 and \(\alpha =1/2\), we get the one-way costs \(\nu _K(i{\scriptstyle \rightarrow }j)\), \(i,j=1,\ldots ,I\), from which we compute \(\nu _K^\mathrm {cross}(i,j)\) and \(\nu _K^\mathrm {meet}(i,j)\).
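Both semidistances can be assembled from the matrix of one-way costs. In the sketch below, the cross-semidistance is the symmetrization \(\nu _K(i{\scriptstyle \rightarrow }j)+\nu _K(j{\scriptstyle \rightarrow }i)\) (as stated explicitly in Sect. 5.1); for the meeting semidistance we take, as an assumption on its definition, the cheapest trajectory on which the two walkers can meet, in the spirit of the "meeting paths with the shortest joint length" of Fig. 21 — this is a guess at the construction, not the paper's exact formula.

```python
import numpy as np

def cross_meet(nu):
    """Given the matrix nu[i, j] of one-way costs nu_K(i -> j),
    return the cross- and (assumed) meeting-semidistance matrices."""
    # Symmetrization: nu(i->j) + nu(j->i).
    cross = nu + nu.T
    # Assumed meeting cost: cheapest common trajectory k on which
    # walkers started at i and j can meet.
    meet = np.min(nu[:, None, :] + nu[None, :, :], axis=2)
    return cross, meet
```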

As a first simple analysis, we can order the points by their semidistances to the center of one gyre, see Fig. 10. Here and in the following, the rates and semidistances will always be given in units of \(1/\tau \). On a log-log scale, the slope 1/2 (square-root-type behavior) indicates that most trajectories in the gyre lie on approximately concentric circles around its center (Footnote 8). Since the semidistances grow linearly in the Euclidean distance inside the gyre, we see that we are in the low-resolution regime discussed in Sect. 3.7. Analogously to Fig. 8, we can again (vaguely) distinguish three regions: a steep (square-root-type) region, a flat region, and another steep (flipped square-root-type) region. As before, the flat region is typically strongly mixing, and the steep regions are static. We shall make this distinction more precise in the next sections.

Fig. 10

Left: \(\nu _K^\mathrm {meet}(c_1,\cdot )\) sorted in ascending order. Right: the same as left, on a log-log scale. The horizontal axis shows the rank, and the vertical shows the semidistance value

Remark 4.1

(Three-dimensional flows) Clearly, the scaling behavior shown in Figs. 8 and 10 is dimension-dependent. For a three-dimensional static region, one would see a slope 1/3 on a log-log scale. The question remains how a typical coherent set behaves there: can it be modeled by a static region? If an incompressible flow rotates uniformly in a plane, it necessarily has a constant shifting motion in the perpendicular axial direction, leading to cylindrical vortices. If the cylindrical rings of a vortex rotate at different angular frequencies, the flow speed along axial directions is non-uniform, and mixing-type behavior occurs in the vortex (Halász et al. 2007). We leave the analysis of such systems to future work and proceed with analyzing different aspects of prototypical two-dimensional flows here.

4.3 Cornerstones

To start the analysis of the state space under a semidistance d, we randomly choose a trajectory, represented by a label \(c_0\in \{1,\ldots ,I\}\), and compute the trajectory furthest from it, i.e., we set

$$\begin{aligned} c_1 = \mathop {{{\mathrm{\arg \,\max }}}}\limits _{i=1,\ldots ,I} d(i,c_0)\,. \end{aligned}$$

To find a set of points that “spans” the state space, we identify successively further trajectories that are far away from all the other already identified “cornerstones” \(\{c_{q}\}_{q=1,\ldots ,Q}\), as in Rüdrich et al. (2017):

$$\begin{aligned} c_{Q+1} = \mathop {{{\mathrm{\arg \,\max }}}}\limits _{i=1,\ldots ,I}\min _{q=1,\ldots ,Q} d(i,c_q)\,. \end{aligned}$$
(25)

Observe that in this optimization problem we ignore the first, randomly chosen trajectory \(c_0\); hence, the set of cornerstones \(\{c_q\}_{q=1,\ldots ,Q}\) is less dependent on this randomness. Moreover, even if the first trajectory \(c_0\) happened to represent a coherent set, the algorithm will eventually provide a new cornerstone in that set which lies closer to the semidistance center of that set.
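The cornerstone search (25) is a greedy farthest-point sampling on the (semi)distance matrix. A minimal sketch (function name and interface are ours):

```python
import numpy as np

def find_cornerstones(d, Q, rng=None):
    """Greedy cornerstone search as in (25) on a symmetric
    (semi)distance matrix d of shape (I, I): start from a random
    trajectory c0, then repeatedly pick the trajectory maximizing the
    minimal distance to the cornerstones found so far.  The random
    seed trajectory c0 itself is discarded, as in the text."""
    rng = np.random.default_rng(rng)
    I = d.shape[0]
    c0 = rng.integers(I)
    c1 = int(np.argmax(d[:, c0]))          # farthest from the random seed
    corners, values = [c1], [d[c1, c0]]    # first objective value uses c0
    for _ in range(Q - 1):
        min_dist = d[:, corners].min(axis=1)
        c_next = int(np.argmax(min_dist))
        corners.append(c_next)
        values.append(min_dist[c_next])    # objective value at the optimizer
    return corners, values
```

On a toy "two clusters plus a middle point" distance matrix, the first two cornerstones land in the two extreme clusters and the third in the middle, mirroring the static/transition structure discussed above.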

For the double gyre and the meeting distance, we identified three cornerstones. The objective function of the maximization problem (25) is plotted in Fig. 11; this yields a picture similar to, but more detailed than, Fig. 10. Note that the chaotic, well-mixed transition region appears as a flat region in these distance graphs, and the gyres appear as steep regions toward the maxima of the respective graphs. That the chaotic region is well mixed and has no stratification (invariant rings, as the gyres have) can be seen from its flat behavior toward its maximum. The fourth cornerstone is part of a gyre and starts to stratify it. Nevertheless, its distance to the other cornerstones is much smaller.

Fig. 11

The objective functions of the maximization problem in (25), sorted in ascending order (yellow and purple). Blue and red: \(\nu _K^\mathrm {meet}(c_i,\cdot )\), \(i=1,2\) (Color figure online)

To get a first glimpse of the separate regions in the state space, we have plotted the semidistances from each cornerstone in Fig. 12, both at the initial and final times. Note that since we work with trajectory labels rather than physical positions, the semidistances are invariant in time, whereas the physical positions of the floaters change over time. From these figures, one can approximately identify the two static (gyre) regions, being very close to and very far from \(c_1\) and \(c_2\), respectively, and the chaotic transition region in between, having approximately constant distance from \(c_1\) and \(c_2\), cf. (Froyland and Padberg-Gehle 2014, Figure 1).

Fig. 12

Distances \(\nu _K^\mathrm {meet}\) from trajectory \(c_1\) (top), \(c_2\) (middle) and \(c_3\) (bottom), marked by the magenta circle, at initial (left) and final times (right). The semidistances are given in units \(1/\tau \). The horizontal axis is \(x_1\), and the vertical is \(x_2\) (Color figure online)

4.4 Number of Cornerstones

How do we determine the number of cornerstones that should be used? Is there an optimal number, or is it up to our liking? In the case of the double gyre, as noted above, a fourth cornerstone would be part of one of the gyres, and assigning affiliations would thus split one gyre into two sets. If the gyres consisted of a continuum of periodic orbits, we could proceed to split them this way into as many rings as we like. The same situation appears in an idealized framework in Sect. 4.1 for the static regions: since they are static, arbitrary subsets are perfectly coherent (even invariant in this case).

A good place to stop searching for further cornerstones is when they start to subdivide "maximally coherent" sets, such as the gyres in the double-gyre example, the static sets in the second example, and the invariant sets in the first example of Sect. 4.1. To this end we make an idealized assumption: coherent sets appear as in the second example of Sect. 4.1, i.e., multiple static regions divided by one mixing region. Here, "static" is meant in the sense that the mutual distances between points in the set barely change. Such an assumption was also utilized in Hadjighasem et al. (2016).

Note that if there are \(C\ge 2\) coherent sets, the first C cornerstones are going to lie in them, one in each. This is due to the fact that to move from the center of one static region to another, the shortest path in (20) needs to move out of one set, travel through the transition region to the other set, and move to its center, hence maximizing the minimal distance to all other cornerstones. After finding all static regions, the next cornerstone is to be found in the transition region, if all static regions are approximately of the same size, which we assume here. The crucial observation is that this \((C+1)\)-st cornerstone is half as far from the other cornerstones (Footnote 9) as they are from one another. In other words, \(d(c_i,c_{C+1}) + d(c_j,c_{C+1}) \approx d(c_i,c_j)\), \(i,j\le C\).

To summarize, our simple criterion for when to stop searching for cornerstones is: stop once the value of the objective function in (25) drops by at least a factor of two compared with the previous value. Observe how nicely this works in the periodically forced double-gyre case: the rightmost points of the curves in Fig. 11 are the optimizers, and the corresponding value of the yellow curve is less than half of the values for the first two cornerstones. This indicates stopping with three cornerstones, as they represent both the gyres and the transition region.
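The factor-two criterion can be checked mechanically on the sequence of objective values returned by the cornerstone search. A small hypothetical helper (our naming); it reports the index of the first value that falls below half its predecessor, and None if no such drop occurs:

```python
def detect_drop(values, factor=0.5):
    """Return the index q of the first objective value of (25) with
    values[q] < factor * values[q - 1], or None if there is no such
    drop.  Whether the cornerstone at the drop is kept (as for the
    transition region of the double gyre) or discarded is a modeling
    decision left to the user."""
    for q in range(1, len(values)):
        if values[q] < factor * values[q - 1]:
            return q
    return None
```

Applied to the Bickley jet values of Sect. 5.1, the drop is detected at the eighth value (0.70 versus 1.42); for the rotating double gyre of Sect. 5.2 the factor is 0.55, so no drop below one half is flagged, in line with the discussion there.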

4.5 Clustering and Fuzzy Affiliations

To get an even crisper picture of the subdivision of the state space into regions that are far from one another in terms of the semidistance d, we assign to each cornerstone \(c_1,c_2,c_3\) the trajectories that are closer to it than to the other two cornerstones. For the periodically forced double gyre and the meeting distance, this is shown in Fig. 13.

Fig. 13

The trajectories that are closer in terms of \(\nu _K^\mathrm {meet}\) to one of the cornerstones than to the others. Left: initial time, right: final time. The horizontal axis is \(x_1\), the vertical is \(x_2\)

Comparing this picture with the typical trajectories of the time-1 Poincaré map of the system [again, see Froyland and Padberg-Gehle 2014, Figure 1], it appears that the gyre regions in our figure are smaller. This is due to the nature of the transport distance at hand: the gyres are partly made up of so-called regular regions of the Poincaré map, meaning that typical trajectories move on periodic orbits that are approximately concentric circles. Transport between these trajectories is only possible through diffusion, and the price one has to pay for this transport in the radial direction is reflected by the rate function (recall that this is what we model by the static regions in Sect. 4.1). The cost to get from the center of the gyre (the cornerstone \(c_1\) or \(c_2\)) to a regular trajectory in the same gyre is proportional to the "radial distance" between them (compare with the static part of the second example in Sect. 4.1). This behavior is not characteristic of the well-mixed transition region, because there the dynamics (eventually) brings any two trajectories close to each other. The effect is most prominent if the time frame of consideration grows infinitely large; on our finite time horizon it appears as a flattening of the curve. This brings us back to why the blue and green regions in Fig. 13 are smaller than the gyres of the Poincaré map. The answer is simply that the outer periodic orbits are closer to the transition region than to the center of the gyre, hence also closer to the cornerstone \(c_3\) in the transition region, because the points in the transition region have very small distance from one another.

Instead of a hard clustering, we can assign the trajectories to the cornerstones by fuzzy affiliations \(q_{c_i}(\cdot )\) to obtain more refined information on coherence. For instance, let \(m>1\); minimizing the affiliation-weighted penalty function

$$\begin{aligned} \sum _{j=1}^I\sum _{i=1}^{\ell } q_{c_i}(j)^m d(c_i,j)^2 \end{aligned}$$

subject to the constraints \(0\le q_{c_i}\) for \(i=1,\ldots ,\ell \) and \(\sum _{i=1}^{\ell }q_{c_i}(j)=1\) for every \(j=1,\ldots ,I\), yields

$$\begin{aligned} q_{c_i}(j) = \frac{1}{\sum _{k=1}^{\ell }\left( \frac{d(c_i,j)}{d(c_k,j)}\right) ^{\frac{2}{m-1}}}\,. \end{aligned}$$
(26)

This is the affiliation function in the fuzzy c-means algorithm (Bezdek 1981), giving \(q_{c_i}(j)=1\) \(\Leftrightarrow \) \(d(c_i,j)=0\), i.e., affiliation is maximal if the distance is minimal. Further, the parameter m controls the fuzziness of the clustering: large m gives soft clusters, while m approaching 1 gives more and more “crisp” clusters as the affiliations converge either to 0 or to 1 (Bezdek et al. 1987). The resulting affiliations (indicated at initial time) for \(m=2\) are shown in Fig. 14. For m close to 1 we obtain affiliations very similar to the hard clusters in Fig. 13.
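Formula (26) can be implemented directly; the following sketch (interface ours) additionally handles the limiting case \(d(c_i,j)=0\), where the affiliation concentrates on the cornerstone itself.

```python
import numpy as np

def fuzzy_affiliations(d_corner, m=2.0):
    """Fuzzy c-means affiliations (26).  d_corner[i, j] is the
    (semi)distance d(c_i, j) from cornerstone c_i to trajectory j.
    Returns q of shape (ell, I) with columns summing to one."""
    ell, I = d_corner.shape
    q = np.empty((ell, I))
    for j in range(I):
        dj = d_corner[:, j]
        if np.any(dj == 0):
            # Affiliation is maximal where the distance vanishes.
            q[:, j] = (dj == 0) / np.sum(dj == 0)
        else:
            # q_i = d_i^{-2/(m-1)} / sum_k d_k^{-2/(m-1)}, which is
            # an equivalent rewriting of (26).
            w = dj ** (-2.0 / (m - 1.0))
            q[:, j] = w / w.sum()
    return q
```

For \(m=2\) the weights are simply the inverse squared distances, normalized; as \(m\downarrow 1\) the exponent \(2/(m-1)\) blows up and the affiliations approach the hard clustering of Fig. 13.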

Fig. 14

Fuzzy affiliations \(q_{c_i}(\cdot )\) of the trajectories to the three cornerstones, \(c_1,c_2,c_3\) (from left to right) for fuzziness exponent \(m=2\), shown at initial time. The horizontal axis is \(x_1\), the vertical is \(x_2\)

5 Numerical Results

In the previous section we already presented numerical results for the double-gyre system, where we used the results to motivate and develop the analysis tools. In this section we apply these tools to two other well-analyzed test cases: the perturbed Bickley Jet and the rotating (transitory) double gyre. They are different paradigmatic examples, as the Bickley Jet has a non-vortex coherent set (the jet core), and the transitory double gyre is not a periodically forced system, thus genuinely living on a finite time interval.

Let us also point out that the examples presented here and in the previous section are all one- or two-dimensional. Although we expect the analysis in higher dimensions to be at least qualitatively not very different from the two-dimensional case, dealing with the nevertheless arising subtle differences (see Remark 4.1) is beyond the scope of this conceptual work.

It turns out that the choice between the two semidistances \(\nu _K^\mathrm {cross}\) or \(\nu _K^\mathrm {meet}\) has only marginal influence on the results. In this section we shall mostly work with the cross-semidistance.

5.1 The Bickley Jet

We consider a perturbed Bickley Jet as described in Rypina et al. (2007). This is an idealized zonal jet approximation in a band around a fixed latitude, assuming incompressibility, on which three traveling Rossby waves are superimposed, see Fig. 15. The dynamics is given by \(\dot{x}_t = v(t,x_t)\) with \(v(t,x)=(-\frac{\partial \Psi }{\partial x_2}, \frac{\partial \Psi }{\partial x_1})\) and stream function

$$\begin{aligned} \Psi (t,x_1,x_2) = -U_0 L \tanh \big (x_2/L\big ) + U_0L\,\mathrm {sech}^2\big (x_2/L\big ) \sum _{n=1}^3 A_n\cos \left( k_n\left( x_1- c_n t\right) \right) . \end{aligned}$$
Fig. 15

Sketch of the Bickley Jet flow field at two different times. The flow pattern travels from left to right on the horizontally periodic domain. The horizontal axis is \(x_1\), the vertical is \(x_2\)

The constants are chosen as in Rypina et al. (2007, Section 4). In particular, we set \(k_n = 2n/r_e\) with \(r_e = 6.371\), \(U_0 = 5.414\), and \(L = 1.77\). The phase speeds \(c_n\) of the Rossby waves are \(c_1 = 0.1446U_0\), \(c_2 = 0.205U_0\), \(c_3 = 0.461U_0\); their amplitudes are \(A_1 = 0.0075\), \(A_2 = 0.15\), and \(A_3 = 0.3\), as in Hadjighasem et al. (2016). The system is considered on the state space \(X = [0,\pi r_e]\times [-3,3]\), which is periodic in the horizontal \(x_1\) coordinate.
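A sketch of the stream function and the induced velocity field \(v=(-\partial \Psi /\partial x_2,\, \partial \Psi /\partial x_1)\) with the stated constants; the partial derivatives are written out analytically (our algebra, easily checked against a finite-difference quotient of \(\Psi\)):

```python
import numpy as np

# Constants from the text.
U0, L, r_e = 5.414, 1.77, 6.371
k = 2 * np.arange(1, 4) / r_e                 # wavenumbers k_n = 2n/r_e
c = U0 * np.array([0.1446, 0.205, 0.461])     # phase speeds c_n
A = np.array([0.0075, 0.15, 0.3])             # amplitudes A_n

def psi(t, x1, x2):
    """Stream function of the perturbed Bickley jet."""
    waves = np.sum(A * np.cos(k * (x1 - c * t)))
    return -U0 * L * np.tanh(x2 / L) + U0 * L / np.cosh(x2 / L) ** 2 * waves

def v(t, x):
    """Velocity v = (-dPsi/dx2, dPsi/dx1)."""
    x1, x2 = x
    sech2 = 1 / np.cosh(x2 / L) ** 2
    waves = np.sum(A * np.cos(k * (x1 - c * t)))
    dwaves = -np.sum(A * k * np.sin(k * (x1 - c * t)))
    # d/dx2: derivative of tanh gives sech^2; of sech^2 gives -2 sech^2 tanh / L.
    dpsi_dx2 = -U0 * sech2 - 2 * U0 * sech2 * np.tanh(x2 / L) * waves
    dpsi_dx1 = U0 * L * sech2 * dwaves
    return np.array([-dpsi_dx2, dpsi_dx1])
```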

We choose a uniform \(60\times 18\) grid as initial conditions for the floaters at time \(t=0\); i.e., \(I=1080\). We sample the trajectories of these floaters at times \(t_k = k\tau \), \(k=0,1,\ldots ,K\), where \(K=80\) and \(\tau = 0.5\). In this time interval, typical trajectories cross the cylindrical state space horizontally 4–5 times, trajectories in the jet core (the wavy structure in Fig. 15) up to 9 times.

Employing our large-deviation-based distance computations on this data set using Algorithm 1 and \(\alpha =1/2\), we get the rates \(\nu _K(i{\scriptstyle \rightarrow }j)\), \(i,j=1,\ldots ,I\). From these rates we readily obtain the \(\nu _K^\mathrm {cross}(i,j)\) via

$$\begin{aligned} \nu _K^\mathrm {cross}(i,j) = \nu _K(i{\scriptstyle \rightarrow }j) + \nu _K(j{\scriptstyle \rightarrow }i)\,. \end{aligned}$$

We repeat the cornerstone-finding analysis from the previous section. The optimal values of the objective function in the cornerstone-finding problem (25) for eight cornerstones are, in order:

$$\begin{aligned} 2.06,\ 3.21,\ 2.45,\ 2.33,\ 2.30,\ 2.14,\ 1.42,\ 0.70. \end{aligned}$$

Recall that the first value is with respect to a random cornerstone \(c_0\) that we discard. These numerical values, together with our previous analysis, shed light on the topological structure of the state space with respect to transport and mixing. Note that our assumption from Sect. 4.4, namely that all coherent sets are divided by one mixing region, is not satisfied: the jet core is a coherent set itself, dividing two mixing regions (below and above it), each containing three further coherent sets (the gyres). Thus, \(c_1\) and \(c_2\) have maximal distance (\(\nu _K^\mathrm {cross}(c_1,c_2)=3.21\)), because the random walker needs to cross the jet core. Every further cornerstone \(c_3,\ldots ,c_6\) can be reached from either \(c_1\) or \(c_2\) through one of the mixing regions, and these cornerstones thus have very similar costs. The spread of these costs, \(2.14\text {--}2.45\), shows that we did not reach the state of full mixing on the chosen time interval.

Now, the seventh cornerstone lies in the jet core, which has to be crossed when traveling between cornerstones below and above it. The corresponding cost (1.42) is a bit larger than half of the previous cost, because \(c_7\) does not lie on the shortest path between cornerstones below and above the jet core. Intuitively, the "center line" of the jet core should be equally far from all cornerstones \(c_1,\ldots ,c_6\) if the time interval is large enough such that the regions around the gyres are truly mixing. Since it is not, there are points on the boundary of the jet core which are easier to reach from them, and thus it is easier to cross there. The cornerstone \(c_7\) represents the position where it is hardest to cross. The eighth cornerstone's semidistance to its closest cornerstone is indeed half that of \(c_7\), and it lies in one of the mixing regions.

We show our results for seven cornerstones (Footnote 10). The semidistances are shown in Fig. 16.

Fig. 16

Row-wise from top left to bottom: the identified cornerstones \(c_i\), \(i=1,\ldots ,7\) (magenta circles) and their distances \(\nu ^{\mathrm {cross}}(c_i,\cdot )\) to the other trajectories, at initial time. The cornerstones are located in the six gyres and the central jet region. The distances are given in units of \(1/\tau \). The horizontal axis is \(x_1\), the vertical is \(x_2\) (Color figure online)

The corresponding fuzzy affiliations from (26) for \(m=1.1\) are shown in Fig. 17. They show a very crisp distinction of the six gyres from the rest of the state space. The bottom right figure shows the affiliation \(q_{c_7}(\cdot )\) for \(m=1.9\), which suggests that the region around the gyres could itself still be partitioned into coherent sets: the jet core appears more strongly affiliated to this cornerstone than the other trajectories. It is not surprising that we could not see this for \(m=1.1\): the closer m is to 1, the more "crisp" the affiliation function is forced to be, and the mixing region is more easily reached from the thin jet core than from the gyres.

Fig. 17

Row-wise from top left to bottom: the fuzzy affiliations (26) of the trajectories at time \(t=5\) to the cornerstones \(c_1,\ldots ,c_7\), respectively (magenta circles). Bottom right: affiliation \(q_{c_7}(\cdot )\) for \(m=1.9\), which suggests that the \(7\text {th}\) coherent region could contain a coherent set itself: the jet core. The horizontal axis is \(x_1\), the vertical is \(x_2\) (Color figure online)

5.2 The Rotating Double Gyre

Let us now consider a prototypical system in which transport is considered only on a finite time interval. The rotating double-gyre system (Mosovsky and Meiss 2011) is given by the stream function \(\psi (t,x_1,x_2) = (1-s(t))\psi _P(x_1,x_2) + s(t)\psi _F(x_1,x_2)\), with \(s(t) = t^2(3-2t)\), \(\psi _P(x_1,x_2) = \sin (2\pi x_1)\sin (\pi x_2)\), and \(\psi _F(x_1,x_2) = \sin (\pi x_1)\sin (2\pi x_2)\), and is considered on the state space \(X = [0,1]^2\) and time interval \(t\in [0,1]\). The two gyres, which initially occupy the left and right halves of the unit square, turn during this time by \(\pi /2\) to occupy the top and bottom halves at the final time, see Fig. 18.
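A minimal sketch of the rotating double-gyre field, assuming the same sign convention \(v=(-\partial \psi /\partial x_2,\, \partial \psi /\partial x_1)\) as for the Bickley jet (the paper only states the stream function, so this convention is our assumption):

```python
import numpy as np

def s(t):
    """Transition function s(t) = t^2 (3 - 2t), with s(0)=0, s(1)=1."""
    return t * t * (3 - 2 * t)

def psi(t, x1, x2):
    """Interpolated stream function (1-s) psi_P + s psi_F."""
    psi_P = np.sin(2 * np.pi * x1) * np.sin(np.pi * x2)
    psi_F = np.sin(np.pi * x1) * np.sin(2 * np.pi * x2)
    return (1 - s(t)) * psi_P + s(t) * psi_F

def v(t, x):
    """Velocity v = (-dpsi/dx2, dpsi/dx1) under the assumed convention."""
    x1, x2 = x
    # Analytic partials of psi_P and psi_F, as (d/dx1, d/dx2) pairs.
    dP = (2 * np.pi * np.cos(2 * np.pi * x1) * np.sin(np.pi * x2),
          np.pi * np.sin(2 * np.pi * x1) * np.cos(np.pi * x2))
    dF = (np.pi * np.cos(np.pi * x1) * np.sin(2 * np.pi * x2),
          2 * np.pi * np.sin(np.pi * x1) * np.cos(2 * np.pi * x2))
    st = s(t)
    dpsi_dx1 = (1 - st) * dP[0] + st * dF[0]
    dpsi_dx2 = (1 - st) * dP[1] + st * dF[1]
    return np.array([-dpsi_dx2, dpsi_dx1])
```

Since \(\psi\) vanishes on the boundary of the unit square at all times, the field is tangential there and \([0,1]^2\) is invariant, as required.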

Fig. 18

Sketch of the flow field of the rotating double gyre at initial (left) and final (right) times. The horizontal axis is \(x_1\), the vertical is \(x_2\)

We choose a uniform \(30\times 30\) grid as initial conditions for the floaters at time \(t=0\); i.e., \(I=900\). We sample the trajectories of these floaters at times \(t_k = k\tau \), \(k=0,1,\ldots ,K\), where \(K=100\) and \(\tau = 0.01\). We employ the cross-semidistance, and start our cornerstone search. The first three values of the optimization problem (25) are

$$\begin{aligned} 0.0274,\ 0.0474,\ 0.0262. \end{aligned}$$

We identify the significant drop after two cornerstones; hence, we expect two coherent sets with one mixing region dividing them. The drop in the distance is by a factor of 0.55, which is not below one half, the reason again being that the time interval of consideration is not sufficient for perfect mixing of the transition region.

Fig. 19

From left to right: the identified cornerstones \(c_i\), \(i=1,\ldots ,3\) (magenta circles) and their distances \(\nu ^{\mathrm {cross}}(c_i,\cdot )\) to the other trajectories, at initial time (top) and final time (bottom). The distances are given in units of \(1/\tau \). The horizontal axis is \(x_1\), the vertical is \(x_2\) (Color figure online)

Fig. 20

The fuzzy affiliations computed with \(m=1.2\) to the cornerstones \(c_1\) (top) and \(c_2\) (bottom), at times \(t=0,0.5,1\), from left to right, respectively. The horizontal axis is \(x_1\), the vertical is \(x_2\)

The semidistances from the three identified cornerstones and the affiliations to them for exponent \(m=1.2\) are shown in Figs. 19 and 20, respectively. Although both \(\nu ^\mathrm {cross}\) and \(\nu ^\mathrm {meet}\) have been shown to be able to detect coherent sets, we demonstrate their different nature by showing the shortest paths in the respective distances in Fig. 21.

Fig. 21

Shortest paths from cornerstone \(c_1\) to \(c_2\) (left), from \(c_2\) to \(c_1\) (middle), and the meeting paths with the shortest joint length (right). The brighter the color of a path segment, the larger the cost of that transition. For those segments for which the path crosses from one trajectory to another we show with a dashed segment how the former trajectory would have continued. The starting point of the path is indicated by a circle, and the endpoint by a triangle. The horizontal axis is \(x_1\), the vertical is \(x_2\)

Finally, we demonstrate the approach for a scattered set of sparse data points, taking 400 initial points randomly distributed in X, and repeating the analysis for their trajectories. We show the resulting fuzzy affiliations in Fig. 22.

Fig. 22

The rotating double gyre with randomly chosen 400 initial points. The fuzzy affiliations computed with \(m=1.2\) to the cornerstones \(c_1\) (top) and \(c_2\) (bottom), at times \(t=0,0.5,1\), from left to right, respectively. The horizontal axis is \(x_1\), the vertical is \(x_2\)

6 Discussion and Outlook

6.1 The Dynamic Laplacian

Froyland (2015) has introduced the dynamic Laplacian as a transport-related tool to find coherent sets. Similarly to our approach, it makes use of a small random perturbation of size \(\varepsilon \), and then drives \(\varepsilon \) to zero.

Numerical methods so far discretize the dynamic Laplacian directly (Froyland and Junge 2015; Banisch and Koltai 2017; Froyland and Junge 2017). In light of our analysis, which can be used both ways (derive the large-deviation principle in continuous space, then discretize it to finite trajectories, cf. Sect. 3.6; or discretize the dynamics to finite trajectories, then derive the large-deviation principle on them, cf. Sect. 3.3), we ask whether there is a discrete dynamic Laplacian that can be derived from a discretization of the perturbed dynamics.

Mimicking the construction in Froyland (2015), and sketching the idea while skipping details, one should construct a discrete, \(\varepsilon \)-dependent transfer operator \(T_{\varepsilon }\in \mathbb {R}^{I\times I}\) that represents the transition probabilities of a forward-backward process, and then obtain a discrete dynamic Laplace operator \(L_{\text {dyn}} := \frac{d}{d\varepsilon }\big \vert _{\varepsilon =0}T_{\varepsilon }\). A discrete transfer operator \(T_\varepsilon \) that is a consistent approximation of the continuous dynamics can be obtained by a construction as in Sect. 3.2, using the transition probabilities (17). Technical details aside, we see that the probabilities are linear combinations of terms of the form \(\text {e}^{-\Delta x / \varepsilon }\), where \(\Delta x\) here is a formal distance term that appears in the formulas. Differentiation with respect to \(\varepsilon \) immediately yields that all off-diagonal entries (basically, where \(\Delta x>0\)) of \(L_{\text {dyn}}\) are zero; in fact, the matrix is the identity.

Thus, this approach of first discretizing the dynamics and then factoring out the \(\varepsilon \)-small stochastic perturbation does not give a dynamically meaningful result. In analytic terms, the very same problem occurred in a different attempt to introduce a discrete dynamic Laplacian from a discrete transfer operator, see (Banisch and Koltai 2017, Section IV). In general, it would be desirable to understand when and how the "first discretize, then factor out \(\varepsilon \)" methods can work, such that they can complement the methods that directly discretize the (continuous) dynamic Laplace operator.

6.2 Other Distance Measures

The time-dependent shortest path problem used to compute our semidistances is computationally demanding in our current algorithmic realization, which limits the number of trajectories that can be handled. Moreover, the semidistances do not satisfy the triangle inequality; hence, they are not metrics. Although numerical efficiency is not the main focus of this paper, and we demonstrated the usefulness of our semidistances in unraveling the underlying dynamical structure of the example systems, a more cheaply computable metric would enhance the utility and significance of the analysis methods presented here.

Ultimately, one would like to understand the intrinsic, possibly low-dimensional geometric organization of the state space with respect to transport and mixing, as pioneered in Banisch and Koltai (2017). Employing proper metrics would allow, e.g., the usage of low-dimensional embedding techniques, such as multidimensional scaling, to represent and better understand this geometric organization. One canonical candidate would be the metric structure related to the dynamic Laplacian, considered in Karrasch and Keller (2016). This will be subject of future studies.

To summarize, although other distance measures could be used to analyze complicated dynamic behavior, we showed that the semidistances we derived in this paper from the physical notion of transport and mixing in the vanishing diffusion setting are natural and effective diagnostic tools.