1 Introduction

The study of geometric flows is a flourishing mathematical field and geometric evolution equations have been applied to a variety of topological, analytical and physical problems, in some cases giving very fruitful results. In particular, great attention has been devoted to the analysis of the harmonic map flow, the mean curvature flow and the Ricci flow. Thanks to the efforts of many members of the mathematical community the understanding of these topics gradually improved, culminating in Perelman’s proof of the Poincaré conjecture by means of the Ricci flow, which completed Hamilton’s program. The enthusiasm for such a marvelous result encouraged more and more researchers to investigate properties and applications of general geometric flows, and the field branched out in various directions, including higher order flows, among which we mention the Willmore flow.

In the last two decades a certain number of authors focused on the one dimensional analog of the Willmore flow (see [26]): the elastic flow of curves and networks. The elastic energy of a regular and sufficiently smooth curve \(\gamma \) is a linear combination of the squared \(L^2\)-norm of the curvature \(\varvec{\kappa }\) and the length, namely

$$\begin{aligned} {\mathcal {E}}\left( \gamma \right) :=\int _{\gamma } \vert \varvec{\kappa }\vert ^{2}+\mu \,\mathrm {d}s, \end{aligned}$$

where \(\mu \ge 0\). In the case of networks (connected sets composed of \(N\in {\mathbb {N}}\) curves that meet at their endpoints in junctions of possibly different orders) the functional is defined in a similar manner: one sums the contributions of the curves (see Definition 2.1). Formally, the elastic flow is the \(L^2\)-gradient flow of the functional \({\mathcal {E}}\) (as we show in Section 2.3) and the solutions of this flow are the object of our interest in the current paper.
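To fix ideas, here is a minimal numerical sketch (our own illustration, not a method from the paper) of the energy \({\mathcal {E}}\) for a closed polygonal curve, where the curvature is approximated by turning angles; the function `elastic_energy` and its discretization are assumptions of this sketch.

```python
import numpy as np

def elastic_energy(gamma, mu=0.0):
    # gamma: (n, 2) array of vertices of a closed polygon.
    # Approximates E(gamma) = int |kappa|^2 + mu ds via turning angles.
    edges = np.roll(gamma, -1, axis=0) - gamma            # edge vectors
    ds = np.linalg.norm(edges, axis=1)                    # edge lengths
    tau = edges / ds[:, None]                             # unit tangents
    tau_next = np.roll(tau, -1, axis=0)
    # signed turning angle between consecutive tangents ~ k * ds
    ang = np.arctan2(tau[:, 0]*tau_next[:, 1] - tau[:, 1]*tau_next[:, 0],
                     np.einsum('ij,ij->i', tau, tau_next))
    dsv = 0.5 * (ds + np.roll(ds, -1))                    # vertex measure
    k = ang / dsv                                         # discrete curvature
    return np.sum(k**2 * dsv) + mu * np.sum(ds)

# unit circle: int k^2 ds = 2*pi and length = 2*pi, so E = 2*pi + mu*2*pi
t = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
E = elastic_energy(circle, mu=1.0)   # approximately 4*pi
```

For the unit circle with \(\mu =1\) the discrete value agrees with \(4\pi \) to several digits already at this resolution.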

To the best of our knowledge the problem was taken into account for the first time by Polden. In his Doctoral Thesis [46, Theorem 3.2.3.1] he proved that, if we take as initial datum a smooth immersion of the circle in the plane, then there exists a smooth solution to the gradient flow problem for all positive times. Moreover, as time goes to infinity, it converges along subsequences to a critical point of the functional (either a circle, or a symmetric figure eight, or a multiple cover of one of these). Polden was also able to prove that if the winding number of the initial curve is \(\pm 1\) (for example if the curve is embedded), then it converges to a unique circle [46, Corollary 3.2.3.3]. In the early 2000s Dziuk, Kuwert and Schätzle generalized the global existence and subconvergence result to \({\mathbb {R}}^n\) and derived an algorithm to treat the flow, computing several numerical examples. Later the analysis was extended to non-closed curves, both with fixed endpoints and with non-compact branches. The problem for networks was first proposed in 2012 by Barrett, Garcke and Nürnberg [7].

Beyond this specific problem, quite a few interesting variants have been studied. For instance, since for a regular \(C^2\) curve \(\gamma :I\rightarrow {\mathbb {R}}^2\) it holds \(\varvec{\kappa }=\partial _s \tau \), where \(\tau \) is the unit tangent vector and \(\partial _s\) denotes the derivative with respect to the arclength parameter s of the curve, we can introduce the tangent indicatrix: a scalar map \(\theta :I\rightarrow {\mathbb {R}}\) such that \(\tau =(\cos \theta ,\sin \theta )\). Then we can write the elastic energy in terms of the angle spanned by the tangent vector. Expressing the corresponding \(L^2\)-gradient flow by means of \(\theta \), one gets another geometric evolution equation. This second order gradient flow was first considered in [54] and then further investigated in [30, 31, 42, 45, 55].
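Indeed, differentiating \(\tau =(\cos \theta ,\sin \theta )\) in arclength one finds (a short computation we include for the reader's convenience)

$$\begin{aligned} \varvec{\kappa }=\partial _s\tau =\partial _s\theta \,(-\sin \theta ,\cos \theta )=(\partial _s\theta )\,\nu , \qquad \text {so that}\qquad {\mathcal {E}}\left( \gamma \right) =\int _{\gamma }(\partial _s\theta )^2+\mu \,\mathrm {d}s, \end{aligned}$$

which makes the second order character of the flow for \(\theta \) apparent.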

Critical points of total squared curvature subject to fixed length are called elasticae, or elastic curves. Notice that for any \(\mu >0\) the elasticae are (up to homothety) exactly the critical points of the energy \({\mathcal {E}}\). Elasticae have been studied since Bernoulli and Euler as the elastic energy was used as a model for the bending energy of an elastic rod [53] and more recently Langer and Singer contributed to their classification [27, 28] (see also [20, 32]).
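As a simple illustration (a formal computation under the smoothness assumptions above), consider a round circle of radius \(R\): the oriented curvature \(k=1/R\) is constant, so the Euler–Lagrange equation \(2\partial _s^2k+k^3-\mu k=0\) of \({\mathcal {E}}\), derived in Section 2.1 below, reduces to

$$\begin{aligned} \frac{1}{R^3}-\frac{\mu }{R}=0, \qquad \text {that is}\qquad R=\frac{1}{\sqrt{\mu }}, \end{aligned}$$

so for every \(\mu >0\) there is exactly one critical radius, reflecting the competition between the curvature term and the length term of the energy.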

The \(L^2\)-gradient flow of \(\int \vert \varvec{\kappa }\vert ^2\,\mathrm {d}s\) when the curve is subject to a fixed length constraint is studied in [12, 13, 21, 49].

It is also worth mentioning results about the Helfrich flow [17, 56], the elastic flow with constraints [25, 43, 44] and other fourth (or higher) order flows [1, 2, 36, 37, 57].

In the following table we collect some contributions on the elastic flow of curves (closed or open) and networks. The first column concerns papers containing detailed proofs of short time existence results: the initial datum can belong to a suitably chosen Sobolev or Hölder space, or the curves can be smooth. In the second column we place the articles that show existence for all positive times or that describe obstructions to such a desired result. When the flow exists globally, it is natural to wonder about the behavior of the solutions as \(t\rightarrow +\infty \); papers that answer this question are in the third column. The ambient space may vary from article to article: it can be \({\mathbb {R}}^2\), \({\mathbb {R}}^n\), or a Riemannian manifold.

 

|                           | Short time existence | Long time behavior | Asymptotic analysis |
| Closed curves             | [46]                 | [21, 46]           | [21, 35, 46, 47]    |
| Open curves, Navier b.c.  | [40]                 | [40]               | [40, 41]            |
| Open curves, clamped b.c. | [52]                 | [29]               | [18, 41]            |
| Non compact curves        |                      | [40]               | [40]                |
| Networks                  | [15, 22, 23]         | [14, 22]           | [14]                |

We refer also to the two recent PhD theses [38, 48].

The aim of this expository paper is to arrange (most of) this material in a unitary form, proving in full detail the results for the elastic flow of closed curves and underlining the differences with the other cases.

For simplicity we restrict to the Euclidean plane as ambient space. In Section 2 we define the flow, deriving the motion equation and the necessary boundary conditions for open curves and networks. In the literature curves that meet at junctions of order at most three are usually considered, while here the order of the junctions is arbitrary.

In Section 3 we show short time existence and uniqueness (up to reparametrizations) for the elastic flow of closed curves, supposing that the initial datum is Hölder-regular (Theorem 3.18). The notion of \(L^2\)-gradient flow gives rise to a fourth order quasilinear parabolic PDE in which the motion in the tangential direction is not specified. To obtain a non-degenerate equation we fix the tangential velocity, obtaining first a special flow (Definition 2.12). We find a unique solution of the special flow (Theorem 3.14) using a standard linearization procedure and a fixed point argument. A key point is then to ensure that solving the special flow is enough to obtain a solution to the original problem. How to overcome this issue is explained in Section 2.4. The short time existence result can be easily adapted to open curves (see Remark 3.15), but presents some extra difficulties in the case of networks, which we explain in Remark 3.16.

One interesting feature following from the parabolic structure of the elastic flow is that solutions are smooth for any (strictly) positive time. We sketch two possible lines of proof of this fact and refer to [15, 22] for the complete result.

Section 4 is devoted to the proof that the flow of either closed or open curves with fixed endpoints exists globally in time (Theorem 4.15). The situation for networks is more delicate: it depends on the evolution of the lengths of the curves composing the network and on the angles formed by the tangent vectors of the curves concurring at the junctions (Theorem 4.18).

In Section 5 we first show that, as time goes to infinity, the solutions of the elastic flow of closed curves converge along subsequences to stationary points of the elastic energy, up to translations and reparametrizations. We shall refer to this phenomenon as the subconvergence of the flow. We then discuss how the subconvergence can be promoted to full convergence of the flow, namely to the existence of the full asymptotic limit as \(t\rightarrow +\infty \) of the evolving flow, up to reparametrizations (Theorem 5.4). The proof is based on the derivation and application of a Łojasiewicz–Simon gradient inequality for the elastic energy.

We conclude the paper with a list of open problems.

2 The Elastic Flow

A regular curve \(\gamma \) is a continuous map \(\gamma :[a,b]\rightarrow {{{\mathbb {R}}}}^2\) which is differentiable on \((a,b)\) and such that \(\vert \partial _x\gamma \vert \) never vanishes on \((a,b)\). Without loss of generality, from now on we consider \([a,b]=[0,1]\).

In the sequel we will abuse the word “curve”, using it to refer to the parametrization of a curve, to its equivalence class of reparametrizations, or to its support in \({\mathbb {R}}^2\).

We denote by s the arclength parameter and we will pass to the arclength parametrization of the curves when it is more convenient without further mentioning. We will also extensively use the arclength measure \(\mathrm {d}s\) when integrating with respect to the volume element \(\mu _g\) on [0, 1] induced by a regular rectifiable curve \(\gamma \), namely, given a \(\mu _g\)-integrable function f on [0, 1] it holds

$$\begin{aligned} \int _{[0,1]}f\,\mathrm {d}\mu _g=\int _0^1f(x)\vert \partial _x\gamma (x)\vert \,\mathrm {d}x =\int _0^{\ell (\gamma )}f(x(s))\,\mathrm {d}s=:\int _{\gamma }f\,\mathrm {d}s, \end{aligned}$$

where \(\ell (\gamma )\) is the length of the curve \(\gamma \).
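As a quick numerical illustration (our own sketch; the curve \(\gamma (x)=(x,x^2)\) is chosen only as an example), the middle expression above can be evaluated by quadrature with \(f\equiv 1\) and compared with the exact length:

```python
import numpy as np

# int_0^1 f(x) |d_x gamma(x)| dx with f = 1 gives the length of gamma.
# Example curve (an assumption of this sketch): gamma(x) = (x, x^2),
# whose exact length is sqrt(5)/2 + arcsinh(2)/4.
x = np.linspace(0.0, 1.0, 100001)
gamma = np.column_stack([x, x**2])
dx = np.diff(x)
# |d_x gamma| evaluated at midpoints via finite differences
speed_mid = np.linalg.norm(np.diff(gamma, axis=0), axis=1) / dx
length = np.sum(speed_mid * dx)           # midpoint quadrature
exact = np.sqrt(5.0)/2 + np.arcsinh(2.0)/4
```

The quadrature matches the closed-form value to high accuracy at this resolution.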

Definition 2.1

A planar network \({\mathcal {N}}\) is a connected set in \({\mathbb {R}}^2\) given by a finite union of images of regular curves \(\gamma ^i:[0,1]\rightarrow {{{\mathbb {R}}}}^2\) that may have endpoints of order one fixed in the plane and that meet at junctions of possibly different orders \(m\in {\mathbb {N}}_{\ge 2}\).

The order of a junction \(p\in {{{\mathbb {R}}}}^2\) is the number \(\sum _i \sharp \left( (\gamma ^i)^{-1}(p)\cap \{0,1\}\right) \).

As special cases of networks we find:

  • a single curve (either closed or not);

  • a network of three curves whose endpoints meet at two different triple junctions (the so-called Theta);

  • a network of three curves with one common endpoint at a triple junction and the other three endpoints of order one (the so-called Triod).

Notice that when it is more convenient, we will parametrize a closed curve as a map \(\gamma :{\mathbb {S}}^1\rightarrow {\mathbb {R}}^2\).

In order to calculate the integral of an N-tuple \(f=(f^1,\ldots ,f^N)\) of functions along the network \({\mathcal {N}}\) composed of the N curves \(\gamma ^i\) we adopt the notation

$$\begin{aligned} \int _{{\mathcal {N}}} f\,\mathrm {d}s:=\sum _{i=1}^{N}\int _{\gamma ^i}^{}f_{|\gamma ^i}\,\mathrm {d}s =\sum _{i=1}^N\int _0^1 f^i\vert \partial _x\gamma ^i\vert \,\mathrm {d}x. \end{aligned}$$

If \(\mu =(\mu ^1, \ldots ,\mu ^N)\) with \(\mu ^i\ge 0\), then the notation \(\int _{{\mathcal {N}}} \mu f\,\mathrm {d}s\) stands for \(\sum _{i=1}^N\int _0^1 \mu ^i f^i\vert \partial _x\gamma ^i\vert \,\mathrm {d}x\).

Let \(\gamma :[0,1]\rightarrow {\mathbb {R}}^2\) be a regular curve and \(f:(0,1)\rightarrow {\mathbb {R}}\) a Lebesgue measurable function. For \(p\in [1,\infty )\) we define

$$\begin{aligned} \Vert f\Vert ^p_{L^p(\mathrm {d}s)}:=\int _\gamma \vert f\vert ^p\,\mathrm {d}s =\int _0^1\vert f(x)\vert ^p\vert \partial _x\gamma (x)\vert \,\mathrm {d}x \end{aligned}$$

and

$$\begin{aligned} L^p(\mathrm {d}s):=\left\{ f:(0,1)\rightarrow {\mathbb {R}}\;\text {Lebesgue measurable with}\; \Vert f\Vert ^p_{L^p(\mathrm {d}s)}<+\infty \right\} . \end{aligned}$$

We will also use the \(L^\infty \)-norm

$$\begin{aligned} \Vert f\Vert _{L^\infty (\mathrm {d}s)}:= \mathop {\mathrm {ess\,sup}}_{(0,1)} \,\vert f\vert . \end{aligned}$$

Whenever we are considering continuous functions, we identify the supremum norm with the \(L^\infty \) norm and denote it by \(\left\Vert \cdot \right\Vert _\infty \).

We remark here that for sake of notation we will simply write \(\Vert \cdot \Vert _{L^p}\) instead of \(\Vert \cdot \Vert _{L^p({\,\mathrm {d}}s)}\) both for \(p\in [1,\infty )\) and \(p=\infty \) whenever there is no risk of confusion.

We will analogously write

$$\begin{aligned} \Vert f\Vert _{L^p}:=\sum _{i=1}^{N}\Vert f^i\Vert _{L^p(\mathrm {d}s)}\quad \text {for all}\; p\in [1,\infty )\quad \text {and}\quad \Vert f\Vert _{L^\infty }:=\sum _{i=1}^{N}\Vert f^i\Vert _ {L^\infty (\mathrm {d}s)}, \end{aligned}$$

for an N-tuple of functions f along a network \({\mathcal {N}}\).

Assuming that \(\gamma ^i\) is of class \(H^2\), we denote by \(\varvec{\kappa }^i:=\partial _s^2\gamma ^i\) the curvature vector of the curve \(\gamma ^i\), which is defined at almost every point, and the curvature is nothing but \(\kappa ^i:=\vert \varvec{\kappa }^i\vert \). We recall that in the plane we can write the curvature vector as \(\varvec{\kappa }^i=k^i \nu ^i\), where \(\nu ^i\) is the counterclockwise rotation by \(\frac{\pi }{2}\) of the unit tangent vector \(\tau ^i:=\vert \partial _x\gamma ^i\vert ^{-1}(\partial _x\gamma ^i)\) of the curve \(\gamma ^i\), and then \(k^i\) is the oriented curvature.
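The decomposition \(\varvec{\kappa }=k\,\nu \) is easy to check numerically. The following sketch (our own; the circle is an assumed test curve) computes the oriented curvature of a counterclockwise circle of radius R, for which \(k=1/R\):

```python
import numpy as np

R = 2.0
x = np.linspace(0.0, 1.0, 2001)
gamma = np.column_stack([R*np.cos(2*np.pi*x), R*np.sin(2*np.pi*x)])

dgamma = np.gradient(gamma, x, axis=0)                # d_x gamma
speed = np.linalg.norm(dgamma, axis=1)                # |d_x gamma|
tau = dgamma / speed[:, None]                         # unit tangent
nu = np.column_stack([-tau[:, 1], tau[:, 0]])         # tau rotated by +pi/2
kappa = np.gradient(tau, x, axis=0) / speed[:, None]  # d_s tau = curvature vector
k = np.einsum('ij,ij->i', kappa, nu)                  # oriented curvature
```

Away from the parametrization endpoints (where `np.gradient` uses one-sided differences) the computed values agree with \(1/R=0.5\) to high accuracy.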

Definition 2.2

Let \(\mu ^i \ge 0\) be fixed for \(i\in \{1, \ldots , N\}\). The elastic energy functional \({\mathcal {E}}_\mu \) of a network \({\mathcal {N}}\) given by N curves \(\gamma ^i\) of class \(H^2\) is defined by

$$\begin{aligned} {\mathcal {E}}_\mu \left( {\mathcal {N}}\right) := \int _{{\mathcal {N}}} \vert \varvec{\kappa }\vert ^{2}\,\mathrm {d}s +\mu \, \mathrm {L}({\mathcal {N}}) :=\sum _{i=1}^N\left( \int _{\gamma ^{i}} (k^i)^{2} \,\mathrm {d}s +\mu ^i \, \ell (\gamma ^i) \right) , \end{aligned}$$
(2.1)

where \(\mu \, \mathrm {L}({\mathcal {N}}):=\sum _{i=1}^N\mu ^i\,\ell (\gamma ^i)\) is called the weighted global length of the network \({\mathcal {N}}\).

2.1 First Variation of the Elastic Energy

The computation of the first variation has been carried out several times in full detail in the literature, both in the setting of closed curves and of networks. We refer for instance to [7, 35].

Let \(N\in {\mathbb {N}}\), \(i\in \{1,\ldots ,N\}\). Consider a network \({\mathcal {N}}\) composed of N curves, parametrized by maps \(\gamma ^i:[0,1]\rightarrow {\mathbb {R}}^2\) of class \(H^4\). In order to compute the first variation of the energy we can suppose that the curves meet at one junction, which is of order N, and that \(\gamma ^i(1)\) is some fixed point in \({{{\mathbb {R}}}}^2\) for any i. That is

$$\begin{aligned} \gamma ^1(0)=\cdots =\gamma ^N(0), \quad \gamma ^i(1)=P^i\in {\mathbb {R}}^2. \end{aligned}$$

The case of networks with other possible topologies can be easily deduced from the presented one. We consider a variation \(\gamma ^i_\varepsilon =\gamma ^i+\varepsilon \psi ^i\) of each curve \(\gamma ^i\) of \({\mathcal {N}}\) with \(\varepsilon \in {\mathbb {R}}\) and \(\psi ^i:[0,1]\rightarrow {\mathbb {R}}^2\) of class \(H^2\). We denote by \({\mathcal {N}}_\varepsilon \) the network composed of the curves \(\gamma ^i_\varepsilon \), which are regular whenever \(|\varepsilon |\) is small enough. We need to impose that the structure of the network \({\mathcal {N}}\) is preserved in the variation: we want the network \({\mathcal {N}}_\varepsilon \) to still have one junction of order N and we want to preserve the position of the other endpoints \(\gamma ^i_\varepsilon (1)=P^i\). To this aim we require

$$\begin{aligned} \psi ^1(0)=\cdots =\psi ^N(0),\quad \psi ^i(1)=0 \quad \forall \,i\in \{1,\ldots ,N\}. \end{aligned}$$

By definition of the elastic energy functional of networks, we have

$$\begin{aligned} {\mathcal {E}}_\mu ({\mathcal {N}}_\varepsilon ) =\sum _{i=1}^N\int _{\gamma ^i_\varepsilon }(k^i_\varepsilon )^2+\mu ^i\,\mathrm {d}s =\sum _{i=1}^N\int _{\gamma ^i_\varepsilon }\vert \varvec{\kappa }^i_\varepsilon \vert ^2 +\mu ^i\,\mathrm {d}s. \end{aligned}$$

We introduce the operator \(\partial _s^\perp \) (that acts on a vector field \(\varphi \)) defined as the normal component of \(\partial _s\varphi \) along the curve \(\gamma \), that is \(\partial _s^\perp \varphi =\partial _s\varphi -\left\langle \partial _s\varphi ,\partial _s\gamma \right\rangle \partial _s\gamma \). Then a direct computation yields the following identities:

$$\begin{aligned} \begin{aligned} \partial _\varepsilon {\,\mathrm {d}}s_\varepsilon&= \langle \partial _s\psi ^i,\tau ^i_\varepsilon \rangle {\,\mathrm {d}}s_\varepsilon = \left( \partial _s\langle \psi ^i,\tau ^i_\varepsilon \rangle - \langle \psi ^i,\varvec{\kappa }^i_\varepsilon \rangle \right) {\,\mathrm {d}}s_\varepsilon , \\ \partial _\varepsilon \partial _s - \partial _s\partial _\varepsilon&= \left( \langle \varvec{\kappa }^i_\varepsilon , \psi ^i\rangle - \partial _s \langle \tau ^i_\varepsilon , \psi ^i\rangle \right) \partial _s,\\ \partial _\varepsilon \tau ^i_\varepsilon&= \partial _s^\perp (\psi ^i)^\perp + \langle \tau ^i_\varepsilon , \psi ^i\rangle \varvec{\kappa }^i_\varepsilon , \\ \partial _\varepsilon \varvec{\kappa }^i_\varepsilon&= (\partial _s^\perp )^2 (\psi ^i)^\perp - \langle \partial _s^\perp (\psi ^i)^\perp , \varvec{\kappa }^i_\varepsilon \rangle \tau ^i_\varepsilon + \langle \tau ^i_\varepsilon , \psi ^i\rangle \partial _s\varvec{\kappa }^i_\varepsilon + \langle \varvec{\kappa }^i_\varepsilon ,\psi ^i\rangle \varvec{\kappa }^i_\varepsilon , \end{aligned} \end{aligned}$$
(2.2)

for any i on (0, 1), where s is the arclength parameter of \(\gamma _\varepsilon \) for any \(\varepsilon \). Therefore, evaluating at \(\varepsilon =0\), we obtain

$$\begin{aligned} \frac{d}{d\varepsilon }{\mathcal {E}}_\mu ({\mathcal {N}}_\varepsilon )\Big \vert _{\varepsilon =0}&=\sum _{i=1}^{N} \left[ \int _{\gamma ^i} 2 \langle \varvec{\kappa }^{i}, \partial ^2_{s} \psi ^{i} \rangle \, \mathrm {d}s + \int _{\gamma ^i} (-3 |\varvec{\kappa }^i|^2+\mu ^i) \left\langle \tau ^{i},\partial _s\psi ^{i}\right\rangle \,\mathrm {d}s \right] . \end{aligned}$$
(2.3)

Moreover, recalling that \(\partial _s^\perp (\cdot ) =\partial _s(\cdot )-\langle \partial _s (\cdot ),\tau \rangle \tau \), we have

$$\begin{aligned} \partial _s \varvec{\kappa }^i&=\partial ^\perp _s \varvec{\kappa }^i- |\varvec{\kappa }^i|^2\tau ^i,\\ \partial _s^2 \varvec{\kappa }^{i}&= (\partial _s^\perp )^2 \varvec{\kappa }^{i} -3 \langle \partial _s \varvec{\kappa }^i, \varvec{\kappa }^i\rangle \partial _s \gamma ^i - |\varvec{\kappa }^i|^2 \varvec{\kappa }^i, \end{aligned}$$

then, using these identities and integrating (2.3) by parts twice, one gets

$$\begin{aligned} \frac{d}{d\varepsilon }{\mathcal {E}}_\mu ({\mathcal {N}}_\varepsilon ) \Big \vert _{\varepsilon =0}&=\sum _{i=1}^{N} \int _{\gamma ^i} \left\langle 2 (\partial ^\perp _s)^2 \varvec{\kappa }^{i}+ |\varvec{\kappa }^i|^2 \varvec{\kappa }^{i} -\mu ^i \varvec{\kappa }^{i}, \psi ^{i} \right\rangle \,\mathrm {d}s\nonumber \\&\quad +\sum _{i=1}^{N} \left[ 2 \left. \langle \varvec{\kappa }^{i}, \partial _s \psi ^{i}\rangle \right| _0^1 + \left. \langle -2\partial ^\perp _s \varvec{\kappa }^{i} - |\varvec{\kappa }^i|^2\tau ^i+\mu ^i\tau ^i, \psi ^{i}\rangle \right| _0^1 \right] \end{aligned}$$
(2.4)
$$\begin{aligned}&=\sum _{i=1}^{N} \int _{\gamma ^i} \left\langle 2 (\partial ^\perp _s)^2 \varvec{\kappa }^{i}+ |\varvec{\kappa }^i|^2 \varvec{\kappa }^{i} -\mu ^i \varvec{\kappa }^{i}, \psi ^{i} \right\rangle \,\mathrm {d}s \nonumber \\&\quad +\sum _{i=1}^{N} 2 \langle \varvec{\kappa }^{i}(1), \partial _s \psi ^{i}(1)\rangle - 2 \langle \varvec{\kappa }^{i}(0), \partial _s \psi ^{i}(0)\rangle \nonumber \\&\quad + \left\langle \left( \sum _{i=1}^{N}-2\partial ^\perp _s \varvec{\kappa }^{i}(0) - \vert \varvec{\kappa }^i(0)\vert ^2\tau ^i(0)+\mu ^i\tau ^i(0)\right) , \psi ^{1}(0)\right\rangle . \end{aligned}$$
(2.5)

As we chose arbitrary fields \(\psi ^i\), we can split \(\partial _s\psi ^i\) into normal and tangential components as

$$\begin{aligned} \partial _{s}\psi ^{i}&=\partial _{s}^\perp \psi ^{i}+\partial _{s}^\parallel \psi ^{i} =\left\langle \partial _{s}\psi ^i,\nu ^i\right\rangle \nu ^i +\left\langle \partial _{s}\psi ^i,\tau ^i\right\rangle \tau ^i =:\left( \psi ^i_{s}\right) ^\perp \nu ^i+\left( \psi ^i_{s}\right) ^\parallel \tau ^i. \end{aligned}$$

This allows us to write

$$\begin{aligned} \left\langle \varvec{\kappa }^{i}, \partial _s \psi ^{i}\right\rangle&= \left\langle k^i\nu ^i, \left( \psi ^i_{s}\right) ^\perp \nu ^i+\left( \psi ^i_{s}\right) ^\parallel \tau ^i\right\rangle =k^i \left( \psi ^i_{s}\right) ^\perp , \end{aligned}$$

and we can then partially reformulate the first variation in terms of the oriented curvature and its derivatives:

$$\begin{aligned}&\frac{d}{d\varepsilon }{\mathcal {E}}_\mu ({\mathcal {N}}_\varepsilon ) \Big \vert _{\varepsilon =0} =\sum _{i=1}^{N} \int _{\gamma ^i} \left( 2\partial _ s^2 k^{i} +(k^i)^3-\mu ^i k^i\right) \left( \psi ^i\right) ^\perp \,\mathrm {d}s\nonumber \\&\quad +2\sum _{i=1}^N \left. k^i \left( \psi ^i_{s}\right) ^\perp \right| _0^1 + \left\langle \left( \sum _{i=1}^{N}-2\partial ^\perp _s \varvec{\kappa }^{i}(0) - \vert \varvec{\kappa }^i(0)\vert ^2\tau ^i(0)+\mu ^i\tau ^i(0)\right) , \psi ^{1}(0)\right\rangle . \end{aligned}$$
(2.6)
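Formula (2.6) can also be tested numerically. In the following sketch (our own; the polygonal discretization is an assumption of the example) the curve is the unit circle with \(\mu =0\), where (2.6) predicts \(\tfrac{d}{d\varepsilon }{\mathcal {E}}(\gamma +\varepsilon \nu )\big \vert _{\varepsilon =0}=\int _\gamma (2\partial _s^2k+k^3)\left\langle \nu ,\nu \right\rangle \,\mathrm {d}s=2\pi \):

```python
import numpy as np

def bending_energy(gamma):
    # discrete int k^2 ds for a closed polygon (turning-angle curvature)
    edges = np.roll(gamma, -1, axis=0) - gamma
    ds = np.linalg.norm(edges, axis=1)
    tau = edges / ds[:, None]
    tau_next = np.roll(tau, -1, axis=0)
    ang = np.arctan2(tau[:, 0]*tau_next[:, 1] - tau[:, 1]*tau_next[:, 0],
                     np.einsum('ij,ij->i', tau, tau_next))
    dsv = 0.5 * (ds + np.roll(ds, -1))
    return np.sum((ang / dsv)**2 * dsv)

n = 2000
t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
gamma = np.column_stack([np.cos(t), np.sin(t)])
nu = -gamma                    # counterclockwise rotation of tau by pi/2
h = 1e-5
# central finite difference of eps -> E(gamma + eps*nu) at eps = 0
dE = (bending_energy(gamma + h*nu) - bending_energy(gamma - h*nu)) / (2*h)
# (2.6) with mu = 0 and k = 1 predicts dE = 2*pi on the unit circle
```

The finite-difference derivative matches the predicted value \(2\pi \) to several digits.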

2.2 Second Variation of the Elastic Energy

In this part we compute the second variation of the elastic energy functional \({\mathcal {E}}_\mu \). We are interested only in showing its structure and analyzing some of its properties, rather than computing it explicitly (for the full formula of the second variation we refer to [18, 47]). In fact, we will exploit the properties of the second variation only in the proof of the smooth convergence of the elastic flow of closed curves in Section 5. In particular, we will not need to carry over boundary terms in the next computations.

Let \(\gamma :(0,1)\rightarrow {{{\mathbb {R}}}}^2\) be a smooth curve and let \(\psi :(0,1)\rightarrow {{{\mathbb {R}}}}^2\) be a vector field in \(H^4(0,1)\cap C^0_c(0,1)\), that is, \(\psi \) identically vanishes outside a compact set contained in (0, 1). In this setting, we can think of \(\gamma \) as a parametrization of an arc of a network or of a closed curve. We are interested in the second variation

$$\begin{aligned} \frac{d^2}{d\varepsilon ^2} {\mathcal {E}}_\mu (\gamma +\varepsilon \psi )\Big \vert _{\varepsilon =0}. \end{aligned}$$

By (2.4) we have

$$\begin{aligned} \begin{aligned} \frac{d^2}{d\varepsilon ^2} {\mathcal {E}}_\mu (\gamma +\varepsilon \psi )\Big \vert _{\varepsilon =0}&= \frac{d}{d\varepsilon }\Big \vert _{\varepsilon =0} \int _{\gamma _\varepsilon } \left\langle 2(\partial _s^\perp )^2\varvec{\kappa }_\varepsilon + |\varvec{\kappa }_\varepsilon |^2 \varvec{\kappa }_\varepsilon - \mu \varvec{\kappa }_\varepsilon , \psi \right\rangle {\,\mathrm {d}}s_\varepsilon , \end{aligned} \end{aligned}$$

where \(\varvec{\kappa }_\varepsilon \) is the curvature vector of \(\gamma _\varepsilon =\gamma +\varepsilon \psi \), for any \(\varepsilon \) sufficiently small.

We further assume that \(\gamma \) is a critical point for \({\mathcal {E}}_\mu \) and that \(\psi \) is normal along \(\gamma \). Then

$$\begin{aligned} \frac{d^2}{d\varepsilon ^2} {\mathcal {E}}_\mu (\gamma +\varepsilon \psi )\Big \vert _{\varepsilon =0} = \int _\gamma \left\langle \partial _\varepsilon \big \vert _{\varepsilon =0} \left( 2(\partial _s^\perp )^2\varvec{\kappa }_\varepsilon + |\varvec{\kappa }_\varepsilon |^2 \varvec{\kappa }_\varepsilon - \mu \varvec{\kappa }_\varepsilon \right) , \psi \right\rangle {\,\mathrm {d}}s. \end{aligned}$$

Using (2.2), if \(\phi _\varepsilon \) is a normal vector field along \(\gamma _\varepsilon \) for any \(\varepsilon \) and we denote \(\phi :=\phi _0\), a direct computation shows that

$$\begin{aligned} \partial _\varepsilon |_{_{\varepsilon =0}} \partial _s^\perp \phi _\varepsilon - \partial _s^\perp \partial _\varepsilon |_{_{\varepsilon =0}} \phi _\varepsilon = \langle \psi , \varvec{\kappa }\rangle \partial _s^\perp \phi - \langle \partial _s^\perp \phi , \partial _s^\perp \psi \rangle \tau + \langle \phi ,\varvec{\kappa }\rangle \partial _s^\perp \psi . \end{aligned}$$

Hence \(\partial _\varepsilon |_{_{\varepsilon =0}} (\partial _s^\perp )^2 \varvec{\kappa }_\varepsilon \) can be computed by applying the above commutation rule twice, first with \(\phi _\varepsilon = \partial _s^\perp \varvec{\kappa }_\varepsilon \) and then with \(\phi _\varepsilon = \varvec{\kappa }_\varepsilon \). One easily obtains

$$\begin{aligned} \partial _\varepsilon |_{_{\varepsilon =0}} (\partial _s^\perp )^2 \varvec{\kappa }_\varepsilon = (\partial _s^\perp )^4 \psi + \Omega (\psi ), \end{aligned}$$

where \(\Omega (\psi )\in L^2(\mathrm {d}s)\) is a normal vector field along \(\gamma \), depending only on \(k,\psi \) and their “normal derivatives” \(\partial _s^\perp \) up to the third order. Moreover the dependence of \(\Omega \) on \(\psi \) is linear. For further details on these computations we refer to [35].

Using (2.2) it is immediate to check that \(\partial _\varepsilon |_{_{\varepsilon =0}} \left( |\varvec{\kappa }_\varepsilon |^2 \varvec{\kappa }_\varepsilon - \mu \varvec{\kappa }_\varepsilon \right) \) yields terms that can be absorbed in \(\Omega (\psi )\). Therefore we conclude that

$$\begin{aligned} \frac{d^2}{d\varepsilon ^2} {\mathcal {E}}_\mu (\gamma +\varepsilon \psi )\Big \vert _{\varepsilon =0} = \int _\gamma \left\langle 2 (\partial _s^\perp )^4\psi + \Omega (\psi ) , \psi \right\rangle {\,\mathrm {d}}s. \end{aligned}$$

By polarization, we see that the second variation of \({\mathcal {E}}_\mu \) defines a bilinear form \(\delta ^2{\mathcal {E}}_\mu (\varphi ,\psi )\) given by

$$\begin{aligned} \delta ^2{\mathcal {E}}_\mu (\varphi ,\psi ) = \int _\gamma \left\langle 2 (\partial _s^\perp )^4\varphi + \Omega (\varphi ) , \psi \right\rangle {\,\mathrm {d}}s, \end{aligned}$$

for any normal vector fields \(\varphi ,\psi \) of class \(H^4\cap C^0_c\) along \(\gamma \), where \(\gamma \) is a smooth critical point of \({\mathcal {E}}_\mu \).

2.3 Definition of the Flow

In this section we define the elastic flow for curves and networks. We formally derive it as the \(L^2\)-gradient flow of the elastic energy functional (2.1), and to this end we need to derive the normal velocity defining the flow. The reasons why a gradient flow is defined in terms of a normal velocity are related to the invariance of the energy functional under reparametrization; we will come back to this point in more depth in Section 2.4.

The analysis of the boundary terms appearing in the computation of the first variation plays an important role in the definition of the flow. Indeed, a correct definition of the flow relies on the fact that the velocity defining the evolution should be the opposite of the “gradient” of the energy. Hence we need to identify such a gradient from the formula of the first variation and, in turn, analyze the boundary terms appearing there.

Suppose first that the network is composed only of one closed curve \(\gamma \in C^{\infty }([0,1],{\mathbb {R}}^2)\). This means that for every \(k\in {\mathbb {N}}\) we have \(\partial _x^k\gamma (0)=\partial _x^k\gamma (1)\) and \(\gamma \) can be seen as a smooth periodic function on \({{{\mathbb {R}}}}\). Then a variation field \(\psi \) is just a periodic function as well, no further boundary constraints are needed, and the boundary terms in (2.4) are automatically zero. Hence (2.5) reduces to

$$\begin{aligned} \frac{d}{d\varepsilon }{\mathcal {E}}_\mu (\gamma _\varepsilon )_{|\varepsilon =0} =\int _{\gamma } \left\langle 2 (\partial ^\perp _s)^2 \varvec{\kappa }+ |\varvec{\kappa }|^2 \varvec{\kappa } -\mu \varvec{\kappa }, \psi \right\rangle \,\mathrm {d}s . \end{aligned}$$

We have formally written the directional derivative of \({\mathcal {E}}_\mu \) of each curve in the direction \(\psi \) as the \(L^2\)-scalar product of \(\psi \) and the vector \(2 (\partial ^\perp _s)^2 \varvec{\kappa } +|\varvec{\kappa }|^2 \varvec{\kappa } -\mu \varvec{\kappa }\). Hence we can understand \(2 (\partial ^\perp _s)^2 \varvec{\kappa } +|\varvec{\kappa }|^2 \varvec{\kappa } -\mu \varvec{\kappa }\) to be the gradient of \({\mathcal {E}}_\mu \). We then set the normal velocity driving the flow to be the opposite of such a gradient, that is

$$\begin{aligned} (\partial _t\gamma )^\perp =-2 (\partial ^\perp _s)^2 \varvec{\kappa } -|\varvec{\kappa }|^2 \varvec{\kappa } +\mu \varvec{\kappa }, \end{aligned}$$
(2.7)

where, again, \((\cdot )^\perp \) denotes the normal component of the velocity \(\partial _t\gamma \) of the curve \(\gamma \):

$$\begin{aligned} (\partial _t\gamma )^\perp =\partial _t\gamma -\left\langle \partial _t\gamma ,\tau \right\rangle \tau . \end{aligned}$$

In \({\mathbb {R}}^2\) it is possible to express the evolution equation in terms of the scalar curvature:

$$\begin{aligned} \left\langle \partial _t\gamma ,\nu \right\rangle \nu =(\partial _t\gamma )^\perp =-2 (\partial ^\perp _s)^2 \varvec{\kappa } -|\varvec{\kappa }|^2 \varvec{\kappa }+\mu \varvec{\kappa } =-\left( 2\partial _ s^2 k +k^3-\mu k\right) \nu . \end{aligned}$$

This last equality can be directly deduced from (2.6). In this way we have derived an equation that describes the normal motion of each curve.
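As a sanity check (a formal computation, reported here only as an illustration), consider a family of counterclockwise round circles of radius \(R(t)\) centered at the origin. Then \(k=1/R\), \(\partial _s^2k=0\), the normal \(\nu \) points inward, and \(\left\langle \partial _t\gamma ,\nu \right\rangle =-R'\), so the evolution law (2.7) reduces to the ODE

$$\begin{aligned} R'(t)=\frac{1}{R(t)^3}-\frac{\mu }{R(t)}. \end{aligned}$$

For \(\mu =0\) circles expand with \(R(t)=\left( R(0)^4+4t\right) ^{1/4}\), while for \(\mu >0\) the radius \(R=1/\sqrt{\mu }\) is stationary, consistently with the critical points mentioned in the Introduction.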

We pass now to consider, exactly as in Section 2.1, a network composed of N curves, parametrized by \(\gamma ^i:[0,1]\rightarrow {\mathbb {R}}^2\) with \(i\in \{1,\ldots ,N\}\), that meet at one junction of order N at \(x=0\) and have the endpoints at \(x=1\) fixed in \({\mathbb {R}}^2\). We denote by \({\mathcal {N}}_\varepsilon \) the network composed of the curves \(\gamma ^i_\varepsilon =\gamma ^i+\varepsilon \psi ^i\) with \(\psi ^i:[0,1]\rightarrow {\mathbb {R}}^2\) such that

$$\begin{aligned} \psi ^1(0)=\cdots =\psi ^N(0),\quad \psi ^i(1)=0\quad \forall \,i\in \{1,\ldots ,N\}. \end{aligned}$$

Since the energy of a network is defined as the sum of the energies of the single curves, it is reasonable to define the gradient of \({\mathcal {E}}_\mu \) as the sum of the gradients of the energy of each curve composing the network, which we have identified with the vectors \(2 (\partial ^\perp _s)^2 \varvec{\kappa }^i +|\varvec{\kappa }^i|^2 \varvec{\kappa }^i-\mu ^i \varvec{\kappa }^i\). Hence a network is a critical point of the energy when the vectors \(2 (\partial ^\perp _s)^2 \varvec{\kappa }^i +|\varvec{\kappa }^i|^2 \varvec{\kappa }^i-\mu ^i \varvec{\kappa }^i\) vanish and the boundary terms in (2.5) are zero. Depending on the boundary constraints imposed on the network, i.e., its topology or possible fixed endpoints, we now aim to characterize the set of networks fulfilling boundary conditions that imply

$$\begin{aligned} \sum _{i=1}^{N} \left[ 2 \left. \langle \varvec{\kappa }^{i}, \partial _s \psi ^{i}\rangle \right| _0^1 + \left. \langle -2\partial ^\perp _s \varvec{\kappa }^{i} - |\varvec{\kappa }^i|^2\tau ^i+\mu ^i\tau ^i, \psi ^{i}\rangle \right| _0^1 \right] =0. \end{aligned}$$

Let us discuss the main possible cases of boundary conditions separately.

Curve with constraints at the endpoints

As we have mentioned before, if the network is composed of one curve, but this curve is not closed, then we fix its endpoints, namely \(\gamma (0)=P\in {\mathbb {R}}^2\) and \(\gamma (1)=Q\in {\mathbb {R}}^2\). As already shown in Section 2.1, to maintain the position of the endpoints we require \(\psi (0)=\psi (1)=0\), which automatically implies

$$\begin{aligned} \left. \langle -2\partial ^\perp _s \varvec{\kappa }^{i} - |\varvec{\kappa }^i|^2\tau ^i+\mu ^i\tau ^i, \psi ^{i}\rangle \right| _0^1 =0, \end{aligned}$$

in the computation of the first variation. On the other hand we are free to choose \(\partial _s\psi \) as test fields in the first variation. Suppose for example that \(\partial _s\psi (0)=\nu \) (where \(\nu \) is the unit normal vector to the curve \(\gamma \)) and \(\partial _s\psi (1)=0\); then from (2.5) we obtain \(k(0)=0\) and so \(\varvec{\kappa }(0)=0\). Interchanging the roles of \(\partial _s\psi (0)\) and \(\partial _s\psi (1)\) we get \(k(1)=0\), hence \(\varvec{\kappa }(1)=0\).

Hence we end up with the following set of conditions

$$\begin{aligned} {\left\{ \begin{array}{ll} \gamma (0)=P\\ \gamma (1)=Q\\ \varvec{\kappa }(0)=\varvec{\kappa }(1)=0, \end{array}\right. } \end{aligned}$$

known in the literature as natural or Navier boundary conditions.

However, since the elastic energy is a second order functional, it is legitimate to impose also that the unit tangent vectors at the endpoints of the curve are fixed, namely that the curve is clamped. Hence we now have \(\gamma (0)=P, \gamma (1)=Q, \tau (0)=\tau _0,\tau (1)=\tau _1\) as constraints. This time the boundary conditions affect the class of test functions, requiring \(\partial _s\psi (0)=\partial _s\psi (1)=0\), which, together with \(\psi (0)=\psi (1)=0\), automatically sets the boundary terms in (2.5) to zero.

Networks

We can assume without loss of generality that the structure of the network is as described in Section 2.1. Indeed, the boundary conditions for other possible topologies can easily be deduced from this case.

The possible boundary conditions at \(x=1\) are exactly those just described for a single curve with constraints at the endpoints. Thus we focus on the junction \(O=\gamma ^1(0)=\cdots =\gamma ^N(0)\). We can distinguish two subcases.

Neumann (so-called natural or Navier) boundary conditions

In this case we only require that the network does not change its topology under the first variation. Letting first \(\psi ^i(0)=0\) for every i, there remains the boundary term

$$\begin{aligned} \sum _{i=1}^{N} \langle \varvec{\kappa }^{i}(0), \partial _s \psi ^{i}(0)\rangle =0, \end{aligned}$$

where the test functions \(\psi ^i\) appear differentiated. We can choose \(\partial _s\psi ^{1}(0)=\nu ^1(0)\) and \(\partial _s\psi ^i(0)=0\) for every \(i\in \{2,\ldots ,N\}\). This implies \(\varvec{\kappa }^1(0)=0\). Then, because of the arbitrariness of the choice of i we obtain:

$$\begin{aligned} \varvec{\kappa }^i(0)=0, \end{aligned}$$
(2.8)

for any \(i\in \{1, \ldots , N\}\).

It remains to consider the last term of (2.5). Taking into account the condition (2.8) just obtained, by the arbitrariness of \(\psi ^1(0)=\cdots =\psi ^N(0)\) it reads

$$\begin{aligned} \sum _{i=1}^{N} \left( -2\partial ^\perp _s \varvec{\kappa }^{i}(0) +\mu ^i\tau ^i(0)\right) =0. \end{aligned}$$

Dirichlet (so-called clamped) boundary conditions

As discussed above, also in the case of a network we can impose a condition on the tangents of the curves at their endpoints. As we saw in the clamped curve case, from the variational point of view this extra condition involves the unit tangent vectors. Hence an extra property of \(\partial _{s}\psi ^{i}\) is expected.

At the junction we require the following \((N-1)\) conditions:

$$\begin{aligned} \left\langle \tau ^{i_1}(0),\tau ^{i_2}(0)\right\rangle =c^{1,2}, \ldots ,\, \left\langle \tau ^{i_{N-1}}(0),\tau ^{i_N}(0)\right\rangle =c^{N-1,N}, \end{aligned}$$

that is, the angles between the unit tangent vectors are fixed. We need the variation \({\mathcal {N}}_{\varepsilon }\) to satisfy the same conditions,

$$\begin{aligned} \left\langle \tau _\varepsilon ^{i_1}(0),\tau _\varepsilon ^{i_2}(0)\right\rangle =c^{1,2}, \ldots ,\, \left\langle \tau _\varepsilon ^{i_{N-1}}(0),\tau _\varepsilon ^{i_N}(0)\right\rangle =c^{N-1,N}, \end{aligned}$$

for any \(|\varepsilon |\) small enough. This means that for every \(i,j\in \{1,\ldots , N\}\) we need that

$$\begin{aligned} \frac{d}{d\varepsilon }\left\langle \tau _\varepsilon ^{i}(0),\tau _\varepsilon ^{j}(0)\right\rangle =0, \end{aligned}$$

that implies

$$\begin{aligned} 0&=\frac{{d}}{{d}\varepsilon }\left\langle \tau _\varepsilon ^{i}(0),\tau _\varepsilon ^{j}(0)\right\rangle \Big \vert _{\varepsilon =0}= \left\langle \partial _{s}^\perp \psi ^i(0), \tau ^j(0)\right\rangle +\left\langle \tau ^i(0),\partial _{s}^\perp \psi ^j(0)\right\rangle \\&=({\psi }_{s}^{i})^\perp (0)\left\langle \nu ^i(0),\tau ^j(0) \right\rangle +({\psi }_{s}^{j})^\perp (0)\left\langle \tau ^i(0), \nu ^j(0)\right\rangle \\&=({\psi }_{s}^{i})^\perp (0)\left\langle \nu ^i(0),\tau ^j(0) \right\rangle -({\psi }_{s}^{j})^\perp (0)\left\langle \nu ^i(0),\tau ^j(0) \right\rangle , \end{aligned}$$

where we used the notation \(\left( \psi ^i_{s}\right) ^\perp :=\left\langle \partial _{s}\psi ^i,\nu ^i\right\rangle \). So we impose

$$\begin{aligned} ({\psi }_{s}^{1})^\perp (0)=\cdots =({\psi }_{s}^{N})^\perp (0). \end{aligned}$$
(2.9)

Then the first boundary term of (2.5) reduces to

$$\begin{aligned} 2({\psi }_{s}^{1})^\perp (0)\sum _{i=1}^{N} k^i(0). \end{aligned}$$

Hence we find the following boundary conditions:

$$\begin{aligned} \sum _{i=1}^{N}\ k^i(0)=0,\quad \sum _{i=1}^{N}\left( -2\partial ^\perp _s \varvec{\kappa }^{i}(0) - \vert \varvec{\kappa }^i(0)\vert ^2\tau ^i(0)+\mu ^i\tau ^i(0)\right) =0. \end{aligned}$$
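The computation leading to (2.9) can be illustrated numerically: if the derivatives of the variations share the same normal component at the junction, the angle between two unit tangents is preserved to first order. The following Python sketch (with made-up unit tangents and variation fields; not part of the proof) checks this via a finite-difference derivative.

```python
import math

def rotate90(v):
    # unit normal as the counterclockwise rotation of the unit tangent
    return (-v[1], v[0])

def perturbed_tangent(tau, w, eps):
    # unit tangent of the varied curve: normalize tau + eps * w
    v = (tau[0] + eps * w[0], tau[1] + eps * w[1])
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def angle_derivative(tau_i, tau_j, w_i, w_j, eps=1e-6):
    # finite-difference approximation of d/d(eps) <tau_i_eps, tau_j_eps> at eps = 0
    def inner(e):
        a = perturbed_tangent(tau_i, w_i, e)
        b = perturbed_tangent(tau_j, w_j, e)
        return a[0] * b[0] + a[1] * b[1]
    return (inner(eps) - inner(-eps)) / (2 * eps)

# two hypothetical unit tangents at the junction
tau1 = (math.cos(0.4), math.sin(0.4))
tau2 = (math.cos(2.1), math.sin(2.1))
nu1, nu2 = rotate90(tau1), rotate90(tau2)

# variations with arbitrary tangential parts but EQUAL normal components, as in (2.9)
c = 0.7
w1 = (1.3 * tau1[0] + c * nu1[0], 1.3 * tau1[1] + c * nu1[1])
w2 = (-0.5 * tau2[0] + c * nu2[0], -0.5 * tau2[1] + c * nu2[1])

assert abs(angle_derivative(tau1, tau2, w1, w2)) < 1e-8
```

Changing the normal component of only one of the two variation fields makes the derivative nonzero, so the angle condition would be violated at first order.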

In the end, whenever the network is composed of N curves, we obtain a system of N quasilinear fourth order equations in the parametrizations of the curves; the motion equations are not coupled, but the boundary conditions are.

We now need to briefly introduce the Hölder spaces that will appear in the definition of the flow.

Let \(N\in {\mathbb {N}}\) and consider a network \({\mathcal {N}}\) composed of N curves, possibly with endpoints of order one fixed in the plane and with curves that meet at junctions of possibly different orders \(m\in {\mathbb {N}}_{\ge 2}\). As we have already said, each curve of \({\mathcal {N}}\) is parametrized by \(\gamma ^i:[0,1]\rightarrow {\mathbb {R}}^2\). Let \(\alpha \in (0,1)\). We denote \(\gamma :=(\gamma ^1,\ldots ,\gamma ^N):[0,1]\rightarrow ({\mathbb {R}}^2)^N\) and

$$\begin{aligned} \mathrm {I}_N:=C^{4+\alpha }\left( [0,1];({\mathbb {R}}^2)^N\right) . \end{aligned}$$

We will make extensive use of parabolic Hölder spaces (see also [51, §11, §13]). For \(k\in \{0,1,2,3,4\}\) and \(\alpha \in (0,1)\) the parabolic Hölder space

$$\begin{aligned} C^{\frac{k+\alpha }{4}, k+\alpha }([0,T]\times [0,1]) \end{aligned}$$

is the space of all functions \(u:[0,T]\times [0,1]\rightarrow {\mathbb {R}}\) that have continuous derivatives \(\partial _t^i\partial _x^ju\) for all \(i,j\in {\mathbb {N}}\) with \(4i+j\le k\), and for which the norm

$$\begin{aligned} \left\Vert u\right\Vert _{C^{\frac{k+\alpha }{4},k+\alpha }}:= & {} \sum _{4i+j=0}^k\left\Vert \partial _t^i\partial _x^ju\right\Vert _\infty +\sum _{4i+j=k}\left[ \partial _t^i\partial _x^ju\right] _{0,\alpha }\\&+\sum _{0<k+\alpha -4i-j<4}\left[ \partial _t^i\partial _x^ju\right] _{\frac{k+\alpha -4i-j}{4},0} \end{aligned}$$

is finite. We recall that for a function \(u:[0,T]\times [0,1]\rightarrow {\mathbb {R}}\), for \(\rho \in (0,1)\) the semi-norms \([ u]_{\rho ,0}\) and \([ u]_{0,\rho }\) are defined as

$$\begin{aligned}{}[ u]_{\rho ,0}:=\sup _{(t,x), (\tau ,x)}\frac{\vert u(t,x)-u(\tau ,x)\vert }{\vert t-\tau \vert ^\rho }, \end{aligned}$$

and

$$\begin{aligned}{}[ u]_{0,\rho }:=\sup _{(t,x), (t,y)}\frac{\vert u(t,x)-u(t,y)\vert }{\vert x-y\vert ^\rho }. \end{aligned}$$

Moreover the space \(C^{\frac{\alpha }{4},\alpha }\left( [0,T]\times [0,1]\right) \) is equal to the space

$$\begin{aligned} C^{\frac{\alpha }{4}}\left( [0,T];C^0([0,1])\right) \cap C^0\left( [0,T];C^\alpha ([0,1])\right) , \end{aligned}$$

with equivalent norms.
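For intuition, the two seminorms can be approximated on a finite space-time grid. The following Python sketch (a hypothetical discretization, purely illustrative; the grid and sample function are made up) computes discrete analogues of \([u]_{\rho ,0}\) and \([u]_{0,\rho }\).

```python
def holder_seminorms(u, ts, xs, rho):
    """Discrete analogues of [u]_{rho,0} (time) and [u]_{0,rho} (space).

    u[a][b] approximates u(ts[a], xs[b]) on the grid ts x xs.
    """
    # [u]_{rho,0}: Hoelder quotients in time at fixed x
    time_sn = max(
        abs(u[a][b] - u[c][b]) / abs(ts[a] - ts[c]) ** rho
        for b in range(len(xs)) for a in range(len(ts)) for c in range(a)
    )
    # [u]_{0,rho}: Hoelder quotients in space at fixed t
    space_sn = max(
        abs(u[a][b] - u[a][d]) / abs(xs[b] - xs[d]) ** rho
        for a in range(len(ts)) for b in range(len(xs)) for d in range(b)
    )
    return time_sn, space_sn

# u(t, x) = x on a coarse grid: no time variation; in space sup |x-y|^{1-rho} = 1
sn_t, sn_x = holder_seminorms([[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]],
                              [0.0, 1.0], [0.0, 0.5, 1.0], 0.5)
assert sn_t == 0.0 and abs(sn_x - 1.0) < 1e-12
```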

We also define the spaces \(C^{\frac{k+\alpha }{4}, k+\alpha }([0,T]\times \{0,1\}; {\mathbb {R}}^m)\) by identifying them with \(C^{\frac{k+\alpha }{4}}([0,T]; {\mathbb {R}}^{2m})\) via the isomorphism \(f\mapsto (f(\cdot ,0),f(\cdot ,1))^t\).

Definition 2.3

(Elastic flow) Let \(N\in {\mathbb {N}}\) and let \({\mathcal {N}}_0\) be an initial network composed of N curves parametrized by \(\gamma _0=(\gamma _0^1,\ldots ,\gamma _0^N)\in \mathrm {I}_N\), (possibly) with endpoints of order one and (possibly) with curves that meet at junctions of different orders \(m\in {\mathbb {N}}_{\ge 2}\). Then a time dependent family of networks \(({\mathcal {N}}(t))_{t\in [0,T]}\) is a solution to the elastic flow in the time interval [0, T] with \(T>0\) if there exists a parametrization

$$\begin{aligned} \gamma (t,x)=\left( \gamma ^1(t,x), \ldots ,\gamma ^N(t,x)\right) \in C^{\frac{4+\alpha }{4}, 4+\alpha }\left( [0,T]\times [0,1];({\mathbb {R}}^2)^N\right) , \end{aligned}$$

with \(\gamma ^i\) regular, and such that for every \(t\in [0,T], x\in [0,1]\) and \(i\in \{1,\ldots ,N\}\) the system

$$\begin{aligned} {\left\{ \begin{array}{ll} (\partial _t\gamma ^i)^\perp =\left( -2\partial _s^2k^i-(k^i)^3+\mu ^i k^i\right) \nu ^i\\ \gamma ^i(0,x)=\gamma _0^i(x) \end{array}\right. } \end{aligned}$$
(2.10)

is satisfied. Moreover the system is coupled with suitable boundary conditions, corresponding to the possible cases discussed in the computation of the first variation.

  • If \(N=1\) and the curve \(\gamma _0\) is closed we require \(\gamma (t,x)\) to be closed and we impose periodic boundary conditions.

  • If \(N=1\) and the curve \(\gamma _0\) is not closed with \(\gamma _0(0)=P\in {\mathbb {R}}^2\), \(\gamma _0(1)=Q\in {\mathbb {R}}^2\) and we want to impose natural boundary conditions, we require

    $$\begin{aligned} {\left\{ \begin{array}{ll} \gamma (t,0)=P\\ \gamma (t,1)=Q\\ \varvec{\kappa }(t,0)=\varvec{\kappa }(t,1)=0. \end{array}\right. } \end{aligned}$$
    (2.11)
  • If \(N=1\) and the curve \(\gamma _0\) is not closed with \(\gamma _0(0)=P\in {\mathbb {R}}^2\), \(\gamma _0(1)=Q\in {\mathbb {R}}^2\) and we want to impose clamped boundary conditions, we require

    $$\begin{aligned} {\left\{ \begin{array}{ll} \gamma (t,0)=P\\ \gamma (t,1)=Q\\ \tau (t,0)=\tau _0\\ \tau (t,1)=\tau _1. \end{array}\right. } \end{aligned}$$
    (2.12)
  • If N is arbitrary and \({\mathcal {N}}_0\) has one multipoint

    $$\begin{aligned} \gamma _0^{i_1}(y_1)=\cdots =\gamma _0^{i_m}(y_m), \end{aligned}$$

    with \((i_1,y_1),\ldots ,(i_m,y_m)\in \{1,\ldots ,N\}\times \{0,1\}\) and we want to impose natural boundary conditions, for every \(j\in \{1,\ldots ,m\}\) we require

    $$\begin{aligned} {\left\{ \begin{array}{ll} \kappa ^{i_j}(t,y_j)=0\\ \sum _{j=1}^{m} \left( -2\partial ^\perp _s \varvec{\kappa }^{i_j}+\mu ^{i_j}\tau ^{i_j}\right) (t,y_j)=0. \end{array}\right. } \end{aligned}$$
    (2.13)
  • If N is arbitrary and \({\mathcal {N}}_0\) has one multipoint

    $$\begin{aligned} \gamma _0^{i_1}(y_1)=\cdots =\gamma _0^{i_m}(y_m), \end{aligned}$$

    with \((i_1,y_1),\ldots ,(i_m,y_m)\in \{1,\ldots ,N\}\times \{0,1\}\) where we want to impose clamped boundary conditions, we require

    $$\begin{aligned} {\left\{ \begin{array}{ll} \left\langle \tau ^{i_1}(t,y_1),\tau ^{i_2}(t,y_2)\right\rangle =c^{1,2}\\ \ldots \\ \left\langle \tau ^{i_{m-1}}(t,y_{m-1}),\tau ^{i_m}(t,y_m)\right\rangle =c^{m-1,m}\\ \sum _{j=1}^m k^{i_j}(t,y_j)=0\\ \sum _{j=1}^m \left( -2\partial ^\perp _s \varvec{\kappa }^{i_j}- |\varvec{\kappa }^{i_j}|^2\tau ^{i_j} +\mu ^{i_j}\tau ^{i_j}\right) (t,y_j)=0. \end{array}\right. } \end{aligned}$$
    (2.14)

Clearly, in the case of a network with several junctions and endpoints of order one fixed in the plane, one has to impose the appropriate boundary conditions (chosen among (2.11), (2.12), (2.13) and (2.14)) at each junction and endpoint.

We give a name to the boundary conditions appearing in the definition of the flow. When there is a multipoint

$$\begin{aligned} \gamma _0^{i_1}(y_1)=\cdots =\gamma _0^{i_m}(y_m), \end{aligned}$$

with \((i_1,y_1),\ldots ,(i_m,y_m)\in \{1,\ldots ,N\}\times \{0,1\}\) we shortly refer to:

  • \(\gamma ^{i_1}(t,y_1)=\cdots =\gamma ^{i_m}(t,y_m)\) as concurrency condition;

  • \(\left\langle \tau ^{i_1}(y_1),\tau ^{i_2}(y_2)\right\rangle =c^{1,2}, \ldots , \left\langle \tau ^{i_{m-1}}(y_{m-1}),\tau ^{i_m}(y_m)\right\rangle =c^{m-1,m}\) as angle conditions;

  • either \(k^{i_j}(t,y_j)=0\) for every \(j\in \{1,\ldots ,m\}\) or \(\sum _{j=1}^m k^{i_j}=0\) as curvature conditions;

  • \(\sum _{j=1}^m \left( -2\partial ^\perp _s \varvec{\kappa }^{i_j}- |\varvec{\kappa }^{i_j}(y_j)|^2\tau ^{i_j}(y_j) +\mu ^{i_j}\tau ^{i_j}(y_j)\right) =0\) as third order condition.

    When we have an endpoint of order one we refer to the condition involving the unit tangent vector as angle condition and to the condition on the curvature as curvature condition.

Remark 2.4

In system (2.10) only the normal component of the velocity is prescribed. This does not mean that the tangential velocity is necessarily zero. We can equivalently write the motion equations as

$$\begin{aligned} \partial _t\gamma ^i=V^i\nu ^i+T^i\tau ^i, \end{aligned}$$

where \(V^i=-2\partial _s^2k^i-(k^i)^3+\mu ^i k^i\) and \(T^i\) are some at least continuous functions. In the case of a single closed curve or a single curve with fixed endpoints we can impose \(T\equiv 0\) (see Section 2.4).

Definition 2.5

(Admissible initial network) A network \({\mathcal {N}}_0\) of N regular curves parametrized by \(\gamma =(\gamma ^1, \ldots ,\gamma ^N)\), \(\gamma ^i:[0,1]\rightarrow {\mathbb {R}}^2\) with \(i\in \{1, \ldots , N\}\), possibly with \(\ell \) endpoints of order one \(\{\gamma ^j(y_j)\}\) for some \((j,y_j)\in \{1, \ldots , N\}\times \{0,1\}\), and possibly with curves that meet at k different junctions \(\{O^p\}\) of order \(m\in {\mathbb {N}}_{\ge 2}\), \(O^p=\gamma ^{p_1}(y_1)=\cdots =\gamma ^{p_m}(y_m)\) for some \((p_i,y_i)\in \{1, \ldots , N\}\times \{0,1\}\) and \(p\in \{1, \ldots ,k\}\), forming angles \(\alpha ^{p_i,p_{i+1}}\) between \(\nu ^{p_i}\) and \(\nu ^{p_{i+1}}\), is an admissible initial network if

  1. (i)

    the parametrization \(\gamma \) belongs to \(\mathrm {I}_N\);

  2. (ii)

    \({\mathcal {N}}_0\) satisfies all the boundary conditions imposed in the system: concurrency, angle, curvature and third order conditions;

  3. (iii)

    at each endpoint \(\gamma ^j(y_j)\) of order one it holds

    $$\begin{aligned} 2\partial _s^2 k^j(y_j)+(k^j)^3(y_j)-\mu ^j k^j(y_j)=0; \end{aligned}$$
  4. (iv)

    the initial datum fulfills the non–degeneracy condition: at each junction

    $$\begin{aligned} \mathrm {span}\{\nu ^{p_1}, \ldots ,\nu ^{p_m}\}={\mathbb {R}}^2; \end{aligned}$$
  5. (v)

    at each junction \(\gamma ^{p_1}(y_1)=\ldots =\gamma ^{p_m}(y_m)\) where at least three curves concur, consider two consecutive unit normal vectors \(\nu ^{p_i}(y_i)\) and \(\nu ^{p_k}(y_k)\) such that \(\mathrm {span}\{\nu ^{p_i}(y_i),\nu ^{p_k}(y_k)\}={{{\mathbb {R}}}}^2\). Then for every \(j\in \{1,\ldots ,m\}\), \(j\ne i\), \(j\ne k\) we require

    $$\begin{aligned} \sin \theta ^i V^i(y_i)+\sin \theta ^k V^k(y_k) +\sin \theta ^j V^j(y_j)=0, \end{aligned}$$

    where \(\theta ^i\) is the angle between \(\nu ^{p_k}(y_k)\) and \(\nu ^{p_j}(y_j)\), \(\theta ^k\) between \(\nu ^{p_j}(y_j)\) and \(\nu ^{p_i}(y_i)\) and \(\theta ^{j}\) between \(\nu ^{p_i}(y_i)\) and \(\nu ^{p_k}(y_k)\).

Remark 2.6

The conditions (ii)–(iii)–(v) on the initial network are the so-called compatibility conditions. Together with the non-degeneracy condition, these conditions concern the boundary of the network, and so they are not required in the case of one single closed curve.

Remark 2.7

We refer to the conditions (iii) and (v) as fourth order compatibility conditions. We explain here how one derives condition (v) in the case of a junction \(\gamma ^1(0)=\ldots =\gamma ^m(0)\). Differentiating in time the concurrency condition we get \(\partial _t\gamma ^1(0) =\cdots =\partial _t\gamma ^m(0)\), or, in terms of the normal and tangential velocities \(V^1(0)\nu ^1(0)+T^1(0)\tau ^1(0)=\cdots =V^m(0)\nu ^m(0)+T^m(0)\tau ^m(0)\).

Without loss of generality we suppose that the concurring curves are labeled in a counterclockwise sense and that \(\mathrm {span}\{\nu ^1(0),\nu ^2(0)\}={\mathbb {R}}^2\). Then for every \(j\in \{3,\ldots ,m\}\) we have

$$\begin{aligned} \sin \theta ^1\nu ^1(0)+\sin \theta ^2\nu ^2(0) +\sin \theta ^j\nu ^j(0)=0, \end{aligned}$$

where \(\theta ^1\) is the angle between \(\nu ^2(0)\) and \(\nu ^j(0)\), \(\theta ^2\) between \(\nu ^j(0)\) and \(\nu ^1(0)\) and \(\theta ^j\) between \(\nu ^1(0)\) and \(\nu ^2(0)\). Then

$$\begin{aligned} \sin \theta ^1V^1(0)&= \left\langle V^1(0)\nu ^1(0)+T^1(0)\tau ^1(0), \sin \theta ^1\nu ^1(0)\right\rangle \\&= \left\langle V^2(0)\nu ^2(0)+T^2(0)\tau ^2(0), -\sin \theta ^2\nu ^2(0)-\sin \theta ^j\nu ^j(0)\right\rangle \\&= -\sin \theta ^2 V^2(0)+ \left\langle V^2(0)\nu ^2(0)+T^2(0)\tau ^2(0), -\sin \theta ^j\nu ^j(0)\right\rangle \\&= -\sin \theta ^2 V^2(0)+ \left\langle V^j(0)\nu ^j(0)+T^j(0)\tau ^j(0), -\sin \theta ^j\nu ^j(0)\right\rangle \\&=-\sin \theta ^2 V^2(0)-\sin \theta ^j V^j(0). \end{aligned}$$

Hence for every \(j\in \{3,\ldots ,m\}\) we obtained \(\sin \theta ^1V^1(0)+\sin \theta ^2 V^2(0) +\sin \theta ^j V^j(0)=0\).
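The vector identity \(\sin \theta ^1\nu ^1(0)+\sin \theta ^2\nu ^2(0)+\sin \theta ^j\nu ^j(0)=0\) used above holds for any three unit vectors in the plane, provided each angle is read as the signed (counterclockwise) angle between the two remaining vectors. A quick numerical sanity check, with made-up unit vectors standing in for the normals at a junction (illustrative only, not part of the argument), is the following sketch.

```python
import math

def signed_angle(u, v):
    # signed (counterclockwise) angle from u to v
    return math.atan2(u[0] * v[1] - u[1] * v[0], u[0] * v[0] + u[1] * v[1])

def unit(a):
    return (math.cos(a), math.sin(a))

def sine_combination(n1, n2, n3):
    # sin(theta^1) n1 + sin(theta^2) n2 + sin(theta^3) n3, where theta^i is the
    # signed angle between the two vectors other than n_i (in cyclic order)
    s1 = math.sin(signed_angle(n2, n3))
    s2 = math.sin(signed_angle(n3, n1))
    s3 = math.sin(signed_angle(n1, n2))
    return (s1 * n1[0] + s2 * n2[0] + s3 * n3[0],
            s1 * n1[1] + s2 * n2[1] + s3 * n3[1])

# three hypothetical unit normals at a triple junction
res = sine_combination(unit(0.3), unit(2.0), unit(4.1))
assert max(abs(res[0]), abs(res[1])) < 1e-12
```

In fact, for arbitrary planar vectors the same combination with the scalar cross products as coefficients vanishes identically, which is the algebraic reason behind the relation between the normal velocities.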

Remark 2.8

To prove existence of solutions of class \(C^{\frac{4+\alpha }{4}, 4+\alpha }\) to the elastic flow of networks it is necessary to require the fourth order compatibility conditions for the initial datum. These conditions may sound unnatural because they do not appear among the boundary conditions imposed in the system. It is actually possible to avoid them by defining the elastic flow of networks in a Sobolev setting. The price we have to pay is that in such a case a solution will be slightly less regular (see [22, 38] for details). On the other hand, if we want a solution that is smooth up to \(t=0\), one has to impose many more conditions. These properties, the compatibility conditions of any order, are derived by repeatedly differentiating in time the boundary conditions and using the motion equation to substitute time derivatives with space derivatives (see [14, 15]).

2.4 Invariance Under Reparametrization

It is very important to remark on the consequences of the invariance under reparametrization of the energy functional for the resulting gradient flow. These effects actually occur whenever the starting energy is geometric, i.e., invariant under reparametrization. To be more precise, let us say that the time dependent family of closed curves parametrized by \(\gamma :[0,T]\times \mathrm {S}^1\rightarrow {\mathbb {R}}^2\) is a smooth solution to the elastic flow

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\gamma (t,x) = V_\gamma (t,x)\nu _\gamma (t,x) ,\\ \gamma (0,\cdot )=\gamma _0(\cdot ), \end{array}\right. } \end{aligned}$$
(2.15)

and the driving velocity \(\partial _t\gamma \) is normal along \(\gamma \). If \(\chi :[0,T]\times \mathrm {S}^1\rightarrow \mathrm {S}^1\) with \(\chi (t,0)=0\) and \(\chi (t,1)=1\) is a smooth one-parameter family of diffeomorphisms and \(\sigma (t,x){:}{=}\gamma (t,\chi (t,x))\), then it is immediate to check that \(\sigma \) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t \sigma (t,x) = V_\sigma (t,x)\nu _\sigma (t,x) + W(t,x)\tau _\sigma (t,x),\\ \sigma (0,\cdot ) = \gamma _0(\chi (0,\cdot )), \end{array}\right. } \end{aligned}$$

and W can be computed explicitly in terms of \(\chi \) and \(\gamma \). More importantly, one has that \(V_\sigma (t,x)\nu _\sigma (t,x) = V_\gamma (t,\chi (t,x))\nu _\gamma (t,\chi (t,x))\). Since \(W(t,x)\tau _\sigma (t,x)\) is a tangential term, \(\sigma \) itself is a solution to the elastic flow. Indeed its normal driving velocity \(\partial _t^\perp \sigma \) is the one defining the elastic flow on \(\sigma \). This is the reason why the definition of the elastic flow is given in terms of the normal velocity of the evolution only.

In complete analogy, if \(\beta :[0,T)\times \mathrm {S}^1 \rightarrow {\mathbb {R}}^2\) is a given smooth map that solves

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t \beta (t,x)= V_\beta (t,x)\nu _\beta (t,x) + w(t,x)\tau _\beta (t,x),\\ \beta (0,\cdot )= \gamma _0 (\chi _0(\cdot )), \end{array}\right. } \end{aligned}$$

where \(\chi _0: \mathrm {S}^1\rightarrow \mathrm {S}^1\) is a diffeomorphism, then letting \(\psi :[0,T]\times \mathrm {S}^1\rightarrow \mathrm {S}^1\) be the smooth solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\psi (t,x) = - |(\partial _x\beta )(t,\psi (t,x))|^{-1} w(t,\psi (t,x)),\\ \psi (0,\cdot )=\chi _0^{-1}(\cdot ), \end{array}\right. } \end{aligned}$$

it immediately follows that \(\gamma (t,x){:}{=}\beta (t,\psi (t,x))\) solves (2.15).

Something similar holds true also in the general case of networks. First of all, it is easy to check that all possible boundary conditions are invariant under reparametrizations (both at the multiple junctions and at the endpoints of order one). Concerning the velocity, we cannot impose the tangential velocity to be zero as in (2.15), but it remains true that if a time dependent family of networks parametrized by \(\gamma =(\gamma ^1,\ldots ,\gamma ^N)\) with \(\gamma ^i:[0,T]\times [0,1]\rightarrow {\mathbb {R}}^2\) is a solution to the elastic flow, then \(\sigma =(\sigma ^1,\ldots ,\sigma ^N)\) defined by \(\sigma ^i(t,x)=\gamma ^i(t, \chi ^i(t,x))\), with \(\chi ^i:[0,T]\times [0,1]\rightarrow [0,1]\) a time dependent family of diffeomorphisms such that \(\chi ^i(t,0)=0\) and \(\chi ^i(t,1)=1\) (together with suitable conditions on \(\partial _x\chi ^i(t,0)\), \(\partial _{x}^2\chi ^i(t,0)\) and so on), is still a solution to the elastic flow of networks. Indeed the velocities of \(\gamma ^i\) and \(\sigma ^i\) differ only by a tangential component.

Remark 2.9

We want to stress that at the junctions the tangential velocity is determined by the normal velocity.

Consider a junction of order m

$$\begin{aligned} \gamma ^1(t,0)=\cdots =\gamma ^m(t,0). \end{aligned}$$

Differentiating in time yields \(\partial _t\gamma ^1(t,0)=\cdots =\partial _t\gamma ^m(t,0)\), which, in terms of the normal and tangential velocities V and T, reads

$$\begin{aligned} V^j\nu ^j+T^j\tau ^j=V^{j+1}\nu ^{j+1}+T^{j+1}\tau ^{j+1}, \end{aligned}$$

where \(j\in \{1, \ldots , m\}\) with \(m+1:=1\) and the argument (t, 0) is omitted from now on. Testing these identities with the unit tangent vectors \(\tau ^j\) leads to the system:

$$\begin{aligned} \begin{pmatrix} 1 &{} -\cos \alpha ^{1,2} &{} 0 &{} 0 &{}\ldots &{} 0\\ 0 &{} 1 &{} -\cos \alpha ^{2,3} &{} 0 &{} \ldots &{} 0 \\ 0 &{} 0 &{} 1 &{} -\cos \alpha ^{3,4} &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \ldots &{} 1 &{} -\cos \alpha ^{m-1,m} \\ -\cos \alpha ^{m,1} &{} 0 &{} 0 &{} \ldots &{} 0 &{} 1 \end{pmatrix} \begin{pmatrix} T^1\\ T^2 \\ T^3 \\ \vdots \\ T^{m-1} \\ T^m \end{pmatrix} =\begin{pmatrix} -\sin \alpha ^{1,2} V^2 \\ -\sin \alpha ^{2,3} V^3 \\ -\sin \alpha ^{3,4} V^4 \\ \vdots \\ -\sin \alpha ^{m-1,m} V^m \\ -\sin \alpha ^{m,1} V^1 \end{pmatrix} . \end{aligned}$$

We call M the \(m\times m\)-matrix of the coefficients and \(R_1,\ldots , R_m\) its rows.

It is easy to see that

$$\begin{aligned} \det (M)=1- \cos \alpha ^{m,1}\cos \alpha ^{1,2} \ldots \cos \alpha ^{m-2,m-1}\cos \alpha ^{m-1,m}, \end{aligned}$$

which is different from zero as long as the non-degeneracy condition is satisfied. Then the system has a unique solution, and so each \(T^i(t)\) can be expressed as a linear combination of \(V^1(t),\ldots , V^m(t)\).
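The determinant formula can be verified directly: assembling M for given angles and running Gaussian elimination reproduces \(1-\cos \alpha ^{m,1}\cos \alpha ^{1,2}\cdots \cos \alpha ^{m-1,m}\). The following sketch (with hypothetical angle data) is only an illustration of this piece of linear algebra, not code from the paper.

```python
import math

def junction_matrix(alphas):
    # M from the system above: 1 on the diagonal, -cos(alpha^{j,j+1}) on the
    # cyclic superdiagonal; alphas is a hypothetical list of junction angles
    m = len(alphas)
    M = [[0.0] * m for _ in range(m)]
    for j in range(m):
        M[j][j] = 1.0
        M[j][(j + 1) % m] = -math.cos(alphas[j])
    return M

def det(M):
    # determinant via Gaussian elimination with partial pivoting
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[p][k]) < 1e-15:
            return 0.0
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d
        d *= A[k][k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
    return d

# symmetric triple junction: all angles equal to 2*pi/3
alphas = [2 * math.pi / 3] * 3
closed_form = 1.0 - math.prod(math.cos(a) for a in alphas)  # = 1 - (-1/2)^3
assert abs(det(junction_matrix(alphas)) - closed_form) < 1e-12
```

When some product of cosines equals 1 (for instance all curves mutually tangent at the junction), the determinant vanishes and the tangential velocities are no longer determined, consistently with the role of the non-degeneracy condition.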

Remark 2.10

The previous observations clarify that the only meaningful notion of uniqueness for a geometric flow such as the elastic flow is uniqueness up to reparametrization.

We can actually take advantage of the invariance by reparametrization of the problem to reduce system (2.10) to a non-degenerate system of quasilinear PDEs. Consider the flow of one curve \(\gamma \). As we said before, the normal velocity is a geometric quantity, namely \(\partial _t\gamma ^\perp = V\nu =-2\partial _s^2 k\nu -k^{3}\nu +\mu k\nu \). Computing this quantity in terms of the parametrization \(\gamma \) we get

$$\begin{aligned}&-V\nu =2\partial _s^2 k\nu +k^{3}\nu -\mu k\nu \nonumber \\&\quad =2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}} -12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} -5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}} -8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} \nonumber \\&\qquad +35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}}\nonumber \\&\qquad +\left\langle -2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}}+12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}}+5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}}\nonumber \right. \\&\left. \qquad +8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}}-35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}},\tau \right\rangle \tau \nonumber \\&\qquad -\mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}} +\left\langle \mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}}, \tau \right\rangle \tau . \end{aligned}$$
(2.16)

We define

$$\begin{aligned} {\overline{T}}&:=\left\langle -2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}} +12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} +5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}} \right. \nonumber \\&\quad \left. +8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} -35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}} +\mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}},\tau \right\rangle . \end{aligned}$$
(2.17)

We can insert this choice of the tangential component of the velocity in the motion equation, which becomes

$$\begin{aligned} \partial _t\gamma&=V\nu +{\overline{T}}\tau =-\frac{2}{\vert \partial _x\gamma \vert ^4}\partial _x^4 \gamma +f(\partial _x\gamma ,\partial _x^2 \gamma ,\partial _x^3 \gamma )\\&=-2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}} +12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} +5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}} +8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} \\&\quad -35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}} +\mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}}. \end{aligned}$$

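As a sanity check of this right-hand side, one can sample a closed curve on a uniform periodic grid, approximate \(\partial _x\gamma ,\ldots ,\partial _x^4\gamma \) by central finite differences, and evaluate the expression pointwise. For the unit circle with \(\mu =1\) one has \(k\equiv 1\), so \(V=-2\partial _s^2k-k^3+\mu k=0\) and \({\overline{T}}=0\): the computed velocity should vanish up to discretization error. This is an illustrative sketch (grid size and tolerance are arbitrary choices), not the numerical scheme of the paper.

```python
import math

def elastic_velocity(pts, mu):
    # pts: samples of a closed curve on a uniform periodic grid over [0, 1]
    n = len(pts)
    h = 1.0 / n

    def at(i):
        return pts[i % n]

    out = []
    for i in range(n):
        # periodic central finite differences for gamma_x, ..., gamma_xxxx
        d1 = tuple((at(i + 1)[c] - at(i - 1)[c]) / (2 * h) for c in range(2))
        d2 = tuple((at(i + 1)[c] - 2 * at(i)[c] + at(i - 1)[c]) / h ** 2 for c in range(2))
        d3 = tuple((at(i + 2)[c] - 2 * at(i + 1)[c] + 2 * at(i - 1)[c] - at(i - 2)[c]) / (2 * h ** 3)
                   for c in range(2))
        d4 = tuple((at(i + 2)[c] - 4 * at(i + 1)[c] + 6 * at(i)[c] - 4 * at(i - 1)[c] + at(i - 2)[c]) / h ** 4
                   for c in range(2))
        g2 = d1[0] ** 2 + d1[1] ** 2         # |gamma_x|^2
        s21 = d2[0] * d1[0] + d2[1] * d1[1]  # <gamma_xx, gamma_x>
        s31 = d3[0] * d1[0] + d3[1] * d1[1]  # <gamma_xxx, gamma_x>
        q2 = d2[0] ** 2 + d2[1] ** 2         # |gamma_xx|^2
        # term-by-term transcription of the right-hand side above
        out.append(tuple(
            -2 * d4[c] / g2 ** 2
            + 12 * d3[c] * s21 / g2 ** 3
            + 5 * d2[c] * q2 / g2 ** 3
            + 8 * d2[c] * s31 / g2 ** 3
            - 35 * d2[c] * s21 ** 2 / g2 ** 4
            + mu * d2[c] / g2
            for c in range(2)))
    return out

# unit circle with mu = 1 is a steady state, so the velocity is ~0
n = 256
circle = [(math.cos(2 * math.pi * j / n), math.sin(2 * math.pi * j / n)) for j in range(n)]
v = elastic_velocity(circle, mu=1.0)
assert max(abs(c) for p in v for c in p) < 1e-2
```

For a circle of radius R the same computation gives the velocity \((1/R^4-\mu /R^2)\gamma \), in accordance with \(V=-k^3+\mu k\) on circles.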
Considering now the boundary conditions: up to reparametrization the clamped condition \(\tau (t,0)=\tau _0\) can be reformulated as \(\partial _x\gamma (t,0)=\tau _0\) and the curvature condition \(k=\varvec{\kappa }=0\) as \(\partial _x^2 \gamma (t,0)=0\). We can then extend this discussion to the flow of general networks, in order to define the so-called special flow.

Definition 2.11

(Admissible initial parametrization) We say that \(\varphi _0=(\varphi _0^1, \ldots , \varphi _0^N)\) is an admissible parametrization for the special flow if

  • the functions \(\varphi _0^i\) are of class \(C^{4+\alpha }([0,1];{\mathbb {R}}^2)\);

  • \(\varphi _0=(\varphi _0^1, \ldots , \varphi _0^N)\) satisfies all the boundary conditions imposed in the system;

  • at each endpoint of order one it holds \(V^i=0\) and \({\overline{T}}^i=0\) for any i;

  • at each junction it holds

    $$\begin{aligned} V^i\nu ^i+{\overline{T}}^i\tau ^i=V^j\nu ^j+{\overline{T}}^j\tau ^j \end{aligned}$$

    for any \(i,j\);

  • at each junction the non-degeneracy condition is satisfied;

where \({\overline{T}}^i\) is defined as in (2.17) for any i.

Definition 2.12

(Special flow) Let \(N\in {\mathbb {N}}\) and let \(\varphi _0=(\varphi _0^1,\ldots ,\varphi _0^N)\) be an admissible initial parametrization in the sense of Definition 2.11, (possibly) with endpoints of order one and (possibly) with junctions of different orders \(m\in {\mathbb {N}}_{\ge 2}\). Then a time dependent family of parametrizations \(\varphi _{t\in [0,T]}\), \(\varphi =(\varphi ^1,\ldots ,\varphi ^N)\), is a solution to the special flow if and only if for every \(i\in \{1,\ldots ,N\}\) the functions \(\varphi ^i\) are of class \(C^{\frac{4+\alpha }{4}, 4+\alpha }([0,T]\times [0,1];{\mathbb {R}}^2)\), for every \((t,x)\in [0,T]\times [0,1]\) it holds \(\partial _x\varphi ^i(t,x)\ne 0\) and the system

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\varphi ^i=V^i\nu ^i+{\overline{T}}^i\tau ^i\\ \varphi ^i(0,x)=\varphi _0^i(x) \end{array}\right. } \end{aligned}$$
(2.18)

is satisfied, where \({\overline{T}}^i\) is defined as in (2.17) for any i. Moreover the following boundary conditions are imposed:

  • if \(N=1\) and \(\varphi _0\) is a closed curve, then we impose periodic boundary conditions;

  • if \(N=1\) and \(\varphi _0(0)=P,\varphi _0(1)=Q\), we can require either

    $$\begin{aligned} {\left\{ \begin{array}{ll} \varphi ^1(t,0)=P\\ \varphi ^1(t,1)=Q\\ \partial _x^2\varphi ^1(t,0)=\partial _x^2\varphi ^1(t,1)=0, \end{array}\right. } \end{aligned}$$

    or

    $$\begin{aligned} {\left\{ \begin{array}{ll} \varphi ^1(t,0)=P\\ \varphi ^1(t,1)=Q\\ \partial _x\varphi ^1(t,0)=\tau _0\\ \partial _x\varphi ^1(t,1)=\tau _1. \end{array}\right. } \end{aligned}$$
  • if N is arbitrary and \(\varphi _0\) has one multipoint

    $$\begin{aligned} \varphi _0^{i_1}(y_1)=\cdots =\varphi _0^{i_m}(y_m), \end{aligned}$$

    with \((i_1,y_1),\ldots ,(i_m,y_m)\in \{1,\ldots ,N\}\times \{0,1\}\) we can impose either

    $$\begin{aligned} {\left\{ \begin{array}{ll} \partial _x^2\varphi ^{i_j}(t,y_j)=0\quad \text {for every}\, j\in \{1,\ldots ,m\}\\ \sum _{j=1}^{m} \left( -2\partial ^\perp _s \varvec{\kappa }^{i_j}+\mu ^{i_j}\tau ^{i_j}\right) (t,y_j)=0, \end{array}\right. } \end{aligned}$$

    or

    $$\begin{aligned} {\left\{ \begin{array}{ll} \left\langle \tau ^{i_1}(y_1),\tau ^{i_2}(y_2)\right\rangle =c^{1,2}\\ \ldots \\ \left\langle \tau ^{i_{m-1}}(y_{m-1}),\tau ^{i_m}(y_m)\right\rangle =c^{m-1,m}\\ \sum _{j=1}^m k^{i_j}=0\\ \left\langle \partial _x^2\varphi ^{i_j}(t,y_j),\partial _x\varphi ^{i_j}(t,y_j)\right\rangle =0\quad \text {for every}\, j\in \{1,\ldots ,m\}\\ \sum _{j=1}^m \left( -2\partial ^\perp _s \varvec{\kappa }^{i_j}- |\varvec{\kappa }^{i_j}(y_j)|^2\tau ^{i_j}(y_j) +\mu ^{i_j}\tau ^{i_j}(y_j)\right) =0. \end{array}\right. } \end{aligned}$$

Lemma 2.13

Let \(\varphi _0=(\varphi _0^1, \ldots ,\varphi _0^N)\) be an admissible initial parametrization and \(\varphi _{t\in [0,T]}\), \(\varphi =(\varphi ^1, \ldots ,\varphi ^N)\) be a solution to the special flow. Then \({\mathcal {N}}_t=\cup _{i=1}^N\varphi ^i(t,[0,1])\) is a solution of the elastic flow of networks with initial datum \({\mathcal {N}}_0:=\cup _{i=1}^N\varphi ^i_0([0,1])\).

Proof

We show that \({\mathcal {N}}_0\) is an admissible initial network. Conditions (i) and (iv) are clearly satisfied, together with condition (iii), because at the endpoints of order one \(V^i=0\), that is, \(2\partial _s^2k^i+(k^i)^3-\mu ^i k^i=0\). Condition (ii) is also easy to obtain: \(\partial _x^2\varphi (y)=0\) implies \(k(y)=0\), \(\partial _x\varphi (y)=\tau ^*\) implies \(\tau =\tau ^*\), and all the other conditions are already satisfied by the special flow. At each junction \(\gamma ^{1}(y_1)=\cdots =\gamma ^{m}(y_m)\) of order at least three we consider two consecutive unit normal vectors \(\nu ^{i}(y_i)\) and \(\nu ^{k}(y_k)\) such that \(\mathrm {span}\{\nu ^{i}(y_i),\nu ^{k}(y_k)\}={\mathbb {R}}^2\). For every \(j\in \{1,\ldots ,m\}\), \(j\ne i\), \(j\ne k\) we call \(\theta ^i\) the angle between \(\nu ^k(y_k)\) and \(\nu ^j(y_j)\), \(\theta ^k\) the angle between \(\nu ^j(y_j)\) and \(\nu ^i(y_i)\), and \(\theta ^j\) the angle between \(\nu ^i(y_i)\) and \(\nu ^k(y_k)\), and we recall that it holds

$$\begin{aligned} V^i\nu ^i+{\overline{T}}^i\tau ^i&=V^j\nu ^j+{\overline{T}}^j\tau ^j, \end{aligned}$$
(2.19)
$$\begin{aligned} V^i\nu ^i+{\overline{T}}^i\tau ^i&=V^k\nu ^k+{\overline{T}}^k\tau ^k. \end{aligned}$$
(2.20)

By testing (2.19) by \(\sin \theta ^j\tau ^k\) and by \(\cos \theta ^j\nu ^k\) and summing, we get

$$\begin{aligned} V^i=\cos \theta ^k V^j-\sin \theta ^k {\overline{T}}^j. \end{aligned}$$
(2.21)

If instead we test (2.19) by \(\cos \theta ^j\tau ^k\) and by \(\sin \theta ^j\nu ^k\) and subtract the second equality from the first, it holds

$$\begin{aligned} {\overline{T}}^i=\cos \theta ^k {\overline{T}}^j+\sin \theta ^k V^j. \end{aligned}$$
(2.22)

Similarly, by testing (2.20) by \(\cos \theta ^k\nu ^j\) and by \(\sin \theta ^k\tau ^j\) and subtracting the second identity from the first, we have

$$\begin{aligned} V^i=\cos \theta ^j V^k+\sin \theta ^j {\overline{T}}^k. \end{aligned}$$
(2.23)

Finally we test (2.20) by \(\cos \theta ^k\tau ^j\) and by \(\sin \theta ^k\nu ^j\) and sum, obtaining

$$\begin{aligned} {\overline{T}}^i=\cos \theta ^j {\overline{T}}^k-\sin \theta ^j V^k. \end{aligned}$$
(2.24)

With the help of the identities (2.21), (2.22), (2.23) and (2.24), interchanging the roles of \(i\), \(j\) and \(k\), we can write

$$\begin{aligned} \sin \theta ^iV^i&=\cos \theta ^j {\overline{T}}^j-\cos \theta ^k {\overline{T}}^k,\\ \sin \theta ^kV^k&=\cos \theta ^i {\overline{T}}^i-\cos \theta ^j {\overline{T}}^j,\\ \sin \theta ^jV^j&=\cos \theta ^k {\overline{T}}^k-\cos \theta ^i {\overline{T}}^i. \end{aligned}$$

and so for every \(j\in \{1,\ldots ,m\}\), \(j\ne i\), \(j\ne k\) we have \(\sin \theta ^iV^i+ \sin \theta ^kV^k+\sin \theta ^jV^j=0\), as desired.
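The cancellation \(\sin \theta ^iV^i+\sin \theta ^kV^k+\sin \theta ^jV^j=0\) is a planar identity that can be checked symbolically. The following sketch (using sympy; the names and the convention that the normals have oriented angles \(a_l\) and that each \(\theta ^l\) is the oriented angle between the other two normals are our assumptions, not the paper's notation) verifies it for an arbitrary common junction velocity:

```python
import sympy as sp

# Check of the junction identity  sinθ^1 V^1 + sinθ^2 V^2 + sinθ^3 V^3 = 0.
# Assumed convention: ν_l is the unit normal at oriented angle a_l, and
# θ^l is the oriented angle between the OTHER two normals.
a1, a2, a3, w1, w2 = sp.symbols('a1 a2 a3 w1 w2', real=True)

def nu(a):
    # unit normal at angle a
    return sp.Matrix([sp.cos(a), sp.sin(a)])

W = sp.Matrix([w1, w2])                          # common velocity at the junction
V = [(W.T * nu(a))[0] for a in (a1, a2, a3)]     # normal components V^l = <W, ν_l>
theta = [a3 - a2, a1 - a3, a2 - a1]              # θ^1, θ^2, θ^3

expr = sum(sp.sin(t) * v for t, v in zip(theta, V))
assert sp.expand(sp.expand_trig(expr)) == 0
```

The identity holds for every choice of the angles, which is why no geometric assumption on the junction beyond the concurrency of the velocities is needed.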

The solution \({\mathcal {N}}\) admits a parametrization \(\varphi \) with the required regularity. As we have seen for the initial datum, the boundary conditions in Definition 2.18 imply the boundary conditions required in Definition 2.3. By definition of solution of the special flow the parametrization \(\varphi =(\varphi ^1,\ldots ,\varphi ^N)\) solves \(\partial _t\varphi ^i=V^i\nu ^i+{\overline{T}}^i\tau ^i\). Then

$$\begin{aligned} \left\langle \partial _t\varphi ^i, \nu ^i\right\rangle \nu ^i=\left\langle V^i\nu ^i+{\overline{T}}^i\tau ^i,\nu ^i\right\rangle \nu ^i =V^i\nu ^i =\left( -2\partial _s^2 k^i-(k^i)^3+\mu ^i k^i\right) \nu ^i, \end{aligned}$$

and thus all the properties of solution to the elastic flow are satisfied. \(\square \)

Lemma 2.14

Suppose that a closed curve parametrized by

$$\begin{aligned} \gamma \in C\left( [0,T];C^{5}([0,1];{\mathbb {R}}^2)\right) \cap C^1\left( [0,T];C^{4}([0,1];{\mathbb {R}}^2)\right) \end{aligned}$$

is a solution to the elastic flow with admissible initial datum \(\gamma _0\in C^5([0,1])\). Then a reparametrization of \(\gamma \) is a solution to the special flow.

Proof

The proof follows by arguing as in the discussion at the beginning of the section, recalling in particular that reparametrizations only affect the tangential velocity. \(\square \)

The above result can be generalized to flow of networks as stated below.

Lemma 2.15

Suppose that a network \({\mathcal {N}}_0\) of N regular curves parametrized by \(\gamma =(\gamma ^1, \ldots ,\gamma ^N)\) with \(\gamma ^i:[0,1]\rightarrow {\mathbb {R}}^2\), \(i\in \{1,\ldots ,N\}\), is an admissible initial network. Then there exist N smooth functions \(\theta ^i:[0,1]\rightarrow [0,1]\) such that the reparametrization \(\left( \gamma ^i\circ \theta ^i\right) \) is an admissible initial parametrization for the special flow.

For the proof see [23, Lemma 3.31]. Moreover by inspecting the proof of Theorem 3.32 in [23] we see that the following holds.

Proposition 2.16

Let \(T>0\). Let \({\mathcal {N}}_0\) be an admissible initial network of N curves parametrized by \(\gamma _0=(\gamma _0^1, \ldots ,\gamma _0^N)\) with \(\gamma _0^i:[0,1]\rightarrow {\mathbb {R}}^2\), \(i\in \{1,\ldots ,N\}\). Suppose that \(({\mathcal {N}}_t)_{t\in [0,T]}\) is a solution to the elastic flow in the time interval [0, T] with initial datum \({\mathcal {N}}_0\) and that it is parametrized by regular curves \(\gamma =(\gamma ^1, \ldots ,\gamma ^N)\) with \(\gamma ^i:[0,T]\times [0,1]\rightarrow {\mathbb {R}}^2\). Then there exist \({\widetilde{T}}\in (0,T]\) and a time dependent family of reparametrizations \(\psi :[0,{\widetilde{T}}]\times [0,1]\rightarrow [0,1]\) such that \(\varphi (t,x):=(\varphi ^1(t,x),\ldots ,\varphi ^N(t,x))\) with \(\varphi ^i(t,x):=\gamma ^i(t,\psi (t,x))\) is a solution to the special flow in \([0,{\widetilde{T}}]\).

Remark 2.17

In the case of a single open curve, reducing to the special flow is particularly advantageous. Indeed, one passes from the degenerate problem (2.10), coupled with either quasilinear or fully nonlinear boundary conditions, to a non-degenerate system of quasilinear PDEs with linear and affine boundary conditions.

2.5 Energy Monotonicity

Let \(V^i:=-2\partial _ s^2 k^{i}-(k^i)^3+\mu ^i k^i\) denote the normal velocity of a curve \(\gamma ^i\) evolving by elastic flow and let \(T^i\) denote the tangential velocity:

$$\begin{aligned} \partial _t\gamma ^i=V^i\nu ^i+T^i\tau ^i. \end{aligned}$$
(2.25)

Definition 2.18

We denote by \({{\mathfrak {p}}}_\sigma ^h(k)\) a polynomial in \(k,\dots ,\partial _s^h k\) with constant coefficients in \({\mathbb {R}}\) such that every monomial it contains is of the form

$$\begin{aligned} C \prod _{l=0}^h (\partial _s^lk)^{\beta _l}\quad \text { with} \quad \sum _{l=0}^h(l+1)\beta _l = \sigma , \end{aligned}$$

where \(\beta _l\in {\mathbb {N}}\) for \(l\in \{0,\ldots ,h\}\) and \(\beta _{l_0}\ge 1\) for at least one index \(l_0\).

We notice that

$$\begin{aligned} \partial _s\left( {{\mathfrak {p}}}_\sigma ^h( k)\right)&={{\mathfrak {p}}}_{\sigma +1}^{h+1}( k),\nonumber \\ {\mathfrak {p}}_{\sigma _1}^{h_1}(k){\mathfrak {p}}_{\sigma _2}^{h_2} (k)&={\mathfrak {p}}_{\sigma _1+\sigma _2}^{\max \{h_1,h_2\}}(k), \end{aligned}$$
(2.26)
$$\begin{aligned} {\mathfrak {p}}_\sigma ^{h_1}(k)+ {\mathfrak {p}}_\sigma ^{h_2}(k)&= {\mathfrak {p}}_\sigma ^{\max \{h_1,h_2\}}(k). \end{aligned}$$
(2.27)
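The order bookkeeping behind Definition 2.18 and the rules (2.26)–(2.27) can be checked mechanically. A minimal Python sketch (the encoding of monomials by their exponent dictionaries is ours), verifying that the formal arclength derivative raises the order \(\sigma \) by exactly one:

```python
from collections import Counter

# A monomial C·prod_l (∂_s^l k)^{β_l} is encoded by the dict {l: β_l};
# its order is σ = Σ (l+1)·β_l, as in Definition 2.18.
def order(mono):
    return sum((l + 1) * b for l, b in mono.items())

def d_s(mono):
    """Formal arclength derivative: the product rule turns one monomial
    into a list of monomials (constants are irrelevant for the order)."""
    out = []
    for j, bj in mono.items():
        if bj == 0:
            continue
        new = Counter(mono)
        new[j] -= 1          # one factor ∂_s^j k is differentiated ...
        new[j + 1] += 1      # ... and becomes ∂_s^{j+1} k
        out.append({l: b for l, b in new.items() if b > 0})
    return out

m = {0: 2, 1: 3}            # example: k^2 (∂_s k)^3, so σ = 1·2 + 2·3 = 8
assert order(m) == 8
# every monomial of ∂_s(𝔭_σ^h) has order σ+1, i.e. ∂_s 𝔭_σ^h = 𝔭_{σ+1}^{h+1}
assert all(order(n) == order(m) + 1 for n in d_s(m))
```

The product rule (2.26) is immediate in this encoding, since orders add when exponent dictionaries are summed.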

By (2.2) the following result holds.

Lemma 2.19

If \(\gamma \) satisfies (2.25), the commutation rule

$$\begin{aligned} \partial _{t}\partial _{s}=\partial _{s}\partial _{t}+\left( kV-\partial _s T\right) \partial _{s}\, \end{aligned}$$

holds. The measure \(\mathrm {d}s\) evolves as

$$\begin{aligned} \partial _t(\mathrm {d}s)=\left( \partial _s T-kV\right) \mathrm {d}s. \end{aligned}$$

Moreover the unit tangent vector, unit normal vector and the j–th derivatives of scalar curvature of a curve satisfy

$$\begin{aligned} \partial _{t}\tau&=\left( \partial _s V+T k\right) \nu ,\nonumber \\ \partial _{t}\nu&=-\left( \partial _s V+T k\right) \tau ,\nonumber \\ \partial _tk&=\left\langle \partial _{t}\varvec{\kappa },\nu \right\rangle =\partial _s^2 V+T\partial _s k+k^{2}V\,\nonumber \\&=-2\partial _{s}^{4}k-5k^{2}\partial _{s}^{2}k-6k\left( \partial _{s}k\right) ^{2} +T\partial _{s}k-k^{5}+\mu \left( \partial _{s}^{2}k+k^3\right) , \end{aligned}$$
(2.28)
$$\begin{aligned} \partial _t\partial _s^j k&=-2\partial _{s}^{j+4}k -5k^2\partial _s^{j+2}k +\mu \,\partial _s^{j+2}k +T\partial _{s}^{j+1}k +{\mathfrak {p}}_{j+5}^{j+1}\left( k\right) + \mu \,{\mathfrak {p}}_{j+3}^{j}(k) \end{aligned}$$
(2.29)
$$\begin{aligned}&= -2\partial _{s}^{j+4}k +T\partial _{s}^{j+1}k +{\mathfrak {p}}_{j+5}^{j+2}\left( k\right) + \mu \,{\mathfrak {p}}_{j+3}^{j+2}(k) . \end{aligned}$$
(2.30)
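The expansion in (2.28) can be verified symbolically. A short sympy check (treating the tangential speed T as a scalar symbol, a simplification we assume for the purpose of the computation):

```python
import sympy as sp

s, T, mu = sp.symbols('s T mu', real=True)
k = sp.Function('k')(s)

# normal velocity of the elastic flow of a single curve
V = -2 * k.diff(s, 2) - k**3 + mu * k

# general evolution law  ∂_t k = ∂_s^2 V + T ∂_s k + k^2 V ...
lhs = V.diff(s, 2) + T * k.diff(s) + k**2 * V

# ... and its expanded form stated in (2.28)
rhs = (-2 * k.diff(s, 4) - 5 * k**2 * k.diff(s, 2) - 6 * k * k.diff(s)**2
       + T * k.diff(s) - k**5 + mu * (k.diff(s, 2) + k**3))

assert sp.simplify(lhs - rhs) == 0
```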

With the help of the previous lemma it is now possible to compute the derivative in time of a general polynomial \({\mathfrak {p}}_{\sigma }^h(k)\). By definition every monomial composing \({\mathfrak {p}}_{\sigma }^h(k)\) is of the form \({\mathfrak {m}}(k)=C \prod _{l=0}^h (\partial _s^lk)^{\beta _l}\) with \(\sum _{l=0}^h(l+1)\beta _l = \sigma \). Then for every fixed \(j\in \{1,\ldots ,h\}\) the monomial \({\mathfrak {n}}(k)=C\beta _{j}(\partial _s^{j}k)^{\beta _j-1} \prod _{l\ne j, l=0}^h (\partial _s^lk)^{\beta _l}\) can be written as \({\mathfrak {n}}(k)={\tilde{C}}\prod _{l=0}^h (\partial _s^lk)^{\alpha _l}\) with \(\sum _{l=0}^h(l+1)\alpha _l = \sigma -j-1\). Differentiating in time \({\mathfrak {m}}(k)\) we have

$$\begin{aligned} \partial _t\left( {\mathfrak {m}}(k)\right)&=\sum _{j=0}^h \left( \left( C\beta _{j}\partial _s^{j}k^{\beta _j-1}\partial _t\partial _s^jk\right) \cdot \prod _{l\ne j, l=0}^h (\partial _s^lk)^{\beta _l}\right) \\&=\sum _{j=0}^h \left( \left( -2\partial _{s}^{j+4}k +T\partial _{s}^{j+1}k +{\mathfrak {p}}_{j+5}^{j+2}\left( k\right) + \mu \,{\mathfrak {p}}_{j+3}^{j+2}(k)\right) \left( C\beta _{j}\partial _s^{j}k^{\beta _j-1}\right) \cdot \prod _{l\ne j, l=0}^h (\partial _s^lk)^{\beta _l}\right) \\&={\mathfrak {p}}_{\sigma +4}^{h+4}(k)+T{\mathfrak {p}}_{\sigma +1}^{h+1}(k)+{\mathfrak {p}}_{\sigma +4}^{h+2}(k)+\mu {\mathfrak {p}}_{\sigma +2}^{h+2}(k), \end{aligned}$$

where we used the product rule (2.26) and the structure of the monomial \({\mathfrak {n}}(k)\). Summing up the contribution of each monomial composing \({\mathfrak {p}}_{\sigma }^h(k)\) we have

$$\begin{aligned} \partial _t\left( {\mathfrak {p}}_{\sigma }^h(k)\right) ={\mathfrak {p}}_{\sigma +4}^{h+4}(k)+T{\mathfrak {p}}_{\sigma +1}^{h+1}(k)+\mu {\mathfrak {p}}_{\sigma +2}^{h+2}(k). \end{aligned}$$
(2.31)

Proposition 2.20

Let \(({\mathcal {N}}_t)\) be a time dependent family of smooth networks in the plane, composed of N curves, possibly with junctions and fixed endpoints. Suppose that \({\mathcal {N}}_t\) is a solution of the elastic flow. Then

$$\begin{aligned} \partial _{t}{\mathcal {E}}_\mu ({\mathcal {N}}_t)&=-\int _{{\mathcal {N}}} V^2\,\mathrm {d}s. \end{aligned}$$

Proof

Using the evolution laws collected in Lemma 2.19, we get

$$\begin{aligned} \partial _{t}\int _{{\mathcal {N}}}k^{2}+\mu \,\mathrm {d}s&=\int _{{\mathcal {N}}}2k\partial _{t}k+\left( k^{2}+\mu \right) \left( \partial _s T-kV\right) \,\mathrm {d}s\\&=\int _{{\mathcal {N}}}2k\left( \partial _s^2V+T \partial _s k+k^{2}V\right) +\left( k^{2}+\mu \right) \left( \partial _s T-kV\right) \,\mathrm {d}s\\&=\int _{{\mathcal {N}}}2k\partial _s^2V+k^3V-\mu kV+\partial _s\left( T\left( k^2+\mu \right) \right) \,\mathrm {d}s. \end{aligned}$$

Integrating twice by parts the term \(\int 2k\partial _s^2 V\) we obtain

$$\begin{aligned} \partial _{t}\int _{{\mathcal {N}}}k^{2}+\mu \,\mathrm {d}s =-\int _{{\mathcal {N}}}V^2\,\mathrm {d}s+\sum _{i=1}^{N} \left. 2k^i\partial _s V^i-2\partial _s k^iV^i+T^i\left( \left( k^i\right) ^2+\mu ^i\right) \right| _{\text {bdry}}.\nonumber \\ \end{aligned}$$
(2.32)
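The pointwise identity behind this integration by parts can be checked symbolically: once the normal velocity is substituted, the integrand equals \(-V^2\) plus an exact derivative. A minimal sympy sketch (the symbol names are ours):

```python
import sympy as sp

s, mu = sp.symbols('s mu', real=True)
k = sp.Function('k')(s)
V = sp.Function('V')(s)

# integrand obtained above (the exact tangential derivative is dropped,
# since it only contributes boundary terms)
integrand = 2 * k * V.diff(s, 2) + k**3 * V - mu * k * V

# claim: integrand = -V^2 + d/ds(2 k ∂_s V - 2 ∂_s k V) once V is the
# normal velocity of the elastic flow
boundary = 2 * k * V.diff(s) - 2 * k.diff(s) * V
Vexpr = -2 * k.diff(s, 2) - k**3 + mu * k

residual = integrand + V**2 - boundary.diff(s)
assert sp.simplify(residual.subs(V, Vexpr).doit()) == 0
```

The term differentiated in `boundary` is exactly the summand evaluated at the endpoints in (2.32), up to the tangential contribution \(T(k^2+\mu )\).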

It remains to show that the contribution of the boundary terms in (2.32) equals zero, whichever boundary conditions we impose at the endpoints among those listed in Definition 2.3. The case of a closed curve is trivial.

Let us start with the case of an endpoint \(\gamma ^j(y)\) (with \(y\in \{0,1\}\), \(j\in \{1,\ldots ,N\}\)) subject to Navier boundary conditions, namely \(k^j(y)=0\). The point remains fixed, which implies \(V^j(y)=T^j(y)=0\), and the term \(2k^j(y)\partial _s V^j(y)\) vanishes because \(k^j(y)=0\).

Suppose instead that the curve is clamped at \(\gamma ^j(y)\) with \(\tau ^j(y)=\tau ^*\). Then using Lemma 2.19, \(0=\partial _t\tau ^j(y)=\left( \partial _s V^j(y)+T^j(y)k^j(y)\right) \nu ^j(y)\). Hence

$$\begin{aligned} 2k^j(y)\left( \partial _s V^j(y)+T^j(y)k^j(y)\right) =0, \end{aligned}$$

which combined with \(V^j(y)=T^j(y)=0\) implies that the boundary terms in (2.32) vanish.

Consider now a junction of order m where natural boundary conditions have been imposed. Up to inverting the orientation of the parametrizations of the curves, we may suppose that all the curves concur at the junction at \(x=0\). The curvature condition \(k^i(0)=0\) for \(i\in \{1,\ldots ,m\}\) gives

$$\begin{aligned} \sum _{i=1}^m 2k^i(0)\partial _s V^i(0)+T^i(0)\left( k^i(0)\right) ^2=0. \end{aligned}$$

Differentiating in time the concurrency condition \(\gamma ^1(0)=\cdots =\gamma ^m(0)\) we obtain

$$\begin{aligned} V^1(0)\nu ^1(0)+T^1(0)\tau ^1(0)=\cdots =V^m(0)\nu ^m(0)+T^m(0)\tau ^m(0), \end{aligned}$$

which combined with the third order condition \(0=\sum _{i=1}^m \left( 2\partial _sk^i(0)\nu ^i(0)-\mu ^i\tau ^i(0)\right) \) gives

$$\begin{aligned} 0&=\left\langle -\partial _t\gamma ^1(0), \sum _{i=1}^m 2\partial _sk^i(0)\nu ^i(0)-\mu ^i\tau ^i(0)\right\rangle \\&=\sum _{i=1}^m \left\langle -V^i(0)\nu ^i(0)-T^i(0)\tau ^i(0), 2\partial _sk^i(0)\nu ^i(0)-\mu ^i\tau ^i(0)\right\rangle \\&= \sum _{i=1}^m-2\partial _sk^i(0)V^i(0)+\mu ^iT^i(0), \end{aligned}$$

hence the boundary terms vanish and we get the desired result.

To conclude, consider a junction of order m where the curves concur at \(x=0\) and suppose that clamped boundary conditions are imposed there. In this case, using the concurrency condition differentiated in time and the third order condition, we find

$$\begin{aligned} 0&= \sum _{i=1}^m \left\langle -\partial _t\gamma ^1(0), 2\partial _sk^i(0)\nu ^i(0)+\left( (k^i(0))^2-\mu ^i\right) \tau ^i(0)\right\rangle \nonumber \\&=\sum _{i=1}^m-2\partial _s k^i(0)V^i(0)-\left( (k^i(0))^2-\mu ^i\right) T^i(0). \end{aligned}$$
(2.33)

Differentiating in time the angle condition

$$\begin{aligned} \left\langle \tau ^i(0),\tau ^{i+1}(0)\right\rangle =c^{i,i+1}=\cos (\theta ^{i,i+1}) \end{aligned}$$

we have

$$\begin{aligned} 0&=\left\langle \partial _t\tau ^i(0),\tau ^{i+1}(0)\right\rangle +\left\langle \tau ^i(0),\partial _t\tau ^{i+1}(0)\right\rangle \\&=\left\langle (\partial _s V^i(0)+T^i(0)k^i(0))\nu ^i(0),\tau ^{i+1}(0)\right\rangle +\left\langle \tau ^i(0), (\partial _s V^{i+1}(0)+T^{i+1}(0)k^{i+1}(0))\nu ^{i+1}(0)\right\rangle \\&=(\partial _s V^i(0)+T^i(0)k^i(0))\sin (\theta ^{i,i+1})-(\partial _s V^{i+1}(0)+T^{i+1}(0)k^{i+1}(0))\sin (\theta ^{i,i+1}), \end{aligned}$$

and hence \(\partial _s V^i(0)+T^i(0)k^i(0)=\partial _s V^{i+1}(0)+T^{i+1}(0)k^{i+1}(0)\). Repeating the previous computation for every \(i\in \{2,\ldots ,m-1\}\) we get

$$\begin{aligned} \partial _s V^{1}(0)+T^{1}(0)k^{1}(0)=\cdots = \partial _s V^{m}(0)+T^{m}(0)k^{m}(0), \end{aligned}$$

which together with the curvature condition \(\sum _{i=1}^m k^i(0)=0\) at the junction gives

$$\begin{aligned} 0=2\left( \partial _s V^{1}(0)+T^{1}(0)k^{1}(0)\right) \sum _{i=1}^m k^i(0)=\sum _{i=1}^m 2\partial _s V^{i}(0)k^i(0)+2T^{i}(0)(k^{i})^2(0). \end{aligned}$$

Summing this last equality with (2.33) we see that the boundary terms vanish also in this case. \(\square \)

3 Short Time Existence

We prove a short time existence result for the elastic flow of closed curves. We then explain how it can be generalized to other situations and what the main difficulties are that arise when passing from a single curve to networks.

3.1 Short Time Existence of the Special Flow

First of all we aim to prove the existence of a solution to the special flow. Omitting the dependence on (t, x), we can write the motion equation of a curve subject to (2.18) as

$$\begin{aligned} \partial _t\varphi =-2\frac{\partial _x^4\varphi }{\vert \partial _x\varphi \vert ^4} +{\widetilde{f}}(\partial _x^3\varphi ,\partial _x^2\varphi ,\partial _x\varphi ). \end{aligned}$$

We linearize the highest order terms of the previous equation around the initial parametrization \(\varphi ^0\) obtaining

$$\begin{aligned} \partial _t\varphi +\frac{2}{\vert \partial _x\varphi ^0\vert ^4}\partial _x^4\varphi&=\left( \frac{2}{\vert \partial _x\varphi ^0\vert ^4} -\frac{2}{\vert \partial _x\varphi \vert ^4}\right) \partial _x^4\varphi +{\widetilde{f}}(\partial _x^3\varphi ,\partial _x^2\varphi ,\partial _x\varphi )\nonumber \\&=:f(\partial _x^4\varphi ,\partial _x^3\varphi ,\partial _x^2\varphi ,\partial _x\varphi ). \end{aligned}$$
(3.1)

Definition 3.1

Given \(\varphi ^0:{\mathbb {S}}^1\rightarrow {\mathbb {R}}^2\) an admissible initial parametrization for (2.18), the linearized system about \(\varphi ^0\) associated to the special flow of a closed curve is given by

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{array}{lll} \partial _t\varphi (t,x)+2\frac{\partial _x^4\varphi (t,x)}{\vert \partial _x\varphi ^0(x)\vert ^4}&{}=f(t,x)&{}\;\text {on }[0,T]\times {\mathbb {S}}^1\\ \varphi (0,x)&{}=\psi (x) &{}\;\text {on }{\mathbb {S}}^1. \end{array} \end{array}\right. } \end{aligned}$$
(3.2)

Here \((f,\psi )\) is a generic pair to be specified later on.

Let \(\alpha \in (0,1)\) be fixed. Whenever a curve \(\gamma \) is regular, there exists a constant \(c>0\) such that \(\inf _{x\in {\mathbb {S}}^1}\vert \partial _x\gamma \vert \ge c\). From now on we fix an admissible initial parametrization \(\varphi ^0\) with

$$\begin{aligned} \Vert \varphi ^0 \Vert _{C^{4+\alpha }({\mathbb {S}}^1;{\mathbb {R}}^2)}=R,\quad \text {and}\quad \inf _{x\in {\mathbb {S}}^1}\vert \partial _x\varphi ^0(x)\vert \ge c. \end{aligned}$$

Then for every \(j\in {\mathbb {N}}\) there holds

$$\begin{aligned} \left\Vert \frac{1}{\vert \partial _x\varphi ^0\vert ^j} \right\Vert _{C^{\alpha }({\mathbb {S}}^1)}\le C(R,c) . \end{aligned}$$

Definition 3.2

For \(T>0\) we consider the linear spaces

$$\begin{aligned} {\mathbb {E}}_T:=&\, C ^{\frac{4+\alpha }{4},4+\alpha }\left( [0,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2\right) ,\\ {\mathbb {F}}_T:=&\, C^{\frac{\alpha }{4},\alpha }\left( [0,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2\right) \times C^{4+\alpha }\left( {\mathbb {S}}^1;{\mathbb {R}}^2\right) , \end{aligned}$$

endowed with the norms

$$\begin{aligned} \Vert \gamma \Vert _{{\mathbb {E}}_T}:=\Vert \gamma \Vert _{C^{\frac{4+\alpha }{4},4+\alpha }},\quad \Vert (f,\psi )\Vert _{{\mathbb {F}}_T}:=\Vert f\Vert _{C^{\frac{\alpha }{4},\alpha }}+ \Vert \psi \Vert _{C^{4+\alpha }}, \end{aligned}$$

and we define the operator \({\mathcal {L}}_{T}:{\mathbb {E}}_T\rightarrow {\mathbb {F}}_T\) by

$$\begin{aligned} {\mathcal {L}}_{T}(\varphi ):=\left( {\mathcal {L}}^1_{T}(\varphi ),{\mathcal {L}}^2_{T}(\varphi )\right) :=\left( \partial _t\varphi +\frac{2}{\vert \partial _x\varphi ^0\vert ^4}\partial _x^4\varphi , \varphi _{\vert t=0}\right) . \end{aligned}$$

Remark 3.3

For every \(T>0\) the operator \({\mathcal {L}}_{T}:{\mathbb {E}}_T\rightarrow {\mathbb {F}}_T\) is well-defined, linear and continuous.

Theorem 3.4

Let \(\alpha \in (0,1)\) and \(T>0\), and let \((f,\psi )\in {\mathbb {F}}_T\). Then the system (3.2) has a unique solution \(\varphi \in {\mathbb {E}}_T\). Moreover, there exists \(C(T)>0\) such that if \(\varphi \in {\mathbb {E}}_T\) is a solution, then

$$\begin{aligned} \Vert \varphi \Vert _{{\mathbb {E}}_T } \le C(T)\Vert (f,\psi )\Vert _{{\mathbb {F}}_T}. \end{aligned}$$
(3.3)

Proof

See for instance [33, Theorem 4.3.1] and [51, Theorem 4.9]. \(\square \)

From the above theorem we get the following consequence.

Corollary 3.5

The linear operator \({\mathcal {L}}_{T}:{\mathbb {E}}_T\rightarrow {\mathbb {F}}_T\) is a continuous isomorphism.

By the above corollary, we can denote by \({\mathcal {L}}^{-1}_T\) the inverse of \({\mathcal {L}}_T\).

Notice that till now we have considered a fixed \(T>0\) and derived (3.3), where the constant C depends on T. Now, once a certain interval of time \((0, {\widetilde{T}}]\) with \({\widetilde{T}}>0\) is chosen, we show that for every \(T\in (0, {\widetilde{T}}]\) it is possible to estimate the norm of \({\mathcal {L}}^{-1}_T\) with a constant independent of T.

Lemma 3.6

For all \({\widetilde{T}}>0\) there exists a constant \(c({\widetilde{T}})\) such that

$$\begin{aligned} \sup _{T\in (0,\frac{1}{2}{\widetilde{T}}]}\Vert {\mathcal {L}}_T^{-1}\Vert _{{\mathcal {L}}({\mathbb {F}}_T,{\mathbb {E}}_T)}\le c({\widetilde{T}}). \end{aligned}$$

Proof

Fix \({\widetilde{T}}>0\). For all \(T\in (0,\frac{1}{2}{\widetilde{T}}]\) and for every \((f,\psi )\in {\mathbb {F}}_T\) we define the extension operator \(E(f,\psi ):=({\widetilde{E}}f,\psi )\) by

$$\begin{aligned}&{\widetilde{E}}:C ^{\frac{\alpha }{4},\alpha }\left( [0,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2\right) \rightarrow C ^{\frac{\alpha }{4},\alpha }\left( [0,{\widetilde{T}}]\times {\mathbb {S}}^1;{\mathbb {R}}^2\right) \\&\quad {\widetilde{E}}f(t,x):= {\left\{ \begin{array}{ll} f(t,x) &{}\text {for}\, t\in [0,T],\\ f\left( T\frac{{\widetilde{T}}-t}{{\widetilde{T}}-T},x\right) &{}\text {for}\, t\in (T,{\widetilde{T}}]. \end{array}\right. } \end{aligned}$$

It is clear that \(E(f,\psi )\in {\mathbb {F}}_{{\widetilde{T}}}\) and that \(\Vert E \Vert _{{\mathcal {L}}({\mathbb {F}}_T,{\mathbb {F}}_{{\widetilde{T}}})}\le 1\).
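The extension operator admits a direct numerical sketch (the sample datum f below is hypothetical; we only check continuity at \(t=T\) and that the reflected times fall back into [0, T], so that no new values of f are created):

```python
import numpy as np

# Time-reflection extension Ẽ from [0, T] to [0, T̃]: past T the time
# variable is mapped affinely back into [0, T].
def extend(f, T, T_tilde):
    def Ef(t):
        return f(t) if t <= T else f(T * (T_tilde - t) / (T_tilde - T))
    return Ef

T, T_tilde = 0.5, 1.0
f = lambda t: np.sin(3 * t)          # sample datum on [0, T]
Ef = extend(f, T, T_tilde)

assert np.isclose(Ef(T), f(T))            # continuous at t = T
assert np.isclose(Ef(T_tilde), f(0.0))    # t = T̃ is mapped back to t = 0
# the reflected times stay inside [0, T]
ts = np.linspace(0, T_tilde, 200)
assert all(0.0 <= T * (T_tilde - t) / (T_tilde - T) <= T for t in ts if t > T)
```

Since the reflected time variable is Lipschitz in t with constant \(T/({\widetilde{T}}-T)\le 1\) when \(T\le \frac{1}{2}{\widetilde{T}}\), the Hölder seminorm in time does not increase, consistently with \(\Vert E \Vert _{{\mathcal {L}}({\mathbb {F}}_T,{\mathbb {F}}_{{\widetilde{T}}})}\le 1\).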

Moreover \({\mathcal {L}}^{-1}_{{\widetilde{T}}}(E(f,\psi ))_{\vert [0,T]}={\mathcal {L}}^{-1}_T(f,\psi )\) by uniqueness and then

$$\begin{aligned} \Vert {\mathcal {L}}^{-1}_T(f,\psi ) \Vert _{{\mathbb {E}}_T}&\le \Vert {\mathcal {L}}^{-1}_{{\widetilde{T}}}(E(f,\psi )) \Vert _{{\mathbb {E}}_{{\widetilde{T}}}}\\&\le \Vert {\mathcal {L}}_{{\widetilde{T}}}^{-1}\Vert _{{\mathcal {L}}({\mathbb {F}}_{{\widetilde{T}}},{\mathbb {E}}_{{\widetilde{T}}})} \Vert E(f,\psi )\Vert _{{\mathbb {F}}_{{\widetilde{T}}}}\le c({\widetilde{T}})\Vert (f,\psi )\Vert _{{\mathbb {F}}_{T}}. \end{aligned}$$

\(\square \)

Definition 3.7

We define the affine spaces

$$\begin{aligned} {\mathbb {E}}^0_T&=\{\gamma \in {\mathbb {E}}_T\,\text {such that }\,\gamma _{\vert t=0}=\varphi ^0\} ,\\ {\mathbb {F}}^0_T&=C^{\frac{\alpha }{4},\alpha }\left( [0,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2\right) \times \{\varphi ^0\} . \end{aligned}$$

In the following we denote by \(\overline{B_M}\) the closed ball of radius M and center 0 in \({\mathbb {E}}_T\).

Lemma 3.8

Let \({\widetilde{T}}>0\), \(M>0\), \(c>0\) and let \(\varphi ^0\) be an admissible initial parametrization with \(\inf _{x\in {\mathbb {S}}^1}\vert \partial _x\varphi ^0\vert \ge c\). Then there exists \({\widehat{T}}={\widehat{T}}(c,M)\in (0,{\widetilde{T}}]\) such that for all \(T\in (0,{\widehat{T}}]\) every curve \(\varphi \in {\mathbb {E}}^0_T\cap \overline{B_M}\) is regular with

$$\begin{aligned} \inf _{x\in {\mathbb {S}}^1}\vert \partial _x\varphi (t,x)\vert \ge \frac{c}{2}. \end{aligned}$$
(3.4)

Moreover for every \(j\in {\mathbb {N}}\)

$$\begin{aligned} \left\Vert \frac{1}{\vert \partial _x\varphi (t,x)\vert ^j}\right\Vert _{C^{\frac{\alpha }{4},\alpha }([0,T]\times {\mathbb {S}}^1)} \le C(c, M, j). \end{aligned}$$

Proof

We have

$$\begin{aligned} \vert \partial _x\varphi (t,x)\vert \ge \vert \partial _x \varphi ^0(x)\vert -\vert \partial _x\varphi (t,x)-\partial _x \varphi ^0(x)\vert , \end{aligned}$$

with \(\vert \partial _x\varphi (t,x)-\partial _x \varphi ^0(x)\vert \le \left[ \varphi \right] _{\beta ,0}t^\beta \le Mt^\beta \), where \(\beta =\frac{3}{4}+\frac{\alpha }{4}\). Taking \({\widehat{T}}\) sufficiently small and passing to the infimum we get the first claim. As a consequence

$$\begin{aligned} \sup _{x\in {\mathbb {S}}^1}\frac{1}{\vert \partial _x\varphi (t,x)\vert }\le \frac{2}{c}. \end{aligned}$$
(3.5)

Then for \(j=1\) the second estimate follows directly by combining (3.5) with the definition of the norm \(\Vert \cdot \Vert _{C^{\frac{\alpha }{4},\alpha }([0,T]\times {\mathbb {S}}^1)}\). The case \(j\ge 2\) follows from the multiplicativity of the norm. \(\square \)
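The smallness condition in the proof can be made explicit: any \({\widehat{T}}\) with \(M{\widehat{T}}^\beta \le c/2\) works. A numerical sketch with purely illustrative constants (the values of \(\alpha \), c and M below are our assumptions):

```python
# Explicit smallness condition from the proof: |∂_x φ(t,x)| ≥ c − M t^β,
# so any T̂ with M T̂^β ≤ c/2 gives the lower bound (3.4).
alpha = 0.5
beta = 3.0 / 4.0 + alpha / 4.0       # Hölder exponent in time of ∂_x φ
c, M = 0.2, 10.0                     # hypothetical constants

T_hat = (c / (2.0 * M)) ** (1.0 / beta)
assert M * T_hat ** beta <= c / 2.0 + 1e-12       # smallness condition holds
assert c - M * T_hat ** beta >= c / 2.0 - 1e-12   # hence |∂_x φ| ≥ c/2
```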

From now on we fix \({\widetilde{T}}=1\) and we denote by \({\widehat{T}}={\widehat{T}}(c,M)\) the time given by Lemma 3.8 for given c and M.

Definition 3.9

For every \(T\in (0,{\hat{T}}]\) we define the map

$$\begin{aligned} N_{T}: {\left\{ \begin{array}{ll} {\mathbb {E}}^0_T \rightarrow C ^{\frac{\alpha }{4},\alpha }([0,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2)\\ \varphi \mapsto f(\varphi ), \end{array}\right. } \end{aligned}$$

where the function \(f(\varphi ):=f(\partial _x^4\varphi ,\partial _x^3\varphi ,\partial _x^2\varphi ,\partial _x\varphi )\) is defined in (3.1). Moreover we introduce the map \({\mathcal {N}}_T\) given by \({\mathbb {E}}^0_T \ni \gamma \,\mapsto (N_{T}(\gamma ),\gamma \vert _{t=0})\).

Remark 3.10

We remind that f is given by

$$\begin{aligned} f(\varphi )&=\left( \frac{2}{\vert \partial _x \varphi ^0\vert ^4} -\frac{2}{\vert \partial _x\varphi \vert ^4}\right) \partial _x^4\varphi +12\frac{\partial _x^3\varphi \left\langle \partial _x^2\varphi ,\partial _x\varphi \right\rangle }{\left| \partial _x\varphi \right| ^{6}} +5\frac{\partial _x^2\varphi \left| \partial _x^2\varphi \right| ^{2}}{\left| \partial _x\varphi \right| ^{6}}\\&\quad +8\frac{\partial _x^2\varphi \left\langle \partial _x^3\varphi ,\partial _x\varphi \right\rangle }{\left| \partial _x\varphi \right| ^{6}} -35\frac{\partial _x^2\varphi \left\langle \partial _x^2\varphi ,\partial _x\varphi \right\rangle ^{2}}{\left| \partial _x\varphi \right| ^{8}}+\mu \frac{\partial _x^2\varphi }{\vert \partial _x\varphi \vert ^2}. \end{aligned}$$

By Lemma 3.8, for \(\varphi \in {\mathbb {E}}^0_T\), we have that for all \(t\in [0,T]\) the map \(\varphi (t)\) is a regular curve. Hence \(N_{T}\) is well–defined. Furthermore we notice that \({\mathcal {N}}_T\) maps \({\mathbb {E}}^0_T\) to \({\mathbb {F}}^0_T\).

The following lemma is a classical result on parabolic Hölder spaces. For a proof see for instance [33].

Lemma 3.11

Let \(k\in \{1,2,3\}\), \(T\in [0,1]\) and \(\varphi ,{\widetilde{\varphi }}\in {\mathbb {E}}_T^0\). We denote by \(\varphi ^{(4-k)}, {\widetilde{\varphi }}^{(4-k)}\) the \((4-k)\)–th space derivative of \(\varphi \) and \({\widetilde{\varphi }}\), respectively. Then there exist \(\varepsilon >0\) and a constant \({\widetilde{C}}\) independent of T such that

$$\begin{aligned} \left\Vert \varphi ^{(4-k)}-{\widetilde{\varphi }}^{(4-k)} \right\Vert _{C^{\frac{\alpha }{4},\alpha }} \le {\widetilde{C}}T^{\varepsilon }\left\Vert \varphi ^{(4-k)}-{\widetilde{\varphi }}^{(4-k)} \right\Vert _{C^{\frac{k+\alpha }{4},k+\alpha }} \le {\widetilde{C}}T^{\varepsilon }\left\Vert \varphi -{\widetilde{\varphi }} \right\Vert _{{\mathbb {E}}_T}. \end{aligned}$$

Definition 3.12

Let \(\varphi ^0\) be an admissible initial parametrization, \(c:=\inf _{x\in {\mathbb {S}}^1}\vert \partial _x \varphi ^0\vert \). For a positive M and a time \(T\in (0,{\widehat{T}}(c,M)]\) we define \({\mathcal {K}}_T:{\mathbb {E}}^0_T\cap \overline{B_M}\rightarrow {\mathbb {E}}^0_T\) by

$$\begin{aligned} {\mathcal {K}}_T:={\mathcal {L}}_T^{-1}\circ {\mathcal {N}}_T. \end{aligned}$$

Proposition 3.13

Let \(\varphi ^0\) be an admissible initial parametrization and \(c:=\inf _{x\in {\mathbb {S}}^1}\vert \partial _x \varphi ^0\vert \). Then there exist a radius \(M(\varphi ^0)>\Vert \varphi ^0\Vert _{C^{4+\alpha }}\) and a time \({\overline{T}}(c,M)>0\) such that for all \(T\in (0,{\overline{T}}]\) the map \({\mathcal {K}}_T:{\mathbb {E}}^0_T\cap \overline{B_M}\rightarrow {\mathbb {E}}^0_T\) takes values in \({\mathbb {E}}^0_T\cap \overline{B_M}\) and is a contraction.

In the following proof constants may vary from line to line and depend on c, M and \(\Vert \varphi ^0\Vert _{C^{4+\alpha }}\).

Proof

Let \(M>0\) and \({\widetilde{T}}>0\) be arbitrary positive numbers. Let \({\widehat{T}}(c,M)\) be given by Lemma 3.8 and assume without loss of generality that \({\widehat{T}}(c,M)<\tfrac{1}{2} {\widetilde{T}}\). Let \(T\in (0,{\widehat{T}}(c,M)]\) be a generic time.

Clearly \({\mathcal {L}}_T^{-1}({\mathbb {F}}_T^0)\subseteq {\mathbb {E}}^0_T\), and \({\mathcal {K}}_T\) is well defined on \({\mathbb {E}}^0_T\cap \overline{B_M}\).

First we show that there exists a time \(T'\in (0,{\widehat{T}}(c,M))\) such that for all \(T\in (0,T']\), for every \(\varphi ,{\widetilde{\varphi }} \in {\mathbb {E}}^0_T\cap \overline{B_M}\), it holds

$$\begin{aligned} \Vert {\mathcal {K}}_T(\varphi )- {\mathcal {K}}_T({\widetilde{\varphi }})\Vert _{{\mathbb {E}}_T} \le \frac{1}{2}\Vert \varphi - {\widetilde{\varphi }}\Vert _{{\mathbb {E}}_T}. \end{aligned}$$
(3.6)

We begin by estimating

$$\begin{aligned}&\Vert N_{T}(\varphi )-N_{T}({\widetilde{\varphi }})\Vert _{C^{\frac{\alpha }{4},\alpha }} = \Vert f(\varphi )-f({\widetilde{\varphi }})\Vert _{C^{\frac{\alpha }{4},\alpha }}. \end{aligned}$$

The highest order term in the above norm is

$$\begin{aligned} \begin{aligned}&\left( \frac{2}{\vert \partial _x \varphi ^0\vert ^4} -\frac{2}{\vert \partial _x\varphi \vert ^4}\right) \partial _x^4\varphi + \left( \frac{2}{\vert \partial _x {\widetilde{\varphi }}\vert ^4} -\frac{2}{\vert \partial _x \varphi ^0\vert ^4}\right) \partial _x^4 {\widetilde{\varphi }} \\&\quad = \left( \frac{2}{\vert \partial _x \varphi ^0\vert ^4} -\frac{2}{\vert \partial _x\varphi \vert ^4}\right) \left( \partial _x^4\varphi -\partial _x^4 {\widetilde{\varphi }}\right) + \left( \frac{2}{\vert \partial _x {\widetilde{\varphi }}\vert ^4} -\frac{2}{\vert \partial _x\varphi \vert ^4}\right) \partial _x^4 {\widetilde{\varphi }} \end{aligned} \end{aligned}$$
(3.7)

We can rewrite the above expression using the identity

$$\begin{aligned} \frac{1}{\vert a \vert ^4} -\frac{1}{\vert b \vert ^4}= \left( \vert b \vert -\vert a \vert \right) \left( \frac{1}{\vert a \vert ^2\vert b \vert } +\frac{1}{\vert a \vert \vert b \vert ^2}\right) \left( \frac{1}{\vert a \vert ^2} +\frac{1}{\vert b \vert ^2}\right) . \end{aligned}$$
(3.8)

We get

$$\begin{aligned}&\left( \frac{2}{\vert \partial _x \varphi ^0\vert ^4} -\frac{2}{\vert \partial _x\varphi \vert ^4}\right) \\&\quad = \left( \vert \partial _x\varphi \vert -\vert \partial _x \varphi ^0 \vert \right) \left( \frac{1}{\vert \partial _x \varphi ^0 \vert ^2\vert \partial _x\varphi \vert } +\frac{1}{\vert \partial _x \varphi ^0 \vert \vert \partial _x\varphi \vert ^2}\right) \left( \frac{1}{\vert \partial _x \varphi ^0 \vert ^2} +\frac{1}{\vert \partial _x\varphi \vert ^2}\right) . \end{aligned}$$
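Identity (3.8) is elementary and can be confirmed with a one-line symbolic check (the symbols a, b stand in for \(\vert \partial _x\varphi ^0\vert \) and \(\vert \partial _x\varphi \vert \)):

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)   # stand-ins for |∂_x φ^0|, |∂_x φ|

lhs = 1 / a**4 - 1 / b**4
rhs = (b - a) * (1 / (a**2 * b) + 1 / (a * b**2)) * (1 / a**2 + 1 / b**2)

assert sp.simplify(lhs - rhs) == 0        # identity (3.8)
```

The point of the factorization is the prefactor \(b-a\): it turns the difference of the coefficients into a quantity controlled by \(\Vert \partial _x\varphi -\partial _x\varphi ^0\Vert \).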

In order to control \(\left( \frac{1}{\vert \partial _x \varphi ^0 \vert ^2\vert \partial _x\varphi \vert } +\frac{1}{\vert \partial _x \varphi ^0 \vert \vert \partial _x\varphi \vert ^2}\right) \left( \frac{1}{\vert \partial _x \varphi ^0 \vert ^2} +\frac{1}{\vert \partial _x\varphi \vert ^2}\right) \) we use Lemma 3.8. Now we identify \(\varphi ^0\) with its constant in time extension \(\psi ^0(t,x):=\varphi ^0(x)\), which belongs to \({\mathbb {E}}_T^0\) for arbitrary T. Observe that \(\Vert \psi ^0\Vert _{{\mathbb {E}}_T} = \Vert \psi ^0\Vert _{C^{\frac{4+\alpha }{4},4+\alpha }} = \Vert \varphi ^0\Vert _{C^{4+\alpha }}\) is independent of T. Then making use of Lemma 3.11 we obtain

$$\begin{aligned} \left\Vert \vert \partial _x\varphi \vert -\vert \partial _x\psi ^0\vert \right\Vert _{C^{\frac{\alpha }{4},\alpha }} \le \left\Vert \partial _x\varphi -\partial _x\psi ^0 \right\Vert _{C^{\frac{\alpha }{4},\alpha }} \le C T^{\varepsilon } \Vert \varphi -\psi ^0\Vert _{{\mathbb {E}}_T} \le CM T^{\varepsilon }. \end{aligned}$$

Then

$$\begin{aligned} \left\Vert \left( \frac{2}{\vert \partial _x \varphi ^0\vert ^4} -\frac{2}{\vert \partial _x\varphi \vert ^4}\right) \left( \partial _x^4\varphi -\partial _x^4 {\widetilde{\varphi }}\right) \right\Vert _{C^{\frac{\alpha }{4},\alpha }} \le C MT^{\varepsilon } \Vert \varphi -{\widetilde{\varphi }}\Vert _{{\mathbb {E}}_T}. \end{aligned}$$

A similar argument allows us to write

$$\begin{aligned} \left\Vert \left( \frac{2}{\vert \partial _x {\widetilde{\varphi }}\vert ^4} -\frac{2}{\vert \partial _x\varphi \vert ^4}\right) \partial _x^4 {\widetilde{\varphi }} \right\Vert _{C^{\frac{\alpha }{4},\alpha }} \le C MT^{\varepsilon } \Vert \varphi -{\widetilde{\varphi }}\Vert _{{\mathbb {E}}_T}. \end{aligned}$$
(3.9)

The lower order terms of \(f(\varphi )-f({\widetilde{\varphi }})\) are of the form

$$\begin{aligned} \frac{a\left\langle b,c\right\rangle }{\vert d\vert ^j} -\frac{{\tilde{a}}\langle {\tilde{b}},{\tilde{c}}\rangle }{\vert {\tilde{d}}\vert ^{j}}, \end{aligned}$$
(3.10)

with \(j\in \{2,6,8\}\) and with \(a,b,c,d,{\tilde{a}},{\tilde{b}},{\tilde{c}},{\tilde{d}}\) space derivatives up to order three of \(\varphi \) and \({\widetilde{\varphi }}\), respectively. Adding and subtracting the expression

$$\begin{aligned} \frac{{\tilde{a}}\left\langle b,c\right\rangle }{\vert d\vert ^j}+\frac{{\tilde{a}}\langle {\tilde{b}},c\rangle }{\vert d\vert ^j}+ \frac{{\tilde{a}}\langle {\tilde{b}},{\tilde{c}}\rangle }{\vert d\vert ^j} \end{aligned}$$

to (3.10), we get

$$\begin{aligned} \frac{(a-{\tilde{a}})\left\langle b,c\right\rangle }{\vert d\vert ^j}+ \frac{{\tilde{a}}\left\langle (b-{\tilde{b}}),c\right\rangle }{\vert d\vert ^j} +\frac{{\tilde{a}}\left\langle {\tilde{b}},(c-{\tilde{c}})\right\rangle }{\vert d\vert ^j} +\left( \frac{1}{\vert d\vert ^j}-\frac{1}{\vert {\tilde{d}}\vert ^j}\right) {\tilde{a}}\left\langle {\tilde{b}},{\tilde{c}}\right\rangle . \end{aligned}$$
(3.11)

With the help of Lemma 3.11 we can estimate the first term of (3.11) in the following way:

$$\begin{aligned} \left\Vert \frac{(a-{\tilde{a}})\left\langle b,c\right\rangle }{\vert d\vert ^j} \right\Vert _{C^{\frac{\alpha }{4},\alpha }} \le C\Vert a-{\tilde{a}} \Vert _{C^{\frac{\alpha }{4},\alpha }} \le CT^{\varepsilon }\Vert \varphi -{\widetilde{\varphi }}\Vert _{{\mathbb {E}}_T}. \end{aligned}$$

The second and the third term of (3.11) can be estimated similarly, using the Cauchy–Schwarz inequality. To obtain the desired estimate for the last term of (3.11) we proceed as for the second term of (3.7). We use the identities

$$\begin{aligned} \frac{1}{\vert d \vert ^2} -\frac{1}{\vert {\tilde{d}} \vert ^2}&= \left( \vert {\tilde{d}} \vert -\vert d \vert \right) \left( \frac{1}{\vert d \vert ^2\vert {\tilde{d}} \vert } +\frac{1}{\vert d \vert \vert {\tilde{d}} \vert ^2}\right) ,\\ \frac{1}{\vert d \vert ^j} -\frac{1}{\vert {\tilde{d}} \vert ^j}&= \left( \vert {\tilde{d}} \vert -\vert d \vert \right) \left( \frac{1}{\vert d \vert ^2\vert {\tilde{d}} \vert } +\frac{1}{\vert d \vert \vert {\tilde{d}} \vert ^2}\right) \sum _{l=0}^{j/2-1}\frac{1}{\vert d \vert ^{2l}\,\vert {\tilde{d}} \vert ^{j-2-2l}}, \end{aligned}$$

for \(j\in \{6,8\}\) and Lemmas 3.8 and 3.11 and we finally get

$$\begin{aligned} \left\Vert \left( \frac{1}{\vert d\vert ^j}-\frac{1}{\vert {\tilde{d}}\vert ^j}\right) {\tilde{a}}\left\langle {\tilde{b}},{\tilde{c}}\right\rangle \right\Vert _{C^{\frac{\alpha }{4},\alpha }} \le C T^{\varepsilon }\Vert d-{\tilde{d}}\Vert _{C^{\frac{\alpha }{4},\alpha }} \le C T^{\varepsilon }\Vert \varphi -{\widetilde{\varphi }}\Vert _{{\mathbb {E}}_T}. \end{aligned}$$
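As a purely numerical sanity check, outside the argument above, the telescoping factorization of \(1/\vert d\vert ^j-1/\vert {\tilde{d}}\vert ^j\) can be tested on random values; the variables a and b below are hypothetical stand-ins for \(\vert d\vert \) and \(\vert {\tilde{d}}\vert \).

```python
# Numerical check of  1/a^j - 1/b^j =
#   (b - a)(1/(a^2 b) + 1/(a b^2)) * sum_{l=0}^{j/2-1} 1/(a^{2l} b^{j-2-2l}),
# with a = |d|, b = |d~| drawn at random (an illustrative check only).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    a, b = rng.uniform(0.5, 3.0, size=2)
    # j = 2 case: 1/a^2 - 1/b^2 = (b - a)(1/(a^2 b) + 1/(a b^2))
    assert abs((1 / a**2 - 1 / b**2) - (b - a) * (1 / (a**2 * b) + 1 / (a * b**2))) < 1e-10
    for j in (6, 8):
        lhs = 1 / a**j - 1 / b**j
        rhs = (b - a) * (1 / (a**2 * b) + 1 / (a * b**2)) * sum(
            1 / (a**(2 * l) * b**(j - 2 - 2 * l)) for l in range(j // 2))
        assert abs(lhs - rhs) < 1e-10
print("factorizations verified")
```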

Putting the above inequalities together we have

$$\begin{aligned} \Vert f(\varphi )-f({\widetilde{\varphi }})\Vert _{C^{\frac{\alpha }{4},\alpha }}\le C T^{\varepsilon }\Vert \varphi -{\widetilde{\varphi }}\Vert _{{\mathbb {E}}_T}. \end{aligned}$$

By Lemma 3.6, this implies that for all \(T\in (0,{\widehat{T}}(M,c)]\)

$$\begin{aligned} \Vert {\mathcal {K}}_T(\varphi ) -{\mathcal {K}}_T({\widetilde{\varphi }})\Vert _{{\mathbb {E}}_T}= & {} \Vert {\mathcal {L}}^{-1}_T({\mathcal {N}}_T(\varphi )) -{\mathcal {L}}^{-1}_T({\mathcal {N}}_T({\widetilde{\varphi }}))\Vert _{{\mathbb {E}}_T}\nonumber \\\le & {} \sup _{T\in [0,{\widehat{T}}]}\Vert {\mathcal {L}}^{-1}_T\Vert _{{\mathcal {L}}({\mathbb {F}}_T,{\mathbb {E}}_T)} \Vert {\mathcal {N}}_T(\varphi )-{\mathcal {N}}_T({\widetilde{\varphi }})\Vert _{{\mathbb {F}}_T}\nonumber \\\le & {} C(M,c,{\widetilde{T}})T^\varepsilon \Vert \varphi -{\widetilde{\varphi }}\Vert _{{\mathbb {E}}_T}, \end{aligned}$$
(3.12)

with \(0<\varepsilon <1\). Choosing \(T'\) small enough, we conclude that for every \(T\in (0,T']\) the inequality (3.6) holds.

In order to conclude the proof it remains to show that M can be chosen sufficiently large so that \({\mathcal {K}}_T\) maps \({\mathbb {E}}^0_T\cap \overline{B_M}\) into itself.

As before we identify \(\varphi ^0(x)\) with its constant in time extension \(\psi ^0(t,x)\). Notice that the expressions \({\mathcal {K}}_T(\psi ^0)\) and \({\mathcal {N}}_T(\psi ^0)\) are then well defined.

As M is an arbitrary positive constant, let us choose M at the beginning, depending on \(\varphi ^0\) and \({\widetilde{T}}\) only, so that

$$\begin{aligned} \Vert \psi ^0\Vert _{{\mathbb {E}}_T} = \Vert \varphi ^0\Vert _{C^{4+\alpha }} < \frac{M}{2} \qquad \forall \,T>0, \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \Vert {\mathcal {K}}_T(\psi ^0)\Vert _{{\mathbb {E}}_T}&\le \sup _{T\in [0,{\widetilde{T}}/2-\delta ]} \Vert {\mathcal {L}}_T^{-1}\Vert _{{\mathcal {L}}({\mathbb {F}}_T,{\mathbb {E}}_T)}\Vert {\mathcal {N}}_T(\psi ^0)\Vert _{{\mathbb {F}}_T} \\&= \sup _{T\in [0,{\widetilde{T}}/2-\delta ]} \Vert {\mathcal {L}}_T^{-1}\Vert _{{\mathcal {L}}({\mathbb {F}}_T,{\mathbb {E}}_T)}\Vert (f(\varphi ^0),\varphi ^0)\Vert _{{\mathbb {F}}_T} \\&\le c({\widetilde{T}}) C(\varphi ^0) \\&< \frac{M}{2} \qquad \forall \,\delta >0, \end{aligned} \end{aligned}$$

where we used Lemma 3.6 and the fact that \(\Vert (f(\varphi ^0),\varphi ^0)\Vert _{{\mathbb {F}}_T}\) is independent of time and can therefore be estimated by a constant \(C(\varphi ^0)\) depending only on \(\varphi ^0\). For \(T\in (0,T']\), as \(T'\le {\widehat{T}}(c,M) \le \tfrac{1}{2}{\widetilde{T}}-\delta \) for some positive \(\delta \), we also have

$$\begin{aligned} \begin{aligned} \Vert {\mathcal {K}}_T(\varphi )\Vert _{{\mathbb {E}}_T}&\le \Vert {\mathcal {K}}_T(\psi ^0)\Vert _{{\mathbb {E}}_T} + \Vert {\mathcal {K}}_T(\varphi ) - {\mathcal {K}}_T(\psi ^0)\Vert _{{\mathbb {E}}_T} \\&< \frac{M}{2} + C(M,c,{\widetilde{T}})T^\varepsilon 2M, \end{aligned} \end{aligned}$$

for any \(\varphi \in {\mathbb {E}}^0_T\cap \overline{B_M}\), where we used (3.12). It follows that, taking \(T\le T'\) sufficiently small, \({\mathcal {K}}_T\) maps \({\mathbb {E}}^0_T\cap \overline{B_M}\) into itself and is a contraction.

\(\square \)

Theorem 3.14

Let \(\varphi ^0\) be an admissible initial parametrization. There exists a positive radius M and a positive time T such that the special flow (2.18) of closed curves has a unique solution in \(C^{\frac{4+\alpha }{4},4+\alpha }\left( [0,T]\times {\mathbb {S}}^1\right) \cap \overline{B_M}\).

Proof

Choosing M and \({\overline{T}}\) as in Proposition 3.13, for every \(T\in (0,{\overline{T}}]\) the map \({\mathcal {K}}_T:{\mathbb {E}}_T^0\cap \overline{B_M}\rightarrow {\mathbb {E}}_T^0\cap \overline{B_M}\) is a contraction of the complete metric space \({\mathbb {E}}_T^0\cap \overline{B_M}\). Thanks to the Banach–Caccioppoli contraction theorem, \({\mathcal {K}}_T\) has a unique fixed point in \({\mathbb {E}}_T^0\cap \overline{B_M}\). By definition of \({\mathcal {K}}_T\), an element of \({\mathbb {E}}_T^0\cap \overline{B_M}\) is a fixed point of \({\mathcal {K}}_T\) if and only if it is a solution to the special flow (2.18) of closed curves in \(C^{\frac{4+\alpha }{4},4+\alpha }\left( [0,T]\times {\mathbb {S}}^1\right) \cap \overline{B_M}\). \(\square \)
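The mechanism behind the contraction estimate, a Lipschitz constant of order \(T^{\varepsilon }\) that drops below one for small \(T\), is the same as in the classical Picard iteration for ODEs. The following toy sketch (illustrative only, not the operator \({\mathcal {K}}_T\) of this section) iterates the Picard map for \(y'=\sin y\) on a short time interval, where the contraction constant is of order \(T\), and compares the fixed point with the closed-form solution.

```python
# Picard iteration for y' = sin(y), y(0) = 1, on [0, T]: the Picard map is a
# contraction in the sup norm with constant ~ T, so its fixed point (the unique
# solution) is reached by iterating from the constant function y0.
import numpy as np

T, n = 0.5, 1000
t = np.linspace(0.0, T, n)
y0 = 1.0

def picard(y):
    # (K y)(t) = y0 + \int_0^t sin(y(s)) ds, via the cumulative trapezoidal rule
    f = np.sin(y)
    integral = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2))) * (T / (n - 1))
    return y0 + integral

y = np.full(n, y0)
for _ in range(80):          # geometric convergence with rate ~ T = 0.5
    y = picard(y)

# closed-form solution of y' = sin y:  y(t) = 2 arctan(tan(y0/2) e^t)
exact = 2 * np.arctan(np.tan(y0 / 2) * np.exp(t))
print(np.max(np.abs(y - exact)))  # small: limited only by the quadrature error
```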

Remark 3.15

In order to prove an existence and uniqueness theorem for the special flow of curves with fixed endpoints subject to natural or clamped boundary conditions, it is enough to repeat the previous arguments with some small adjustments.

In the case of Navier boundary condition we replace \({\mathbb {E}}_T\), \({\mathbb {E}}_T^0\), \({\mathbb {F}}_T\) and \({\mathbb {F}}_T^0\) by

$$\begin{aligned} {\mathbb {E}}_T^1&:=\left\{ \varphi \in C ^{\frac{4+\alpha }{4},4+\alpha }\left( [0,T]\times [0,1];{\mathbb {R}}^2\right) \,:\, \partial _x^2\varphi (0)=\partial _x^2\varphi (1)=0,\,\varphi _{\vert t=0}=\varphi ^0\right\} ,\\ {\mathbb {E}}_T^{0,1}&:=\left\{ \varphi \in {\mathbb {E}}_T^1\,:\,\varphi (t,0)=P,\, \varphi (t,1)=Q\right\} ,\\ {\mathbb {F}}_T^{1}&:= C ^{\frac{\alpha }{4},\alpha }\left( [0,T]\times [0,1];{\mathbb {R}}^2\right) \times \left( C ^{\frac{4+\alpha }{4}}\left( [0,T];{\mathbb {R}}^2\right) \right) ^2 \times C^{4+\alpha }([0,1];{\mathbb {R}}^2) , \\ {\mathbb {F}}_T^{0,1}&:= C ^{\frac{\alpha }{4},\alpha }\left( [0,T]\times [0,1];{\mathbb {R}}^2\right) \times \{P\}\times \{Q\}\times \{\varphi ^0\}, \end{aligned}$$

where \(P,Q \in {\mathbb {R}}^2\) are the fixed endpoints. In this case we introduce the linear operator

$$\begin{aligned} {\mathcal {L}}_T(\varphi ):=\left( \partial _t\varphi +\frac{2}{\vert \partial _x\varphi ^0\vert ^4}\partial _x^4\varphi , \varphi _{\vert x=0},\varphi _{\vert x=1},\varphi _{\vert t=0}\right) . \end{aligned}$$

This modification allows us to treat the linear boundary conditions \(\partial _x^2\varphi (0)=\partial _x^2\varphi (1)=0\) and the affine ones \(\varphi (t,0)=P\), \(\varphi (t,1)=Q\).

In the case of clamped boundary conditions instead we have to take into account four vectorial affine boundary conditions. We modify the affine space \({\mathbb {E}}_T^0\) into

$$\begin{aligned} {\mathbb {E}}_T^{0,2} :=\left\{ \varphi \in {\mathbb {E}}_T\,:\,\varphi (t,0)=P, \varphi (t,1)=Q, \partial _x\varphi (t,0)=\tau ^0, \partial _x\varphi (t,1)=\tau ^1, \varphi _{\vert t=0}=\varphi ^0\right\} , \end{aligned}$$

and

$$\begin{aligned} {\mathbb {F}}_T^{2}&:= C ^{\frac{\alpha }{4},\alpha }\left( [0,T]\times [0,1];{\mathbb {R}}^2\right) \times \left( C ^{\frac{4+\alpha }{4}}\left( [0,T];{\mathbb {R}}^2\right) \right) ^2\\&\quad \times \left( C ^{\frac{3+\alpha }{4}}\left( [0,T];{\mathbb {R}}^2\right) \right) ^2 \times C^{4+\alpha }([0,1];{\mathbb {R}}^2), \\ {\mathbb {F}}_T^{0,2}&:= C ^{\frac{\alpha }{4},\alpha }\left( [0,T]\times [0,1];{\mathbb {R}}^2\right) \times \{P\}\times \{Q\}\times \{\tau ^0\} \times \{\tau ^1\} \times \{\varphi ^0\}. \end{aligned}$$

Finally the operator \({\mathcal {L}}_T\) in this case is

$$\begin{aligned} {\mathcal {L}}_T(\varphi ):= \left( \partial _t\varphi +\frac{2}{\vert \partial _x\varphi ^0\vert ^4}\partial _x^4\varphi , \varphi _{\vert x=0},\varphi _{\vert x=1}, \partial _x\varphi _{\vert x=0}, \partial _x\varphi _{\vert x=1}, \varphi _{\vert t=0}\right) . \end{aligned}$$

Remark 3.16

Differently from the case of endpoints of order one, at multipoints of higher order we also impose nonlinear boundary conditions (quasilinear or even fully nonlinear). Treating these terms is harder: it is necessary to linearize both the main equation and the boundary operator.

Consider for instance the elastic flow of a network composed of N curves that meet at two junctions, both of order N and subject to natural boundary conditions. The concurrency condition and the second order condition are already linear. The third order condition, instead, is of the form

$$\begin{aligned} \sum _{i=1}^N\left( \frac{1}{\vert \partial _x \varphi ^i\vert ^3} \left\langle \partial _x^3 \varphi ^i,\nu ^i\right\rangle \nu ^i +h^i(\partial _x \varphi ^i)\right) =0, \end{aligned}$$

where we omit the dependence on \((t,y)\) with \(y\in \{0,1\}\). The linearized version of the highest order term in the third order condition is:

$$\begin{aligned}&-\sum _{i=1}^N\frac{1}{\vert \partial _x \varphi ^{0,i}\vert ^3} \left\langle \partial _x^3 \varphi ^i,\nu _0^i\right\rangle \nu _0^i \nonumber \\&\quad = -\sum _{i=1}^N\frac{1}{\vert \partial _x \varphi ^{0,i}\vert ^3} \left\langle \partial _x^3 \varphi ^i,\nu _0^i\right\rangle \nu _0^i +\sum _{i=1}^N\left( \frac{1}{\vert \partial _x \varphi ^i\vert ^3} \left\langle \partial _x^3 \varphi ^i,\nu ^i\right\rangle \nu ^i +h^i(\partial _x \varphi ^i)\right) =:b(\varphi ) , \end{aligned}$$
(3.13)

where we denoted by \(\nu _0\) the unit normal vector of the initial datum \(\varphi ^0\). Then, instead of (3.2), the linearized system associated to the special flow is

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{array}{ll} \partial _t\varphi ^i(t,x)+\frac{2}{\vert \partial _x \varphi ^{0,i}(x)\vert ^4}\partial _x^4\varphi ^i(t,x)&{}=f^i(t,x)\\ \varphi ^{i}(t,y)-\varphi ^{j}(t,y)&{}=0 \\ \partial _x^2\varphi ^i(t,y)&{}=0 \\ -\sum _{i=1}^N\frac{1}{\vert \partial _x \varphi ^{0,i}(y)\vert ^3} \left\langle \partial _x^3 \varphi (t,y),\nu _0^i\right\rangle \nu _0^i(y)&{}=b(t,y)\\ \varphi ^{i}(0,x)&{}=\psi ^{i}(x)\\ \end{array} \end{array}\right. }, \end{aligned}$$
(3.14)

for \(i, j\in \{1,\ldots ,N\}\), \(j\ne i\), \(t\in [0,T]\), \(x\in [0,1]\), \(y\in \{0,1\}\).

The spaces introduced in Definitions 3.2 and 3.7 are replaced by

$$\begin{aligned} {\mathbb {E}}_T&=\big \{\varphi \in C^{\frac{4+\alpha }{4}, 4+\alpha }\left( [0,T]\times [0,1]; ({\mathbb {R}}^2)^N\right) \; \text {such that for}\, i,j\in \{1, \ldots , N\}, t\in [0,T], \\&\qquad y\in \{0,1\}\,\text {it holds}\; \varphi ^i(t,y)=\varphi ^j(t,y), \partial _x^2\varphi ^i(t,y)=0\big \} ,\\ {\mathbb {F}}_T&= C^{\frac{\alpha }{4}, \alpha }\left( [0,T]\times [0,1]; ({\mathbb {R}}^2)^N\right) \times \left( C^{\frac{1+\alpha }{4}}\left( [0,T]; {\mathbb {R}}^2\right) \right) ^2\times C^{4+\alpha }\left( [0,1];({\mathbb {R}}^2)^N\right) ,\\ {\mathbb {E}}^0_T&=\{\varphi \in {\mathbb {E}}_T\,\text {such that}\,\varphi \vert _{ t=0}=\varphi ^0 \},\\ {\mathbb {F}}^0_T&=C^{\frac{\alpha }{4},\alpha }\left( [0,T]\times [0,1];({\mathbb {R}}^2)^N\right) \times \left( C^{\frac{1+\alpha }{4}}\left( [0,T]; {\mathbb {R}}^2\right) \right) ^2 \times \{\varphi ^0\} . \end{aligned}$$

The operator \({\mathcal {L}}_{T}:{\mathbb {E}}_T\rightarrow {\mathbb {F}}_T\) becomes

$$\begin{aligned} {\mathcal {L}}_{T}(\varphi ) :=\left( \partial _t\varphi +\frac{2}{\vert \partial _x\varphi ^0\vert ^4}\partial _x^4\varphi , -\sum _{i=1}^N\frac{1}{\vert \partial _x \varphi ^{0,i}(y)\vert ^3} \left\langle \partial _x^3 \varphi (t,y),\nu _0^i\right\rangle \nu _0^i(y),\varphi _{\vert t=0}\right) , \end{aligned}$$

and the operator that encodes the non–linearities of the problem is \({\mathcal {N}}_{T}:{\mathbb {E}}^0_T\rightarrow {\mathbb {F}}^0_T\), mapping \(\varphi \) into the triple \((N^1_{T}(\varphi ),N^2_{T}(\varphi ),\varphi \vert _{t=0})\) with

$$\begin{aligned}&N^1_{T}: {\left\{ \begin{array}{ll} {\mathbb {E}}^0_T \rightarrow C ^{\frac{\alpha }{4},\alpha }([0,T]\times [0,1];({\mathbb {R}}^2)^N)\\ \varphi \mapsto f(\varphi ), \end{array}\right. }\\&N^2_{T}: {\left\{ \begin{array}{ll} {\mathbb {E}}^0_T \rightarrow \left( C ^{\frac{1+\alpha }{4}}([0,T];{\mathbb {R}}^2)\right) ^2\\ \varphi \mapsto b(\varphi ), \end{array}\right. } \end{aligned}$$

where the functions \(f(\varphi ):=f(\partial _x^4\varphi ,\partial _x^3\varphi ,\partial _x^2\varphi ,\partial _x\varphi )\) and \(b(\varphi ):=b(\partial _x^3\varphi ,\partial _x^2\varphi ,\partial _x\varphi )\) are defined in (3.1) and in (3.13). The map \({\mathcal {K}}\) is defined accordingly. We do not describe here the details concerning the solvability of the linear system or the proof of the contraction property of \({\mathcal {K}}\); we refer instead to [23, Section 3.4.1].

3.2 Parabolic Smoothing

When dealing with parabolic problems, it is natural to investigate the regularization of the solutions of the flow. More precisely, we claim that the following holds.

Proposition 3.17

Let \(T>0\) and \(\varphi _0=(\varphi ^1_0, \ldots ,\varphi ^N_0)\) be an admissible initial parametrization, possibly with endpoints of order one and possibly with junctions of different orders \(m\in {\mathbb {N}}_{\ge 2}\). Suppose that \((\varphi (t))_{t\in [0,T]}\), \(\varphi =(\varphi ^1, \ldots ,\varphi ^N)\), is a solution in \({\mathbb {E}}_T\) to the special flow in the time interval [0, T] with initial datum \(\varphi _0\). Then the solution \(\varphi \) is smooth for positive times, in the sense that

$$\begin{aligned} \varphi \in C^\infty \left( [\varepsilon ,T]\times [0,1]; ({\mathbb {R}}^2)^N\right) \end{aligned}$$

for every \(\varepsilon \in (0,T)\).
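Before discussing the proof, the smoothing effect can be illustrated on the linear model problem \(\partial _t u=-\partial _x^4u\) on \({\mathbb {S}}^1\), a drastic simplification of the flow used here only as an illustration: in Fourier variables every mode is damped by \(e^{-k^4t}\), so even a discontinuous initial datum is instantly smoothed.

```python
# Spectral solution of the linear model  u_t = -u_xxxx  on the periodic circle:
# each Fourier mode decays like exp(-k^4 t), so high frequencies die immediately
# and a step-function initial datum becomes smooth for any t > 0.
import numpy as np

N = 512
x = np.arange(N) / N
u0 = (x < 0.5).astype(float)                    # step function: not even continuous
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)    # wave numbers

tails = []
for t in (0.0, 1e-6, 1e-4):
    u = np.real(np.fft.ifft(np.exp(-k**4 * t) * np.fft.fft(u0)))
    # size of the upper half of the spectrum: a crude proxy for (non-)smoothness
    tail = np.abs(np.fft.fft(u))[N // 4 : N // 2].max()
    tails.append(tail)
    print(f"t = {t:g}: largest high-frequency amplitude = {tail:.2e}")
```

At \(t=0\) the spectrum decays only algebraically; for any positive time the high-frequency content is already below machine precision.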

We now give a sketch of the proof of this fact in the case of closed curves. The result can be proved in two different ways: with the so-called Angenent parameter trick [4, 5, 11] or making use of the classical theory of linear parabolic equations [51].

Sketch of the proof.

For the sake of notation let

$$\begin{aligned} {\mathcal {A}}(\gamma )&=-2\frac{\partial _x^4 \gamma }{\left| \partial _x\gamma \right| ^{4}} +12\frac{\partial _x^3 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} +5\frac{\partial _x^2 \gamma \left| \partial _x^2 \gamma \right| ^{2}}{\left| \partial _x\gamma \right| ^{6}} +8\frac{\partial _x^2 \gamma \left\langle \partial _x^3 \gamma ,\partial _x\gamma \right\rangle }{\left| \partial _x\gamma \right| ^{6}} \\&\quad -35\frac{\partial _x^2 \gamma \left\langle \partial _x^2 \gamma ,\partial _x\gamma \right\rangle ^{2}}{\left| \partial _x\gamma \right| ^{8}} +\mu \frac{\partial _x^2 \gamma }{\left| \partial _x\gamma \right| ^{2}}. \end{aligned}$$
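As a consistency check on the formula for \({\mathcal {A}}\), one can evaluate it numerically on a round circle of radius \(R\): all tangential terms cancel and one is left with the outward normal velocity \(1/R^3-\mu /R\), in accordance with the fact that circles expand under the flow when \(\mu =0\). The sketch below is illustrative only; it uses spectral differentiation for the periodic parametrization.

```python
# Evaluate A(gamma) on the circle gamma(x) = R(cos 2πx, sin 2πx) and compare
# with the expected velocity (1/R^3 - mu/R) * (outward unit normal).
import numpy as np

N, R, mu = 64, 2.0, 0.5
x = np.arange(N) / N
gamma = R * np.stack([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)], axis=1)

def dx(f, order):
    # spectral x-derivative of a periodic function sampled on [0, 1)
    k = 2j * np.pi * np.fft.fftfreq(N, d=1.0 / N)
    return np.real(np.fft.ifft((k[:, None] ** order) * np.fft.fft(f, axis=0), axis=0))

g1, g2, g3, g4 = (dx(gamma, n) for n in (1, 2, 3, 4))
s2 = np.sum(g1 * g1, axis=1, keepdims=True)          # |∂x γ|^2
A = (-2 * g4 / s2**2
     + 12 * g3 * np.sum(g2 * g1, axis=1, keepdims=True) / s2**3
     + 5 * g2 * np.sum(g2 * g2, axis=1, keepdims=True) / s2**3
     + 8 * g2 * np.sum(g3 * g1, axis=1, keepdims=True) / s2**3
     - 35 * g2 * np.sum(g2 * g1, axis=1, keepdims=True)**2 / s2**4
     + mu * g2 / s2)

expected = (1 / R**3 - mu / R) * gamma / R           # normal speed of a round circle
print(np.max(np.abs(A - expected)))                  # small (round-off level)
```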

Then the motion equation reads \(\partial _t\gamma ={\mathcal {A}}(\gamma )\). We consider the map

$$\begin{aligned} G: {\left\{ \begin{array}{ll} (0,\infty )\times {\mathbb {E}}_T\rightarrow C^{4+\alpha }({\mathbb {S}}^1;{\mathbb {R}}^2)\times C^{\frac{\alpha }{4},\alpha }([0,T]\times {\mathbb {S}}^1;{{{\mathbb {R}}}}^2)\\ (\lambda ,\gamma )\mapsto \left( \gamma _{\vert t=0}-\gamma _0,\partial _t\gamma -\lambda {\mathcal {A}}(\gamma )\right) . \end{array}\right. } \end{aligned}$$

We notice that taking \(\lambda =1\) and \(\gamma =\varphi \), the solution of the special flow, we get \(G(1,\varphi )=0\). The Fréchet derivative \(\delta G(1,\varphi )(0,\cdot ):{\mathbb {E}}_T\rightarrow C^{4+\alpha }({\mathbb {S}}^1;{\mathbb {R}}^2)\times C^{\frac{\alpha }{4},\alpha }([0,T]\times {\mathbb {S}}^1;{{{\mathbb {R}}}}^2)\) is given by

$$\begin{aligned} \delta G(1,\varphi )(0,\gamma )=\left( \gamma _{\vert t=0}, \partial _t\gamma +\frac{2}{\vert \partial _x\varphi \vert ^4}\partial _x^4 \gamma +F_\varphi (\gamma )\right) \end{aligned}$$

where \(F_\varphi \) is linear in \(\gamma \): it involves \(\partial _x^3 \gamma ,\partial _x^2 \gamma \) and \(\partial _x\gamma \), with coefficients depending on \(\partial _x\varphi ,\partial _x^2\varphi ,\partial _x^3\varphi \) and \(\partial _x^4\varphi \). The computation needed to write the Fréchet derivative in detail is rather long and we do not reproduce it here. Since the time derivative appears only as \(\partial _t\gamma \) and is not present in \({\mathcal {A}}(\gamma )\), formally one can follow the computations of Section 2.2.

It is possible to prove that \(\delta G(1,\varphi )(0,\cdot )\) is an isomorphism. This is equivalent to showing that, given any \(\psi \in C^{4+\alpha }({\mathbb {S}}^1;{\mathbb {R}}^2)\) and \(f\in C^{\frac{\alpha }{4},\alpha }([0,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2)\), the system

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t\gamma (t,x)+\frac{2}{\vert \partial _x\varphi (t,x)\vert ^4}\partial _x^4 \gamma (t,x)+F_\varphi (\gamma ) =f(t,x)\\ \gamma (0,x)=\psi (x) \end{array}\right. } \end{aligned}$$

has a unique solution.

Then the implicit function theorem implies the existence of a neighbourhood \((1-\varepsilon ,1+\varepsilon )\subseteq (0,\infty )\) of 1, a neighbourhood U of \(\varphi \) in \({\mathbb {E}}_T\) and a function \(\Phi :(1-\varepsilon ,1+\varepsilon ) \rightarrow U\) with \(\Phi (1)=\varphi \) and

$$\begin{aligned} \{(\lambda ,\gamma )\in (1-\varepsilon ,1+\varepsilon ) \times U: G(\lambda ,\gamma )=0\} =\{(\lambda , \Phi (\lambda )):\lambda \in (1-\varepsilon ,1+\varepsilon )\}. \end{aligned}$$

Given \(\lambda \) close to 1 consider

$$\begin{aligned} \varphi _{\lambda }(t,x):=\varphi (\lambda t, x), \end{aligned}$$

where \(\varphi \), as before, is a solution to the special flow. This satisfies \(G(\lambda ,\varphi _{\lambda })=0\). Moreover by uniqueness \(\varphi _{\lambda }=\Phi (\lambda )\). Since \(\Phi \) is smooth, this shows that \(\varphi _{\lambda }\) is a smooth function of \(\lambda \) with values in \({\mathbb {E}}_T\). This implies

$$\begin{aligned} t\partial _t\varphi =\partial _\lambda (\varphi _{\lambda })_{\vert \lambda =1} \in {\mathbb {E}}_T \end{aligned}$$

from which we gain regularity in time of the solution \(\varphi \).
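The identity \(t\partial _t\varphi =\partial _\lambda (\varphi _{\lambda })_{\vert \lambda =1}\) is just the chain rule; as a quick check, it can be tested by finite differences on a sample smooth function (the function phi below is a hypothetical placeholder, not a solution of the flow).

```python
# Finite-difference check of  ∂_λ φ(λt, x)|_{λ=1} = t ∂_t φ(t, x)
# for a sample smooth function φ (a placeholder, not a flow solution).
import numpy as np

phi = lambda t, x: np.sin(x + t**2) + t * np.cos(3 * x)
t, x, h = 0.7, 1.3, 1e-6

# central difference in λ around λ = 1, then t times a central difference in t
dlam = (phi((1 + h) * t, x) - phi((1 - h) * t, x)) / (2 * h)
dt = t * (phi(t + h, x) - phi(t - h, x)) / (2 * h)
print(abs(dlam - dt))  # ≈ 0 up to finite-difference error
```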

Then, using the fact that \(\varphi \) is a solution to the special flow and the structure of the motion equation, it is possible to increase the regularity also in space.

We can then start a bootstrap to obtain that the solution is smooth for every positive time.

Alternatively we can show inductively that there exists \(\alpha \in (0,1)\) such that for all \(k\in {\mathbb {N}}\) and \(\varepsilon \in (0,T)\),

$$\begin{aligned} \varphi \in C^{\frac{2k+2+\alpha }{4},2k+2+\alpha }\left( [\varepsilon ,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2\right) . \end{aligned}$$

The case \(k=1\) is true because \(\varphi \in C^{\frac{4+\alpha }{4},4+\alpha }\left( [0,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2\right) \) by Theorem 3.14.

Now assume that the assertion holds true for some \(k\in {\mathbb {N}}\) and consider any \(\varepsilon \in (0,T)\). Let \(\eta \in C_0^\infty \left( (\frac{\varepsilon }{2},\infty );{\mathbb {R}}\right) \) be a cut–off function with \(\eta \equiv 1\) on \([\varepsilon ,T]\). By assumption,

$$\begin{aligned} \varphi \in C^{\frac{2k+2+\alpha }{4},2k+2+\alpha }\left( \left[ \tfrac{\varepsilon }{2},T\right] \times {\mathbb {S}}^1;{\mathbb {R}}^2\right) , \end{aligned}$$

and thus it is straightforward to check that the function g defined by

$$\begin{aligned} (t,x)\mapsto g(t,x):=\eta (t)\varphi (t,x) \end{aligned}$$

lies in \(C^{\frac{2k+2+\alpha }{4},2k+2+\alpha }\left( \left[ 0,T\right] \times {\mathbb {S}}^1;{\mathbb {R}}^2\right) \). Moreover g satisfies a parabolic problem of the following form: for all \(t\in (0,T)\), \(x\in {\mathbb {S}}^1\):

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{array}{ll} \partial _t g(t,x)+\frac{2}{\vert \partial _x\varphi (t,x)\vert ^4}\partial _x^4 g(t,x) +f\left( \partial _x\varphi ,\partial _x^2\varphi ,\partial _x g,\partial _x^2 g,\partial _x^3 g\right) (t,x)&{}=\eta '(t)\varphi (t,x), \\ g(0,x)&{}=0. \end{array} \end{array}\right. } \end{aligned}$$
(3.15)

The lower order terms in the motion equation are given by

$$\begin{aligned} f\left( \partial _x\varphi ,\partial _x^2\varphi ,\partial _x g,\partial _x^2 g,\partial _x^3 g\right) (t,x)&=-12 \frac{\left\langle \partial _x^2\varphi ,\partial _x\varphi \right\rangle }{\left|\partial _x\varphi \right|^6}\partial _x^3 g-8\frac{\partial _x^2\varphi }{\vert \partial _x\varphi \vert ^6}\left\langle \partial _x^3 g,\partial _x\varphi \right\rangle \\&\quad -\left( 5\frac{|\partial _x^2\varphi |^2}{|\partial _x\varphi |^6}-35\frac{\left\langle \partial _x^2\varphi ,\partial _x\varphi \right\rangle ^2}{\vert \partial _x\varphi \vert ^8}+\mu \frac{1}{\vert \partial _x\varphi \vert ^2}\right) \partial _x^2 g. \end{aligned}$$

The problem is linear in the components of g, and its highest order term has exactly the same structure as in the linear system (3.2), with time dependent coefficients in the motion equation. The coefficients and the right hand side fulfil the regularity requirements of [51, Theorem 4.9] in the case \(l=2(k+1)+2+\alpha \). As \(\eta ^{(j)}(0)=0\) for all \(j\in {\mathbb {N}}\), the initial value 0 satisfies the compatibility conditions of order \(2(k+1)+2\) with respect to the given right hand side. Thus [51, Theorem 4.9] yields that there exists a unique solution g to (3.15) with the regularity

$$\begin{aligned} g\in C^{\frac{2(k+1)+2+\alpha }{4},2(k+1)+2+\alpha }\left( [0,T]\times {\mathbb {S}}^1;{\mathbb {R}}^2\right) . \end{aligned}$$

This completes the induction as \(g=\varphi \) on \([\varepsilon ,T]\). \(\square \)

3.3 Short Time Existence and Uniqueness

We conclude this section by proving the local (in time) existence and uniqueness result for the elastic flow.

As before, we give the proof of this theorem in the case of closed curves and then we explain how to adapt it in all the other situations.

We recall that a solution of the elastic flow is considered unique if it is unique up to reparametrization.

Theorem 3.18

(Existence and uniqueness) Let \({\mathcal {N}}_0\) be an admissible initial network. Then there exists a positive time T such that within the time interval [0, T] the elastic flow of networks admits a unique solution \({\mathcal {N}}(t)\).

Proof

We write a proof for the case of the elastic flow of closed curves.

Existence. Let \(\gamma _0\) be an admissible initial closed curve of class \(C^{4+\alpha }([0,1];{\mathbb {R}}^2)\). Then \(\gamma _0\) is also an admissible initial parametrization for the special flow. By Theorem 3.14 there exists a solution of the special flow, that is also a solution of the elastic flow.

Uniqueness. Consider a solution \(\gamma _t\) of the elastic flow. By Proposition 2.16 we can reparametrize \(\gamma _t\) into a solution to the special flow. Hence uniqueness follows from Theorem 3.14. \(\square \)

We now explain how to prove existence of solution to the elastic flow of networks. Differently from the situation of closed curves, an admissible initial network \({\mathcal {N}}_0\) admits a parametrization \(\gamma =(\gamma ^1, \ldots ,\gamma ^N)\) of class \(C^{4+\alpha }([0,1];{\mathbb {R}}^2)\) that, in general, is not an admissible initial parametrization in the sense of Definition 2.11. However it is always possible to reparametrize each curve \(\gamma ^i\) by \(\psi ^i:[0,1]\rightarrow [0,1]\) in such a way that \(\varphi =(\varphi ^1, \ldots ,\varphi ^N)\) with \(\varphi ^i:=\gamma ^i\circ \psi ^i\) is an admissible initial parametrization for the special flow. Then by the suitable modification of Theorem 3.14 there exists a solution to the special flow, that is also a solution of the elastic flow.

Thus all the difficulty lies in proving the existence of the reparametrizations \(\psi ^i\).

In all cases we look for \(\psi ^i:[0,1]\rightarrow [0,1]\) with \(\psi ^i(0)=0\), \(\psi ^i(1)=1\) and \(\partial _x\psi ^i(x)\ne 0\) for every \(x\in [0,1]\). We now list all possible further conditions a certain \(\psi ^i\) has to fulfill at \(y=0\) or \(y=1\) in the different possible situations. It will then be clear that such reparametrizations \(\psi ^i\) exist.

  • If \(\gamma (y)\) is an endpoint of order one with Navier boundary conditions (namely \(\gamma (y)=P\), \(\varvec{\kappa }(y)=0\)), then \(\psi (y)\) needs to satisfy the following conditions:

    $$\begin{aligned} {\left\{ \begin{array}{ll} \partial _x\psi (y)=1\\ \partial _x^2\psi (y)=-\left\langle \frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert }, \frac{\partial _x^2 \gamma (y)}{\vert \partial _x\gamma (y)\vert }\right\rangle =:a(y)\\ \partial _x^3\psi (y)=0\\ \partial _x^4\psi (y)=-\frac{1}{\vert \partial _x\gamma (y)\vert ^5}\left\langle \frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert }, \frac{\partial _x^4 \gamma (y)}{\vert \partial _x\gamma (y)\vert } +6a(y)\frac{\partial _x^3 \gamma (y)}{\vert \partial _x\gamma (y)\vert } +3a^2(y)\frac{\partial _x^2 \gamma (y)}{\vert \partial _x\gamma (y)\vert } \right\rangle =:-\frac{1}{\vert \partial _x\gamma (y)\vert ^5}b(y). \end{array}\right. } \end{aligned}$$

    Indeed, with such a request, we have \(\varphi (y)=\gamma (\psi (y))=\gamma (y)=P\) and

    $$\begin{aligned} \partial _x^2\varphi (y)&=\partial _x^2 \gamma (\psi (y))(\partial _x\psi (y))^2 +\partial _x\gamma (\psi (y))\partial _x^2\psi (y)\\&=\partial _x^2 \gamma (y)+\partial _x\gamma (y)\left( -\left\langle \frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert }, \frac{\partial _x^2 \gamma (y)}{\vert \partial _x\gamma (y)\vert }\right\rangle \right) =\vert \partial _x\gamma \vert ^2\varvec{\kappa }(y)=0. \end{aligned}$$

    Moreover \({\overline{T}}(y)=0\). Indeed

    $$\begin{aligned} {\overline{T}}&=-2\left\langle \frac{\partial _x^4\varphi (y)}{\vert \partial _x\varphi (y)\vert ^4}, \frac{\partial _x\varphi (y)}{\vert \partial _x\varphi (y)\vert } \right\rangle \\&=-2\left\langle \frac{\partial _x^4 \gamma (y)+6\partial _x^3 \gamma (y)a(y) +3\partial _x^2 \gamma (y)a^2(y)+\partial _x\gamma (y)\partial _x^4\psi (y)}{\vert \partial _x\gamma (y)\vert ^4}, \frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert } \right\rangle \\&=-2\frac{1}{\vert \partial _x\gamma (y)\vert ^4}b(y) +2\frac{1}{\vert \partial _x\gamma (y)\vert ^5} \left\langle b(y)\partial _x\gamma (y),\frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert } \right\rangle \\&=-2\frac{1}{\vert \partial _x\gamma (y)\vert ^4}b(y) +2\frac{1}{\vert \partial _x\gamma (y)\vert ^5}b(y) \left\langle \frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert },\frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert } \right\rangle \vert \partial _x\gamma (y)\vert =0. \end{aligned}$$
  • If \(\gamma (y)\) is an endpoint of order one where clamped boundary conditions are imposed (\(\gamma (y)=P\), \(\frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert }=\tau ^*\) with \(\tau ^*\) a unit vector) we require \(\psi (y)\) to fulfill

    $$\begin{aligned} {\left\{ \begin{array}{ll} \partial _x\psi (y)=\frac{1}{\vert \partial _x\gamma (y)\vert }\\ \partial _x^2\psi (y)=0\\ \partial _x^3\psi (y)=0\\ \partial _x^4\psi (y)=b(y). \end{array}\right. } \end{aligned}$$

    with \(b(y)=\left\langle \frac{\partial _x^4 \gamma (y)}{\vert \partial _x\gamma (y)\vert ^4} -6\frac{\partial _x^3 \gamma (y)}{\vert \partial _x\gamma (y)\vert ^3}\left\langle \frac{\partial _x^2 \gamma (y)}{\vert \partial _x\gamma (y)\vert ^2},\tau ^*\right\rangle -\frac{5}{2}\frac{\partial _x^2 \gamma (y)\left| \partial _x^2 \gamma (y)\right| ^{2}}{\left| \partial _x\gamma (y)\right| ^{6}} -4\frac{\partial _x^2 \gamma (y)}{\vert \partial _x\gamma (y)\vert ^2}\left\langle \frac{\partial _x^3 \gamma (y)}{\vert \partial _x\gamma (y)\vert ^3},\tau ^*\right\rangle +\frac{35}{2}\frac{\partial _x^2 \gamma (y)}{\vert \partial _x\gamma (y)\vert ^2}\left\langle \frac{\partial _x^2 \gamma (y)}{\vert \partial _x\gamma (y)\vert ^2},\tau ^*\right\rangle ^{2} -\frac{\mu }{2}\frac{\partial _x^2 \gamma (y)}{\vert \partial _x\gamma (y)\vert ^2}, \frac{\partial _x\gamma (y)}{\vert \partial _x\gamma (y)\vert }\right\rangle \). With these choices \(\varphi (y)=\gamma (\psi (y))=\gamma (y)=P\), \(\partial _x\varphi (y)=\partial _x\gamma (\psi (y))\partial _x\psi (y) =\partial _x\gamma (y)\frac{1}{\vert \partial _x\gamma (y)\vert }=\tau ^*\), and \({\overline{T}}(y)=0\).

  • Suppose instead that \(\gamma ^{p_1}(y_1)=\cdots =\gamma ^{p_m}(y_m)\) is a multipoint of order m with natural boundary conditions. Then each curve is parametrized by \(\gamma ^{p_i}\in C^{4+\alpha }([0,1];{\mathbb {R}}^2)\) and the network \({\mathcal {N}}_0\) satisfies the conditions (ii), (iv) and (v) of Definition 2.5.

    The non-degeneracy condition is satisfied because of (iv).

    By requiring

    $$\begin{aligned} {\left\{ \begin{array}{ll} \partial _x\psi ^{p_i}(y_i)=1\\ \partial _x^2\psi ^{p_i}(y_i)=a^{p_i}(y_i)\\ \partial _x^3\psi ^{p_i}(y_i)=0, \end{array}\right. } \end{aligned}$$

    where \(a^{p_i}(y_i):=-\left\langle \frac{\partial _x\gamma ^{p_i}(y_i)}{\vert \partial _x\gamma ^{p_i}(y_i)\vert }, \frac{\partial _x^2\gamma ^{p_i}(y_i)}{\vert \partial _x\gamma ^{p_i}(y_i)\vert }\right\rangle \), all the conditions imposed by the system are satisfied. We then have to choose \(\partial _x^4\psi ^{p_i}(y_i)\) so as to enforce the fourth order compatibility condition

    $$\begin{aligned}&V^{p_1}_\varphi (y_1)\nu ^{p_1}_\varphi (y_1) +{\overline{T}}^{p_1}_\varphi (y_1)\tau ^{p_1}_\varphi (y_1) =\cdots \nonumber \\&=V^{p_m}_\varphi (y_m)\nu ^{p_m}_\varphi (y_m) +{\overline{T}}^{p_m}_\varphi (y_m)\tau ^{p_m}_\varphi (y_m), \end{aligned}$$
    (3.16)

    where by the subscript \(\varphi \) we mean that all the quantities in (3.16) are computed with respect to the parametrization \(\varphi ^{p_i}:=\gamma ^{p_i}\circ \psi ^{p_i}\). Notice that the geometric quantities V, \(\nu \) and \(\tau \) are invariant under reparametrization, hence they coincide for \(\varphi ^{p_i}\) and \(\gamma ^{p_i}\); from now on we omit the subscript. Condition (iv) allows us to consider two consecutive unit normal vectors \(\nu ^{p_i}(y_i)\) and \(\nu ^{p_k}(y_k)\) such that \(\mathrm {span}\{\nu ^{p_i}(y_i),\nu ^{p_k}(y_k)\}={{{\mathbb {R}}}}^2\). Then, by condition (v), for every \(j\in \{1,\ldots ,m\}\), \(j\ne i\), \(j\ne k\) we have

    $$\begin{aligned} \sin \theta ^i V^{p_i}(y_i)+\sin \theta ^k V^{p_k}(y_k) +\sin \theta ^j V^{p_j}(y_j)=0, \end{aligned}$$
    (3.17)

    where \(\theta ^i\) is the angle between \(\nu ^{p_k}(y_k)\) and \(\nu ^{p_j}(y_j)\), \(\theta ^k\) the angle between \(\nu ^{p_j}(y_j)\) and \(\nu ^{p_i}(y_i)\), and \(\theta ^{j}\) the angle between \(\nu ^{p_i}(y_i)\) and \(\nu ^{p_k}(y_k)\); at most one of \(\sin \theta ^i\) and \(\sin \theta ^k\) is equal to zero. Consider first every curve \(\gamma ^{p_j}\) with \(j\in \{1,\ldots ,m\}\), \(j\ne i\), \(j\ne k\) for which both \(\sin \theta ^i\) and \(\sin \theta ^k\) are different from zero: then the conditions

    $$\begin{aligned} \sin \theta ^i{\overline{T}}_{\varphi }^{p_i}(y_i)&=\cos \theta ^k V^{p_k}(y_k)-\cos \theta ^j V^{p_j}(y_j)\nonumber \\ \sin \theta ^k{\overline{T}}_{\varphi }^{p_k}(y_k)&=\cos \theta ^j V^{p_j}(y_j)-\cos \theta ^i V^{p_i}(y_i)\nonumber \\ \sin \theta ^j{\overline{T}}_{\varphi }^{p_j}(y_j)&=\cos \theta ^i V^{p_i}(y_i)-\cos \theta ^k V^{p_k}(y_k) \end{aligned}$$
    (3.18)

    combined with (3.17) imply (3.16) (see [38] for details). Instead, for all the curves \(\gamma ^{p_j}\) with \(j\in \{1,\ldots ,m\}\), \(j\ne i\), \(j\ne k\) for which, say, \(\sin \theta ^i=0\), it is possible to prove (see again [38]) that

    $$\begin{aligned} \sin \theta ^k V^{p_k}(y_k) +\sin \theta ^j V^{p_j}(y_j)&=0\nonumber \\ \sin \theta ^k{\overline{T}}_{\varphi }^{p_i}(y_i)&=V^{p_j}(y_j)-\cos \theta ^k V^{p_i}(y_i)\nonumber \\ \sin \theta ^j{\overline{T}}_{\varphi }^{p_k}(y_k)&=V^{p_i}(y_i)-\cos \theta ^j V^{p_k}(y_k)\nonumber \\ \sin \theta ^k{\overline{T}}_{\varphi }^{p_j}(y_j)&=\cos \theta ^k V^{p_j}(y_j)- V^{p_i}(y_i) \end{aligned}$$
    (3.19)

    yielding (3.16). One can show that for every \(i\in \{1, \ldots ,m\}\), imposing such requirements (i.e., either (3.17), (3.18) or (3.19)) implies that \(\partial _x^4\psi ^{p_i}(y_i)\) is uniquely determined.

  • The case of a multipoint with clamped boundary conditions can be treated by following the arguments just used for natural boundary conditions.

To summarise, for every \(i\in \{1,\ldots ,N\}\) we must prove the existence of \(\psi ^i:[0,1]\rightarrow [0,1]\) with \(\partial _x\psi ^i(x)\ne 0\) for every \(x\in [0,1]\) satisfying

$$\begin{aligned} {\left\{ \begin{array}{ll} \psi ^i(0)=0\\ \partial _x\psi ^i(0)=c_1\\ \partial _x^2\psi ^i(0)=c_2\\ \partial _x^3\psi ^i(0)=0\\ \partial _x^4 \psi ^i(0)=c_3\\ \end{array}\right. } \quad \text {and}\quad {\left\{ \begin{array}{ll} \psi ^i(1)=1\\ \partial _x\psi ^i(1)=c_4\\ \partial _x^2 \psi ^i(1)=c_5\\ \partial _x^3 \psi ^i(1)=0\\ \partial _x^4 \psi ^i(1)=c_6\\ \end{array}\right. } \end{aligned}$$
(3.20)

with \(c_1,c_2,c_3\) and \(c_4,c_5,c_6\) depending on the type of the endpoints \(\gamma ^i(0)\) and \(\gamma ^i(1)\). The \(\psi ^i\) can be (roughly) constructed by choosing \(\psi ^i\) to be, near the points 0 and 1, the respective fourth order Taylor polynomial determined by the values of the derivatives appearing in (3.20). Then one connects the two polynomial graphs by a suitable increasing smooth function.
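
The linear-algebra step behind (3.20) can be illustrated numerically. The sketch below (with made-up values for the constants \(c_1,\dots ,c_6\); any data with \(c_1,c_4>0\) would serve) solves for a single degree-nine polynomial matching all ten conditions at once. Unlike the gluing construction described above, such a global interpolant is not guaranteed to be monotone, so this is only a check that the boundary data can be matched:

```python
import numpy as np
from math import factorial

# Hypothetical boundary data c_1,...,c_6 of (3.20); the values are made up.
c1, c2, c3, c4, c5, c6 = 1.0, 0.3, 0.1, 1.0, -0.2, 0.05

# The ten conditions: (point, derivative order, prescribed value).
conds = [(0, 0, 0.0), (0, 1, c1), (0, 2, c2), (0, 3, 0.0), (0, 4, c3),
         (1, 0, 1.0), (1, 1, c4), (1, 2, c5), (1, 3, 0.0), (1, 4, c6)]

# Solve for a degree-nine polynomial psi(x) = sum_m a_m x^m meeting all of them.
A = np.zeros((10, 10))
b = np.zeros(10)
for row, (x, n, val) in enumerate(conds):
    for m in range(n, 10):
        A[row, m] = factorial(m) // factorial(m - n) * x ** (m - n)
    b[row] = val
coef = np.linalg.solve(A, b)

def dpsi(x, n):
    """n-th derivative of the interpolant at x."""
    return sum(factorial(m) // factorial(m - n) * coef[m] * x ** (m - n)
               for m in range(n, 10))

for x, n, val in conds:
    assert abs(dpsi(x, n) - val) < 1e-6
```

The two-point Hermite system with five conditions at each endpoint is always uniquely solvable, which mirrors the fact that the data in (3.20) never over-determine \(\psi ^i\).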

To get uniqueness when an open curve or a network evolves, one has to use Proposition 2.16. We refer to [15, 23, 38, 52] for a complete proof.

Remark 3.19

The previous theorem gives a solution of class \(C^{\frac{4+\alpha }{4},4+\alpha }([0,T]\times [0,1];{\mathbb {R}}^2)\) whenever the initial datum is of class \(C^{4+\alpha }([0,1];{\mathbb {R}}^2)\) and satisfies all the conditions listed in Definition 2.5. We can remove the fourth order conditions (iii)–(iv) by setting the problem in Sobolev spaces, with the initial datum in \(W^{4-4/p,p}([0,1];{\mathbb {R}}^2)\) with \(p\in (5, \infty )\). Even in this lower regularity class it is possible to prove uniqueness of solutions (see [22]), but we pay in regularity of the solution, which is merely in \(W^{1,p}\left( (0,T);L^p\left( (0,1);{\mathbb {R}}^2\right) \right) \cap L^p\left( (0,T);W^{4,p}\left( (0,1);{\mathbb {R}}^2 \right) \right) \).

With the strategy we presented in this paper it is possible to get a smooth solution in [0, T] if in addition the initial datum admits a smooth parametrization and satisfies the compatibility conditions of every order (for a complete proof of this result we refer to [15]). Since the solution of class \(C^{\frac{4+\alpha }{4},4+\alpha }\) is unique, a fortiori the smooth solution is unique. Although a smooth solution is desirable, asking for compatibility conditions of every order is a very strong requirement.

4 Long Time Existence

Definition 4.1

A time-dependent family of networks \({\mathcal {N}}_t\) parametrized by \(\gamma _t=(\gamma ^1,\ldots ,\gamma ^N)\) is a maximal solution to the elastic flow with initial datum \({\mathcal {N}}_0\) in [0, T) if it is a solution in the sense of Definition 2.3 in \((0,{\hat{T}}]\) for all \({\hat{T}}<T\), \(\gamma \in C^\infty \left( [\varepsilon ,T)\times [0,1]; ({\mathbb {R}}^2)^N\right) \) for all \(\varepsilon >0\) and if there does not exist a smooth solution \(\widetilde{{\mathcal {N}}}_t\) in \((0,{\widetilde{T}}]\) with \({\widetilde{T}}\ge T\) and such that \({\mathcal {N}}=\widetilde{{\mathcal {N}}}\) in (0, T).

If \(T=\infty \) in the above definition, \({\widetilde{T}}\ge T\) is supposed to mean \({\widetilde{T}}=\infty \). The maximal time interval of existence of a solution to the elastic flow will be denoted by \([0,T_{\max })\), for \(T_{\max }\in (0,+\infty ]\).

Notice that the existence of a maximal solution is granted by Theorem 3.14, Theorem 3.18 and Proposition 3.17.

4.1 Evolution of Geometric Quantities

In this section we use the following version of the Gagliardo–Nirenberg Inequality which follows from [39, Theorem 1] and a scaling argument.

Let \(\eta \) be a smooth regular curve in \({\mathbb {R}}^2\) with finite length \(\ell \) and let u be a smooth function defined on \(\eta \). Then for every \(j\ge 1\), \(p\in [2,\infty ]\) and \(n\in \{0,\ldots ,j-1\}\) we have the estimates

$$\begin{aligned} \Vert \partial _s^nu\Vert _{L^p}\le {\widetilde{C}}_{n,j,p}\Vert \partial _s^ju\Vert _{L^2}^\sigma \Vert u\Vert _{L^2}^{1-\sigma }+\frac{B_{n,j,p}}{\ell ^{j\sigma }}\Vert u\Vert _{L^2} \end{aligned}$$

where

$$\begin{aligned} \sigma =\frac{n+1/2-1/p}{j} \end{aligned}$$

and the constants \({\widetilde{C}}_{n,j,p}\) and \(B_{n,j,p}\) are independent of \(\eta \). In particular, if \(p=+\infty \),

$$\begin{aligned} {\Vert \partial _s^n u\Vert }_{L^\infty } \le {\widetilde{C}}_{n,j} {\Vert \partial _s^j u\Vert }_{L^2}^{\sigma } {\Vert u\Vert }_{L^2}^{1-\sigma }+ \frac{B_{n,j}}{\ell ^{j\sigma }}{\Vert u\Vert }_{L^2}\quad \text { with }\quad \sigma =\frac{n+1/2}{j}. \end{aligned}$$
(4.1)

We notice that in the case of a time-dependent family of curves whose lengths are uniformly bounded from below by some positive value, the Gagliardo–Nirenberg inequality holds with uniform constants.
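
The role of the term \(B_{n,j,p}\,\ell ^{-j\sigma }\Vert u\Vert _{L^2}\) is dictated by scaling: rescaling the arclength by a factor \(\lambda \) multiplies \(\Vert \partial _s^n u\Vert _{L^p}\) by \(\lambda ^{1/p-n}\), and \(\sigma \) is exactly the exponent that makes every term of the inequality scale identically. This bookkeeping can be checked mechanically in exact arithmetic (a sketch; the helper `lam_exponent` is ours, introduced only for this check):

```python
from fractions import Fraction as F

def lam_exponent(n, p):
    # power of lambda picked up by ||d_s^n u||_{L^p} under s -> lambda*s
    return F(1, p) - n

for j in range(1, 6):
    for n in range(0, j):
        for p in (2, 3, 4, 100):
            # sigma = (n + 1/2 - 1/p) / j, as in the statement above
            sigma = (F(2 * n + 1, 2) - F(1, p)) / j
            lhs = lam_exponent(n, p)
            interp = sigma * lam_exponent(j, 2) + (1 - sigma) * lam_exponent(0, 2)
            tail = -j * sigma + lam_exponent(0, 2)   # the l^{-j*sigma}||u||_2 term
            assert lhs == interp == tail
```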

By the monotonicity of the elastic energy along the flow (Section 2.5), the following result holds.

Corollary 4.2

Let \({\mathcal {N}}_t=\bigcup _{i=1}^N \gamma ^i_t\) be a maximal solution to the elastic flow with initial datum \({\mathcal {N}}_0\) in the maximal time interval \([0,T_{\max })\) and let \({\mathcal {E}}_\mu ({\mathcal {N}}_0)\) be the elastic energy of the initial datum. Then for all \(t\in (0,T_{\max })\) it holds

$$\begin{aligned} \int _{\gamma ^i_t} \vert k^i\vert ^2\,\mathrm {d}s \le \int _{{\mathcal {N}}_t} \vert k\vert ^2\,\mathrm {d}s \le {\mathcal {E}}_\mu ({\mathcal {N}}_0). \end{aligned}$$
(4.2)

Now we consider the evolution in time of the length of the curves of the network.

Lemma 4.3

Let \({\mathcal {N}}_t=\bigcup _{i=1}^N \gamma ^i_t\) be a maximal solution to the elastic flow in the maximal time interval \([0,T_{\max })\) with initial datum \({\mathcal {N}}_0\) and let \({\mathcal {E}}_\mu ({\mathcal {N}}_0)\) be the elastic energy of the initial datum. Let \(\mu ^1,\ldots ,\mu ^N>0\) and \(\mu ^*:=\min _{i=1,\ldots , N}\mu ^i\). Then for all \(t\in (0,T_{\max })\) it holds

$$\begin{aligned} \ell (\gamma ^i_t) \le \mathrm {L}({\mathcal {N}}_t) \le \frac{1}{\mu ^*}{\mathcal {E}}_\mu ({\mathcal {N}}_0). \end{aligned}$$
(4.3)

Furthermore if \({\mathcal {N}}_t\) is composed of a time dependent family of closed curves \(\gamma _t\), then for all \(t\in (0,T_{\max })\)

$$\begin{aligned} \ell (\gamma _t)\ge \frac{4\pi ^2}{{\mathcal {E}}_\mu (\gamma _0)}. \end{aligned}$$
(4.4)

Suppose instead that \(\gamma _t\) is a time dependent family of curves subjected either to Navier boundary conditions or to clamped boundary conditions with \(\gamma (t,0)=P\) and \(\gamma (t,1)=Q\) for every \(t\in [0,T_{\max })\). Then for all \(t\in (0,T_{\max })\)

$$\begin{aligned} \ell (\gamma _t)\ge \vert P-Q\vert>0 \;\text {if}\; P\ne Q \quad \text {and} \quad \ell (\gamma _t)\ge \frac{\pi ^2}{{\mathcal {E}}_\mu (\gamma _0)}>0 \;\text {if}\; P= Q. \end{aligned}$$
(4.5)

Proof

Formula (4.3) is a direct consequence of Proposition 2.20. Suppose \(\gamma _t\) is a one-parameter family of single closed curves. Then by the Gauss–Bonnet theorem and the Cauchy–Schwarz inequality we have

$$\begin{aligned} 2\pi \le \int _{\gamma _t} \vert k\vert \,\mathrm {d}s \le \left( \int _{\gamma _t} \vert k\vert ^2\,\mathrm {d}s\right) ^{1/2} \left( \int _{\gamma _t} 1\,\mathrm {d}s\right) ^{1/2} = \ell (\gamma _t)^{1/2}\left( \int _{\gamma _t} \vert k\vert ^2\,\mathrm {d}s\right) ^{1/2}, \end{aligned}$$
(4.6)

that combined with (4.2) gives (4.4). Clearly if \(\gamma _t\) is composed of a curve with fixed endpoints \(\gamma (t,0)=P\) and \(\gamma (t,1)=Q\) with \(P\ne Q\), then \(\ell (\gamma _t)\ge \vert P-Q\vert >0\). Suppose now that \(P=Q\). Then by a generalization of the Gauss–Bonnet Theorem (see [16, Corollary A.2]) to not necessarily embedded curves with coinciding endpoints it holds \(\int _{\gamma _t}\vert k\vert \,\mathrm {d}s\ge \pi \) and so repeating the chain of inequalities (4.6) one gets (4.5). \(\square \)
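
The two ingredients of (4.6), Gauss–Bonnet and Cauchy–Schwarz, can be verified numerically on a non-circular example. The sketch below discretizes an ellipse (semi-axes chosen arbitrarily) and checks that the total curvature equals \(2\pi \) and that \(\ell \int k^2\,\mathrm {d}s\ge 4\pi ^2\), with equality only for circles; the helper `integrate` is our own trapezoidal rule:

```python
import numpy as np

a, b = 2.0, 1.0                                              # sample semi-axes
t = np.linspace(0.0, 2.0 * np.pi, 200001)
g = np.sqrt((a * np.sin(t)) ** 2 + (b * np.cos(t)) ** 2)     # speed |gamma'(t)|
k = a * b / g ** 3                                           # curvature of the ellipse

def integrate(f):
    # trapezoidal rule for \int f ds = \int f(t) |gamma'(t)| dt
    h = f * g
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t)))

length = integrate(np.ones_like(t))
total_curvature = integrate(np.abs(k))   # Gauss-Bonnet: equals 2*pi
bending = integrate(k ** 2)

assert abs(total_curvature - 2.0 * np.pi) < 1e-6
assert length * bending > 4.0 * np.pi ** 2   # strict for a non-circular curve
```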

Remark 4.4

In many situations it does not seem possible to generalize the above computations to networks in order to control the lengths of the curves, either individually or globally. At the moment there are no explicit examples of networks whose curves disappear during the evolution, but we believe this scenario is possible.

Consider for example a sequence of networks composed of three curves that meet only at their endpoints in two triple junctions. In particular, suppose that the network is composed of two circular arcs of radius 1 and length \(\varepsilon \) that meet a segment (of length \(2\sin \tfrac{\varepsilon }{2}\sim \varepsilon \)) with angles of amplitude \(\frac{\varepsilon }{2}\). The energy (with \(\mu ^i=1\) for every i) of this network is \({\mathcal {E}}_\mu ({\mathcal {N}}_\varepsilon )=4\varepsilon +2\sin \tfrac{\varepsilon }{2}\), which converges to zero as \(\varepsilon \rightarrow 0\).
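
A quick numerical confirmation of the energy computation above (the function name `lens_energy` is ours; each unit-radius arc of length \(\varepsilon \) contributes \((k^2+\mu )\varepsilon =2\varepsilon \) when \(\mu =1\), the straight segment only its weighted length):

```python
import math

def lens_energy(eps, mu=1.0):
    # two unit-radius arcs of length eps (curvature 1) plus the connecting
    # segment of length 2*sin(eps/2) (curvature 0)
    arcs = 2.0 * (1.0 ** 2 + mu) * eps
    segment = mu * 2.0 * math.sin(eps / 2.0)
    return arcs + segment

assert math.isclose(lens_energy(0.3), 4 * 0.3 + 2 * math.sin(0.15))
assert lens_energy(1e-9) < 1e-8      # the energy vanishes as eps -> 0
```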

A similar behavior has been shown by Nürnberg in the following numerical examples based on the methods developed in [8] (see [38, Section 5.5] for more details). The initial datum is the standard double bubble (Fig. 1).

Fig. 1
figure 1

A numerical example of a shrinking network. The weights \(\mu ^i\) are all equal to 0.2

First the symmetric double bubble expands and then it starts flattening. The lengths of all the curves become smaller and smaller and the same happens to the amplitudes of the angles. The simulation suggests that the network shrinks to a point in finite time.

There is another example in which only the length of one curve goes to zero and the network composed of three curves becomes a “figure eight” (Fig. 2).

Fig. 2
figure 2

A numerical example of the disappearance of one curve. The weights \(\mu ^i\) are all equal to 2

Remark 4.5

If some of the weights \(\mu ^i\) in the definition of the elastic flow are equal to zero, then the \(L^2\)-norm of the curvature remains bounded, but the length of the network can go to infinity. However, during the flow of either a single closed curve or a curve with Navier boundary conditions, the length of the curve can go to infinity, but not in finite time. Suppose \(\mu =0\); in this case we denote the functional by \({\mathcal {E}}_0\). It holds

$$\begin{aligned} \frac{d}{dt}\ell (\gamma _t)&=\frac{d}{dt}\int _{\gamma _t}1\,\mathrm {d}s =\int _{\gamma _t} \partial _s T-k V\,\mathrm {d}s =T(1)-T(0)+\int _{\gamma _t} 2k\partial _s^2k+k^4\,\mathrm {d}s\\&=\int _{\gamma _t} -2\vert \partial _s k\vert ^2+k^4\,\mathrm {d}s +T(1)-T(0)+2k(1)\partial _sk(1)-2k(0)\partial _sk(0)\\&=\int _{\gamma _t} -2\vert \partial _s k\vert ^2+k^4\,\mathrm {d}s, \end{aligned}$$

indeed, in the case of a closed curve \(T(1)=T(0)\) and \(k(1)\partial _sk(1)=k(0)\partial _sk(0)\), while natural boundary conditions imply \(T(1)=T(0)=k(1)=k(0)=0\). The Gagliardo–Nirenberg inequality gives

$$\begin{aligned}&\Vert k \Vert _4\le {\widetilde{C}}\Vert \partial _sk\Vert _2^{\frac{1}{4}} \Vert k\Vert _2^{\frac{3}{4}}+\frac{B}{\ell ^{\frac{1}{4}}}\Vert k\Vert _2\le c\Vert k\Vert _2^{\frac{3}{4}}\left( \Vert \partial _sk\Vert _2^{\frac{1}{4}} +\Vert k\Vert _2^{\frac{1}{4}}\right) \\&\le 2^{{\frac{3}{4}}} c\Vert k\Vert _2^{\frac{3}{4}}\left( \Vert \partial _sk\Vert _2+\Vert k\Vert _2\right) ^{\frac{1}{4}} , \end{aligned}$$

where \(c=\max \left\{ {\widetilde{C}},B/\ell ^{\frac{1}{4}}\right\} \). Thanks to (4.4) and (4.5), we know that \(\ell \) is uniformly bounded from below away from zero and thus that the constants are independent of the length. Also, as \(\Vert k\Vert _2 \le C({\mathcal {E}}_0(\gamma _0))\), using Young's inequality we obtain

$$\begin{aligned} \Vert k \Vert _4^4&\le C\Vert k\Vert _2^3\left( \Vert \partial _sk\Vert _2 +\Vert k\Vert _2\right) \le \varepsilon C \Vert \partial _sk\Vert _2^2 +C({\mathcal {E}}_0(\gamma _0),\varepsilon ). \end{aligned}$$

By taking \(\varepsilon \) small enough we then conclude

$$\begin{aligned} \frac{d}{dt}\ell (\gamma _t)&\le \int _{\gamma _t} -2\vert \partial _s k\vert ^2+k^4\,\mathrm {d}s \le \int _{\gamma _t} -\vert \partial _sk\vert ^2\,\mathrm {d}s + C({\mathcal {E}}_0(\gamma _0)), \end{aligned}$$

thus in both cases \( \frac{d}{dt}\ell (\gamma _t)\le C({\mathcal {E}}_0(\gamma _0))\) and hence the length grows at most linearly.

Unfortunately, in the case of clamped curves we are not able to reproduce the same computation because we cannot get rid of the boundary terms \(k(1)\partial _s k(1)\) and \(k(0)\partial _s k(0)\). However, we are not aware of examples in which the length of a clamped curve evolving by the \(L^2\)-gradient flow of \({\mathcal {E}}_0\) blows up in finite time.
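
For closed curves the at-most-linear growth is far from sharp: a round circle evolving by the \(L^2\)-gradient flow of \({\mathcal {E}}_0\) has \(V=-k^3\), so (under the sign conventions of Section 2, which we assume here) its radius solves \(\dot{r}=r^{-3}\), giving \(r(t)=(r_0^4+4t)^{1/4}\) and a length growing only like \(t^{1/4}\). A small numerical sketch:

```python
import math

def radius(t, r0=1.0):
    # closed-form solution of dr/dt = 1/r^3: r(t) = (r0^4 + 4t)^(1/4)
    return (r0 ** 4 + 4.0 * t) ** 0.25

# forward-Euler integration of dr/dt = 1/r^3 up to t = 1 matches the formula
r, dt = 1.0, 1.0e-5
for _ in range(100000):
    r += dt / r ** 3
assert abs(r - radius(1.0)) < 1e-3

# the length 2*pi*r(t) stays below the linear bound l(0) + C*t with C = 2*pi/r0^3
for t in (0.5, 1.0, 5.0, 50.0):
    assert 2 * math.pi * radius(t) <= 2 * math.pi * (1.0 + t)
```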

Lemma 4.6

Let \(\gamma :[0,1]\rightarrow {{{\mathbb {R}}}}^2\) be a smooth regular curve. Then the following estimates hold:

$$\begin{aligned} \int _{\gamma }^{} |{\mathfrak {p}}_{2j+6}^{j+1}\left( k\right) |\,\mathrm {d}s&\le \varepsilon \Vert \partial _s^{j+2}k\Vert _{L^2}^2+C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{2(2j+5)}_{L^2}\right) ,\nonumber \\ \int _{\gamma }^{}|{\mathfrak {p}}_{2j+4}^j\left( k\right) |\,\mathrm {d}s&\le \varepsilon \Vert \partial _s^{j+1}k\Vert _{L^2}^2 +C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{2(2j+3)}_{L^2}\right) , \end{aligned}$$
(4.7)

for any \(\varepsilon >0\).

Proof

Every monomial of \({\mathfrak {p}}_{2j+6}^{j+1}\left( k\right) \) is of the form \(C\prod _{l=0}^{j+1}\left( \partial _s^lk\right) ^{\alpha _l}\) with \(\alpha _l\in {\mathbb {N}}\) and \(\sum _{l=0}^{j+1}\alpha _l(l+1)=2j+6\). We define \(J:=\{l\in \{0,\dots ,j+1\}:\alpha _l\ne 0\}\) and for every \(l\in J\) we set

$$\begin{aligned} \beta _l:=\frac{2j+6}{(l+1)\alpha _l}. \end{aligned}$$

We observe that \(\sum _{l\in J}^{}\frac{1}{\beta _l}=1\) and \(\alpha _l\beta _l>2\) for every \(l\in J\). Thus the Hölder inequality implies

$$\begin{aligned} C\int _{\gamma }\prod _{l\in J}^{} (\partial _s^lk)^{\alpha _l}\,\mathrm {d}s \le C\prod _{l\in J}^{}\left( \int _{\gamma }|\partial _s^lk|^{\alpha _l\beta _l}\,\mathrm {d}s\right) ^{\frac{1}{\beta _l}} =C\prod _{l\in J}^{}\Vert \partial _s^lk\Vert ^{\alpha _l}_{L^{\alpha _l\beta _l}}. \end{aligned}$$

Applying the Gagliardo–Nirenberg inequality for every \(l\in J\) yields

$$\begin{aligned} \Vert \partial _s^lk\Vert _{L^{\alpha _l\beta _l}}\le C_{l,j,\alpha _l,\beta _l}\Vert \partial _s^{j+2}k\Vert _{L^2}^{\sigma _l}\Vert k\Vert _{L^2}^{1-\sigma _l}+\frac{B_{l,j,\alpha _l,\beta _l}}{\ell (\gamma )^{(j+2)\sigma _l}}\Vert k\Vert _{L^2}, \end{aligned}$$

where for all \(l\in J\) the coefficient \(\sigma _l\) is given by

$$\begin{aligned} \sigma _l=\frac{l+1/2-1/(\alpha _l\beta _l)}{j+2}. \end{aligned}$$

We may choose

$$\begin{aligned} C=\max \left\{ C_{l,j,\alpha _l,\beta _l}, \frac{B_{l,j,\alpha _l,\beta _l}}{\ell (\gamma )^{(j+2)\sigma _l}}:l\in J \right\} . \end{aligned}$$

Since the polynomial \({\mathfrak {p}}_{2j+6}^{j+1}\left( k\right) \) consists of finitely many monomials of the above type, we can write

$$\begin{aligned} C\int _{\gamma }^{}\prod _{l\in J}^{}|\partial _s^lk|^{\alpha _l}\,\mathrm {d}s&\le C\prod _{l\in J }^{}\Vert \partial _s^lk\Vert ^{\alpha _l}_{L^{\alpha _l\beta _l}}\\&\le C\prod _{l\in J}^{}\Vert k\Vert ^{(1-\sigma _l)\alpha _l}_{L^2}\left( \Vert \partial _s^{j+2}k\Vert _{L^2}+\Vert k\Vert _{L^2}\right) ^{\sigma _l\alpha _l}\\&= C\Vert k\Vert ^{\sum _{l\in J}(1-\sigma _l)\alpha _l}_{L^2}\left( \Vert \partial _s^{j+2}k\Vert _{L^2}+\Vert k\Vert _{L^2}\right) ^{\sum _{l\in J}\sigma _l\alpha _l}. \end{aligned}$$

Moreover we have

$$\begin{aligned} \sum _{l\in J}\sigma _l\alpha _l&\le 2-\frac{1}{(j+2)^2}<2. \end{aligned}$$

Applying Young’s inequality with \(p:=\frac{2}{\sum _{l\in J}\sigma _l\alpha _l}\) and \(q:=\frac{2}{2-\sum _{l\in J}^{}\sigma _l\alpha _l}\) we obtain

$$\begin{aligned} C\int _{\gamma }^{}\prod _{l\in J}^{}|\partial _s^lk|^{\alpha _l}\,\mathrm {d}s&\le \frac{C}{\varepsilon }\Vert k\Vert ^{2\frac{\sum _{l\in J}(1-\sigma _l)\alpha _l}{2-\sum _{l\in J}^{}\sigma _l\alpha _l}}_{L^2} +\varepsilon C \left( \Vert \partial _s^{j+2}k\Vert _{L^2}+\Vert k\Vert _{L^2}\right) ^{2} \end{aligned}$$

where

$$\begin{aligned} 2\frac{\sum _{l\in J}(1-\sigma _l)\alpha _l}{2-\sum _{l\in J}^{}\sigma _l\alpha _l}&=2(2j+5). \end{aligned}$$

As C depends only on j and the length of the curve, we get choosing \(\varepsilon \) small enough

$$\begin{aligned} \int _{\gamma }|{\mathfrak {p}}_{2j+6}^{j+1}\left( k\right) |\,\mathrm {d}s&\le \varepsilon \left( \Vert \partial _s^{j+2}k\Vert _{L^2}+\Vert k\Vert _{L^2}\right) ^{2}+\frac{C}{\varepsilon }\Vert k\Vert ^{2(2j+5)}_{L^2} . \end{aligned}$$

To conclude it is enough to choose a suitable \(\varepsilon >0\). The second inequality in (4.7) can be proved in the very same way. \(\square \)
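
The Young exponent appearing above may look fragile, since it must come out the same for every monomial of \({\mathfrak {p}}^{j+1}_{2j+6}(k)\). An exhaustive exact-arithmetic check over all exponent vectors (a brute-force sketch, not part of the proof) confirms that it is always \(2(2j+5)\), independently of the monomial:

```python
from fractions import Fraction as F
from itertools import product

def astar_exponents(j):
    deg = 2 * j + 6                        # degree of p^{j+1}_{2j+6}
    out = []
    # exponent vectors (alpha_0,...,alpha_{j+1}) with sum alpha_l*(l+1) = deg
    for alpha in product(*(range(deg + 1) for _ in range(j + 2))):
        if sum(a * (l + 1) for l, a in enumerate(alpha)) != deg:
            continue
        A = sum(alpha)
        # sigma_l with alpha_l * beta_l = deg / (l+1), as in the proof
        S = sum(a * (F(2 * l + 1, 2) - F(l + 1, deg)) / (j + 2)
                for l, a in enumerate(alpha))
        assert S < 2                       # Young's inequality is applicable
        out.append(2 * (A - S) / (2 - S))
    return out

for j in range(3):
    assert all(e == 2 * (2 * j + 5) for e in astar_exponents(j))
```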

Lemma 4.7

Let \(\gamma :[0,1]\rightarrow {{{\mathbb {R}}}}^2\) be a smooth regular curve. Suppose that \(\gamma \) has a fixed endpoint of order one \(\gamma (y)\) with \(y\in \{0,1\}\). Then the following estimates hold:

$$\begin{aligned} |{\mathfrak {p}}^{j+1}_{2j+5}(k)(y)|&\le \varepsilon \Vert \partial _s^{j+2}k\Vert _{L^2}^2+C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{2(2j+5)}_{L^2} \right) ,\nonumber \\ |{\mathfrak {p}}^{j+1}_{2j+3}(k)(y)|&\le \varepsilon \Vert \partial _s^{j+2}k\Vert _{L^2}^2+C(\varepsilon ,\ell (\gamma ))\left( \Vert k\Vert _{L^2}^2+\Vert k\Vert ^{(2j+3)^2}_{L^2}\right) , \end{aligned}$$
(4.8)

for any \(\varepsilon >0\).

Proof

The term \(|{\mathfrak {p}}^{j+1}_{2j+5}(k)(y)|\) can be controlled by a sum of terms like \(C\prod _{l=0}^{j+1}\Vert \partial _s^{l}k\Vert _{L^\infty }^{\alpha _l}\) with \(\sum _{l=0}^{j+1}(l+1)\alpha _l=2j+5\). Then, for every \(l\in \{0,\ldots , j+1\}\) we use interpolation inequalities with \(p=+\infty \) to obtain

$$\begin{aligned} \Vert \partial _s^lk\Vert _{L^\infty }\le C_l\left( {\Vert \partial _s^{j+2} k\Vert }_{L^2}^{\sigma _l} {\Vert k\Vert }_{L^2}^{1-\sigma _l}+ {\Vert k\Vert }_{L^2}\right) , \end{aligned}$$

with \(\sigma _l=\frac{l+1/2}{j+2}\). Thus

$$\begin{aligned} \prod _{l=0}^{j+1}\Vert \partial _s^{l}k\Vert _{L^\infty }^{\alpha _l}&\le \, C \prod _{l=0}^{j+1}\left( {\Vert \partial _s^{j+2} k\Vert }_{L^2} +{\Vert k\Vert }_{L^2}\right) ^{\sigma _l\alpha _l} {\Vert k\Vert }_{L^2}^{(1-\sigma _l)\alpha _l}\\&\le \, C \left( {\Vert \partial _s^{j+2}k\Vert }_{L^2} + {\Vert k\Vert }_{L^2}\right) ^{\sum _{l=0}^{j+1}\sigma _l\alpha _l} {\Vert k\Vert }_{L^2}^{\sum _{l=0}^{j+1}(1-\sigma _l)\alpha _l} \end{aligned}$$

with

$$\begin{aligned} \sum _{l=0}^{j+1}\sigma _l\alpha _l&=\,\sum _{l=0}^{j+1}\alpha _l \frac{l+1/2}{j+2} =\frac{2j+5-\frac{1}{2}\sum _{l=0}^{j+1}\alpha _l}{j+2}\\&\le \, \frac{2j+5-\frac{1}{2}\sum _{l=0}^{j+1}\alpha _l(l+1)/(j+2)}{j+2} \\&=\, \frac{2j+5 -1 -\frac{1}{2(j+2)}}{j+2}= 2 - \frac{1}{2(j+2)^2}<2. \end{aligned}$$

Then by Young inequality,

$$\begin{aligned} \left( {\Vert \partial _s^{j+2} k\Vert }_{L^2} +{\Vert k\Vert }_{L^2}\right) ^{\sum _{l=0}^{j+1}\sigma _l\alpha _l} {\Vert k\Vert }_{L^2}^{\sum _{l=0}^{j+1}(1-\sigma _l)\alpha _l} \le \varepsilon \left( {\Vert \partial _s^{j+2} k\Vert }_{L^2} + {\Vert k\Vert }_{L^2}\right) ^2 + C {\Vert k\Vert }_{L^2}^{a^*} \end{aligned}$$

and the last exponent \(a^*=2\frac{\sum _{l=0}^{j+1}(1-\sigma _l)\alpha _l}{2-\sum _{l=0}^{j+1}\sigma _l\alpha _l}\) is equal to \(2(2j+5)\). Choosing a value \(\varepsilon >0\) small enough, we get the desired estimate.

Similarly \({\mathfrak {p}}^{j+1}_{2j+3}(k)(y)\) can be controlled by a sum of terms like \(C\prod _{l=0}^{j+1}\Vert \partial _s^{l}k\Vert _{L^\infty }^{\alpha _l}\) with \(\sum _{l=0}^{j+1}(l+1)\alpha _l=2j+3\). We can repeat the same proof. Also in this case \(\sum _{l=0}^{j+1}\sigma _l\alpha _l<2\): indeed

$$\begin{aligned} \sum _{l=0}^{j+1}\sigma _l\alpha _l&=\frac{2j+3-1/2\sum _{l=0}^{j+1}\alpha _l}{j+2}\\&\le \, \frac{2j+3-1/2\sum _{l=0}^{j+1}\alpha _l(l+1)/(j+2)}{j+2} \\&=\, \frac{2j+3 + 1-1 - \frac{2j+3}{2j+4}}{j+2}= 2 - \frac{1}{j+2}-\frac{2j+3}{2(j+2)^2}<2. \end{aligned}$$

This time the exponent \(a^*=2\frac{\sum _{l=0}^{j+1}(1-\sigma _l)\alpha _l}{2-\sum _{l=0}^{j+1}\sigma _l\alpha _l}\) lies in \(\left[ 2,\frac{(2j+3)^2}{2}\right) \). Indeed, writing \(A:=\sum _{l=0}^{j+1}\alpha _l\), one computes \(a^*=\frac{2\left( (j+\frac{5}{2})A-(2j+3)\right) }{1+\frac{1}{2}A}\). Because of the properties of the polynomial \({\mathfrak {p}}_{2j+3}^{j+1}(k)\) we have \(2\le A< 2j+3\). Then \(a^*\ge 2\), since this is equivalent to \(A\ge 2\), while \(a^*< \frac{2\left( (j+\frac{5}{2})(2j+3)-(2j+3)\right) }{1+\frac{1}{2}A}\le \frac{(2j+3)^2}{2}\). Now that we have ensured that \(a^*\) is bounded from below away from zero we can conclude that the desired estimate holds true. \(\square \)
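
A brute-force exact-arithmetic check of the range of the boundary exponent (a sketch, not part of the proof): we enumerate all exponent vectors of degree \(2j+3\) with derivative orders up to \(j+1\), compute \(a^*=2\sum (1-\sigma _l)\alpha _l/(2-\sum \sigma _l\alpha _l)\) with \(\sigma _l=\frac{l+1/2}{j+2}\), and verify \(2\le a^*<(2j+3)^2/2\):

```python
from fractions import Fraction as F
from itertools import product

def boundary_astars(j):
    deg = 2 * j + 3                        # degree of p^{j+1}_{2j+3}
    out = []
    for alpha in product(*(range(deg + 1) for _ in range(j + 2))):
        if sum(a * (l + 1) for l, a in enumerate(alpha)) != deg:
            continue
        A = sum(alpha)
        # S = sum alpha_l * sigma_l with sigma_l = (l + 1/2)/(j + 2)
        S = sum(a * F(2 * l + 1, 2 * (j + 2)) for l, a in enumerate(alpha))
        out.append((A, 2 * (A - S) / (2 - S)))
    return out

for j in range(3):
    for A, a_star in boundary_astars(j):
        assert A >= 2                      # no monomial is a single factor
        assert 2 <= a_star < F((2 * j + 3) ** 2, 2)
```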

Lemma 4.8

Let \(\gamma _t\) be a maximal solution in \([0,T_{\max })\) to the elastic flow of a curve with Navier boundary conditions. Then for all \(t\in [0,T_{\max })\) and for all \(n\in {\mathbb {N}}\)

$$\begin{aligned} \partial _s^{2n}k(t,0)=\partial _s^{2n}k(t,1)=0. \end{aligned}$$

Proof

We prove the result by induction. Since we required Navier boundary conditions, apart from the fixed endpoints \(\gamma (t,0)=P\) and \(\gamma (t,1)=Q\), we also have \(k(0)=k(1)=0\). Differentiating in time the first condition we get for \(y\in \{0,1\}\)

$$\begin{aligned} 0&=\partial _t \gamma (t,y)= V(t,y)\nu (t,y)+T(t,y)\tau (t,y)\\&=(-2\partial _s^2 k(t,y)-k^3(t,y)+\mu k (t,y))\nu (t,y)+T(t,y)\tau (t,y), \end{aligned}$$

that implies \(T(t,y)=0\) and \(\partial _s^2 k (t,y)=0\), and this gives the first step of the induction. Let \(n\in {\mathbb {N}}\) and suppose that \(\partial _s^{2m}k(t,0)=\partial _s^{2m}k(t,1)=0\) holds for every natural number \(m\le n\). Then making use of (2.30) we have

$$\begin{aligned} 0=\partial _t\partial _s^{2(n-1)}k(t,y)&=-2\partial _s^{2(n+1)}k(t,y) -5k^2\partial _s^{2n}k(t,y)+\mu \partial _s^{2n}k(t,y)-T\partial _s^{2n-1}k(t,y)\\&+{\mathfrak {p}}^{2n-1}_{2n+3}(k)(t,y)+\mu {\mathfrak {p}}^{2n-2}_{2n+1}(k)(t,y) =-2\partial _s^{2(n+1)}k(t,y), \end{aligned}$$

where we use the induction hypothesis, \(T(t,y)=0\) and the fact that each monomial of \({\mathfrak {p}}^{2n-1}_{2n+3}(k)\) and \(\mu {\mathfrak {p}}^{2n-2}_{2n+1}(k)\) contains at least one factor of the form \(\partial _s^{2m}k\) with \(m\le n\). \(\square \)
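
The last step of the proof rests on a parity observation: every monomial of an odd-degree polynomial in k and its derivatives must contain a derivative of even order, since factors of odd order contribute even weights \(l+1\) to the degree. A brute-force sketch over the monomials of \({\mathfrak {p}}^{2n-1}_{2n+3}(k)\):

```python
from itertools import product

def monomials(degree, max_order):
    # exponent vectors (alpha_0,...,alpha_{max_order}) with sum alpha_l*(l+1) = degree
    for alpha in product(*(range(degree + 1) for _ in range(max_order + 1))):
        if sum(a * (l + 1) for l, a in enumerate(alpha)) == degree:
            yield alpha

# odd degree forces at least one factor d_s^l k with l even: if every order l
# were odd, every weight l+1 would be even and the degree could not be odd
for n in (1, 2):
    for alpha in monomials(2 * n + 3, 2 * n - 1):
        assert any(a > 0 and l % 2 == 0 for l, a in enumerate(alpha))
```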

Lemma 4.9

Let \(\gamma _t\) be a maximal solution in \([0,T_{\max })\) to the elastic flow of a curve with clamped boundary conditions. Then for all \(t\in [0,T_{\max })\), \(y\in \{0,1\}\) and \(n\in {\mathbb {N}}\)

$$\begin{aligned} \partial _s^{4n+2}k(t,y) ={\mathfrak {p}}^{4n}_{4n+3}(k)(t,y) +\mu {\mathfrak {p}}^{4n}_{4n+1}(k)(t,y), \end{aligned}$$
(4.9)
$$\begin{aligned} \partial _s^{4n+3}k(t,y) ={\mathfrak {p}}^{4n+1}_{4n+4}(k)(t,y) +\mu {\mathfrak {p}}^{4n+1}_{4n+2}(k)(t,y). \end{aligned}$$
(4.10)

Proof

Consider first the fixed endpoints condition \(\gamma (t,0)=P\) and \(\gamma (t,1)=Q\). Differentiating in time we have, for \(y\in \{0,1\}\)

$$\begin{aligned} 0&=\partial _t \gamma (t,y)= V(t,y)\nu (t,y)+T(t,y)\tau (t,y)\\&=(-2\partial _s^2 k(t,y)-k^3(t,y)+\mu k (t,y))\nu (t,y)+T(t,y)\tau (t,y). \end{aligned}$$

Since both the normal and the tangential velocity have to be zero, we get \(T(t,y)=0\) and \(\partial _s^2 k(t,y)=\frac{\mu }{2} k(t,y)-\frac{1}{2}k^3(t,y)\): the case \(n=0\) of (4.9) holds true. Fix \(n\in {\mathbb {N}}\) and suppose that (4.9) is true for every natural number \(m\le n\). Then

$$\begin{aligned} 0&=\partial _t\left( \partial _s^{4n+2}k(t,y) +{\mathfrak {p}}^{4n}_{4n+3}(k(t,y))+\mu {\mathfrak {p}}^{4n}_{4n+1}(k(t,y))\right) \\&=-2\partial _{s}^{4n+6}k(t,y) -T(t,y)\partial _{s}^{4n+3}k(t,y) +{\mathfrak {p}}_{4n+7}^{4n+4}\left( k(t,y)\right) + \mu \,{\mathfrak {p}}_{4n+5}^{4n+4}(k(t,y))\\&\quad +{\mathfrak {p}}_{4n+7}^{4n+4}(k(t,y))+T(t,y){\mathfrak {p}}_{4n+4}^{4n+1}(k(t,y)) +\mu {\mathfrak {p}}_{4n+5}^{4n+4}(k(t,y))+\mu T(t,y){\mathfrak {p}}_{4n+2}^{4n+1}(k(t,y))\\&=-2\partial _{s}^{4n+6}k(t,y) +{\mathfrak {p}}_{4n+7}^{4n+4}\left( k(t,y)\right) + \mu \,{\mathfrak {p}}_{4n+5}^{4n+4}(k(t,y)). \end{aligned}$$

We prove also (4.10) by induction. We consider the clamped boundary condition \(\tau (t,0)=\tau _0\) and \(\tau (t,1)=\tau _1\). In this case differentiating in time we obtain

$$\begin{aligned} 0=\partial _t \tau (t,y)=(\partial _s V(t,y)+T(t,y)k(t,y))\nu (t,y), \end{aligned}$$

that implies \(0=\partial _s V(t,y) =-2\partial _s^3 k(t,y)-3k^2(t,y)\partial _sk(t,y)+\mu \partial _sk(t,y)\), which is the case \(n=0\) of (4.10). The induction step follows as in the previous situation. \(\square \)

Remark 4.10

It is not possible to generalize (4.9) and (4.10) to \(\partial _s^nk={\mathfrak {p}}^{n-2}_{n+1}(k)+\mu {\mathfrak {p}}^{n-2}_{n-1}(k)\) for arbitrary \(n\in {\mathbb {N}}\). Indeed, since we do not have any particular information on k and \(\partial _s k\) at the boundary, we cannot produce the step \(n=0\) of the induction.

Lemma 4.11

Let \(\gamma _t\) be a maximal solution to the elastic flow either of closed curves or of an open curve subjected to Navier boundary conditions with initial datum \(\gamma _0\) in the maximal time interval \([0,T_{\max })\). Let \({\mathcal {E}}_\mu (\gamma _0)\) be the elastic energy of the initial curve \(\gamma _0\). Then for all \(t\in (0,T_{\max })\) and \(j\in {\mathbb {N}}\), \(j\ge 1\) it holds

$$\begin{aligned} \frac{d}{dt}\int \frac{1}{2}\vert \partial _s^jk\vert ^2\,\mathrm {d}s \le C(j,{\mathcal {E}}_\mu (\gamma _0)). \end{aligned}$$
(4.11)

Proof

Using (2.30) we obtain

$$\begin{aligned}&\frac{d}{dt}\int _{\gamma _t} \frac{1}{2}\vert \partial _s^jk\vert ^2\,\mathrm {d}s =\int _{\gamma _t} \partial _s^jk\partial _t\partial _s^j k +\frac{1}{2}\vert \partial _s^jk\vert ^2(\partial _sT-kV)\,\mathrm {d}s \nonumber \\&\quad =\int _{\gamma _t} \partial _s^jk \left\{ -2\partial _{s}^{j+4}k -5k^2\partial _s^{j+2}k +\mu \,\partial _s^{j+2}k +{\mathfrak {p}}_{j+5}^{j+1}\left( k\right) + \mu \,{\mathfrak {p}}_{j+3}^{j}(k)+T\partial _s^{j+1}k \right\} \,\mathrm {d}s\nonumber \\&\qquad +\int _{\gamma _t} \frac{1}{2}\vert \partial _s^jk\vert ^2(\partial _sT-kV)\,\mathrm {d}s. \end{aligned}$$
(4.12)

We begin by considering the terms involving the tangential velocity: we have

$$\begin{aligned} \int _{\gamma _t}T\partial _s^jk\partial _s^{j+1}k +\frac{1}{2}\partial _sT(\partial _s^jk)^2\,\mathrm {d}s=\frac{1}{2} \left( T(t,1)(\partial _s^jk(t,1))^2-T(t,0)(\partial _s^jk(t,0))^2\right) =0,\nonumber \\ \end{aligned}$$
(4.13)

since for a closed curve \(T(t,1)(\partial _s^jk(t,1))^2=T(t,0)(\partial _s^jk(t,0))^2\) and in the case of Navier boundary conditions \(T(t,1)=T(t,0)=0\).

Integrating by parts twice the term \(\int -2\partial _s^jk\partial _s^{j+4}k\,\mathrm {d}s\) and once the term \(\int \mu \partial _s^jk\partial _s^{j+2}k-5k^2\partial _s^j k\partial _s^{j+2}k\,\mathrm {d}s\) we have

$$\begin{aligned} \frac{d}{dt}\int \frac{1}{2}\vert \partial _s^jk\vert ^2\,\mathrm {d}s&=\int -2\vert \partial _s^{j+2}k\vert ^2-\mu \vert \partial _s^{j+1}k\vert ^2 +{\mathfrak {p}}_{2j+6}^{j+1}\left( k\right) +\mu {\mathfrak {p}}_{2j+4}^j\left( k\right) \,\mathrm {d}s. \end{aligned}$$
(4.14)

In the case of open curves with Navier boundary conditions there is no boundary contribution thanks to Lemma 4.8. Combining (4.14) with (4.7) one has

$$\begin{aligned} \frac{d}{dt}\int \frac{1}{2}\vert \partial _s^jk\vert ^2\,\mathrm {d}s\le & {} \int -\vert \partial _s^{j+2}k\vert ^2-\frac{\mu }{2}\vert \partial _s^{j+1}k\vert ^2 \,\mathrm {d}s\nonumber \\&+ C\Vert k\Vert ^{2(2j+5)}_{L^2}+C\Vert k\Vert _{L^2}^2\le C(j,{\mathcal {E}}_\mu (\gamma _0)), \end{aligned}$$
(4.15)

where in the last inequality we used (4.2). \(\square \)

The case of clamped boundary conditions is trickier.

Lemma 4.12

Let \(\gamma _t\) be a maximal solution to the elastic flow subjected to clamped boundary conditions with initial datum \(\gamma _0\) in the maximal time interval \([0,T_{\max })\). Let \({\mathcal {E}}_\mu (\gamma _0)\) be the elastic energy of the initial curve \(\gamma _0\). Then for all \(t\in (0,T_{\max })\) and \(n\in {\mathbb {N}}\), \(n\ge 0\) it holds

$$\begin{aligned} \frac{d}{dt}\int \frac{1}{2}\vert \partial _s^{4n}k\vert ^2\,\mathrm {d}s \le C(n,{\mathcal {E}}_\mu (\gamma _0)). \end{aligned}$$

Proof

Consider the equality (4.12). As in the case of Navier boundary conditions, also in the case of clamped boundary conditions \(T(t,1)=T(t,0)=0\) and thus we have (4.13). Then integrating by parts the terms \(\int -2\partial _s^jk\partial _s^{j+4}k\,\mathrm {d}s\) and \(\int \mu \partial _s^jk\partial _s^{j+2}k-5k^2\partial _s^j k\partial _s^{j+2}k\,\mathrm {d}s\) appearing in (4.12) we obtain

$$\begin{aligned} \frac{d}{dt}\int \frac{1}{2}\vert \partial _s^jk\vert ^2\,\mathrm {d}s&=\int -2\vert \partial _s^{j+2}k\vert ^2-\mu \vert \partial _s^{j+1}k\vert ^2 +{\mathfrak {p}}_{2j+6}^{j+1}\left( k\right) +\mu {\mathfrak {p}}_{2j+4}^j\left( k\right) \,\mathrm {d}s\nonumber \\&\quad -2\partial _s^jk\partial _s^{j+3}k+2\partial _s^{j+1}k\partial _s^{j+2}k+\mu \partial _s^jk\partial _s^{j+1}k-5k^2\partial _s^jk\partial _s^{j+1}k\,\Big \vert ^1_0 \nonumber \\&=\int -2\vert \partial _s^{j+2}k\vert ^2-\mu \vert \partial _s^{j+1}k\vert ^2 +{\mathfrak {p}}_{2j+6}^{j+1}\left( k\right) +\mu {\mathfrak {p}}_{2j+4}^j\left( k\right) \,\mathrm {d}s\nonumber \\&\quad -2\partial _s^jk\partial _s^{j+3}k +2\partial _s^{j+1}k\partial _s^{j+2}k+{\mathfrak {p}}^{j+1}_{2j+5}(k)+\mu {\mathfrak {p}}^{j+1}_{2j+3}(k)\,\Big \vert ^1_0 .\nonumber \end{aligned}$$

Suppose \(j=4n\) with \(n\in {\mathbb {N}}\). Then, using (4.9) and (4.10)

$$\begin{aligned} \partial _s^jk\partial _s^{j+3}k&= \partial _s^{4n}k\partial _s^{4n+3}k =\partial _s^{4n}k{\mathfrak {p}}^{4n+1}_{4n+4}(k) +\mu \partial _s^{4n}k {\mathfrak {p}}^{4n+1}_{4n+2}(k)\\&={\mathfrak {p}}^{j+1}_{2j+5}(k) +\mu {\mathfrak {p}}^{j+1}_{2j+3}(k),\\ \partial _s^{j+1}k\partial _s^{j+2}k&= \partial _s^{4n+1}k\partial _s^{4n+2}k =\partial _s^{4n+1}k{\mathfrak {p}}^{4n}_{4n+3}(k)+\mu \partial _s^{4n+1}k{\mathfrak {p}}^{4n}_{4n+1}(k)\\&={\mathfrak {p}}^{j+1}_{2j+5}(k) +\mu {\mathfrak {p}}^{j+1}_{2j+3}(k). \end{aligned}$$

So, for \(j=4n\) with \(n\in {\mathbb {N}}\), combining (4.14) with (4.7) and (4.8) one has

$$\begin{aligned} \frac{d}{dt}\int \frac{1}{2}\vert \partial _s^jk\vert ^2\,\mathrm {d}s&\le \int -\frac{1}{2}\vert \partial _s^{j+2}k\vert ^2 -\frac{\mu }{4}\vert \partial _s^{j+1}k\vert ^2 \,\mathrm {d}s+ C\Vert k\Vert _{L^2}^{2(2j+5)}\\&\quad + C\Vert k\Vert _{L^2}^{(2j+3)^2}+C\Vert k\Vert _{L^2}^2\\&\le C(j,{\mathcal {E}}_\mu (\gamma _0)). \end{aligned}$$

\(\square \)

We pass now to networks. In the case of clamped boundary conditions, apart from the monotonicity of the energy, geometric estimates on the derivatives of the curvature are not known (see also Section 6).

To describe the results contained in [14, 22] for networks with junctions subjected to natural boundary conditions we need a preliminary definition.

Definition 4.13

We say that at a junction of order \(m\in {\mathbb {N}}_{\ge 2}\) the uniform non-degeneracy condition is satisfied if there exists \(\rho >0\) such that

$$\begin{aligned} \inf _{t\in [0,T_{\max })}\max _{i=1,\ldots ,m}\left\{ \left| \sin \alpha ^i(t)\right| \right\} \ge \rho , \end{aligned}$$
(4.16)

where with \(\alpha ^i\) we denote the angles between two consecutive tangent vectors of the curves concurring at the junction.

Then [22, Proposition 6.15] reads as follows:

Lemma 4.14

Let \({\mathcal {N}}(t)\) be a maximal solution to the elastic flow with initial datum \({\mathcal {N}}_0\) in the maximal time interval \([0,T_{\max })\) and let \({\mathcal {E}}_\mu ({\mathcal {N}}_0)\) be the elastic energy of the initial network. Suppose that at all the junctions (of any order \(m\in {\mathbb {N}}_{\ge 2}\)) we impose natural boundary conditions, that for \(t\in (0,T_{\max })\) the lengths of all the curves of the network \({\mathcal {N}}(t)\) are uniformly bounded away from zero, and that the uniform non-degeneracy condition is satisfied. Then for all \(t\in (0,T_{\max })\) it holds

$$\begin{aligned} \frac{d}{dt}\int _{{\mathcal {N}}_t} \left|\partial _s^2k\right|^2 \,\mathrm {d}s \le C({\mathcal {E}}_\mu ({\mathcal {N}}_0)). \end{aligned}$$
(4.17)

This lemma is proved in the case of networks with triple junctions, but a careful inspection of the proof shows that it can be adapted to junctions of any order \(m\in {\mathbb {N}}_{\ge 2}\). The structure of the proof of [22, Proposition 6.15] is the same as that of Lemma 4.11 and Lemma 4.12. The main difference is the treatment of the boundary terms, which is more intricate. The uniform bound from below on the length of each curve is needed in Lemma 4.6 and Lemma 4.7, which are both used in the proof. The uniform non-degeneracy condition allows us to express the tangential velocity at the boundary as a function of the normal velocity (see Remark 2.9). As in Lemma 4.11 and Lemma 4.12, in [22, Proposition 6.15] the tangential velocity is arbitrary.

To generalize [22, Proposition 6.15] from \(\partial _s^2k\) to \(\partial _s^{2+4j}k\) with \(j\in {\mathbb {N}}\) we must also require that the tangential velocity in the interior of the curves is a linear interpolation between the tangential velocity at the junction (given in terms of the normal velocity) and zero (for further details we refer the reader to [14]).

4.2 Long Time Existence

Theorem 4.15

(Global Existence) Let \(\mu >0\) and let

$$\begin{aligned} \gamma \in C^{\frac{4+\alpha }{4},4+\alpha }([0,T_{\max })\times [0,1]) \cap C^{\infty }([\varepsilon , T_{\max })\times [0,1]) \end{aligned}$$

be a maximal solution to the elastic flow of a single curve (either closed, or with fixed endpoints in \({\mathbb {R}}^2\)) in the maximal time interval \([0,T_{\max })\) with admissible initial datum \(\gamma _0\in C^{4+\alpha }([0,1])\). Then \(T_{\max }=+\infty \). In other words, the solution exists globally in time.

Proof

Suppose by contradiction that \(T_{\max }\) is finite. In the whole time interval \([0,T_{\max })\) the length of the curve \(\gamma _t\) is uniformly bounded from above and from below away from zero, and the \(L^2\)-norm of the curvature is uniformly bounded.

If the curve is closed or subjected to Navier boundary conditions, then (4.11) tells us that for every \(t_1,t_2\in (\varepsilon , T_{\max })\), \(t_1<t_2\)

$$\begin{aligned} \int _{\gamma _{{t}_2}} \vert \partial _s^j k\vert ^2\,\mathrm {d}s -\int _{\gamma _{t_1}} \vert \partial _s^j k\vert ^2\,\mathrm {d}s \le C{\mathcal {E}}_{\mu }(\gamma _0)\left( t_2-t_1\right) \le C{\mathcal {E}}_{\mu }(\gamma _0)\left( T_{\max }-\varepsilon \right) . \end{aligned}$$

The estimate implies \(\partial _s^j k\in L^{\infty }\left( (\varepsilon , T_{\max });L^2\right) \). In the case of clamped boundary conditions we instead get

$$\begin{aligned} \int _{\gamma _{t_2}} \vert \partial _s^4 k\vert ^2\,\mathrm {d}s -\int _{\gamma _{t_1}} \vert \partial _s^4 k\vert ^2\,\mathrm {d}s \le C{\mathcal {E}}_{\mu }(\gamma _0)\left( t_2-t_1\right) \le C{\mathcal {E}}_{\mu }(\gamma _0)\left( T_{\max }-\varepsilon \right) , \end{aligned}$$

because Lemma 4.12 holds true only when \(j\) is a multiple of 4. Again this estimate gives \(\partial _s^4 k\in L^{\infty }\left( (\varepsilon , T_{\max });L^2\right) \). Using the Gagliardo–Nirenberg inequality, for all \(t\in [0,T_{\max })\) we get

$$\begin{aligned} \Vert \partial _s k(t)\Vert _{L^2}&\le C_1\Vert \partial _s^{4}k(t)\Vert _{L^2}^\sigma \Vert k(t)\Vert _{L^2}^{1-\sigma }+ C_2 \Vert k(t)\Vert _{L^2}\le C({\mathcal {E}}_{\mu }(\gamma _0)),\\ \Vert \partial _s^{2}k(t)\Vert _{L^2}&\le C_1\Vert \partial _s^{4}k(t)\Vert _{L^2}^\sigma \Vert k(t)\Vert _{L^2}^{1-\sigma }+ C_2 \Vert k(t)\Vert _{L^2}\le C({\mathcal {E}}_{\mu }(\gamma _0)),\\ \Vert \partial _s^{3}k(t)\Vert _{L^2}&\le C_1\Vert \partial _s^{4}k(t)\Vert _{L^2}^\sigma \Vert k(t)\Vert _{L^2}^{1-\sigma }+ C_2 \Vert k(t)\Vert _{L^2}\le C({\mathcal {E}}_{\mu }(\gamma _0)), \end{aligned}$$

with constants independent of \(t\).
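For the reader's convenience, the interpolation inequalities used above are instances of the standard Gagliardo–Nirenberg inequality on an interval of uniformly bounded length; a sketch of the form used, assuming (as is standard for second to fourth order interpolation) that one may take the exponent \(\sigma =j/4\) for the \(j\)-th derivative:

```latex
% Gagliardo--Nirenberg interpolation between k and its fourth
% arclength derivative (a sketch; sigma depends on the order j):
\begin{equation*}
  \Vert \partial_s^j k \Vert_{L^2}
  \le C_1 \Vert \partial_s^4 k \Vert_{L^2}^{j/4}
          \Vert k \Vert_{L^2}^{1-j/4}
      + C_2 \Vert k \Vert_{L^2},
  \qquad j \in \{1,2,3\},
\end{equation*}
% with constants C_1, C_2 depending only on the (uniformly bounded
% above and below) length of the curve, hence independent of t.
```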

Hence in all cases (closed curves, either Navier or clamped boundary conditions), since \(\tau \in L^{\infty }\left( (\varepsilon , T_{\max }); L^\infty \right) \), by interpolation we obtain

$$\begin{aligned} k,\partial _s k,\partial _s^2 k \in L^{\infty }\left( (\varepsilon , T_{\max }); L^\infty \right) . \end{aligned}$$

Putting this observation together with the formulas

$$\begin{aligned} \varvec{\kappa }&= k\nu ,\\ \partial _s\varvec{\kappa }&= \partial _s k\nu -k^2\tau ,\\ \partial _s^2\varvec{\kappa }&= (\partial _s^2 k-k^3)\nu -3k\partial _s k\tau ,\\ \partial _s^3\varvec{\kappa }&= (\partial _s^3 k -6k^2\partial _s k)\nu -(4k\partial _s^2 k+ 3\partial _s k^2)\tau , \end{aligned}$$

we get \(\varvec{\kappa },\partial _s \varvec{\kappa }, \partial _s^2 \varvec{\kappa } \in L^{\infty }\left( (\varepsilon , T_{\max });L^\infty \right) \) and \(\partial _s^3\varvec{\kappa } \in L^{\infty }\left( (\varepsilon , T_{\max });L^2\right) \). We also get \((\partial _t\gamma )^\perp \in L^{\infty }\left( (\varepsilon , T_{\max });L^\infty \right) \).
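The formulas above follow by iterating the planar Frenet relations; a sketch of the first step, under the usual convention \(\partial _s\tau =k\nu \), \(\partial _s\nu =-k\tau \):

```latex
% Frenet relations for a planar curve parametrized by arclength:
%   \partial_s \tau = k \nu, \qquad \partial_s \nu = -k \tau.
% Differentiating the curvature vector \varvec{\kappa} = k \nu once,
\begin{equation*}
  \partial_s \varvec{\kappa}
  = \partial_s (k \nu)
  = (\partial_s k)\, \nu + k\, \partial_s \nu
  = \partial_s k\, \nu - k^2 \tau ,
\end{equation*}
% and the expressions for the higher derivatives of \varvec{\kappa}
% follow by repeating the same computation.
```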

We reparametrize \(\gamma (t)\) into \({\tilde{\gamma }}(t)\) with the property \(\vert \partial _x{\tilde{\gamma }}(t,x)\vert =\ell ({\tilde{\gamma }}(t))\) for every \(x\in [0,1]\) and for all \(t\in [0,T_{\max })\). This choice in particular implies

$$\begin{aligned} 0<c\le \inf _{t\in [0,T_{\max }), x\in [0,1]} \vert \partial _x{\tilde{\gamma }}(t,x)\vert \le \sup _{t\in [0,T_{\max }), x\in [0,1]} \vert \partial _x{\tilde{\gamma }}(t,x)\vert \le C<\infty . \end{aligned}$$

We make use of the relations

$$\begin{aligned} \varvec{\kappa }(t,x)= & {} \frac{\partial _x^2{\tilde{\gamma }}(t,x)}{\ell ({\tilde{\gamma }}(t))^2},\quad \partial _s\varvec{\kappa }(t,x) =\frac{\partial _x^3{\tilde{\gamma }}(t,x)}{\ell ({\tilde{\gamma }}(t))^3},\quad \\ \partial _s^2\varvec{\kappa }(t,x)= & {} \frac{\partial _x^4{\tilde{\gamma }}(t,x)}{\ell ({\tilde{\gamma }}(t))^4},\quad \partial _s^3\varvec{\kappa }(t,x) =\frac{\partial _x^5{\tilde{\gamma }}(t,x)}{\ell ({\tilde{\gamma }}(t))^5}\, \end{aligned}$$

to get

$$\begin{aligned} \int _0^1 \frac{\vert \partial _x^2{\tilde{\gamma }}(t,x) \vert ^2 }{\ell ({\tilde{\gamma }}(t))^3}\,\mathrm {d}x =\int _{{\tilde{\gamma }}_t} \vert \varvec{\kappa }\vert ^2\,\mathrm {d}s =\int _{{\tilde{\gamma }}_t} \vert k\vert ^2\,\mathrm {d}s \le {\mathcal {E}}_\mu (\gamma _0), \end{aligned}$$

and

$$\begin{aligned} \int _0^1 \frac{\vert \partial _x^5{\tilde{\gamma }}(t,x) \vert ^2 }{\ell ({\tilde{\gamma }}(t))^9}\,\mathrm {d}x =\int _{{\tilde{\gamma }}_t} \vert \partial _s^3\varvec{\kappa }\vert ^2\,\mathrm {d}s\le C({\mathcal {E}}_\mu (\gamma _0)). \end{aligned}$$

These estimates allow us to conclude that \(\partial _x^2{\tilde{\gamma }}, \partial _x^3{\tilde{\gamma }}, \partial _x^4{\tilde{\gamma }}\in L^{\infty }\left( (\varepsilon , T_{\max });L^\infty ((0,1))\right) \) and \(\partial _x^5{\tilde{\gamma }}\in L^{\infty }\left( (\varepsilon , T_{\max });L^2((0,1))\right) \). Moreover \((\partial _t {\tilde{\gamma }})^\perp =(\partial _t\gamma )^\perp \in L^{\infty }\left( (\varepsilon , T_{\max });L^\infty ((0,1))\right) \) implies \({\tilde{\gamma }}\in L^{\infty }\left( (\varepsilon , T_{\max });L^\infty ((0,1))\right) \). Then there exists \(\gamma _\infty (\cdot )\), the limit of \({\tilde{\gamma }}(t,\cdot )\) as \(t\rightarrow T_{\max }\), together with the limits of its derivatives up to fifth order.

The curve \(\gamma _\infty \) is an admissible initial curve: indeed it belongs to \(C^{4+\alpha }([0,1])\) and, in the case fixed endpoints are present, by continuity of \(k\) and \(\partial _s^2 k\) it holds

$$\begin{aligned} 2\partial _s^2k_{\infty }(0)+k_{\infty }^3(0)-\mu k_{\infty }(0) =2\partial _s^2k_{\infty }(1)+k_{\infty }^3(1)-\mu k_{\infty }(1) =0. \end{aligned}$$

Then there exists an elastic flow \({\overline{\gamma }}\in C^{\frac{4+\alpha }{4},4+\alpha }([T_{\max }, T_{\max }+\delta ]\times [0,1])\) with initial datum \(\gamma _\infty \) in the time interval \([T_{\max }, T_{\max }+\delta ]\) with \(\delta >0\). We reparametrize \({\overline{\gamma }}\) into \({\hat{\gamma }}\) with the property \(\vert \partial _x{\hat{\gamma }}(t,x)\vert =\ell ({\hat{\gamma }}(t))\) for every \(x\in [0,1]\) and \(t\in [T_{\max }, T_{\max }+\delta ]\). Then for every \(x\in [0,1]\)

$$\begin{aligned} \lim _{t\nearrow T_{\max }} {\tilde{\gamma }}(t,x)=\gamma _{\infty }(x) =\lim _{t\searrow T_{\max }}{\hat{\gamma }}(t,x) \end{aligned}$$

and also for \(j\in \{1,2,3,4,5\}\)

$$\begin{aligned} \lim _{t\nearrow T_{\max }} \partial _x^j{\tilde{\gamma }}(t,x)=\partial _x^j\gamma _{\infty }(x) =\lim _{t\searrow T_{\max }}\partial _x^j{\hat{\gamma }}(t,x). \end{aligned}$$

Then

$$\begin{aligned} \lim _{t\nearrow T_{\max }} \partial _t{\tilde{\gamma }}(t,x) =\lim _{t\searrow T_{\max }}\partial _t{\hat{\gamma }}(t,x). \end{aligned}$$

Thus we can define

$$\begin{aligned} g:[0,T_{\max }+\delta ]\times [0,1]\rightarrow {\mathbb {R}}^2,\qquad g(t,x):= {\left\{ \begin{array}{ll} {\tilde{\gamma }}(t,x)\quad \text {for}\; t\in [0,T_{\max })\\ {\hat{\gamma }}(t,x)\quad \text {for}\;t\in [T_{\max }, T_{\max }+\delta ], \end{array}\right. } \end{aligned}$$

which is a solution of the elastic flow in \([0, T_{\max }+\delta ]\). This contradicts the maximality of \(\gamma \).

\(\square \)

Remark 4.16

In the case of the elastic flow either of closed curves or of curves with Navier boundary conditions, with the help of Remark 4.5 it is possible to generalize the above result to the case \(\mu \ge 0\) (see [21, 46]).

Remark 4.17

In Lemma 4.11 and Lemma 4.12 we derived estimates for every derivative of \(k\). The above proof shows that it is enough to obtain the estimate of Lemma 4.11 for \(j=1,2,3\) and that of Lemma 4.12 for \(n=1\).

At the moment, for general networks we are only able to obtain the following partial result:

Theorem 4.18

(Long time behavior, [14, 22]) Let \({\mathcal {N}}_0\) be a geometrically admissible initial network. Suppose that \(\left( {\mathcal {N}}(t)\right) \) is a maximal solution to the elastic flow with initial datum \({\mathcal {N}}_0\) in the maximal time interval \([0,T_{\max })\) with \(T_{\max }\in (0,\infty )\cup \{\infty \}\). Suppose that at each junction we impose Navier boundary conditions. Then either

$$\begin{aligned} T_{\max }=+\infty , \end{aligned}$$

or at least one of the following happens:

  (i)

    the inferior limit as \(t\nearrow T_{\max }\) of the length of at least one curve of \({\mathcal {N}}(t)\) is zero;

  (ii)

    at one of the junctions it occurs that \(\liminf _{t\nearrow T_{\max }}\max _i\left\{ \left| \sin \alpha ^i(t)\right| \right\} =0\), where \(\alpha ^i(t)\) are the angles formed by the unit tangent vectors of the curves concurring at a junction.

5 Asymptotic Behavior

In this section we collect results on the asymptotic behavior of the elastic flow, that is, we analyze whether the solutions have a limit as time goes to \(+\infty \) and whether such a limit is an elastica, i.e., a critical point of the energy. The first step in this direction is the proof of the subconvergence of the flow, namely that the solution converges to an elastica along an increasing sequence of times, up to reparametrization and translation in the ambient space. We present such a result for the flow of closed curves and for a single curve with Navier or clamped boundary conditions.

Proposition 5.1

(Subconvergence) Let \(\mu >0\) and let \(\gamma _t\) be a solution of the elastic flow of closed curves in \([0,+\infty )\) with initial datum \(\gamma _0\). Then, up to subsequences, reparametrization, and translations, the curve \(\gamma _t\) converges to an elastica as \(t\rightarrow \infty \).

Proof

We recall that thanks to (4.2) and (4.4), for every \(t\in [0,+\infty )\) the length \(\ell (\gamma _t)\) is uniformly bounded from above and from below away from zero by constants that depend only on the energy of the initial datum \({\mathcal {E}}_{\mu }(\gamma _0)\) and on \(\mu \). We can rewrite inequality (4.15) as

$$\begin{aligned} \frac{d}{dt}\int _{\gamma (t)} \frac{1}{2}\vert \partial _s^jk\vert ^2\, \mathrm {d}s + \int _{\gamma (t)} \vert \partial _s^{j+2}k\vert ^2\, \mathrm {d}s \le C({\mathcal {E}}_\mu (\gamma _0)). \end{aligned}$$

Using interpolation inequalities (with constants \(c_1,c_2\) independent of time) for every \(j\in {\mathbb {N}}\) we get

$$\begin{aligned} \frac{d}{dt}\int _{\gamma (t)} \vert \partial _s^jk\vert ^2\, \mathrm {d}s + \int _{\gamma (t)} \vert \partial _s^{j}k\vert ^2\, \mathrm {d}s&\le \frac{d}{dt}\int _{\gamma (t)} \vert \partial _s^jk\vert ^2\, \mathrm {d}s\\&\quad + c_1 \int _{\gamma (t)} \vert \partial _s^{j+2}k\vert ^2\, \mathrm {d}s +c_2 \int _{\gamma (t)} \vert k \vert ^2\, \mathrm {d}s\\&\le C({\mathcal {E}}_\mu (\gamma _0)). \end{aligned}$$

By comparison we obtain

$$\begin{aligned} \int _{\gamma (t)} \vert \partial _s^jk\vert ^2\,\mathrm {d}s\le \int _{\gamma _0} \vert \partial _s^jk\vert ^2\,\mathrm {d}s +C({\mathcal {E}}_\mu (\gamma _0)). \end{aligned}$$
(5.1)
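The comparison argument leading to (5.1) is the elementary observation that the differential inequality above propagates an upper bound; a sketch:

```latex
% Set f(t) := \int_{\gamma(t)} |\partial_s^j k|^2 \, ds, so that the
% previous display reads f'(t) + f(t) \le C. Multiplying by e^t and
% integrating in time,
\begin{equation*}
  f(t) \le e^{-t} f(0) + C\,(1 - e^{-t}) \le f(0) + C
  \qquad \text{for all } t \ge 0,
\end{equation*}
% which is exactly estimate (5.1).
```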

Hence by Sobolev embedding we get that \(\Vert \partial _s^jk\Vert _{L^\infty }\) is uniformly bounded in time, for any \(j\in {\mathbb {N}}\). By the Arzelà–Ascoli Theorem, up to subsequences and reparametrizations, there exists the limit \(\lim _{i\rightarrow \infty } \partial _s^j k_{t_i}=:\partial _s^j k_{\infty }\) uniformly on [0, 1], for some sequence of times \(t_i\rightarrow +\infty \). Thus, for a suitable choice of points \(p_i\in {{{\mathbb {R}}}}^2\) such that \(\gamma (t_i, 0)-p_{i}\) is the origin \(O\) of \({\mathbb {R}}^2\), the sequence of curves \(\gamma (t_i,\cdot )-p_i\) converges (up to reparametrizations) smoothly to a limit curve \(\gamma _{\infty }\) with \(\gamma _{\infty }(0)=O\) and \(0<c\le \ell (\gamma _{\infty })\le C <\infty \).

It remains to show that the limit curve is a stationary point for the energy \({\mathcal {E}}_\mu \). Let \(V{:}{=}(\partial _t\gamma )^\perp = -2 (\partial _s^\perp )^2 \varvec{\kappa } - |\varvec{\kappa }|^2\varvec{\kappa }+\mu \varvec{\kappa }\) and \(v(t):=\int _{\gamma (t)} |V|^2\,\mathrm {d}s\). By Proposition 2.20 we know that \(\partial _t {\mathcal {E}}_\mu (\gamma (t,\cdot ))=-v(t)\), thus

$$\begin{aligned} \int _{0}^\infty v(t)\,\mathrm {d}t= {\mathcal {E}}_\mu (\gamma (0, \cdot ))-\lim _{t\rightarrow \infty }{\mathcal {E}}_\mu (\gamma (t, \cdot ))\le {\mathcal {E}}_\mu (\gamma _0) , \end{aligned}$$
(5.2)

and then \(v\in L^1(0,\infty )\). By (5.1) we can estimate

$$\begin{aligned} \vert \partial _t{v}(t)\vert \le C(\mu ,\gamma _0). \end{aligned}$$

Therefore \(v(t)\rightarrow 0\) as \(t\rightarrow +\infty \) and then the limit curve is a critical point. \(\square \)
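The last step uses a standard real-analysis fact: a Lipschitz function in \(L^1(0,\infty )\) must vanish at infinity. A sketch of the argument:

```latex
% Suppose, towards a contradiction, v(t_i) \ge \delta > 0 along some
% sequence t_i \to \infty. Since |v'| \le C, we get v \ge \delta/2 on
% an interval of length \delta/C centered at each t_i. Passing to a
% subsequence with disjoint intervals,
\begin{equation*}
  \int_0^\infty v(t)\,\mathrm{d}t
  \;\ge\; \sum_i \frac{\delta}{2}\cdot\frac{\delta}{C} \;=\; \infty,
\end{equation*}
% contradicting v \in L^1(0,\infty). Hence v(t) \to 0 as t \to \infty.
```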

Notice that in the previous proof we cannot hope for a (uniform in time) bound on the supremum norm of \(\gamma \) itself. In fact all the parabolic estimates are obtained on the curvature vector of the evolving curve. This means that we are not yet able to exclude that the flow leaves any compact set as \(t\rightarrow \infty \).

Proposition 5.2

(Subconvergence) Let \(\mu >0\) and let \(\gamma _t\) be a solution in \([0,+\infty )\) of the elastic flow of open curves subjected either to Navier or clamped boundary conditions. Then, up to subsequences and reparametrization, the curve \(\gamma _t\) converges to an elastica as \(t\rightarrow \infty \).

Proof

Whatever boundary conditions we consider, the length of \(\gamma _t\) is bounded from above by (4.3) and from below away from zero by (4.5). Furthermore, since the endpoints are fixed, the evolving curve remains in a ball of center \(\gamma (t,0)=P\) and radius \(2{\mathcal {E}}_\mu (\gamma _0)\), and so for every \(t\in [0,+\infty )\) it holds \(\sup _{x\in [0,1]}\vert \gamma (t,x)\vert <C\) with \(C\) independent of time.

Consider the case of Navier boundary conditions: as in the proof of Proposition 5.1 we obtain that \(\Vert \partial _s^j k\Vert _{L^\infty }\) is uniformly bounded in time, for every \(j\in {\mathbb {N}}\). In the clamped case instead we have that \(\partial _s^j k\in L^{\infty }\left( (0,\infty );L^{2}\right) \) only when \(j\) is a multiple of 4. However, by interpolation, we get that \(\Vert \partial _s^j k\Vert _{L^\infty }\) is uniformly bounded in time, for every \(j\in {\mathbb {N}}\). Therefore, up to subsequences and reparametrization, the curves \(\gamma (t_i,\cdot )\) converge smoothly to a limit curve \(\gamma _\infty \) for some sequence of times \(t_i\rightarrow +\infty \). One can show that \(\gamma _\infty \) is a critical point exactly as in Proposition 5.1.

\(\square \)

By the same methods, it is also possible to prove the following subconvergence result.

Proposition 5.3

([14]) Let \({\mathcal {N}}_0\) be a geometrically admissible initial network composed of three curves that meet at a triple junction. Suppose that \({\mathcal {N}}_t\) is a maximal solution to the elastic flow with initial datum \({\mathcal {N}}_0\) in the maximal time interval \([0,+\infty )\). Suppose that along the flow the lengths of the three curves are uniformly bounded from below, that at the junction Navier boundary conditions are imposed, and that the uniform non-degeneracy condition is fulfilled. Then it is possible to find a sequence of times \(t_i\rightarrow \infty \) such that the networks \({\mathcal {N}}_{t_i}\) converge, after an appropriate reparametrization, to a critical point of the energy \({\mathcal {E}}_\mu \) with Navier boundary conditions.

From now on we restrict ourselves to the case of closed curves and we aim to improve the subconvergence result of Proposition 5.1 to full convergence of the flow. More precisely, we want to prove that the solution of the elastic flow of closed curves admits a full limit as time increases and that such a limit is a critical point.

Recall that when we say that \(\gamma :[0,1]\rightarrow {{{\mathbb {R}}}}^2\) is a smooth closed curve, periodic conditions at the endpoints are understood. More precisely, it holds that \(\partial _x^k\gamma (0)=\partial _x^k\gamma (1)\) for any \(k\in {\mathbb {N}}\). Therefore we can regard a closed smooth curve simply as a smooth immersion \(\gamma :{\mathbb {S}}^1\rightarrow {\mathbb {R}}^2\) of the unit circle \({\mathbb {S}}^1\simeq [0,2\pi ]/_\sim \). In this section we shall adopt this notation as a shortcut for recalling that periodic boundary conditions are assumed. Moreover, for the sake of simplicity, we assume that the constant weight on the length in the functional \({\mathcal {E}}_\mu \) is \(\mu =1\) and we write \({\mathcal {E}}\).

Now we can state the result about the full convergence of the flow.

Theorem 5.4

(Full convergence [35, 47]) Let \(\gamma :[0,+\infty )\times {{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) be the smooth solution of the elastic flow with initial datum \(\gamma (0,\cdot )=\gamma _0(\cdot )\), that is a smooth closed curve.

Then there exists a smooth critical point \(\gamma _\infty \) of \({\mathcal {E}}\) such that \(\gamma (t,\cdot )\rightarrow \gamma _\infty (\cdot )\) in \(C^m({{{\mathbb {S}}}}^1)\) for any \(m\in {\mathbb {N}}\), up to reparametrization. In particular, the support \(\gamma (t,{{{\mathbb {S}}}}^1)\) stays in a compact set of \({{{\mathbb {R}}}}^2\) for any time.

Let us remark again that sub-convergence of a flow is a consequence of the parabolic estimates that one can prove. However this fact is not sufficient for proving the existence of a full limit of \(\gamma (t,\cdot )\) as \(t\rightarrow +\infty \) in any reasonable notion of convergence. We observe that sub-convergence does not even rule out the possibility that for different sequences of times \(t_j,\tau _j \nearrow +\infty \) and points \(p_j,q_j\in {{{\mathbb {R}}}}^2\), the curves \(\gamma (t_j,\cdot )-p_j\) and \(\gamma (\tau _j,\cdot )-q_j\) converge to different critical points. Nor does sub-convergence imply that the flow remains in a compact set for all times. This last fact, that is, uniform boundedness of the flow in \({{{\mathbb {R}}}}^2\), is not a trivial property for fourth order equations like the elastic flow. Indeed, such evolution equations lack a maximum principle and therefore uniform boundedness of the flow in \({{{\mathbb {R}}}}^2\) cannot be deduced by comparison with other solutions.

However, all these properties will follow at once from the proof of Theorem 5.4, that is, from the full convergence of the flow.

The proof of Theorem 5.4 is based on several intermediate results. The strategy is rather general and the main steps can be summed up as follows. First we need to set up a suitable functional analytic framework in which the energy functional and its first and second variations are considered in a neighborhood of an arbitrary critical point. In this setting we prove some variational properties that are needed in order to produce a Łojasiewicz–Simon gradient inequality. Such an inequality estimates the difference in energy between the critical point and points in a neighborhood of it in terms of the operator norm of the first variation. As the first variation functional coincides with the driving velocity of the flow, this furnishes an additional estimate. Such an estimate will finally be applied to the flow as follows. Since we know that the flow subconverges, it passes arbitrarily close to critical points of the energy at large times. The application of the Łojasiewicz–Simon inequality will then imply that, once the flow passes sufficiently close to a critical point, it is no longer able to “escape” from a neighborhood of that critical point. This eventually implies the convergence of the flow.
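In the setting introduced below, the Łojasiewicz–Simon gradient inequality alluded to above takes the following typical form (stated here only as a sketch; the precise exponent, constants, and neighborhood are as in the abstract results of [9]):

```latex
% Lojasiewicz--Simon gradient inequality near a critical point:
% there exist C > 0, \theta \in (0, 1/2] and a ball around 0 in
% H^{4,\perp}_\gamma such that, for every \psi in that ball,
\begin{equation*}
  |E(\psi) - E(0)|^{1-\theta}
  \le C \, \Vert \delta E(\psi) \Vert ,
\end{equation*}
% where the norm of the first variation is taken in a suitable dual
% space. Along the flow this controls the decay of the energy and
% yields the full convergence stated in Theorem 5.4.
```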

The use of this kind of inequality for proving convergence of solutions to parabolic equations goes back to the seminal work of Simon [50]. The Łojasiewicz–Simon inequality we shall employ follows from the abstract results of [9] and the method is inspired by the strategy used in [10], where the authors study the Willmore flow of closed surfaces. We mention that another successful application of this strategy is contained in [18], where the authors study open curves with clamped boundary conditions. Finally, a recent application of this strategy to the flow of generalized elastic energies on Riemannian manifolds is contained in [47]. We remark that this strategy is rather general and it can be applied to many different geometric flows. We refer to [35] for a more detailed exposition of the method, in which the authors stress the crucial general ingredients needed to run the argument.

Now we introduce the above mentioned framework. Observe that for any fixed smooth curve \(\gamma :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) there exists \(\rho (\gamma )>0\) such that if \(\psi :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) is a field of class \(H^4\) with \(\Vert \psi \Vert _{H^4}\le \rho \), then \(\gamma + \psi \) is a regular curve of class \(H^4\). Whenever \(\gamma \) is fixed, we will always assume that \(\rho =\rho (\gamma )>0\) is sufficiently small so that \(\gamma +\psi \) is a regular curve for any field \(\psi \) with \(\Vert \psi \Vert _{H^4}\le \rho \). Hence the following definition is well posed.

Definition 5.5

Let \(\gamma :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) be a regular smooth closed curve and \(\rho =\rho (\gamma )>0\) sufficiently small. We define

$$\begin{aligned} H^{m,\perp }_\gamma {:}{=}\left\{ \psi \in H^m({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^2) \,\,|\,\, \langle \psi (x),\tau (x)\rangle =0 \,\,\, \text{ a.e. } \text{ on } \,{{{\mathbb {S}}}}^1 \right\} , \end{aligned}$$

for any \(m\in {\mathbb {N}}\) and we denote \(L^{2,\perp }_\gamma \equiv H^{0,\perp }_\gamma \). Moreover we define

$$\begin{aligned} E:B_\rho (0)\subseteq H^{4,\perp }_\gamma \rightarrow {{{\mathbb {R}}}}\quad E(\psi ){:}{=}{\mathcal {E}}(\gamma +\psi ). \end{aligned}$$

For a fixed smooth curve \(\gamma :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\), the functional \(E\) is defined on an open subset of a Hilbert space. Therefore, we can treat the first and second variations of \({\mathcal {E}}\) as functionals over such Hilbert spaces. More precisely, by classical functional analysis, if \(F:B_\rho \subseteq V\rightarrow {{{\mathbb {R}}}}\) is a twice Gateaux differentiable functional defined on a ball \(B_\rho \) in a Banach space \(V\), then \(\delta F: B_\rho \rightarrow V^\star \) and \(\delta ^2 F: B_\rho \rightarrow L(V,V^\star )\), where

$$\begin{aligned}&\delta F (v)[w] {:}{=}\frac{d}{d\varepsilon } F(v + \varepsilon w) \Big \vert _{\varepsilon =0},\\&\delta ^2 F (v) [w][z]{:}{=}\delta ^2 F (v) [w,z] {:}{=}\frac{d}{d\eta }\frac{d}{d\varepsilon } F(v + \varepsilon w + \eta z) \Big \vert _{\varepsilon =0}\Big \vert _{\eta =0}, \end{aligned}$$

where \((\cdot )^\star \) denotes the dual space of \((\cdot )\) and \(L(V,V^\star )\) is the space of bounded linear operators from \(V\) to \(V^\star \). We adopted the notation \(\delta ^2 F (v) [w,z]\) because the second variation can also be seen as a symmetric bilinear form on \(V\); indeed \(\delta ^2 F (v) [w][z]=\delta ^2 F (v) [z][w]\). In such a setting we have

$$\begin{aligned} \delta E : B_\rho (0) \subseteq H^{4,\perp }_\gamma \rightarrow (H^{4,\perp }_\gamma )^\star \qquad \qquad \delta E(\psi )[\varphi ] {:}{=}\frac{d}{d\varepsilon } E(\psi + \varepsilon \varphi ) \Big \vert _{\varepsilon =0}, \end{aligned}$$

and the second variation functional

$$\begin{aligned} \begin{aligned}&\delta ^2 E : B_\rho (0) \subseteq H^{4,\perp }_\gamma \rightarrow L(H^{4,\perp }_\gamma ,(H^{4,\perp }_\gamma )^\star ) \\&\delta ^2 E(\psi )[\varphi ,\zeta ] {:}{=}\frac{d}{d\eta }\frac{d}{d\varepsilon } E(\psi + \varepsilon \varphi + \eta \zeta ) \Big \vert _{\varepsilon =0}\Big \vert _{\eta =0}. \end{aligned} \end{aligned}$$

We will actually only need to consider \(\delta ^2 E\) evaluated at \(\psi =0\), that is, the second variation of \({\mathcal {E}}\) at the given curve \(\gamma \).

The study carried out in Section 2.1 shows that the functional E is Fréchet differentiable with

$$\begin{aligned} \delta E (\psi ) [\varphi ] = \int _{{{{\mathbb {S}}}}^1} \langle 2(\partial _s^\perp )^2 \varvec{\kappa }_{\gamma +\psi } +|\varvec{\kappa }_{\gamma +\psi }|^2\varvec{\kappa }_{\gamma +\psi } - \varvec{\kappa }_{\gamma +\psi }, \varphi \rangle \,\mathrm {d}s, \end{aligned}$$

where \(\varvec{\kappa }_{\gamma +\psi }\) is the curvature vector of the curve \(\gamma +\psi \) and both the arclength derivative \(\partial _s\) and the measure \(\mathrm {d}s\) are understood with respect to the curve \(\gamma +\psi \). The explicit formula for such a first variation functional shows that actually \(\delta E(\psi ) \) belongs to the smaller dual space \((L^{2,\perp }_\gamma )^\star \). Indeed \(\delta E(\psi ) \) is represented in \(L^2\)-duality as

$$\begin{aligned} \begin{aligned} \delta E(\psi )[\varphi ]&= \left\langle |\gamma '+\psi '| \left( 2(\partial _s^\perp )^2 \varvec{\kappa }_{\gamma +\psi } +|\varvec{\kappa }_{\gamma +\psi }|^2\varvec{\kappa }_{\gamma +\psi } - \varvec{\kappa }_{\gamma +\psi } \right) , \varphi \right\rangle _{L^2(\mathrm {d}x)}, \end{aligned} \end{aligned}$$
(5.3)

for any \(\varphi \in L^{2,\perp }_\gamma \), where the derivative \(\partial _s^\perp \) is understood with respect to the curve \(\gamma +\psi \).

Moreover, the results in Section 2.2 similarly imply that the second variation \(\delta ^2 E (0) [\varphi ,\cdot ]\) evaluated at some \(\varphi \in H^{4,\perp }_\gamma \) belongs to the smaller dual space \((L^{2,\perp }_\gamma )^\star \) via the pairing

$$\begin{aligned} \delta ^2 E (0) [\varphi ,\zeta ] = \left\langle |\gamma '| \left( (\partial _s^\perp )^4 \varphi + \Omega (\varphi ) \right) ,\zeta \right\rangle _{L^2(\mathrm {d}x)}, \end{aligned}$$
(5.4)

for any \(\zeta \in L^{2,\perp }_\gamma \), where \(\Omega :H^{4,\perp }_\gamma \rightarrow L^{2,\perp }_\gamma \) is a compact operator and the derivative \(\partial _s^\perp \) is understood with respect to the curve \(\gamma \).

In this setting, we can prove the following properties on the first and second variations.

Proposition 5.6

Let \(\gamma :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) be a regular smooth closed curve and \(\rho =\rho (\gamma )>0\) sufficiently small. Then the following holds true.

  1.

    The functions \(E:B_\rho \subseteq H^{4,\perp }_\gamma \rightarrow {{{\mathbb {R}}}}\) and \(\delta E: B_\rho \subseteq H^{4,\perp }_\gamma \rightarrow (L^{2,\perp }_\gamma )^\star \) are analytic.

  2.

    The second variation operator \(\delta ^2 E(0): H^{4,\perp }_\gamma \rightarrow (L^{2,\perp }_\gamma )^\star \), which is defined by

    $$\begin{aligned} \delta ^2 E(0)[\varphi ][\zeta ] {:}{=}\delta ^2 E(0)[\varphi ,\zeta ] \qquad \forall \varphi \in H^{4,\perp }_\gamma ,\,\,\forall \,\zeta \in L^{2,\perp }_\gamma , \end{aligned}$$

    is a Fredholm operator of index zero, i.e., \(\mathrm{dim } \, \ker \delta ^2 E(0) = \mathrm{codim }\, (\mathrm{Im } \, \delta ^2 E(0))\) is finite.

Proof

The fact that both E and \(\delta E\) are analytic maps follows from the fact that such operators are compositions and sums of analytic functions. For a detailed proof of this fact we refer to [18, Lemma 3.4].

Now we prove the second statement. Consider the operator \({\mathcal {L}}:H^{4,\perp }_\gamma \rightarrow L^{2,\perp }_\gamma \) defined by

$$\begin{aligned} {\mathcal {L}}(\varphi ) = |\gamma '|(\partial _s^\perp )^4 \varphi + |\gamma '|\Omega (\varphi ), \end{aligned}$$

where \(\Omega \) is as in (5.4). We clearly have that \(\delta ^2 E(0): H^{4,\perp }_\gamma \rightarrow (L^{2,\perp }_\gamma )^\star \) is Fredholm of index zero if and only if \({\mathcal {L}}\) is. Since \(|\gamma '|\Omega (\cdot )\) is compact, the thesis is then equivalent to saying that \( H^{4,\perp }_\gamma \ni \,\varphi \mapsto |\gamma '| (\partial _s^\perp )^4 \varphi \, \in L^{2,\perp }_\gamma \) is Fredholm of index zero (see [24, Section 19.1, Corollary 19.1.8]). Since \(|\gamma '|\) is uniformly bounded away from zero, the thesis is equivalent to proving that the operator

$$\begin{aligned} (\partial _s^\perp )^4: H^{4,\perp }_\gamma \rightarrow L^{2,\perp }_\gamma \end{aligned}$$

is Fredholm of index zero. By [24, Corollary 19.1.8], as \(\mathrm{id} : H^{4,\perp }_\gamma \rightarrow L^{2,\perp }_\gamma \) is compact, this is equivalent to show that the operator

$$\begin{aligned} \mathrm{id} + (\partial _s^\perp )^4: H^{4,\perp }_\gamma \rightarrow L^{2,\perp }_\gamma \end{aligned}$$

is Fredholm of index zero. In fact, we prove that \(\mathrm{id} + (\partial _s^\perp )^4\) is even invertible.

Injectivity follows since, if \(\varphi + (\partial _s^\perp )^4 \varphi =0\), then multiplying by \(\varphi \) and integrating by parts gives

$$\begin{aligned} \int |(\partial _s^\perp )^2\varphi |^2 + |\varphi |^2 \, \mathrm {d}s = 0, \end{aligned}$$

and then \(\varphi =0\).

Let now \(X\in L^{2,\perp }_\gamma \) be any vector field and consider the continuous functional \(F:H^{2,\perp }_\gamma \rightarrow {{{\mathbb {R}}}}\) defined by

$$\begin{aligned} F(\varphi ) {:}{=}\int _{{{{\mathbb {S}}}}^1} \frac{1}{2} |(\partial _s^\perp )^2\varphi |^2 + \frac{1}{2} |\varphi |^2 - \langle \varphi , X\rangle \,\mathrm {d}s. \end{aligned}$$

An explicit computation shows that

$$\begin{aligned} (\partial _s^\perp )^2\varphi = \partial ^2_s\varphi + \left( 2\langle \partial _s\varphi ,\varvec{\kappa }\rangle + \langle \varphi , \partial _s \varvec{\kappa }\rangle \right) \tau - \langle \partial _s\varphi ,\tau \rangle \varvec{\kappa }. \end{aligned}$$
(5.5)

Since \(\int |\partial _s\varphi |^2\,\mathrm {d}s = - \int \langle \varphi ,\partial ^2_s\varphi \rangle \,\mathrm {d}s \le \varepsilon \int |\partial ^2_s\varphi |^2\,\mathrm {d}s + C(\varepsilon ) \int |\varphi |^2\,\mathrm {d}s\) by Young's inequality, estimating \(|\partial ^2_s\varphi |^2\) by means of (5.5) yields that

$$\begin{aligned} \int _{{{{\mathbb {S}}}}^1} |\varphi |^2 + |\partial _s\varphi |^2 + |\partial ^2_s\varphi |^2 \, \mathrm {d}s \le C(\gamma ) \int _{{{{\mathbb {S}}}}^1} \frac{1}{2} |(\partial _s^\perp )^2\varphi |^2 + \frac{1}{2} |\varphi |^2 \, \mathrm {d}s. \end{aligned}$$

Therefore, by the direct method in the Calculus of Variations, it follows that there exists a minimizer \(\phi \) of F in \(H^{2,\perp }_\gamma \). In particular \(\phi \) solves

$$\begin{aligned} \int _{{{{\mathbb {S}}}}^1} \langle (\partial _s^\perp )^2 \phi , (\partial _s^\perp )^2 \varphi \rangle + \langle \phi ,\varphi \rangle \,\mathrm {d}s = \int _{{{{\mathbb {S}}}}^1} \langle X, \varphi \rangle \,\mathrm {d}s, \end{aligned}$$
(5.6)

for any \(\varphi \in H^{2,\perp }_\gamma \). If we show that \(\phi \in H^{4,\perp }_\gamma \), then \(\phi + (\partial _s^\perp )^4\phi = X\) and surjectivity will be proved. This follows by standard elliptic regularity arguments: once the integrand of the functional F is written in terms of \(\partial _s^2\varphi \), \(\partial _s\varphi \) and \(\varphi \) by means of Equation (5.5), its dependence on the highest order term \(\partial _s^2\varphi \) is quadratic and the “coefficients” are given by the geometric quantities of \(\gamma \), which is smooth.

\(\square \)
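As a toy illustration of the invertibility of \(\mathrm{id}+(\partial _s^\perp )^4\), consider the flat model case in which the curvature terms of (5.5) are ignored: on the circle the operator \(\mathrm{id}+\partial _s^4\) acts diagonally on Fourier modes with symbol \(1+k^4\ge 1\), so it can be inverted mode by mode. The following sketch (not part of the proof; the right-hand side X is a hypothetical choice) checks this numerically:

```python
import numpy as np

# Flat-case check of the surjectivity step: on the circle, ignoring the
# curvature terms of (5.5), the operator id + d^4/ds^4 is diagonal on
# Fourier modes with symbol 1 + k^4 >= 1, hence invertible.
N = 256
x = 2 * np.pi * np.arange(N) / N

# A hypothetical right-hand side X (any smooth periodic function works).
X = np.cos(3 * x) + 0.5 * np.sin(7 * x)

k = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies
phi_hat = np.fft.fft(X) / (1.0 + k**4)     # solve (1 + k^4) phi_hat = X_hat
phi = np.real(np.fft.ifft(phi_hat))

# Check that phi + phi'''' reproduces X, differentiating spectrally.
phi4 = np.real(np.fft.ifft(k**4 * np.fft.fft(phi)))
assert np.allclose(phi + phi4, X, atol=1e-10)
```

The division by the symbol also shows the gain of four derivatives: each mode of the solution is damped by \(1+k^4\), which is the flat analog of the regularity statement \(\phi \in H^{4,\perp }_\gamma \).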

We remark that it is essential to employ normal fields in the proof of the Fredholmness properties of \(\delta ^2 E(0)\) in Proposition 5.6 in order to rule out the tangential degeneracy related to the geometric nature of the energy functional.

The above analysis of the second variation is exactly what is needed in order to derive a Łojasiewicz–Simon gradient inequality. More precisely, we rely on the following functional analytic result, a corollary of the results in [9], which we state without proof.

Proposition 5.7

([47, Corollary 2.6]). Let \(E:B_{\rho _0}(0) \subseteq V \rightarrow {{{\mathbb {R}}}}\) be an analytic map, where V is a Banach space and 0 is a critical point of E. Suppose that we have a Banach space \(W=Z^\star \hookrightarrow V^\star \), where \(V\hookrightarrow Z\), for some Banach space Z, that \(\mathrm {Im}\,\delta E\subseteq W\) and that the map \(\delta E:B_{\rho _0}(0)\rightarrow W\) is analytic. Assume also that \(\delta ^2 E(0)\in L(V,W)\) and that it is Fredholm of index zero.

Then there exist constants C, \(\rho >0\) and \(\theta \in (0,1/2]\) such that

$$\begin{aligned} |E(\psi )-E(0)|^{1-\theta }\le C \Vert \delta E(\psi ) \Vert _{W}, \end{aligned}$$

for any \(\psi \in B_\rho (0)\subseteq B_{\rho _0}(0)\).
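A one-dimensional example may help the reader: for the analytic function \(f(x)=x^4\), which has a degenerate critical point at 0, the inequality of Proposition 5.7 holds with \(\theta =1/4\) and \(C=1/4\), since \(|x^4|^{3/4}=|x|^3=\tfrac{1}{4}|4x^3|\). A quick numerical check of this toy instance:

```python
import numpy as np

# Scalar illustration of the Lojasiewicz inequality: for f(x) = x^4 with
# critical point 0, |f(x) - f(0)|^{1-theta} <= C |f'(x)| holds with
# theta = 1/4 and C = 1/4, since |x^4|^{3/4} = |x|^3 = (1/4)|4x^3|.
theta = 0.25
C = 0.25
x = np.linspace(-1.0, 1.0, 2001)
lhs = np.abs(x**4) ** (1.0 - theta)
rhs = C * np.abs(4.0 * x**3)
assert np.all(lhs <= rhs + 1e-12)
```

Note that the exponent \(\theta \) depends on the degeneracy of the critical point: for the non-degenerate case \(f(x)=x^2\) one may take \(\theta =1/2\), the largest value allowed by the proposition.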

We can use Proposition 5.7 in order to derive a Łojasiewicz–Simon inequality for our elastic functional \({\mathcal {E}}\).

Corollary 5.8

(Łojasiewicz–Simon gradient inequality). Let \(\gamma :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) be a smooth critical point of \({\mathcal {E}}\). Then there exist \(C,\sigma >0\) and \(\theta \in (0,\tfrac{1}{2}]\) such that

$$\begin{aligned} |{\mathcal {E}}(\gamma +\psi )-{\mathcal {E}}(\gamma )|^{1-\theta }\le C \Vert \delta E (\psi ) \Vert _{(L^{2,\perp }_\gamma )^\star }, \end{aligned}$$

for any \(\psi \in B_\sigma (0)\subseteq H^{4,\perp }_\gamma ({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^2)\).

Proof

By Proposition 5.6 we can apply Proposition 5.7 to the functional \(E:B_{\rho _0}(0)\subseteq H^{4,\perp }_\gamma \rightarrow {{{\mathbb {R}}}}\) with the spaces \(V= H^{4,\perp }_\gamma \) and \(W=(L^{2,\perp }_\gamma )^\star \). This immediately implies the claim. \(\square \)

Let \(\gamma :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) be an embedded smooth curve. For \(\rho >0\) small enough, the open set \(U=\{p\in {{{\mathbb {R}}}}^2\,:\,d_\gamma (p):=d(p,\gamma )<\rho \}\) is a tubular neighborhood of \(\gamma \) with the property of unique orthogonal projection. The “projection” map \(\pi :U\rightarrow \gamma ({{{\mathbb {S}}}}^1)\) turns out to be \(C^2\) in U and given by \(p\mapsto p-\nabla d^2_\gamma (p)/2\); moreover, the vector \(\nabla d^2_\gamma (p)\) is orthogonal to \(\gamma \) at the point \(\pi (p)\), see [34, Section 4] for instance. Then, given \(\varphi \in B_\rho (0)\subseteq H^4({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^2)\), we can define a map \(\chi :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {S}}}}^1\) by

$$\begin{aligned} \chi (x)=\gamma ^{-1}\bigl [\pi \bigl (\gamma (x)+\varphi (x)\bigr )\bigr ], \end{aligned}$$

which is \(C^2\) and invertible provided \(\gamma '(x)+\varphi '(x)\) is never parallel to the unit vector \(\nabla d_\gamma (\gamma (x)+\varphi (x))\); this holds after possibly choosing a smaller \(\rho \) (so that \(|\varphi |\) and \(|\partial _x \varphi |\) are small, and the claim follows since \(\langle \gamma '(x),\nabla d_\gamma (p)\rangle \rightarrow 0\) as \(p\rightarrow \gamma (x)\)).

We consider the vector field along \(\gamma \) defined by

$$\begin{aligned} \psi (\chi (x)):=\frac{1}{2} \nabla d^2_\gamma (\gamma (x)+\varphi (x)) \end{aligned}$$

which is orthogonal to \(\gamma \) at the point \(\pi (\gamma (x)+\varphi (x))=\gamma (\chi (x))\), for every \(x\in {{{\mathbb {S}}}}^1\), by construction. Hence \(\psi \) is a normal vector field along the reparametrized curve \(x\mapsto \gamma (\chi (x))\). Thus, we have

$$\begin{aligned} \gamma (\chi (x))+\psi (\chi (x))=&\,\pi \bigl (\gamma (x)+\varphi (x)\bigr )+\nabla d^2_\gamma (\gamma (x)+\varphi (x))/2\\ =&\,\gamma (x)+\varphi (x)-\nabla d^2_\gamma (\gamma (x)+\varphi (x))/2+\nabla d^2_\gamma (\gamma (x)+\varphi (x))/2\\ =&\,\gamma (x)+\varphi (x). \end{aligned}$$

and we conclude that the curve \(\gamma +\varphi \) can be described by the (reparametrized) regular curve \((\gamma +\psi )\circ \chi \), with \(\psi \circ \chi \) a normal vector field along \(\gamma \circ \chi \). Moreover, by construction it follows that \(\psi \circ \chi \in H^{4,\perp }_{\gamma \circ \chi }\), and it is clear that if \(\varphi \rightarrow 0\) in \(H^4\) then also \(\psi \rightarrow 0\) in \(H^4\).

All this can be done also for a regular curve \(\gamma :{{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) which is only immersed (that is, it can have self-intersections), recalling that locally every immersion is an embedding and repeating the above argument piece by piece along \(\gamma \), obtaining also in this case a normal field \(\psi \) describing a curve \(\gamma +\varphi \) for \(\varphi \in B_\rho (0)\subseteq H^4({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^2)\).
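The normal-graph construction above can be checked in the simplest example, the unit circle, where the projection is \(\pi (p)=p/|p|\). The following sketch (with a hypothetical perturbation \(\varphi \); not part of the argument) verifies numerically the identity \(\gamma (\chi (x))+\psi (\chi (x))=\gamma (x)+\varphi (x)\) and the orthogonality of \(\psi \):

```python
import numpy as np

# Normal-graph construction for gamma(x) = (cos x, sin x) with
# projection pi(p) = p/|p| and a small hypothetical perturbation phi.
x = np.linspace(0, 2 * np.pi, 400, endpoint=False)
gamma = np.stack([np.cos(x), np.sin(x)], axis=1)

# a small perturbation staying inside a tubular neighborhood of radius < 1
phi = 0.1 * np.stack([np.cos(2 * x), np.sin(3 * x)], axis=1)

p = gamma + phi
r = np.linalg.norm(p, axis=1, keepdims=True)
proj = p / r                                # pi(p): closest circle point
chi = np.arctan2(proj[:, 1], proj[:, 0])    # chi(x): parameter of pi(p)
psi = p - proj                              # radial, hence normal, field

gamma_chi = np.stack([np.cos(chi), np.sin(chi)], axis=1)
assert np.allclose(gamma_chi, proj, atol=1e-12)
# gamma(chi(x)) + psi(chi(x)) = gamma(x) + phi(x)
assert np.allclose(gamma_chi + psi, gamma + phi, atol=1e-12)
# psi is orthogonal to the tangent (-sin chi, cos chi) at gamma(chi(x))
tang = np.stack([-np.sin(chi), np.cos(chi)], axis=1)
assert np.allclose(np.sum(psi * tang, axis=1), 0.0, atol=1e-12)
```

For the circle \(\psi (\chi (x))=(|p|-1)\,p/|p|\), which makes the orthogonality to the tangent direction evident.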

Now, if \(\gamma =\gamma (t,x)\) is the smooth solution of the elastic flow with datum \(\gamma _0\), by Proposition 5.1 there exist a smooth critical point \(\gamma _\infty \), a sequence \(t_j\rightarrow +\infty \), a sequence of points \(p_j\in {{{\mathbb {R}}}}^2\) and reparametrizations \({\overline{\gamma }}_{t_j}={\overline{\gamma }}(t_j,\cdot )\) of \(\gamma (t_j,\cdot )\) such that

$$\begin{aligned} {\overline{\gamma }}_{t_j}-p_j \xrightarrow [j\rightarrow +\infty ]{} \gamma _\infty \end{aligned}$$
(5.7)

in \(C^m({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^n)\) for any \(m\in {\mathbb {N}}\). Moreover, we know there are positive constants \(C_L=C_L(\gamma _0)\) and \(C(m,\gamma _0)\), for any \(m\in {\mathbb {N}}\), such that

$$\begin{aligned} \frac{1}{C_L} \le \ell (\gamma _t) \le C_L \end{aligned}$$

and

$$\begin{aligned} \Vert (\partial _s^\perp )^m \varvec{\kappa }(t,\cdot ) \Vert _{L^2(\mathrm {d}s)}\le C(m,\gamma _0) \end{aligned}$$
(5.8)

for every \(t\ge 0\).

If for suitable times \(t\in J\) we can write \(\gamma _t=\gamma _\infty +\varphi _t\) with \(\Vert \varphi _t\Vert _{H^4}<\rho =\rho _{\gamma _\infty }\) small enough, then a direct computation shows that, if we describe \(\gamma _t\) as a “normal graph” reparametrization along \(\gamma _\infty \) by \(\gamma _\infty +\psi _t\) as in the above discussion, then

$$\begin{aligned} \Vert \psi _t\Vert _{H^m}\le C(m,\gamma _0,\gamma _\infty ), \end{aligned}$$
(5.9)

for every \(m\in {\mathbb {N}}\) and any \(t\in J\).

We are finally ready to prove the desired smooth convergence of the flow.

Proof of Theorem 5.4

Let us set \(\gamma _t:=\gamma (t,\cdot )\) and let \(\gamma _\infty \), \(t_j\), \(p_j\) and \({\overline{\gamma }}_{t_j}={\overline{\gamma }}(t_j,\cdot )\) be as in (5.7). Since the energy is non-increasing along the flow, we can assume that \({\mathcal {E}}(\gamma _t)\searrow {\mathcal {E}}(\gamma _\infty )\) as \(t\rightarrow +\infty \) and that \({\mathcal {E}}(\gamma _t) > {\mathcal {E}}(\gamma _\infty )\) for any t. Thus, the following function is well defined and positive:

$$\begin{aligned} H(t)= \left[ {\mathcal {E}}(\gamma _t) - {\mathcal {E}}(\gamma _\infty ) \right] ^\theta , \end{aligned}$$

where \(\theta \in (0,1/2]\) is given by Corollary 5.8 applied to the curve \(\gamma _\infty \). The function H is monotone decreasing and converging to zero as \(t\rightarrow +\infty \) (hence it is bounded above by \(H(0)= \left[ {\mathcal {E}}(\gamma _0) - {\mathcal {E}}(\gamma _\infty ) \right] ^\theta \)).

Now let \(m\ge 6\) be a fixed integer. By Proposition 5.1, for any \(\varepsilon >0\) there exists \(j_\varepsilon \in {\mathbb {N}}\) such that

$$\begin{aligned} \Vert {\overline{\gamma }}_{t_{j_\varepsilon }} - p_{j_\varepsilon } -\gamma _\infty \Vert _{C^m({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^n)}\le \varepsilon \qquad \text { and }\qquad H(t_{j_\varepsilon })\le \varepsilon . \end{aligned}$$

Choosing \(\varepsilon >0\) small enough so that

$$\begin{aligned} ({\overline{\gamma }}_{t_{j_\varepsilon }} - p_{j_\varepsilon } -\gamma _\infty )\in B_{\rho _{\gamma _\infty }}(0)\subseteq H^4({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^n), \end{aligned}$$

for every t in some interval \([t_{j_\varepsilon },t_{j_\varepsilon } + \delta )\) there exists \(\psi _t \in H^{4,\perp }_{\gamma _\infty }\) such that the curve \({\widetilde{\gamma }}_t = \gamma _\infty + \psi _t\) is the “normal graph” reparametrization of \(\gamma _t-p_{j_\varepsilon }\). Hence

$$\begin{aligned} (\partial _t {\widetilde{\gamma }})^\perp = -(2(\partial _s^\perp )^2 \varvec{\kappa }_{{{\widetilde{\gamma }}}_t} + |\varvec{\kappa }_{{{\widetilde{\gamma }}}_t}|^2\varvec{\kappa }_{{{\widetilde{\gamma }}}_t} - \varvec{\kappa }_{{{\widetilde{\gamma }}}_t}), \end{aligned}$$

as the flow is invariant by translation and changing the parametrization of the evolving curves only affects the tangential part of the velocity. Since \({\widetilde{\gamma }}_{t_{j_\varepsilon }}\) is such a reparametrization of \({\overline{\gamma }}_{t_{j_\varepsilon }} - p_{j_\varepsilon }\) and the latter is close in \(C^m({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^n)\) to \(\gamma _\infty \), possibly choosing smaller \(\varepsilon ,\delta >0\) above, it easily follows that for every \(t\in [t_{j_\varepsilon },t_{j_\varepsilon } + \delta )\) there holds

$$\begin{aligned} \Vert \psi _t \Vert _{H^4} <\sigma , \end{aligned}$$

where \(\sigma >0\) is as in Corollary 5.8 applied to \(\gamma _\infty \) and we possibly choose it smaller than the constant \(\rho _{\gamma _\infty }\).

We now want to prove that if \(\varepsilon >0\) is sufficiently small, then we can actually choose \(\delta =+\infty \), so that \(\Vert \psi _t \Vert _{H^4} <\sigma \) for every time.

For E as in Corollary 5.8, we have

$$\begin{aligned}{}[{\mathcal {E}}(\gamma _t)-{\mathcal {E}}(\gamma _\infty )]^{1-\theta }&=\,[{\mathcal {E}}({{\widetilde{\gamma }}}_t)-{\mathcal {E}}(\gamma _\infty )]^{1-\theta }\nonumber \\&=\,\left[ E(\psi _t) - E(0) \right] ^{1-\theta }\nonumber \\&\le \, C_1(\gamma _\infty ,\sigma ) \Vert \delta E (\psi _t) \Vert _{(L^{2,\perp }_{\gamma _\infty })^\star }\nonumber \\&=\,C_1(\gamma _\infty ,\sigma )\sup _{\Vert S\Vert _{L^{2,\perp }_{\gamma _\infty }}=1}\int _{{{{\mathbb {S}}}}^1} \left\langle |{{\widetilde{\gamma }}}_t'|\bigl (2(\partial _s^\perp )^2 \varvec{\kappa }_{{{\widetilde{\gamma }}}_t} + |\varvec{\kappa }_{{{\widetilde{\gamma }}}_t}|^2\varvec{\kappa }_{{{\widetilde{\gamma }}}_t} - \varvec{\kappa }_{{{\widetilde{\gamma }}}_t}\bigr ), S \right\rangle \,\mathrm {d}x\nonumber \\&\le \,C_1(\gamma _\infty ,\sigma )\sup _{\Vert S\Vert _{L^2({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^n)}=1}\int _{{{{\mathbb {S}}}}^1} \left\langle |{{\widetilde{\gamma }}}_t'|\bigl (2(\partial _s^\perp )^2 \varvec{\kappa }_{{{\widetilde{\gamma }}}_t} + |\varvec{\kappa }_{{{\widetilde{\gamma }}}_t}|^2\varvec{\kappa }_{{{\widetilde{\gamma }}}_t} - \varvec{\kappa }_{{{\widetilde{\gamma }}}_t}\bigr ), S \right\rangle \,\mathrm {d}x\nonumber \\&=\,C_1(\gamma _\infty ,\sigma )\left( \int _{{{{\mathbb {S}}}}^1} |{{\widetilde{\gamma }}}_t'|^2\bigl \vert 2(\partial _s^\perp )^2 \varvec{\kappa }_{{{\widetilde{\gamma }}}_t} + |\varvec{\kappa }_{{{\widetilde{\gamma }}}_t}|^2\varvec{\kappa }_{{{\widetilde{\gamma }}}_t} - \varvec{\kappa }_{{{\widetilde{\gamma }}}_t}\bigr \vert ^2\,\mathrm {d}x\right) ^{1/2} \end{aligned}$$
(5.10)

where we can assume that \(C_1(\gamma _\infty ,\sigma )\ge 1\).

Now, since \(\langle {\widetilde{\gamma }}_t,\tau _{\gamma _\infty }\rangle =\langle \gamma _\infty ,\tau _{\gamma _\infty }\rangle \) is independent of time, we have \(\langle \partial _t {\widetilde{\gamma }},\tau _{\gamma _\infty }\rangle =0\) and, possibly taking a smaller \(\sigma >0\), we can suppose that \(|\tau _{\gamma _\infty }-\tau _{{\widetilde{\gamma }}}|\le \tfrac{1}{2}\) for any \(t\ge t_{j_\varepsilon }\) such that \(\Vert \psi _t \Vert _{H^4} <\sigma \). Hence,

$$\begin{aligned}&|(\partial _t{\widetilde{\gamma }})^\perp |= | \partial _t {\widetilde{\gamma }} - \langle \partial _t {\widetilde{\gamma }} , \tau _{{\widetilde{\gamma }}} \rangle \tau _{{\widetilde{\gamma }}} | = | \partial _t {\widetilde{\gamma }} + \langle \partial _t {\widetilde{\gamma }} , \tau _{\gamma _\infty } - \tau _{{\widetilde{\gamma }}} \rangle \tau _{{\widetilde{\gamma }}} |\\&\ge |\partial _t{\widetilde{\gamma }}| - |\partial _t{\widetilde{\gamma }}||\tau _{\gamma _\infty }-\tau _{{\widetilde{\gamma }}}| \ge \frac{1}{2} |\partial _t{\widetilde{\gamma }}|. \end{aligned}$$

Differentiating H, we then get

$$\begin{aligned} \frac{d}{dt} H(t)&=\,\frac{d}{dt}[{\mathcal {E}}({{\widetilde{\gamma }}}_t)-{\mathcal {E}}(\gamma _\infty )]^{\theta }\\&=\,-\theta H^{\frac{\theta -1}{\theta }}\int _{{{{\mathbb {S}}}}^1} |{{\widetilde{\gamma }}}_t'|\bigl \vert 2(\partial _s^\perp )^2\varvec{\kappa }_{{{\widetilde{\gamma }}}_t} + |\varvec{\kappa }_{{{\widetilde{\gamma }}}_t}|^2\varvec{\kappa }_{{{\widetilde{\gamma }}}_t} - \varvec{\kappa }_{{{\widetilde{\gamma }}}_t}\bigr \vert ^2\,\mathrm {d}x\\&\le \,-\theta H^{\frac{\theta -1}{\theta }}C_2(\gamma _\infty ,\sigma )\left( \int _{{{{\mathbb {S}}}}^1} \bigl \vert (\partial _t{{\widetilde{\gamma }}})^{\perp }\bigr \vert ^2\,\mathrm {d}x\right) ^{1/2}\left( \int _{{{{\mathbb {S}}}}^1} |{{\widetilde{\gamma }}}_t'|^2\bigl \vert 2(\partial _s^\perp )^2\varvec{\kappa }_{{{\widetilde{\gamma }}}_t} + |\varvec{\kappa }_{{{\widetilde{\gamma }}}_t}|^2\varvec{\kappa }_{{{\widetilde{\gamma }}}_t} - \varvec{\kappa }_{{{\widetilde{\gamma }}}_t}\bigr \vert ^2\,\mathrm {d}x\right) ^{1/2}\\&\le \,-H^{\frac{\theta -1}{\theta }}C(\gamma _\infty ,\sigma ) \Vert \partial _t{{\widetilde{\gamma }}} \Vert _{L^2(\mathrm {d}x)}[{\mathcal {E}}({{\widetilde{\gamma }}}_t)-{\mathcal {E}}(\gamma _\infty )]^{1-\theta }\\&=\,-C(\gamma _\infty ,\sigma )\Vert \partial _t{{\widetilde{\gamma }}} \Vert _{L^2(\mathrm {d}x)}, \end{aligned}$$

where \(C(\gamma _\infty ,\sigma )=\theta C_2(\gamma _\infty ,\sigma )/(2C_1(\gamma _\infty ,\sigma ))\). This inequality clearly implies the estimate

$$\begin{aligned} C(\gamma _\infty ,\sigma )\int _{\xi _1}^{\xi _2}\Vert \partial _t{{\widetilde{\gamma }}} \Vert _{L^2(\mathrm {d}x)}\,\mathrm {d}t\le H(\xi _1)-H(\xi _2)\le H(\xi _1) \end{aligned}$$
(5.11)

for every \(t_{j_\varepsilon }\le \xi _1<\xi _2<t_{j_\varepsilon }+\delta \) such that \(\Vert \psi _t \Vert _{H^4} <\sigma \). Hence, for such \(\xi _1,\xi _2\) we have

$$\begin{aligned} \Vert {{\widetilde{\gamma }}}_{\xi _2}-{{\widetilde{\gamma }}}_{\xi _1}\Vert _{L^2(\mathrm {d}x)}= & {} \left( \int _{{{{\mathbb {S}}}}^1} |{{\widetilde{\gamma }}}_{\xi _2}(x)-{{\widetilde{\gamma }}}_{\xi _1}(x)|^2\,\mathrm {d}x\right) ^{1/2}\nonumber \\= & {} \biggl (\int _{{{{\mathbb {S}}}}^1} \Bigl \vert \int _{\xi _1}^{\xi _2}\partial _t{{\widetilde{\gamma }}}(t,x)\,\mathrm {d}t\,\Bigr \vert ^2\,\mathrm {d}x\biggr )^{1/2}\nonumber \\= & {} \left\| \int _{\xi _1}^{\xi _2}\partial _t{{\widetilde{\gamma }}}\,\mathrm {d}t\,\right\| _{L^2(\mathrm {d}x)}\nonumber \\\le & {} \int _{\xi _1}^{\xi _2}\Vert \partial _t{{\widetilde{\gamma }}}\Vert _{L^2(\mathrm {d}x)}\,\mathrm {d}t\nonumber \\\le & {} \frac{{H(\xi _1)}}{C(\gamma _\infty ,\sigma )}\nonumber \\\le & {} \frac{\varepsilon }{C(\gamma _\infty ,\sigma )}, \end{aligned}$$
(5.12)

where we used that \(H(\xi _1)\le H(t_{j_\varepsilon })\le \varepsilon \) and Minkowski's integral inequality \(\bigl \Vert \int _{\xi _1}^{\xi _2} v\,\mathrm {d}t\,\bigr \Vert _{L^2(\mathrm {d}x)}\le \int _{\xi _1}^{\xi _2}\Vert v\Vert _{L^2(\mathrm {d}x)}\,\mathrm {d}t\).

Therefore, for \(t\ge t_{j_\varepsilon }\) such that \(\Vert \psi _t \Vert _{H^4} <\sigma \), we have

$$\begin{aligned} \Vert \psi _t\Vert _{L^2(\mathrm {d}x)}= & {} \Vert {\widetilde{\gamma }}_t - \gamma _\infty \Vert _{L^2(\mathrm {d}x)}\le \Vert {\widetilde{\gamma }}_t - {\widetilde{\gamma }}_{t_{j_\varepsilon }} \Vert _{L^2(\mathrm {d}x)} + \Vert {\widetilde{\gamma }}_{t_{j_\varepsilon }} - \gamma _\infty \Vert _{L^2(\mathrm {d}x)}\\&\le \frac{\varepsilon }{C(\gamma _\infty ,\sigma )} +\varepsilon \sqrt{2\pi }. \end{aligned}$$

Then, by means of Gagliardo–Nirenberg interpolation inequalities (see [3] or [6], for instance) and estimates (5.9), for every \(l\ge 4\), we have

$$\begin{aligned} \Vert \psi _t \Vert _{H^l} \le C \Vert \psi _t \Vert _{H^{l+1}}^a \Vert \psi _t \Vert _{L^2(\mathrm {d}x)}^{1-a} \le C(l,\gamma _0,\gamma _\infty ,\sigma )\varepsilon ^{1-a}, \end{aligned}$$

for some \(a\in (0,1)\) and any \(t\ge t_{j_\varepsilon }\) such that \(\Vert \psi _t \Vert _{H^4} <\sigma \).

In particular, setting \(l+1=m\ge 6\), if \(\varepsilon >0\) was chosen sufficiently small depending only on \(\gamma _0\), \(\gamma _\infty \) and \(\sigma \), then \(\Vert \psi _t\Vert _{H^4}<\sigma /2\) for any time \(t\ge t_{j_\varepsilon }\), which means that we could have chosen \(\delta =+\infty \) in the previous discussion.

Then, from estimate (5.12) it follows that \({\widetilde{\gamma }}_t\) satisfies the Cauchy criterion in \(L^2(\mathrm {d}x)\) as \(t\rightarrow +\infty \), therefore \({\widetilde{\gamma }}_t \) converges in \(L^2(\mathrm {d}x)\) as \(t\rightarrow +\infty \) to some limit curve \({\widetilde{\gamma }}_\infty \) (not necessarily coincident with \(\gamma _\infty \)). Moreover, by means of the above interpolation inequalities, repeating the argument for higher m, we see that such convergence is actually in \(H^m\) for every \(m\in {\mathbb {N}}\), hence in \(C^m({{{\mathbb {S}}}}^1,{{{\mathbb {R}}}}^n)\) for every \(m\in {\mathbb {N}}\) by the Sobolev embedding theorem. This implies that \({\widetilde{\gamma }}_\infty \) is a smooth critical point of \({\mathcal {E}}\). As the original flow \(\gamma _t\) is a fixed translation of \({\widetilde{\gamma }}_t\), up to reparametrization, this completes the proof. \(\square \)
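The interpolation step used in the proof admits a simple flat-model check. On the Fourier side, writing \(\Vert f\Vert _{H^m}^2=\sum _k (1+k^2)^m|c_k|^2\), Hölder's inequality gives \(\Vert f\Vert _{H^l}\le \Vert f\Vert _{H^{l+1}}^{a}\Vert f\Vert _{L^2}^{1-a}\) with \(a=l/(l+1)\) and constant 1. The following sketch verifies this for a random trigonometric polynomial (the choice of coefficients is arbitrary):

```python
import numpy as np

# Fourier-side interpolation inequality: with Sobolev norms
# ||f||_{H^m}^2 = sum_k (1+k^2)^m |c_k|^2, Holder's inequality yields
# ||f||_{H^l} <= ||f||_{H^{l+1}}^a ||f||_{L^2}^{1-a}, a = l/(l+1).
rng = np.random.default_rng(0)
k = np.arange(-20, 21)
c = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)

def sobolev(m):
    """Sobolev H^m norm computed from the Fourier coefficients c_k."""
    return np.sqrt(np.sum((1.0 + k**2) ** m * np.abs(c) ** 2))

l = 4
a = l / (l + 1)
assert sobolev(l) <= sobolev(l + 1) ** a * sobolev(0) ** (1 - a) * (1 + 1e-12)
```

The inequality is exact (constant 1) in this flat periodic model; on a fixed smooth curve the constant depends on the geometry, which is the role of \(C(l,\gamma _0,\gamma _\infty ,\sigma )\) in the estimate above.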

Collecting the results we proved about the elastic flow of closed curves, we can state the following comprehensive theorem.

Theorem 5.9

Let \(\gamma _0:{\mathbb {S}}^1\rightarrow {{{\mathbb {R}}}}^2\) be a smooth closed curve. Then there exists a unique solution \(\gamma :[0,+\infty )\times {{{\mathbb {S}}}}^1\rightarrow {{{\mathbb {R}}}}^2\) to the elastic flow

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t \gamma = -\left( 2(\partial _s^\perp )^2\varvec{\kappa } + |\varvec{\kappa }|^2\varvec{\kappa } -\varvec{\kappa } \right) &{} \text{ on } [0,+\infty )\times {{{\mathbb {S}}}}^1,\\ \gamma (0,x)=\gamma _0(x) &{} \text{ on } {{{\mathbb {S}}}}^1. \end{array}\right. } \end{aligned}$$

Moreover there exists a smooth critical point \(\gamma _\infty \) of \({\mathcal {E}}\) such that \(\gamma (t,\cdot )\rightarrow \gamma _\infty (\cdot )\) in \(C^m({{{\mathbb {S}}}}^1)\) for any \(m\in {\mathbb {N}}\), up to reparametrization.
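The statement of Theorem 5.9 can be illustrated numerically. The following sketch (a hypothetical discretization chosen for simplicity, not the scheme of [7]) runs gradient descent on a polygonal version of \({\mathcal {E}}\) with \(\mu =1\), approximating curvature by turning angles, and checks that the energy decreases along the discrete flow:

```python
import numpy as np

# Discrete sketch of the elastic flow: gradient descent with backtracking
# on a polygonal approximation of E = int |k|^2 + 1 ds for a closed curve.
# Curvature is approximated by turning angles at the vertices.

def energy(pts):
    e = np.roll(pts, -1, axis=0) - pts              # edge vectors
    ell = np.linalg.norm(e, axis=1)                 # edge lengths
    t = e / ell[:, None]                            # unit tangents
    # turning angle at vertex i, between edges i-1 and i
    cross = np.roll(t[:, 0], 1) * t[:, 1] - np.roll(t[:, 1], 1) * t[:, 0]
    dot = np.sum(np.roll(t, 1, axis=0) * t, axis=1)
    theta = np.arctan2(cross, dot)
    ds = 0.5 * (np.roll(ell, 1) + ell)              # dual edge lengths
    return np.sum(theta**2 / ds) + np.sum(ds)       # int |k|^2 ds + length

def grad(pts, h=1e-6):
    """Numerical gradient of the discrete energy (central differences)."""
    g = np.zeros_like(pts)
    for i in range(pts.shape[0]):
        for j in range(2):
            q = pts.copy(); q[i, j] += h
            r = pts.copy(); r[i, j] -= h
            g[i, j] = (energy(q) - energy(r)) / (2 * h)
    return g

x = 2 * np.pi * np.arange(24) / 24
pts = np.stack([2.0 * np.cos(x), np.sin(x)], axis=1)    # an ellipse

energies = [energy(pts)]
step = 1e-3
for _ in range(50):
    g = grad(pts)
    while energy(pts - step * g) >= energies[-1]:
        step *= 0.5                                 # backtracking line search
    pts = pts - step * g
    energies.append(energy(pts))

# the discrete energy is strictly decreasing along the flow
assert all(b < a for a, b in zip(energies, energies[1:]))
```

Consistently with the theorem, on the smooth level the flow of the ellipse is expected to converge (up to reparametrization) to the round circle of radius 1, the minimizer of \(2\pi (1/R+R)\) among circles.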

Remark 5.10

We remark that Theorem 5.9 is true exactly as stated for the analogously defined flow in the Euclidean spaces \({{{\mathbb {R}}}}^n\) for any \(n\ge 2\) (see [35]). Indeed, it is immediate to see that the proof above generalizes to higher codimension. We observe that the very same statement holds for the suitably defined elastic flow in the hyperbolic plane and in the two-sphere by [47, Corollary 1.2]. It is likely that smooth convergence of the elastic flow still holds true in hyperbolic spaces and spheres of any dimension \(\ge 2\) and, more generally, in homogeneous Riemannian manifolds, that is, complete Riemannian manifolds such that the group of isometries acts transitively on them. For further results and comments about the convergence of the elastic flow in Riemannian manifolds we refer to [47].

Let us conclude by stating the analogous full convergence result proved for the elastic flow of open curves with clamped boundary conditions.

Theorem 5.11

([18, 29]) Let \(\gamma _0:[0,1]\rightarrow {{{\mathbb {R}}}}^n\) be a smooth curve. Then there exists a unique solution \(\gamma :[0,+\infty )\times [0,1]\rightarrow {{{\mathbb {R}}}}^n\) to the elastic flow satisfying the clamped boundary conditions

$$\begin{aligned} \gamma (t,0)=\gamma _0(0), \quad \gamma (t,1)=\gamma _0(1), \quad \partial _s\gamma (t,0)=\tau _{\gamma _0}(0), \quad \partial _s\gamma (t,1)=\tau _{\gamma _0}(1), \quad \end{aligned}$$

with initial datum \(\gamma _0\). Moreover there exists a smooth critical point \(\gamma _\infty \) of \({\mathcal {E}}\) subjected to the above clamped boundary conditions such that \(\gamma (t,\cdot )\rightarrow \gamma _\infty (\cdot )\) in \(C^m([0,1])\) for any \(m\in {\mathbb {N}}\), up to reparametrization.

6 Open Questions

We conclude the paper by mentioning some related open problems.

  • In Theorem 4.18 a description of the possible behaviors as \(t\rightarrow T_{\max }\) is given for evolving networks subjected to Navier boundary conditions. When instead clamped boundary conditions are imposed, only short time existence is known [23]. One would like to investigate this flow of networks further as time approaches the maximal time of existence. Recent results [19] on the minimization of \({\mathcal {E}}_\mu \) among networks whose curves meet with fixed angles suggest that an analog of Theorem 4.18 is to be expected: either \(T_{\max }=\infty \) or, as \(t\rightarrow T_{\max }\), the length of at least one curve of the network could go to zero.

  • In Section 4 we described a couple of numerical examples by Robert Nürnberg in which some curves vanish or the amplitude of the angles at the junctions goes to zero. It is an open problem to find an explicit example of an evolving network developing such phenomena. More generally, one would like to give a more accurate description of the onset of singularities during the flow.

  • In the case of the flow of networks with Navier boundary conditions estimates of the type

    $$\begin{aligned} \frac{d}{dt}\int _{{\mathcal {N}}_t} \left|\partial _s^n k\right|^2 \,\mathrm {d}s \le C({\mathcal {E}}_\mu ({\mathcal {N}}_0)), \end{aligned}$$

    are shown for \(n=2+4j\) with \(j\in {\mathbb {N}}\) only for a special choice of the tangential velocity (see [14]). One could ask whether the same holds true for a general tangential velocity.

  • In the last section we show that if \(\gamma _t\) is a solution of the elastic flow of closed curves in \([0,\infty )\), then its support stays in a compact set of \({\mathbb {R}}^2\) for all times. The same is true for open curves and networks with some endpoints fixed in the plane. What about compact networks? At the moment we are not able to exclude that if the initial network \({\mathcal {N}}_0\) has no fixed endpoints (as in the case of a Theta), then as \(t\rightarrow T_{\max }\) the entire configuration \({\mathcal {N}}_t\) “escapes” to infinity.

  • Another related question, asked by G. Huisken, is the following: suppose that the support of an initial closed curve \(\gamma _0\) lies in the upper half-plane. Is it possible to prove that there is no time \(\tau \) such that the support of the solution at time \(\tau \) lies entirely in the lower half-plane?

  • Are there self-similar (for instance translating or rotating) solutions of the elastic flow?

  • Several variants of the elastic flow have been investigated, but an analysis of the elastic flow of closed curves enclosing a fixed (signed) area is missing.

  • At the moment no stability results are known for the elastic flow of networks. More generally, one would like to understand whether an elastic flow of a general network defined for all times converges smoothly to a critical point, just as in the case of closed curves. Similarly, proving the stability of the flow would mean understanding whether an elastic flow of networks starting “close to” a critical point exists for all times and converges smoothly.

  • Is it possible to introduce a definition of weak solution (for instance via variational schemes such as minimizing movements) that is also capable of providing global existence in the case of networks? We remark that all notions based on the maximum principle, such as viscosity solutions, cannot work in this context, due to the high order of the evolution equation in the spatial variable.