1 Introduction

The present paper is a case study of the impact of variable delay on periodic motion. We consider a periodic solution of an autonomous differential equation with a constant time lag and ask how stability properties of the periodic solution change when the constant time lag is replaced by a variable, state-dependent delay—in such a way that the periodic solution is preserved. Let an odd continuously differentiable function \(g:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be given with

$$\begin{aligned} g(\xi )=1\quad \text {on}\quad (-\infty ,-b]\quad \text {and}\quad g'(\xi )<0\quad \text {on}\quad (-b,b), \end{aligned}$$

for some \(b\in \left( 0,\frac{1}{3}\right) \). We begin with the equation

$$\begin{aligned} x'(t)=g(x(t-1)) \end{aligned}$$
(1.1)

which models negative feedback with respect to a stationary state (here given by \(\xi =0\)), for a scalar variable and with a constant time lag. Proceeding as in [2, Section XV.1] we find a periodic solution: Take any continuous function \(\phi :[-1,0]\rightarrow {\mathbb {R}}\) with \(\phi (t)\le -b\) on \([-1,-b]\) and \(\phi (t)=t\) on \([-b,0]\). Integrate Eq. (1.1) successively over the intervals \([0,1-b],[1-b,1+b],[1+b,2]\), with the initial condition \(x(t)=\phi (t)\) on \([-1,0]\). This yields a function \(p:[-1,2]\rightarrow {\mathbb {R}}\) with

$$\begin{aligned} p(t)= & {} t \quad \text {on}\quad [-b,1-b],\\ p(t)= & {} 1-b+\int _{-b}^{t-1}g(s)ds\quad \text {on}\quad (1-b,1+b),\\ p(t)= & {} 2-t\quad \text {on}\quad [1+b,2], \end{aligned}$$

which satisfies Eq. (1.1) on [0, 2]. Extending by the symmetry \(p(t)=-p(t-2)\) for \(2\le t\le 4\), and then continuing the restriction p|[0, 4] periodically to a function \(p:{\mathbb {R}}\rightarrow {\mathbb {R}}\), defines a periodic solution of Eq. (1.1) with the said symmetry and minimal period 4. Equation (1.1) shows that p is twice continuously differentiable.
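For illustration, the construction can be evaluated numerically. The following is a minimal sketch (in Python, with numpy assumed available), not tied to the arguments below; it uses the hypothetical concrete choice \(b=1/4\) and \(g(\xi )=-\sin (\pi \xi /(2b))\) on \([-b,b]\), extended by the constants \(\pm 1\) outside, which is odd, continuously differentiable, equal to 1 on \((-\infty ,-b]\) and has \(g'<0\) on \((-b,b)\). It evaluates p in closed form and checks Eq. (1.1) by a difference quotient; the derivative \(g'\) is defined here for reuse in later sketches.

```python
import numpy as np

b = 0.25  # any value in (0, 1/3)

def g(xi):
    """Odd C^1 example: g = 1 on (-inf,-b], g = -1 on [b,inf), g' < 0 on (-b,b)."""
    xi = np.asarray(xi, dtype=float)
    return np.where(xi <= -b, 1.0,
           np.where(xi >= b, -1.0, -np.sin(np.pi * xi / (2.0 * b))))

def gprime(xi):
    """Derivative of g; vanishes outside (-b,b). Used again in later sketches."""
    xi = np.asarray(xi, dtype=float)
    return np.where(np.abs(xi) < b, -(np.pi / (2.0 * b)) * np.cos(np.pi * xi / (2.0 * b)), 0.0)

def p(t):
    """Periodic solution of (1.1): p(t)=t on [-b,1-b], the integral formula on [1-b,1+b],
    p(t)=2-t on [1+b,2], extended by p(t)=-p(t-2) and 4-periodicity."""
    s = np.mod(np.asarray(t, dtype=float) + b, 4.0) - b          # reduce to [-b, 4-b)
    def half(s):                                                 # p on [-b, 2-b)
        # for this g: 1-b + int_{-b}^{s-1} g = 1-b + (2b/pi) cos(pi (s-1)/(2b))
        return np.where(s <= 1.0 - b, s,
               np.where(s <= 1.0 + b,
                        1.0 - b + (2.0 * b / np.pi) * np.cos(np.pi * (s - 1.0) / (2.0 * b)),
                        2.0 - s))
    return np.where(s < 2.0 - b, half(s), -half(s - 2.0))

# check Eq. (1.1): a central difference quotient of p should match g(p(t-1))
t, h = np.linspace(0.0, 4.0, 4001), 1.0e-6
print(np.max(np.abs((p(t + h) - p(t - h)) / (2.0 * h) - g(p(t - 1.0)))))   # small
```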

Let \(C=C([-2,0],{\mathbb {R}})\) denote the Banach space of continuous real functions on \([-2,0]\), with the norm given by \(|\phi |=\max _{-2\le t\le 0}|\phi (t)|\). For a function \(x:dom\rightarrow {\mathbb {R}}\) and \(t\in {\mathbb {R}}\) with \([t-2,t]\subset dom\) recall the notation \(x_t\) for the shifted segment \([-2,0]\ni s\mapsto x(t+s)\in {\mathbb {R}}\).

By the symmetry, \(p_t(0)+p_t(-2)=0\) for all \(t\in {\mathbb {R}}\). Therefore the function p solves every equation of the form

$$\begin{aligned} x'(t)=g(x(t-d(x_t))) \end{aligned}$$

where the delay functional \(d:C\rightarrow {\mathbb {R}}\) is given by

$$\begin{aligned} d(\phi )=1+\rho (\phi ^{*}\phi ), \end{aligned}$$

with the linear functional \(\phi ^{*}:C\ni \phi \mapsto \phi (0)+\phi (-2)\in {\mathbb {R}}\) and a real function \(\rho :{\mathbb {R}}\rightarrow (-1,1)\) satisfying \(\rho (0)=0\). We fix a continuously differentiable function \(\delta :{\mathbb {R}}^2\rightarrow (-1,1)\) with

$$\begin{aligned} \delta (\xi ,0)= & {} 0 \quad \text {for all}\quad \xi \in {\mathbb {R}},\\ \delta (0,\Delta )= & {} 0 \quad \text {for all}\quad \Delta \in {\mathbb {R}},\\ \partial _1\delta (0,\Delta )= & {} \Delta \quad \text {for all}\quad \Delta \in {\mathbb {R}}, \end{aligned}$$

e. g., \(\delta (\xi ,\Delta )=\sin (\xi \,\Delta )\), or \(\delta (\xi ,\Delta )=\text {tanh}(\xi \,\Delta )\), and define \(d_{\Delta }:C\rightarrow (0,2)\) for \(\Delta \in {\mathbb {R}}\) by

$$\begin{aligned} d_{\Delta }(\phi )=1+\delta (\phi ^{*}\phi ,\Delta ). \end{aligned}$$

Then \(d_{\Delta }(p_t)=1\) for all \(t\in {\mathbb {R}}\), and p becomes a solution of the equation

$$\begin{aligned} x'(t)=g(x(t-d_{\Delta }(x_t))), \end{aligned}$$
(1.2)

which for \(\Delta =0\) is Eq. (1.1) with the constant time lag 1 while for \(\Delta \ne 0\) there is a state-dependent contribution to the time lag in the differential equation. Notice that

$$\begin{aligned} Dd_{\Delta }(p_t)\chi =\partial _1\delta (0,\Delta )\phi ^{*}\chi =\Delta \phi ^{*}\chi \quad \text {for all}\quad t\in {\mathbb {R}},\chi \in C. \end{aligned}$$
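Continuing the sketch above (with the hypothetical concrete choice \(\delta (\xi ,\Delta )=\tanh (\xi \,\Delta )\) from the examples just listed), one can also check numerically that \(d_{\Delta }(p_t)=1\) along the periodic orbit, which is the reason why p solves Eq. (1.2) for every \(\Delta \):

```python
def d_Delta(phi0, phi_minus2, Delta):
    """d_Delta(phi) = 1 + delta(phi(0) + phi(-2), Delta) with delta(xi, D) = tanh(xi * D);
    only the two values phi(0), phi(-2) of the segment enter."""
    return 1.0 + np.tanh((phi0 + phi_minus2) * Delta)

t = np.linspace(0.0, 4.0, 401)
# p(t) + p(t-2) = 0, so the state-dependent part of the delay vanishes along p:
print(np.max(np.abs(d_Delta(p(t), p(t - 2.0), 3.7) - 1.0)))   # zero up to rounding, any Delta
```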

The stability properties which we study in the sequel require linearization, which for differential equations with state-dependent delay is possible in the framework introduced in [4, 11]. Let \(C^1=C^1([-2,0],{\mathbb {R}})\) denote the Banach space of continuously differentiable real functions on \([-2,0]\), with the norm given by \(|\phi |_1=|\phi |+|\phi '|\), and let \(j:C^1\rightarrow C\) denote the inclusion map. For \(\Delta \in {\mathbb {R}}\) given consider the functional \(f_{\Delta }:C^1\rightarrow {\mathbb {R}}\), \(f_{\Delta }(\phi )=g(\phi (-d_{\Delta }(j\phi )))\), which represents the right hand side of Eq. (1.2). The following proposition verifies the hypothesis for the results from [4, 11] which we need.

Proposition 1.1

Let \(\Delta \in {\mathbb {R}}\) be given. The maps \(d=d_{\Delta }\) and \(f=f_{\Delta }\) are continuously differentiable with

$$\begin{aligned} Df(\phi )\chi =g'(\phi (-d(j\phi )))\{\chi (-d(j\phi ))-\phi '(-d(j\phi ))Dd(j\phi )j\chi \} \end{aligned}$$

for all \(\phi \in C^1\) and \(\chi \in C^1\). Moreover,

(e) each derivative \(Df(\phi ):C^1\rightarrow {\mathbb {R}}\), \(\phi \in C^1\), extends to a linear map \(D_ef(\phi ):C\rightarrow {\mathbb {R}}\) and the map

$$\begin{aligned} C^1\times C\ni (\phi ,\chi )\mapsto D_ef(\phi )\chi \in {\mathbb {R}}\end{aligned}$$

is continuous.

We prove Proposition 1.1 at the end of this introduction. The extension property (e) in Proposition 1.1 is a version of the notion of being almost Fréchet differentiable from [7], and is crucial for the following to hold. For every \(\Delta \in {\mathbb {R}}\) the non-empty set

$$\begin{aligned} X_{\Delta }=\{\phi \in C^1:\phi '(0)=f_{\Delta }(\phi )\} \end{aligned}$$

is a continuously differentiable submanifold of codimension 1 in \(C^1\), with tangent spaces

$$\begin{aligned} T_{\phi }X_{\Delta }=\{\chi \in C^1:\chi '(0)=Df_{\Delta }(\phi )\chi \}. \end{aligned}$$

Each initial value \(\phi \in X_{\Delta }\) continues to a unique maximal solution \(x^{\Delta ,\phi }:[-2,t(\Delta ,\phi ))\rightarrow {\mathbb {R}}\) of Eq. (1.2), which means that \(0<t(\Delta ,\phi )\le \infty \), \(x^{\Delta ,\phi }\) is continuously differentiable, Eq. (1.2) holds for \(0\le t<t(\Delta ,\phi )\), and any other continuously differentiable solution \(x:[-2,t_x)\rightarrow {\mathbb {R}}\), \(0<t_x\le \infty \), of the initial value problem

$$\begin{aligned} x'(t)=g(x(t-d_{\Delta }(x_t)))\quad \text {for}\quad t>0,\quad x_0=\phi \in X_{\Delta }, \end{aligned}$$

is a restriction of \(x^{\Delta ,\phi }\). All solution operators

$$\begin{aligned} S_{\Delta ,t}:\{\phi \in X_{\Delta }:t<t(\Delta ,\phi )\}\ni \phi \mapsto x^{\Delta ,\phi }_t\in X_{\Delta },\quad t\ge 0, \end{aligned}$$

are continuously differentiable, and the semiflow on \(X_{\Delta }\) given by \((t,\phi )\mapsto x^{\Delta ,\phi }_t\) is continuous. For \(\phi \in X_{\Delta }\) and \(0\le u<t(\Delta ,\phi )\) the derivative

$$\begin{aligned} DS_{\Delta ,u}(\phi ):T_{\phi }X_{\Delta }\rightarrow T_{S_{\Delta ,u}(\phi )}X_{\Delta } \end{aligned}$$

satisfies

$$\begin{aligned} DS_{\Delta ,u}(\phi )\chi =v^{\Delta ,\phi ,\chi }_u \end{aligned}$$

with the unique maximal solutions \(v=v^{\Delta ,\phi ,\chi }\) of the initial value problems

$$\begin{aligned} v'(t)= & {} Df_{\Delta }(S_{\Delta ,t}(\phi ))v_t\quad \text {for}\quad 0\le t<t(\Delta ,\phi ), \nonumber \\ v_0= & {} \chi \in T_{\phi }X_{\Delta }. \end{aligned}$$
(1.3)

Equation (1.3) is called the variational equation along the solution \(x^{\Delta ,\phi }\), and the functions \(v^{\Delta ,\phi ,\chi }:[-2,t(\Delta ,\phi ))\rightarrow {\mathbb {R}}\) are continuously differentiable.

The stability properties of p as a solution to Eq. (1.2) which we have in mind are the spectral properties of the monodromy operator \(M_{\Delta }=DS_{\Delta ,4}(p_0)\), that is, of the linearization of the period map \(S_{\Delta ,4}\) at its fixed point \(p_0\). Using Proposition 1.1 and the computation of \(Dd_{\Delta }\) we see that the variational equation Eq. (1.3) along p becomes

$$\begin{aligned} v'(t)= & {} Df_{\Delta }(p_t)v_t\nonumber \\= & {} g'(p(t-1))\{v(t-1)-p'(t-1)\Delta [v(t)+v(t-2)]\}. \end{aligned}$$
(1.4)

From \(g'(p(-1))=0\) (note that \(|p(-1)|=|p(1)|\ge 1-b>b\)) we have \(Df_{\Delta }(p_0)=0\), and it follows that the domain \(T_{p_0}X_{\Delta }\) of the monodromy operator is

$$\begin{aligned} Y=\{\chi \in C^1:\chi '(0)=0\}, \end{aligned}$$

which is independent of the parameter \(\Delta \).

The spectral properties of \(M_{\Delta }\) refer to its complexification. Instead of the latter we study the analogue of \(M_{\Delta }\) which is given by complex-valued solutions of the variational equation. Let \({{\mathcal {C}}}^1\) denote the Banach space analogous to \(C^1\) which consists of complex-valued functions, consider the closed subspace \({{\mathcal {Y}}}=\{\eta \in {{\mathcal {C}}}^1:\eta '(0)=0\}\) analogous to Y, and observe that for every \(\Delta \in {\mathbb {R}}\) and \(\eta \in {{\mathcal {Y}}}\) there is a unique continuously differentiable function \(v^{\Delta ,\eta }:[-2,\infty )\rightarrow {\mathbb {C}}\) with \(v^{\Delta ,\eta }_0=\eta \) so that \(v=v^{\Delta ,\eta }\) satisfies Eq. (1.4) for all \(t\ge 0\), and that we have \(v^{\Delta ,\eta }_4\in {{\mathcal {Y}}}\). This is easily seen by decomposition into real and imaginary parts. The linear map

$$\begin{aligned} {{\mathcal {M}}}_{\Delta }:{{\mathcal {Y}}}\ni \eta \mapsto v^{\Delta ,\eta }_4\in {{\mathcal {Y}}} \end{aligned}$$

analogous to \(M_{\Delta }\) is conjugate to the complexification of \(M_{\Delta }\) by a topological isomorphism, and more convenient for our purpose.

In the next section we verify that each map \({{\mathcal {M}}}_{\Delta }\), \(\Delta \in {\mathbb {R}}\), is continuous and compact. Consequently the spectrum

$$\begin{aligned} \sigma _{\Delta }=\{\lambda \in {\mathbb {C}}:{{\mathcal {M}}}_{\Delta }-\lambda \,id\quad \text {is not bijective}\} \end{aligned}$$

is at most countable, and every \(\lambda \in \sigma _{\Delta }\setminus \{0\}\) is an eigenvalue with finite-dimensional eigenspace and is isolated in \(\sigma _{\Delta }\). The eigenvalues \(\lambda \ne 0\) - which in the context of monodromy operators are also called Floquet multipliers - can accumulate only at \(0\in {\mathbb {C}}\). From \(\dim \,{{\mathcal {Y}}}=\infty \) we have \(0\in \sigma _{\Delta }\). For \(\lambda \in {\mathbb {C}}\) we abbreviate \({{\mathcal {M}}}_{\Delta }-\lambda ={{\mathcal {M}}}_{\Delta }-\lambda \,id\), and \(({{\mathcal {M}}}_{\Delta }-\lambda )^{-n}(0)=(({{\mathcal {M}}}_{\Delta }-\lambda )^n)^{-1}(0)\) for all \(n\in {\mathbb {N}}\). For every Floquet multiplier the chain length

$$\begin{aligned} n(\lambda )=\min \{n\in {\mathbb {N}}:({{\mathcal {M}}}_{\Delta }-\lambda )^{-n}(0)=({{\mathcal {M}}}_{\Delta }-\lambda )^{-(n+1)}(0)\}, \end{aligned}$$

and the algebraic multiplicity

$$\begin{aligned} m(\lambda )=\dim \,({{\mathcal {M}}}_{\Delta }-\lambda )^{-n(\lambda )}(0), \end{aligned}$$

are finite. A Floquet multiplier \(\lambda \) is called simple if \(m(\lambda )=1\). For every \(\Delta \in {\mathbb {R}}\) the resolvent set

$$\begin{aligned} \rho _{\Delta }={\mathbb {C}}\setminus \sigma _{\Delta } \end{aligned}$$

is open, the resolvent map \(\rho _{\Delta }\ni \lambda \mapsto ({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}\in L_c({{\mathcal {Y}}},{{\mathcal {Y}}})\) is analytic, and each Floquet multiplier \(\lambda \in \sigma _{\Delta }\setminus \{0\}\) is a pole of the resolvent map whose order is equal to the chain length \(n(\lambda )\), see e. g. [9].

The derivative \(p_c'\) of the periodic function \(p_c:{\mathbb {R}}\ni t\mapsto p(t)\in {\mathbb {C}}\) satisfies Eq. (1.4) for every \(\Delta \in {\mathbb {R}}\). This yields

$$\begin{aligned} {{\mathcal {M}}}_{\Delta }p_{c,0}'=p_{c,4}'=p_{c,0}', \end{aligned}$$

and \(p_{c,0}'\) is an eigenvector of the Floquet multiplier \(1\in \sigma _{\Delta }\) for every \(\Delta \in {\mathbb {R}}\).

The construction of p described above is a first indication that the periodic orbit

$$\begin{aligned} {{\mathcal {O}}}=\{p_t\in C^1:0\le t\le 4\}\subset X_0 \end{aligned}$$

is stable and locally attracting in \(X_0\) (but not globally attracting, due to the infinite-dimensional stable manifold of the zero solution [4, Section 3]). One can show that attraction towards \({{\mathcal {O}}}\) is extreme: There is a neighbourhood U of \({{\mathcal {O}}}\) in \(X_0\) so that for every \(\phi \in U\) we have \(t(0,\phi )=\infty \), and there exist \(t_{\phi }\in [0,7]\) and \(s=s_{\phi }\in [0,4)\) with \(x^{0,\phi }_t=p_{s+t}\in {{\mathcal {O}}}\) for all \(t\ge t_{\phi }\) [8].

In Proposition 2.4 we obtain \(\sigma _0=\{0,1\}\) and \({{\mathcal {Y}}}=({{\mathcal {M}}}_0)^{-1}(0)\oplus {\mathbb {C}}\,p_{c,0}'\). These facts reflect the strong stability properties of the periodic solution p of Eq. (1.1) on the level of linearization - and tell us that for \(\Delta \ne 0\), when state-dependent delay is present, the only possible change in stability properties on the level of linearization is some kind of destabilization.

Propositions 2.5 and 2.6 at the end of Sect. 2 express a kind of continuity of the spectra \(\sigma _{\Delta }\) at \(\Delta =0\).

In Sect. 3 we derive a characteristic equation for the Floquet multipliers and compute resolvents \(({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}\) for \(\Delta \in {\mathbb {R}}\) and \(\lambda \in \rho _{\Delta }\). This is inspired by an approach going back to [10]. Sections 4–6 prepare the search for solutions to the characteristic equation and for results on multiplicity of Floquet multipliers. Important are the computations in Sect. 6, which bring the characteristic equation into a tractable form. Corollary 6.5 excludes Floquet multipliers in \((1,\infty )\) for any \(\Delta \in {\mathbb {R}}\setminus \{0\}\).

Sections 7, 8 contain the main results. Due to a symmetry in the characteristic equation it is enough to consider parameters \(\Delta \ge 0\). Theorem 7.2 says that at \(\Delta =0\) a Floquet multiplier \(\Lambda (\Delta )\in \sigma _{\Delta }\cap (-\infty ,0)\) bifurcates from \(0\in {\mathbb {C}}\) and decreases to \(-\infty \) as \(\Delta \rightarrow \infty \), with nonzero speed. This means a loss of stability of the periodic orbit \({{\mathcal {O}}}\) for \(\Delta >0\); for \(\Delta >0\) with \(\Lambda (\Delta )<-1\) the orbit \({{\mathcal {O}}}\) is unstable. Theorem 8.2 guarantees that the Floquet multiplier 1 is simple not only for small \(\Delta \) (as in Proposition 2.6) but for all parameters \(\Delta \ge 0\), and that the Floquet multiplier \(\Lambda (\Delta )\) is simple for \(\Delta =\Delta _{*}\) with \(\Lambda (\Delta _{*})=-1\).

For parameters \(\Delta \ge 0\) with \(\sigma _{\Delta }\setminus \{1\}\) contained in the open unit disk, the simplicity of the Floquet multiplier 1 allows us to apply a result by Mallet-Paret and Nussbaum [6] which guarantees that the orbit \({{\mathcal {O}}}\) is stable and exponentially attracting in \(X_{\Delta }\) with asymptotic phase.

In Sect. 9 we describe how Floquet multipliers in \(\sigma _{\Delta }\cap (0,1)\) arise and behave for \(\Delta \rightarrow \infty \), and address subcritical bifurcations into pairs of nonreal, complex conjugate Floquet multipliers. Finally we comment on a period doubling bifurcation from the periodic orbit \({{\mathcal {O}}}\) at the critical parameter \(\Delta =\Delta _{*}\), for which Theorems 7.2 and 8.2 provide sufficient hypotheses.

Let us mention that we are not aware of any other example of a period doubling bifurcation in differential equations with state-dependent delay. Period-doubling bifurcations in families of delay differential equations with constant time lags were found by Campbell and LeBlanc [1].

For another result on Floquet multipliers of periodic solutions of a family of differential equations with state-dependent delay, in a singular perturbation setting, see [5] by Mallet-Paret and Nussbaum.

As in the case of periodic solutions of ordinary differential equations the Floquet multipliers and their multiplicities should be invariants of the orbit \({{\mathcal {O}}}\subset X_{\Delta }\), which means that they should not change if the solution p of Eq. (1.2) is replaced with a translate \(p(t+\cdot )\), \(0<t<4\). A proof of this in case of delay differential equations with constant time lags is found in [2, Chapters XIII-XIV].

One may ask what happens if, instead of a non-constant periodic solution of a family of delay differential equations as above, the simpler case of a constant solution of such a family is considered: Suppose \(g:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is continuously differentiable and \(g(\xi )=0\) for some \(\xi \in {\mathbb {R}}\), so that the constant function \(c:{\mathbb {R}}\ni t\mapsto \xi \in {\mathbb {R}}\) satisfies Eq. (1.1). Replacing the time lag 1 in Eq. (1.1) by any continuously differentiable delay functional \(d:C\rightarrow (0,2)\) with \(1=d(c_0)\) \((=d(c_t)\) for all \(t\in {\mathbb {R}}\)) would neither change the tangent space analogous to \(Y=T_{p_0}X_{\Delta }\) above nor the variational equation along the solution c, both due to the term \(\phi '(-d(j\phi ))\) in Proposition 1.1 which is zero for constant \(\phi =c_0\). Consequently the introduction of state-dependent delay with \(1=d(c_0)\) would have no effect on spectral properties of linearized solution operators along the constant solution c. For related facts compare [4, Section 3].

Notation, conventions, preliminaries

Concerning roots recall that for every \(\lambda _0\in {\mathbb {C}}\setminus \{0\}\) there exist an open disk \(D\subset {\mathbb {C}}\setminus \{0\}\) centered at \(\lambda _0\) and an analytic function \(z:D\rightarrow {\mathbb {C}}\setminus \{0\}\) with \((z(\lambda ))^2=\lambda \) on D. Obviously, \(z(\lambda )\ne -z(\lambda )\) on D.

The algebra of \(2\times 2\)-matrices with complex entries is denoted by \({\mathbb {C}}^{2\times 2}\), and \(I=(\delta _{jk})_{1\le j,k\le 2}\).

For reals \(s<t\) and \({\mathbb {K}}={\mathbb {R}}\) or \({\mathbb {K}}={\mathbb {C}}\) the Banach space of continuous functions \([s,t]\rightarrow {\mathbb {K}}\), with the norm given by \(|\phi |=\max _{s\le u\le t}|\phi (u)|\), is denoted by \(C([s,t],{\mathbb {K}})\), and the Banach space of continuously differentiable functions \([s,t]\rightarrow {\mathbb {K}}\), with the norm given by \(|\phi |_1=|\phi |+|\phi '|\), is denoted by \(C^1([s,t],{\mathbb {K}})\). For \(s=-2\) and \(t=0\) we use the abbreviations \(C,C^1\) in case \({\mathbb {K}}={\mathbb {R}}\) and \({{\mathcal {C}}},{{\mathcal {C}}}^1\) in case \({\mathbb {K}}={\mathbb {C}}\).

Further Banach spaces which occur in the sequel are \(C^1([-b,b],{\mathbb {C}}^2)\) analogous to \(C^1([-b,b],{\mathbb {C}})\), and the subspaces \(C_0^1([-b,b],{\mathbb {C}})\subset C^1([-b,b],{\mathbb {C}})\) and \(C_0^1([-b,b],{\mathbb {C}}^2)\subset C^1([-b,b],{\mathbb {C}}^2)\) which are defined by the boundary conditions \(\phi '(-b)=0=\phi '(b)\). The vector space \(C([-2,\infty ),{\mathbb {C}})\) is considered without a topology on it.

For Banach spaces B, E over the field \({\mathbb {K}}\), \({\mathbb {K}}={\mathbb {R}}\) or \({\mathbb {K}}={\mathbb {C}}\), the Banach space of continuous linear maps \(T:B\rightarrow E\), with \(|T|=\sup _{|b|\le 1}|Tb|\), is denoted by \(L_c(B,E)\).

Proof of Proposition 1.1

  1.

    The evaluation map \(ev:C\times [-2,0]\rightarrow {\mathbb {R}}\), \(ev(\phi ,t)=\phi (t)\), is continuous, and linear in the first argument. The evaluation map \(ev_1:C^1\times (-2,0)\rightarrow {\mathbb {R}}\), \(ev_1(\phi ,t)=\phi (t)\), is continuously differentiable with the partial derivatives

    $$\begin{aligned} D_1ev_1(\phi ,t)\chi =ev_1(\chi ,t)=\chi (t)\,\,\text {and}\,\,D_2ev_1(\phi ,t)s=s\,D_2ev_1(\phi ,t)1=s\,\phi '(t), \end{aligned}$$

    hence \(Dev_1(\phi ,t)(\chi ,s)=\chi (t)+s\,\phi '(t)\) for \(\phi ,\chi \) in \(C^1\). The chain rule applied to \(f=g\circ \,ev_1\circ (id\times (-(d\circ j)))\) yields that f is differentiable with

    $$\begin{aligned} Df(\phi )\chi= & {} Dg(ev_1(\phi ,-d(j\phi )))\{D_1ev_1(\phi ,-d(j\phi ))\chi \\&+D_2ev_1(\phi ,-d(j\phi ))D(-(d\circ j))(\phi )\chi \}\\= & {} g'(\phi (-d(j\phi )))\{\chi (-d(j\phi ))-\phi '(-d(j\phi ))Dd(j\phi )j\chi \}\\= & {} g'(ev_1(\phi ,-d(j\phi )))\{ev_1(\chi ,-d(j\phi ))-ev(\phi ',-d(j\phi ))Dd(j\phi )j\chi \} \end{aligned}$$

    for \(\phi ,\chi \) in \(C^1\).

  2.

    Proof that the map \(Df:C^1\ni \phi \mapsto Df(\phi )\in L_c(C^1,{\mathbb {R}})\) is continuous. The map \(Ev:[-2,0]\rightarrow L_c(C^1,{\mathbb {R}})\) given by \(Ev(t)\chi =ev(\chi ,t)\) is continuous, due to the estimate

    $$\begin{aligned} |\chi (t)-\chi (s)|\le \max _{-2\le u\le 0}|\chi '(u)||t-s|\le |\chi |_1|t-s| \end{aligned}$$

    for \(\chi \in C^1\) and t, s in \([-2,0]\). Using this, the fact that differentiation \(C^1\ni \phi \mapsto \phi '\in C\) is linear and continuous, and the expression for \(Df(\phi )\chi \) from Part 1, one easily completes the proof that the map Df is continuous.

  3.

    Verification of property (e). For \(\phi \in C^1\) and \(\chi \in C\) define \(D_ef(\phi )\chi \) by the formula for \(Df(\phi )\chi \) but with \(ev_1(\chi ,-d(j\phi ))\) replaced by \(ev(\chi ,-d(j\phi ))\) and \(j\chi \) replaced by \(\chi \). Then the continuity of the map \(C^1\times C\ni (\phi ,\chi )\mapsto D_ef(\phi )\chi \in {\mathbb {R}}\) becomes obvious. \(\square \)

2 Continuity, Compactness, the Case \(\Delta =0\)

Proposition 2.1

The map \({\mathbb {R}}\times {{\mathcal {Y}}} \times [0,\infty )\ni (\Delta ,\eta ,t)\mapsto v^{\Delta ,\eta }_t\in {{\mathcal {C}}}^1\) is continuous.

Proof

We only indicate the steps of the proof.

  1.

    For every \(\Delta \in {\mathbb {R}}\) and \(\chi \in {{\mathcal {Y}}}\) the continuous differentiability of \(v^{\Delta ,\chi }:[-2,\infty )\rightarrow {\mathbb {C}}\) implies that the curve \([0,\infty )\ni t\mapsto v^{\Delta ,\chi }_t\in {{\mathcal {C}}}^1\) is continuous.

  2.

    For \((\Delta ,\eta )\in {\mathbb {R}}\times {{\mathcal {Y}}}\) and \(0\le t\le 1\) represent the solution \(v^{\Delta ,\eta }\) by the variation-of-constants formula for ordinary differential equations and show that the map \({\mathbb {R}}\times {{\mathcal {Y}}}\ni (\Delta ,\eta )\mapsto v^{\Delta ,\eta }|[0,1]\in C([0,1],{\mathbb {C}})\) is continuous. Conclude that the map \({\mathbb {R}}\times {{\mathcal {Y}}}\ni (\Delta ,\eta )\mapsto v^{\Delta ,\eta }|[-2,1]\in C([-2,1],{\mathbb {C}})\) is continuous.

  3.

    Show by induction that for every \(n\in {\mathbb {N}}\) the map \({\mathbb {R}}\times {{\mathcal {Y}}}\ni (\Delta ,\eta )\mapsto v^{\Delta ,\eta }|[-2,n]\in C([-2,n],{\mathbb {C}})\) is continuous.

  4.

    Use Eq. (1.4) in order to show that for every \(n\in {\mathbb {N}}\) also the map \({\mathbb {R}}\times {{\mathcal {Y}}}\ni (\Delta ,\eta )\mapsto (v^{\Delta ,\eta })'|[0,n]\in C([0,n],{\mathbb {C}})\) is continuous. Next, obtain the continuity of the maps \({\mathbb {R}}\times {{\mathcal {Y}}}\ni (\Delta ,\eta )\mapsto (v^{\Delta ,\eta })'|[-2,n]\in C([-2,n],{\mathbb {C}})\), and deduce that the maps \({\mathbb {R}}\times {{\mathcal {Y}}}\ni (\Delta ,\eta )\mapsto v^{\Delta ,\eta }|[-2,n]\in C^1([-2,n],{\mathbb {C}})\), \(n\in {\mathbb {N}}\), are continuous.

  5.

    For reals \(\Delta ,{\bar{\Delta }}\) and \(\chi ,{\bar{\chi }}\) in \({{\mathcal {Y}}}\) set \(v=v^{\Delta ,\chi }\) and \({\bar{v}}=v^{{\bar{\Delta }},{\bar{\chi }}}\), and consider \(0\le s\le t<n\in {\mathbb {N}}\). Then

$$\begin{aligned} |{\bar{v}}_s-v_t|_1\le & {} |{\bar{v}}_s-v_s|_1+|v_s-v_t|_1\\\le & {} |({\bar{v}}-v)|[-2,n]|_1+|v_s-v_t|_1, \end{aligned}$$

and it becomes obvious how the assertion of Proposition 2.1 follows by means of Parts 1 and 4 of the proof. \(\square \)

Proposition 2.2

(Compactness) For every bounded set \(B\subset {\mathbb {R}}\times {{\mathcal {Y}}}\) and for every \(t\ge 2\) the closure of the set \(\{v^{\Delta ,\eta }_t:(\Delta ,\eta )\in B\}\subset {{\mathcal {C}}}^1\) is compact.

Proof

We only indicate the steps of the proof. Let a bounded subset \(B\subset {\mathbb {R}}\times {{\mathcal {Y}}}\) be given.

  1.

    For \((\Delta ,\eta )\in B\) and \(0\le t\le 1\) represent the solution \(v^{\Delta ,\eta }\) by the variation-of-constants formula for ordinary differential equations and show that the set \(\{v^{\Delta ,\eta }(t)\in {\mathbb {C}}:(\Delta ,\eta )\in B,0\le t\le 1\}\) is bounded. As B is bounded, it follows that also the set \(\{v^{\Delta ,\eta }(t)\in {\mathbb {C}}:(\Delta ,\eta )\in B,-2\le t\le 1\}\) is bounded.

  2.

    Proceed by induction and obtain that every set \(\{v^{\Delta ,\eta }(t)\in {\mathbb {C}}:(\Delta ,\eta )\in B,-2\le t\le n\}\), \(n\in {\mathbb {N}}\), is bounded.

  3.

    Let \(n\in {\mathbb {N}}\). Use Eq. (1.4) and obtain that the set \(\{(v^{\Delta ,\eta })'(t)\in {\mathbb {C}}:(\Delta ,\eta )\in B,0\le t\le n\}\) is bounded. Use the boundedness of B and deduce that the set \(\{(v^{\Delta ,\eta })'(t)\in {\mathbb {C}}:(\Delta ,\eta )\in B,-2\le t\le n\}\) is bounded. It follows that there is a uniform Lipschitz constant for the functions \(v^{\Delta ,\eta }|[-2,n]\), \( (\Delta ,\eta )\in B\), and the set of these functions is equicontinuous at every \(t\in [-2,n]\). Use Eq. (1.4) in order to deduce that also the set of all derivatives \((v^{\Delta ,\eta })'|[0,n]\), \( (\Delta ,\eta )\in B\), is equicontinuous at every \(t\in [0,n]\).

  4.

    Let \(t\ge 2\) be given. Choose an integer \(n\ge t\). It follows that both sets \(V=\{v^{\Delta ,\eta }_t\in {{\mathcal {C}}}^1:(\Delta ,\eta )\in B\}\) and \(V'=\{(v^{\Delta ,\eta })'_t\in {{\mathcal {C}}}:(\Delta ,\eta )\in B\}\) are bounded with respect to the norm on \({{\mathcal {C}}}\) and equicontinuous at every \(s\in [-2,0]\). Therefore their closures in \({{\mathcal {C}}}\) are compact. Now let a sequence \((\phi _j)_1^{\infty }\) in V be given. A subsequence \((\phi _{j_k})_1^{\infty }\) converges in \({{\mathcal {C}}}\), and a subsequence of the sequence of derivatives \(((\phi _{j_k})')_1^{\infty }\) converges in \({{\mathcal {C}}}\). This yields a subsequence of \((\phi _j)_1^{\infty }\) which converges in \({{\mathcal {C}}}^1\). It follows that V has compact closure in \({{\mathcal {C}}}^1\). \(\square \)

Notice that the factor \(g'(p(t-1))\) on the right hand side of Eq. (1.4) is zero on the set \([0,1-b]\cup [1+b,3-b]\cup [3+b,4]+4{\mathbb {N}}_0\), due to \(g'(\xi )=0\) for \(|\xi |\ge b\) and \(|p(s)|\ge b\) on \([-1,-b]\cup [b,2-b]\cup [2+b,3]+4{\mathbb {N}}_0\).

Corollary 2.3

Each solution \(v^{\Delta ,\eta }\), \(\Delta \in {\mathbb {R}}\) and \(\eta \in {{\mathcal {Y}}}\), is constant on each of the intervals \([0,1-b],[1+b,3-b],[3+b,4]\), and on their translates by \(4{\mathbb {N}}_0\).

We turn to the case \(\Delta =0\), for which \(d_0(\phi )=1\) everywhere.

Proposition 2.4

  (i)

    \({{\mathcal {M}}}_0{{\mathcal {Y}}}={\mathbb {C}}\,p_{c,0}'\) and \(\sigma _0=\{0,1\}\),

  (ii)

    \({{\mathcal {Y}}}={{\mathcal {M}}}_0^{-1}(0)\oplus {\mathbb {C}}p_{c,0}'\),

  (iii)

    \(0\in {\mathbb {C}}\) is an eigenvalue with chain length 1, and

  (iv)

    the eigenvalue 1 is simple.

Proof

  1.

    On assertion (i). Let \(\eta \in {{\mathcal {Y}}}\), set \(v=v^{0,\eta }\). By Corollary 2.3 both functions v and \(p_c'\) are constant on the interval \([1+b,2+b]\). For \(w=v(1+b)/p_c'(1+b)=-v(1+b)\) we have \(v(t)=w\,p_c'(t)\) on \([1+b,2+b]\). Notice that for \(\Delta =0\) Eq. (1.4) reads \(v'(t)=g'(p(t-1))v(t-1)\). Successively integrating this equation on the intervals \([1+b+n,2+b+n]\), \(n\in {\mathbb {N}}\), we obtain \(v(t)=w\,p_c'(t)\) for all \(t\ge 1+b\). In particular, \({{\mathcal {M}}}_0\eta =v_4=w\,p_{c,4}'=w\,p_{c,0}'\), hence

    $$\begin{aligned} {{\mathcal {M}}}_0{{\mathcal {Y}}}\subset {\mathbb {C}}\,p_{c,0}'\quad (\subset {{\mathcal {M}}}_0{{\mathcal {Y}}}). \end{aligned}$$
    (2.1)

    Now consider \(\lambda \in \sigma _0\setminus \{0\}\). For an eigenvector \(\chi \in {{\mathcal {Y}}}\setminus \{0\}\), \(\chi =\frac{1}{\lambda }{{\mathcal {M}}}_0\chi \in {\mathbb {C}}\,p_{c,0}'\) (see (2.1)). It follows that \(\chi ={{\mathcal {M}}}_0\chi =\lambda \,\chi \), and thereby, \(\lambda =1\).

  2.

    Proof of \({{\mathcal {Y}}}\subset {{\mathcal {M}}}_0^{-1}(0)+{\mathbb {C}}p_{c,0}'\). Let \(\eta \in {{\mathcal {Y}}}\), set \(v=v^{0,\eta }\). By assertion (i), \({{\mathcal {M}}}_0\eta =w\,p_{c,0}'\) for some \(w\in {\mathbb {C}}\). We have

    $$\begin{aligned} {{\mathcal {M}}}_0(\eta -w\,p_{c,0}')=w\,p_{c,0}'-{{\mathcal {M}}}_0w\,p_{c,0}'=0 \end{aligned}$$

    because of \({{\mathcal {M}}}_0p_{c,0}'=p_{c,0}'\). Hence

    $$\begin{aligned} \eta =(\eta -w\,p_{c,0}')+w\,p_{c,0}'\in {{\mathcal {M}}}_0^{-1}(0)+{\mathbb {C}}p_{c,0}'. \end{aligned}$$
  3.

    Proof of \({{\mathcal {M}}}_0^{-1}(0)\cap {\mathbb {C}}p_{c,0}'=\{0\}\). For \(\chi \in {{\mathcal {M}}}_0^{-1}(0)\cap {\mathbb {C}}p_{c,0}'\), \(0={{\mathcal {M}}}_0\chi \) and \(\chi =a\,p_{c,0}'\) for some \(a\in {\mathbb {C}}\), hence \(0=a\,{{\mathcal {M}}}_0p_{c,0}'=a\,p_{c,0}'\), \(a=0\), \(\chi =0\).

  4.

    Parts 2 and 3 yield assertion (ii).

  5.

    Assertion (ii) implies that \(0\in {\mathbb {C}}\) is an eigenvalue with eigenspace \({{\mathcal {M}}}_0^{-1}(0)\). The chain length is 1 because for every \(\chi \in {{\mathcal {M}}}_0^{-2}(0)\) we get \({{\mathcal {M}}}_0\chi \in {{\mathcal {M}}}_0^{-1}(0)\cap {\mathbb {C}}\,p_{c,0}'=\{0\}\) (with assertion (i) and Part 3), hence \(\chi \in {{\mathcal {M}}}_0^{-1}(0)\).

  6.

    Proof of \(({{\mathcal {M}}}_0-1)^{-1}(0)\subset {\mathbb {C}}\,p_{c,0}'\). For \(\chi \in ({{\mathcal {M}}}_0-1)^{-1}(0)\) we have \(\chi ={{\mathcal {M}}}_0\chi \in {\mathbb {C}}\,p_{c,0}'\), see assertion (i).

  7.

    Proof of assertion (iv). For every \(\chi \in ({{\mathcal {M}}}_0-1)^{-2}(0)\) we have \(({{\mathcal {M}}}_0-1)\chi \in ({{\mathcal {M}}}_0-1)^{-1}(0)\subset {\mathbb {C}}\,p_{c,0}'\) (see Part 6). It follows that

    $$\begin{aligned} \chi \in {{\mathcal {M}}}_0\chi +{\mathbb {C}}\,p_{c,0}'\subset & {} {\mathbb {C}}\,p_{c,0}'\quad \text {(with assertion (i))}\\\subset & {} ({{\mathcal {M}}}_0-1)^{-1}(0). \end{aligned}$$

    \(\square \)
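Assertion (i) of Proposition 2.4 can also be seen in a computation. The following sketch (continuing the ones in the introduction, i.e. reusing the concrete g, gprime, p and b defined there) integrates Eq. (1.4) for \(\Delta =0\), that is \(v'(t)=g'(p(t-1))v(t-1)\), by the trapezoidal rule on a uniform grid, starting from one particular \(\eta \in {{\mathcal {Y}}}\), and checks that \(v_4\) is a scalar multiple of \(p_{c,0}'\):

```python
def pprime(t):
    return g(p(np.asarray(t, dtype=float) - 1.0))      # p'(t) = g(p(t-1)) by Eq. (1.1)

N = 2000                                               # grid points per unit interval
t = np.arange(6 * N + 1) / N - 2.0                     # uniform grid on [-2, 4]
v = np.empty(6 * N + 1)
v[: 2 * N + 1] = np.cos(np.pi * t[: 2 * N + 1])        # eta(s) = cos(pi s): eta'(0) = 0, so eta in Y
rhs = lambda k: gprime(p(t[k] - 1.0)) * v[k - N]       # right hand side of (1.4) for Delta = 0
for k in range(2 * N, 6 * N):                          # explicit stepping: v(t-1) is already known
    v[k + 1] = v[k] + 0.5 / N * (rhs(k) + rhs(k + 1))

w = v[-1]                                              # p'(4) = p'(0) = 1 here, so w = v(4)
print(np.max(np.abs(v[4 * N:] - w * pprime(t[4 * N:]))))   # small: v_4 is a multiple of p'_{c,0}
```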

The next results are about persistence, or continuity, of spectra for small \(\Delta \).

Proposition 2.5

For every \(\epsilon >0\) there exists \(\Delta _{\epsilon }>0\) with

$$\begin{aligned} \{\lambda \in \sigma _{\Delta }:\lambda \ne 1\}\subset \{\lambda \in {\mathbb {C}}:|\lambda |<\epsilon \} \end{aligned}$$

for all \(\Delta \in {\mathbb {R}}\) with \(|\Delta |<\Delta _{\epsilon }\).

Proof

We argue by contradiction. Suppose there exist \(\epsilon >0\) and sequences \((\Delta _n)_1^{\infty }\) in \({\mathbb {R}}\) and \((\lambda _n)_1^{\infty }\) in \({\mathbb {C}}\) with \(\lambda _n\in \sigma _{\Delta _n}\setminus \{1\}\), \(\Delta _n\rightarrow 0\) and \(|\lambda _n|\ge \epsilon \) for all \(n\in {\mathbb {N}}\). Choose an eigenvector \(\chi _n\in {{\mathcal {Y}}}\) for each eigenvalue \(\lambda _n\). Let \(pr:{{\mathcal {Y}}}\rightarrow {{\mathcal {Y}}}\) denote the projection along \({\mathbb {C}}p_{c,0}'\) onto \(K={{\mathcal {M}}}_0^{-1}(0)\). Let \(n\in {\mathbb {N}}\). Because of \(\lambda _n\ne 1\) we have \(\chi _n\notin {\mathbb {C}}\,p_{c,0}'\), hence \(\zeta _n=pr\,\chi _n\) belongs to \(K\setminus \{0\}\), and we may assume \(|\zeta _n|_1=1\). As \(pr\,{{\mathcal {M}}}_{\Delta _n}(id-pr){{\mathcal {Y}}}\subset pr\,{{\mathcal {M}}}_{\Delta _n}{\mathbb {C}}\,p_{c,0}'\subset pr\,{\mathbb {C}}\,p_{c,0}'=0\) we have

$$\begin{aligned} \lambda _n\zeta _n=\lambda _n pr\,\chi _n=pr\,\lambda _n \chi _n= pr\,{{\mathcal {M}}}_{\Delta _n}\chi _n=pr\,{{\mathcal {M}}}_{\Delta _n}pr\,\chi _n=pr\,{{\mathcal {M}}}_{\Delta _n}\zeta _n, \end{aligned}$$

and \(\zeta _n\) is an eigenvector of the eigenvalue \(\lambda _n\) of the map \(K\ni \zeta \mapsto pr\,{{\mathcal {M}}}_{\Delta _n}\zeta \in K\).

Proposition 2.2 yields that the elements \(\lambda _n\zeta _n=pr\,{{\mathcal {M}}}_{\Delta _n}\zeta _n\), \(n\in {\mathbb {N}}\), belong to a compact subset of the Banach space K. In particular the moduli \(|\lambda _n|=|\lambda _n\zeta _n|_1\) are bounded, and a subsequence \((\lambda _{n_j})_{j=1}^{\infty }\) converges to some \(\lambda \in {\mathbb {C}}\), \(|\lambda |\ge \epsilon >0\). Using \(\zeta _n=\frac{1}{\lambda _n}pr\,{{\mathcal {M}}}_{\Delta _n}\zeta _n\) and compactness we find a subsequence of the eigenvectors \(\zeta _{n_j}\) which converges to some \(\zeta \in K\) with \(|\zeta |_1=1\). Using \(\Delta _n\rightarrow 0\) and Proposition 2.1 we arrive at \(0\ne \lambda \,\zeta =pr\,{{\mathcal {M}}}_0\zeta \), in contradiction to \({{\mathcal {M}}}_0K=0\). \(\square \)

Proposition 2.6

There exists \(\Delta _1>0\) so that for all \(\Delta \in {\mathbb {R}}\) with \(|\Delta |<\Delta _1\) the eigenvalue 1 of \({{\mathcal {M}}}_{\Delta }\) is simple.

Proof

  1.

    (Geometric multiplicities) Proof that there exists \(\Delta _g>0\) with \(\text {dim}\,({{\mathcal {M}}}_{\Delta }-1)^{-1}(0)=1\) for \(|\Delta |\le \Delta _g\). Suppose there is a sequence \((\Delta _n)_1^{\infty }\) in \({\mathbb {R}}\) converging to 0, with \(\text {dim}({{\mathcal {M}}}_{\Delta _n}-1)^{-1}(0)\ge 2\) for all \(n\in {\mathbb {N}}\). For \(n\in {\mathbb {N}}\), choose an eigenvector \(\chi _n\notin {\mathbb {C}}\,p_{c,0}'\) of the eigenvalue 1 of \({{\mathcal {M}}}_{\Delta _n}\). With K and pr as in the proof of Proposition 2.5, set \(\zeta _n=pr\,\chi _n\in K\setminus \{0\}\). We may assume \(|\zeta _n|_1=1\). As in the proof of Proposition 2.5 we get that \(\zeta _n\) is an eigenvector of the eigenvalue 1 of the map \(K\ni \zeta \mapsto pr\,{{\mathcal {M}}}_{\Delta _n}\zeta \in K\), and compactness and continuity arguments yield an element \(\zeta \in K\) with \(|\zeta |_1=1\) and \(\zeta =pr\,{{\mathcal {M}}}_0\zeta \), in contradiction to \({{\mathcal {M}}}_0K=0\).

  2.

    Proof that there exists \(\Delta _c\in (0,\Delta _g)\) so that for \(|\Delta |\le \Delta _c\) the eigenvalue 1 of \({{\mathcal {M}}}_{\Delta }\) has chain length 1. Suppose the contrary. Then there are sequences \((\Delta _n)_1^{\infty }\) converging to 0 and \((w_n)_1^{\infty }\) in \({{\mathcal {Y}}}\) with \(w_n\in ({{\mathcal {M}}}_{\Delta _n}-1)^{-2}(0)\setminus ({{\mathcal {M}}}_{\Delta _n}-1)^{-1}(0)\) for all \(n\in {\mathbb {N}}\). Using Part 1 we get \(({{\mathcal {M}}}_{\Delta _n}-1)w_n \in ({{\mathcal {M}}}_{\Delta _n}-1)^{-1}(0)={\mathbb {C}}\,p_{c,0}'\) and \(w_n\notin ({{\mathcal {M}}}_{\Delta _n}-1)^{-1}(0)={\mathbb {C}}\,p_{c,0}'\), hence \(\rho _n=pr\,w_n\in K\setminus \{0\}\). We may assume \(|\rho _n|_1=1\). Observe that as in the proof of Proposition 2.5 we have

    $$\begin{aligned} (pr\,{{\mathcal {M}}}_{\Delta _n}-1)\rho _n=pr\,{{\mathcal {M}}}_{\Delta _n}\rho _n-pr\,w_n=pr\,{{\mathcal {M}}}_{\Delta _n}w_n-pr\,w_n, \end{aligned}$$

    and consequently \( (pr\,{{\mathcal {M}}}_{\Delta _n}-1)\rho _n =pr\,({{\mathcal {M}}}_{\Delta _n}-1)w_n\in pr\,{\mathbb {C}}\,p_{c,0}'=0\), or \(pr\,{{\mathcal {M}}}_{\Delta _n}\rho _n=\rho _n\ne 0\). Now continuity and compactness arguments as in the proof of Proposition 2.5 yield an element \(\rho \in K\) with \(|\rho |_1=1\) and \(\rho =pr\,{{\mathcal {M}}}_0\rho \), in contradiction to \({{\mathcal {M}}}_0K=0\).

  3.

    Combining the results of Parts 1 and 2 we get that for \(|\Delta |\le \Delta _c\) the algebraic eigenspace of the eigenvalue 1 of \({{\mathcal {M}}}_{\Delta }\) is one-dimensional. \(\square \)

In Sect. 8 we shall see that the algebraic multiplicity of the eigenvalue 1 of \({{\mathcal {M}}}_{\Delta }\) is 1 for all \(\Delta \in {\mathbb {R}}\), and in Sects. 7 and 9 we shall find eigenvalues different from 0 and 1. The proofs of these results rely on the characteristic equation for eigenvalues which is derived in the next section, and on the computation of resolvents, also in the next section.

3 The Characteristic Equation and the Resolvents

We begin with the computation of the preimages \(\chi \in {{\mathcal {Y}}}\) of a given element \(\eta \in ({{\mathcal {M}}}_{\Delta }-\lambda ){{\mathcal {Y}}}\), for \(\Delta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {C}}\setminus \{0\}\). Let \(v=v^{\Delta ,\chi }\). Then \(\chi =\frac{1}{\lambda }(v_4-\eta )\). As v is constant on each of the intervals \([0,1-b],[1+b,3-b],[3+b,4]\) it is determined on [0, 4] by its restrictions to the intervals \([1-b,1+b]\) and \([3-b,3+b]\). The following proposition shows that these restrictions correspond to a solution of a boundary value problem on the interval \([-b,b]\).

Proposition 3.1

Let \(\Delta \in {\mathbb {R}}\), \(\lambda \in {\mathbb {C}}\setminus \{0\}\), \(\eta =({{\mathcal {M}}}_{\Delta }-\lambda )\chi \), \(v=v^{\Delta ,\chi }\). Then the map

$$\begin{aligned} y=\left( \begin{array}{c}u\\ w\end{array}\right) \in C^1_0([-b,b],{\mathbb {C}}^ 2) \end{aligned}$$

given by \(u(t)=v(t+3)\) and \(w(t)=v(t+1)\in {\mathbb {C}}\) satisfies

$$\begin{aligned} y'(t)= & {} g'(t)\{\Delta \,A(\lambda )y(t)+y(-b)+Z(\Delta ,\lambda ,\eta ,t)\}\quad \text {on}\quad [-b,b]\end{aligned}$$
(3.1)
$$\begin{aligned}&\text {and}&\nonumber \\ y(-b)= & {} B(\lambda )y(b)+N(\lambda ,\eta ), \end{aligned}$$
(3.2)

with the maps

$$\begin{aligned} A:{\mathbb {C}}\setminus \{0\}\ni \lambda \mapsto \left( \begin{array}{cr}1 &{} 1\\ -\frac{1}{\lambda } &{} -1\end{array}\right) \in {\mathbb {C}}^{2\times 2}, \end{aligned}$$
$$\begin{aligned} Z:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\times [-b,b]\rightarrow {\mathbb {C}}^2 \end{aligned}$$

given by

$$\begin{aligned} Z(\Delta ,\lambda ,\eta ,t)= \left( \begin{array}{c} 0 \\ \frac{1}{\lambda }(\eta (0)-\eta (t))+\frac{\Delta }{\lambda }\eta (t-1) \end{array}\right) \in {\mathbb {C}}^2\quad \text {on}\quad [-b,0] \end{aligned}$$

and

$$\begin{aligned} Z(\Delta ,\lambda ,\eta ,t)=\left( \begin{array}{c} 0 \\ \frac{\Delta }{\lambda }\eta (t-1) \end{array}\right) \in {\mathbb {C}}^2\quad \text {on}\quad [0,b] \end{aligned}$$

for \(\Delta \in {\mathbb {R}},\lambda \in {\mathbb {C}}\setminus \{0\},\eta \in {{\mathcal {Y}}}\),

$$\begin{aligned} B:{\mathbb {C}}\setminus \{0\}\ni \lambda \mapsto \left( \begin{array}{cr}0 &{} 1\\ \frac{1}{\lambda } &{} 0\end{array}\right) \in {\mathbb {C}}^{2\times 2}, \end{aligned}$$
$$\begin{aligned} N:({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\ni (\lambda ,\eta )\mapsto \left( \begin{array}{c} 0 \\ \frac{-\eta (0)}{\lambda } \end{array}\right) \in {\mathbb {C}}^2. \end{aligned}$$

Moreover, \(\chi =\frac{1}{\lambda }(v_4-\eta )\) with

$$\begin{aligned} v_4(t)=\left\{ \begin{array}{l} u(-b)\quad \text {on}\quad [-2,-1-b]\\ u(1+t)\quad \text {on}\quad [-1-b,-1+b]\\ u(b)\quad \text {on}\quad [-1+b,0] \end{array} \right. \end{aligned}$$
(3.3)

Proof

From \(v'(t)=0\) on \([0,1-b]\cup [1+b,3-b]\cup [3+b,4]\) we have \(y'(-b)=0=y'(b)\), so that indeed \(y\in C^1_0([-b,b],{\mathbb {C}}^2)\). For \(t\in [-b,b]\),

$$\begin{aligned} u'(t)= & {} v'(3+t)=g'(p(2+t))\{v(2+t)-p'(2+t)\Delta [v(3+t)+v(1+t)]\}\\= & {} g'(p(t))\{u(-b)+\Delta [u(t)+w(t)]\}\\&(v\,\,\text {is constant on}\,\,[1+b,3-b])\\= & {} g'(t)\Delta (u(t)+w(t))+g'(t)u(-b) \end{aligned}$$

and

$$\begin{aligned} w'(t)= & {} v'(1+t)=g'(p(t))\{v(t)-p'(t)\Delta [v(1+t)+v(t-1)]\}\\= & {} g'(t)\{v(t)-\Delta [w(t)+\chi (t-1)]\}. \end{aligned}$$

We have

$$\begin{aligned} \chi (t-1)=\frac{1}{\lambda }(v(4+t-1)-\eta (t-1))=\frac{1}{\lambda }(u(t)-\eta (t-1)) \end{aligned}$$

and in case \(t\in [0,b]\),

$$\begin{aligned} v(t)=v(1-b)=w(-b) \end{aligned}$$

while in case \(t\in [-b,0]\),

$$\begin{aligned} v(t)= & {} \chi (t)=\frac{1}{\lambda }(v(4+t)-\eta (t))=\frac{1}{\lambda }(v(4)-\eta (t))\\= & {} \frac{1}{\lambda }(v(4)-\eta (0)+\eta (0)-\eta (t))=v(0)+\frac{1}{\lambda }(\eta (0)-\eta (t))\\= & {} v(1-b)+\frac{1}{\lambda }(\eta (0)-\eta (t))=w(-b)+\frac{1}{\lambda }(\eta (0)-\eta (t)). \end{aligned}$$

For \(t\in [0,b]\) we obtain

$$\begin{aligned} w'(t)=g'(t)\Delta \left( -\frac{u(t)}{\lambda }-w(t)\right) +g'(t)w(-b)+g'(t)\frac{\Delta }{\lambda }\eta (t-1), \end{aligned}$$

and for \(t\in [-b,0]\) we get

$$\begin{aligned} w'(t)=g'(t)\Delta \left( -\frac{u(t)}{\lambda }-w(t)\right) +g'(t)w(-b)+g'(t)\frac{1}{\lambda }(\eta (0)-\eta (t))+g'(t)\frac{\Delta }{\lambda }\eta (t-1). \end{aligned}$$

It follows that y satisfies Eq. (3.1). Also,

$$\begin{aligned} u(-b)=v(3-b)=v(1+b)=w(b) \end{aligned}$$

and

$$\begin{aligned} w(-b)=v(1-b)=v(0)=\frac{1}{\lambda }(v(4)-\eta (0))=\frac{1}{\lambda }(v(3+b)-\eta (0))=\frac{1}{\lambda }(u(b)-\eta (0)) \end{aligned}$$

which yields Eq. (3.2). Finally, \(\chi =\frac{1}{\lambda }({{\mathcal {M}}}_{\Delta }\chi -\eta )=\frac{1}{\lambda }(v_4-\eta )\) with

$$\begin{aligned} v_4(t)=\left\{ \begin{array}{l} u(-b)\quad \text {on}\quad [-2,-1-b]\\ u(1+t)\quad \text {on}\quad [-1-b,-1+b]\\ u(b)\quad \text {on}\quad [-1+b,0] \end{array} \right. \end{aligned}$$

\(\square \)

Next we characterize the solutions \(y:[-b,b]\rightarrow {\mathbb {C}}^2\) of Eq. (3.1) which satisfy the boundary condition (3.2), by an equation for the initial data \(c=y(-b)\). The matrices \(g'(t)\Delta \,A(\lambda )\), \(t\in [-b,b]\), commute. It follows that for \(-b\le s\le t\le b\) the solutions \(z:[-b,b]\rightarrow {\mathbb {C}}^2\) of the nonautonomous linear ordinary differential equation \(z'(t)=g'(t)\Delta \,A(\lambda )z(t)\) satisfy \(z(t)=U(t,s)z(s)\) with

$$\begin{aligned} U(t,s)=U(\Delta ,\lambda ,t,s)= e^{\int _s^tg'(r)\Delta \,A(\lambda )dr}=e^{(g(t)-g(s))\Delta \,A(\lambda )}\in {\mathbb {C}}^{2\times 2}. \end{aligned}$$

Using the variation-of-constants formula we get

$$\begin{aligned} y(t)=U(t,-b)c+\int _{-b}^tU(t,s)g'(s)cds+\int _{-b}^tU(t,s)g'(s)Z(\Delta ,\lambda ,\eta ,s)ds \end{aligned}$$
(3.4)

for \(-b\le t\le b\). The boundary condition for \(c=y(-b)\) becomes

$$\begin{aligned} c= & {} B(\lambda )y(b)+N(\lambda ,\eta )\\= & {} B(\lambda )\left( U(b,-b)+\int _{-b}^bU(b,s)g'(s)Ids\right) c\\&+B(\lambda )\int _{-b}^bU(b,s)g'(s)Z(\Delta ,\lambda ,\eta ,s)ds+N(\lambda ,\eta ), \end{aligned}$$

or equivalently,

$$\begin{aligned}&\left( I-B(\lambda )\left( U(b,-b)+\int _{-b}^bU(b,s)g'(s)Ids\right) \right) c \nonumber \\&\quad = B(\lambda )\int _{-b}^bU(b,s)g'(s)Z(\Delta ,\lambda ,\eta ,s)ds+N(\lambda ,\eta ). \end{aligned}$$
(3.5)

Define

$$\begin{aligned} H:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\rightarrow {\mathbb {C}}^{2\times 2} \end{aligned}$$

by

$$\begin{aligned} H(\Delta ,\lambda )=I-B(\lambda )\left( U(b,-b)+\int _{-b}^bU(b,s)g'(s)Ids\right) \in {\mathbb {C}}^{2\times 2}. \end{aligned}$$

We call \(P:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\rightarrow {\mathbb {C}}\) given by

$$\begin{aligned} P(\Delta ,\lambda )=\text {det}\,H(\Delta ,\lambda ) \end{aligned}$$

the characteristic function associated with the operator \({{\mathcal {M}}}_{\Delta }\).

Proposition 3.2

For all \(\Delta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {C}}\setminus \{0\}\), \(P(\Delta ,\lambda )=0\) if and only if \(\lambda \in \sigma _{\Delta }\).

Proof

  1.

    Let \(\Delta \in {\mathbb {R}}\) and \(\lambda \in \sigma _{\Delta }\setminus \{0\}\) be given. Choose an eigenvector \(\chi \in {{\mathcal {Y}}}\setminus \{0\}\) of the eigenvalue \(\lambda \). Then

    $$\begin{aligned} 0=({{\mathcal {M}}}_{\Delta }-\lambda )\chi =v_4-\lambda \,v_0 \end{aligned}$$

    for \(v=v^{\Delta ,\chi }\). Observe that \(Z(\Delta ,\lambda ,0,t)=0\) on \([-b,b]\) and \(N(\lambda ,0)=0\). We apply Proposition 3.1 with \(\eta =0\). In terms of the remarks before Proposition 3.2 we obtain that the map \(y=\left( \begin{array}{c} u\\ w\end{array}\right) \) with the components \(u:[-b,b]\ni t\mapsto v(3+t)\in {\mathbb {C}}\) and \(w:[-b,b]\ni t\mapsto v(1+t)\in {\mathbb {C}}\) is given by Eq. (3.4) with \(c=y(-b)\) and \(Z(\Delta ,\lambda ,0,s)=0\), and \(H(\Delta ,\lambda )c=0\) (from Eq. (3.5), with \(Z(\Delta ,\lambda ,0,s)=0\) and \(N(\lambda ,0)=0\)). We have \(c\ne 0\) since otherwise Eq. (3.4) with \(Z(\Delta ,\lambda ,0,s)=0\) and \(c=0\) yields \(0=y=\left( \begin{array}{c} u\\ w\end{array}\right) \), which means \(v(t)=0\) on \([1-b,1+b]\cup [3-b,3+b]\), and as v is constant on \([0,1-b],[1+b,3-b],[3+b,4]\) it follows that \(v(t)=0\) on [2, 4], hence \(\chi =v_0=\frac{1}{\lambda }v_4=0\), in contradiction to \(\chi \ne 0\). Now \(H(\Delta ,\lambda )c=0\) yields \(P(\Delta ,\lambda )=\det \,H(\Delta ,\lambda )=0\).

  2.

    Conversely, assume \(P(\Delta ,\lambda )=0\) for some \(\Delta \in {\mathbb {R}},\lambda \in {\mathbb {C}}\setminus \{0\}\). Then there exists \(c\in {\mathbb {C}}^2\setminus \{0\}\) with \(H(\Delta ,\lambda )c=0\). The map \(y:[-b,b]\rightarrow {\mathbb {C}}^2\) given by Eq. (3.4) with \(\eta =0\), hence \(Z(\Delta ,\lambda ,0,s)=0\), satisfies \(y(-b)=c\) and, because of \(H(\Delta ,\lambda )c=0\), \(y(-b)=B(\lambda )y(b)\). Set \(\left( \begin{array}{c} u\\ w\end{array}\right) =y\). Then \(u(-b)=w(b)\) and \(w(-b)=\frac{1}{\lambda }u(b)\). Define \(v:[-2,4]\rightarrow {\mathbb {C}}\) by

    $$\begin{aligned} v(t)=\left\{ \begin{array}{l} u(t-3)\quad \text {on}\quad [3-b,3+b]\\ w(t-1)\quad \text {on}\quad [1-b,1+b]\\ u(-b)=w(b)\quad \text {on}\quad [1+b,3-b]\\ u(b)=\lambda \,w(-b)\quad \text {on}\quad [3+b,4]\\ w(-b)=\frac{1}{\lambda }u(b)\quad \text {on}\quad [-1+b,1-b]\\ (\text {then, for}\,\,-1+b\le t\le 0, v(t)=\frac{1}{\lambda }v(4+t))\\ \frac{1}{\lambda }v(4+t)\quad \text {on}\quad [-2,-1+b] \end{array} \right. \end{aligned}$$

    In particular, \(v(t)=\frac{1}{\lambda }v(4+t)\) on \([-2,0]\), or \(v_4=\lambda v_0\). The function v is continuous and Eq. (1.4) holds on

    $$\begin{aligned}{}[0,1-b)\cup (1+b,3-b)\cup (3+b,4]. \end{aligned}$$

    On \((3-b,3+b)\) we have

    $$\begin{aligned} v'(t)= & {} u'(t-3)=g'(t-3)\Delta \Big (u(t-3)+w(t-3)\Big )+g'(t-3)u(-b)\\= & {} g'(p(t-3))\Big \{v(t-1)-p'(t-1)\Delta \Big [v(t)+v(t-2)\Big ]\Big \}\\= & {} g'(-p(t-1))\Big \{v(t-1)-p'(t-1)\Delta \Big [v(t)+v(t-2)\Big ]\Big \}\\= & {} g'(p(t-1))\Big \{v(t-1)-p'(t-1)\Delta \Big [v(t)+v(t-2)\Big ]\Big \} \end{aligned}$$

    and on \((1-b,1+b)\) we have

    $$\begin{aligned} v'(t)= & {} w'(t-1)=g'(t-1)\Delta \left( -\frac{1}{\lambda }u(t-1)-w(t-1)\right) +g'(t-1)w(-b)\\= & {} g'(p(t-1))\left\{ v(t-1)-p'(t-1)\Delta \left[ v(t)+\frac{1}{\lambda }u(t-1)\right] \right\} . \end{aligned}$$

    Observe that

    $$\begin{aligned} \frac{1}{\lambda }u(t-1)=\frac{1}{\lambda }v(t+2)=\frac{1}{\lambda }v(4+t-2)=v(t-2). \end{aligned}$$

    It follows that also on \((1-b,1+b)\) Eq. (1.4) is satisfied by v. A look at the continuous coefficient \(g'(p(t-1))\) which is zero at \(1-b,1+b,3-b,3+b\) yields that \(v'\) has limit zero at each of these points. We infer that v is continuously differentiable and satisfies Eq. (1.4) on [0, 4]. By \(v'_0(0)=v'(0)=0\), \(v_0\in {{\mathcal {Y}}}\), and we have \(v_4={{\mathcal {M}}}_{\Delta }v_0\), hence \({{\mathcal {M}}}_{\Delta }v_0=v_4=\lambda v_0\). For \(\lambda \) to be an eigenvalue it remains to show \(v_0\ne 0\). This is a consequence of \(0\ne c=\left( \begin{array}{c}u(-b)\\ w(-b)\end{array}\right) \), which implies \(v(t)\ne 0\) for some \(t\in [0,4]\), hence \(v_0\ne 0\).\(\square \)
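Proposition 3.2 makes the Floquet multipliers accessible to computation. The sketch below (reusing the concrete g, gprime and b from the sketches in the introduction, and assuming scipy for the matrix exponential) evaluates \(H(\Delta ,\lambda )\) and \(P(\Delta ,\lambda )=\det \,H(\Delta ,\lambda )\) directly from their definitions, the integral over \([-b,b]\) being approximated by the trapezoidal rule; as a consistency check, \(P(\Delta ,1)\) must vanish for every \(\Delta \), since 1 is a Floquet multiplier of \({{\mathcal {M}}}_{\Delta }\) for every \(\Delta \).

```python
from scipy.linalg import expm   # matrix exponential

def H(Delta, lam, m=1000):
    """H(Delta, lambda) from its definition; U(b,s) = exp((g(b)-g(s)) Delta A(lambda)),
    with the integral over [-b,b] approximated by the trapezoidal rule on m subintervals."""
    lam = complex(lam)
    A = np.array([[1.0, 1.0], [-1.0 / lam, -1.0]], dtype=complex)
    B = np.array([[0.0, 1.0], [1.0 / lam, 0.0]], dtype=complex)
    s = np.linspace(-b, b, m + 1)
    vals = np.array([gprime(si) * expm((g(b) - g(si)) * Delta * A) for si in s])  # U(b,s) g'(s)
    integral = (s[1] - s[0]) * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
    return np.eye(2) - B @ (expm((g(b) - g(-b)) * Delta * A) + integral)          # U(b,-b) + integral

def P(Delta, lam, m=1000):
    return np.linalg.det(H(Delta, lam, m))

print(abs(P(2.0, 1.0)))   # ~ 0 (up to the quadrature error): 1 is a Floquet multiplier
```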

For \(\Delta \in {\mathbb {R}}\) and \(\lambda \in \rho _{\Delta }\setminus \{0\}\) we now compute the resolvent \(({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}:{{\mathcal {Y}}}\rightarrow {{\mathcal {Y}}}\). Let \(\eta \in {{\mathcal {Y}}}\) be given, set \(\chi =({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}\eta \). Then \(\chi =\frac{1}{\lambda }({{\mathcal {M}}}_{\Delta }\chi -\eta )=\frac{1}{\lambda }(v_4-\eta )\) with \(v=v^{\Delta ,\chi }\). Or, \(\chi =L(\lambda ,v_4,\eta )\) with the continuous map

$$\begin{aligned} L:({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\times {{\mathcal {Y}}}\rightarrow {{\mathcal {Y}}} \end{aligned}$$

given by \(L(\lambda ,\phi ,\psi )=\frac{1}{\lambda }(\phi -\psi )\). Notice that each map \(L(\lambda ,\cdot ,\cdot ):{{\mathcal {Y}}}\times {{\mathcal {Y}}}\rightarrow {{\mathcal {Y}}}\), \(0\ne \lambda \in {\mathbb {C}}\), is linear. The argument \(v_4\) in \(L(\lambda ,v_4,\eta )\) is given by Eq. (3.3) where u is the first component of the map

$$\begin{aligned} y=\left( \begin{array}{c} u\\ w\end{array}\right) \in C_0^1([-b,b],{\mathbb {C}}^2) \end{aligned}$$

defined by Eq. (3.4), with \(c\in {\mathbb {C}}^2\) satisfying \(H(\Delta ,\lambda )c=E(\Delta ,\lambda ,\eta )\) where the map

$$\begin{aligned} E:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\rightarrow {\mathbb {C}}^2 \end{aligned}$$

is given by the right hand side of Eq. (3.5). As \(0\ne \lambda \in \rho _{\Delta }\) we have \(0\ne P(\Delta ,\lambda )=\text {det}\,H(\Delta ,\lambda )\), and obtain \(c=H(\Delta ,\lambda )^{-1}E(\Delta ,\lambda ,\eta )\).

In order to collect the result of the previous discussion in a formula for the resolvents consider the continuous linear map

$$\begin{aligned} V:C_0^1([-b,b],{\mathbb {C}})\rightarrow {{\mathcal {Y}}} \end{aligned}$$

which is given by the right hand side of Eq. (3.3), the projection

$$\begin{aligned} p_1: C_0^1([-b,b],{\mathbb {C}}^2)\rightarrow C_0^1([-b,b],{\mathbb {C}}) \end{aligned}$$

onto the first component, and the map

$$\begin{aligned} S:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\times {\mathbb {C}}^2\ni (\Delta ,\lambda ,\eta ,c)\mapsto y\in C_0^1([-b,b],{\mathbb {C}}^2) \end{aligned}$$

which is given by Eq. (3.4). Each map

$$\begin{aligned} S(\Delta ,\lambda ,\cdot ,\cdot ):{{\mathcal {Y}}}\times {\mathbb {C}}^2\rightarrow C_0^1([-b,b],{\mathbb {C}}^2), \end{aligned}$$

for \(\Delta \in {\mathbb {R}}\) and \(0\ne \lambda \in {\mathbb {C}}\), is linear.

Corollary 3.3

For \(\Delta \in {\mathbb {R}}\) and \(0\ne \lambda \in \rho _{\Delta }\) and \(\eta \in {{\mathcal {Y}}}\),

$$\begin{aligned} ({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}\eta = L\Big (\lambda ,V\Big (p_1S(\Delta ,\lambda ,\eta ,H(\Delta ,\lambda )^{-1}E(\Delta ,\lambda ,\eta ))\Big ),\eta \Big ). \end{aligned}$$

4 Continuity Properties

We collect some results on continuity. For the maps \(L,V,p_1\) continuity is obvious or easily seen.

Proposition 4.1

The maps \(S,H,P,E\) are continuous.

Proof

  1.

    The first component of the map Z is constant. The second component of Z is given by

    $$\begin{aligned} \frac{1}{\lambda }ev_{*}(\beta \alpha \eta ,t)+\frac{\Delta }{\lambda }ev(\eta ,t-1) \end{aligned}$$

    with the continuous linear maps

    $$\begin{aligned} \alpha :{{\mathcal {C}}}\rightarrow & {} C([-b,0],{\mathbb {C}}),\quad \alpha \eta (t)=\eta (0)-\eta (t)\quad \text {on}\quad [-b,0],\\ \beta : C([-b,0],{\mathbb {C}})\rightarrow & {} C([-b,b],{\mathbb {C}}),\quad \beta \phi (t)=\phi (t)\quad \text {on}\quad [-b,0]\\&\text {and}\quad \beta \phi (t)=\phi (0)\quad \text {on}\quad [0,b],\\ ev_{*}: C([-b,b],{\mathbb {C}})\times [-b,b]\rightarrow & {} {\mathbb {C}},\quad ev_{*}(\chi ,s)=\chi (s),\\ ev:{{\mathcal {C}}}\times [-2,0]\rightarrow & {} {\mathbb {C}},\quad ev(\phi ,s)=\phi (s). \end{aligned}$$
  2.

    On the map S. As \(g'\), A, and Z are continuous the solutions of the initial value problems

    $$\begin{aligned} r'(t)= & {} g'(t)\Big [\Delta A(\lambda )r(t)+c+Z(\Delta ,\lambda ,\eta ,t)\Big ],\\ r(-b)= & {} {\hat{c}}\in {\mathbb {C}}^2, \end{aligned}$$

    depend continuously on

    $$\begin{aligned} (\Delta ,\lambda ,\eta ,c,{\hat{c}},t)\in {\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\times {\mathbb {C}}^2\times {\mathbb {C}}^2\times [-b,b]. \end{aligned}$$

    It follows that the map

    $$\begin{aligned} {\hat{S}}:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\times {\mathbb {C}}^2\times [-b,b]\ni (\Delta ,\lambda ,\eta ,c,t)\mapsto S(\Delta ,\lambda ,\eta ,c)(t)\in {\mathbb {C}}^2 \end{aligned}$$

    is continuous. Using this and the differential equation above we infer that the map

    $$\begin{aligned} \partial _5{\hat{S}}:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\times {\mathbb {C}}^2\times [-b,b]\rightarrow {\mathbb {C}}^2, \end{aligned}$$
    $$\begin{aligned} \partial _5{\hat{S}}(\Delta ,\lambda ,\eta ,c,t)=S(\Delta ,\lambda ,\eta ,c)'(t), \end{aligned}$$

    is continuous. A compactness argument yields that for both maps \({\hat{S}}\) and \(\partial _5{\hat{S}}\) continuity is uniform with respect to \(t\in [-b,b]\), and the continuity of S (with respect to the norm on \(C^1([-b,b],{\mathbb {C}}^2)\)) follows.

  3.

    Continuity of H and P. Let \(e_1\in {\mathbb {C}}^2\) and \(e_2\in {\mathbb {C}}^2\) be given by the first and second column of the unit matrix \(I\in {\mathbb {C}}^{2\times 2}\), respectively. Consider the solutions of the initial value problems

    $$\begin{aligned} r'(t)= & {} g'(t)\Big [\Delta A(\lambda )r(t)+e_j\Big ],\\ r(-b)= & {} 0, \end{aligned}$$

    for \(j\in \{1,2\}\). Their values at \(t=b\) are given by the maps

    $$\begin{aligned} {\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\ni (\Delta ,\lambda )\mapsto \int _{-b}^bU(\Delta ,\lambda ,b,s)g'(s)e_jds\in {\mathbb {C}}^2,\quad j\in \{1,2\}. \end{aligned}$$

    Due to continuous dependence on parameters both maps are continuous, and it follows that the matrix-valued map

    $$\begin{aligned} {\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\ni (\Delta ,\lambda )\mapsto \int _{-b}^bU(\Delta ,\lambda ,b,s)g'(s)Ids\in {\mathbb {C}}^{2\times 2} \end{aligned}$$

    is continuous. Combining this with the continuity of the map B we infer that \(H:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\rightarrow {\mathbb {C}}^{2\times 2}\) is continuous, and that \(P=\text {det}\circ H\) is continuous.

  4.

    Continuity of E. Continuous dependence of solutions of the initial value problem

    $$\begin{aligned} r'(t)= & {} g'(t)\Big [\Delta A(\lambda )r(t)+Z(\Delta ,\lambda ,\eta ,t)\Big ],\\ r(-b)= & {} 0, \end{aligned}$$

    on parameters yields that the map

    $$\begin{aligned} {\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\ni (\Delta ,\lambda ,\eta )\mapsto \int _{-b}^bU(\Delta ,\lambda ,b,s)g'(s)Z(\Delta ,\lambda ,\eta ,s)ds\in {\mathbb {C}}^2 \end{aligned}$$

    is continuous. Use this and the continuity of B and N in order to complete the proof that E is continuous. \(\square \)

Proposition 4.2

The map

$$\begin{aligned} \{(\Delta ,\lambda ,\eta )\in {\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}:\lambda \in \rho _{\Delta }\}\ni (\Delta ,\lambda ,\eta )\mapsto P(\Delta ,\lambda )\cdot ({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}\eta \in {{\mathcal {Y}}} \end{aligned}$$

has a continuous extension to \({\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\).

Proof

  1.

    Consider the map \({\hat{H}}:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\rightarrow {\mathbb {C}}^{2\times 2}\) given by

    $$\begin{aligned} {\hat{H}}(\Delta ,\lambda )=\left( \begin{array}{cr}H_{22} &{} -H_{12}\\ -H_{21} &{} H_{11}\end{array}\right) \end{aligned}$$

    with the entries \(H_{jk}=H_{jk}(\Delta ,\lambda )\) of \(H(\Delta ,\lambda )\). For all \(\Delta \in {\mathbb {R}}\) and all \(\lambda \in \rho _{\Delta }\setminus \{0\}\) we have \(P(\Delta ,\lambda )\ne 0\) and \(H(\Delta ,\lambda )^{-1}=\frac{1}{P(\Delta ,\lambda )}{\hat{H}}(\Delta ,\lambda )\). The continuity of H (Proposition 4.1) yields that \({\hat{H}}\) is continuous. Using the continuity of \(L,V,p_1,S,{\hat{H}},E,P\) we infer that the map

    $$\begin{aligned} R_{*}:{\mathbb {R}}\times ({\mathbb {C}}\setminus \{0\})\times {{\mathcal {Y}}}\rightarrow {{\mathcal {Y}}} \end{aligned}$$

    given by

    $$\begin{aligned} R_{*}(\Delta ,\lambda ,\eta )= L\Big (\lambda ,V\Big (p_1S(\Delta ,\lambda ,P(\Delta ,\lambda )\eta ,{\hat{H}}(\Delta ,\lambda )E(\Delta ,\lambda ,\eta ))\Big ),P(\Delta ,\lambda )\eta \Big ) \end{aligned}$$

    is continuous.

  2.

    Let \(\Delta \in {\mathbb {R}}\) and \(\lambda \in \rho _{\Delta }\setminus \{0\}\) be given. From Corollary 3.3 in combination with the equation \(P(\Delta ,\lambda )H(\Delta ,\lambda )^{-1}={\hat{H}}(\Delta ,\lambda )\) and with the linearity of the maps \(L(\lambda ,\cdot ,\cdot ):{{\mathcal {Y}}}\times {{\mathcal {Y}}}\rightarrow {{\mathcal {Y}}} \) and \(p_1\), V, and \(S(\Delta ,\lambda ,\cdot ,\cdot ):{{\mathcal {Y}}}\times {\mathbb {C}}^2\rightarrow C_0^1([-b,b],{\mathbb {C}}^2)\), we obtain that for every \(\eta \in {{\mathcal {Y}}}\) we have

$$\begin{aligned} P(\Delta ,\lambda )\cdot ({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}\eta =R_{*}(\Delta ,\lambda ,\eta ). \end{aligned}$$

\(\square \)

5 Analyticity, Order of Zeros and Poles

We begin with the computation of \(H(\Delta ,\lambda )\) for \(\Delta \in {\mathbb {R}}\setminus \{0\}\) and \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\). Then \(\text {det}\, A(\lambda )=\frac{1}{\lambda }-1\ne 0\), and \(A(\lambda )\) is invertible. We have

$$\begin{aligned} H(\Delta ,\lambda )=I-B(\lambda )\left( U(\Delta ,\lambda ,b,-b)+\int _{-b}^bU(\Delta ,\lambda ,b,s)g'(s)Ids\right) \end{aligned}$$

with

$$\begin{aligned} U(\Delta ,\lambda ,b,-b)=e^{(g(b)-g(-b))\Delta A(\lambda )}=e^{-2\Delta A(\lambda )} \end{aligned}$$

and

$$\begin{aligned} \int _{-b}^bU(\Delta ,\lambda ,b,s)g'(s)Ids= & {} \int _{-b}^b e^{(g(b)-g(s))\Delta A(\lambda )}g'(s)A(\lambda )A(\lambda )^{-1}ds\\= & {} e^{g(b)\Delta A(\lambda )}\left( -\frac{1}{\Delta } \left[ e^{-g(b)\Delta A(\lambda )}-e^{-g(-b)\Delta A(\lambda )}\right] A(\lambda )^{-1}\right) \\= & {} \frac{1}{\Delta }(e^{-2\Delta A(\lambda )}-I)A(\lambda )^{-1}. \end{aligned}$$

In order to compute the exponential term \(e^{-2\Delta A(\lambda )}\) observe that the characteristic equation of \(A(\lambda )\) is \(z^2=1-\frac{1}{\lambda }\). Any square root \(z=z(\lambda )\) of \(1-\frac{1}{\lambda }\) is an eigenvalue of \(A(\lambda )\) with associated eigenvector

$$\begin{aligned} a=a(z)=\left( \begin{array}{c}\frac{1}{z-1}\\ 1\end{array}\right) , \end{aligned}$$

and \(-z\) is an eigenvalue with associated eigenvector

$$\begin{aligned} b=b(z)=\left( \begin{array}{c}\frac{-1}{z+1}\\ 1\end{array}\right) . \end{aligned}$$

Using \(A(\lambda )=(a\,b)\left( \begin{array}{cr}z &{} 0\\ 0 &{} -z\end{array}\right) (a\,b)^{-1}\) we obtain

$$\begin{aligned} e^{-2\Delta \,A(\lambda )}=\sum _{n=0}^{\infty }\frac{(-2\Delta )^n}{n!}A(\lambda )^n=(a\,b)\left( \begin{array}{cr}e^{-2\Delta \,z} &{} 0\\ 0 &{} e^{2\Delta \,z}\end{array}\right) (a\,b)^{-1}. \end{aligned}$$
(5.1)

It follows that

$$\begin{aligned} H(\Delta ,\lambda )= & {} \left( \begin{array}{ll}1 &{}\quad 0\\ 0 &{}\quad 1\end{array}\right) -\left( \begin{array}{ll}0&{}\quad 1\\ \frac{1}{\lambda } &{}\quad 0\end{array}\right) \left[ (a\,b)\left( \begin{array}{ll}e^{-2\Delta \,z} &{}\quad 0\\ 0 &{}\quad e^{2\Delta \,z}\end{array}\right) (a\,b)^{-1}\right. \nonumber \\&\left. +\,\frac{1}{\Delta }\left[ (a\,b)\left( \begin{array}{ll}e^{-2\Delta \,z} &{}\quad 0\\ 0 &{}\quad e^{2\Delta \,z}\end{array}\right) (a\,b)^{-1}-\left( \begin{array}{ll}1 &{}\quad 0\\ 0 &{}\quad 1\end{array}\right) \right] A(\lambda )^{-1}\right] . \end{aligned}$$
(5.2)

Corollary 5.1

Each map \(P(\Delta ,\cdot ):{\mathbb {C}}\setminus \{0\}\rightarrow {\mathbb {C}}\), \(0\ne \Delta \in {\mathbb {R}}\), is analytic.

Proof

Let \(\Delta \in {\mathbb {R}}\setminus \{0\}\) be given, and let \(\lambda _0\in {\mathbb {C}}\setminus \{0,1\}\) be given. Then \(1-\frac{1}{\lambda _0}\ne 0\). Choose a square root of \(1-\frac{1}{\lambda _0}\). An application of the Implicit Function Theorem for analytic maps [12, Corollary 4.23] yields an open disk \(D\subset {\mathbb {C}}\setminus \{0,1\} \) centered at \(\lambda _0\) and an analytic function \(\zeta :D\rightarrow {\mathbb {C}}\) with \(\zeta (\lambda )^2=1-\frac{1}{\lambda }\) on D. The preceding calculations with \(z(\lambda )=\zeta (\lambda )\) show that the restriction of \(P(\Delta ,\cdot )=\text {det}\,H(\Delta ,\cdot )\) to D is analytic. It follows that the restriction of \(P(\Delta ,\cdot )\) to \({\mathbb {C}}\setminus \{0,1\}\) is analytic. As \(P(\Delta ,\cdot )\) is continuous (Proposition 4.1), \(\lambda =1\) is a removable singularity, and \(P(\Delta ,\cdot )\) is analytic. \(\square \)

Corollary 5.2

For every \(\Delta \in {\mathbb {R}}\setminus \{0\}\) and for every \(\lambda \in \sigma _{\Delta }\setminus \{0\}\) the order \(j(\lambda )\in {\mathbb {N}}\) of \(\lambda \) as a pole of the resolvent

$$\begin{aligned} \rho _{\Delta }\ni \mu \mapsto ({{\mathcal {M}}}_{\Delta }-\mu )^{-1}\in L_c({{\mathcal {Y}}},{{\mathcal {Y}}}) \end{aligned}$$

is majorized by the order \(o(\lambda )\in {\mathbb {N}}\) of \(\lambda \) as a zero of \(P(\Delta ,\cdot )\).

Proof

Let \(\Delta \in {\mathbb {R}}\setminus \{0\}\) and \(\lambda \in \sigma _{\Delta }\setminus \{0\}\) be given. Proposition 4.2 yields an \(\epsilon >0\) and a bound \(b\ge 0\) with

$$\begin{aligned} |P(\Delta ,\mu )({{\mathcal {M}}}_{\Delta }-\mu )^{-1}\eta |_1\le b \end{aligned}$$

for \(0<|\mu -\lambda |<\epsilon \) and \(|\eta |_1\le \epsilon \). It follows that \(P(\Delta ,\mu )({{\mathcal {M}}}_{\Delta }-\mu )^{-1}\in L_c({{\mathcal {Y}}},{{\mathcal {Y}}})\) is bounded by \(b/\epsilon \) for \(0<|\mu -\lambda |<\epsilon \). Use the power series for \(P(\Delta ,\cdot )\) at \(\mu =\lambda \) and the Laurent series for the resolvent at \(\mu =\lambda \) in order to obtain

$$\begin{aligned} P(\Delta ,\mu )({{\mathcal {M}}}_{\Delta }-\mu )^{-1}=(\mu -\lambda )^{o(\lambda )-j(\lambda )}L+h(\mu ) \end{aligned}$$

for \(0<|\mu -\lambda |<\epsilon \), with \(0\ne L\in L_c({{\mathcal {Y}}},{{\mathcal {Y}}})\) and \(h:\{\mu \in {\mathbb {C}}:|\mu -\lambda |<\epsilon \}\rightarrow L_c({{\mathcal {Y}}},{{\mathcal {Y}}})\) analytic. This representation in combination with the previous statement on boundedness implies \(o(\lambda )-j(\lambda )\ge 0\). \(\square \)

6 The Characteristic Function in Terms of Elementary Functions

In this section it is convenient to use the following abbreviations, for \(0\ne \Delta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\) given: \(z=z(\lambda )\) is a square root of \(1-\frac{1}{\lambda }\), \(a=a(z)\) and \(b=b(z)\) are eigenvectors as in Sect. 5, and

$$\begin{aligned} Ch=\text {cosh}(2\Delta \,z),\quad Sh=\text {sinh}(2\Delta \,z),\quad \alpha =\frac{Sh}{z}-\frac{Ch-1}{\Delta z^2}. \end{aligned}$$

Notice that \(z\notin \{-1,0,1\}\). As cosh is even and sinh is odd the values Ch, \(z\,Sh\), \(\frac{Sh}{z}\), and \(\alpha \) do not depend on the choice of the square root z.

Proposition 6.1

For \(0\ne \Delta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\),

$$\begin{aligned} e^{-2\Delta \,A(\lambda )}=\left( \begin{array}{ll} Ch-\frac{Sh}{z} &{}\quad -\frac{Sh}{z} \\[3pt] \frac{Sh}{\lambda z} &{} \quad Ch+\frac{Sh}{z} \end{array}\right) . \end{aligned}$$

Proof

Let \(0\ne \Delta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\) be given. Recall Eq. (5.1) for \(e^{-2\Delta \,A(\lambda )}\). We have

$$\begin{aligned} (a\,b)\left( \begin{array}{ll}e^{-2\Delta \,z} &{}\quad 0\\ 0 &{}\quad e^{2\Delta \,z}\end{array}\right) = \left( \begin{array}{ll} \frac{e^{-2\Delta \,z}}{z-1} &{}\quad \frac{e^{2\Delta \,z}}{-z-1} \\[3pt] e^{-2\Delta \,z} &{}\quad e^{2\Delta \,z} \end{array}\right) \end{aligned}$$

and

$$\begin{aligned} (a\,b)^{-1}=\frac{z^2-1}{2z}\left( \begin{array}{ll} 1 &{}\quad \frac{1}{z+1} \\[3pt] -1 &{}\quad \frac{1}{z-1}\end{array}\right) , \end{aligned}$$

hence

$$\begin{aligned} e^{-2\Delta \,A(\lambda )}= & {} \frac{z^2-1}{2z} \left( \begin{array}{ll} \frac{e^{-2\Delta \,z}}{z-1} &{}\quad \frac{e^{2\Delta \,z}}{-z-1} \\[3pt] e^{-2\Delta \,z} &{} e^{2\Delta \,z} \end{array}\right) \left( \begin{array}{ll} 1 &{} \frac{1}{z+1} \\[3pt] -1 &{}\quad \frac{1}{z-1}\end{array}\right) \\= & {} \frac{z^2-1}{2z} \left( \begin{array}{ll} \frac{e^{2\Delta \,z}}{z+1}+\frac{e^{-2\Delta \,z}}{z-1} &{}\quad \frac{e^{-2\Delta \,z}-e^{2\Delta \,z}}{z^2-1} \\[3pt] e^{-2\Delta \,z}-e^{2\Delta \,z} &{} \quad \frac{e^{-2\Delta \,z}}{z+1}+\frac{e^{2\Delta \,z}}{z-1}\end{array}\right) \\= & {} \left( \begin{array}{ll} Ch-\frac{Sh}{z} &{}\quad -\frac{Sh}{z} \\[3pt] \frac{Sh}{\lambda \,z} &{}\quad Ch+\frac{Sh}{z} \end{array}\right) \\&\left( \text {with}\quad z^2=1-\frac{1}{\lambda }\right) . \end{aligned}$$

\(\square \)

Proposition 6.2

For \(0\ne \Delta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\),

$$\begin{aligned} -H(\Delta ,\lambda )=\left( \begin{array}{ll} \frac{\alpha }{\lambda }-1 &{}\quad \left( Ch-\frac{Sh}{\Delta z}\right) +\alpha \\[3pt] \frac{1}{\lambda }\left( Ch-\frac{Sh}{\Delta z}\right) -\frac{\alpha }{\lambda } &{}\quad -1-\frac{\alpha }{\lambda }\end{array}\right) . \end{aligned}$$

Proof

Recall Eq. (5.2) for \(H(\Delta ,\lambda )\). We have

$$\begin{aligned} A(\lambda )^{-1}=\frac{\lambda }{1-\lambda }\left( \begin{array}{ll} -1 &{}\quad -1 \\ \frac{1}{\lambda } &{}\quad 1 \end{array}\right) . \end{aligned}$$

Using this and Proposition 6.1 and \(z^2=1-\frac{1}{\lambda }=\frac{\lambda -1}{\lambda }\) we get

$$\begin{aligned} \frac{1}{\Delta }(e^{-2\Delta \,A(\lambda )}-I)A(\lambda )^{-1}= & {} -\frac{1}{\Delta \,z^2} \left( \begin{array}{ll} Ch-\frac{Sh}{z}-1 &{}\quad -\frac{Sh}{z} \\[3pt] \frac{Sh}{\lambda \,z} &{}\quad Ch+\frac{Sh}{z}-1\end{array}\right) \left( \begin{array}{ll} -1 &{}\quad -1 \\[3pt] \frac{1}{\lambda } &{}\quad 1 \end{array}\right) \\= & {} -\frac{1}{\Delta \,z^2} \left( \begin{array}{ll} 1+\frac{Sh}{z}-Ch-\frac{Sh}{\lambda \,z} &{}\quad 1+\frac{Sh}{z}-Ch-\frac{Sh}{z} \\[3pt] -\frac{Sh}{\lambda \,z}+\frac{Ch}{\lambda }+\frac{Sh}{\lambda \,z}-\frac{1}{\lambda } &{}\quad -\frac{Sh}{\lambda \,z}+Ch+\frac{Sh}{z}-1 \end{array}\right) \\= & {} \frac{1}{\Delta } \left( \begin{array}{ll} \frac{Ch-1}{z^2}-\frac{Sh}{z} &{}\quad \frac{Ch-1}{z^2} \\[3pt] -\frac{Ch-1}{\lambda \,z^2} &{}\quad -\frac{Ch-1}{z^2}-\frac{Sh}{z} \end{array}\right) \\ \end{aligned}$$

and therefore, with Proposition 6.1,

$$\begin{aligned} -H(\Delta ,\lambda )= & {} B(\lambda )\left[ e^{-2\Delta \,A(\lambda )}+\frac{1}{\Delta }(e^{-2\Delta \,A(\lambda )}-I)A(\lambda )^{-1}\right] -I\\= & {} B(\lambda )\,\left( \begin{array}{ll} Ch-\frac{Sh}{z}+\frac{Ch-1}{\Delta \,z^2}-\frac{Sh}{\Delta \,z} &{}\quad -\frac{Sh}{z}+\frac{Ch-1}{\Delta \,z^2} \\[3pt] \frac{Sh}{\lambda \,z}-\frac{Ch-1}{\Delta \,\lambda \,z^2} &{}\quad Ch+\frac{Sh}{z}-\frac{Ch-1}{\Delta \,z^2}-\frac{Sh}{\Delta \,z} \end{array}\right) -I\\= & {} \left( \begin{array}{ll} \frac{Sh}{\lambda \,z}-\frac{Ch-1}{\Delta \,\lambda \,z^2}-1 &{} \quad Ch+\frac{Sh}{z}-\frac{Ch-1}{\Delta \,z^2}-\frac{Sh}{\Delta \,z} \\[3pt] \frac{Ch}{\lambda }-\frac{Sh}{\lambda \,z}+\frac{Ch-1}{\Delta \,\lambda \,z^2} -\frac{Sh}{\lambda \,\Delta \,z} &{}\quad -\frac{Sh}{\lambda \,z}+ \frac{Ch-1}{\Delta \,\lambda \,z^2}-1 \end{array}\right) \\= & {} \left( \begin{array}{ll} \frac{\alpha }{\lambda }-1 &{}\quad \left( Ch-\frac{Sh}{\Delta \,z}\right) +\alpha \\[3pt] \frac{1}{\lambda }\left( Ch-\frac{Sh}{\Delta \,z}\right) -\frac{\alpha }{\lambda } &{}\quad -1 -\frac{\alpha }{\lambda } \end{array}\right) \end{aligned}$$

\(\square \)
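For a quick numerical cross-check of Proposition 6.2 against the matrix-exponential representation (5.2), the following Python sketch may be used. It is only an illustration: \(A(\lambda )\) is taken as the inverse of the matrix \(A(\lambda )^{-1}\) displayed in the proof above, \(B(\lambda )\) is read off from (5.2), and the sample point \((\Delta ,\lambda )\) is an arbitrary choice.

```python
# Sketch: Proposition 6.2 versus formula (5.2), checked numerically at one point.
import numpy as np
from scipy.linalg import expm

def A(lam):
    # inverse of the matrix A(lambda)^{-1} displayed in the proof of Proposition 6.2
    return np.array([[1.0, 1.0], [-1.0 / lam, -1.0]], dtype=complex)

def B(lam):
    # the matrix multiplying the bracket in (5.2)
    return np.array([[0.0, 1.0], [1.0 / lam, 0.0]], dtype=complex)

def minus_H_expm(delta, lam):
    # -H = B(lambda) [ e^{-2 Delta A} + (1/Delta)(e^{-2 Delta A} - I) A^{-1} ] - I
    E = expm(-2.0 * delta * A(lam))
    return B(lam) @ (E + (E - np.eye(2)) @ np.linalg.inv(A(lam)) / delta) - np.eye(2)

def minus_H_closed(delta, lam):
    # the closed form of Proposition 6.2
    z = np.sqrt(1.0 - 1.0 / lam + 0j)          # either square root gives the same matrix
    Ch, Sh = np.cosh(2 * delta * z), np.sinh(2 * delta * z)
    alpha = Sh / z - (Ch - 1.0) / (delta * z**2)
    X = Ch - Sh / (delta * z)
    return np.array([[alpha / lam - 1.0, X + alpha],
                     [X / lam - alpha / lam, -1.0 - alpha / lam]])

delta, lam = 0.8, -0.4 + 0.7j
print(np.max(np.abs(minus_H_expm(delta, lam) - minus_H_closed(delta, lam))))  # machine-precision size
```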

Proposition 6.3

For \(0\ne \Delta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\),

$$\begin{aligned} P(\Delta ,\lambda )=\frac{p(\Delta ,\lambda )}{\Delta ^2(\lambda -1)} \end{aligned}$$

with

$$\begin{aligned} p(\Delta ,\lambda )=2(1-\mathrm{cosh}(2\Delta \,z))+\frac{\Delta ^2}{\lambda }(\lambda -1)^2+2\Delta \,z\,\mathrm{sinh}(2\Delta \,z), \end{aligned}$$

and \(z^2=1-\frac{1}{\lambda }\).

Proof

For \(0\ne \Delta \in {\mathbb {R}}\) and \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\),

$$\begin{aligned} P(\Delta ,\lambda )&= \text {det}(H(\Delta ,\lambda ))=\text {det}(-H(\Delta ,\lambda ))\\&(\text {with}\quad H(\Delta ,\lambda )\in {\mathbb {C}}^{2\times 2})\\&= 1-\frac{\alpha ^2}{\lambda ^2}-\frac{1}{\lambda }\left[ \left( Ch-\frac{Sh}{\Delta \,z}\right) ^2-\alpha ^2\right] \\&= 1+\alpha ^2\left( \frac{1}{\lambda }-\frac{1}{\lambda ^2}\right) -\frac{1}{\lambda }\left( Ch-\frac{Sh}{\Delta \,z}\right) ^2\\&\left( \text {now use}\quad \alpha ^2\left( \frac{1}{\lambda }-\frac{1}{\lambda ^2}\right) =\frac{\alpha ^2}{\lambda }z^2\quad \text {and the definition of}\quad \alpha \right) \\&= 1+\frac{z^2}{\lambda }\left( \frac{Sh}{z}-\frac{Ch-1}{\Delta \,z^2}\right) ^2-\frac{1}{\lambda }\left( Ch-\frac{Sh}{\Delta \,z}\right) ^2\\&= 1+\frac{1}{\lambda }\left\{ \left( Sh-\frac{Ch-1}{\Delta \,z}\right) ^2-\left( Ch-\frac{Sh}{\Delta \,z}\right) ^2\right\} \\&(\text {in the sequel use}\quad Ch^2-Sh^2=1)\\&= 1+\frac{1}{\lambda }\left\{ -1+\frac{(Ch-1)^2}{\Delta ^2\,z^2}-2\frac{Sh}{\Delta \,z}(Ch-1)+2\frac{Sh\,Ch}{\Delta \,z}-\frac{Sh^2}{\Delta ^2z^2}\right\} \\&= 1+\frac{1}{\lambda }\left\{ -1+\frac{(Ch-1)^2}{\Delta ^2\,z^2}+2\frac{Sh}{\Delta \,z}-\frac{Sh^2}{\Delta ^2z^2}\right\} \\&= 1+\frac{1}{\lambda }\left\{ -1+2\frac{Sh}{\Delta \,z}+\frac{1}{\Delta ^2\,z^2}-2\frac{Ch}{\Delta ^2\,z^2}+\frac{1}{\Delta ^2\,z^2}\right\} \\&= 1+\frac{1}{\lambda \Delta ^2z^2}\Big \{2-\Delta ^2z^2+2\Big [\Delta \,z\,Sh-Ch\Big ]\Big \} \\&(\text {now use}\quad \lambda \,z^2=\lambda -1)\\&= \frac{1}{\Delta ^2(\lambda -1)}\left\{ \Delta ^2(\lambda -1)+2-\Delta ^2\left( 1-\frac{1}{\lambda }\right) +2\Big [\Delta \,z\,Sh-Ch\Big ]\right\} \\&= \frac{1}{\Delta ^2(\lambda -1)}\left\{ \Delta ^2\frac{(\lambda -1)^2}{\lambda }+2\Big [\Delta \,z\,Sh+1-Ch\Big ]\right\} . \end{aligned}$$

\(\square \)
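Similarly, the determinant identity of Proposition 6.3 can be checked at a sample point; a minimal sketch, reusing the closed form of Proposition 6.2:

```python
# Sketch: det H(Delta, lambda) versus p(Delta, lambda) / (Delta^2 (lambda - 1)).
import numpy as np

def P_and_p(delta, lam):
    z = np.sqrt(1.0 - 1.0 / lam + 0j)
    Ch, Sh = np.cosh(2 * delta * z), np.sinh(2 * delta * z)
    alpha = Sh / z - (Ch - 1.0) / (delta * z**2)
    X = Ch - Sh / (delta * z)
    minus_H = np.array([[alpha / lam - 1.0, X + alpha],
                        [X / lam - alpha / lam, -1.0 - alpha / lam]])
    P = np.linalg.det(minus_H)                  # det(-H) = det(H) for 2x2 matrices
    p = 2 * (1 - Ch) + delta**2 * (lam - 1)**2 / lam + 2 * delta * z * Sh
    return P, p

delta, lam = 1.3, 0.25 + 0.5j                   # arbitrary sample point
P, p = P_and_p(delta, lam)
print(abs(P - p / (delta**2 * (lam - 1))))      # machine-precision size
```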

The power series \(\sum _2^{\infty }\frac{2(n-1)}{(2n)!}u^{n-2}\) defines an analytic function \(R:{\mathbb {C}}\rightarrow {\mathbb {C}}\).

Proposition 6.4

For \(0\ne \Delta \in {\mathbb {R}}\), \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\), and \(u=4\Delta ^2\left( 1-\frac{1}{\lambda }\right) \), we have \(4\Delta ^2-u\ne 0\) and

$$\begin{aligned} p(\Delta ,\lambda )=\frac{u^2}{4(4\Delta ^2-u)}+u^2R(u). \end{aligned}$$

Proof

Let \(0\ne \Delta \in {\mathbb {R}}\), \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\), and \(u=4\Delta ^2\left( 1-\frac{1}{\lambda }\right) \). Then \(4\Delta ^2-u=\frac{4\Delta ^2}{\lambda }\ne 0\). With a square root z of \(1-\frac{1}{\lambda }\) and \(w=2\Delta \,z\) we have \(u=w^2\) and

$$\begin{aligned} p(\Delta ,\lambda )-\frac{\Delta ^2}{\lambda }(\lambda -1)^2= & {} 2(1-\text {cosh}(w))+w\,\text {sinh}(w)\\= & {} 2\left( 1-\sum _0^{\infty }\frac{w^{2n}}{(2n)!}\right) +w\sum _0^{\infty }\frac{w^{2n+1}}{(2n+1)!}\\= & {} -\sum _1^{\infty }\frac{2\,w^{2n}}{(2n)!}+\sum _1^{\infty }\frac{2n\,w^{2n}}{(2n)!}=\sum _1^{\infty }\frac{2(n-1)\,w^{2n}}{(2n)!}\\= & {} \sum _2^{\infty }\frac{2(n-1)\,w^{2n}}{(2n)!}= \sum _2^{\infty }\frac{2(n-1)\,u^n}{(2n)!}=u^2R(u). \end{aligned}$$

From \(\lambda \,u=4\,\Delta ^2(\lambda -1)\) we get

$$\begin{aligned} \lambda =\frac{4\,\Delta ^2}{4\,\Delta ^2-u}, \end{aligned}$$

hence

$$\begin{aligned} \frac{\Delta ^2}{\lambda }(\lambda -1)^2=\frac{u}{4}(\lambda -1)=\frac{u}{4}\frac{4\,\Delta ^2-(4\,\Delta ^2-u)}{4\,\Delta ^2-u}=\frac{u^2}{4(4\Delta ^2-u)}. \end{aligned}$$

\(\square \)
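Proposition 6.4 also lends itself to a direct numerical check. In the following sketch \(R\) is implemented by truncating its power series; the sample point is chosen arbitrarily.

```python
# Sketch: R via a truncated power series, and the identity of Proposition 6.4.
import numpy as np
from math import factorial

def R(u, terms=60):
    # R(u) = sum_{n >= 2} 2 (n - 1) u^{n-2} / (2n)!
    return sum(2 * (n - 1) * u**(n - 2) / factorial(2 * n) for n in range(2, terms))

delta, lam = 0.9, 0.35 + 0.2j
u = 4 * delta**2 * (1 - 1 / lam)
z = np.sqrt(1 - 1 / lam + 0j)
p = 2 * (1 - np.cosh(2 * delta * z)) + delta**2 * (lam - 1)**2 / lam \
    + 2 * delta * z * np.sinh(2 * delta * z)
print(abs(p - (u**2 / (4 * (4 * delta**2 - u)) + u**2 * R(u))))   # machine-precision size
```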

Notice that for \(0\ne \Delta \in {\mathbb {R}}\) and \(\lambda \in (1,\infty )\) we have \(u=4\Delta ^2\left( 1-\frac{1}{\lambda }\right) >0\), \(4\Delta ^2-u>0\), and \(R(u)>0\), hence \(p(\Delta ,\lambda )>0\) by Proposition 6.4.

Corollary 6.5

For every \(\Delta \in {\mathbb {R}}\setminus \{0\}\) there are no eigenvalues of \({{\mathcal {M}}}_{\Delta }\) in \((1,\infty )\).

7 Bifurcation of a Negative Floquet Multiplier

Consider the function

$$\begin{aligned} Q:{\mathbb {R}}^2\ni (\Delta ,u)\mapsto 4(4\Delta ^2-u)R(u)+1\in {\mathbb {R}}. \end{aligned}$$

For \(0\ne \Delta \in {\mathbb {R}}\) and \(0\ne u\in {\mathbb {R}}\) with \(4\Delta ^2\ne u\) we have

$$\begin{aligned} Q(\Delta ,u)=0\quad \text {if and only if}\quad \frac{u^2}{4(4\Delta ^2-u)}+u^2R(u)=0. \end{aligned}$$

With Proposition 6.4 in mind we first look for zeros of the functions \(Q(\Delta ,\cdot ):{\mathbb {R}}\rightarrow {\mathbb {R}}\). Obviously,

$$\begin{aligned} Q(\Delta ,u)>0\quad \text {for}\quad 0<u<4\Delta ^2, \end{aligned}$$

and

$$\begin{aligned} \partial _2Q(\Delta ,u)=-4R(u)+4(4\Delta ^2-u)R'(u)<0 \quad \text {for}\quad 4\Delta ^2<u<\infty . \end{aligned}$$

Because of \(Q(\Delta ,u)=Q(-\Delta ,u)\) we restrict attention to \(\Delta >0\).

Proposition 7.1

  1. (i)

    For every \(\Delta >0\) there exists exactly one zero \(u\in (4\Delta ^2,\infty )\) of the function \(Q(\Delta ,\cdot ):{\mathbb {R}}\rightarrow {\mathbb {R}}\).

  2. (ii)

    The function \({{\mathcal {U}}}:(0,\infty )\rightarrow {\mathbb {R}}\) given by \(Q(\Delta ,{{\mathcal {U}}}(\Delta ))=0\) and \(4\Delta ^2<{{\mathcal {U}}}(\Delta )\) is analytic.

  3. (iii)

    \(0<{{\mathcal {U}}}'(\Delta )<8\,\Delta \) for all \(\Delta >0\).

  4. (iv)

    \(\lim _{\Delta \searrow 0}{{\mathcal {U}}}(\Delta )=u_{*}>0\) satisfies \(u_{*}R(u_{*})=\frac{1}{4}\).

  5. (v)

    We have

    $$\begin{aligned} 1<\frac{{{\mathcal {U}}}(\Delta )}{4\Delta ^2}\quad \text {for all}\quad \Delta >0\quad \text {and}\quad \lim _{\Delta \rightarrow \infty }\frac{{{\mathcal {U}}}(\Delta )}{4\Delta ^2}=1. \end{aligned}$$

Proof

  1. 1.

    On (i). Existence follows by continuity from \(\lim _{u\searrow 4\Delta ^2}Q(\Delta ,u)=1\) and \(\lim _{u\rightarrow \infty } Q(\Delta ,u)=-\infty \). Uniqueness is due to \(\partial _2Q(\Delta ,u)<0\) for \(4\Delta ^2<u<\infty \).

  2. 2.

    Analyticity. The map Q is analytic. Let \(\Delta _0>0\), \(u_0={{\mathcal {U}}}(\Delta _0)\in (4\Delta _0^2,\infty )\). Then \(Q(\Delta _0,u_0)=0\) and \(\partial _2Q(\Delta _0,u_0)\ne 0\). By the Implicit Function Theorem for analytic maps [12, Corollary 4.23], there are open neighbourhoods N of \(\Delta _0\) and V of \(u_0\) with \(4\,\Delta ^2<u\) on \(N\times V\), and an analytic function \(\widehat{{\mathcal {U}}}:N\rightarrow V\) with \(\widehat{{\mathcal {U}}}(\Delta _0)=u_0\) and

    $$\begin{aligned} \{(\Delta ,u)\in N\times V:Q(\Delta ,u)=0\}= \{(\Delta ,\widehat{{\mathcal {U}}}(\Delta )):\Delta \in N\} \end{aligned}$$

    Using this and Part (i) we get \({{\mathcal {U}}}(\Delta )= \widehat{{\mathcal {U}}}(\Delta )\) on N, and the analyticity of \({{\mathcal {U}}}\) follows.

  3. 3.

    On (iii). Differentiation of \(Q(\Delta ,{{\mathcal {U}}}(\Delta ))=0\) yields

    $$\begin{aligned} {{\mathcal {U}}}'(\Delta )=-\frac{\partial _1Q(\Delta ,{{\mathcal {U}}}(\Delta ))}{\partial _2Q(\Delta ,{{\mathcal {U}}}(\Delta ))} =-\frac{32\,\Delta \,R({{\mathcal {U}}}(\Delta ))}{\partial _2Q(\Delta ,{{\mathcal {U}}}(\Delta ))}>0 \quad \text {for all}\quad \Delta >0. \end{aligned}$$

    Using the definition of Q we infer

    $$\begin{aligned} 0=(8\,\Delta -{{\mathcal {U}}}'(\Delta ))R({{\mathcal {U}}}(\Delta ))+(4\,\Delta ^2-{{\mathcal {U}}}(\Delta ))R'({{\mathcal {U}}}(\Delta )){{\mathcal {U}}}'(\Delta )\quad \text {for all}\quad \Delta >0. \end{aligned}$$

    The terms \(R({{\mathcal {U}}}(\Delta )),R'({{\mathcal {U}}}(\Delta )),{{\mathcal {U}}}'(\Delta )\) are positive while \(4\,\Delta ^2-{{\mathcal {U}}}(\Delta )<0\). It follows that \(8\,\Delta -{{\mathcal {U}}}'(\Delta )>0\).

  4. 4.

    On (iv). Monotonicity according to Part (iii) and the lower bound \({{\mathcal {U}}}(\Delta )>4\,\Delta ^2>0\) yield the existence of \(\lim _{\Delta \searrow 0}{{\mathcal {U}}}(\Delta )=u_{*}\ge 0\). Passing to the limit in \(0=Q(\Delta ,{{\mathcal {U}}}(\Delta ))=4(4\,\Delta ^2-{{\mathcal {U}}}(\Delta ))R({{\mathcal {U}}}(\Delta ))+1\) gives \(u_{*}R(u_{*})=\frac{1}{4}\), and in particular \(u_{*}>0\).

  5. 5.

    On (v). The inequality holds by the definition of \({{\mathcal {U}}}\). In order to find the limit observe that the equation \(0=Q(\Delta ,{{\mathcal {U}}}(\Delta ))\) yields

    $$\begin{aligned} 0=1-\frac{{{\mathcal {U}}}(\Delta )}{4\,\Delta ^2}+\frac{1}{16\,\Delta ^2R({{\mathcal {U}}}(\Delta ))}\quad \text {for all}\quad \Delta >0, \end{aligned}$$

    with \(R({{\mathcal {U}}}(\Delta ))\ge R(0)>0\), so the last term vanishes as \(\Delta \rightarrow \infty \). \(\square \)
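For illustration, \({{\mathcal {U}}}(\Delta )\) is easy to compute numerically: \(Q(\Delta ,\cdot )\) equals 1 at \(u=4\Delta ^2\) and is strictly decreasing on \((4\Delta ^2,\infty )\), so bisection applies. A minimal sketch, with \(R\) truncated as before and sample values of \(\Delta \) chosen arbitrarily:

```python
# Sketch: bisection for U(Delta), the unique zero of Q(Delta, .) in (4 Delta^2, infinity).
from math import factorial

def R(u, terms=60):
    return sum(2 * (n - 1) * u**(n - 2) / factorial(2 * n) for n in range(2, terms))

def Q(delta, u):
    return 4 * (4 * delta**2 - u) * R(u) + 1

def U(delta, tol=1e-12):
    lo, hi = 4 * delta**2, 4 * delta**2 + 1.0
    while Q(delta, hi) > 0:            # Q(Delta, u) tends to -infinity as u grows
        hi += 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(delta, mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for delta in (0.05, 0.5, 1.0, 2.0):
    u = U(delta)
    print(delta, u, u / (4 * delta**2))  # the ratio exceeds 1 and approaches 1 as Delta grows
```

In accordance with Part (iv), the values \({{\mathcal {U}}}(\Delta )\) obtained for small \(\Delta \) approach the solution \(u_{*}\) of \(u_{*}R(u_{*})=\frac{1}{4}\).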

Theorem 7.2

Each operator \({{\mathcal {M}}}_{\Delta }\), \(\Delta >0\), has exactly one eigenvalue \(\lambda =\lambda _{\Delta }\) in \((-\infty ,0)\). The function \(\Lambda :(0,\infty )\rightarrow (-\infty ,0)\) given by \(\Lambda (\Delta )=\lambda _{\Delta }\) is analytic, with \(\Lambda '(\Delta )<0\) for all \(\Delta >0\) and

$$\begin{aligned} \lim _{\Delta \searrow 0}\Lambda (\Delta )=0\quad \text {and}\quad \lim _{\Delta \rightarrow \infty }\Lambda (\Delta )=-\infty . \end{aligned}$$

Proof

  1. 1.

    (Uniqueness) Suppose \(\lambda <0\) is an eigenvalue of \({{\mathcal {M}}}_{\Delta }\) for some \(\Delta >0\). Apply Propositions 3.2, 6.3, and 6.4. It follows that \(u=4\,\Delta ^2\left( 1-\frac{1}{\lambda }\right) \) satisfies \(u>4\,\Delta ^2\) and \(0=p(\Delta ,\lambda )=u^2R(u)+\frac{u^2}{4(4\,\Delta ^2-u)}\), hence \(Q(\Delta ,u)=0\), and thereby \(u={{\mathcal {U}}}(\Delta )\). We obtain \(\frac{4\,\Delta ^2}{\lambda }=4\,\Delta ^2-{{\mathcal {U}}}(\Delta )\), or

    $$\begin{aligned} \lambda =\frac{4\,\Delta ^2}{4\,\Delta ^2-{{\mathcal {U}}}(\Delta )}. \end{aligned}$$
  2. 2.

    The last equation defines an analytic function \(\Lambda :(0,\infty )\ni \Delta \mapsto \lambda \in (-\infty ,0)\). We show that given \(\Delta >0\) the value \(\lambda =\Lambda (\Delta )\) is an eigenvalue of \({{\mathcal {M}}}_{\Delta }\): Indeed, \(u={{\mathcal {U}}}(\Delta )\) satisfies \(u={{\mathcal {U}}}(\Delta )=4\,\Delta ^2-\frac{4\,\Delta ^2}{\Lambda (\Delta )}=4\,\Delta ^2\left( 1-\frac{1}{\lambda }\right) \). Using Proposition 6.4 and \(0=Q(\Delta ,{{\mathcal {U}}}(\Delta ))=Q(\Delta ,u)\) we obtain

    $$\begin{aligned} 0=u^2R(u)+\frac{u^2}{4(4\,\Delta ^2-u)}=p(\Delta ,\lambda )=p(\Delta ,\Lambda (\Delta )) \end{aligned}$$

    which means \(\Lambda (\Delta )\in \sigma ({{\mathcal {M}}}_{\Delta })\), according to Propositions 6.3 and 3.2.

  3. 3.

    The relation \(\lim _{\Delta \searrow 0}\Lambda (\Delta )=0\) follows from Proposition 7.1 (iv) in combination with the definition of \(\Lambda \). Proposition 7.1 (v) yields \(\lim _{\Delta \rightarrow \infty }\Lambda (\Delta )=-\infty \). Using

    $$\begin{aligned} \Lambda '(\Delta )=\frac{8\,\Delta (4\,\Delta ^2-{{\mathcal {U}}}(\Delta ))-4\,\Delta ^2(8\,\Delta -{{\mathcal {U}}}'(\Delta ))}{(4\,\Delta ^2-{{\mathcal {U}}}(\Delta ))^2} \end{aligned}$$

    in combination with \(4\,\Delta ^2-{{\mathcal {U}}}(\Delta )<0\) and \(8\,\Delta -{{\mathcal {U}}}'(\Delta )>0\) (Proposition 7.1 (iii)) we obtain \(\Lambda '(\Delta )<0\) for all \(\Delta >0\). \(\square \)
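Combining the bisection for \({{\mathcal {U}}}\) from the sketch above with \(\Lambda (\Delta )=\frac{4\,\Delta ^2}{4\,\Delta ^2-{{\mathcal {U}}}(\Delta )}\), one can also locate numerically the parameter \(\Delta _{*}\) with \(\Lambda (\Delta _{*})=-1\) which becomes relevant in Sect. 8. A sketch, with an ad hoc bracketing interval:

```python
# Sketch: the branch Lambda(Delta) and the parameter Delta_* with Lambda(Delta_*) = -1.
from math import factorial

def R(u, terms=60):
    return sum(2 * (n - 1) * u**(n - 2) / factorial(2 * n) for n in range(2, terms))

def Q(delta, u):
    return 4 * (4 * delta**2 - u) * R(u) + 1

def U(delta, tol=1e-12):
    lo, hi = 4 * delta**2, 4 * delta**2 + 1.0
    while Q(delta, hi) > 0:
        hi += 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(delta, mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def Lam(delta):
    return 4 * delta**2 / (4 * delta**2 - U(delta))

# Lambda is strictly decreasing (Theorem 7.2), so Delta_* can be bracketed and bisected.
lo, hi = 0.1, 3.0                         # Lam(0.1) > -1 > Lam(3.0)
while hi - lo > 1e-9:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Lam(mid) > -1 else (lo, mid)
print(0.5 * (lo + hi), Lam(0.5 * (lo + hi)))   # Delta_* and Lambda(Delta_*), the latter close to -1
```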

8 Simplicity

Theorem 8.1

(Geometric multiplicity) For every \(\Delta >0\) and all \(\lambda \in \sigma _{\Delta }\setminus \{0\}\),

$$\begin{aligned} \dim \,({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}(0)=1. \end{aligned}$$

Proof

  1. 1.

    Let \(\Delta >0\) be given. We show \(\text {dim}\,\{c\in {\mathbb {C}}^2:H(\Delta ,\lambda )c=0\}=1\) for \(0\ne \lambda \in \sigma _{\Delta }\). For all \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\) the sum of the diagonal entries of the matrix \(H(\Delta ,\lambda )\) is 2, see Proposition 6.2. By continuity (Proposition 4.1) this holds also for \(\lambda =1\). For \(0\ne \lambda \in {\mathbb {C}}\), we get \(H(\Delta ,\lambda )\ne 0\in {\mathbb {C}}^{2\times 2}\), and \(H(\Delta ,\lambda )c\ne 0\) for some \(c\in {\mathbb {C}}^2\), hence \(\text {dim}\,\{c\in {\mathbb {C}}^2:H(\Delta ,\lambda )c=0\}\le 1\). For \(0\ne \lambda \in \sigma _{\Delta }\), Proposition 3.2 gives \(0=P(\Delta ,\lambda )=\text {det}\,H(\Delta ,\lambda )\), which yields \(0<\text {dim}\,\{c\in {\mathbb {C}}^2:H(\Delta ,\lambda )c=0\}\).

  2. 2.

    Let \(\lambda \in \sigma _{\Delta }\setminus \{0\}\) be given. Part 1 of the proof of Proposition 3.2 shows that the composition \(L_{*}=L_3\circ L_2\circ L_1\) of the linear maps

    $$\begin{aligned} L_1:({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}(0)\ni \chi\mapsto & {} v^{\Delta ,\chi }\in C([-2,\infty ),{\mathbb {C}}),\\ L_2:C([-2,\infty ),{\mathbb {C}})\ni w\mapsto & {} \left( \begin{array}{c} w(\cdot +3)\\ w(\cdot +1)\end{array}\right) \in C([-b,b],{\mathbb {C}}^2),\\ L_3:C([-b,b],{\mathbb {C}}^2)\ni \left( \begin{array}{c} y\\ z\end{array}\right)\mapsto & {} \left( \begin{array}{c} y(-b)\\ z(-b)\end{array}\right) \in {\mathbb {C}}^2 \end{aligned}$$

    satisfies \(H(\Delta ,\lambda )L_{*}\chi =0\) for all \(\chi \in ({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}(0)\), and \(L_{*}\chi \ne 0\) in case \(\chi \ne 0\). Therefore \(L_{*}\) defines an injective linear map from \(({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}(0)\) into \(\{c\in {\mathbb {C}}^2:H(\Delta ,\lambda )c=0\}\), which yields \(0<\text {dim}\,({{\mathcal {M}}}_{\Delta }-\lambda )^{-1}(0)\le \text {dim}\,\{c\in {\mathbb {C}}^2:H(\Delta ,\lambda )c=0\}=1\). \(\square \)

We turn to algebraic multiplicities. Notice that due to Theorem 7.2 there exists exactly one parameter \(\Delta _{*}>0\) so that \(\lambda =-1\) is an eigenvalue of the operator \({{\mathcal {M}}}_{\Delta _{*}}\). The algebraic multiplicity of eigenvalues on the unit circle is of interest for bifurcation from the periodic orbit \({{\mathcal {O}}}\).

Theorem 8.2

The eigenvalue \(-1\) of \({{\mathcal {M}}}_{\Delta _{*}}\) is simple, and 1 is a simple eigenvalue of \({{\mathcal {M}}}_{\Delta }\) for all \(\Delta >0\).

In the next section we shall argue that for certain \(\Delta >0\) non-simple eigenvalues are also present, in the interval (0, 1).

Proof of Theorem 8.2

  1. 1.

    We show \(\partial _2P(\Delta _{*},-1)\ne 0\). For every eigenvalue \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\) of \({{\mathcal {M}}}_{\Delta }\), \(\Delta >0\), Proposition 3.2 yields \(P(\Delta ,\lambda )=0\). Using Proposition 6.3 we get \(p(\Delta ,\lambda )=0\). Differentiation of the formula in Proposition 6.3 shows that for every \(\Delta >0\) and for every eigenvalue \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\) of \({{\mathcal {M}}}_{\Delta }\), \(\partial _2P(\Delta ,\lambda )=0\) if and only if \(\partial _2p(\Delta ,\lambda )=0\). With the analytic function \(g:(0,\infty )\times (-\infty ,0)\ni (\Delta ,\lambda )\mapsto 2\,\Delta \sqrt{1-\frac{1}{\lambda }}\in (0,\infty )\),

    $$\begin{aligned} p(\Delta ,\lambda )= 2\Big (1-\text {cosh}(g(\Delta ,\lambda ))\Big )+\frac{\Delta ^2}{\lambda }(\lambda -1)^2+g(\Delta ,\lambda )\text {sinh}(g(\Delta ,\lambda )) \end{aligned}$$

    for all \(\Delta >0\) and \(\lambda <0\), hence

    $$\begin{aligned} \partial _2p(\Delta ,\lambda )= & {} \partial _2g(\Delta ,\lambda )\Big [-2\,\text {sinh}(g(\Delta ,\lambda ))+\text {sinh}(g(\Delta ,\lambda ))+g(\Delta ,\lambda )\text {cosh}(g(\Delta ,\lambda ))\Big ]\\&+\frac{\Delta ^2}{\lambda ^2}\Big [2(\lambda -1)\lambda -(\lambda -1)^2\Big ]\\= & {} \partial _2g(\Delta ,\lambda )\Big [g(\Delta ,\lambda )\text {cosh}(g(\Delta ,\lambda ))-\text {sinh}(g(\Delta ,\lambda ))\Big ] +\Delta ^2\left( 1-\frac{1}{\lambda ^2}\right) \end{aligned}$$

    for these \(\Delta \) and \(\lambda \). In particular,

    $$\begin{aligned} \partial _2p(\Delta _{*},-1)=\partial _2g(\Delta _{*},-1)[g(\Delta _{*},-1)\text {cosh}(g(\Delta _{*},-1))-\text {sinh}(g(\Delta _{*},-1))]>0 \end{aligned}$$

    since \(g(\Delta _{*},-1)>0\) and

    $$\begin{aligned} \partial _2g(\Delta _{*},-1)=\left. \frac{\Delta }{\lambda ^2}\sqrt{\frac{\lambda }{\lambda -1}}\,\right| _{(\Delta ,\lambda )=(\Delta _{*},-1)}=\frac{\Delta _{*}}{\sqrt{2}}>0 \end{aligned}$$

    and \(\frac{\text {sinh}(x)}{\text {cosh}(x)}<x\) for all \(x>0\).

  2. 2.

    Proof of \(\partial _2P(\Delta ,1)>0\) for all \(\Delta >0\). Let \(\Delta >0\) be given. By \(1\in \sigma _{\Delta }\), \(P(\Delta ,1)=0\), see Proposition 3.2. Using this and Proposition 6.3 we get

    $$\begin{aligned} \partial _2P(\Delta ,1)=\lim _{1\ne \lambda \rightarrow 1}\frac{P(\Delta ,\lambda )}{\lambda -1}=\lim _{1\ne \lambda \rightarrow 1}\frac{p(\Delta ,\lambda )}{\Delta ^2(\lambda -1)^2}. \end{aligned}$$

    Proposition 6.4 shows that with \(u=4\,\Delta ^2\left( 1-\frac{1}{\lambda }\right) \), or, \(\lambda -1=\frac{\lambda \,u}{4\,\Delta ^2}\), we have

    $$\begin{aligned} \frac{p(\Delta ,\lambda )}{\Delta ^2(\lambda -1)^2}= & {} \frac{16\,\Delta ^2}{\lambda ^2}\frac{p(\Delta ,\lambda )}{u^2}=\frac{16\,\Delta ^2}{\lambda ^2}\left( \frac{1}{4(4\,\Delta ^2-u)}+R(u)\right) \\= & {} \frac{16\,\Delta ^2}{\lambda ^2}\left( \frac{\lambda }{16\,\Delta ^2}+R\left( 4\,\Delta ^2\left( 1-\frac{1}{\lambda }\right) \right) \right) \end{aligned}$$

    for all \(\lambda \in {\mathbb {C}}\setminus \{0,1\}\). It follows that

    $$\begin{aligned} \partial _2P(\Delta ,1)=\lim _{1\ne \lambda \rightarrow 1} \frac{p(\Delta ,\lambda )}{\Delta ^2(\lambda -1)^2}= 16\,\Delta ^2\left( \frac{1}{16\,\Delta ^2}+R(0)\right) >0. \end{aligned}$$
  3. 3.

    From Corollary 5.2 in combination with the result of Part 1 we obtain that the order of the pole of the resolvent \(\rho _{\Delta _{*}}\rightarrow L_c({{\mathcal {Y}}},{{\mathcal {Y}}})\) at \(\lambda =-1\) is 1. Therefore the chain length of the eigenvalue \(\lambda =-1\) of \({{\mathcal {M}}}_{\Delta _{*}}\) is 1, and Theorem 8.1 gives simplicity. The proof of simplicity of the eigenvalue 1 of \({{\mathcal {M}}}_{\Delta }\) for every \(\Delta >0\) is analogous. \(\square \)
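Part 2 of the proof gives the explicit value \(\partial _2P(\Delta ,1)=1+16\,\Delta ^2R(0)=1+\frac{4\,\Delta ^2}{3}\), since \(R(0)=\frac{1}{12}\). A central-difference sketch (sample value of \(\Delta \) arbitrary), reusing the closed form of Proposition 6.2:

```python
# Sketch: finite-difference check of dP/dlambda at lambda = 1 against 1 + 4 Delta^2 / 3.
import numpy as np

def P(delta, lam):
    z = np.sqrt(1.0 - 1.0 / lam + 0j)
    Ch, Sh = np.cosh(2 * delta * z), np.sinh(2 * delta * z)
    alpha = Sh / z - (Ch - 1.0) / (delta * z**2)
    X = Ch - Sh / (delta * z)
    minus_H = np.array([[alpha / lam - 1.0, X + alpha],
                        [X / lam - alpha / lam, -1.0 - alpha / lam]])
    return np.linalg.det(minus_H)                  # = det H(Delta, lambda)

delta, h = 1.0, 1e-5
derivative = (P(delta, 1.0 + h) - P(delta, 1.0 - h)) / (2 * h)
print(derivative.real, 1 + 4 * delta**2 / 3)       # both close to 7/3
```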

9 About Further Eigenvalues and Period Doubling

We discuss real eigenvalues of the operators \({{\mathcal {M}}}_{\Delta }\), \(\Delta >0\), in the remaining interval (0, 1), address the existence of non-real eigenvalues, and sketch finally how to deduce from Theorems 7.2 and 8.2 that a period doubling bifurcation from the periodic orbit \({{\mathcal {O}}}\) occurs at \(\Delta =\Delta _{*}\).

By Propositions 3.2 and 6.3 the eigenvalues \(\lambda \in (0,1)\) of the operators \({{\mathcal {M}}}_{\Delta }\), \(\Delta >0\), are given by the zeros of the functions \(p(\Delta ,\cdot )\) in (0, 1). For \(\lambda \in (0,1)\) we may write \(\sqrt{1-\frac{1}{\lambda }}=i\,\sqrt{\frac{1}{\lambda }-1}\) with \(\sqrt{\frac{1}{\lambda }-1}>0\). Setting \(v=2\,\Delta \sqrt{\frac{1}{\lambda }-1}>0\) we get

$$\begin{aligned} p(\Delta ,\lambda )= & {} 2(1-\text {cosh}(iv))+i\,v\,\text {sinh}(iv)+\frac{v^4}{4(4\,\Delta ^2+v^2)}\end{aligned}$$
(9.1)
$$\begin{aligned}= & {} 2(1-\cos (v))-v\,\sin (v)+\frac{v^4}{4(4\,\Delta ^2+v^2)}. \end{aligned}$$
(9.2)
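A sample-point check of the passage from Proposition 6.3 to the trigonometric form (9.2) (a sketch only; the point \((\Delta ,\lambda )\) is arbitrary):

```python
# Sketch: p(Delta, lambda) for lambda in (0,1) versus the trigonometric form (9.2).
import numpy as np

delta, lam = 1.5, 0.4
z = 1j * np.sqrt(1.0 / lam - 1.0)       # sqrt(1 - 1/lambda) = i sqrt(1/lambda - 1)
v = 2 * delta * np.sqrt(1.0 / lam - 1.0)
p = 2 * (1 - np.cosh(2 * delta * z)) + delta**2 * (lam - 1)**2 / lam \
    + 2 * delta * z * np.sinh(2 * delta * z)
p_trig = 2 * (1 - np.cos(v)) - v * np.sin(v) + v**4 / (4 * (4 * delta**2 + v**2))
print(abs(p - p_trig))                  # machine-precision size
```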

As the map \(T:(0,\infty )\times (0,1)\rightarrow (0,\infty )\times (0,\infty )\) given by \(T(\Delta ,\lambda )=(\Delta ,v)\) is bijective we now look for \(\Delta >0\) and \(v>0\) so that the maps \(\alpha :[0,\infty )\rightarrow {\mathbb {R}}\), \(\alpha (v)=2(1-\cos (v))-v\,\sin (v)\), and \(\beta :(0,\infty )\times [0,\infty )\rightarrow [0,\infty )\), \(\beta (\Delta ,v)=\frac{v^4}{4(4\,\Delta ^2+v^2)}\), satisfy \(\alpha (v)=-\beta (\Delta ,v)\). We have \(\alpha (0)=0\) and \(\beta (\Delta ,0)=0\) for all \(\Delta >0\), and each function \(\beta (\Delta ,\cdot )\), \(\Delta >0\), is strictly increasing. For the next remarks compare Fig. 1 below.

Fig. 1 Intersections of \(\alpha \) and \(\beta (\Delta ,\cdot )\) for \(\Delta \) increasing

The zeros of \(\alpha \) form a strictly increasing sequence \((z_j)_0^{\infty }\). For \(\Delta >0\) sufficiently small the function \(-\beta (\Delta ,\cdot )\) is strictly below \(\alpha \) on \((0,\infty )\). As \(\Delta \) increases, \(-\beta (\Delta ,\cdot )\) moves towards the horizontal axis, and there is a first \(\Delta _1>0\) so that the functions \(\alpha \) and \(-\beta (\Delta _1,\cdot )\) touch, at a zero \(v_1\) of \(\alpha +\beta (\Delta _1,\cdot )\) which is situated in the interval \((z_1,z_2)\). If \(\Delta \) increases beyond \(\Delta _1\) the zero \(v_1\) bifurcates into a pair of simple real zeros \(v_{1-}(\Delta )<v_{1+}(\Delta )\) of \(\alpha +\beta (\Delta ,\cdot )\) in \((z_1,z_2)\), with

$$\begin{aligned} v_{1-}(\Delta )\rightarrow z_1\quad \text {and}\quad v_{1+}(\Delta )\rightarrow z_2 \quad \text {as}\quad \Delta \rightarrow \infty . \end{aligned}$$

In each interval \((z_{2n},z_{2n+1})\), \(n\in {\mathbb {N}}_0\), the function \(\alpha \) is positive, and there are no zeros of any function \(\alpha +\beta (\Delta ,\cdot )\), \(\Delta >0\). In the intervals \((z_{2n+1},z_{2n+2})\), \(n\in {\mathbb {N}}\), the creation and the asymptotic behaviour of zeros of \(\alpha +\beta (\Delta ,\cdot )\), \(\Delta >0\), are as in \((z_1,z_2)\), with the associated critical parameters \(\Delta _n\) strictly increasing. Using the transformation T we obtain that for each \(\Delta >0\) the zeros of \(p(\Delta ,\cdot )\) in (0, 1) are given as \(\lambda =\frac{4\,\Delta ^2}{v^2+4\,\Delta ^2}\), with the zeros v of \(\alpha +\beta (\Delta ,\cdot )\) in \((0,\infty )\), and we arrive at the following description of the zero set of p in \((0,\infty )\times (0,1)\): For every \(n\in {\mathbb {N}}\) there exists a zero \(\lambda _n\in (0,1)\) of \(p(\Delta _n,\cdot )\) which bifurcates for \(\Delta >\Delta _n\) into a pair of simple zeros \(\lambda _{n+}(\Delta )<\lambda _{n-}(\Delta )\) in (0, 1), and both \(\lambda _{n+}(\Delta )\) and \(\lambda _{n-}(\Delta )\) tend to 1 as \(\Delta \rightarrow \infty \). For \(n\in {\mathbb {N}}\) and \(\Delta _n\le \Delta \) and \(2\le j\le n\) we have \(\lambda _{j-}(\Delta )<\lambda _{j-1,+}(\Delta )\).
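The qualitative picture just described is easy to reproduce numerically. The following sketch scans for sign changes of \(\alpha +\beta (\Delta ,\cdot )\) on a grid for one sample value of \(\Delta \) (chosen large enough that at least one pair of zeros has already been created) and converts the zeros into the corresponding eigenvalue candidates \(\lambda \in (0,1)\):

```python
# Sketch: grid search for zeros of alpha + beta(Delta, .) and the corresponding lambda in (0,1).
import numpy as np

def alpha(v):
    return 2 * (1 - np.cos(v)) - v * np.sin(v)

def beta(delta, v):
    return v**4 / (4 * (4 * delta**2 + v**2))

delta = 8.0                                    # sample parameter, ad hoc choice
v = np.linspace(1e-3, 30.0, 300001)
f = alpha(v) + beta(delta, v)
idx = np.nonzero(np.diff(np.sign(f)) != 0)[0]  # indices where f changes sign
zeros_v = v[idx]
print(np.round(zeros_v, 3))                                      # approximate zeros v
print(np.round(4 * delta**2 / (zeros_v**2 + 4 * delta**2), 4))   # the lambda values in (0,1)
```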

Continuity arguments now show that for every \(n\in {\mathbb {N}}\) the order of the zero \(\lambda _n\) of \(p(\Delta _n,\cdot )\) is 2. For \(\Delta <\Delta _n\) close to \(\Delta _n\) the double zero \(\lambda _n\) bifurcates into a complex conjugate pair of simple zeros \(\lambda _{nc}(\Delta )\ne \overline{\lambda _{nc}(\Delta )}\) of \(p(\Delta ,\cdot )\). Recall from Proposition 2.5 that for \(\Delta \searrow 0\) all eigenvalues \(\lambda \ne 1\) of \({{\mathcal {M}}}_{\Delta }\) uniformly tend to \(0\in {\mathbb {C}}\).

We turn to period doubling, which is a bifurcation from the periodic orbit \({{\mathcal {O}}}\) at \(\Delta =\Delta _{*}\) in the sense that every neighbourhood of \((\Delta _{*},p_0,8)\) in \({\mathbb {R}}\times C^1\times {\mathbb {R}}\) contains triples \((\Delta ,\phi _{\Delta },\omega _{\Delta })\) such that \(\phi _{\Delta }\ne p_0\) is the initial value of a periodic solution of Eq. (1.2) with minimal period \(\omega _{\Delta }\).

The initial data \(\phi _{\Delta }\) arise as fixed points of the first iterates of Poincaré maps \({{\mathcal {P}}}_{\Delta }\) for \(\Delta \) close to \(\Delta _{*}\). In the sequel we describe the situation. Recall that the periodic solution \(p:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is twice continuously differentiable. This implies that the curve \({\mathbb {R}}\ni t\mapsto p_t\in C^1\) is continuously differentiable. Its tangent vector at \(t=0\) is \(p_0'\in Y\subset C^1\), which does not belong to the closed hyperplane \({{\mathcal {H}}}=\{\phi \in C^1:\phi (0)=0\}\) since \(p'(0)=1\). We have

$$\begin{aligned} Y=(Y\cap {{\mathcal {H}}})\oplus {\mathbb {R}}p_0'. \end{aligned}$$

For every parameter \(\Delta \in {\mathbb {R}}\), sufficiently small neighbourhoods \({{\mathcal {N}}}_{\Delta }\) of \(p_0\) in \(X_{\Delta }\cap {{\mathcal {H}}}\) are continuously differentiable submanifolds of \(X_{\Delta }\), all with the same tangent space \(Y\cap {{\mathcal {H}}}\) at \(p_0\). There exist a neighbourhood \({{\mathcal {U}}}_{\Delta }\) of \(p_0\) in \(X_{\Delta }\) and a continuously differentiable return time map

$$\begin{aligned} \tau _{\Delta }:{{\mathcal {U}}}_{\Delta }\rightarrow (0,\infty ) \end{aligned}$$

with \(\tau _{\Delta }(p_0)=4\) and

$$\begin{aligned} x^{\Delta ,\phi }_{\tau _{\Delta }(\phi )}\in {{\mathcal {H}}} \end{aligned}$$

for every \(\phi \in {{\mathcal {U}}}_{\Delta }\), with the solution \(x^{\Delta ,\phi }\) of Eq. (1.2). The segment \(p_0\) becomes a fixed point of the Poincaré map

$$\begin{aligned} {{\mathcal {P}}}_{\Delta }:{{\mathcal {U}}}_{\Delta }\cap {{\mathcal {H}}}\ni \phi \mapsto x^{\Delta ,\phi }_{\tau _{\Delta }(\phi )}\in X_{\Delta }\cap {{\mathcal {H}}}. \end{aligned}$$

The simplicity of the eigenvalue 1 of \({{\mathcal {M}}}_{\Delta }\) (Theorem 8.2) yields that the spectrum of the derivative \(D{{\mathcal {P}}}_{\Delta }(p_0):Y\cap {{\mathcal {H}}}\rightarrow Y\cap {{\mathcal {H}}}\) is \(\sigma _{\Delta }\setminus \{1\}\). For the first iterate

$$\begin{aligned} {{\mathcal {P}}}_{\Delta }^2:\{\phi \in {{\mathcal {U}}}_{\Delta }\cap {{\mathcal {H}}}:{{\mathcal {P}}}_{\Delta }(\phi )\in {{\mathcal {U}}}_{\Delta }\}\rightarrow X_{\Delta }\cap {{\mathcal {H}}} \end{aligned}$$

of \({{\mathcal {P}}}_{\Delta }\) the spectrum of its derivative at the fixed point \(p_0\) is the set

$$\begin{aligned} \{\lambda ^2\in {\mathbb {C}}:1\ne \lambda \in \sigma _{\Delta }\}. \end{aligned}$$

Theorems 7.2 and 8.2 guarantee that at \(\Delta =\Delta _{*}\) the positive eigenvalue \((\Lambda (\Delta ))^2\) of \(D{{\mathcal {P}}}_{\Delta }^2(p_0)\) crosses the unit circle with positive velocity and algebraic multiplicity 1. This yields a change of the fixed point index, and bifurcation of fixed points \(\phi _{\Delta }\ne p_0\) of the iterates \({{\mathcal {P}}}_{\Delta }^2\) follows. For the maps \({{\mathcal {P}}}_{\Delta }\) the points \(\phi _{\Delta }\ne p_0\) have period 2, and they determine periodic solutions of Eq. (1.2) with periods \(\omega _{\Delta }\) close to 8, due to continuity of the map \((\Delta ,\phi )\mapsto \tau _{\Delta }(\phi )\). The periods \(\omega _{\Delta }\) are minimal since otherwise one obtains a contradiction to the fact that \(p_0\) is the only fixed point of \({{\mathcal {P}}}_{\Delta }\) in a certain neighbourhood, for \(\Delta \) close to \(\Delta _{*}\).

Notice that a complete proof along the lines above must take care of the fact that the Poincaré maps \({{\mathcal {P}}}_{\Delta }\) are defined on domains in different manifolds, each one containing the fixed point \(p_0\).