1 Introduction

Double-beam systems are a classical subject of theoretical mechanics, see e.g. [9, 43]: they consist of two parallel beams coupled through a viscoelastic layer. On the mathematical level, this is modeled by strong couplings between the equations, usually complemented by identical, homogeneous boundary conditions—say, clamped or hinged. In the last two decades, coupled systems consisting of networks of (almost) one-dimensional beams have attracted increasing interest: unlike in double-beam systems, all interactions take place at the ramification points.

Our aim in this note is twofold: we first discuss some properties of beam equations

$$\begin{aligned} \frac{\partial ^2 u}{\partial t^2}=-\frac{\partial ^4u}{\partial x^4} \end{aligned}$$

on networks of one-dimensional elements, with a focus on those solution properties that depend on rather general transmission conditions in the nodes. To this end, we propose a variational treatment of beam equations, developing a formalism that then turns out to extend easily to the study of parabolic equations driven by elliptic operators of arbitrary even order, again with general combinations of stationary and dynamic boundary conditions.

The analysis of evolution equations on networks has become a very popular topic since Lumer introduced in [32] a theoretical framework to study heat equations on ramified structures; but in fact, time-dependent Schrödinger equations on networks have been studied by quantum chemists since the 1940s and perhaps earlier, see the references in [41, § 2.5]. Also, networks of thin linear beams have often been studied in the literature, with a special focus on controllability and stabilization issues, ever since the pioneering discussion in [29, 30]. The simplest case of a network corresponds to a path graph—a concatenation of linear elements. This case models a beam consisting of different segments with various elasticity properties: different vertex conditions for the bi-Laplacian that appears in the beam equation on a single path graph have been derived from physical principles in [8]. It turns out that there is no unique natural choice of transmission conditions in a network’s node: based on physical considerations, several conditions have been proposed in the literature, especially of stationary nature [7, 14,15,16,17, 27]. In [24] we have applied the classical extension theory of symmetric positive semidefinite operators to discuss general transmission conditions for the bi-Laplacian; in particular, we have described an infinite class of transmission conditions leading to well-posedness of the beam equation on networks.

Along with stationary conditions, conditions of dynamic type have been very popular in the context of networks of beams, as they naturally model massive junctions: we refer to [35,36,37, 42]. In Sect. 2 we are going to continue and extend the analysis of fourth-order operators on finite networks initiated in [24]: we characterize dynamic conditions leading to self-adjoint realizations and in fact, parametrize an infinite class of transmission conditions under which the corresponding system of beams is well-posed and enjoys conservation of energy. Rather general dynamic boundary conditions leading to self-adjoint, dissipative realizations on a single interval have been studied in [20]: they are special cases of our parametrization, too, which elaborates on an idea of Arendt and his co-authors [2, 3] for the discussion of second-order elliptic operators with dynamic boundary conditions through quadratic forms on product Hilbert spaces. An interesting feature of our theoretical framework is that, by appropriately choosing the product space, we can simultaneously treat evolution equations endowed by dynamic and/or stationary conditions. We mention that similar ideas and a comparable formalism have been successfully applied in [28] to study hyperbolic systems with dynamic boundary conditions.

In Sect. 3 we also extend most of the methods developed for the beam equations to the study of parabolic features of equations driven by jth powers of the Laplacian, again on networks. Linear and semilinear elliptic equations associated with such operators have often been discussed in the literature since a classical article by Davies [12]; we also refer to [5] for a collection of the main features of such equations, to [31] for a deep study of contractivity properties of the generated semigroups, to [23] for a selection of related models in physics and mechanics, and to [13, 18, 21] for a study of some realizations with dynamic boundary conditions on domains. After briefly showing well-posedness of general hyperbolic equations (second derivative in time, arbitrary integer powers of the Laplacian in space) with dynamic and/or stationary conditions, we turn to the properties of the analytic semigroup generated by the same differential operator’s realizations on a finite network. In particular, we show that such a semigroup is of trace class and (under mild assumptions) ultracontractive; we also show that, in spite of the failure of the maximum principle for any \(j\ge 2\), depending on the transmission conditions such a semigroup may or may not be eventually sub-Markovian and eventually enjoy a strong Feller property.

Based on a classical idea that goes back to [19], it is well-known that in the case of second-order elliptic operators there is a direct connection between dynamic and Wentzell-type boundary conditions: formally taking the boundary trace of the evolution equation and plugging it into the dynamic boundary condition, one thus obtains a boundary condition involving boundary terms of order as high as the operator itself. We are also going to demonstrate that the class of Wentzell-type boundary conditions hitherto considered in the literature is unnecessarily restrictive; possibly the most surprising finding of our paper is that the natural Wentzell-type boundary conditions for an evolution equation of order 2j in space are in fact of higher order than the operator itself: namely, of order \(3j-1\), see Proposition 3.5.(iv).

2 The Beam Equation on Networks

We consider a finite connected graph \({\mathsf {G}}=({\mathsf {V}},{\mathsf {E}})\), with \(V:=|{\mathsf {V}}|\) vertices and \(E:=|{\mathsf {E}}|\) edges; loops and parallel edges are allowed. We also denote by \({\mathsf {E}}_{{\mathsf {v}}}\) the set of all edges incident to \({\mathsf {v}}\). We fix an arbitrary orientation of \({\mathsf {G}}\), so that each edge \({\mathsf {e}}\equiv ({\mathsf {v}},{\mathsf {w}})\) can be identified with an interval \([0,\ell _{\mathsf {e}}]\) and its endpoints \({\mathsf {v}},{\mathsf {w}}\) with 0 and \(\ell _{\mathsf {e}}\), respectively. In this way one naturally turns \({\mathsf {G}}\) into a metric measure space \({\mathcal {G}}\): a network (or metric graph) whose underlying discrete graph is precisely \({\mathsf {G}}\). We refer to [41, Chapt. 3] for details.

Functions on \({\mathcal {G}}\) are vectors \((u_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\), where each \(u_{\mathsf {e}}\) is defined on the edge \({\mathsf {e}}\simeq (0,\ell _{\mathsf {e}})\). We introduce the Hilbert space of measurable, square integrable functions on \({\mathcal {G}}\)

$$\begin{aligned} L^2({\mathcal {G}}):=\bigoplus _{{\mathsf {e}}\in {\mathsf {E}}} L^2(0,\ell _{\mathsf {e}}) \end{aligned}$$

endowed with the natural inner product

$$\begin{aligned} (u,v)_{L^2({\mathcal {G}})}:=\int _{\mathcal {G}}u(x)\overline{v(x)}\,dx= \sum \limits _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} u_{\mathsf {e}}(x)\overline{v_{\mathsf {e}}(x)}\,dx. \end{aligned}$$
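As a purely illustrative aside (the discretization and all function names below are ours, not part of the paper's formalism), this edgewise inner product is straightforward to approximate numerically: sample each \(u_{\mathsf {e}}\) on its interval and apply a trapezoidal rule edge by edge.

```python
import numpy as np

def edge_inner_product(u, v, lengths):
    """Approximate (u, v)_{L^2(G)} by an edgewise trapezoidal rule.

    u, v: lists of sample arrays, one per edge e ~ (0, l_e);
    lengths: list of the edge lengths l_e.
    """
    total = 0.0 + 0.0j
    for u_e, v_e, l_e in zip(u, v, lengths):
        x = np.linspace(0.0, l_e, len(u_e))
        f = u_e * np.conj(v_e)  # integrand on this edge
        total += np.sum((f[:-1] + f[1:]) / 2.0 * np.diff(x))
    return total

# Two edges of lengths 1 and 2 with u = v = 1 on each edge:
lengths = [1.0, 2.0]
u = [np.ones(101), np.ones(201)]
print(edge_inner_product(u, u, lengths).real)  # ~ 3.0, the total length of G
```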

Boundary values of elements of \(L^2({\mathcal {G}})\) are not defined, and in this sense functions that are merely in \(L^2({\mathcal {G}})\) cannot mirror the topology of the network \({\mathcal {G}}\): in order to describe transmission conditions in the vertices we need to introduce the Sobolev spaces

$$\begin{aligned} {\widetilde{H}}^k({\mathcal {G}}):=\bigoplus _{{\mathsf {e}}\in {\mathsf {E}}} H^k(0,\ell _{\mathsf {e}}),\qquad k\in {\mathbb {N}}: \end{aligned}$$

they consist of \(L^2({\mathcal {G}})\)-functions whose k-th weak derivatives are elements of \(L^2({\mathcal {G}})\), too.

Consider the operator A defined edgewise as the fourth derivative

$$\begin{aligned} A:(u_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}}\mapsto (u''''_{\mathsf {e}})_{{\mathsf {e}}\in {\mathsf {E}}} \end{aligned}$$

(here and in the following, \('=\frac{d }{d x}\)) with domain

$$\begin{aligned} \bigoplus _{{\mathsf {e}}\in {\mathsf {E}}}C_c^\infty (0,\ell _{\mathsf {e}}): \end{aligned}$$

it is symmetric and strictly positive, hence its self-adjoint extensions can be described by means of the extension theory due to Friedrichs and Krein. An important role is played by the closable quadratic form associated with A, which is given by

$$\begin{aligned} a(u,v)=\sum \limits _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}}u_{\mathsf {e}}''(x)\overline{v_{\mathsf {e}}''(x)}\,dx,\qquad u,v\in \bigoplus _{{\mathsf {e}}\in {\mathsf {E}}}C_c^\infty (0,\ell _{\mathsf {e}}). \end{aligned}$$

However, a sesquilinear form can—and typically will—have different associated operators whenever it is studied on different Hilbert spaces. In [24, § 3] we have characterized the self-adjoint extensions of A on \(L^2({\mathcal {G}})\) and discussed further realizations that generate cosine operator functions and operator semigroups, again on \(L^2({\mathcal {G}})\). In this paper we are going to discuss the more general case of extensions on Hilbert spaces of the form

$$\begin{aligned} L^2({\mathcal {G}})\oplus Y_d \end{aligned}$$

where \(Y_d\) is any subspace of the “boundary space” \({\mathbb {C}}^{4 E}\). Therefore, let us consider the space \(L^2({\mathcal {G}})\oplus Y_d\) whose elements are of the form \({\mathfrak {u}}=\begin{pmatrix}u\\ \theta \end{pmatrix}\). This is a Hilbert space with respect to the canonical inner product

$$\begin{aligned} \left( {\mathfrak {u}},{\mathfrak {v}}\right) =\left( \begin{pmatrix}u\\ \theta \end{pmatrix},\begin{pmatrix}v\\ \phi \end{pmatrix}\right) :=(u,v)_{L^2({\mathcal {G}})}+\left( \theta ,\phi \right) _{Y_d}, \end{aligned}$$

where \((\cdot ,\cdot )_{Y_d}\) is the restriction to \(Y_d\) of the canonical inner product of \({\mathbb {C}}^{4E}\).

In the following, we denote by \(P_{Z}\) and \(P^\perp _Z\) the orthogonal projectors of \({\mathbb {C}}^{4E}\) onto a subspace Z and onto its orthogonal complement \(Z^\perp \), respectively; we also introduce the notation

$$\begin{aligned} \Gamma _\circ u:=\left( \begin{matrix} \left( u_{\mathsf {e}}(0)\right) _{{\mathsf {e}}\in {\mathsf {E}}}\\ \left( u_{\mathsf {e}}(\ell _{\mathsf {e}})\right) _{{\mathsf {e}}\in {\mathsf {E}}}\\ -\left( u'_{\mathsf {e}}(0)\right) _{{\mathsf {e}}\in {\mathsf {E}}}\\ \left( u'_{\mathsf {e}}(\ell _{\mathsf {e}})\right) _{{\mathsf {e}}\in {\mathsf {E}}} \end{matrix}\right) \quad \hbox {and}\quad \Gamma ^\circ u:= \left( \begin{matrix} -\left( u'''_{\mathsf {e}}(0)\right) _{{\mathsf {e}}\in {\mathsf {E}}}\\ \left( u'''_{\mathsf {e}}(\ell _{\mathsf {e}})\right) _{{\mathsf {e}}\in {\mathsf {E}}}\\ -\left( u''_{\mathsf {e}}(0)\right) _{{\mathsf {e}}\in {\mathsf {E}}}\\ - \left( u''_{\mathsf {e}}(\ell _{\mathsf {e}})\right) _{{\mathsf {e}}\in {\mathsf {E}}} \end{matrix}\right) . \end{aligned}$$
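To make the sign conventions behind \(\Gamma _\circ \) and \(\Gamma ^\circ \) explicit, note that on a single edge \({\mathsf {e}}\simeq (0,\ell _{\mathsf {e}})\) two integrations by parts yield

$$\begin{aligned} \int _0^{\ell _{\mathsf {e}}}u_{\mathsf {e}}''''\,\overline{v_{\mathsf {e}}}\,dx=\left[ u_{\mathsf {e}}'''\overline{v_{\mathsf {e}}}-u_{\mathsf {e}}''\overline{v_{\mathsf {e}}'}\right] _0^{\ell _{\mathsf {e}}}+\int _0^{\ell _{\mathsf {e}}}u_{\mathsf {e}}''\,\overline{v_{\mathsf {e}}''}\,dx: \end{aligned}$$

summing over all edges, the boundary terms add up precisely to \(\left( \Gamma ^\circ u,\Gamma _\circ v\right) _{{\mathbb {C}}^{4E}}\); the minus signs in the entries at the endpoint 0 correspond to taking outward normal derivatives at both endpoints.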

Consider the quadratic form

$$\begin{aligned} \begin{aligned} a({\mathfrak {u}},{\mathfrak {v}})&=\sum _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}}u_{\mathsf {e}}''(x)\overline{v_{\mathsf {e}}''(x)}\,dx,\\ D(a)&=\left\{ {\mathfrak {u}}=\begin{pmatrix}u\\ \theta \end{pmatrix}\in {\widetilde{H}}^2({\mathcal {G}})\oplus Y_d: \Gamma _\circ u=\theta \right\} . \end{aligned} \end{aligned}$$
(2.1)

It is closable and its closure is associated with a self-adjoint operator: it is easily seen that this is the operator matrix

$$\begin{aligned} {\mathcal {A}}=\begin{pmatrix} \frac{d^4}{dx^4}&{}\quad 0\\ -\Gamma ^\circ &{}\quad 0 \end{pmatrix} \end{aligned}$$
(2.2)

with domain

$$\begin{aligned} D({\mathcal {A}})=\left\{ \begin{pmatrix}u\\ \theta \end{pmatrix}\in {\widetilde{H}}^4({\mathcal {G}})\oplus Y_d:\Gamma _\circ u=\theta \right\} . \end{aligned}$$

Consider now \({\mathcal {A}}_{\max }\) and \({\mathcal {A}}_0\), the maximal and the minimal realizations of the operator \({\mathcal {A}}\), respectively, endowed with the domains

$$\begin{aligned} D({\mathcal {A}}_{\max })&={\widetilde{H}}^4({\mathcal {G}})\oplus Y_d;\\ D({\mathcal {A}}_0)&=\left\{ \begin{pmatrix} u\\ \theta \end{pmatrix}\in {\widetilde{H}}^4({\mathcal {G}})\oplus Y_d: P_{Y_d}\Gamma _\circ u=\theta ,\ P^\perp _{Y_d}\Gamma _\circ u=0, \Gamma ^\circ u=0 \right\} . \end{aligned}$$

The main result in this section is a characterization of all further self-adjoint extensions of \({\mathcal {A}}_0\) on

$$\begin{aligned} L^2({\mathcal {G}})\oplus Y_d \end{aligned}$$

in the spirit of [6, Thm. 1.4.4], see also [24, Thm. 3.1] for the extension theory of bi-Laplacians on \(L^2({\mathcal {G}})\).

Our starting point is the sesquilinear form (2.1), which can be further generalized by adding a boundary term of the form \(\left( R \Gamma _\circ u,\Gamma _\circ v \right) \) for some \(R\in {{\mathcal {L}}}({\mathbb {C}}^{4E})\). Now, take any subspace \(Y_s\) of \({\mathbb {C}}^{4E}\) that is orthogonal to \(Y_d\) and impose the boundary conditions

$$\begin{aligned} \Gamma _\circ u,\Gamma _\circ v\in Y:=Y_d\oplus Y_s. \end{aligned}$$

This motivates us to introduce the Hilbert space

$$\begin{aligned} {\mathcal {V}}:=\left\{ \begin{pmatrix}u\\ \theta \end{pmatrix}\in {\widetilde{H}}^2({\mathcal {G}})\oplus Y_d:\Gamma _\circ u\in Y\hbox { and } P_{Y_d}\Gamma _\circ u=\theta \right\} . \end{aligned}$$

Hence, consider the sesquilinear form \({\tilde{a}}\) defined by

$$\begin{aligned} \begin{aligned} {\tilde{a}}({\mathfrak {u}},{\mathfrak {v}}):&=\int _{{\mathcal {G}}} u'' \overline{v''}\, dx-\left( R \Gamma _\circ u,\Gamma _\circ v \right) _{Y}\\&=\int _{{\mathcal {G}}} u'' \overline{v''}\, dx-\left( P_{Y_s}R P_{Y_s} \Gamma _\circ u,P_{Y_s}\Gamma _\circ v \right) _{Y_s}-\left( P_{Y_d}R P_{Y_d} \Gamma _\circ u,P_{Y_d}\Gamma _\circ v \right) _{Y_d} \end{aligned} \end{aligned}$$

for \({\mathfrak {u}},{\mathfrak {v}}\in {\mathcal {V}}.\) If additionally \(u\in {\widetilde{H}}^4({\mathcal {G}})\), integrating by parts we find

$$\begin{aligned} \int _{\mathcal {G}}u'' \overline{v''}\, dx=\int _{\mathcal {G}}u'''' {\overline{v}}\, dx -\left( \Gamma ^\circ u,\Gamma _\circ v\right) _Y. \end{aligned}$$

We deduce for all \({\mathfrak {u}},{\mathfrak {v}}\in {\mathcal {V}}\) such that \(u\in {\widetilde{H}}^4({\mathcal {G}})\)

$$\begin{aligned} \begin{aligned} {\tilde{a}}({\mathfrak {u}},{\mathfrak {v}})&= \int _{\mathcal {G}}u'''' {\overline{v}}\, dx -\left( \Gamma ^\circ u,\Gamma _\circ v\right) _Y -\left( R \Gamma _\circ u,\Gamma _\circ v \right) _Y \end{aligned} \end{aligned}$$

and hence

$$\begin{aligned} {\tilde{a}}({\mathfrak {u}},{\mathfrak {v}})&= \int _{\mathcal {G}}u'''' {\overline{v}}\, dx -\left( P_{Y_d}(\Gamma ^\circ u+RP_{Y_d}\Gamma _\circ u),P_{Y_d}\Gamma _\circ v\right) _{Y_d}\nonumber \\&\quad -\left( P_{Y_s}(\Gamma ^\circ u+RP_{Y_s}\Gamma _\circ u),P_{Y_s}\Gamma _\circ v\right) _{Y_s}. \end{aligned}$$
(2.3)

If we additionally impose \((\Gamma ^\circ u+RP_{Y_s}\Gamma _\circ u)\perp Y_s\), i.e.,

$$\begin{aligned} P_{Y_s}\left( \Gamma ^\circ u+RP_{Y_s}\Gamma _\circ u\right) =0, \end{aligned}$$
(2.4)

we can hence compactly write

$$\begin{aligned} {\tilde{a}}({\mathfrak {u}},{\mathfrak {v}})=\left( \begin{pmatrix} \frac{d^4}{dx^4} &{}\quad 0\\ -P_{Y_d} \Gamma ^\circ &{}\quad -P_{Y_d}R \end{pmatrix} \begin{pmatrix} u\\ P_{Y_d}\Gamma _\circ u \end{pmatrix} , \begin{pmatrix} v\\ P_{Y_d}\Gamma _\circ v \end{pmatrix}\right) _{L^2({\mathcal {G}})\oplus Y_d}, \end{aligned}$$

for all \({\mathfrak {u}}\in ({\widetilde{H}}^4({\mathcal {G}})\oplus Y_d)\cap {\mathcal {V}}\) satisfying (2.4) and all \({\mathfrak {v}}\in {\mathcal {V}}\). Summing up, we describe the transmission conditions in the network’s vertices by means of a subspace Y of \({\mathbb {C}}^{4E}\): this consists of two orthogonal subspaces \(Y_d\) and \(Y_s\) that encode the dynamic and stationary part of the transmission conditions, respectively.

Observe that the quadratic form \({\tilde{a}}\) in (2.3) is symmetric if and only if both \(D:=P_{Y_d}RP_{Y_d}\) and \(S:=P_{Y_s}RP_{Y_s}\) are self-adjoint.

Let us first focus on the case \(D=0\).

Theorem 2.1

Let \(Y_d\) be a subspace of \({\mathbb {C}}^{4E}\). Then, for any extension \({\mathcal {A}}\) of \({\mathcal {A}}_0\) on \(L^2({\mathcal {G}})\oplus Y_d\), the following are equivalent.

  1. (i)

    \({\mathcal {A}}\) is self-adjoint.

  2. (ii)

    The operator \({\mathcal {A}}\) has the form

    $$\begin{aligned} \begin{aligned} {\mathcal {A}}&=\begin{pmatrix}\frac{d^4 }{dx^4} &{}\quad 0 \\ -P_{Y_d}\Gamma ^\circ &{}\quad 0 \end{pmatrix},\\ D({\mathcal {A}})&=\bigg \{ \begin{pmatrix}u\\ \mathbf{\theta }\end{pmatrix}\in {\widetilde{H}}^4({\mathcal {G}})\oplus Y_d: \Gamma _\circ u\in Y,\ P_{Y_d}\Gamma _\circ u=\theta ,\\&\qquad \qquad P_{Y_s}(\Gamma ^\circ u+SP_{Y_s}\Gamma _\circ u)=0\bigg \}, \end{aligned} \end{aligned}$$

    for some subspace \(Y_s\) of \({\mathbb {C}}^{4E}\) orthogonal to \(Y_d\) with \(Y:=Y_d\oplus Y_s\) and some self-adjoint linear operator \(S\in {\mathcal {L}}(Y_s)\).

In the case of \(Y_d=\{0\}\), this has been proved in [24]. Theorem 2.1 sharpens the main result in [20] already in the case of an interval (i.e., a graph consisting of a single edge).
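A toy finite-difference computation (ours, on a single interval rather than a network) illustrates the self-adjointness in (ii): hinged conditions \(u=u''=0\) at both endpoints fall into the above class with \(Y_d=\{0\}\), \(S=0\), and the corresponding discrete bi-Laplacian is the square of the Dirichlet second-difference matrix, so its symmetry is manifest.

```python
import numpy as np

# Discrete d^4/dx^4 on (0, 1) with hinged conditions u = u'' = 0 at both
# endpoints: with these conditions the bi-Laplacian discretizes as the
# square of the Dirichlet second-difference matrix D2.
n = 50
h = 1.0 / (n + 1)
D2 = (np.diag(-2.0 * np.ones(n))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2
A = D2 @ D2

# Symmetry (the discrete counterpart of self-adjointness) and strict positivity:
assert np.allclose(A, A.T)
assert np.min(np.linalg.eigvalsh(A)) > 0
print(np.min(np.linalg.eigvalsh(A)))  # close to pi**4, the first beam eigenvalue
```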

Proof

(i)\(\Rightarrow \)(ii) Because \({\mathcal {A}}\) is an extension of the minimal realization \({\mathcal {A}}_0\), hence a restriction of the maximal realization \({\mathcal {A}}_{\max }\), \({\mathcal {A}}\) has the same form as the operator matrix in (2.2).

Let \({\mathfrak {u}}\in D({\mathcal {A}})\) and \({\mathfrak {v}}=\begin{pmatrix}v\\ \phi \end{pmatrix}\in D({\mathcal {A}}_{\max })\): self-adjointness of \({\mathcal {A}}\) amounts to the condition that

$$\begin{aligned} ({\mathcal {A}}{\mathfrak {u}},{\mathfrak {v}})_{L^2({\mathcal {G}})\oplus Y_d}&=\int _{\mathcal {G}}u'''' {\overline{v}}\, dx -(P_{Y_d}\Gamma ^\circ u,\phi )_{Y_d} \\&=\int _{\mathcal {G}}u\overline{v''''}\,dx -(P_{Y_d}\Gamma ^\circ u,\phi )_{Y_d}+(\Gamma ^\circ u,\Gamma _\circ v)-(\Gamma _\circ u,\Gamma ^\circ v) \\&=\int _{\mathcal {G}}u\overline{v''''}\,dx -(P_{Y_d}\Gamma ^\circ u,\phi )_{Y_d}+\left( P_{Y_d}\Gamma ^\circ u,P_{Y_d}\Gamma _\circ v\right) _{Y_d}\\ {}&\qquad -\left( P_{Y_d}\Gamma _\circ u,P_{Y_d}\Gamma ^\circ v\right) _{Y_d}+\left( P_{Y_s}\Gamma ^\circ u,P_{Y_s}\Gamma _\circ v\right) _{Y_s}\\ {}&\qquad -\left( P_{Y_s}\Gamma _\circ u,P_{Y_s}\Gamma ^\circ v\right) _{Y_s}+(P_{Y^\perp }\Gamma ^\circ u,P_{Y^\perp }\Gamma _\circ v)_{Y^\perp }\\ {}&\qquad -(P_{Y^\perp }\Gamma _\circ u,P_{Y^\perp }\Gamma ^\circ v)_{Y^\perp } \end{aligned}$$

agrees with

$$\begin{aligned} ({\mathfrak {u}},{\mathcal {A}}{\mathfrak {v}})_{L^2({\mathcal {G}})\oplus Y_d}&=\int _{\mathcal {G}}u \overline{ v''''}\, dx -(P_{Y_d}\Gamma _\circ u,P_{Y_d}\Gamma ^\circ v)_{Y_d}. \end{aligned}$$

Therefore, the boundary terms must vanish. Since \(\Gamma _\circ u\in Y\), one has

$$\begin{aligned} \left\{ \begin{aligned}&(P_{Y_d}\Gamma ^\circ u,\phi )_{Y_d}-\left( P_{Y_d}\Gamma ^\circ u,P_{Y_d}\Gamma _\circ v\right) _{Y_d} =0,\\&(P_{Y^\perp }\Gamma ^\circ u,P_{Y^\perp }\Gamma _\circ v)_{Y^\perp }=0 ,\\&\left( P_{Y_s}\Gamma ^\circ u,P_{Y_s}\Gamma _\circ v\right) _{Y_s}-\left( P_{Y_s}\Gamma _\circ u,P_{Y_s}\Gamma ^\circ v\right) _{Y_s}=0. \end{aligned} \right. \end{aligned}$$

The first equality \((P_{Y_d}\Gamma ^\circ u,\phi )_{Y_d}=(P_{Y_d}\Gamma ^\circ u,P_{Y_d}\Gamma _\circ v)_{Y_d}\) shows that \(\phi =P_{Y_d}\Gamma _\circ v\). The remaining conditions are

$$\begin{aligned} \left\{ \begin{aligned}&(P_{Y^\perp }\Gamma ^\circ u,P_{Y^\perp }\Gamma _\circ v)_{Y^\perp }=0 ,\\&\left( P_{Y_s}\Gamma ^\circ u,P_{Y_s}\Gamma _\circ v\right) _{Y_s}-\left( P_{Y_s}\Gamma _\circ u,P_{Y_s}\Gamma ^\circ v\right) _{Y_s}=0, \end{aligned} \right. \end{aligned}$$
(2.5)

from which it follows that S is self-adjoint and \({\mathfrak {v}}\in D({\mathcal {A}})\). Indeed, from the first condition of (2.5) one straightforwardly obtains that \(\Gamma _\circ v\in Y\). From the last one, using the fact that \(P_{Y_s}(\Gamma ^\circ u+SP_{Y_s}\Gamma _\circ u)=0\) one obtains \(\left( SP_{Y_s}\Gamma _\circ u,P_{Y_s}\Gamma _\circ v\right) _{Y_s}-\left( P_{Y_s}\Gamma _\circ u,P_{Y_s}\Gamma ^\circ v\right) _{Y_s}=0\) and hence S needs to be self-adjoint and \(P_{Y_s}(\Gamma ^\circ v+SP_{Y_s}\Gamma _\circ v)=0\).

(ii)\(\Rightarrow \)(i) In order to prove self-adjointness of \({\mathcal {A}}\) we have to establish two facts: (a) if \({\mathfrak {u}}\) and \({\mathfrak {v}}\) belong to \(D({\mathcal {A}})\), then (2.5) holds, and (b) if \({\mathfrak {u}}\in D({\mathcal {A}})\) and (2.5) holds, then \({\mathfrak {v}}\in D({\mathcal {A}})\). If \({\mathfrak {u}},{\mathfrak {v}}\) belong to \(D({\mathcal {A}})\), then the equalities (2.5) hold, which takes care of (a). If instead \({\mathfrak {u}}\in D({\mathcal {A}})\) and (2.5) holds with S self-adjoint, then, as shown above, \({\mathfrak {v}}\in D({\mathcal {A}})\), which is (b). \(\square \)

This motivates us to impose the following throughout this section.

Assumptions 2.2

Y is a subspace of \({\mathbb {C}}^{4E}\), \(Y_d,Y_s\) are orthogonal subspaces of Y such that \(Y=Y_d\oplus Y_s\), and S is a linear operator on \(Y_s\).

Let us recall a celebrated result due to J. Kisyński: given a closed, densely defined operator A on a Banach space X, generation of a cosine operator function by A is equivalent to the existence of a space V such that \(D(A)\hookrightarrow V\hookrightarrow X\) and that the part of the operator matrix

$$\begin{aligned} \begin{pmatrix} 0 &{}\quad I\\ A &{}\quad 0 \end{pmatrix} \end{aligned}$$

in \(V\oplus X\) generates a strongly continuous semigroup, see [1, Thm. 3.14.11]. In this case, V is unique and is often called the Kisyński space in the literature.

Lemma 2.3

Under the Assumptions 2.2, the operator \(-{\mathcal {A}}\) associated with the form

$$\begin{aligned} a({\mathfrak {u}},{\mathfrak {v}}):=\int _{\mathcal {G}}u'' \overline{ v''}\, dx -\left( SP_{Y_s} \Gamma _\circ u,P_{Y_s}\Gamma _\circ v \right) _{Y_s} \end{aligned}$$

with domain

$$\begin{aligned} {\mathcal {V}}:=\left\{ \begin{pmatrix}u\\ \theta \end{pmatrix}\in {\widetilde{H}}^2({\mathcal {G}})\oplus Y_d:\Gamma _\circ u\in Y,\ P_{Y_d}\Gamma _\circ u=\theta \right\} \end{aligned}$$

generates on \(L^2({\mathcal {G}})\oplus Y_d\) a cosine operator function with Kisyński space \({\mathcal {V}}\).

Proof

The sesquilinear form a is the same form introduced in [24], whereas \({\mathcal {V}}\) is isomorphic to

$$\begin{aligned} {\widetilde{H}}^2_Y({\mathcal {G}}):=\left\{ u\in {\widetilde{H}}^2({\mathcal {G}}):\Gamma _\circ u\in Y\right\} . \end{aligned}$$

We have already checked in [24, Thm. 4.3] that a is densely defined and continuous. Let

$$\begin{aligned} j:{\widetilde{H}}^2_Y({\mathcal {G}})\ni u\mapsto \begin{pmatrix} u\\ P_{Y_d} \Gamma _\circ u \end{pmatrix} \in L^2({\mathcal {G}})\oplus Y_d: \end{aligned}$$

it is clear that this map has dense range, hence a is a j-elliptic form in the sense of [3, § 2], and the associated operator in the sense of [3, Thm. 2.1] agrees with the operator associated with a with domain \({\mathcal {V}}\). Because

$$\begin{aligned} |\mathfrak {I}a({\mathfrak {u}},{\mathfrak {u}})|\le c\Vert S\Vert _{{\mathcal {L}}(Y_s)}\Vert u\Vert _{{\widetilde{H}}^2({\mathcal {G}})}\Vert j({\mathfrak {u}})\Vert _{L^2({\mathcal {G}})\oplus Y_d}, \end{aligned}$$

the assertion then follows by a direct application of [33, Prop. 2.4]. \(\square \)

We are finally in a position to prove the main result of this section; we re-introduce the boundary term \(D=P_{Y_d}RP_{Y_d}\) which we discussed at the beginning of this section, along with further perturbing terms.

Theorem 2.4

Under the Assumptions 2.2 let, for all \({\mathsf {e}}\in {\mathsf {E}},\) \( p_{\mathsf {e}}\in L^\infty (0,\ell _{\mathsf {e}})\) be real-valued such that \(p_{\mathsf {e}}(x) \ge P_{\mathsf {e}}\) for some \(P_{\mathsf {e}}>0\) and a.e. \(x\in (0,\ell _{\mathsf {e}})\) and let \(\Pi \) be a self-adjoint, positive definite operator on \(Y_d\). Then for all \(D\in {\mathcal {L}}(Y_d)\)

$$\begin{aligned} -\tilde{{\mathcal {A}}}=\begin{pmatrix}-p\frac{d^4}{dx^4} &{}\quad 0\\ \Pi P_{Y_d}\Gamma ^{\circ } &{}\quad D\end{pmatrix} \end{aligned}$$
(2.6)

with domain

$$\begin{aligned} D(\tilde{{\mathcal {A}}})=\bigg \{ \begin{pmatrix}u\\ \mathbf{\theta }\end{pmatrix}\in {\widetilde{H}}^4({\mathcal {G}})\oplus Y_d: \Gamma _\circ u\in Y,\ P_{Y_d}\Gamma _\circ u=\mathbf{\theta }, P_{Y_s}\left( \Gamma ^\circ u+SP_{Y_s}\Gamma _\circ u\right) =0\bigg \} \end{aligned}$$

generates on \(L^2({\mathcal {G}})\oplus Y_d\) a cosine operator function with Kisyński space \({\mathcal {V}}\).

Proof

Under our assumptions we can endow \(L^2({\mathcal {G}})\oplus Y_d\) with the inner product

$$\begin{aligned} \left( \begin{pmatrix}u\\ \theta \end{pmatrix},\begin{pmatrix}v\\ \phi \end{pmatrix}\right) :=\sum \limits _{{\mathsf {e}}\in {\mathsf {E}}}\int _0^{\ell _{\mathsf {e}}} \frac{1}{p_{\mathsf {e}}(x)}u_{\mathsf {e}}(x)\overline{v_{\mathsf {e}}(x)}\,dx+\left( \Pi \theta ,\phi \right) _{Y_d}. \end{aligned}$$
(2.7)

This is again a Hilbert space; in fact, the new inner product is equivalent to the canonical one, hence the two Hilbert spaces are isomorphic.

Let us first consider the case \(D=0\). A direct computation similar to that preceding Theorem 2.1 shows that \(\tilde{{\mathcal {A}}}\) is the operator associated with a on \(L^2({\mathcal {G}})\oplus Y_d\) with respect to the above inner product; arguing just like in Lemma 2.3, we see that \(-\tilde{{\mathcal {A}}}\) generates a cosine operator function on \(L^2({\mathcal {G}})\oplus Y_d\) with respect to the inner product in (2.7), and hence also with respect to the equivalent canonical inner product.

In order to complete the proof, it suffices to observe that the sesquilinear form

$$\begin{aligned} b({\mathfrak {u}},{\mathfrak {v}}):=(D\theta ,\phi ),\qquad \begin{pmatrix} u\\ \theta \end{pmatrix},\begin{pmatrix} v\\ \phi \end{pmatrix}\in L^2({\mathcal {G}})\oplus Y_d, \end{aligned}$$

is bounded. Thus, the operator associated with the sesquilinear form \(a+b\) in \(L^2({\mathcal {G}})\oplus Y_d\) with respect to the inner product in (2.7)—i.e. \(\tilde{{\mathcal {A}}}\) in (2.6)—is again the generator of a cosine operator function in \(L^2({\mathcal {G}})\oplus Y_d\) with respect to the canonical inner product. \(\square \)

Remark 2.5

  1. (1)

    We stress that while the case of \(D\ne 0\) could also be dealt with as a bounded perturbation of a well-behaved operator, the case of \(S\ne 0\) cannot and requires the specific treatment in Lemma 2.3.

  2. (2)

    One sees that \(\tilde{{\mathcal {A}}}\) is self-adjoint—or equivalently the associated sesquilinear form \({\tilde{a}}\), i.e.,

    $$\begin{aligned}&{{\tilde{a}}}({\mathfrak {u}},{\mathfrak {v}})=\int _{{\mathcal {G}}} u'' \overline{v''}\, dx-\left( S P_{Y_s}\Gamma _\circ u,P_{Y_s}\Gamma _\circ v \right) _{Y_s}-\left( DP_{Y_d} \Gamma _\circ u,P_{Y_d}\Gamma _\circ v \right) _{Y_d}, {\mathfrak {u}},{\mathfrak {v}}\in {\mathcal {V}},\nonumber \\ \end{aligned}$$
    (2.8)

    is symmetric—if and only if S and D are self-adjoint operators. Furthermore, \(\tilde{{\mathcal {A}}}\) is self-adjoint and positive semi-definite—or equivalently the associated sesquilinear form \({\tilde{a}}\) is symmetric and accretive—if S and D are self-adjoint and negative semi-definite operators; this condition is however not necessary, even in the simple case of \(Y_d=\{0\}\), a counterexample being the Krein–von Neumann extension of \(\Delta ^2_{|C^\infty _c(0,1)}\) discussed in [24, Exa. 4.6].

As a consequence of Theorem 2.4, for all \({\mathfrak {f}}\in D({\tilde{{\mathcal {A}}}})\) and all \({\mathfrak {g}}\in {\mathcal {V}}\) there exists a unique solution

$$\begin{aligned} {\mathfrak {u}}(t):=C(t,-{\tilde{{\mathcal {A}}}}){\mathfrak {f}}+S(t,-{\tilde{{\mathcal {A}}}}){\mathfrak {g}}\end{aligned}$$

of the second-order abstract Cauchy problem associated to \({\tilde{{\mathcal {A}}}}\)

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial ^2 {\mathfrak {u}}}{\partial t^2}(t,x)=-{\tilde{{\mathcal {A}}}} {\mathfrak {u}}(t,x), &{}t\ge 0,\, x\in {\mathcal {G}},\\ {\mathfrak {u}}(0,x)={\mathfrak {f}}(x), &{}x\in {\mathcal {G}},\\ \frac{\partial {\mathfrak {u}}}{\partial t}(0,x)={\mathfrak {g}}(x), &{}x\in {\mathcal {G}}, \end{array}\right. } \end{aligned}$$
(2.9)

where \((C(t,-{\tilde{{\mathcal {A}}}}))_{t\in {\mathbb {R}}}\) is the cosine operator function generated by \(-{\tilde{{\mathcal {A}}}}\) and \((S(t,-{\tilde{{\mathcal {A}}}}))_{t\in {\mathbb {R}}}\) denotes the sine operator function generated by \(-{\tilde{{\mathcal {A}}}}\), which is defined by

$$\begin{aligned} S(t,-{\tilde{{\mathcal {A}}}}){\mathfrak {f}}:=\int _0^t C(s,-{\tilde{{\mathcal {A}}}}){\mathfrak {f}}\ ds,\qquad t\in {\mathbb {R}},\ {\mathfrak {f}}\in L^2({\mathcal {G}})\oplus Y_d. \end{aligned}$$
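In finite dimensions the cosine and sine operator functions can be written down by spectral calculus, which may help fix ideas; the following sketch (ours, with an arbitrary symmetric positive definite matrix standing in for \({\tilde{{\mathcal {A}}}}\)) checks numerically that \(u(t)=C(t){\mathfrak {f}}+S(t){\mathfrak {g}}\) solves the second-order problem.

```python
import numpy as np

# For a symmetric positive definite matrix A, spectral calculus gives
#   C(t) = cos(t A^{1/2}),   S(t) = A^{-1/2} sin(t A^{1/2}),
# and u(t) = C(t) f + S(t) g solves u'' = -A u, u(0) = f, u'(0) = g.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)  # symmetric positive definite test matrix
lam, Q = np.linalg.eigh(A)
sq = np.sqrt(lam)

def C(t):  # cosine operator function
    return Q @ np.diag(np.cos(t * sq)) @ Q.T

def S(t):  # sine operator function
    return Q @ np.diag(np.sin(t * sq) / sq) @ Q.T

f = rng.standard_normal(5)
g = rng.standard_normal(5)
u = lambda t: C(t) @ f + S(t) @ g

# Check the initial conditions and the equation u'' = -A u at t = 1
# via a central second difference:
t, h = 1.0, 1e-4
u_tt = (u(t + h) - 2.0 * u(t) + u(t - h)) / h**2
assert np.allclose(C(0.0), np.eye(5)) and np.allclose(S(0.0), 0.0 * A)
assert np.allclose(u_tt, -A @ u(t), atol=1e-3)
```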

Moreover, our approach based on forms and cosine operator functions allows us to derive the well-posedness of the damped wave equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial ^2 {\mathfrak {u}}}{\partial t^2}(t)=-{\tilde{{\mathcal {A}}}} \big ({\mathfrak {u}}(t)+\kappa \frac{\partial {\mathfrak {u}}}{\partial t}(t)\big ), &{}t\ge 0,\\ {\mathfrak {u}}(0)={\mathfrak {f}},\\ \frac{\partial {\mathfrak {u}}}{\partial t}(0)={\mathfrak {g}}, \end{array}\right. } \end{aligned}$$
(2.10)

for all \(\kappa \in {\mathbb {C}}\): by Mugnolo [39, Thm. 3.1], (2.10) is governed by an analytic semigroup of angle \(\frac{\pi }{2}\).

Remark 2.6

The generation result in Theorem 2.4 lies within the scope of [38, Thm. 5.3]. In practice, however, the conditions proposed there are very difficult to check, since cosine function generation by \(-{{\mathcal {A}}}\) is shown to be equivalent to cosine function generation by a rather nasty perturbation of \(-\frac{d^4}{dx^4}\) with transmission conditions \(\Gamma _\circ u\in Y_s\); while the latter bi-Laplacian realization is well-behaved by the theory developed in [24], unbounded perturbation theory for cosine operator functions is a notoriously tricky business.

Theorem 2.4 can also be compared with some results in [20]. For example, all cases where \(Y_d\ne Y_1\oplus \{0_{{\mathbb {C}}^{2E}}\}\) are covered by Theorem 2.4 but seemingly not by [20, Thm. 8].

Beam equations with dynamic boundary conditions have often been considered in the literature and can be studied with our formalism. In order to make this formalism more concrete, we give some examples.

Example 2.7

  1. 1.

    Fix a vertex \({\mathsf {v}}_1\) and consider the beam equation with the following transmission conditions:

    1. (i)

      continuity for u and \(u''\) in every vertex \({\mathsf {v}}\in {\mathsf {V}}\);

    2. (ii)

      Kirchhoff condition for the normal derivative and third normal derivative in every vertex except for \({\mathsf {v}}_1\);

    3. (iii)

      dynamic condition in \({\mathsf {v}}_1\): \(\displaystyle \frac{\partial ^2 u}{\partial t^2}({\mathsf {v}}_1)=\sum _{{\mathsf {e}}\sim {\mathsf {v}}_1} \frac{\partial u''_{\mathsf {e}}}{\partial \nu }({\mathsf {v}}_1);\)

    4. (iv)

      compatibility condition in \({\mathsf {v}}_1\) : \(\displaystyle u''({\mathsf {v}}_1)=\sum _{{\mathsf {e}}\sim {\mathsf {v}}_1} \frac{\partial u_{\mathsf {e}}}{\partial \nu }({\mathsf {v}}_1).\)

    With the formalism of Theorem 2.1, the above vertex conditions correspond to a second-order abstract Cauchy problem (2.9), where the operator \(\tilde{{\mathcal {A}}}\) is determined by the choice of the subspaces

    $$\begin{aligned} Y&=\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle \oplus \left( {\mathbb {C}}^{\deg {\mathsf {v}}_1}\oplus \bigoplus _{{\mathsf {v}}\ne {\mathsf {v}}_1} \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle ^\perp \right) ,\\ Y_d&=\left( \langle {{\mathbf {1}}}_{{\mathsf {E}}_{{\mathsf {v}}_1}}\rangle \oplus \bigoplus _{{\mathsf {v}}\ne {\mathsf {v}}_1} \{0\}\right) \oplus \bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} \{0\},\\ Y_s&=\left( \{0_{{\mathsf {v}}_1}\}\oplus \bigoplus _{{\mathsf {v}}\ne {\mathsf {v}}_1} \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle \right) \oplus \left( {\mathbb {C}}^{\deg {\mathsf {v}}_1}\oplus \bigoplus _{{\mathsf {v}}\ne {\mathsf {v}}_1} \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle ^\perp \right) ; \end{aligned}$$

    here \({{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\) denotes the characteristic function of the set \({\mathsf {E}}_{\mathsf {v}}\) of edges incident with any given vertex \({\mathsf {v}}\).

  2. Castro and Zuazua have discussed the boundary controllability of a non-degenerate ([11]) and a degenerate ([10]) equation modeling the vibrations of two flexible beams connected by a point mass; remarkably, the model in [10] involves two dynamic conditions. The transmission conditions they consider at the connecting vertex can be written in our formalism by taking \(D=S=0\) and \(Y=Y_d= \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle \oplus \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle ^\perp \) with \(Y_s=\{0_{{\mathbb {C}}^{2}}\}\oplus \{0_{{\mathbb {C}}^{2}}\}\); and \(D=S=0\) and \(Y= \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle \oplus \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle ^\perp \) with \(Y_d=\langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle \oplus \{0_{{\mathbb {C}}^{2}}\}\) and \(Y_s=\{0_{{\mathbb {C}}^{2}}\}\oplus \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle ^\perp \), respectively.

Let us now investigate conservation of energy for the beam equation. To this end, we introduce the energy-type functionals on \(L^2({\mathcal {G}})\oplus Y_d\)

$$\begin{aligned} K(t):=\frac{1}{2}\left\| \frac{\partial {\mathfrak {u}}}{\partial t}(t)\right\| ^2_{L^2({\mathcal {G}})\oplus Y_d},\quad P(t):=\frac{1}{2}{\tilde{a}}({\mathfrak {u}}(t)),\quad E(t):=K(t)+P(t)\ , \end{aligned}$$

for any solution \({\mathfrak {u}}=\left( {\begin{matrix}u\\ P_{Y_d}\Gamma _\circ u \end{matrix}}\right) \) of (2.9), where \({\tilde{a}}\) is the sesquilinear form in (2.8).

Lemma 2.8

Under the assumptions of Theorem 2.4, let \(\tilde{{\mathcal {A}}}\) be self-adjoint and positive semi-definite. Then the total energy E of (2.9) is conserved, i.e., it is a constant (over time) that only depends on the initial data \({\mathfrak {f}}\in D(\tilde{{\mathcal {A}}}),{\mathfrak {g}}\in {\mathcal {V}}\).

Proof

Let us first observe that

$$\begin{aligned} K(t)=\frac{1}{2}\int _{\mathcal {G}}\left( \frac{\partial u}{\partial t}(t,x)\right) ^2\,dx+\frac{1}{2}\left( \frac{d}{dt}\Gamma _\circ u,\frac{d}{dt}\Gamma _\circ u\right) _{Y_d} \end{aligned}$$

and

$$\begin{aligned} P(t)= & {} \frac{1}{2}\int _{\mathcal {G}}\left( \frac{\partial ^2 u}{\partial x^2}(t,x)\right) ^2\,dx+\frac{1}{2}\Vert (-D)^{\frac{1}{2}}P_{Y_d}\Gamma _\circ u\Vert _{Y_d}^2\\\qquad&+\frac{1}{2}\Vert (-S)^{\frac{1}{2}}P_{Y_s}\Gamma _\circ u\Vert _{Y_s}^2. \end{aligned}$$

Differentiating the energy of a given solution \({\mathfrak {u}}\) with respect to t and integrating by parts one obtains

$$\begin{aligned} \begin{aligned} \frac{dE}{dt}(t)&=\int _{\mathcal {G}}\left( \frac{\partial u}{\partial t}\frac{\partial ^2 u}{\partial t^2}+\frac{\partial ^2 u}{\partial x^2}\frac{\partial ^3 u}{\partial t\partial x^2}\right) \,dx+\left( \frac{d}{dt}\Gamma _\circ u,\frac{d^2}{dt^2}\Gamma _\circ u\right) _{Y_d}\\ {}&\qquad +\frac{1}{2}\frac{d}{d t}\Vert (-D)^{\frac{1}{2}}P_{Y_d}\Gamma _\circ u\Vert _{Y_d}^2+\frac{1}{2}\frac{d}{d t}\Vert (-S)^{\frac{1}{2}}P_{Y_s}\Gamma _\circ u\Vert _{Y_s}^2\\&=\int _{\mathcal {G}}\left( \frac{\partial u}{\partial t}\frac{\partial ^2 u}{\partial t^2}+\frac{\partial u}{\partial t}\frac{\partial ^4 u}{\partial x^4}\right) \,dx-\left( \frac{d}{dt} \Gamma _\circ u,\Gamma ^\circ u\right) +\left( \frac{d}{dt}\Gamma _\circ u,\frac{d^2}{dt^2}\Gamma _\circ u\right) _{Y_d}\\ {}&\qquad -\left( \frac{d}{dt}\Gamma _\circ u,DP_{Y_d}\Gamma _\circ u\right) _{Y_d}-\left( \frac{d}{dt}\Gamma _\circ u,SP_{Y_s}\Gamma _\circ u\right) _{Y_s}\\&=\left( \frac{d}{dt}\Gamma _\circ u, -\Gamma ^\circ u+\frac{d^2}{d t^2}\Gamma _\circ u-DP_{Y_d}\Gamma _\circ u\right) _{Y_d}\\ {}&\qquad -\left( \frac{d}{dt}\Gamma _\circ u, \Gamma ^\circ u+SP_{Y_s}\Gamma _\circ u\right) _{Y_s}=0 \end{aligned} \end{aligned}$$

since \(P_{Y_d}\left( \frac{d^2}{d t^2}\Gamma _\circ u(t)-\Gamma ^\circ u(t)-DP_{Y_d}\Gamma _\circ u(t)\right) =0\) for all \(t\ge 0\) and \(P_{Y_s}(\Gamma ^\circ u+SP_{Y_s}\Gamma _\circ u)=0\). \(\square \)
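The conservation statement can be illustrated by a finite-dimensional analogue of (2.9). The following Python sketch is our own illustration (not part of the paper's formalism): a symmetric positive semidefinite matrix plays the role of \(\tilde{{\mathcal {A}}}\), the solution is built via the spectral theorem, and the total energy \(E(t)=\frac{1}{2}|u'(t)|^2+\frac{1}{2}(Au(t),u(t))\) is checked to be constant in time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T                          # symmetric positive semidefinite stand-in for the operator
lam, Q = np.linalg.eigh(A)           # spectral decomposition A = Q diag(lam) Q^T
lam = np.clip(lam, 0.0, None)        # guard against tiny negative round-off
sq = np.sqrt(lam)                    # eigenvalues of the square root B = A^{1/2}

f = rng.standard_normal(n)           # initial displacement
g = rng.standard_normal(n)           # initial velocity

def solution(t):
    """u(t) = cos(tB) f + B^{-1} sin(tB) g and its time derivative."""
    cos_tB = Q @ np.diag(np.cos(t * sq)) @ Q.T
    # sin(t s)/s equals t*sinc(t s/pi), which also handles s = 0 correctly
    sinc_tB = Q @ np.diag(t * np.sinc(t * sq / np.pi)) @ Q.T
    u = cos_tB @ f + sinc_tB @ g
    v = -(Q @ np.diag(sq * np.sin(t * sq)) @ Q.T) @ f + cos_tB @ g
    return u, v

def energy(t):
    """E(t) = kinetic plus potential energy, the discrete analogue of K(t) + P(t)."""
    u, v = solution(t)
    return 0.5 * v @ v + 0.5 * u @ (A @ u)

E0 = energy(0.0)
assert all(abs(energy(t) - E0) < 1e-8 * (1 + E0) for t in (0.3, 1.7, 5.0))
```

The assertion mirrors the computation \(\frac{dE}{dt}=0\) in the proof above; with a dissipative damping term added, the same check would instead show energy decay.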

This motivates us to introduce the notation

$$\begin{aligned} E({\mathfrak {f}},{\mathfrak {g}}) \end{aligned}$$

for the total energy of (2.9) with initial data \({\mathfrak {f}},{\mathfrak {g}}\). Beginning with [25], J.A. Goldstein and coauthors have studied whether wave-like equations enjoy equipartition of energy, i.e., whether

$$\begin{aligned} \lim _{t\rightarrow \pm \infty } K(t)=\lim _{t\rightarrow \pm \infty } P(t)=\frac{1}{2}E({\mathfrak {f}},{\mathfrak {g}})\quad \hbox {for all }{\mathfrak {f}}\in D(\tilde{{\mathcal {A}}}),\ {\mathfrak {g}}\in {\mathcal {V}}. \end{aligned}$$

Proposition 2.9

Under the assumptions of Theorem 2.4, let \(\tilde{{\mathcal {A}}}\) be self-adjoint and positive semi-definite. Then equipartition of energy fails for (2.9).

Proof

Let \({\mathcal {B}}\) be the positive semi-definite square root of \(\tilde{{\mathcal {A}}}\); it is well known that its domain agrees with \({\mathcal {V}}\). It is also known that (2.9) enjoys equipartition of energy if and only if

$$\begin{aligned} \lim _{s\rightarrow \pm \infty }(e^{is{\mathcal {B}}}\phi ,\phi )_{L^2({\mathcal {G}})\oplus Y_d}=0\quad \hbox {for all }\phi \in L^2({\mathcal {G}})\oplus Y_d, \end{aligned}$$

see [25, Thm. and the text around (14)]. Now, because \({\mathcal {G}}\) is finite, \({\mathcal {V}}\) is compactly embedded in \(L^2({\mathcal {G}})\oplus Y_d\) and hence \({\mathcal {B}}\) has compact resolvent: accordingly, there exists an orthonormal basis of \(L^2({\mathcal {G}})\oplus Y_d\) consisting of eigenvectors of \({\mathcal {B}}\). Let \(\lambda \) be an eigenvalue of \({\mathcal {B}}\) and \(\phi \) a corresponding normalized eigenvector: then

$$\begin{aligned} (e^{is{\mathcal {B}}}\phi ,\phi )_{L^2({\mathcal {G}})\oplus Y_d}= e^{is\lambda }{\not \rightarrow } 0, \end{aligned}$$

showing that equipartition of energy fails to hold. \(\square \)
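Concretely, the failure is already visible for a single eigenmode; the following worked computation (ours, in the notation of the proof) makes the oscillation explicit.

```latex
% Take initial data f = \phi with B\phi = \lambda\phi, \lambda > 0, and g = 0.
% Then u(t) = \cos(\lambda t)\,\phi solves (2.9), so that
\begin{aligned}
K(t)=\tfrac12\lambda^2\sin^2(\lambda t),\qquad
P(t)=\tfrac12\lambda^2\cos^2(\lambda t),\qquad
E({\mathfrak f},{\mathfrak g})=\tfrac12\lambda^2 .
\end{aligned}
% K and P oscillate between 0 and E, and neither tends to E/2 as t -> \pm\infty.
```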

In the proof of Proposition 2.9 we have used the square root \({\mathcal {B}}\) of our operator matrix \(\tilde{{\mathcal {A}}}\). While the proof only relies on general properties of square roots, it is sometimes possible to describe \({\mathcal {B}}\) more explicitly; this is interesting because it delivers a more explicit formula for the cosine and sine operator functions generated by \(-\tilde{{\mathcal {A}}}\). Indeed, because \({\mathcal {B}}\) is self-adjoint, \(i{\mathcal {B}}\) generates a unitary group, and it is known that \(C(t,-\tilde{{\mathcal {A}}})=\cos (t{\mathcal {B}})\), i.e.,

$$\begin{aligned} C(t,-\tilde{{\mathcal {A}}})=\frac{e^{it{\mathcal {B}}}+e^{-it{\mathcal {B}}}}{2},\qquad t\in {\mathbb {R}}, \end{aligned}$$

cf. [1, Exa. 3.14.15].
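This identity can be sanity-checked numerically with a matrix in place of \(\tilde{{\mathcal {A}}}\) (again a sketch of ours, not the paper's construction): the operator \(C(t)=\frac{1}{2}(e^{it{\mathcal {B}}}+e^{-it{\mathcal {B}}})\) should satisfy \(C''(t)f=-\tilde{{\mathcal {A}}}C(t)f\) and \(C(0)=I\).

```python
import numpy as np
from scipy.linalg import expm, sqrtm

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T                        # self-adjoint, positive semidefinite
B = sqrtm(A).real                  # B = A^{1/2}, so that A = B^2

def C(t):
    # cosine operator function built from the unitary group generated by iB
    return (expm(1j * t * B) + expm(-1j * t * B)).real / 2

f = rng.standard_normal(n)
t, h = 0.8, 1e-4
# central second difference approximates C''(t) f
d2 = (C(t + h) @ f - 2 * C(t) @ f + C(t - h) @ f) / h**2
assert np.allclose(d2, -A @ (C(t) @ f), atol=1e-3)   # C'' = -A C
assert np.allclose(C(0.0), np.eye(n))                # C(0) = identity
```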

Example 2.10

  1. In [24] we have reviewed stationary transmission conditions that appear in several models of beam networks in the literature (especially in [7, 8, 15, 16, 27]), showing that they fit in our scheme; additionally, we have considered the bi-Laplacian with continuity conditions across the vertices and zero conditions on the first, second, and third derivatives at the endpoints of each edge, and determined its Friedrichs and Krein–von Neumann extensions. All these realizations satisfy the assumptions of Lemma 2.8 and Proposition 2.9, leading to conservation of energy and failure of equipartition of energy. On the other hand, only the transmission conditions in [24, Exa. 3.2], taken from [15], lead to a realization of the fourth derivative that is the square of an operator.

  2. A Laplacian realization on a network with continuity conditions at each vertex \({\mathsf {v}}_1,\ldots ,{\mathsf {v}}_n\), complemented by dynamic conditions at \({\mathsf {v}}_1\) and (stationary) Kirchhoff conditions at \({\mathsf {v}}_2,\ldots ,{\mathsf {v}}_n\), has been studied by the second author and S. Romanelli: we refer to [34] for more details and an overview of earlier appearances of this model in the mathematical and biological literature. It can be written as

    $$\begin{aligned} {\mathcal {B}}=\begin{pmatrix} -\frac{d^2}{dx^2}&{}\quad 0\\ P_{{{\tilde{Y}}}_d}\gamma ^\circ &{}\quad 0 \end{pmatrix} \end{aligned}$$

    with domain

    $$\begin{aligned} D({\mathcal {B}})=\left\{ \begin{pmatrix}u\\ \xi \end{pmatrix}\in {\widetilde{H}}^2({\mathcal {G}})\oplus {{\tilde{Y}}}_d:\gamma _\circ u\in {{\tilde{Y}}},\ P_{{{\tilde{Y}}}_d}\gamma _\circ u=\xi , P_{{{\tilde{Y}}}_s} \gamma ^\circ u=0 \right\} , \end{aligned}$$

    with

    $$\begin{aligned} {\tilde{Y}}=\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}} \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle ,\quad {\tilde{Y}}_d=\langle {{\mathbf {1}}}_{{\mathsf {E}}_{{\mathsf {v}}_1}}\rangle \oplus \bigoplus _{{\mathsf {v}}\ne {\mathsf {v}}_1} \{ 0_{\mathsf {v}}\},\quad {\tilde{Y}}_s=\{0_{{\mathsf {v}}_1}\}\oplus \bigoplus _{{\mathsf {v}}\ne {\mathsf {v}}_1} \langle {{\mathbf {1}}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle , \end{aligned}$$

    where

    $$\begin{aligned} \gamma _\circ :u\mapsto \begin{pmatrix} (u_{\mathsf {e}}(0))_{{\mathsf {e}}\in {\mathsf {E}}}\\ (u_{\mathsf {e}}(\ell _{\mathsf {e}}))_{{\mathsf {e}}\in {\mathsf {E}}} \end{pmatrix},\quad \gamma ^\circ :u\mapsto \begin{pmatrix} -(u'_{\mathsf {e}}(0))_{{\mathsf {e}}\in {\mathsf {E}}}\\ (u'_{\mathsf {e}}(\ell _{\mathsf {e}}))_{{\mathsf {e}}\in {\mathsf {E}}} \end{pmatrix}. \end{aligned}$$

    Taking the square of this operator leads to a bi-Laplacian realization that fits the scheme of our Theorem 2.1. Indeed, the square of this Laplacian is precisely the bi-Laplacian presented in Example 2.7.(1).

3 Parabolic Theory of Polyharmonic Operators with Boundary Conditions on Networks

In this section we extend the theory developed in the previous section to the study of jth powers of the Laplacian, for generic \(j\ge 2\). It turns out that the formalism introduced above allows us to discuss parabolic problems driven by general polyharmonic operators under very general (stationary or dynamic) boundary conditions.

It is easy to prove by induction that for all \(j\in {\mathbb {N}}\)

$$\begin{aligned} \begin{aligned} \int _0^\ell (-1)^ju^{(2j)}(x) \overline{v(x)}dx -\int _0^\ell u^{(j)}(x) \overline{v^{(j)}(x)}\,dx&=\sum _{k=0}^{j-1} \left[ (-1)^{j+k} \partial _\nu ^{2j-k-1}u\right] \cdot \overline{\partial _\nu ^{k}v}\\&=: \Gamma ^\circ u \cdot \overline{\Gamma _\circ v}\\ \end{aligned} \end{aligned}$$
(3.1)

where the vectors \(\Gamma ^\circ u,\Gamma _\circ v\in {\mathbb {C}}^{2j}\) are defined using the notation

$$\begin{aligned} \partial ^h_\nu u:={\left\{ \begin{array}{ll} \begin{pmatrix} u^{(h)}(0)\\ u^{(h)}(\ell ) \end{pmatrix}\;\; \quad \hbox {if }h\hbox { is even},\\ \begin{pmatrix} -u^{(h)}(0)\\ u^{(h)}(\ell ) \end{pmatrix}\quad \hbox {if }h\hbox { is odd}.\\ \end{array}\right. } \end{aligned}$$

This clearly suggests introducing the sesquilinear form

$$\begin{aligned} a(u,v):=\int _0^\ell u^{(j)}(x) \overline{v^{(j)}(x)}\,dx-(R\Gamma _\circ u,\Gamma _\circ v)_Y \end{aligned}$$

for all \(u,v\) in the form domain

$$\begin{aligned} {\mathcal {V}}:=\left\{ u\in H^j(0,\ell ):\Gamma _\circ u\in Y \right\} \end{aligned}$$

for any given subspace Y of \({\mathbb {C}}^{2j}\) and any linear operator R on Y. This form is symmetric (and hence the corresponding operator is self-adjoint) if and only if R is self-adjoint; indeed, the corresponding operator A is the operator \((-1)^j \frac{d^{2j}}{dx^{2j}}\) with boundary conditions

$$\begin{aligned} \Gamma _\circ u\in Y,\qquad \Gamma ^\circ u+R\Gamma _\circ u\in Y^\perp . \end{aligned}$$

Following the same ideas as in the proof of [6, Thm. 1.4.4], one can prove that each self-adjoint realization of \((-1)^j \frac{d^{2j}}{dx^{2j}}\) is of this type.
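The integration-by-parts identity (3.1) underlying these boundary conditions can be checked symbolically. The following sketch (our illustration, with generic real polynomials and \(j=2\)) verifies the identity term by term.

```python
import sympy as sp

x, ell = sp.symbols('x ell', positive=True)
j = 2
# generic real polynomials of degree 2j, so that all derivatives up to order 2j occur
a = sp.symbols('a0:5')
b = sp.symbols('b0:5')
u = sum(ai * x**i for i, ai in enumerate(a))
v = sum(bi * x**i for i, bi in enumerate(b))

def dnu(w, h):
    # the vector \partial_nu^h w of (3.1): sign flip at x = 0 when h is odd
    wh = sp.diff(w, x, h)
    sign = -1 if h % 2 else 1
    return (sign * wh.subs(x, 0), wh.subs(x, ell))

# left-hand side of (3.1); conjugation is omitted since all coefficients are real
lhs = sp.integrate((-1)**j * sp.diff(u, x, 2 * j) * v, (x, 0, ell)) \
    - sp.integrate(sp.diff(u, x, j) * sp.diff(v, x, j), (x, 0, ell))
# right-hand side: the boundary pairing Gamma^o u . Gamma_o v
rhs = sum((-1)**(j + k) * dnu(u, 2 * j - k - 1)[i] * dnu(v, k)[i]
          for k in range(j) for i in range(2))
assert sp.expand(lhs - rhs) == 0
```

Setting j = 1 in the same script checks the familiar second-order case.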

Upon replacing \(L^2(0,\ell )\) by \(\bigoplus _{{\mathsf {e}}\in {\mathsf {E}}}L^2(0,\ell _{\mathsf {e}})\oplus Y_d\), scalar-valued functions by \({\mathbb {C}}^{E}\)-valued functions, and the boundary space \({\mathbb {C}}^{2j}\) by \({\mathbb {C}}^{2jE}\), we can consider the sesquilinear form

$$\begin{aligned} {{\tilde{a}}}({\mathfrak {u}},{\mathfrak {v}}):=\int _{{\mathcal {G}}} u^{(j)}(x) \overline{v^{(j)}(x)}\,dx-\left( SP_{Y_s} \Gamma _\circ u,P_{Y_s}\Gamma _\circ v \right) _{Y_s}-\left( DP_{Y_d} \Gamma _\circ u,P_{Y_d}\Gamma _\circ v \right) _{Y_d}, \end{aligned}$$

with domain

$$\begin{aligned} {\mathcal {V}}=\bigg \{ \begin{pmatrix}u\\ \mathbf{\theta }\end{pmatrix}\in {\widetilde{H}}^{j}({\mathcal {G}})\oplus Y_d: \Gamma _\circ u\in Y,\ P_{Y_d}\Gamma _\circ u=\theta \bigg \} \end{aligned}$$
(3.2)

and then extend the above considerations to the case of elliptic operators of order 2j on networks; the essential ideas coincide with those presented in the previous section and we omit the details.

Theorem 3.1

Let \(j\in {\mathbb {N}}\), let \(Y_d\) be a subspace of \({\mathbb {C}}^{2jE}\), and consider the operator

$$\begin{aligned} \begin{aligned} {\mathcal {A}}_0&=(-1)^j\begin{pmatrix} \frac{d^{2j}}{dx^{2j}} &{}\quad 0\\ - P_{Y_d}\Gamma ^{\circ } &{}\quad 0\end{pmatrix}\\ D({\mathcal {A}}_0)&=\left\{ \begin{pmatrix} u\\ \theta \end{pmatrix}\in {\widetilde{H}}^{2j}({\mathcal {G}})\oplus Y_d: P_{Y_d}\Gamma _\circ u=\theta ,\ P^\perp _{Y_d}\Gamma _\circ u=0, \Gamma ^\circ u=0 \right\} , \end{aligned} \end{aligned}$$

where the operators \(\Gamma ^\circ ,\Gamma _\circ \) are defined in (3.1). Then for any extension \({\mathcal {A}}\) of \({\mathcal {A}}_0\) on \(L^2({\mathcal {G}})\oplus Y_d\), the following are equivalent.

  (i) \({\mathcal {A}}\) is self-adjoint.

  (ii) The operator \({\mathcal {A}}\) takes the form

    $$\begin{aligned} {\mathcal {A}}= & {} (-1)^j\begin{pmatrix} \frac{d^{2j}}{dx^{2j}} &{}\quad 0\\ - P_{Y_d}\Gamma ^{\circ } &{}\quad 0\end{pmatrix},\nonumber \\ D({\mathcal {A}})= & {} \bigg \{ \begin{pmatrix}u\\ \mathbf{\theta }\end{pmatrix}\in {\widetilde{H}}^{2j}({\mathcal {G}})\oplus Y_d: \Gamma _\circ u\in Y,\ P_{Y_d}\Gamma _\circ u=\theta ,\nonumber \\&\qquad P_{Y_s}(\Gamma ^\circ u+SP_{Y_s}\Gamma _\circ u)=0\bigg \},\nonumber \\ \end{aligned}$$
    (3.3)

    for some subspace \(Y_s\) of \({\mathbb {C}}^{2jE}\) orthogonal to \(Y_d\) and some self-adjoint linear operator S on \(Y_s\); here \(Y:=Y_d\oplus Y_s\).

In the non-dynamic case of \(Y_d=\{0\}\), \({\mathcal {A}}_0\) satisfies zero boundary conditions on all derivatives up to order \(2j-1\): hence \({\mathcal {A}}_0\) is a symmetric, positive definite operator and we recover the classical characterization of self-adjoint extensions of one-dimensional polyharmonic operators.

Motivated by the above result, we impose the following throughout the remainder of this section.

Assumptions 3.2

\(j\in {\mathbb {N}}\), Y is a subspace of \({\mathbb {C}}^{2jE}\), \(Y_d,Y_s\) are orthogonal subspaces of Y such that \(Y=Y_d\oplus Y_s\), S is a linear operator on \(Y_s\), and D is a linear operator on \(Y_d\).

We can thus state the following, without proof.

Theorem 3.3

Under the Assumptions 3.2, let \(\Pi \) be a self-adjoint, positive definite operator on \(Y_d\). Also, for all \({\mathsf {e}}\in {\mathsf {E}}\), let \( p_{\mathsf {e}}\in L^\infty (0,\ell _{\mathsf {e}})\) be real-valued and such that \(p_{\mathsf {e}}(x) \ge P_{\mathsf {e}}\) for some \(P_{\mathsf {e}}>0\) and a.e. \(x\in (0,\ell _{\mathsf {e}})\). Then for all \(D\in {\mathcal {L}}(Y_d)\)

$$\begin{aligned} -\tilde{{\mathcal {A}}}=(-1)^{j+1}\begin{pmatrix} p\frac{d^{2j}}{dx^{2j}} &{}\quad 0\\ -\Pi P_{Y_d}\Gamma ^{\circ } &{}\quad -D\end{pmatrix} \end{aligned}$$

with domain \(D(\tilde{{\mathcal {A}}})\) as in (3.3) generates a cosine operator function on \(L^2({\mathcal {G}})\oplus Y_d\) with Kisyński space \({\mathcal {V}}\) in (3.2).

Remark 3.4

Theorem 3.3 extends the generation results from [40], where only the case \(j=1\) and \(Y_s=\{0\}\) was considered; in turn, the latter generalized the main assertions from [34], where \(Y=Y_d\) was taken to be the subspace of \({\mathbb {C}}^{2E}\) consisting of those vectors that are vertex-wise constant, i.e., \(\bigoplus _{{\mathsf {v}}\in {\mathsf {V}}}\langle {\mathbf {1}}_{{\mathsf {E}}_{\mathsf {v}}}\rangle \).

By Arendt et al. [1, Thm. 3.14.17], each generator of a cosine operator function also generates an analytic semigroup of angle \(\frac{\pi }{2}\) on the same Banach space. In the next proposition we study some properties of this semigroup. For the sake of simplicity we focus on the case \(p\equiv 1\) and \(\Pi ={{\,\mathrm{Id}\,}}\), but see Remark 3.6 below.

Proposition 3.5

Under the Assumptions 3.2, the following properties hold for the semigroup generated by

$$\begin{aligned} -{\tilde{{\mathcal {A}}}}=(-1)^{j+1}\begin{pmatrix} \frac{d^{2j}}{dx^{2j}}&{} \quad 0\\ -P_{Y_d}\Gamma ^\circ &{}\quad -D\end{pmatrix}. \end{aligned}$$
  (i) \(e^{-t{\tilde{{\mathcal {A}}}}}\) is of trace class for all \(t>0\).

  (ii) If D, S are dissipative, then there exists \(C>0\) such that

    $$\begin{aligned} \Vert e^{-t\tilde{{\mathcal {A}}}}\Vert _{2\rightarrow \infty }\le C(t^{-\frac{1}{4j}} +1) \qquad \hbox {for all }t>0, \end{aligned}$$

    hence in particular \(e^{-t{\tilde{{\mathcal {A}}}}}\) is ultracontractive.

  (iii) \(e^{-t{\tilde{{\mathcal {A}}}}}\) has for all \(t>0\) an integral kernel of class \(L^\infty \).

  (iv) The solution \({\mathfrak {u}}:=e^{-t{\tilde{{\mathcal {A}}}}}{{\mathfrak {f}}}\) of

    $$\begin{aligned} \left\{ \begin{array}{ll} \frac{\partial {\mathfrak {u}}}{\partial t}(t)=-{\tilde{{\mathcal {A}}}} {\mathfrak {u}}(t), &{}\quad t\ge 0,\\ {\mathfrak {u}}(0)={\mathfrak {f}}&{} \\ \end{array} \right. \end{aligned}$$

    satisfies the boundary condition

    $$\begin{aligned} P_{Y_d}\Gamma _\circ \frac{\partial ^{2j}u}{\partial x^{2j}}(t)+P_{Y_d}\Gamma ^\circ u(t)+DP_{Y_d}\Gamma _\circ u(t)=0,\qquad t> 0. \end{aligned}$$

The terms \(\Gamma ^\circ u\) and \(\Gamma _\circ u\) involve differential terms of order up to \(2j-1\) and \(j-1\), respectively. The property in (iv) is therefore surprising: it states that the natural order of the Wentzell-type boundary conditions for an operator of order 2j is not necessarily 2j, as is most usually considered in the literature (for example, in [13, 18, 20, 21]), but rather up to \(3j-1\) (as was already observed in [10] for \(j=2\)).

Proof

  (i) Observe that the form domain \({\mathcal {V}}\) is compactly embedded in \(L^2({\mathcal {G}})\oplus Y_d\), hence the operator \(-{\tilde{{\mathcal {A}}}}\) has compact resolvent. Indeed, more is true: by Gramsch [26] this embedding is of Schatten class, hence the analytic semigroup generated by \(-{\tilde{{\mathcal {A}}}}\) consists of trace class operators for all \(t>0\) [33, Rem. 3.4].

  (ii) Let us consider \(L^\infty ({\mathcal {G}})\oplus Y_d\) with the norm

    $$\begin{aligned} \Vert {\mathfrak {u}}\Vert _\infty =\left\| \begin{pmatrix} u\\ \theta \end{pmatrix}\right\| _\infty :=\max \{\Vert u\Vert _{L^\infty ({\mathcal {G}})},\Vert \theta \Vert _{Y_d}\}. \end{aligned}$$

    Let \({{\mathfrak {u}}}:=\left( {\begin{matrix}u\\ \theta \end{matrix}}\right) \in {\mathcal {V}}\). The Gagliardo–Nirenberg inequality, cf. [22], yields

    $$\begin{aligned} \Vert u\Vert _{L^\infty ({\mathcal {G}})}\le c_1\Vert u^{(j)}\Vert _{L^2({\mathcal {G}})}^\frac{1}{2j}\Vert u\Vert _{L^2({\mathcal {G}})}^\frac{2j-1}{2j}+c_2\Vert u\Vert _{L^2({\mathcal {G}})}. \end{aligned}$$

    On the other hand, because \(\Vert u\Vert _{L^\infty ({\mathcal {G}})}\le c\Vert u\Vert _{{\widetilde{H}}^j({\mathcal {G}})}\) we find that

    $$\begin{aligned} \Vert \theta \Vert _{Y_d}=|\Gamma _\circ u|\le {\widetilde{c}}_1\Vert u\Vert _{L^2({\mathcal {G}})}+{\widetilde{c}}_2\Vert u^{(j)}\Vert _{L^2({\mathcal {G}})}. \end{aligned}$$

    Therefore, since D and S are dissipative

$$\begin{aligned} \Vert {\mathfrak {u}}\Vert _\infty&\le k_1\Vert u^{(j)}\Vert _{L^2({\mathcal {G}})}^\frac{1}{2j}\Vert u\Vert _{L^2({\mathcal {G}})}^\frac{2j-1}{2j}+k_2\Vert u\Vert _{L^2({\mathcal {G}})}\\ {}&= k_1 \left( {\tilde{a}}({\mathfrak {u}})+(DP_{Y_d}\Gamma _\circ u,\Gamma _\circ u)_{Y_d}+(SP_{Y_s}\Gamma _\circ u,\Gamma _\circ u)_{Y_s}\right) ^\frac{1}{4j}\Vert u\Vert _{L^2({\mathcal {G}})}^\frac{2j-1}{2j}\\ {}&\qquad +k_2\Vert u\Vert _{L^2({\mathcal {G}})} \\ {}&\le k_1 \left( {\tilde{a}}({\mathfrak {u}})\right) ^\frac{1}{4j}\Vert u\Vert _{L^2({\mathcal {G}})}^\frac{2j-1}{2j}+k_2\Vert u\Vert _{L^2({\mathcal {G}})}. \end{aligned}$$

Observe that dissipativity of S, D also implies that the semigroup is contractive. Also, recall that if an operator A generates a bounded analytic semigroup, then there exists a positive constant c such that \(\Vert Ae^{-tA}\Vert \le \frac{c}{t}\) for all \(t>0\). We follow an argument similar to that in [24, Prop. 5.1]: letting \({{\mathfrak {u}}}:=e^{-t{\tilde{{\mathcal {A}}}}}{{\mathfrak {f}}}\) and using analyticity and contractivity on \(L^2({\mathcal {G}})\oplus Y_d\) one obtains

    $$\begin{aligned} \begin{aligned} \Vert e^{-t{\tilde{{\mathcal {A}}}}}{\mathfrak {f}}\Vert _{\infty }&\le k_1\Vert {\tilde{{\mathcal {A}}}} e^{-t{\tilde{{\mathcal {A}}}}}{\mathfrak {f}}\Vert _{L^2({\mathcal {G}})\oplus Y_d}^\frac{1}{4j}\Vert e^{-t{\tilde{{\mathcal {A}}}}}{\mathfrak {f}}\Vert _{L^2({\mathcal {G}})\oplus Y_d}^\frac{1}{4j}\Vert e^{-t{\tilde{{\mathcal {A}}}}}{\mathfrak {f}}\Vert _{L^2({\mathcal {G}})\oplus Y_d}^\frac{2j-1}{2j}\\ {}&\qquad +k_2\Vert e^{-t{\tilde{{\mathcal {A}}}}}{\mathfrak {f}}\Vert _{L^2({\mathcal {G}})\oplus Y_d}\\&\le C(t^{-\frac{1}{4j}}+1)\Vert {\mathfrak {f}}\Vert _{L^2({\mathcal {G}})\oplus Y_d}\\&\le {\tilde{C}}t^{-\frac{1}{4j}}e^{\omega t}\Vert {\mathfrak {f}}\Vert _{L^2({\mathcal {G}})\oplus Y_d}. \end{aligned} \end{aligned}$$
    (3.4)

This concludes the proof of (ii).

  (iii) The adjoint \({\tilde{{\mathcal {A}}}}^*\) of \({\tilde{{\mathcal {A}}}}\) satisfies the assumptions of this proposition, too; hence by (ii)

    $$\begin{aligned} \Vert e^{-t{\tilde{{\mathcal {A}}}}^*}{\mathfrak {f}}\Vert _{\infty }\le Ct^{-\frac{1}{4j}}e^{\omega t}\Vert {\mathfrak {f}}\Vert _{L^2({\mathcal {G}})\oplus Y_d} \end{aligned}$$

    and by duality

    $$\begin{aligned} \Vert e^{-t{\tilde{{\mathcal {A}}}}}{\mathfrak {f}}\Vert _{2}\le C t^{-\frac{1}{4j}}e^{\omega t}\Vert {\mathfrak {f}}\Vert _{L^1({\mathcal {G}})\oplus Y_d}. \end{aligned}$$
    (3.5)

    By the semigroup law, combining (3.4) and (3.5) yields

    $$\begin{aligned} \Vert e^{-t{\tilde{{\mathcal {A}}}}}\Vert _{1\rightarrow \infty }\le C^2t^{-\frac{1}{2j}}e^{2\omega t}\qquad \hbox {for all }t>0, \end{aligned}$$

and in particular \(e^{-t{\tilde{{\mathcal {A}}}}}\) maps \(L^1\) to \(L^\infty \) for all \(t>0\): the existence of an \(L^\infty \)-kernel then follows from the Kantorovich–Vulikh Theorem.

  (iv) Because of the smoothing effect of the analytic semigroup generated by \(-{\tilde{{\mathcal {A}}}}\), the solution \(u(t,\cdot )\) is infinitely differentiable with respect to space for all \(t>0\); hence we can take the boundary values \(\Gamma _\circ \frac{\partial ^{2j}u}{\partial x^{2j}}\) of \(\frac{\partial ^{2j}u}{\partial x^{2j}}\). Because the time derivative and \(\Gamma _\circ \) commute, plugging the parabolic equation satisfied in the interior of the edges into the dynamic boundary conditions we deduce that

$$\begin{aligned} P_{Y_d}\Gamma _\circ \frac{\partial ^{2j} u}{\partial x^{2j}}(t)=-P_{Y_d}\Gamma ^\circ u(t)-DP_{Y_d}\Gamma _\circ u(t),\qquad t>0. \end{aligned}$$

    This concludes the proof. \(\square \)

Observe that from the computations in (3.4) one also finds

$$\begin{aligned} \Vert e^{-t{\tilde{{\mathcal {A}}}}}{\mathfrak {f}}\Vert _{L^\infty ({\mathcal {G}})\oplus Y_d}\le c(t^{-\frac{1}{4j}}+1)^2\Vert {\mathfrak {f}}\Vert _{L^1({\mathcal {G}})\oplus Y_d}\approx c\Vert {\mathfrak {f}}\Vert _{L^1({\mathcal {G}})\oplus Y_d}\ \text {as}\ t\rightarrow \infty . \end{aligned}$$

Remark 3.6

Parabolic equations driven by Laplacians on networks with dynamic vertex conditions have been studied in [34]. It has been observed in [34, Rem. 3.6] that modifying the coefficients of the normal derivative (the lower-left entry of the relevant operator matrix in that context) amounts to a relatively compact perturbation of an analytic semigroup generator: by a perturbation theorem due to Desch and Schappacher [1, Thm. 3.7.25], the new operator generates an analytic semigroup, too. (Similar assertions were proved in [4, 44].) The same idea carries over to our setting and yields that the operator matrix

$$\begin{aligned} -\tilde{{\mathcal {A}}}=(-1)^{j+1}\begin{pmatrix} p\frac{d^{2j}}{dx^{2j}} &{}\quad 0\\ M &{}\quad T\end{pmatrix} \end{aligned}$$

with domain

$$\begin{aligned} D(\tilde{{\mathcal {A}}})=\bigg \{ \begin{pmatrix}u\\ \mathbf{\theta }\end{pmatrix}\in {\widetilde{H}}^{2j}({\mathcal {G}})\oplus Y_d: \mathbf{\theta }=P_{Y_d}\Gamma _\circ u, P_{Y_s}\left( \Gamma ^\circ u+SP_{Y_s}\Gamma _\circ u\right) =0\bigg \} \end{aligned}$$

for any bounded linear operators M from \({\widetilde{H}}^{2j}({\mathcal {G}})\) to \(Y_d\) and T on \(Y_d\), generates an analytic semigroup on \(L^2({\mathcal {G}})\oplus Y_d\). The proof of [4, Thm. 9] can be modified to show that \(-\tilde{{\mathcal {A}}}\) has a number of negative eigenvalues at least as large as the number of negative eigenvalues of \(\Pi \), provided M factorizes as \(M=\Pi P_{Y_d}\Gamma ^{\circ }\).

In [24] we have discussed the bi-Laplacian on \({\mathcal {G}}\) by means of the extension theory of symmetric operators in Hilbert spaces. We could pursue similar results here, but we omit the details. Suffice it to say that if we impose continuity vertex conditions on the pre-minimal operator, i.e., if we consider the operator matrix \({\mathcal {A}}\) in (2.2) restricted to

$$\begin{aligned} \left\{ \begin{pmatrix}u\\ \theta \end{pmatrix}\in C ({\mathcal {G}})\oplus Y_d: u'\in \bigoplus _{{\mathsf {e}}\in {\mathsf {E}}} C^\infty _c(0,\ell _{\mathsf {e}}), P_{Y_d}\Gamma _\circ u= \theta \right\} , \end{aligned}$$

then its Friedrichs extension \({\mathcal {A}}_F\) is the realization of \({\mathcal {A}}\) whose domain contains functions u that enjoy the boundary conditions \(\partial _\nu ^{h}u=0\) for all \(1\le h\le j-1\), along with continuity of u on the metric space \({\mathcal {G}}\) and a Kirchhoff-type condition on \(\partial _\nu ^{2j-1}u\) at each vertex. In particular, the null space of \({\mathcal {A}}_F\) is 1-dimensional: it is given by the space of all constants on \({\mathcal {G}}\). Also observe that \(e^{-t{\mathcal {A}}_F}\) maps, for each \(t>0\), \(L^2({\mathcal {G}})\oplus Y_d\) into

$$\begin{aligned} D({\mathcal {A}}_F)\hookrightarrow \left\{ \begin{pmatrix}u\\ \theta \end{pmatrix}\in C ({\mathcal {G}})\oplus Y_d: u\in \bigoplus _{{\mathsf {e}}\in {\mathsf {E}}} C^{2j-1}([0,\ell _{\mathsf {e}}]), P_{Y_d}\Gamma _\circ u=\theta \right\} . \end{aligned}$$

Combining these two facts with [24, Cor. 7.4 and Prop. 7.5] we can immediately deduce remarkable properties of the semigroup generated by \(-{\mathcal {A}}_F\), which we state without proof. It is interesting to compare them with the properties of \(-{\mathcal {A}}_N\), defined as the realization of \(-{\mathcal {A}}\) whose domain contains those functions u such that \(\partial _\nu ^{h}u\) is continuous on \({\mathcal {G}}\) for all \(0\le h\le j-1\), while a Kirchhoff-type condition is satisfied by \(\partial _\nu ^{h}u\) for all \(j\le h\le 2j-1\).

Proposition 3.7

Let \(j\ge 2\). Under the Assumptions 3.2, for all subspaces \(Y_d\) of Y the semigroup on \(L^2({\mathcal {G}})\oplus Y_d\) generated by \(-{\mathcal {A}}_F\) is uniformly eventually sub-Markovian; furthermore, it eventually enjoys a uniform strong Feller property. On the other hand, the semigroup on \(L^2({\mathcal {G}})\oplus Y_d\) generated by \(-{\mathcal {A}}_N\) is not even individually asymptotically positive.

By eventually sub-Markovian (resp., eventually irreducible) we mean that there exists some \(t_0>0\) such that \(0\le e^{-t{\mathcal {A}}_F}{\mathfrak {f}}\le \mathbf{1}\) (resp., \(0\ll e^{-t{\mathcal {A}}_F}{\mathfrak {f}}\)) for all \(t\ge t_0\) and all \({\mathfrak {f}}\) such that \(0\le {\mathfrak {f}}\le \mathbf{1}\) (resp., \(0\le {\mathfrak {f}}\), \(0\not \equiv {\mathfrak {f}}\)), where \(\mathbf{1}\) is the constant function 1. Also, a bounded semigroup is called individually asymptotically positive if the distance between each orbit and the positive cone of the Hilbert lattice tends to 0 as \(t\rightarrow \infty \).

Similarly, we say that a semigroup eventually enjoys a strong Feller property if for all \(t\ge t_0\) it is sub-Markovian and maps bounded measurable functions to bounded continuous functions.

While \(-{\mathcal {A}}_F\) generates on \(L^2({\mathcal {G}})\) a semigroup that leaves \(C({\mathcal {G}})\) invariant and is bounded in \(\infty \)-norm, it is currently unknown whether its part in \(C({\mathcal {G}})\) is the generator of a strongly continuous semigroup.