Open Access. Published by De Gruyter, March 9, 2022. Licensed under CC BY 4.0.

On the nonlinear perturbations of self-adjoint operators

  • Michał Bełdziński, Marek Galewski, and Witold Majdak

Abstract

Using elements of the theory of linear operators in Hilbert spaces and monotonicity tools, we obtain existence and uniqueness results for a wide class of nonlinear problems driven by the equation Tx = N(x), where T is a self-adjoint operator in a real Hilbert space and N is a nonlinear perturbation. Both potential and nonpotential perturbations are considered. This approach extends results known for elliptic operators.

MSC 2010: 35J91; 35J25; 47H05

1 Introduction

Let H be a real Hilbert space. Take a densely defined linear operator T : D(T) ⊆ H → H and a nonlinear operator N : D(|T|^{1/2}) → H. It makes sense to consider the solvability of the following equation:

Tx = N(x)

in the case when N interacts with the spectrum of T. When T is positive definite and the growth of N is governed by its first eigenvalue, leading to the coercivity of T − N (see, for example, [1]), the celebrated Strong Monotonicity Principle is applicable. If the additional assumption of potentiality is imposed on N, then in the presence of the coercivity of the related Euler action functional the direct variational method can be used, leading also to a variational characterization of the solution (see [2]). In addition, the dual least action principle for problems driven by a positive definite self-adjoint operator T and its potential perturbation N was considered in [3], where again the growth conditions were strictly related to the first eigenvalue of T. The situation becomes more complicated if the growth conditions interact with the spectrum of this operator in a more subtle manner, one which does not lead to the coercivity of the relevant operator. This is the case when the growth conditions describe behavior of the nonlinear term lying, in a sense, in between the eigenvalues. In such a situation one cannot directly use any method based on coercivity. In order to overcome this difficulty, using methods of Hilbert space theory, we introduce a related operator and then an auxiliary problem which can be treated by the Strong Monotonicity Principle and which provides the existence result for the original problem as well. Consequently, we obtain an approach towards the unique solvability of nonlinear equations that cannot be treated directly by the aforementioned coercivity-based methods. In the presence of the potentiality of the operators involved we also obtain a type of min-max characterization of the solution. The idea behind our approach is best illustrated in the finite-dimensional case as follows. Consider the matrices

A = [ 1, 0 ; 0, 4 ],   N_n = [ 3n, 1/2 ; 1/2, 3n ]   for n = 0, 1, 2,

and the corresponding quadratic forms 𝒜_n(x, y) = (1/2)x² + 2y² − (3n/2)x² − (1/2)xy − (3n/2)y², which are, respectively, convex, convex–concave, and concave, all having (0, 0) as a critical point. Such a critical point cannot be found by a single unified approach, i.e., we must minimize, look for a saddle point, or maximize 𝒜_n, respectively. However, if we define

U_0 = [ 1, 0 ; 0, 1 ],   U_1 = [ −1, 0 ; 0, 1 ],   U_2 = [ −1, 0 ; 0, −1 ],

then we can consider the monotone and coercive vector fields z ↦ (A − N_n)U_n z for n = 0, 1, 2. Hence, a unified approach based on the finite-dimensional method of monotone operators leads to the solvability of the equation

(A − N_n)u = 0.
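The monotonicity claim can be checked numerically. The following sketch is our own illustration (the entries of A, N_n and the sign pattern U_1 = diag(−1, 1) are our reading of the matrices above): it verifies that the symmetric part of (A − N_n)U_n is positive definite for n = 0, 1, 2, which for a linear field is exactly strong monotonicity, hence coercivity.

```python
import numpy as np

# Verify that z -> (A - N_n) U_n z is strongly monotone for n = 0, 1, 2:
# a linear field M z is monotone iff the symmetric part of M is positive
# semidefinite; positive definiteness gives strong monotonicity/coercivity.
A = np.array([[1.0, 0.0], [0.0, 4.0]])
N = [np.array([[3.0 * n, 0.5], [0.5, 3.0 * n]]) for n in range(3)]
U = [np.diag([1.0, 1.0]), np.diag([-1.0, 1.0]), np.diag([-1.0, -1.0])]

min_eigs = []
for n in range(3):
    M = (A - N[n]) @ U[n]
    sym = 0.5 * (M + M.T)                 # monotonicity depends only on sym
    min_eigs.append(np.linalg.eigvalsh(sym).min())

monotone = all(e > 0 for e in min_eigs)   # min eigenvalue = strong monotonicity constant
```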

In the infinite-dimensional case the spectrum of a differential operator, like for example the negative Laplacian, is unbounded from above, which makes the situation more complicated and leads either to a direct minimization or to a version of a saddle point result in the potential case. An attempt to deal with the situation in which the growth conditions involve the spectrum of a differential operator was already made in [4], where the special case of elliptic problems with both Dirichlet and Neumann boundary conditions was considered via arguments related to global invertibility. The method which we propose here allows one to take a unified approach towards a certain type of nonlinear problems, together with the variational characterization of a solution in the potential case.

The article is organized as follows. In Section 2, we invoke some definitions and facts which will be employed throughout this article. We start with elements of the theory of linear operators in Hilbert spaces, go through concepts from the theory of monotone operators, give some tools from analysis in Euclidean spaces, and conclude with some background on Sobolev spaces. In Section 3, we provide motivation for our research based on the classical nonlinear boundary value problem with Dirichlet conditions. Next, in a real Hilbert space H we formulate its abstract counterpart, associated with a fixed self-adjoint operator acting in H. In Proposition 3.1, we give an equivalent condition for an element of H to be a solution of the problem under consideration. Further, we proceed to the unique solvability of the potential problem, which is the subject of Theorem 3.8. We complete the existence result in Theorem 3.9 with the min-max characterization. For the nonpotential case we obtain Theorem 3.11, which is an abstract analog of the “freezing method.” Finally, Section 4 contains applications of our existence and uniqueness tools to nonlinear boundary value problems driven by the Neumann Laplacian.

2 Preliminaries

In all that follows, H is a real Hilbert space and H* stands for its dual space.

2.1 Linear operators in Hilbert spaces

Most of the basic results from the theory of Hilbert space operators can be transferred, with proper caution, from the complex to the real case. The reader is referred, for example, to [5], where special attention is drawn to the subtleties arising when working in a real Hilbert space.

By a (linear) operator T in H we understand a linear mapping T : D(T) → H with the domain D(T) ⊆ H. We denote by B(H) the Banach algebra of all bounded operators from the whole space H into itself. Below we mainly deal with unbounded operators in H, which makes the situation more delicate because their domains are in general strictly contained in H. For operators S and T in H, we write S ⊆ T if D(S) ⊆ D(T) and Sx = Tx for every x ∈ D(S). We say that an operator T in H is closed if its graph {(Tx, x) : x ∈ D(T)} is a closed subspace of the product Hilbert space H × H or, equivalently, if D(T) is a Hilbert space under the graph norm ‖·‖_T given by

‖x‖_T = (‖Tx‖² + ‖x‖²)^{1/2} for all x ∈ D(T).

If T is a densely defined operator in H (i.e., the closure of D(T) equals H), then we denote by T* its adjoint (being also a linear operator in H). When T = T*, we say that the operator T is self-adjoint. Analogously as in the complex case, each self-adjoint operator T in H can be written as the spectral integral ∫_{σ(T)} t E(dt) with respect to a unique spectral measure E (see [5, Theorem 7.17]), i.e.,

⟨Tx, x⟩ = ∫_{σ(T)} t ⟨E(dt)x, x⟩ for all x ∈ D(T),

where σ(T) stands for the spectrum of T. This representation, together with some other basic properties of spectral integrals, will be utilized in Section 3. Let us add that by ρ(T) we mean the resolvent set of T, that is, ρ(T) = ℝ ∖ σ(T).

Furthermore, a self-adjoint operator T in H is called positive if ⟨Tx, x⟩ ≥ 0 for every x ∈ D(T). In this case, we can find a unique positive self-adjoint operator S in H satisfying the equality S² = T ([5, Theorem 7.20a]). Such an operator S, denoted further by T^{1/2}, is called the square root of T. Moreover, if U ∈ B(H), then UT ⊆ TU if and only if UT^{1/2} ⊆ T^{1/2}U. It is also worth noting that D(T^{1/2}) is in general larger than D(T).

Now let T be a closed densely defined operator in H. We define its modulus as |T| = (T*T)^{1/2}. It turns out that D(|T|) = D(T) ⊆ D(|T|^{1/2}). The polar decomposition theorem ([5, Theorem 7.20b]) guarantees that there exists a unique partial isometry U_T ∈ B(H) such that T = U_T |T| and the kernels of U_T and T coincide. By a partial isometry U_T we mean an operator such that ‖U_T x‖ = ‖x‖ for every x belonging to the orthogonal complement of the kernel of U_T in H. It can be shown that if T is self-adjoint, then U_T |T| = |T| U_T.
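In the finite-dimensional case the polar decomposition can be computed explicitly from an eigendecomposition. The following sketch (the symmetric matrix T is an arbitrary illustrative choice) confirms T = U_T |T| and the commutation U_T |T| = |T| U_T for a self-adjoint T:

```python
import numpy as np

# Polar decomposition T = U_T |T| of a self-adjoint matrix via its
# eigendecomposition: |T| takes absolute values of the eigenvalues and
# U_T their signs.
T = np.array([[1.0, 2.0], [2.0, -2.0]])         # symmetric, indefinite, invertible
vals, V = np.linalg.eigh(T)

absT = V @ np.diag(np.abs(vals)) @ V.T          # the modulus |T| = (T*T)^{1/2}
U_T = V @ np.diag(np.sign(vals)) @ V.T          # partial isometry (here orthogonal)

decomposed = np.allclose(U_T @ absT, T)         # T = U_T |T|
commutes = np.allclose(U_T @ absT, absT @ U_T)  # U_T |T| = |T| U_T
```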

2.2 Monotone operators

Following [6] or [7], we provide the necessary background from the theory of monotone operators, which pertains to the solvability of nonlinear equations. We consider a (not necessarily linear) mapping A : H → H*, called further an operator.

We say that A is demicontinuous if for every sequence (x_n) ⊆ H and x_0 ∈ H such that x_n → x_0 we have A(x_n) ⇀ A(x_0), where ⇀ denotes the weak convergence in H*. We say that A is monotone if

⟨A(x) − A(y), x − y⟩ ≥ 0 for all x, y ∈ H.

An operator A is called strongly monotone (or α-strongly monotone) if there exists α > 0 such that

⟨A(x) − A(y), x − y⟩ ≥ α‖x − y‖² for all x, y ∈ H.

In turn, A is said to be relaxed monotone (or β-relaxed monotone) if there exists β ∈ ℝ such that

⟨A(x) − A(y), x − y⟩ ≥ −β‖x − y‖² for all x, y ∈ H.

Recall that the Gâteaux derivative of a mapping F : K → ℝ, where K is a real Hilbert space, at the point x ∈ K is the functional F′(x) ∈ K* satisfying

lim_{τ→0} (F(x + τy) − F(x))/τ = ⟨F′(x), y⟩ for all y ∈ K.

F is called Gâteaux differentiable if it has a Gâteaux derivative at each point. Moreover, we denote by F″ the second derivative of F, that is, the Gâteaux derivative of F′. An operator A is called potential if there exists a Gâteaux differentiable functional 𝒜 : H → ℝ, called the potential of A, such that A = 𝒜′. Note that for a given operator A, a potential of A (if it exists) is uniquely determined up to a constant. The Gâteaux differentiability of a functional does not in general imply any type of its continuity. However, every monotone and potential operator is necessarily demicontinuous (see [6, Lemma 5.4]). We generalize this observation to the case of relaxed monotone and potential operators.

Lemma 2.1

Every relaxed monotone and potential operator is demicontinuous.

Proof

Let A be a β-relaxed monotone and potential operator. Then we easily see that the operator βj + A is monotone and potential, where j : H → H* is the normalized duality mapping between H and H*, which is the Gâteaux derivative of (1/2)‖·‖². By the Riesz representation theorem ([8, Theorem 5.5]), j is continuous and strongly monotone with constant 1. Hence, by [6, Lemma 5.4], the operator βj + A is demicontinuous. Therefore, since j is continuous, the operator A = −βj + (βj + A) is demicontinuous.□

Next we recall the main existence and uniqueness tools used in this article.

Theorem 2.2

(Strong Monotonicity Principle, [6]) If A : H → H* is a demicontinuous and strongly monotone operator, then it is a bijection.

It is worth noting that the proof of the above theorem presented in [6] works in separable Hilbert spaces only. However, this result can be extended to the general case following ideas described in [7, Section 7].
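When a strongly monotone operator is additionally Lipschitz, the solution of A(x) = b can be found constructively, since x ↦ x − t(A(x) − b) is a contraction for a small step t > 0. A minimal finite-dimensional sketch (the operator A and the data below are our own illustrative choices, not ones from the article):

```python
import numpy as np

# A is strongly monotone (alpha = 0.5) and Lipschitz (L = 1.5), because
# A'(x) lies in [0.5, 1.5] componentwise.  With step t = alpha / L**2 the
# map x -> x - t (A(x) - b) is a contraction; its fixed point solves A(x) = b.
def A(x):
    return x + 0.5 * np.sin(x)

alpha, L = 0.5, 1.5
b = np.array([1.0, -2.0, 0.3])
t = alpha / L**2

x = np.zeros_like(b)
for _ in range(500):
    x = x - t * (A(x) - b)

residual = np.max(np.abs(A(x) - b))
```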

2.3 Analysis on the Euclidean space

In this section, we present results that, for a given mapping, relate relaxed monotonicity to the Lipschitz condition. These tools will be extended to the infinite-dimensional case in Section 3. The symbols C(ℝ^m), C¹(ℝ^m), and C^∞(ℝ^m) stand for the spaces of all, respectively, continuous, continuously differentiable, and smooth functionals on ℝ^m. We denote the support of a functional f ∈ C(ℝ^m) by supp f, that is, the closure of the set {x : f(x) ≠ 0}, and we put C_c^∞(ℝ^m) = {f ∈ C^∞(ℝ^m) : supp f is compact}. The Euclidean space ℝ^m will be considered with the standard inner product ⟨·,·⟩.

Lemma 2.3

Let V be an open convex subset of ℝ^m and let f : V → ℝ^m be a smooth function. Then for all α, β ∈ ℝ the following conditions are equivalent:

  1. α‖x − y‖² ≤ ⟨f(x) − f(y), x − y⟩ ≤ β‖x − y‖² for all x, y ∈ V,

  2. α‖y‖² ≤ ⟨f′(x)y, y⟩ ≤ β‖y‖² for all x, y ∈ V.

Proof

To show that (1) ⇒ (2) it suffices to estimate the expression

⟨f′(x)y, y⟩ = lim_{τ→0} (1/τ) ⟨f(x + τy) − f(x), y⟩,

while to get the implication (2) ⇒ (1) one can apply standard estimation techniques to

⟨f(x) − f(y), x − y⟩ = ∫₀¹ ⟨f′(y + τ(x − y))(x − y), x − y⟩ dτ,

where x, y ∈ V.□

Let us fix a sequence of mollifiers ( ρ n ) , that is, the functions such that

ρ_n ∈ C_c^∞(ℝ^m), supp ρ_n ⊆ B̄_{1/n}, ∫_{ℝ^m} ρ_n(x) dx = 1,

where B_{1/n} is the open ball centered at the origin with radius 1/n. Given f ∈ C(ℝ^m), we define the n-th approximation f_n of f by

(2.1) f_n(x) = (f ∗ ρ_n)(x) = ∫_{ℝ^m} f(x − z) ρ_n(z) dz,

where ∗ stands for the convolution operator. Proposition 4.21 of [8] provides the almost uniform convergence f_n → f, that is, the uniform convergence f_n|_K → f|_K for every compact set K ⊆ ℝ^m. Moreover, if we assume that f ∈ C¹(ℝ^m), then we obtain f_n′ → f′ almost uniformly, because

(∂f_n/∂x_k)(x) = (∂/∂x_k)(f ∗ ρ_n)(x) = (∂f/∂x_k ∗ ρ_n)(x)

(see [8, Proposition 4.20]). Let us recall that for a self-adjoint operator L ∈ B(ℝ^m) we have ‖L‖ = sup_{‖x‖=1} ‖Lx‖ = sup_{‖x‖=1} |⟨Lx, x⟩|. In order to extend this observation to the case of nonlinear mappings, we have to assume some analog of the symmetry L = L*. For operators of class C¹, such a role is played by the symmetry of the derivative operator at each point, which is equivalent to the operator's potentiality. This is the subject of the following lemma.

Lemma 2.4

Let f ∈ C¹(ℝ^m) and assume that there exists α ≥ 0 such that

|⟨f′(x) − f′(y), x − y⟩| ≤ α‖x − y‖² for all x, y ∈ ℝ^m.

Then f′ is α-Lipschitz, that is,

‖f′(x) − f′(y)‖ ≤ α‖x − y‖ for all x, y ∈ ℝ^m.

Proof

We denote by z_k the k-th coordinate of z ∈ ℝ^m (1 ≤ k ≤ m), and we let f_n be defined by (2.1). By direct calculations, for all x, y ∈ ℝ^m and every n ∈ ℕ, we obtain

⟨f_n′(x) − f_n′(y), x − y⟩ = ∑_{k=1}^{m} [(∂f/∂x_k ∗ ρ_n)(x) − (∂f/∂x_k ∗ ρ_n)(y)](x_k − y_k) = ∑_{k=1}^{m} ∫_{B_{1/n}} [(∂f/∂x_k)(x − z) − (∂f/∂x_k)(y − z)] ρ_n(z)(x_k − y_k) dz = ∫_{B_{1/n}} ⟨f′(x − z) − f′(y − z), x − y⟩ ρ_n(z) dz, whence |⟨f_n′(x) − f_n′(y), x − y⟩| ≤ α‖x − y‖² ∫_{B_{1/n}} ρ_n(z) dz = α‖x − y‖².

Hence, by Lemma 2.3, we obtain that for all n ∈ ℕ

|⟨f_n″(x)y, y⟩| ≤ α‖y‖² for all x, y ∈ ℝ^m.

However, since f_n″(x) is symmetric as the second derivative of the functional f_n, we get ‖f_n″(x)‖ ≤ α. Therefore,

‖f_n′(x) − f_n′(y)‖ = ‖∫₀¹ f_n″(x + τ(y − x))(y − x) dτ‖ ≤ sup_{0≤τ≤1} ‖f_n″(x + τ(y − x))‖ ‖y − x‖ ≤ α‖y − x‖

holds for all x, y ∈ ℝ^m. Now, since f_n′ → f′ almost uniformly, we obtain the assertion.□
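Lemma 2.4 can be sanity-checked numerically. Below, an illustrative functional F(x) = ∑_i cos(x_i) (our own choice) has gradient −sin(x) and Hessian of norm at most 1, so the hypothesis holds with α = 1 and the gradient should be 1-Lipschitz:

```python
import numpy as np

# F(x) = sum_i cos(x_i); its gradient is grad(x) = -sin(x) and its
# Hessian diag(-cos x_i) has norm <= 1, so Lemma 2.4 predicts that
# grad is 1-Lipschitz.  We test both inequalities on random pairs.
rng = np.random.default_rng(0)
grad = lambda x: -np.sin(x)
alpha = 1.0

monotone_ok = lipschitz_ok = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    d = x - y
    monotone_ok &= abs(np.dot(grad(x) - grad(y), d)) <= alpha * np.dot(d, d) + 1e-12
    lipschitz_ok &= np.linalg.norm(grad(x) - grad(y)) <= alpha * np.linalg.norm(d) + 1e-12
```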

2.4 Functional framework

Let Ω be an open bounded subset of ℝ^m with a boundary of class C² (see [9] for details). We denote by H¹(Ω) (resp. H²(Ω)) the first (resp. second) Sobolev space, consisting of all functions from L²(Ω) whose all weak derivatives up to the first (resp. second) order belong to L²(Ω). We equip H¹(Ω) with the norm

‖u‖_{H¹} = ( ∫_Ω |u(x)|² dx + ∫_Ω ‖∇u(x)‖² dx )^{1/2}.

Let C₀^∞(Ω) denote the space of all smooth functions with compact support in Ω. By H₀¹(Ω) we mean the closure of C₀^∞(Ω) in the norm ‖·‖_{H¹}. For u ∈ H²(Ω), we denote the Laplacian of u by

Δu(x) = ∑_{i=1}^{m} ∂²u(x)/∂x_i².

Following [10] we put

D(Δ_D) = H²(Ω) ∩ H₀¹(Ω) and D(Δ_N) = {u ∈ H²(Ω) : ∂_ν u = 0 on ∂Ω}.

Here ∂_ν is the outward normal derivative defined on ∂Ω. Moreover, the condition ∂_ν u = 0 is understood in the sense of the trace operator (see [9, Section 5.5] for details). The spaces D(Δ_D) and D(Δ_N) correspond to the Dirichlet condition u = 0 on ∂Ω and the Neumann condition ∂_ν u = 0 on ∂Ω, respectively.

A function f : Ω × ℝ × ℝ^m → ℝ is called a Carathéodory function if for all u ∈ ℝ and v ∈ ℝ^m the function f(·, u, v) is Lebesgue measurable, while for a.e. x ∈ Ω the functions f(x, ·, v) and f(x, u, ·) are continuous. We define (pointwise a.e.) the Niemytskii operator N_f associated with f by the formula

N_f(u, v)(x) = f(x, u(x), v(x)).

The domain and the codomain of N f will be specified later. In a similar way, we can define the Niemytskii operator associated with a Carathéodory function f : Ω × R R (compare with [11, Chapter 2]). For the reader’s convenience, we recall the Krasnoselskii (see [11, Theorems 2.4 and 2.5]) and Gagliardo-Nirenberg-Sobolev (see [9, Section 5.6]) theorems in suitable forms.

Theorem 2.5

(Krasnoselskii) Let f : Ω × ℝ → ℝ be a Carathéodory function and let p > 1. Then the Niemytskii operator N_f : L^p(Ω) → L²(Ω) defined (pointwise a.e.) by

N_f(u)(x) = f(x, u(x))

is well defined and continuous if and only if there exist c > 0 and an a.e. positive function γ ∈ L²(Ω) such that

|f(x, u)| ≤ γ(x) + c|u|^{p/2} for a.e. x ∈ Ω and all u ∈ ℝ.

Theorem 2.6

(Gagliardo–Nirenberg–Sobolev) Let Ω be an open and bounded subset of ℝ^m with a boundary of class C². Then the embedding H₀¹(Ω) ⊆ L^p(Ω) is continuous:

  1. for all p < ∞ if m = 2,

  2. for p = 2m/(m − 2) if m ≥ 3.

3 Main results

The motivation for our research comes from the following classical problem with the Dirichlet boundary conditions:

(3.1) −Δu = f(x, u) + h(x), u|_{∂Ω} = 0,

where Ω is an open and bounded subset of the Euclidean space. In order to settle the question of its solvability by variational and monotonicity methods, we need to consider its solvability in a weak sense. Thus, we transfer our considerations from the space H²(Ω) ∩ H₀¹(Ω), where the operator −Δ is defined in the strong sense, to the space H₀¹(Ω), which is in fact the domain of the operator (−Δ)^{1/2}. Having finally obtained a weak solution u to problem (3.1), that is, an element u ∈ H₀¹(Ω) satisfying

∫_Ω ⟨∇u(x), ∇φ(x)⟩ dx = ∫_Ω f(x, u(x)) φ(x) dx + ∫_Ω h(x) φ(x) dx for all φ ∈ H₀¹(Ω),

we use regularization techniques to show that u belongs to the class H². The self-adjointness of −Δ is a key factor in improving the regularity of the solution u. Such an approach to the solvability of problem (3.1) in the abstract setting is the subject of our discussion in the present section. We additionally note that the right-hand side of (3.1) is typically defined on the space H₀¹(Ω), but not on the entire L²(Ω). This allows us to use the embedding H₀¹(Ω) ⊆ L^p(Ω) for suitable p < ∞ and obtain the well-definiteness of the mapping H₀¹(Ω) ∋ u ↦ f(·, u(·)) ∈ L²(Ω) for a function f satisfying |f(x, u)| ≤ γ(x) + c|u|^{p/2}. Let us recall that, by virtue of the Krasnoselskii theorem, the well-definiteness of the mapping L²(Ω) ∋ u ↦ f(·, u(·)) ∈ L²(Ω) entails the sublinearity of f with respect to the second variable, which in turn can be regarded as a restrictive condition.

3.1 Regularity of a solution

Take a self-adjoint operator T in H and an operator N : D(|T|^{1/2}) → H. Let us consider the following equation:

(3.2) Tx = N(x),

where x ∈ D(T). We define a bilinear form t : D(|T|^{1/2}) × D(|T|^{1/2}) → ℝ by the formula

t[x, y] = ⟨U_T |T|^{1/2} x, |T|^{1/2} y⟩ for all x, y ∈ D(|T|^{1/2}),

where T = U_T |T| is the polar decomposition of T. Now we are in a position to formulate an abstract counterpart of the regularization results.

Proposition 3.1

If there exists x ∈ D(|T|^{1/2}) such that

(3.3) t[x, y] = ⟨N(x), y⟩ for every y ∈ D(|T|^{1/2}),

then x is a solution to (3.2).

Proof

A look at the definition of the adjoint of an operator and the fact that |T|^{1/2} is self-adjoint reveals that U_T |T|^{1/2} x ∈ D(|T|^{1/2}). Since U_T |T| = |T| U_T, it follows from the polar decomposition theorem that U_T |T|^{1/2} ⊆ |T|^{1/2} U_T, so we have U_T |T|^{1/2} x = |T|^{1/2} U_T x ∈ D(|T|^{1/2}). Consequently, for all y ∈ D(|T|^{1/2}),

⟨U_T |T|^{1/2} x, |T|^{1/2} y⟩ = ⟨|T|^{1/2} U_T x, |T|^{1/2} y⟩ = ⟨|T|^{1/2} |T|^{1/2} U_T x, y⟩ = ⟨|T| U_T x, y⟩.

Since |T| U_T = U_T |T| = T, we see that x ∈ D(T) and

⟨Tx, y⟩ = ⟨N(x), y⟩ for all y ∈ D(|T|^{1/2}).

By the density of D(|T|^{1/2}) in H, we obtain Tx = N(x). This means that x is a solution to (3.2).□

Proposition 3.1 does not share the ambiguity about the relation between strong and weak solutions for problems driven by self-adjoint operators presented in [12, Section 3.4].

Remark 3.2

In the case of equation (3.1), condition (3.3) takes the form

(3.4) ∫_Ω (−Δ)^{1/2} u(x) (−Δ)^{1/2} φ(x) dx = ∫_Ω f(x, u(x)) φ(x) dx + ∫_Ω h(x) φ(x) dx for all φ ∈ H₀¹(Ω).

It seemingly differs from the standard weak solvability condition:

(3.5) ∫_Ω ⟨∇u(x), ∇φ(x)⟩ dx = ∫_Ω f(x, u(x)) φ(x) dx + ∫_Ω h(x) φ(x) dx for all φ ∈ H₀¹(Ω).

Note, however, that for every u ∈ H²(Ω) ∩ H₀¹(Ω) we have

∫_Ω ⟨∇u(x), ∇φ(x)⟩ dx = ∫_Ω (−Δ)u(x) φ(x) dx = ∫_Ω (−Δ)^{1/2} u(x) (−Δ)^{1/2} φ(x) dx for all φ ∈ H₀¹(Ω),

which, by the density of H²(Ω) ∩ H₀¹(Ω) in H₀¹(Ω), leads to the equivalence of (3.5) and (3.4).
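The equality of the two quadratic expressions can be observed numerically in the simplest case Ω = (0, 1), where −Δ with Dirichlet conditions diagonalizes in the sine basis e_k(x) = √2 sin(kπx) with eigenvalues (kπ)², so that (−Δ)^{1/2}u = ∑ c_k kπ e_k. The coefficients below are an arbitrary illustrative choice:

```python
import numpy as np

# u = sum_k c_k sqrt(2) sin(k pi x) on (0, 1).  Then
#   u'              = sum_k c_k k pi sqrt(2) cos(k pi x),
#   (-Delta)^{1/2}u = sum_k c_k k pi sqrt(2) sin(k pi x),
# and both quadratic expressions equal sum_k c_k^2 (k pi)^2.
c = np.array([1.0, -0.5, 0.25])
k = np.arange(1, len(c) + 1)
x = np.linspace(0.0, 1.0, 20001)

u_prime = sum(ck * kk * np.pi * np.sqrt(2) * np.cos(kk * np.pi * x) for ck, kk in zip(c, k))
half_lap = sum(ck * kk * np.pi * np.sqrt(2) * np.sin(kk * np.pi * x) for ck, kk in zip(c, k))

def integrate(f):                      # composite trapezoidal rule on the grid x
    return np.sum(f[1:] + f[:-1]) * (x[1] - x[0]) / 2

lhs, rhs = integrate(u_prime**2), integrate(half_lap**2)
exact = np.sum(c**2 * (k * np.pi)**2)  # Parseval value of both integrals
```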

3.2 The case of potential perturbations

Take a self-adjoint operator T : D(T) → H, the spectral measure E of T, and a Gâteaux differentiable functional 𝒩 : D(|T|^{1/2}) → ℝ. We assume that for every x ∈ D(|T|^{1/2}) there exists an element N(x) ∈ H satisfying

(3.6) ⟨N(x), y⟩ = ⟨𝒩′(x), y⟩

for every y ∈ D(|T|^{1/2}). The above assumption means that for every x ∈ D(|T|^{1/2}) the derivative 𝒩′(x) has a representation with respect to the inner product in H, while the differentiability of 𝒩 ensures only that such a representation exists with respect to the inner product in D(|T|^{1/2}). The significance of these assumptions is well illustrated by the following remark.

Remark 3.3

We consider as 𝒩 : H₀¹(Ω) → ℝ the potential of the Niemytskii operator associated with the function f, so

𝒩(u) = ∫_Ω ∫₀^{u(x)} f(x, v) dv dx.

Then for every u ∈ H₀¹(Ω) the derivative 𝒩′(u) ∈ (H₀¹(Ω))* has an integral representation of the form f(·, u(·)). To be more precise, we have

⟨N(u), φ⟩ = ∫_Ω f(x, u(x)) φ(x) dx for all φ ∈ H₀¹(Ω).

Of course, the existence of such a regular representation is not guaranteed in general. It is easy to check that the functional N : H₀¹(0, 1) → ℝ defined by N(u) = u(1/2) is continuously differentiable, being a linear and bounded functional on H₀¹(0, 1). However, it does not have an integral representation.

We put

(3.7) α = inf_{x ≠ y} ⟨N(x) − N(y), x − y⟩ / ‖x − y‖², β = sup_{x ≠ y} ⟨N(x) − N(y), x − y⟩ / ‖x − y‖².

Remark 3.4

Note that if α or β is finite, then N is demicontinuous by Lemma 2.1. Moreover, N is then continuous on every finite-dimensional subspace of D(|T|^{1/2}), because the norm and weak topologies coincide there.

Lemma 3.5

Let N satisfy the aforementioned assumptions. If there exists c > 0 such that

|⟨N(x) − N(y), x − y⟩| ≤ c‖x − y‖² for all x, y ∈ D(|T|^{1/2}),

then N is c -Lipschitz.

Proof

Take x, y ∈ D(|T|^{1/2}) and consider any unitary isomorphism U : ℝ^m → span{x, y} (where in fact m ∈ {0, 1, 2}, depending on the choice of the vectors x and y) and the auxiliary functional f : ℝ^m → ℝ given by f(z) = 𝒩(Uz), where 𝒩 denotes the potential of N. Then

⟨f′(z) − f′(w), z − w⟩ = ⟨N(Uz) − N(Uw), U(z − w)⟩ = ⟨N(Uz) − N(Uw), Uz − Uw⟩, so |⟨f′(z) − f′(w), z − w⟩| ≤ c‖Uz − Uw‖² = c‖z − w‖²

for all z, w ∈ ℝ^m. Hence, we can use Lemma 2.1 to obtain the continuity of f′. Applying Lemma 2.4, we obtain

‖f′(z) − f′(w)‖ ≤ c‖z − w‖ for all z, w ∈ ℝ^m.

Since ‖N(x) − N(y)‖ = ‖f′(U⁻¹x) − f′(U⁻¹y)‖, we obtain the assertion.□

For a fixed h ∈ H, we define J_h : D(|T|^{1/2}) → ℝ by

J_h(x) = (1/2) t[x, x] − 𝒩(x) − ⟨h, x⟩ for all x ∈ D(|T|^{1/2}),

where 𝒩 is the potential of N.

Remark 3.6

Let us note that in our model case (3.1) the functional J_h is the Euler action functional defined by

J_h(u) = (1/2) ∫_Ω ‖∇u(x)‖² dx − ∫_Ω ∫₀^{u(x)} f(x, v) dv dx − ∫_Ω h(x) u(x) dx.

As before, the expressions

∫_Ω |(−Δ)^{1/2} u(x)|² dx and ∫_Ω ‖∇u(x)‖² dx

coincide. This is again due to the density of H²(Ω) ∩ H₀¹(Ω) in H₀¹(Ω) and to the equality

∫_Ω |(−Δ)^{1/2} u(x)|² dx = ∫_Ω (−Δu(x)) u(x) dx = ∫_Ω ‖∇u(x)‖² dx,

which holds for all u ∈ H²(Ω) ∩ H₀¹(Ω).

The space D(|T|^{1/2}) will be equipped with the norm

(3.8) ‖x‖_{δ,ω} = ( δ ‖|T|^{1/2} x‖² + ω ‖x‖² )^{1/2},

where δ, ω > 0. Note that, for arbitrarily chosen constants δ and ω, the norm ‖·‖_{δ,ω} is equivalent to the graph norm of |T|^{1/2}. The space D(|T|^{1/2}) becomes a Hilbert space when equipped with norm (3.8).

Let d_{α,β}(t) = dist({t}, (α, β)) for t ∈ ℝ. The function d_{α,β} plays only a technical role and enables us to avoid more complex calculations.

Remark 3.7

Note that for every closed subset C of ℝ on which the function d_{α,β} is positive there exist constants δ, ω > 0 such that d_{α,β}(t) ≥ δ|t| + ω for all t ∈ C. In particular, if d_{α,β} is positive on σ(T), then taking C = σ(T) we obtain constants δ, ω > 0 satisfying

(3.9) d_{α,β}(t) ≥ δ|t| + ω for all t ∈ σ(T).

Indeed, it suffices to take γ = dist((α, β), C) > 0 and put

δ = ω = γ / (1 + max{|α|, |β|} + γ).

The constants selected in this way may not be optimal.
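The choice of constants in Remark 3.7 can be verified numerically; in the sketch below the interval (α, β) and the gap γ are our own illustrative choices, and C is a grid avoiding a γ-neighbourhood of (α, β):

```python
import numpy as np

# Check the constants of Remark 3.7: with gamma = dist((alpha, beta), C)
# and delta = omega = gamma / (1 + max(|alpha|, |beta|) + gamma), the bound
# d_{alpha,beta}(t) >= delta * |t| + omega holds everywhere on C.
alpha, beta, gamma = -1.0, 2.0, 0.5
delta = omega = gamma / (1.0 + max(abs(alpha), abs(beta)) + gamma)

def d(t):                                   # dist(t, (alpha, beta))
    return max(alpha - t, t - beta, 0.0)

C = [t for t in np.linspace(-50.0, 50.0, 100001)
     if t <= alpha - gamma or t >= beta + gamma]
bound_holds = all(d(t) >= delta * abs(t) + omega - 1e-12 for t in C)
```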

Theorem 3.8

Assume that the distance between (α, β) and σ(T) is positive. Then for every h ∈ H, the problem

(3.10) Tx = N(x) + h

has a unique solution x_h ∈ D(T). Moreover, the solution mapping H ∋ h ↦ x_h ∈ D(|T|^{1/2}) is Lipschitz (in fact, nonexpansive). Here we consider H with the standard norm, while D(|T|^{1/2}) is equipped with the norm ‖·‖_{2δω, ω²}.

Proof

By the assumptions, the function d_{α,β} is positive on σ(T). We take any δ, ω > 0 satisfying condition (3.9). Let us define the operator A : D(|T|^{1/2}) → (D(|T|^{1/2}))* by the formula

⟨A(x), y⟩ = ⟨J_h′(Ux), y⟩ = t[Ux, y] − ⟨N(Ux), y⟩ − ⟨h, y⟩ for all x, y ∈ D(|T|^{1/2}),

where U = I − 2E((−∞, α]). We show that for all x, y ∈ D(T) there holds

(3.11) ⟨A(x) − A(y), x − y⟩ ≥ ∫_{σ(T)} d_{α,β}(t) ⟨E(dt)(x − y), x − y⟩.

First, if β < min σ ( T ) , then U = I and direct calculations yield

⟨A(x) − A(y), x − y⟩ ≥ ⟨T(x − y), x − y⟩ − β⟨x − y, x − y⟩ = ∫_{σ(T)} (t − β) ⟨E(dt)(x − y), x − y⟩ = ∫_{σ(T)} d_{α,β}(t) ⟨E(dt)(x − y), x − y⟩.

We act analogously if max σ ( T ) < α . If both α and β are finite, then we have

⟨TUx, x⟩ − ((β + α)/2)⟨Ux, x⟩ = ∫_{−∞}^{α} (−t + (β + α)/2) ⟨E(dt)x, x⟩ + ∫_{β}^{+∞} (t − (β + α)/2) ⟨E(dt)x, x⟩.

Moreover, by the assumptions we obtain

|⟨N(x) − N(y), x − y⟩ − ((β + α)/2)⟨x − y, x − y⟩| ≤ ((β − α)/2)‖x − y‖².

Since N − ((β + α)/2)I is potential, we can use Lemma 3.5 to deduce that N − ((β + α)/2)I is ((β − α)/2)-Lipschitz. As U is an isometry, N∘U − ((β + α)/2)U is ((β − α)/2)-Lipschitz too. Consequently,

|⟨N(Ux) − N(Uy), x − y⟩ − ((β + α)/2)⟨U(x − y), x − y⟩| ≤ ((β − α)/2)‖x − y‖² = ∫_{σ(T)} ((β − α)/2) ⟨E(dt)(x − y), x − y⟩.

Therefore, we have

⟨A(x) − A(y), x − y⟩ = ⟨TU(x − y), x − y⟩ − ((β + α)/2)⟨U(x − y), x − y⟩ − [⟨N(Ux) − N(Uy), x − y⟩ − ((β + α)/2)⟨U(x − y), x − y⟩] ≥ ∫_{−∞}^{α} (−t + (β + α)/2) ⟨E(dt)(x − y), x − y⟩ + ∫_{β}^{+∞} (t − (β + α)/2) ⟨E(dt)(x − y), x − y⟩ − ∫_{σ(T)} ((β − α)/2) ⟨E(dt)(x − y), x − y⟩

= ∫_{−∞}^{α} (−t + α) ⟨E(dt)(x − y), x − y⟩ + ∫_{β}^{+∞} (t − β) ⟨E(dt)(x − y), x − y⟩ = ∫_{σ(T)} d_{α,β}(t) ⟨E(dt)(x − y), x − y⟩

and hence (3.11) holds. Condition (3.9) gives

(3.12) ⟨A(x) − A(y), x − y⟩ ≥ ∫_{σ(T)} (δ|t| + ω) ⟨E(dt)(x − y), x − y⟩ = ‖x − y‖²_{δ,ω}.

Since D(T) is dense in D(|T|^{1/2}) and A is demicontinuous by Lemma 2.1, it follows that (3.12) holds for all x, y ∈ D(|T|^{1/2}). Therefore, we can use the Strong Monotonicity Principle to obtain a unique y_h ∈ D(|T|^{1/2}) satisfying

t[Uy_h, y] − ⟨N(Uy_h), y⟩ − ⟨h, y⟩ = 0 for all y ∈ D(|T|^{1/2}).

By Proposition 3.1, Uy_h ∈ D(T) and TUy_h − N(Uy_h) = h. Therefore, x_h = Uy_h is the desired solution to (3.10). Direct calculations show that

‖x_{h_1} − x_{h_2}‖ ‖Uh_1 − Uh_2‖ ≥ ⟨x_{h_1} − x_{h_2}, Uh_1 − Uh_2⟩ = ⟨y_{h_1} − y_{h_2}, TUy_{h_1} − N(Uy_{h_1}) − TUy_{h_2} + N(Uy_{h_2})⟩ = ⟨A(y_{h_1}) − A(y_{h_2}), y_{h_1} − y_{h_2}⟩ ≥ δ ‖|T|^{1/2}(y_{h_1} − y_{h_2})‖² + ω ‖y_{h_1} − y_{h_2}‖² ≥ ( 2δω ‖|T|^{1/2}(y_{h_1} − y_{h_2})‖² + ω² ‖y_{h_1} − y_{h_2}‖² )^{1/2} ‖y_{h_1} − y_{h_2}‖ = ‖x_{h_1} − x_{h_2}‖_{2δω, ω²} ‖x_{h_1} − x_{h_2}‖

and hence, since U is an isometry, we conclude that the solution operator is Lipschitz.□

Assume that D(|T|^{1/2}) = X + Y, where X and Y are linear subspaces satisfying X ∩ Y = {0}. Let us recall that the min-max inequality says that

sup_{x∈X} inf_{y∈Y} J_h(x + y) ≤ inf_{y∈Y} sup_{x∈X} J_h(x + y).

We say that J_h is convex (resp. concave) along a subspace X if for every y ∈ Y the functional J_h(· + y) : X → ℝ is convex (resp. concave). For a given ξ ∈ ℝ, we define X_ξ = E((−∞, ξ]) D(|T|^{1/2}) and Y_ξ = E((ξ, +∞)) D(|T|^{1/2}). Moreover, we let X_{−∞} = Y_{+∞} = {0}.

Theorem 3.9

Assume that the distance between (α, β) and σ(T) is positive and let h ∈ H. Then the unique solution x_h to (3.10) satisfies the condition

J_h(x_h) = sup_{x∈X_α} inf_{y∈Y_β} J_h(x + y) = inf_{y∈Y_β} sup_{x∈X_α} J_h(x + y).

Proof

We define the operator A as in the proof of Theorem 3.8. Take x ∈ X_α and z ∈ D(|T|^{1/2}). Let us calculate

⟨J_h′(z + x) − J_h′(z), x⟩ = ⟨J_h′(U²z + x) − J_h′(U²z), x⟩ = ⟨J_h′(U(Uz − x)) − J_h′(U(Uz)), x⟩ = −⟨A(Uz − x) − A(Uz), (Uz − x) − Uz⟩ ≤ 0,

which means that J_h is concave along X_α. Analogously, it follows that J_h is convex along Y_β. Therefore, since x_h is a critical point of J_h, we have

sup_{x∈X_α} J_h(x_h + x) ≤ J_h(x_h) ≤ inf_{y∈Y_β} J_h(x_h + y),

which finally yields

inf_{y∈Y_β} sup_{x∈X_α} J_h(x + y) ≤ J_h(x_h) ≤ sup_{x∈X_α} inf_{y∈Y_β} J_h(x + y).

The min-max inequality gives the assertion.□
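The min-max characterization can be observed on a grid for the convex–concave quadratic from the introduction (case n = 1), where J(x, y) = −x² − (1/2)xy + (1/2)y² is concave along the first coordinate axis and convex along the second, with critical point (0, 0):

```python
import numpy as np

# J(x, y) = -x^2 - (1/2) x y + (1/2) y^2 is concave in x, convex in y,
# with critical point (0, 0) and J(0, 0) = 0.  On a symmetric grid both
# iterated extrema agree with the critical value, as in Theorem 3.9.
xs = np.linspace(-2.0, 2.0, 401)
ys = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(xs, ys, indexing="ij")
J = -X**2 - 0.5 * X * Y + 0.5 * Y**2

sup_inf = J.min(axis=1).max()        # sup_x inf_y J(x, y)
inf_sup = J.max(axis=0).min()        # inf_y sup_x J(x, y)
```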

3.3 The case of nonpotential perturbations

We start by providing the model problem which best illustrates our ideas. Let us consider

(3.13) −Δu = f(x, u(x), ∇u(x)) + h(x), u|_{∂Ω} = 0,

where f : Ω × ℝ × ℝ^m → ℝ is a Carathéodory function such that the related Niemytskii operator N_f : H₀¹(Ω) × H₀¹(Ω) → L²(Ω), defined by

(3.14) N_f(u, v)(x) = f(x, u(x), ∇v(x)) for a.e. x ∈ Ω,

is well defined. Then N_f(·, v) is potential for every v ∈ H₀¹(Ω). Moreover, problem (3.13) becomes

−Δu = N_f(u, u) + h.

Such a form of N_f allows us to consider separate assumptions on the term involving u (the potential part) and on the term involving ∇u (the nonpotential part).

The abstract approach which we present now is related to the so-called “freezing method” (see, for example, [13] or [14] for its applications). This method relies on freezing the term that is responsible for the nonpotentiality of the system, then on obtaining the solution operator of the new problem via some variational approach, and finally on reaching the solution of the original problem via a suitable fixed point theorem applied to this solution operator. Here we follow such a path, with the one exception that instead of a fixed point theorem we apply the Strong Monotonicity Principle.

Remark 3.10

With reference to problem (3.13), we freeze the gradient on the right-hand side and obtain the problem

(3.15) −Δu = N_f(u, v)

with a fixed parameter v, which is now a potential problem. The solution operator of (3.15) depends on v, and the assumptions which we impose make it strongly monotone and continuous. Hence, the Strong Monotonicity Principle applies.
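The freezing iteration itself can be sketched in a finite-dimensional setting: freeze the second argument, solve the resulting strongly monotone problem, and iterate the solution map. Everything below (T, G, h, the step sizes) is an illustrative assumption, not data from the paper:

```python
import numpy as np

# Finite-dimensional sketch of the freezing method for T x = G(x, x) + h:
# freeze the second slot, solve T x = G(x, z) + h (strongly monotone in x),
# then update the frozen z with the solution and repeat.
T = np.diag([1.0, 3.0, 5.0])
h = np.array([0.5, -1.0, 2.0])

def G(x, z):
    return 0.3 * np.sin(x) + 0.2 * np.cos(z)   # small Lipschitz perturbation

def solve_frozen(z, step=0.2, iters=500):
    """Solve T x = G(x, z) + h by a damped strongly monotone iteration."""
    x = np.zeros_like(h)
    for _ in range(iters):
        x = x - step * (T @ x - G(x, z) - h)
    return x

z = np.zeros_like(h)
for _ in range(50):                             # outer (freezing) fixed-point loop
    z = solve_frozen(z)

residual = np.max(np.abs(T @ z - G(z, z) - h))
```

The outer loop converges here because the solution operator is a contraction in the frozen variable, mirroring the smallness condition on ε in Theorem 3.11.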

We consider an operator G : D(|T|^{1/2}) × D(|T|^{1/2}) → H, which is an abstract counterpart of the operator N_f defined in (3.14) and corresponds to assumption (3.6). We assume that for every z ∈ D(|T|^{1/2}) there exists a Gâteaux differentiable functional G_z : D(|T|^{1/2}) → ℝ such that

⟨G(x, z), y⟩ = ⟨G_z′(x), y⟩ for all y ∈ D(|T|^{1/2})

and, analogously as before, we denote

α = inf_z inf_{x≠y} ⟨G(x, z) − G(y, z), x − y⟩ / ‖x − y‖², β = sup_z sup_{x≠y} ⟨G(x, z) − G(y, z), x − y⟩ / ‖x − y‖².

Theorem 3.11

Assume that the distance between (α, β) and σ(T) is positive. Take any δ, ω > 0 satisfying (3.9). If additionally there exists ε ∈ (0, 1) such that

‖G(z, x) − G(z, y)‖ ≤ ε ‖x − y‖_{2δω, ω²} for all x, y, z ∈ D(|T|^{1/2}),

then for every h ∈ H the problem

Tx = G(x, x) + h

has a unique solution x_h ∈ D(T). Moreover, the mapping

(H, ‖·‖) ∋ h ↦ x_h ∈ (D(|T|^{1/2}), ‖·‖_{|T|^{1/2}})

is Lipschitz.

Proof

We define an auxiliary operator Λ : D(|T|^{1/2}) → (D(|T|^{1/2}))* by the formula

⟨Λ(x), y⟩ = t[Ux, y] − ⟨G(Ux, Ux), y⟩ for all y ∈ D(|T|^{1/2}),

where U = I − 2E((−∞, α]). Note that for every z ∈ D(|T|^{1/2}) the operator G(·, z) satisfies the assumptions of Theorem 3.8. Hence, calculations analogous to those in the proof of Theorem 3.8 yield

⟨Λ(x) − Λ(y), x − y⟩ = t[Ux − Uy, x − y] − ⟨G(Ux, Ux) − G(Uy, Uy), x − y⟩ = t[Ux − Uy, x − y] − ⟨G(Ux, Ux) − G(Uy, Ux), x − y⟩ − ⟨G(Uy, Ux) − G(Uy, Uy), x − y⟩ ≥ ‖x − y‖²_{δ,ω} − ε ‖Ux − Uy‖_{2δω, ω²} ‖x − y‖ ≥ (1 − ε) ‖x − y‖²_{δ,ω} ≥ (1 − ε) ‖x − y‖_{2δω, ω²} ‖x − y‖.

Thus, we see that Λ is strongly monotone and it satisfies

$$\|\Lambda(x) - \Lambda(y)\| \ge (1-\varepsilon)\,\|x-y\|_{2\delta\omega,\,\omega^{2}} \quad \text{for all } x, y \in D(T^{1/2}).$$

Therefore, once again following the reasoning from the proof of Theorem 3.8 we obtain the conclusion.□

4 Applications

We assume that $\Omega \subset \mathbb{R}^m$ is an open and bounded set with a boundary of class $C^2$. Then we can consider the operator $T = -\Delta$ on the domain associated with the Neumann boundary value conditions, that is, $D(T) = D(\Delta_N)$. The operator $T$ is self-adjoint (see [10, Section 10.6]) and it has a pure point spectrum of the form $\sigma(T) = \{\lambda_0, \lambda_1, \lambda_2, \ldots\}$, where

$$0 = \lambda_0 < \lambda_1 < \lambda_2 < \lambda_3 < \cdots.$$

Consequently, $T = T^{*}$. Moreover, for every $\lambda_k$ there is a finite system of eigenfunctions $w_k^1, \ldots, w_k^{i_k} \in D(T)$. Hence,

$$X_{\lambda_k} = \operatorname{span} \bigcup_{j=0}^{k} \{w_j^1, \ldots, w_j^{i_j}\}, \qquad Y_{\lambda_{k+1}} = \left\{ u \in H^1(\Omega) : \int_\Omega u(x)v(x)\,dx = 0 \ \text{for all } v \in X_{\lambda_k} \right\}.$$

Moreover, $D(T^{1/2}) = H^1(\Omega)$. We shall consider two nonlinear problems driven by the Neumann Laplacian: a potential one and a nonpotential one. They are treated in the following subsections.
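In the one-dimensional case $\Omega = (0,\pi)$ the Neumann spectrum is $\{n^2 : n = 0, 1, 2, \ldots\}$, which can be checked numerically; the ghost-point finite-difference scheme below is our own illustration, not part of the paper.

```python
import numpy as np

# Finite-difference Neumann Laplacian T = -d^2/dx^2 on (0, pi),
# u'(0) = u'(pi) = 0, via ghost points at both ends.
n = 400
h = np.pi / n
A = 2.0 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)
A[0, 1] = -2.0      # reflects the condition u'(0) = 0
A[n, n - 1] = -2.0  # reflects the condition u'(pi) = 0

# The matrix is similar to a symmetric one, so the spectrum is real.
eigs = np.sort(np.linalg.eigvals(A / h**2).real)
print(np.round(eigs[:4], 3))  # ≈ [0, 1, 4, 9], the first Neumann eigenvalues n^2
```

The discrete eigenvalues are $(2 - 2\cos(k\pi/n))/h^2 \approx k^2$, so the leading ones agree with $n^2$ up to $O(h^2)$.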

Consider Carathéodory functions $f : \Omega \times \mathbb{R} \times \mathbb{R}^m \to \mathbb{R}$, $g : \Omega \times \mathbb{R} \to \mathbb{R}$ and assume that they generate continuous Niemytskii operators $N_f : H^1(\Omega) \times H^1(\Omega) \to L^2(\Omega)$ and $N_g : H^1(\Omega) \to L^2(\Omega)$, respectively, where

$$N_f(u,v)(x) = f(x, u(x), \nabla v(x)) \quad\text{and}\quad N_g(u)(x) = g(x, u(x)) \quad \text{for a.e. } x \in \Omega.$$

Remark 4.1

Using the Krasnoselskii theorem and the Gagliardo-Nirenberg-Sobolev embedding we can show that the operators $N_f$ and $N_g$ are continuous under the following growth conditions:

$$|f(x,u,v)| \le \gamma(x) + c\,\big(|u|^{\rho} + |v|\big) \quad \text{for a.e. } x \in \Omega \text{ and all } u \in \mathbb{R},\ v \in \mathbb{R}^m$$

and

$$|g(x,u)| \le \gamma(x) + c\,|u|^{\rho} \quad \text{for a.e. } x \in \Omega \text{ and all } u \in \mathbb{R},$$

where $\gamma \in L^2(\Omega)$ is a.e. positive, $c > 0$, and $\rho$ is any finite number if $m = 2$, while $\rho \le \frac{m}{m-2}$ otherwise.

4.1 Potential case

For a fixed $h \in L^2(\Omega)$, we consider the unique solvability of

$$\text{(4.1)} \qquad -\Delta u = g(x,u) + h(x) \ \text{ on } \Omega, \qquad \partial_\nu u = 0 \ \text{ on } \partial\Omega.$$

To study the unique solvability of problem (4.1) we use Theorems 3.8 and 3.9, taking $T$ to be the Neumann Laplacian and $N = N_g$ as a perturbation. Then we can define $\mathcal{E} : H^1(\Omega) \to \mathbb{R}$ by

$$\mathcal{E}(u) = \frac{1}{2}\int_\Omega |\nabla u(x)|^2\,dx - \int_\Omega \int_0^{u(x)} g(x,s)\,ds\,dx - \int_\Omega h(x)u(x)\,dx.$$

Therefore, we can obtain the following result.

Theorem 4.2

Assume that there exist $a, b \in \mathbb{R}$ such that $\lambda_{n-1} < a < b < \lambda_n$ for some $n \in \mathbb{N}$ and

$$\text{(4.2)} \qquad a\,|u-v|^2 \le (g(x,u) - g(x,v))(u-v) \le b\,|u-v|^2 \quad \text{for a.e. } x \in \Omega \text{ and all } u, v \in \mathbb{R}.$$

Then for every $h \in L^2(\Omega)$ problem (4.1) has a unique solution $u_h \in H^2(\Omega)$, which satisfies

$$\mathcal{E}(u_h) = \sup_{u \in X_{\lambda_{n-1}}}\ \inf_{v \in Y_{\lambda_n}} \mathcal{E}(u+v) = \inf_{v \in Y_{\lambda_n}}\ \sup_{u \in X_{\lambda_{n-1}}} \mathcal{E}(u+v).$$

Moreover, $h_n \to h$ in $L^2(\Omega)$ implies $u_{h_n} \to u_h$ in $H^1(\Omega)$.

Proof

We will apply Theorems 3.8 and 3.9 in the setting introduced prior to the formulation of the theorem. Note that assumption (4.2) provides

$$a \int_\Omega |u(x)-v(x)|^2\,dx \le \int_\Omega \big(g(x,u(x)) - g(x,v(x))\big)(u(x)-v(x))\,dx \le b \int_\Omega |u(x)-v(x)|^2\,dx.$$

Therefore, taking $\alpha$ and $\beta$ given by (3.7), we obtain $a \le \alpha$ and $\beta \le b$. This guarantees a positive distance between $(\alpha, \beta)$ and $\sigma(T)$. Consequently, we can use Theorem 3.8 to obtain the existence of a solution to (4.1). Now, since $J_h = \mathcal{E}$, we can also apply Theorem 3.9 to obtain the variational characterization of this solution.□
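In the spirit of the finite-dimensional motivation from the Introduction, the min-max characterization can be illustrated with a diagonal matrix; the data below ($T = \operatorname{diag}(1,4,9)$, the slope $a = 5/2$ between the eigenvalues $1$ and $4$, and the vector $h$) are our own toy choices.

```python
import numpy as np

# Toy version of Theorem 4.2: T = diag(1, 4, 9) stands in for the Laplacian
# and the nonlinearity is linear, g(u) = a*u with a strictly between
# the eigenvalues 1 and 4.
T = np.diag([1.0, 4.0, 9.0])
a = 2.5
h = np.array([1.0, -2.0, 0.5])

# Unique solution of T x = a x + h (the matrix T - aI is invertible).
xh = np.linalg.solve(T - a * np.eye(3), h)

def E(u):
    # Euler functional: (1/2)<Tu, u> - (a/2)|u|^2 - <h, u>
    return 0.5 * u @ T @ u - 0.5 * a * u @ u - h @ u

# xh is a saddle point of E: a maximum along the eigenspace below a
# (the first coordinate) and a minimum along the eigenspaces above a.
e1, e2 = np.eye(3)[0], np.eye(3)[1]
print(E(xh + 0.5 * e1) < E(xh) < E(xh + 0.5 * e2))  # True
```

Here the "low" space $X$ is spanned by the first coordinate vector and the "high" space $Y$ by the remaining ones, which mirrors the roles of $X_{\lambda_{n-1}}$ and $Y_{\lambda_n}$.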

Remark 4.3

It is worth mentioning that we omit the case when

$$(g(x,u) - g(x,v))(u-v) \le b\,|u-v|^2 \quad \text{for all } u, v \in \mathbb{R} \text{ and a.e. } x \in \Omega$$

for some $b < 0$. In this particular case, the solution $u_h$ satisfies the standard minimization condition

$$\mathcal{E}(u_h) = \min_{u \in H^1(\Omega)} \mathcal{E}(u).$$

We refer to [2] for details.

4.2 Non-potential case

For a fixed $h \in L^2(\Omega)$, we consider the unique solvability of

$$\text{(4.3)} \qquad -\Delta u = f(x, u, \nabla u) + h(x) \ \text{ on } \Omega, \qquad \partial_\nu u = 0 \ \text{ on } \partial\Omega.$$

For simplicity, we consider only the interplay between the growth assumptions imposed on $f$ and $\lambda_0 = 0$.

Theorem 4.4

Assume that there exist $b > 0$ and $\varepsilon < 2b$ such that

$$(f(x,u,w) - f(x,v,w))(u-v) \le -b\,|u-v|^2 \quad\text{and}\quad |f(x,u,w) - f(x,u,z)| \le \varepsilon\,|w-z|$$

for a.e. $x \in \Omega$, all $u, v \in \mathbb{R}$ and every $w, z \in \mathbb{R}^m$. Then for every $h \in L^2(\Omega)$ problem (4.3) has a unique solution $u_h \in H^2(\Omega)$. Moreover, $h_n \to h$ in $L^2(\Omega)$ implies $u_{h_n} \to u_h$ in $H^1(\Omega)$.

Proof

We will apply Theorem 3.11 with the operators $T$ and $G$ defined above. Direct calculations yield

$$\frac{\langle G(u,w) - G(v,w),\, u-v\rangle}{\|u-v\|^2} = \frac{\int_\Omega \big(f(x,u(x),\nabla w(x)) - f(x,v(x),\nabla w(x))\big)(u(x)-v(x))\,dx}{\int_\Omega |u(x)-v(x)|^2\,dx} \le \frac{-b\int_\Omega |u(x)-v(x)|^2\,dx}{\int_\Omega |u(x)-v(x)|^2\,dx} = -b$$

for all $u, v, w \in H^1(\Omega)$, and consequently $\beta \le -b$. Since $\sigma(T) \subset [0,\infty)$ and $\lambda_0 = 0$, we have $d_{\alpha,\beta}(t) \ge t + b$ for $t \in \sigma(T)$. Therefore, taking $\delta = 1$, $\omega = b$, we obtain (3.9). We show that $G(u,\cdot)$ is $\frac{\varepsilon}{2b}$-Lipschitz (for every fixed $u \in H^1(\Omega)$) with respect to $\|\cdot\|_{2\delta\omega,\,\omega^2}$. Let $u, v, w \in H^1(\Omega)$. Then

$$\|G(u,v) - G(u,w)\| = \left(\int_\Omega |f(x,u(x),\nabla v(x)) - f(x,u(x),\nabla w(x))|^2\,dx\right)^{1/2} \le \left(\int_\Omega \varepsilon^2\,|\nabla v(x) - \nabla w(x)|^2\,dx\right)^{1/2} = \varepsilon\,\|(-\Delta)^{1/2}(v-w)\| = \frac{\varepsilon}{2b}\cdot 2b\,\|(-\Delta)^{1/2}(v-w)\| \le \frac{\varepsilon}{2b}\,\|v-w\|_{2\delta\omega,\,\omega^2}.$$

Therefore, by Theorem 3.11, we obtain the assertion.□

Clearly, we can also consider gradient-dependent terms "between" eigenvalues. For the sake of simplicity, we now provide an explicit example for an ordinary differential equation.

Example 4.5

Consider a differential equation of the form

$$\text{(4.4)} \qquad -u''(x) = \frac{5}{2}\,u(x) + \frac{1}{3}\cos(u(x))\cos\!\left(\frac{1}{4}u'(x)\right) + x^{1/4} \ \text{ on } (0,\pi), \qquad u'(0) = u'(\pi) = 0.$$

The operator $Tu = -u''$ is self-adjoint on $D(T) = \{u \in H^2(0,\pi) : u'(0) = u'(\pi) = 0\}$. Moreover, $\sigma(T) = \{n^2 : n = 0, 1, \ldots\}$. Take $G : H^1(0,\pi) \times H^1(0,\pi) \to L^2(0,\pi)$ given pointwise by

$$G(u,v)(x) = \frac{5}{2}\,u(x) + \frac{1}{3}\cos(u(x))\cos\!\left(\frac{1}{4}v'(x)\right).$$

Then, for every fixed v H 1 ( 0 , π ) , we have

$$\frac{13}{6}\int_0^\pi |u(x)-w(x)|^2\,dx \le \langle G(u,v) - G(w,v),\, u-w\rangle \le \frac{17}{6}\int_0^\pi |u(x)-w(x)|^2\,dx$$

and

$$\|G(v,u) - G(v,w)\| \le \frac{1}{12}\left(\int_0^\pi |u'(x)-w'(x)|^2\,dx\right)^{1/2}$$

for all $u, w \in H^1(0,\pi)$. We see that $\frac{13}{6} \le \alpha \le \beta \le \frac{17}{6}$, and hence $d_{\alpha,\beta}(t) \ge \frac{1}{6}t + \frac{1}{3}$ for $t \in \sigma(T)$. Moreover, $\frac{1}{12} < \frac{1}{9} = 2\cdot\frac{1}{6}\cdot\frac{1}{3}$ and $h(x) = x^{1/4} \in L^2(0,\pi)$. Therefore, we can apply Theorem 3.11 to obtain the unique solvability of (4.4).
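The solution of (4.4) guaranteed above can also be approximated numerically with the freezing iteration; the finite-difference discretization and iteration below are our own illustration and are not part of the proof.

```python
import numpy as np

# Finite-difference sketch of problem (4.4):
#   -u'' = (5/2) u + (1/3) cos(u) cos(u'/4) + x^{1/4}  on (0, pi),
#   u'(0) = u'(pi) = 0.
n = 400
hstep = np.pi / n
x = np.linspace(0.0, np.pi, n + 1)

# Ghost-point Neumann Laplacian L = -d^2/dx^2.
A = 2.0 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)
A[0, 1] = -2.0
A[n, n - 1] = -2.0
L = A / hstep**2

def rhs(u):
    up = np.gradient(u, hstep)  # discrete derivative u'
    return 2.5 * u + np.cos(u) * np.cos(up / 4.0) / 3.0 + x**0.25

# Freezing iteration: solve the linear problem (L - 5/2 I) u_next = nonlinearity(u),
# freezing both u inside cos(u) and the gradient term at the previous iterate.
M = L - 2.5 * np.eye(n + 1)
u = np.zeros(n + 1)
for _ in range(200):
    up = np.gradient(u, hstep)
    u = np.linalg.solve(M, np.cos(u) * np.cos(up / 4.0) / 3.0 + x**0.25)

# Relative residual of the discrete equation L u = rhs(u).
print(np.linalg.norm(L @ u - rhs(u)) / np.linalg.norm(rhs(u)))  # ≈ 0
```

The matrix $L - \frac{5}{2}I$ is invertible because the discrete eigenvalues stay close to $\{0, 1, 4, 9, \ldots\}$, away from $5/2$; the smallness of the coefficients $1/3$ and $1/12$ makes the iteration contractive in this discretization.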

We conclude our considerations by transferring Theorem 4.4 to the case of Dirichlet boundary conditions in order to exhibit some apparent differences in the nonpotential case.

Remark 4.6

If there exist positive $b < \lambda_1$ and $\varepsilon < \frac{(\lambda_1 - b)^2}{2\lambda_1}$ such that $f$ satisfies

$$(f(x,u,w) - f(x,v,w))(u-v) \le b\,|u-v|^2 \quad\text{and}\quad |f(x,u,w) - f(x,u,z)| \le \varepsilon\,|w-z|$$

for a.e. $x \in \Omega$, all $u, v \in \mathbb{R}$ and all $w, z \in \mathbb{R}^m$, then the following nonlinear Dirichlet boundary value problem is uniquely solvable:

$$-\Delta u = f(x, u, \nabla u) + h(x) \ \text{ on } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega.$$

Similarly to the Neumann case, we use Theorem 3.11, taking $\delta = \frac{\lambda_1 - b}{2\lambda_1}$ and $\omega = \frac{\lambda_1 - b}{2}$.
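The stated choice of $\delta$ and $\omega$ can be verified directly; the following elementary computation is ours.

```latex
% Since 0 < b < \lambda_1 and \sigma(T) \subset [\lambda_1, \infty) for the
% Dirichlet Laplacian, condition (3.9) amounts to t - b \ge \delta t + \omega
% for all t \ge \lambda_1.  With \delta = (\lambda_1 - b)/(2\lambda_1) and
% \omega = (\lambda_1 - b)/2 we have, for t \ge \lambda_1,
\[
  (1-\delta)\,t - \omega
  \;=\; \frac{\lambda_1 + b}{2\lambda_1}\,t - \frac{\lambda_1 - b}{2}
  \;\ge\; \frac{\lambda_1 + b}{2} - \frac{\lambda_1 - b}{2}
  \;=\; b,
\]
% so that d_{\alpha,\beta}(t) \ge t - b \ge \delta t + \omega on \sigma(T),
% with equality precisely at t = \lambda_1.
```

The bound is sharp at $t = \lambda_1$, which explains why $\delta$ and $\omega$ cannot be enlarged simultaneously.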

Acknowledgements

This article has been completed while one of the authors – Michał Bełdziński, was the Doctoral Candidate in the Interdisciplinary Doctoral School at the Lodz University of Technology, Poland.

Conflict of interest: The authors state no conflict of interest.

References

[1] N. S. Papageorgiou, V. D. Rǎdulescu and D. D. Repovš, Nonlinear Analysis - Theory and Methods, Springer International Publishing, 2019.

[2] J. Mawhin, Problèmes de Dirichlet Variationnels Non Linéaires, vol. 1, Presses de l'Université de Montréal, Montréal, QC, 1987.

[3] D. Idczak, Stability in semilinear problems, J. Differ. Equ. 162 (2000), no. 1, 64–90.

[4] M. Bełdziński and M. Galewski, On solvability of elliptic boundary value problems via global invertibility, Opuscula Math. 40 (2020), no. 1, 37–47.

[5] J. Weidmann, Linear Operators in Hilbert Spaces, vol. 68, Springer-Verlag, New York, 1980.

[6] M. Galewski, Basic Monotonicity Methods with Some Applications, Compact Textbooks in Mathematics, Birkhäuser, Springer Nature, 2021.

[7] J. Franců, Monotone operators. A survey directed to applications to differential equations, Applications of Mathematics 35 (1990), no. 4, 257–301.

[8] H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations, Springer Science & Business Media, 2010.

[9] L. C. Evans, Partial Differential Equations, 2nd ed., American Mathematical Society, Providence, RI, 2010.

[10] K. Schmüdgen, Unbounded Self-adjoint Operators on Hilbert Space, vol. 265, Springer Science & Business Media, Dordrecht, 2012.

[11] D. G. De Figueiredo, Lectures on the Ekeland Variational Principle with Applications and Detours, vol. 81, Springer, Berlin, 1989.

[12] D. Idczak, Bipolynomial fractional Dirichlet-Laplace problem, Electron. J. Differ. Equ. 2019 (2019), no. 59, 1–17.

[13] N. S. Papageorgiou, C. Vetro and F. Vetro, Nonlinear multivalued Duffing systems, J. Math. Anal. Appl. 468 (2018), 376–390.

[14] N. S. Papageorgiou, C. Vetro and F. Vetro, Nonlinear vector Duffing inclusions with no growth restriction on the orientor field, Topol. Methods Nonlinear Anal. 54 (2019), no. 1, 257–274.

Received: 2021-11-17
Revised: 2022-01-04
Accepted: 2022-01-11
Published Online: 2022-03-09

© 2022 Michał Bełdziński et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
