Article

Integrating Semilinear Wave Problems with Time-Dependent Boundary Values Using Arbitrarily High-Order Splitting Methods

1 Departamento de Matemática Aplicada, Facultad de Ciencias, Universidad de Valladolid, Paseo de Belén 7, 47011 Valladolid, Spain
2 Departamento de Matemática Aplicada, Escuela de Ingenierías Industriales, Universidad de Valladolid, Paseo del Cauce 59, 47011 Valladolid, Spain
* Author to whom correspondence should be addressed.
Instituto de Investigación en Matemáticas de la Universidad de Valladolid (IMUVa), 47011 Valladolid, Spain.
These authors contributed equally to this work.
Mathematics 2021, 9(10), 1113; https://doi.org/10.3390/math9101113
Submission received: 9 April 2021 / Revised: 7 May 2021 / Accepted: 12 May 2021 / Published: 14 May 2021
(This article belongs to the Special Issue Numerical Methods for Evolutionary Problems)

Abstract

The initial boundary-value problem associated to a semilinear wave equation with time-dependent boundary values is approximated by using the method of lines. Time integration is achieved by means of an explicit time method obtained from an arbitrarily high-order splitting scheme. We propose a technique to incorporate the boundary values that is more accurate than the one obtained in the standard way, as is clearly seen in the numerical experiments. We prove the consistency and convergence, with the same order as the splitting method, of the full discretization carried out with this technique. Although the mathematical analysis is performed under the hypothesis that the source term is Lipschitz-continuous, numerical experiments show that the technique works in more general cases.

1. Introduction

We consider full discretizations, by means of the method of lines, of a semilinear second-order in time evolutionary problem with time-dependent boundary values. For this, we first discretize in space by means of finite differences, obtaining a system of ordinary differential equations.
For the time integration, we rewrite the semidiscrete system as a first-order system in time and apply an arbitrary splitting scheme. Useful descriptions of splitting methods can be found in the review articles [1,2,3]. Splitting schemes are especially useful in the field of geometric integration: splitting integrators preserve the structural properties of the flow of the original problem as long as the flows of the intermediate problems do. The good performance of geometric integrators in the long-term integration of systems of Hamiltonian ODEs is well documented in [4,5].
In this way, we obtain a time integrator that is explicit and therefore cheap to implement, with the disadvantage that its stability interval is finite. However, for these second-order in time evolutionary problems the stability condition is acceptable, and the step sizes in time and space may be taken of similar size. It is also possible to use implicit methods (see, for example, [6], where Gautschi methods are studied while avoiding the order-reduction phenomenon that appears with these methods).
A splitting method works in three steps [3]: first, the problem is split into several simpler intermediate problems; then, each intermediate problem is integrated either exactly or approximately; lastly, the solutions of the intermediate problems are composed to obtain an approximation of a certain order to the original problem.
Denoting by h the step size in space and by k the step size in time, we split the semidiscrete problem into two intermediate problems with exact flows given by $\Phi^{[1]}_{h,t}$ and $\Phi^{[2]}_{h,t}$. We consider a general splitting integrator with m stages and coefficients $a_j, b_j$, $j = 1, \dots, m$,
$$\Psi_{h,k} = \Phi^{[2]}_{h, b_m k} \circ \Phi^{[1]}_{h, a_m k} \circ \cdots \circ \Phi^{[1]}_{h, a_2 k} \circ \Phi^{[2]}_{h, b_1 k} \circ \Phi^{[1]}_{h, a_1 k}.$$
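As an illustration only (not part of the original article), the composition above can be sketched in a few lines of Python; the names splitting_step, flow1 and flow2 are hypothetical, and the time dependence of the flows is omitted for brevity.

```python
def splitting_step(state, k, a, b, flow1, flow2):
    """One step of size k of an m-stage splitting method.

    a, b  : sequences with the m coefficients a_j, b_j
    flow1 : flow1(tau, state) -> state, (approximate) flow of the first subproblem
    flow2 : flow2(tau, state) -> state, (approximate) flow of the second subproblem
    """
    for aj, bj in zip(a, b):
        state = flow1(aj * k, state)   # advance a_j k with Phi^[1]
        state = flow2(bj * k, state)   # advance b_j k with Phi^[2]
    return state
```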
In the case of the standard method of lines, each intermediate problem must be integrated in time (either exactly or numerically, using a sufficiently accurate quadrature formula); for this, the term due to the space discretization of the nonvanishing boundary values must be treated in the same way as the source term. Since each stage of the splitting method applied to the spatial discretization is then exactly integrated, we deduce that the optimal order is achieved, at least for a fixed thickness of the spatial discretization.
In this paper, we propose a technique to cheaply and effectively incorporate the boundary values into the time integration carried out by the splitting method. The technique is consistent with the way these values are used to approximate the spatial differential operator at the boundary. We prove consistency and convergence with the optimal order of the full discretization obtained with our technique under the hypothesis that the nonlinear term of the original second-order in time problem is Lipschitz-continuous; numerical experiments show, however, that this hypothesis is not necessary in practice. Moreover, the numerical experiments clearly show the superiority of this technique compared to the standard method of lines, with smaller errors when refining the discretization in both space and time.
Throughout this paper, several constants that are independent of the time step of the time integration are denoted in the same way (usually with the letter C, possibly with some subscript).
The paper is organized as follows. The studied problem and its spatial discretization are introduced in Section 2. Section 3 is devoted to the standard method of lines, with the time discretization performed by a splitting method. In Section 4, we explain the alternative method that we propose to incorporate the boundary values when implementing the splitting method. Numerical experiments that clearly show the better accuracy of the proposed method versus the standard method of lines are presented in Section 5. The mathematical analysis of the convergence is developed in Section 6, where consistency is proved, and in Section 7, where convergence is established along with a brief review of the required linear stability.

2. Preliminaries

2.1. Partial Differential Equation

Let X and Y be Hilbert spaces, $D(A) \subset X$ a dense subspace, and $A : D(A) \subset X \to X$, $B : D(A) \subset X \to Y$ two closed linear operators. We consider the abstract second-order in time semilinear equation given by
$$u''(t) = A u(t) + f(t, u(t)), \quad t \in [0,T], \qquad u(0) = u_0, \quad u'(0) = v_0, \qquad B u(t) = g(t),$$
where the source term $f : [0,T] \times D(A) \to X$ is a smooth function that is generally nonlinear. In practice, the data f and g, the solution u, and the operators A and B are defined on a domain $\Omega \subset \mathbb{R}^n$, and they may depend on the spatial variables. We do not make this dependence explicit in the abstract formulation (2) in order to simplify the notation.
We make the following hypotheses on operators A and B:
(A1)
Operator B is onto.
(A2)
$\operatorname{Ker}(B) = D(A_0)$ is a dense subspace of X and $A_0 = A|_{D(A_0)}$ is a negative definite self-adjoint operator. We denote $S_0 = (-A_0)^{1/2}$.
(A3)
Steady-state problem
$$A x = 0, \qquad B x = v \in Y,$$
possesses a unique solution denoted by $x = K(0) v$, and there exists a constant C such that the linear operator $K(0) : Y \to D(A)$ satisfies
$$\|K(0) v\|_X \le C \|v\|_Y.$$
(A4)
The solution u of (2) satisfies $u(t) \in D(A)$ for $t \in [0,T]$ and is smooth enough in time.
(A5)
Source term f ( t , u ) is a Lipschitz-continuous function with respect to variable u.
Remark 1.
Because of Hypotheses (A2) and (A3), we deduce that linear problem
$$u''(t) = A u(t) + f(t), \quad t \in [0,T], \qquad u(0) = u_0, \quad u'(0) = v_0, \qquad B u(t) = g(t),$$
is well-posed; see [7,8]. Moreover, Hypothesis (A2) may be generalized to the case of A being a cosine operator [9,10].

2.2. Spatial Discretization

Our first step to discretize (2) by means of the method of lines is the spatial discretization. Let $h \in (0, h_0]$ be a parameter that measures the thickness of the spatial discretization. We assume that $X_h$ is a family of finite-dimensional spaces that approximate X. The discrete norm in $X_h$ is denoted by $\|\cdot\|_h$. Moreover, there is a subspace $X_{h,0} \subset X_h$ where the elements of $D(A_0)$ are well approximated by using $P_h : D(A) \subset X \to X_{h,0}$; that is, we assume that $P_h u$ is a best approximation when $u \in D(A_0)$. The boundary values are discretized by means of the linear operator $Q_h : Y \to X_{h,b}$, where $X_h = X_{h,0} \oplus X_{h,b}$.
The operator A is approximated by using operators $A_h : X_h \to X_{h,0}$ and $A_{h,0} = A_h|_{X_{h,0}}$. When $u \in D(A)$,
$$A_h (P_h u + Q_h B u) = A_{h,0} P_h u + A_h Q_h B u \approx P_h A u.$$
In practice, if we look for the solution $u \in D(A)$ of the steady-state problem
$$A u = F, \qquad B u = g,$$
where $F \in X$ and $g \in Y$, we cannot obtain $P_h u$. Instead, we can compute $R_h u$ satisfying
$$A_h (R_h u + Q_h g) = A_{h,0} R_h u + A_h Q_h g = P_h F.$$
In order to discretize the source term, we suppose that the function f can be defined on the space $X_{h,0}$. That is, we can consider $f : [0,T] \times X_{h,0} \to X_{h,0}$, and
$$P_h f(t, u) = f(t, P_h u),$$
for each $u \in X$ and $t \in [0,T]$.
With this spatial discretization, we obtain the semidiscrete ordinary differential system
$$u_h''(t) = A_{h,0} u_h(t) + A_h Q_h g(t) + f(t, u_h(t)), \qquad u_h(0) = P_h u_0, \quad u_h'(0) = P_h v_0.$$
We make the following hypotheses:
(H1)
There exists a constant C independent of h such that, for $u \in D(A)$ and small enough h,
$$\|P_h u\|_h \le C \|u\|.$$
(H2)
The operator $A_{h,0}$ is symmetric and negative definite. Let $S_{h,0}$ be the symmetric and positive definite operator such that $S_{h,0}^2 = -A_{h,0}$. We also assume that $A_{h,0}$ and $S_{h,0}$ are invertible and that their inverses are uniformly bounded in h.
(H3)
There exists a subspace $Z \subset D(A)$ with norm $\|\cdot\|_Z$ such that
$$\|A_{h,0} (R_h - P_h) u\|_h \le \varepsilon_h \|u\|_Z,$$
for each $u \in Z$, where we suppose that $\varepsilon_h \to 0$ when $h \to 0$.
(H4)
f : [ 0 , T ] × X h , 0 X h , 0 is Lipschitz-continuous.
With these hypotheses, we can prove that the solution of (5) is a good approximation of the one of (2).
Theorem 1.
Assume Hypotheses (A1)–(A5) and (H1)–(H4), that $g \in C^1([0,T], Y)$, $f \in C([0,T] \times X_{h,0}, X_{h,0})$ and $u \in C([0,T], Z)$. Then, the spatial error $e_h(t) = P_h u(t) - u_h(t)$ satisfies
$$\|e_h(t)\|_{E_h} = \left\| \begin{bmatrix} P_h u(t) - u_h(t) \\ P_h u'(t) - u_h'(t) \end{bmatrix} \right\|_{E_h} = \Big( \|S_{h,0}(P_h u(t) - u_h(t))\|_h^2 + \|P_h u'(t) - u_h'(t)\|_h^2 \Big)^{1/2} \le C \varepsilon_h,$$
where C only depends on T, u, u′ and the Lipschitz constant L.
Proof. 
Applying $P_h$ to (2), considering (4), and taking the difference with (5),
$$P_h u''(t) - u_h''(t) = A_{h,0}\big(P_h u(t) - u_h(t)\big) + f(t, P_h u(t)) - f(t, u_h(t)) - A_{h,0}\big(P_h u(t) - R_h u(t)\big), \qquad P_h u(0) - u_h(0) = 0, \quad P_h u'(0) - u_h'(0) = 0.$$
Rewriting (6) as a first-order in time problem,
$$\frac{d}{dt}\begin{bmatrix} P_h u(t) - u_h(t) \\ P_h u'(t) - u_h'(t) \end{bmatrix} = \begin{bmatrix} 0 & I_h \\ A_{h,0} & 0 \end{bmatrix} \begin{bmatrix} P_h u(t) - u_h(t) \\ P_h u'(t) - u_h'(t) \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, P_h u(t)) - f(t, u_h(t)) - A_{h,0}(P_h u(t) - R_h u(t)) \end{bmatrix}, \qquad \begin{bmatrix} P_h u(0) - u_h(0) \\ P_h u'(0) - u_h'(0) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
Using that
$$\exp\left( t \begin{bmatrix} 0 & I_h \\ A_{h,0} & 0 \end{bmatrix} \right) = \begin{bmatrix} \cos(t S_{h,0}) & S_{h,0}^{-1} \sin(t S_{h,0}) \\ -S_{h,0} \sin(t S_{h,0}) & \cos(t S_{h,0}) \end{bmatrix},$$
the variation-of-constants formula and the vanishing initial conditions, we deduce that
$$\begin{bmatrix} P_h u(t) - u_h(t) \\ P_h u'(t) - u_h'(t) \end{bmatrix} = \int_0^t \begin{bmatrix} S_{h,0}^{-1} \sin((t-s) S_{h,0}) \big( f(s, P_h u(s)) - f(s, u_h(s)) - A_{h,0}(P_h u(s) - R_h u(s)) \big) \\ \cos((t-s) S_{h,0}) \big( f(s, P_h u(s)) - f(s, u_h(s)) - A_{h,0}(P_h u(s) - R_h u(s)) \big) \end{bmatrix} ds.$$
Now, using (H2)–(H4) and that $u \in C([0,T], Z)$, we have
$$\left\| \begin{bmatrix} P_h u(t) - u_h(t) \\ P_h u'(t) - u_h'(t) \end{bmatrix} \right\|_{E_h} = \Big( \|S_{h,0}(P_h u(t) - u_h(t))\|_h^2 + \|P_h u'(t) - u_h'(t)\|_h^2 \Big)^{1/2} \le 2 L \int_0^t \|P_h u(s) - u_h(s)\|_h \, ds + 2 T \varepsilon_h \max_{t \in [0,T]} \|u(t)\|_Z.$$
Applying the Gronwall lemma, we obtain, for $t \in [0,T]$,
$$\left\| \begin{bmatrix} P_h u(t) - u_h(t) \\ P_h u'(t) - u_h'(t) \end{bmatrix} \right\|_{E_h} \le 2 T e^{2 L T} \varepsilon_h \max_{t \in [0,T]} \|u(t)\|_Z. \qquad \square$$
Therefore, we can obtain a good approximation of the solution of (2) by considering the solution of (5) with a small enough value of h. Next, we use a time integrator to achieve full discretization. For this, we consider a splitting scheme. In the next two sections, we study two different ways of incorporating the boundary values with the full discretization.

3. Full Discretization: Standard Method of Lines

First, we rewrite (5) as a first-order differential problem. We denote $u_h(t) = [u_{1,h}(t), u_{2,h}(t)]^T = [u_h(t), u_h'(t)]^T$ and obtain the system
$$\begin{bmatrix} u_{1,h}' \\ u_{2,h}' \end{bmatrix} = \begin{bmatrix} 0 & I_h \\ A_{h,0} & 0 \end{bmatrix} \begin{bmatrix} u_{1,h} \\ u_{2,h} \end{bmatrix} + \begin{bmatrix} 0 \\ A_h Q_h g(t) \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, u_{1,h}) \end{bmatrix}, \qquad \begin{bmatrix} u_{1,h}(0) \\ u_{2,h}(0) \end{bmatrix} = \begin{bmatrix} P_h u_0 \\ P_h v_0 \end{bmatrix},$$
of which the exact flow is denoted by $u_h(t) = \Phi_{h,t} u_h(0)$.
Second, we apply to (7) a splitting scheme in the usual way. We choose a split of (7) into two intermediate problems. The first is
$$\begin{bmatrix} z_{1,h}' \\ z_{2,h}' \end{bmatrix} = \begin{bmatrix} 0 & I_h \\ 0 & 0 \end{bmatrix} \begin{bmatrix} z_{1,h} \\ z_{2,h} \end{bmatrix},$$
of which the exact flow is denoted by $z_h(t) = \Phi^{I_h}_{h,t} z_h(0)$, and the second is
$$\begin{bmatrix} \tilde z_{1,h}' \\ \tilde z_{2,h}' \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ A_{h,0} & 0 \end{bmatrix} \begin{bmatrix} \tilde z_{1,h} \\ \tilde z_{2,h} \end{bmatrix} + \begin{bmatrix} 0 \\ A_h Q_h g(t) \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, \tilde z_{1,h}) \end{bmatrix},$$
of which the exact flow is denoted by $\tilde z_h(t) = \Phi^{A_{h,0}+g+f}_{h,t} \tilde z_h(0)$.
Let $k > 0$ be the time step; we consider $t_n = n k$, $n \ge 1$, and $u_{h,n} = [u_{1,h,n}, u_{2,h,n}]^T \approx [u_{1,h}(t_n), u_{2,h}(t_n)]^T = u_h(t_n)$. We now consider a general splitting integrator with m stages and coefficients $a_j, b_j$, $j = 1, \dots, m$. This splitting scheme composes the flows of the intermediate problems to obtain order p as follows:
$$\tilde z_{0,h,n} = u_{h,n}, \qquad \tilde z_{j,h,n} = \Phi^{A_{h,0}+g+f}_{h, b_j k} \circ \Phi^{I_h}_{h, a_j k} \, \tilde z_{j-1,h,n}, \quad j = 1, \dots, m, \qquad u_{h,n+1} = \tilde z_{m,h,n},$$
where z ˜ j , h , n = [ z ˜ 1 , j , h , n , z ˜ 2 , j , h , n ] T .
To write the j-th stage in (10) more explicitly, we first consider the initial-value problem
$$z_{1,j,h,n}'(s) = z_{2,j,h,n}(s), \quad s \in [0, a_j k], \qquad z_{2,j,h,n}'(s) = 0, \qquad z_{1,j,h,n}(0) = \tilde z_{1,j-1,h,n}(b_{j-1} k), \quad z_{2,j,h,n}(0) = \tilde z_{2,j-1,h,n}(b_{j-1} k),$$
and, by advancing a step $a_j k$ in time,
$$z_{1,j,h,n}(a_j k) = \tilde z_{1,j-1,h,n}(b_{j-1} k) + a_j k \, \tilde z_{2,j-1,h,n}(b_{j-1} k), \qquad z_{2,j,h,n}(a_j k) = \tilde z_{2,j-1,h,n}(b_{j-1} k).$$
Second, we consider
$$\tilde z_{1,j,h,n}'(s) = 0, \quad s \in [0, b_j k], \qquad \tilde z_{2,j,h,n}'(s) = A_{h,0} \tilde z_{1,j,h,n}(s) + A_h Q_h g\Big(t_n + k \sum_{i=1}^{j-1} b_i + s\Big) + f\Big(t_n + k \sum_{i=1}^{j-1} b_i + s, \tilde z_{1,j,h,n}(s)\Big), \qquad \tilde z_{1,j,h,n}(0) = z_{1,j,h,n}(a_j k), \quad \tilde z_{2,j,h,n}(0) = z_{2,j,h,n}(a_j k),$$
and we advance a step $b_j k$ in time,
$$\tilde z_{1,j,h,n}(b_j k) = z_{1,j,h,n}(a_j k), \qquad \tilde z_{2,j,h,n}(b_j k) = z_{2,j,h,n}(a_j k) + b_j k \, A_{h,0} z_{1,j,h,n}(a_j k) + A_h Q_h \int_0^{b_j k} g\Big(t_n + k \sum_{i=1}^{j-1} b_i + \tau\Big) d\tau + \int_0^{b_j k} f\Big(t_n + k \sum_{i=1}^{j-1} b_i + \tau, z_{1,j,h,n}(a_j k)\Big) d\tau.$$
With this full discretization, we obtain order p in time, since this is the order of the splitting method that we use and the flows of the intermediate problems are exactly calculated, provided that both integrals are computed exactly. In any case, we can always use a quadrature rule with the same accuracy as that of the splitting method. Convergence must be obtained in the discrete energy norm, and a suitable stability hypothesis is needed, similarly to the case of the discretization studied in the next section.

4. Full Discretization: An Alternative Way to Incorporate Boundary Values

The standard method of lines studied in Section 3 seems to be optimal, especially when the integrals in (11) are exactly calculated. However, the two integrals have very different origins, since one of them comes from the source term and the other one from the boundary values. Furthermore, the integral that comes from the discretization of the boundary values in (11) can be arbitrarily large when the spatial discretization is refined. This is because the operator $A_h$ arises from the approximation at the boundary of the differential operator A, which is unbounded.
We now introduce a full discretization that incorporates the boundary values more suitably, as we show in the numerical experiments in Section 5. For this purpose, we need a notation more consistent with the fact that we discretize an initial boundary-value problem (cf. [11]).
Suppose that $x_h \in X_{h,0}$, $v_h \in X_{h,b}$ and we want to calculate $A_{h,0} x_h + A_h v_h$. Then, we denote
$$B_h x_h = v_h,$$
and
$$A_h x_h = A_{h,0} x_h + A_h B_h x_h = A_{h,0} x_h + A_h v_h.$$
In this way, the semidiscrete Problem (5) can be rewritten as
$$u_h'' = A_h u_h + f(t, u_h), \qquad u_h(0) = P_h u_0, \quad u_h'(0) = P_h v_0, \qquad B_h u_h(t) = Q_h g(t),$$
which is more similar to the original problem.
With this notation, we rewrite Problem (12) as the first-order differential system
$$\begin{bmatrix} u_{1,h}' \\ u_{2,h}' \end{bmatrix} = \begin{bmatrix} 0 & I_h \\ A_h & 0 \end{bmatrix} \begin{bmatrix} u_{1,h} \\ u_{2,h} \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, u_{1,h}) \end{bmatrix}, \qquad \begin{bmatrix} u_{1,h}(0) \\ u_{2,h}(0) \end{bmatrix} = \begin{bmatrix} P_h u_0 \\ P_h v_0 \end{bmatrix}, \qquad B_h u_{1,h}(t) = Q_h g(t),$$
of which the exact flow is given by u h ( t ) = Φ h , t u h ( 0 ) , as in Section 3.
For time discretization, we consider the same splitting scheme as that in Section 3, and the challenge is to obtain a suitable way with which to incorporate the boundary values. For this, we split Problem (13) into two intermediate problems; the first is
$$\begin{bmatrix} v_{1,h}' \\ v_{2,h}' \end{bmatrix} = \begin{bmatrix} 0 & I_h \\ 0 & 0 \end{bmatrix} \begin{bmatrix} v_{1,h} \\ v_{2,h} \end{bmatrix},$$
of which the exact flow is given by $v_h(t) = \Phi^{I_h}_{h,t} v_h(0)$, and the second is
$$\begin{bmatrix} w_{1,h}' \\ w_{2,h}' \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ A_h & 0 \end{bmatrix} \begin{bmatrix} w_{1,h} \\ w_{2,h} \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, w_{1,h}) \end{bmatrix},$$
of which the exact flow is $w_h(t) = \Phi^{A_h + f}_{h,t} w_h(0)$, where the value of $B_h w_{1,h}(0)$, necessary to compute $A_h w_{1,h}(0) = A_{h,0} w_{1,h}(0) + A_h B_h w_{1,h}(0)$, must be chosen every time this second intermediate problem is used in the final scheme.
Suppose that we have already computed $u_{h,n} = [u_{1,h,n}, u_{2,h,n}]^T \approx [u_{1,h}(t_n), u_{2,h}(t_n)]^T = u_h(t_n)$. As in Section 3, we use a general splitting method with m stages, with coefficients $a_j, b_j$, $j = 1, \dots, m$. Therefore,
$$w_{0,h,n} = u_{h,n}, \qquad w_{j,h,n} = \Phi^{A_h + f}_{h, b_j k} \circ \Phi^{I_h}_{h, a_j k} \, w_{j-1,h,n}, \quad j = 1, \dots, m, \qquad u_{h,n+1} = w_{m,h,n},$$
where w j , h , n = [ w 1 , j , h , n , w 2 , j , h , n ] T .
To write the j-th stage in (16) more explicitly, we first consider the initial-value problem
$$v_{1,j,h,n}'(s) = v_{2,j,h,n}(s), \quad s \in [0, a_j k], \qquad v_{2,j,h,n}'(s) = 0, \qquad v_{1,j,h,n}(0) = w_{1,j-1,h,n}(b_{j-1} k), \quad v_{2,j,h,n}(0) = w_{2,j-1,h,n}(b_{j-1} k),$$
of which the solution at $s = a_j k$ is
$$v_{1,j,h,n}(a_j k) = w_{1,j-1,h,n}(b_{j-1} k) + a_j k \, w_{2,j-1,h,n}(b_{j-1} k), \qquad v_{2,j,h,n}(a_j k) = w_{2,j-1,h,n}(b_{j-1} k),$$
and we assign the boundary value
$$B_h v_{1,j,h,n}(a_j k) = Q_h g\Big(t_n + k \sum_{i=1}^{j} a_i\Big).$$
Then,
$$w_{1,j,h,n}'(s) = 0, \quad s \in [0, b_j k], \qquad w_{2,j,h,n}'(s) = A_h w_{1,j,h,n}(s) + f\Big(t_n + k \sum_{i=1}^{j-1} b_i + s, w_{1,j,h,n}(s)\Big), \qquad w_{1,j,h,n}(0) = v_{1,j,h,n}(a_j k), \quad w_{2,j,h,n}(0) = v_{2,j,h,n}(a_j k),$$
where, using (18), we can calculate
$$A_h w_{1,j,h,n}(s) = A_{h,0} w_{1,j,h,n}(s) + A_h B_h w_{1,j,h,n}(s) = A_{h,0} w_{1,j,h,n}(s) + A_h B_h v_{1,j,h,n}(a_j k) = A_{h,0} w_{1,j,h,n}(s) + A_h Q_h g\Big(t_n + k \sum_{i=1}^{j} a_i\Big),$$
and we deduce that its solution at $s = b_j k$ is
$$w_{1,j,h,n}(b_j k) = v_{1,j,h,n}(a_j k), \qquad w_{2,j,h,n}(b_j k) = v_{2,j,h,n}(a_j k) + b_j k \, A_{h,0} v_{1,j,h,n}(a_j k) + b_j k \, A_h Q_h g\Big(t_n + k \sum_{i=1}^{j} a_i\Big) + \int_0^{b_j k} f\Big(t_n + k \sum_{i=1}^{j-1} b_i + \tau, v_{1,j,h,n}(a_j k)\Big) d\tau.$$
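The following Python sketch (not from the article; the names step_proposed, AhQh_g and quad_f are hypothetical) illustrates how Formulas (17)–(19) with the boundary choice (18) could be implemented, assuming NumPy arrays for the state and a user-supplied quadrature of sufficient order for the f-integral.

```python
def step_proposed(u1, u2, tn, k, a, b, Ah0, AhQh_g, quad_f):
    """One step of size k of the splitting with the proposed boundary treatment.

    Ah0    : matrix approximating A on homogeneous boundary data
    AhQh_g : AhQh_g(t) -> boundary contribution A_h Q_h g(t)
    quad_f : quad_f(t0, dt, v) -> approximation of int_0^dt f(t0 + tau, v) dtau
    """
    w1, w2 = u1, u2
    sa = sb = 0.0                       # running sums of a_i and b_i
    for aj, bj in zip(a, b):
        v1 = w1 + aj * k * w2           # first subproblem, step a_j k
        v2 = w2
        sa += aj
        bc = AhQh_g(tn + k * sa)        # boundary value frozen as in (18)
        w1 = v1                         # second subproblem, step b_j k
        w2 = v2 + bj * k * (Ah0 @ v1 + bc) + quad_f(tn + k * sb, bj * k, v1)
        sb += bj
    return w1, w2
```

A possible quadrature for quad_f is sketched after Test 2 below.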
Remark 2.
If we compare Formulas (11) and (19), the only difference is the treatment of the boundary values. The two ways of dealing with the boundary are
$$EX(g) = \int_0^{b_j k} g\Big(t_n + k \sum_{i=1}^{j-1} b_i + \tau\Big) d\tau, \qquad B(g) = b_j k \, g\Big(t_n + k \sum_{i=1}^{j} a_i\Big),$$
for the standard method of lines and for the one that we propose, respectively.
Obviously, the second option is much simpler, since the first one may even require evaluating the integral numerically. We see in the numerical experiments of the next section that the second option also yields much more accurate results.
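As a small illustration of this remark (again a sketch, with hypothetical names EX_g and B_g), the two treatments differ only in whether g is integrated over the stage or frozen at a single time:

```python
from scipy.integrate import quad

def EX_g(g, t_stage, bjk):
    """Integral of g over the stage of length b_j k starting at t_stage."""
    val, _ = quad(lambda tau: g(t_stage + tau), 0.0, bjk)
    return val

def B_g(g, t_frozen, bjk):
    """Proposed choice: g frozen at t_frozen = t_n + k (a_1 + ... + a_j)."""
    return bjk * g(t_frozen)
```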
We prove in Section 6 and Section 7 that, if the full discretization described in this section is used to approximate Problem (12), then the method is convergent, and the optimal order p of the splitting method is achieved.

5. Numerical Experiments

For the numerical experiments in this section, we consider splitting integrators of m stages, with coefficients $a_j, b_j$, $j = 1, \dots, m$, and $b_m = 0$, which are particular cases of the symmetric ones [3]. More specifically, we use the Strang method, with order $p = 2$ and coefficients $a_1 = a_2 = 0.5$, $b_1 = 1$, and two other methods that are particular cases of symmetric splitting methods, whose coefficients are given in the following way. Let $l \in \mathbb{N}$ be an even number; then, we consider a method with $m = l + 1$ stages satisfying $a_{l/2+1} = 1 - 2(a_1 + \cdots + a_{l/2})$, $b_{l/2} = 1/2 - (b_1 + \cdots + b_{l/2-1})$, $a_{l+2-i} = a_i$ and $b_{l+1-i} = b_i$, for $i = 1, \dots, l/2$. (Note that our parameter l is called m in [3].)
For l = 6 , we consider method Ψ S 4 with order p = 4 obtained from
a 1 = 0.0792036964311957 , b 1 = 0.209515106613362 , a 2 = 0.353172906049774 , b 2 = 0.143851773179818 , a 3 = 0.0420650803577195 ,
and, for l = 10 , method Ψ S 6 with order p = 6 , given by coefficients
a 1 = 0.0502627644003922 , b 1 = 0.148816447901042 , a 2 = 0.413514300428344 , b 2 = 0.132385865767784 , a 3 = 0.0450798897943977 , b 3 = 0.067307604692185 , a 4 = 0.188054853819569 , b 4 = 0.432666402578175 , a 5 = 0.541960678450780 .
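As a sketch (not from the article; the helper name complete_symmetric is hypothetical), the full coefficient vectors can be recovered from the tabulated values through the symmetry relations stated above, and the consistency conditions $\sum_j a_j = \sum_j b_j = 1$ can be checked numerically:

```python
import numpy as np

def complete_symmetric(a_head, b_head, l):
    """Build the m = l+1 coefficients from a_1..a_{l/2} and b_1..b_{l/2-1}."""
    a, b = np.zeros(l + 1), np.zeros(l + 1)
    a[:l // 2] = a_head
    b[:l // 2 - 1] = b_head
    a[l // 2] = 1.0 - 2.0 * np.sum(a_head)        # a_{l/2+1}
    b[l // 2 - 1] = 0.5 - np.sum(b_head)          # b_{l/2}
    for i in range(l // 2):                       # palindromic completion
        a[l - i] = a[i]                           # a_{l+2-i} = a_i
        b[l - 1 - i] = b[i]                       # b_{l+1-i} = b_i
    b[l] = 0.0                                    # b_m = 0
    return a, b

a, b = complete_symmetric([0.0792036964311957, 0.353172906049774, 0.0420650803577195],
                          [0.209515106613362, 0.143851773179818], l=6)
print(np.sum(a), np.sum(b))   # both close to 1
```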

5.1. Numerical Experiment: Test 1

We consider test problem
$$u_{tt}(x,t) = u_{xx}(x,t) - \sin(u) + e^{\mu t}\big(\mu^2 (x^2+1) - 2\big) + \sin\big(e^{\mu t}(x^2+1)\big), \qquad u(x,0) = x^2+1, \quad u_t(x,0) = \mu (x^2+1), \qquad u(0,t) = e^{\mu t}, \quad u(1,t) = 2 e^{\mu t},$$
where $x \in [0,1]$, $t \in [0,T]$, of which the exact solution is $u(x,t) = e^{\mu t}(x^2+1)$.
This problem can be written in the abstract format (2) by taking $X = L^2(0,1)$ and $D(A) = H^2(0,1)$, where A is the second-order derivative with respect to the spatial variable, B is the Dirichlet trace operator, and $Y = \mathbb{R}^2$. Then, $D(A_0) = H^2(0,1) \cap H_0^1(0,1)$ and the operator $A_0 = A|_{D(A_0)}$ is self-adjoint and negative definite.
We consider the grid $x_j = j h$, $0 \le j \le J+1$, of the interval $[0,1]$, with $h = 1/(J+1)$. We look for approximations $u_{j,n} \approx u(x_j, t_n)$, $0 \le j \le J+1$, $0 \le n \le N$. We take $X_h = \mathbb{R}^{J+2}$, and elements $u_h \in X_h$ are denoted by $u_h = [u_0, u_1, \dots, u_J, u_{J+1}]^T$. In this way, $X_{h,0} = \{ u_h \in X_h : u_0 = u_{J+1} = 0 \}$ but, for the sake of simplicity, we use $X_{h,0} = \mathbb{R}^J$ and denote its elements by $u_h = [u_1, \dots, u_J]^T$. Moreover, $X_{h,b} = \{ u_h \in X_h : u_1 = \cdots = u_J = 0 \}$.
The operator $P_h : D(A) \to X_{h,0}$ is given by $P_h u = [u(x_1), \dots, u(x_J)]^T$, and the operator $Q_h : Y \to X_{h,b}$ is given by $Q_h(u_0, u_{J+1}) = (u_0, 0, \dots, 0, u_{J+1})$. The operator $A_h$ arises from the approximation of the second-order derivative in space by central finite differences, that is,
$$A_h = \frac{1}{h^2} \begin{bmatrix} 1 & -2 & 1 & & & \\ & 1 & -2 & 1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 1 & -2 & 1 \end{bmatrix}.$$
Now, by using the previous notation, we can write
$$A_{h,0} = \frac{1}{h^2} \begin{bmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{bmatrix},$$
which is a symmetric and negative definite operator on $X_{h,0}$.
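A minimal sketch (hypothetical helper assemble_1d, not from the article) of how the matrices above and the boundary contribution $A_h Q_h g(t)$, which only affects the first and last interior nodes, might be assembled with NumPy:

```python
import numpy as np

def assemble_1d(J):
    """Return h, A_{h,0} and a helper for the boundary term of Test 1."""
    h = 1.0 / (J + 1)
    Ah0 = (np.diag(-2.0 * np.ones(J))
           + np.diag(np.ones(J - 1), 1)
           + np.diag(np.ones(J - 1), -1)) / h**2

    def AhQh_g(g_left, g_right):
        v = np.zeros(J)
        v[0], v[-1] = g_left / h**2, g_right / h**2   # contributions of u(0,t), u(1,t)
        return v

    return h, Ah0, AhQh_g
```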
Hypothesis (H3) can be verified in the standard way by using Taylor series for the local truncation error, with ε h = O ( h 2 ) and Z = C 2 ( 0 , 1 ) . Taking into account that the exact solution of (20) is a second-order polynomial in variable x, there is no spatial error, and we can focus on the error due to time discretization.
In this problem, for $f(t,u) = -\sin(u) + e^{\mu t}\big(\mu^2 (x^2+1) - 2\big) + \sin\big(e^{\mu t}(x^2+1)\big)$, we exactly compute $\int_0^k f\big(t_n + \tau, \tilde z_{1,h}(\tfrac{k}{2})\big)\, d\tau$,
$$EX(f) = -k \sin\big(\tilde z_{1,h}(\tfrac{k}{2})\big) + \big(e^{\mu (t_n+k)} - e^{\mu t_n}\big)\big(\mu^2 (x_j^2+1) - 2\big)/\mu + \Big(\operatorname{Si}\big(e^{\mu (t_n+k)} (x_j^2+1)\big) - \operatorname{Si}\big(e^{\mu t_n} (x_j^2+1)\big)\Big)/\mu,$$
where $1 \le j \le J$ and $\operatorname{Si}$ denotes the sine integral function.
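Under the sign conventions written above (a reconstruction, so this is only a sketch), the formula for EX(f) could be evaluated with SciPy's sine integral, where scipy.special.sici(x) returns the pair (Si(x), Ci(x)):

```python
import numpy as np
from scipy.special import sici

def EX_f(tn, k, mu, x, z):
    """Exact integral of f over [t_n, t_n + k] for Test 1, with the state frozen at z."""
    e0, e1 = np.exp(mu * tn), np.exp(mu * (tn + k))
    si0, _ = sici(e0 * (x**2 + 1))
    si1, _ = sici(e1 * (x**2 + 1))
    return (-k * np.sin(z)
            + (e1 - e0) * (mu**2 * (x**2 + 1) - 2) / mu
            + (si1 - si0) / mu)
```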
For the three symmetric-splitting methods, we compare the errors in the energy norm for the choice of boundary B ( g ) and for the E X ( g ) option in Remark 2, for values 1, 3, and 5 of μ . As Figure 1 shows, in all cases, option B ( g ) (solid line) obtained smaller errors than E X ( g ) did (dashed line). Moreover, the difference was more noticeable for Ψ S 4 and Ψ S 6 . Dependence on the size of boundary function g was also observed; for example, the difference between errors of B ( g ) and E X ( g ) was larger for μ = 1 than that for values μ = 3 and μ = 5 , where the values taken by g were smaller.
In addition, as shown in Figure 1 the slopes of the lines were 2 (Strang), 4 ( Ψ S 4 ) and 6 ( Ψ S 6 ), which coincides with the expected optimal order of the three methods.

5.2. Numerical Experiment: Test 2

We consider test problem
$$u_{tt}(x,t) = u_{xx}(x,t) - \sin(u) + e^{-t^2}\big((4t^2 - 2)(x^2+1) - 2\big) + \sin\big(e^{-t^2}(x^2+1)\big), \qquad u(x,0) = x^2+1, \quad u_t(x,0) = 0, \qquad u(0,t) = e^{-t^2}, \quad u(1,t) = 2 e^{-t^2},$$
where $x \in [0,1]$, $t \in [0,T]$, of which the exact solution is $u(x,t) = e^{-t^2}(x^2+1)$. In this problem, a primitive of the function $f(t,u) = -\sin(u) + e^{-t^2}\big((4t^2-2)(x^2+1) - 2\big) + \sin\big(e^{-t^2}(x^2+1)\big)$ cannot be expressed exactly in terms of elementary functions, so we use a quadrature formula of appropriate order. In the calculations to obtain the data in Figure 2, we used the 3-point Gaussian quadrature of order 6, denoted by $G_3(g)$. Errors are compared for Strang's method (black), $\Psi_{S4}$ (red), and $\Psi_{S6}$ (blue), and for the options $B(g)$ (solid line) and $G_3(g)$ (dashed line). The slopes of the lines indicate order 2 for Strang's method, order 4 for $\Psi_{S4}$, and order 6 for $\Psi_{S6}$. Errors are always smaller for option $B(g)$ than for option $G_3(g)$. Moreover, this difference is more pronounced as the order of the method increases.
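The 3-point Gaussian rule mentioned above can be sketched as follows (hypothetical helper gauss3); it returns a callable with the signature of the quad_f argument used in the sketch of Section 4:

```python
import numpy as np

_XG = 0.5 + np.array([-np.sqrt(0.6), 0.0, np.sqrt(0.6)]) / 2.0   # Gauss nodes on [0, 1]
_WG = np.array([5.0, 8.0, 5.0]) / 18.0                            # weights, summing to 1

def gauss3(f):
    """Return quad_f(t0, dt, v) ~ int_0^dt f(t0 + tau, v) dtau (order 6)."""
    def quad_f(t0, dt, v):
        return dt * sum(w * f(t0 + x * dt, v) for x, w in zip(_XG, _WG))
    return quad_f
```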

5.3. Numerical Experiment: Test 3

Now, we study the behavior of the error of both methods when the spatial discretization is refined. The source term of the semidiscrete Problem (5) grows when $h \to 0$ due to the boundary. However, it is expected that this growth has no influence when it is treated as part of the discretization of the operator A at the boundary, as in the method we propose in Section 4.
For this, we consider test problem
$$u_{tt}(x,t) = u_{xx}(x,t) - \sin(u) + \sin(e^{t+x}), \qquad u(x,0) = e^x, \quad u_t(x,0) = e^x, \qquad u(0,t) = e^t, \quad u(1,t) = e^{t+1},$$
where $x \in [0,1]$, $t \in [0,T]$, of which the exact solution is $u(x,t) = e^{t+x}$.
In this experiment, the spatial error dominates, and we expect to observe order 2 of the spatial discretization. Figure 3 shows the errors for the splitting using $B(g)$ (continuous line) and $EX(g)$ (dashed line), in red for $\Psi_{S4}$ and in blue for $\Psi_{S6}$. The values $h = k = 1/N$ were used for $N = 25, 50, 100, 200$. In this way, we remain inside the stability interval of both methods. For the two methods with $B(g)$, order 2 of the spatial discretization is observed; when using the $EX(g)$ option, the errors are larger, and order 2 is lost when h decreases.

5.4. Numerical Experiment: Test 4

Although the theoretical results were proved for the case where f is Lipschitz-continuous, we see in a couple of examples that the technique also works when f is only locally Lipschitz-continuous. We now consider the test problem
$$u_{tt}(x,t) = u_{xx}(x,t) + u^2(x,t) + e^{-t}(x^2 - 1) - e^{-2t}(x^2+1)^2, \qquad u(x,0) = x^2+1, \quad u_t(x,0) = -(x^2+1), \qquad u(0,t) = e^{-t}, \quad u(1,t) = 2 e^{-t},$$
where $x \in [0,1]$, $t \in [0,T]$, of which the exact solution is $u(x,t) = e^{-t}(x^2+1)$.
Table 1 shows the time errors for the three symmetric-splitting methods with $h = 1/100$ for final time $T = 1$. The evolution of $\log_2(\mathrm{error}(k)/\mathrm{error}(k/2))$ is displayed in Table 2. Orders 2, 4, and 6 are observed for Strang's method, $\Psi_{S4}$, and $\Psi_{S6}$, respectively. The slight loss of order in the lower-right corner is due to the influence of rounding errors.

5.5. Numerical Experiment: Test 5

Lastly, we consider an example in two spatial dimensions, again in the locally Lipschitz-continuous case. The test problem is
$$u_{tt}(x,y,t) = u_{xx}(x,y,t) + u_{yy}(x,y,t) + u^2(x,y,t) + e^{-t}(x^2 + y^2 - 3) - e^{-2t}(x^2 + y^2 + 1)^2, \qquad u(x,y,0) = x^2 + y^2 + 1, \quad u_t(x,y,0) = -(x^2 + y^2 + 1), \qquad u(0,y,t) = e^{-t}(y^2+1), \quad u(1,y,t) = e^{-t}(y^2+2), \quad u(x,0,t) = e^{-t}(x^2+1), \quad u(x,1,t) = e^{-t}(x^2+2),$$
for values $x \in [0,1]$, $y \in [0,1]$, $t \in [0,T]$, of which the exact solution is $u(x,y,t) = e^{-t}(x^2 + y^2 + 1)$.
This problem can be written in the abstract format (2). For this, we denote $\Omega = (0,1) \times (0,1)$ and let $\Gamma$ be the boundary of $\Omega$. We take $X = L^2(\Omega)$ and $D(A) = H^2(\Omega)$; A is the Laplacian operator in the spatial variables x and y, B is the Dirichlet trace operator on $\Gamma$, and $Y = H^{1/2}(\Gamma)$. Then, $D(A_0) = H^2(\Omega) \cap H_0^1(\Omega)$ and the operator $A_0 = A|_{D(A_0)}$ is self-adjoint and negative definite.
We consider the grid $(x_j, y_l) = (j h, l h)$, $0 \le j, l \le J+1$, of $\overline{\Omega}$. We look for approximations $u_{j,l,n} \approx u(x_j, y_l, t_n)$, $0 \le j, l \le J+1$, $0 \le n \le N$. We take $X_h = \mathbb{R}^{(J+2) \times (J+2)}$, and elements $u_h \in X_h$ are denoted by $u_h = [u_{0,1}, u_{0,2}, \dots, u_{J+1,J}, u_{J+1,J+1}]^T$. In this way, $X_{h,0} = \{ u_h \in X_h : u_{0,l} = u_{J+1,l} = u_{j,0} = u_{j,J+1} = 0 \}$; however, for the sake of simplicity, we use $X_{h,0} = \mathbb{R}^{J \times J}$, and its elements are denoted by $u_h = [u_{1,1}, u_{1,2}, \dots, u_{J,J-1}, u_{J,J}]^T$. The subspace $X_{h,b}$ is defined similarly.
The operator $P_h : D(A) \to X_{h,0}$ is given by
$$P_h u = [u(x_1, y_1), u(x_1, y_2), \dots, u(x_J, y_{J-1}), u(x_J, y_J)]^T,$$
and $A_h$ arises from the approximation of the Laplacian operator by using central finite differences in each spatial direction, that is, considering the second-order spatial discretization
$$A_{h,0} = \frac{1}{h^2} \begin{bmatrix} B_J & I_J & & & \\ I_J & B_J & I_J & & \\ & \ddots & \ddots & \ddots & \\ & & I_J & B_J & I_J \\ & & & I_J & B_J \end{bmatrix},$$
where $I_J$ is the identity matrix, and
$$B_J = \begin{bmatrix} -4 & 1 & & & \\ 1 & -4 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -4 & 1 \\ & & & 1 & -4 \end{bmatrix}.$$
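As a sketch (hypothetical helper laplacian_2d), the block matrix above can be assembled compactly with Kronecker products, which reproduces the diagonal blocks $B_J = \mathrm{tridiag}(1, -4, 1)$ and the off-diagonal identity blocks:

```python
import numpy as np

def laplacian_2d(J):
    """Five-point Laplacian (24) on the J x J interior grid, lexicographic ordering."""
    h = 1.0 / (J + 1)
    T = (np.diag(-2.0 * np.ones(J))
         + np.diag(np.ones(J - 1), 1)
         + np.diag(np.ones(J - 1), -1))
    I = np.eye(J)
    return (np.kron(I, T) + np.kron(T, I)) / h**2
```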
Taking into account that the exact solution is a second-order polynomial in variables x , y , there is no spatial error; then, we can again focus on the error due to time discretization.
Table 3 shows the time errors for the three symmetric-splitting methods, with h = 1 / 100 and for final time T = 1 . The missing value for the Strang method and k = 1 / 100 was due to the instability of the numerical solution in this case; see Section 7.1. The evolution of log 2 ( error ( k ) / error ( k / 2 ) ) is displayed in Table 4. Orders 2, 4, and 6 are shown for Strang’s method, Ψ S 4 and Ψ S 6 , respectively.

6. Consistency Correctly Incorporating Boundary Values

Here, we deduce consistency in the energy norm of the implementation of a splitting method with the boundary values that we chose in Section 4.
As a first step in the proof of consistency, we introduce ordinary differential system
$$\begin{bmatrix} u_{1,h}' \\ u_{2,h}' \\ u_{3,h}' \end{bmatrix} = \begin{bmatrix} 0 & I_h & 0 \\ A_{h,0} & 0 & A_h \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} u_{1,h} \\ u_{2,h} \\ u_{3,h} \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, u_{1,h}) \\ Q_h g'(t) \end{bmatrix}, \qquad \begin{bmatrix} u_{1,h}(0) \\ u_{2,h}(0) \\ u_{3,h}(0) \end{bmatrix} = \begin{bmatrix} P_h u_0 \\ P_h v_0 \\ Q_h g(0) \end{bmatrix}.$$
We have
$$u_{3,h}'(t) = Q_h g'(t), \qquad u_{3,h}(0) = Q_h g(0),$$
and, therefore,
$$u_{3,h}(t) = Q_h g(t).$$
Then, for the first two components of (25), we deduce that
$$u_{1,h}'(t) = u_{2,h}(t), \qquad u_{2,h}'(t) = A_{h,0} u_{1,h}(t) + A_h Q_h g(t) + f(t, u_{1,h}(t)),$$
which is the same problem as (7) (and as (12) with the notation of Section 4). We also deduce that
$$u_{1,h}(t) = u_h(t), \qquad u_{2,h}(t) = u_h'(t),$$
where $u_h(t)$ is the solution of (5).
We split (25) into two intermediate problems that are similar to the ones used in Section 4, and we apply to it the same splitting method. The solution of (25) is then approximated with the order p of the splitting method. This is particularly true for the first two components, which match those of the solution of (12). Therefore, to prove consistency, it suffices to check that the approximations obtained for the first two components are the same as those described in Section 4 with the choice of boundary values made in (18).
We choose the following split of Problem (25). The first intermediate problem is
$$\begin{bmatrix} v_{1,h}' \\ v_{2,h}' \\ v_{3,h}' \end{bmatrix} = \begin{bmatrix} 0 & I_h & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_{1,h} \\ v_{2,h} \\ v_{3,h} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ Q_h g'(t) \end{bmatrix},$$
of which the exact flow is denoted as $v_h(t) = \Phi^{[1]}_{h,t} v_h(0)$, and the second is
$$\begin{bmatrix} w_{1,h}' \\ w_{2,h}' \\ w_{3,h}' \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ A_{h,0} & 0 & A_h \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} w_{1,h} \\ w_{2,h} \\ w_{3,h} \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, w_{1,h}) \\ 0 \end{bmatrix},$$
of which the exact flow is denoted as $w_h(t) = \Phi^{[2]}_{h,t} w_h(0)$.
Assuming that the approximation $u_{h,n} = [u_{1,h,n}, u_{2,h,n}, u_{3,h,n}]^T \approx [u_{1,h}(t_n), u_{2,h}(t_n), u_{3,h}(t_n)]^T$ has already been calculated, as in the previous sections we apply a general splitting of order p,
$$w_{0,h,n} = u_{h,n}, \qquad w_{j,h,n} = \Phi^{[2]}_{h, b_j k} \circ \Phi^{[1]}_{h, a_j k} \, w_{j-1,h,n}, \quad j = 1, \dots, m, \qquad u_{h,n+1} = w_{m,h,n},$$
where w j , h , n = [ w 1 , j , h , n , w 2 , j , h , n , w 3 , j , h , n ] T .
The performance of a time step k > 0 is as follows. For each j = 1 , , m , the first problem to be solved is
$$v_{1,j,h,n}'(s) = v_{2,j,h,n}(s), \quad s \in [0, a_j k], \qquad v_{2,j,h,n}'(s) = 0, \qquad v_{3,j,h,n}'(s) = Q_h g'\Big(t_n + k \sum_{l=1}^{j-1} a_l + s\Big), \qquad v_{1,j,h,n}(0) = w_{1,j-1,h,n}(b_{j-1} k), \quad v_{2,j,h,n}(0) = w_{2,j-1,h,n}(b_{j-1} k), \quad v_{3,j,h,n}(0) = w_{3,j-1,h,n}(b_{j-1} k) = Q_h g\Big(t_n + k \sum_{l=1}^{j-1} a_l\Big),$$
whose solution at $s = a_j k$ is
$$v_{1,j,h,n}(a_j k) = w_{1,j-1,h,n}(b_{j-1} k) + a_j k \, w_{2,j-1,h,n}(b_{j-1} k), \qquad v_{2,j,h,n}(a_j k) = w_{2,j-1,h,n}(b_{j-1} k), \qquad v_{3,j,h,n}(a_j k) = Q_h g\Big(t_n + k \sum_{l=1}^{j} a_l\Big).$$
Since the third component of (31) provides the boundary value (18), the first and second components of (31) are the same as (17). Then, the second problem is
$$w_{1,j,h,n}'(s) = 0, \quad s \in [0, b_j k], \qquad w_{2,j,h,n}'(s) = A_{h,0} w_{1,j,h,n}(s) + A_h w_{3,j,h,n}(s) + f\Big(t_n + k \sum_{l=1}^{j-1} b_l + s, w_{1,j,h,n}(s)\Big), \qquad w_{3,j,h,n}'(s) = 0, \qquad w_{1,j,h,n}(0) = v_{1,j,h,n}(a_j k), \quad w_{2,j,h,n}(0) = v_{2,j,h,n}(a_j k), \quad w_{3,j,h,n}(0) = v_{3,j,h,n}(a_j k),$$
whose solution at $s = b_j k$ is
$$w_{1,j,h,n}(b_j k) = v_{1,j,h,n}(a_j k), \qquad w_{2,j,h,n}(b_j k) = v_{2,j,h,n}(a_j k) + b_j k \, A_{h,0} v_{1,j,h,n}(a_j k) + b_j k \, A_h Q_h g\Big(t_n + k \sum_{l=1}^{j} a_l\Big) + \int_0^{b_j k} f\Big(t_n + k \sum_{l=1}^{j-1} b_l + \tau, v_{1,j,h,n}(a_j k)\Big) d\tau, \qquad w_{3,j,h,n}(b_j k) = Q_h g\Big(t_n + k \sum_{l=1}^{j} a_l\Big).$$
The first two components of (32) are the same as those of (19).
Remark 3.
Although the splitting approximation (30) provides the same approximation as (16) through its first two components, it is not convenient to use it in practice. It is more useful to use the implementation of Section 4, which can be carried out with minimal modifications to the standard method of lines.
Now, we consider $u_h(t) = [u_{1,h}(t), u_{2,h}(t), u_{3,h}(t)]^T$, the solution of (25). We can easily deduce that
$$\tilde u_h(t) = [\tilde u_{1,h}(t), \tilde u_{2,h}(t), \tilde u_{3,h}(t)]^T = [S_{h,0} u_{1,h}(t), u_{2,h}(t), u_{3,h}(t)]^T$$
is the solution of the ordinary differential system
$$\begin{bmatrix} \tilde u_{1,h}' \\ \tilde u_{2,h}' \\ \tilde u_{3,h}' \end{bmatrix} = \begin{bmatrix} 0 & S_{h,0} & 0 \\ -S_{h,0} & 0 & A_h \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde u_{1,h} \\ \tilde u_{2,h} \\ \tilde u_{3,h} \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, S_{h,0}^{-1} \tilde u_{1,h}) \\ Q_h g'(t) \end{bmatrix}, \qquad \begin{bmatrix} \tilde u_{1,h}(0) \\ \tilde u_{2,h}(0) \\ \tilde u_{3,h}(0) \end{bmatrix} = \begin{bmatrix} S_{h,0} P_h u_0 \\ P_h v_0 \\ Q_h g(0) \end{bmatrix}.$$
We deduce that $\tilde u_{3,h}(t) = Q_h g(t)$, and that
$$\begin{bmatrix} \tilde u_{1,h}'(t) \\ \tilde u_{2,h}'(t) \end{bmatrix} = \begin{bmatrix} 0 & S_{h,0} \\ -S_{h,0} & 0 \end{bmatrix} \begin{bmatrix} \tilde u_{1,h}(t) \\ \tilde u_{2,h}(t) \end{bmatrix} + \begin{bmatrix} 0 \\ A_h Q_h g(t) + f(t, S_{h,0}^{-1} \tilde u_{1,h}(t)) \end{bmatrix}.$$
Therefore, the solution of Problem (33) is the appropriate one to calculate the energy norm of the solution of (25). Now, we see that approximating the solution of Problem (33) by means of a splitting method is equivalent to applying the same change of variables to (30), as is stated in (37).
The first intermediate problem is
$$\begin{bmatrix} \tilde v_{1,h}' \\ \tilde v_{2,h}' \\ \tilde v_{3,h}' \end{bmatrix} = \begin{bmatrix} 0 & S_{h,0} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde v_{1,h} \\ \tilde v_{2,h} \\ \tilde v_{3,h} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ Q_h g'(t) \end{bmatrix},$$
of which the exact flow is denoted as $\tilde v_h(t) = \tilde\Phi^{[1]}_t \tilde v_h(0)$, and the second intermediate problem is
$$\begin{bmatrix} \tilde w_{1,h}' \\ \tilde w_{2,h}' \\ \tilde w_{3,h}' \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ -S_{h,0} & 0 & A_h \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde w_{1,h} \\ \tilde w_{2,h} \\ \tilde w_{3,h} \end{bmatrix} + \begin{bmatrix} 0 \\ f(t, S_{h,0}^{-1} \tilde w_{1,h}) \\ 0 \end{bmatrix},$$
of which the exact flow is denoted as $\tilde w_h(t) = \tilde\Phi^{[2]}_t \tilde w_h(0)$.
As in the previous sections, we use a general splitting method with m stages and order p, with coefficients $a_j, b_j$, $j = 1, \dots, m$. Therefore,
$$\tilde w_{0,h,n} = \tilde u_{h,n}, \qquad \tilde w_{j,h,n} = \tilde\Phi^{[2]}_{h, b_j k} \circ \tilde\Phi^{[1]}_{h, a_j k} \, \tilde w_{j-1,h,n}, \quad j = 1, \dots, m, \qquad \tilde u_{h,n+1} = \tilde w_{m,h,n},$$
where w ˜ j , h , n = [ w ˜ 1 , j , h , n , w ˜ 2 , j , h , n , w ˜ 3 , j , h , n ] T .
We now prove that
$$\tilde u_{h,n} = [\tilde u_{1,h,n}, \tilde u_{2,h,n}, \tilde u_{3,h,n}]^T = [S_{h,0} u_{1,h,n}, u_{2,h,n}, u_{3,h,n}]^T,$$
where u h , n = [ u 1 , h , n , u 2 , h , n , u 3 , h , n ] T is the solution of the splitting given by (30).
For j = 1 , , m , the following problems are solved.
$$\tilde v_{1,j,h,n}'(s) = S_{h,0} \tilde v_{2,j,h,n}(s), \quad s \in [0, a_j k], \qquad \tilde v_{2,j,h,n}'(s) = 0, \qquad \tilde v_{3,j,h,n}'(s) = Q_h g'\Big(t_n + k \sum_{l=1}^{j-1} a_l + s\Big), \qquad \tilde v_{1,j,h,n}(0) = \tilde w_{1,j-1,h,n}(b_{j-1} k) = S_{h,0} w_{1,j-1,h,n}(b_{j-1} k), \quad \tilde v_{2,j,h,n}(0) = \tilde w_{2,j-1,h,n}(b_{j-1} k) = w_{2,j-1,h,n}(b_{j-1} k), \quad \tilde v_{3,j,h,n}(0) = \tilde w_{3,j-1,h,n}(b_{j-1} k) = Q_h g\Big(t_n + k \sum_{l=1}^{j-1} a_l\Big) = w_{3,j-1,h,n}(b_{j-1} k),$$
whose solution at $s = a_j k$ is
$$\tilde v_{1,j,h,n}(a_j k) = S_{h,0} w_{1,j-1,h,n}(b_{j-1} k) + a_j k \, S_{h,0} w_{2,j-1,h,n}(b_{j-1} k) = S_{h,0} v_{1,j,h,n}(a_j k), \qquad \tilde v_{2,j,h,n}(a_j k) = w_{2,j-1,h,n}(b_{j-1} k) = v_{2,j,h,n}(a_j k), \qquad \tilde v_{3,j,h,n}(a_j k) = Q_h g\Big(t_n + k \sum_{l=1}^{j} a_l\Big) = v_{3,j,h,n}(a_j k).$$
Then, in $s \in [0, b_j k]$, we solve the system
$$\tilde w_{1,j,h,n}'(s) = 0, \qquad \tilde w_{2,j,h,n}'(s) = -S_{h,0} \tilde w_{1,j,h,n}(s) + A_h \tilde w_{3,j,h,n}(s) + f\Big(t_n + k \sum_{l=1}^{j-1} b_l + s, S_{h,0}^{-1} \tilde w_{1,j,h,n}(s)\Big), \qquad \tilde w_{3,j,h,n}'(s) = 0, \qquad \tilde w_{1,j,h,n}(0) = \tilde v_{1,j,h,n}(a_j k) = S_{h,0} v_{1,j,h,n}(a_j k), \quad \tilde w_{2,j,h,n}(0) = \tilde v_{2,j,h,n}(a_j k) = v_{2,j,h,n}(a_j k), \quad \tilde w_{3,j,h,n}(0) = \tilde v_{3,j,h,n}(a_j k) = Q_h g\Big(t_n + k \sum_{l=1}^{j} a_l\Big) = v_{3,j,h,n}(a_j k),$$
whose solution at $s = b_j k$ is
$$\begin{aligned} \tilde w_{1,j,h,n}(b_j k) &= \tilde v_{1,j,h,n}(a_j k) = S_{h,0} v_{1,j,h,n}(a_j k) = S_{h,0} w_{1,j,h,n}(b_j k), \\ \tilde w_{2,j,h,n}(b_j k) &= \tilde v_{2,j,h,n}(a_j k) + b_j k \Big( -S_{h,0} \tilde v_{1,j,h,n}(a_j k) + A_h Q_h g\big(t_n + k \textstyle\sum_{l=1}^{j} a_l\big) \Big) + \int_0^{b_j k} f\Big(t_n + k \sum_{l=1}^{j-1} b_l + \tau, S_{h,0}^{-1} \tilde v_{1,j,h,n}(a_j k)\Big) d\tau \\ &= v_{2,j,h,n}(a_j k) + b_j k \Big( A_{h,0} v_{1,j,h,n}(a_j k) + A_h Q_h g\big(t_n + k \textstyle\sum_{l=1}^{j} a_l\big) \Big) + \int_0^{b_j k} f\Big(t_n + k \sum_{l=1}^{j-1} b_l + \tau, v_{1,j,h,n}(a_j k)\Big) d\tau = w_{2,j,h,n}(b_j k), \\ \tilde w_{3,j,h,n}(b_j k) &= Q_h g\Big(t_n + k \sum_{l=1}^{j} a_l\Big) = w_{3,j,h,n}(b_j k), \end{aligned}$$
and we obtain (37).
Theorem 2.
We assume that the time discretization of (12) is obtained by means of a splitting method of order p, applied as described in Formulas (16)–(17), with the choice of intermediate boundary values (18). Let u h ( t n ) = [ u 1 , h ( t n ) , u 2 , h ( t n ) ] T be the value at t n of the solution of (13), and let u ¯ h , n = [ u ¯ 1 , h , n , u ¯ 2 , h , n ] T be its approximation obtained with the splitting method (16) by taking a step of size k starting from the exact value u h ( t n 1 ) = [ u 1 , h ( t n 1 ) , u 2 , h ( t n 1 ) ] T .
Then, local error ρ h , n = u h ( t n ) u ¯ h , n satisfies
$$\|\rho_{h,n}\|_{E_h} = \|u_h(t_n) - \bar u_{h,n}\|_{E_h} = \Big( \|S_{h,0}(u_{1,h}(t_n) - \bar u_{1,h,n})\|_h^2 + \|u_{2,h}(t_n) - \bar u_{2,h,n}\|_h^2 \Big)^{1/2} = O(k^{p+1}).$$
Proof. 
$$\begin{aligned} \|\rho_{h,n}\|_{E_h} &= \|u_h(t_n) - \bar u_{h,n}\|_{E_h} = \Big( \|S_{h,0}(u_{1,h}(t_n) - \bar u_{1,h,n})\|_h^2 + \|u_{2,h}(t_n) - \bar u_{2,h,n}\|_h^2 \Big)^{1/2} \\ &\le \Big( \|S_{h,0}(u_{1,h}(t_n) - \bar u_{1,h,n})\|_h^2 + \|u_{2,h}(t_n) - \bar u_{2,h,n}\|_h^2 + \|u_{3,h}(t_n) - \bar u_{3,h,n}\|_h^2 \Big)^{1/2} \\ &= \Big( \|\tilde u_{1,h}(t_n) - \bar{\tilde u}_{1,h,n}\|_h^2 + \|\tilde u_{2,h}(t_n) - \bar{\tilde u}_{2,h,n}\|_h^2 + \|\tilde u_{3,h}(t_n) - \bar{\tilde u}_{3,h,n}\|_h^2 \Big)^{1/2} = \|\tilde u_h(t_n) - \bar{\tilde u}_{h,n}\|_{E_h} = O(k^{p+1}), \end{aligned}$$
where $\bar{\tilde u}_{h,n} = [\bar{\tilde u}_{1,h,n}, \bar{\tilde u}_{2,h,n}, \bar{\tilde u}_{3,h,n}]^T$ is the approximation obtained at $t = t_n$ with the splitting method (36) by taking a step of size k starting from the exact value $\tilde u_h(t_{n-1}) = [\tilde u_{1,h}(t_{n-1}), \tilde u_{2,h}(t_{n-1}), \tilde u_{3,h}(t_{n-1})]^T$ of the solution of (33). Therefore, the result is a consequence of the fact that p is the order of the splitting method. □

7. Stability and Convergence

7.1. Stability

To achieve convergence in energy norm, we need time discretization to be stable. In our case, it is sufficient to have linear stability. That is, it is enough that time discretization with the splitting method is stable for fully homogeneous linear problem
$$u_h''(t) = A_{h,0} u_h(t), \qquad u_h(0) = P_h u_0, \quad u_h'(0) = P_h v_0,$$
corresponding to the space discretization of (2) with vanishing boundary values and source term.
To test linear stability, we first apply the splitting method to the harmonic oscillator $y'' + \lambda^2 y = 0$, $\lambda > 0$. We denote $[p, q]^T = [\lambda y, y']^T$ and consider the standard splitting
$$\begin{bmatrix} p' \\ q' \end{bmatrix} = \left( \begin{bmatrix} 0 & \lambda \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ -\lambda & 0 \end{bmatrix} \right) \begin{bmatrix} p \\ q \end{bmatrix}.$$
If we now apply a splitting method with time step $k > 0$, we obtain the numerical method
$$\begin{bmatrix} p_{n+1} \\ q_{n+1} \end{bmatrix} = R(\omega) \begin{bmatrix} p_n \\ q_n \end{bmatrix},$$
where $\omega = k \lambda > 0$.
The matrix R, whose elements are polynomials in the variable $\omega$, is called the stability matrix and is given by
$$R(\omega) = \begin{bmatrix} R_{11}(\omega) & R_{12}(\omega) \\ R_{21}(\omega) & R_{22}(\omega) \end{bmatrix},$$
and it can be computed as
$$R(\omega) = \prod_{j=1}^{m} \begin{bmatrix} 1 & 0 \\ -b_j \omega & 1 \end{bmatrix} \begin{bmatrix} 1 & a_j \omega \\ 0 & 1 \end{bmatrix} = \prod_{j=1}^{m} R_j(\omega),$$
where the product of matrices must be calculated in the correct order.
To obtain stability for the harmonic oscillator, the boundedness of the powers of stability matrix (43) is required. For this, the following definition of “stability interval” is very useful (cf. [12,13]).
Definition 1.
The stability interval of a method with stability matrix $R(\omega)$ is $[0, \omega^*)$ if $\omega^*$ is the supremum of the values $\omega \ge 0$ such that, for all $\omega \in [0, \omega^*)$,
$$\rho(R(\omega)) \le 1,$$
and $R(\omega)$ is simple when $\rho(R(\omega)) = 1$, where $\rho(R(\omega))$ denotes the spectral radius of $R(\omega)$.
In this way, we have linear stability for test Problem (42) when k λ < ω * . The larger the value of ω * is, the less restrictive the stability condition is.
Remark 4.
In order to calculate the value $\omega^*$ of the stability interval for the methods used in the numerical experiments, Strang, $\Psi_{S4}$ and $\Psi_{S6}$, we follow the same technique as in [14,15]. Taking into account that $\det(R(\omega)) = 1$ from (43) and that, for the three methods, $R_{11}(\omega) = R_{22}(\omega)$, the eigenvalues of $R(\omega)$ are the solutions of $\lambda^2 - 2 R_{11}(\omega) \lambda + 1 = 0$. Then, to obtain the stability interval, it is enough to find the greatest real value $\omega^*$ such that $R_{11}(\omega)^2 - 1 \le 0$ for all $\omega \in [0, \omega^*]$. The value of $\omega^*$ is 2 for the Strang method, 6.31 for $\Psi_{S4}$, and 3.44 for $\Psi_{S6}$.
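The value of $\omega^*$ can also be checked numerically with a short sketch (hypothetical names; only the criterion $|R_{11}(\omega)| \le 1$ of this remark is tested). With the Strang coefficients it returns a value close to 2, and with the completed coefficient vectors of Section 5 it should approximate the values reported above.

```python
import numpy as np

def stability_matrix(omega, a, b):
    """Stability matrix (43) of the splitting method with coefficients a, b."""
    R = np.eye(2)
    for aj, bj in zip(a, b):
        Rj = np.array([[1.0, 0.0], [-bj * omega, 1.0]]) @ np.array([[1.0, aj * omega], [0.0, 1.0]])
        R = Rj @ R
    return R

def stability_interval(a, b, omega_max=10.0, n=20001):
    """Scan for the first omega where |R_11(omega)| exceeds 1."""
    for omega in np.linspace(0.0, omega_max, n):
        if abs(stability_matrix(omega, a, b)[0, 0]) > 1.0 + 1e-12:
            return omega
    return omega_max

print(stability_interval([0.5, 0.5], [1.0, 0.0]))   # Strang: approximately 2
```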
Regarding Problem (41), linear stability in the energy norm states that the powers of matrix
$$R(k S_{h,0}) = \begin{bmatrix} R_{11}(k S_{h,0}) & R_{12}(k S_{h,0}) \\ R_{21}(k S_{h,0}) & R_{22}(k S_{h,0}) \end{bmatrix},$$
are bounded in the matrix norm induced by the discrete norm in $X_{h,0}$, that is, if $T > 0$ is fixed,
$$\|R^n(k S_{h,0})\|_h \le C,$$
where C is a constant independent of h, n and k when $n k \le T$.
Therefore, we need $k |\lambda_{h,0}| < \omega^*$ for every eigenvalue $\lambda_{h,0}$ of $S_{h,0}$. Taking into account that $S_{h,0}$ is symmetric and positive definite, all of its eigenvalues are positive; therefore, it is enough that
$$k \lambda_{h,0}^* < \omega^*$$
is satisfied, where $\lambda_{h,0}^*$ is the largest eigenvalue of $S_{h,0}$.
Remark 5.
We can now deduce the ratio between parameters k and h that must be satisfied to have stability in the energy norm for the numerical experiments in Section 5.
The eigenvalues of the tridiagonal matrix $\mathrm{tridiag}(1, -2, 1)$, which appears in the matrix (21) used in Tests 1 to 4, are given by $-2 + 2\cos(j\pi/(J+1))$, $j = 1, 2, \dots, J$, and they all belong to the interval $(-4, 0)$ (see, for example, [16,17]). We conclude that the largest eigenvalue of $S_{h,0}$ satisfies $\lambda_{h,0}^* < 2/h$ and, to achieve stability, it suffices that
$$\frac{2k}{h} < \omega^* \iff \frac{k}{h} < \frac{\omega^*}{2}.$$
Similarly, for the two-dimensional problem in Test 5, the eigenvalues of the block tridiagonal matrix appearing in (24) (without the factor $1/h^2$) are included in the interval $(-8, 0)$, which means that, in this case, the largest eigenvalue of $S_{h,0}$ satisfies $\lambda_{h,0}^* < 2\sqrt{2}/h$ and, to achieve stability, it suffices that
$$\frac{2\sqrt{2}\, k}{h} < \omega^* \iff \frac{k}{h} < \frac{\omega^*}{2\sqrt{2}}.$$
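A quick numerical check of the 1D bound above (a sketch, with an arbitrary grid size): the largest eigenvalue of $S_{h,0} = (-A_{h,0})^{1/2}$ indeed stays below $2/h$.

```python
import numpy as np

J = 50
h = 1.0 / (J + 1)
Ah0 = (np.diag(-2.0 * np.ones(J))
       + np.diag(np.ones(J - 1), 1)
       + np.diag(np.ones(J - 1), -1)) / h**2
lam_max = np.sqrt(np.max(-np.linalg.eigvalsh(Ah0)))   # largest eigenvalue of S_{h,0}
print(lam_max, 2.0 / h, lam_max < 2.0 / h)            # lam_max is just below 2/h
```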
Stability conditions (46) and (47) for the three splitting methods considered in the numerical experiments of Section 5 are given in Table 5. The missing value for the Strang method for h = 1 / 100 and k = 1 / 100 in Table 3 is due to the instability of the numerical solution because the stability condition was not fulfilled.

7.2. Convergence

We now study the convergence of the full discretization of Section 4.
Theorem 3.
Assume that the time discretization of (12) is obtained by means of a splitting method of order p, applied as described in Formulas (16)–(17), with the choice of intermediate boundary values (18). Let u h ( t n ) = [ u 1 , h ( t n ) , u 2 , h ( t n ) ] T be the value at t n of solution of (13), and let u h , n = [ u 1 , h , n , u 2 , h , n ] T be its approximation obtained with the splitting method (16). Assume also that (45) and the linear stability condition (44) are satisfied. Then, global error e h , n = u h ( t n ) u h , n satisfies
$$\|e_{h,n}\|_{E_h} = \|u_h(t_n) - u_{h,n}\|_{E_h} = \Big( \|S_{h,0}(u_{1,h}(t_n) - u_{1,h,n})\|_h^2 + \|u_{2,h}(t_n) - u_{2,h,n}\|_h^2 \Big)^{1/2} = O(k^p).$$
Proof. 
Let u ˜ h ( t ) = [ u ˜ 1 , h ( t ) , u ˜ 2 , h ( t ) ] T be the first two components of (33). We showed in Section 6 that u ˜ h ( t ) = [ S h , 0 u 1 , h ( t ) , u 2 , h ( t ) ] T .
On the other hand, let $\tilde u_{h,n} = [\tilde u_{1,h,n}, \tilde u_{2,h,n}]^T$ be the first two components of (36). We also showed in Section 6 that $\tilde u_{h,n} = [S_{h,0} u_{1,h,n}, u_{2,h,n}]^T$. Therefore, we can obtain $\tilde u_{h,n}$ from Section 4, using Equations (17) and (19). For this, we set $\tilde v_{j,h,n} = [S_{h,0} v_{1,j,h,n}, v_{2,j,h,n}]^T$, $\tilde w_{j,h,n} = [S_{h,0} w_{1,j,h,n}, w_{2,j,h,n}]^T$, $j = 1, \dots, m$, and we deduce that
$$\tilde v_{j,h,n} = \begin{bmatrix} \tilde v_{1,j,h,n} \\ \tilde v_{2,j,h,n} \end{bmatrix} = \begin{bmatrix} I_h & a_j k S_{h,0} \\ 0 & I_h \end{bmatrix} \begin{bmatrix} \tilde w_{1,j-1,h,n} \\ \tilde w_{2,j-1,h,n} \end{bmatrix} = M_j(k S_{h,0})\, \tilde w_{j-1,h,n}.$$
Then,
$$\begin{aligned} \tilde w_{j,h,n} = \begin{bmatrix} \tilde w_{1,j,h,n} \\ \tilde w_{2,j,h,n} \end{bmatrix} &= \begin{bmatrix} I_h & 0 \\ -b_j k S_{h,0} & I_h \end{bmatrix} \begin{bmatrix} \tilde v_{1,j,h,n} \\ \tilde v_{2,j,h,n} \end{bmatrix} + \begin{bmatrix} 0 \\ b_j k A_h Q_h g\big(t_n + \sum_{r=1}^{j} a_r k\big) \end{bmatrix} + \begin{bmatrix} 0 \\ \int_0^{b_j k} f\big(t_n + \sum_{r=1}^{j-1} b_r k + \tau, S_{h,0}^{-1} \tilde v_{1,j,h,n}\big)\, d\tau \end{bmatrix} \\ &= \begin{bmatrix} I_h & 0 \\ -b_j k S_{h,0} & I_h \end{bmatrix} \begin{bmatrix} I_h & a_j k S_{h,0} \\ 0 & I_h \end{bmatrix} \begin{bmatrix} \tilde w_{1,j-1,h,n} \\ \tilde w_{2,j-1,h,n} \end{bmatrix} + \begin{bmatrix} 0 \\ b_j k A_h Q_h g\big(t_n + \sum_{r=1}^{j} a_r k\big) \end{bmatrix} + \begin{bmatrix} 0 \\ \int_0^{b_j k} f\big(t_n + \sum_{r=1}^{j-1} b_r k + \tau, S_{h,0}^{-1} \tilde v_{1,j,h,n}\big)\, d\tau \end{bmatrix} \\ &= R_j(k S_{h,0})\, \tilde w_{j-1,h,n} + \begin{bmatrix} 0 \\ b_j k A_h Q_h g\big(t_n + \sum_{r=1}^{j} a_r k\big) \end{bmatrix} + \begin{bmatrix} 0 \\ \int_0^{b_j k} f\big(t_n + \sum_{r=1}^{j-1} b_r k + \tau, S_{h,0}^{-1} \tilde v_{1,j,h,n}\big)\, d\tau \end{bmatrix}. \end{aligned}$$
By using recursive reasoning,
$$\tilde w_{j,h,n} = \prod_{l=1}^{j} R_l(k S_{h,0})\, \tilde w_{0,h,n} + \sum_{s=1}^{j} \prod_{l=s+1}^{j} R_l(k S_{h,0}) \begin{bmatrix} 0 \\ b_s k A_h Q_h g\big(t_n + \sum_{r=1}^{s} a_r k\big) \end{bmatrix} + \sum_{s=1}^{j} \prod_{l=s+1}^{j} R_l(k S_{h,0}) \begin{bmatrix} 0 \\ \int_0^{b_s k} f\big(t_n + \sum_{r=1}^{s-1} b_r k + \tau, S_{h,0}^{-1} \tilde v_{1,s,h,n}\big)\, d\tau \end{bmatrix}.$$
We now define $\bar{\tilde w}_{0,h,n} = \tilde u_h(t_n)$, and
$$\bar{\tilde w}_{j,h,n} = \prod_{l=1}^{j} R_l(k S_{h,0})\, \bar{\tilde w}_{0,h,n} + \sum_{s=1}^{j} \prod_{l=s+1}^{j} R_l(k S_{h,0}) \begin{bmatrix} 0 \\ b_s k A_h Q_h g\big(t_n + \sum_{r=1}^{s} a_r k\big) \end{bmatrix} + \sum_{s=1}^{j} \prod_{l=s+1}^{j} R_l(k S_{h,0}) \begin{bmatrix} 0 \\ \int_0^{b_s k} f\big(t_n + \sum_{r=1}^{s-1} b_r k + \tau, S_{h,0}^{-1} \bar{\tilde v}_{1,s,h,n}\big)\, d\tau \end{bmatrix}$$
and, subtracting,
$$\bar{\tilde w}_{j,h,n} - \tilde w_{j,h,n} = \prod_{l=1}^{j} R_l(k S_{h,0})\, \big(\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\big) + F_{n,j},$$
where
$$\begin{aligned} F_{n,j} &= \sum_{s=1}^{j} \prod_{l=s+1}^{j} R_l(k S_{h,0}) \begin{bmatrix} 0 \\ \int_0^{b_s k} \big[ f\big(t_n + \sum_{r=1}^{s-1} b_r k + \tau, S_{h,0}^{-1} \bar{\tilde v}_{1,s,h,n}\big) - f\big(t_n + \sum_{r=1}^{s-1} b_r k + \tau, S_{h,0}^{-1} \tilde v_{1,s,h,n}\big) \big]\, d\tau \end{bmatrix} \\ &= R_j(k S_{h,0})\, F_{n,j-1} + \begin{bmatrix} 0 \\ \int_0^{b_j k} \big[ f\big(t_n + \sum_{r=1}^{j-1} b_r k + \tau, S_{h,0}^{-1} \bar{\tilde v}_{1,j,h,n}\big) - f\big(t_n + \sum_{r=1}^{j-1} b_r k + \tau, S_{h,0}^{-1} \tilde v_{1,j,h,n}\big) \big]\, d\tau \end{bmatrix}. \end{aligned}$$
To use inductive reasoning, we first consider
$$F_{n,1} = \begin{bmatrix} 0 \\ \int_0^{b_1 k} \big[ f\big(t_n + \tau, S_{h,0}^{-1} \bar{\tilde v}_{1,1,h,n}\big) - f\big(t_n + \tau, S_{h,0}^{-1} \tilde v_{1,1,h,n}\big) \big]\, d\tau \end{bmatrix}$$
and, taking norms,
$$\|F_{n,1}\|_h \le |b_1| \, k \, L \, \|S_{h,0}^{-1}\|_h \, \|M_1(k S_{h,0})\|_h \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h = k \, C_{1,1} \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h,$$
where, if (45) is satisfied and taking into account Hypothesis (H2), the constant $C_{1,1}$ is independent of h, k, and n.
On the other hand,
$$\bar{\tilde w}_{1,h,n} - \tilde w_{1,h,n} = R_1(k S_{h,0})\, \big(\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\big) + F_{n,1}$$
and, taking norms again,
$$\|\bar{\tilde w}_{1,h,n} - \tilde w_{1,h,n}\|_h \le \|R_1(k S_{h,0})\|_h \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h + \|F_{n,1}\|_h \le \big( \|R_1(k S_{h,0})\|_h + k C_{1,1} \big) \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h = C_{1,2} \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h,$$
where the constant $C_{1,2}$ is also independent of k, n, and h because of (45) and Hypothesis (H2).
Now, we use inductive reasoning, assuming that
$$\|F_{n,j-1}\|_h \le k \, C_{j-1,1} \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h,$$
and
$$\|\bar{\tilde w}_{j-1,h,n} - \tilde w_{j-1,h,n}\|_h \le C_{j-1,2} \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h,$$
where the constants $C_{j-1,1}$ and $C_{j-1,2}$ are independent of k, n, and h when (45) is satisfied.
Then,
$$\|F_{n,j}\|_h \le \|R_j(k S_{h,0})\|_h \, \|F_{n,j-1}\|_h + |b_j| \, k \, L \, \|S_{h,0}^{-1}\|_h \, \|M_j(k S_{h,0})\|_h \, \|\bar{\tilde w}_{j-1,h,n} - \tilde w_{j-1,h,n}\|_h \le \big( \|R_j(k S_{h,0})\|_h \, C_{j-1,1} + |b_j| \, L \, \|S_{h,0}^{-1}\|_h \, \|M_j(k S_{h,0})\|_h \, C_{j-1,2} \big) \, k \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h = k \, C_{j,1} \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h$$
and
$$\|\bar{\tilde w}_{j,h,n} - \tilde w_{j,h,n}\|_h \le \Big\| \prod_{l=1}^{j} R_l(k S_{h,0}) \Big\|_h \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h + \|F_{n,j}\|_h \le \Big( \Big\| \prod_{l=1}^{j} R_l(k S_{h,0}) \Big\|_h + k \, C_{j,1} \Big) \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h = C_{j,2} \, \|\bar{\tilde w}_{0,h,n} - \tilde w_{0,h,n}\|_h.$$
Lastly, we prove convergence taking into account that $\tilde w_{0,h,n-1} = \tilde u_{h,n-1}$, $\bar{\tilde w}_{0,h,n-1} = \tilde u_h(t_{n-1})$, $\tilde w_{m,h,n-1} = \tilde u_{h,n}$, and $\bar{\tilde w}_{m,h,n-1} = \bar{\tilde u}_{h,n}$. Moreover, we denote $F_{n,m} = F_n$ and $C_{m,1} = C_1$.
The global error is given by
$$\tilde e_{h,n} = \begin{bmatrix} \tilde e_{1,h,n} \\ \tilde e_{2,h,n} \end{bmatrix} = \begin{bmatrix} \tilde u_{1,h}(t_n) \\ \tilde u_{2,h}(t_n) \end{bmatrix} - \begin{bmatrix} \tilde u_{1,h,n} \\ \tilde u_{2,h,n} \end{bmatrix} = \left( \begin{bmatrix} \tilde u_{1,h}(t_n) \\ \tilde u_{2,h}(t_n) \end{bmatrix} - \begin{bmatrix} \bar{\tilde u}_{1,h,n} \\ \bar{\tilde u}_{2,h,n} \end{bmatrix} \right) + \left( \begin{bmatrix} \bar{\tilde u}_{1,h,n} \\ \bar{\tilde u}_{2,h,n} \end{bmatrix} - \begin{bmatrix} \tilde u_{1,h,n} \\ \tilde u_{2,h,n} \end{bmatrix} \right) = \tilde\rho_{h,n} + R(k S_{h,0})\, \tilde e_{h,n-1} + F_n,$$
where $\tilde\rho_{h,n}$ is the semidiscrete local error at $t = t_n$.
Therefore, we deduce that
$$\tilde e_{h,n} = \sum_{j=1}^{n} R^{n-j}(k S_{h,0})\, \tilde\rho_{h,j} + \sum_{j=1}^{n} R^{n-j}(k S_{h,0})\, F_j.$$
Taking norms and using the stability Hypothesis (44),
$$\|\tilde e_{h,n}\|_h \le \sum_{j=1}^{n} \|R^{n-j}(k S_{h,0})\|_h \, \|\tilde\rho_{h,j}\|_h + \sum_{j=1}^{n} \|R^{n-j}(k S_{h,0})\|_h \, \|F_j\|_h \le C \sum_{j=1}^{n} \|\tilde\rho_{h,j}\|_h + C \sum_{j=1}^{n} \|F_j\|_h \le C_1 k^p + C_2 \sum_{j=1}^{n} k \, \|\tilde e_{h,j-1}\|_h = C_1 k^p + C_2 \sum_{j=0}^{n-1} k \, \|\tilde e_{h,j}\|_h.$$
The proof of convergence is achieved by using the discrete Gronwall lemma. □

Author Contributions

Conceptualization, I.A.-M. and A.M.P.; Formal analysis, I.A.-M. and A.M.P.; Funding acquisition, I.A.-M.; Investigation, I.A.-M. and A.M.P.; Software, A.M.P.; Writing–review and editing, I.A.-M. and A.M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministerio de Ciencia y Educación, grant number PGC2018-101443-B-I00, and the first author was also funded by the Consejería de Educación, Junta de Castilla y León and Feder funds, grant number VA193P20.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blanes, S.; Casas, F.; Murua, A. Splitting and composition methods in the numerical integration of differential equations. Bol. Soc. Esp. Mat. Apl. 2008, 45, 89–145.
  2. McLachlan, R.I. On the numerical integration of ordinary differential equations by symmetric composition methods. SIAM J. Sci. Comput. 1995, 16, 151–168.
  3. McLachlan, R.I.; Quispel, G.R. Splitting methods. Acta Numer. 2002, 11, 341–434.
  4. Hairer, E.; Wanner, G.; Lubich, C. Geometric Numerical Integration; Springer: Berlin/Heidelberg, Germany, 2006.
  5. Sanz-Serna, J.M.; Calvo, M.P. Numerical Hamiltonian Problems; Chapman and Hall: London, UK, 1994.
  6. Cano, B.; Moreta, M.J. A modified Gautschi's method without order reduction when integrating boundary value nonlinear wave problems. Appl. Math. Comput. 2020, 373, 125022.
  7. Alonso-Mallo, I.; Palencia, C. On the convolution operators arising in the study of abstract initial boundary value problems. Proc. R. Soc. Edinb. Sect. A 1996, 126, 515–539.
  8. Palencia, C.; Alonso-Mallo, I. Abstract initial-boundary value problems. Proc. R. Soc. Edinb. Sect. A 1994, 124, 879–908.
  9. Arendt, W.; Batty, C.F.K.; Hieber, M.; Neubrander, F. Vector-Valued Laplace Transforms and Cauchy Problems. In Monographs in Mathematics; Birkhäuser: Basel, Switzerland, 2001; Volume 96.
  10. Fattorini, H.O. Second Order Linear Differential Equations in Banach Spaces, 1st ed.; North-Holland Mathematics Studies; Notas de Matemática: Amsterdam, The Netherlands, 2000; Volume 108.
  11. Alonso-Mallo, I. Explicit single step methods with optimal order of convergence for partial differential equations. Appl. Numer. Math. 1999, 31, 117–131.
  12. Alonso-Mallo, I.; Cano, B.; Moreta, M.J. Stability of Runge-Kutta-Nyström methods. J. Comput. Appl. Math. 2006, 189, 120–131.
  13. Alonso-Mallo, I.; Cano, B.; Moreta, M.J. The stability of rational approximations of cosine functions on Hilbert spaces. Appl. Numer. Math. 2009, 59, 21–38.
  14. Blanes, S.; Casas, F.; Murua, A. On the linear stability of splitting methods. Found. Comp. Math. 2008, 8, 357–393.
  15. Portillo, A.M. High order full discretization for anisotropic wave equations. Appl. Math. Comput. 2018, 323, 1–16.
  16. Elliott, J.F. The Characteristic Roots of Certain Real Symmetric Matrices. Master's Thesis, University of Tennessee, Knoxville, TN, USA, 1953.
  17. Gregory, R.T.; Karney, D. A Collection of Matrices for Testing Computational Algorithms; Wiley: New York, NY, USA, 1969.
Figure 1. Error, in logarithmic scale, for the energy norm of the solution of Test 1, with μ = 1, μ = 3 and μ = 5, N = 100, final time T = 1 and EX(f), for Strang's method (black), Ψ_S4 (red) and Ψ_S6 (blue).
Figure 2. Error, in logarithmic scale, of the energy norm for Test 2, for several options of symmetric-splitting methods, with N = 100, final time T = 1 and G3(f).
Figure 3. Error in energy norm for Test 3, N = 25, 50, 100, 200 and h = k = 1/N.
Table 1. Time error for three symmetric-splitting methods with h = 1/100 for final time T = 1.

              k = 1/100          k = 1/200          k = 1/400
  Strang      4.8394 × 10^{-4}   1.0727 × 10^{-4}   2.6416 × 10^{-5}
  Ψ_S4        3.0924 × 10^{-5}   1.6787 × 10^{-6}   1.0135 × 10^{-7}
  Ψ_S6        3.5808 × 10^{-10}  5.5734 × 10^{-12}  9.6204 × 10^{-14}

Table 2. Evolution of log_2(error(k)/error(k/2)) for three symmetric-splitting methods with h = 1/100.

              k = 1/100   k = 1/200
  Strang      2.1737      2.0217
  Ψ_S4        4.2034      4.0499
  Ψ_S6        6.0056      5.8563

Table 3. Time error for three symmetric-splitting methods with h = 1/100 for final time T = 1.

              k = 1/100          k = 1/200          k = 1/400
  Strang      –                  1.3639 × 10^{-4}   3.3576 × 10^{-5}
  Ψ_S4        1.3720 × 10^{-6}   7.9938 × 10^{-8}   4.9125 × 10^{-9}
  Ψ_S6        5.2613 × 10^{-8}   7.6468 × 10^{-10}  1.1734 × 10^{-11}

Table 4. Evolution of log_2(error(k)/error(k/2)) for three symmetric-splitting methods with h = 1/100.

              k = 1/100   k = 1/200
  Strang      –           2.0222
  Ψ_S4        4.1012      4.0244
  Ψ_S6        6.1044      6.0261

Table 5. Stability ratios (46) and (47) for the three splitting methods considered in the numerical experiments.

        Strang   Ψ_S4   Ψ_S6
  1D    1        3.16   1.72
  2D    0.71     2.23   1.22
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
