Article

A Fourth Order Symplectic and Conjugate-Symplectic Extension of the Midpoint and Trapezoidal Methods

by Felice Iavernaro 1 and Francesca Mazzia 2,*
1 Dipartimento di Matematica, Università degli studi di Bari Aldo Moro, 70125 Bari, Italy
2 Dipartimento di Informatica, Università degli studi di Bari Aldo Moro, 70125 Bari, Italy
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(10), 1103; https://doi.org/10.3390/math9101103
Submission received: 14 April 2021 / Revised: 7 May 2021 / Accepted: 10 May 2021 / Published: 13 May 2021
(This article belongs to the Special Issue Numerical Methods for Solving Differential Problems)

Abstract: The paper presents fourth order Runge–Kutta methods derived from symmetric Hermite–Obreshkov schemes by suitably approximating the involved higher derivatives. In particular, starting from the multi-derivative extension of the midpoint method we have obtained a new symmetric implicit Runge–Kutta method of order four for the numerical solution of first-order differential equations. The new method is symplectic and is suitable for the solution of both initial and boundary value Hamiltonian problems. Moreover, starting from the conjugate class of multi-derivative trapezoidal schemes, we have derived a new method that is conjugate to the new symplectic method.

1. Introduction

In the present work, we will consider a suitable modification of the multi-derivative one-step methods derived in [1] for the numerical solution of first-order differential equations
$$y'(t) = f(y(t)), \qquad t \in [t_0, T], \qquad (1)$$
with sufficiently regular vector field $f:\mathbb{R}^m \to \mathbb{R}^m$, subject to initial conditions $y(t_0)=y_0$, or boundary conditions $g(y(t_0),y(T))=0$. The methods we are going to introduce are especially suited for the long-time simulation of canonical Hamiltonian problems
$$y' = J\nabla H(y), \qquad y(t_0) = y_0 \in \mathbb{R}^{2m}, \qquad (2)$$
with
$$y=\begin{pmatrix} q \\ p \end{pmatrix}, \quad q,p\in\mathbb{R}^m, \qquad J=\begin{pmatrix} O & I \\ -I & O \end{pmatrix},$$
where q and p are the generalized coordinates and conjugate momenta, $H:\mathbb{R}^{2m}\to\mathbb{R}$ is the Hamiltonian function and I stands for the identity matrix of dimension m. The investigation of numerical methods for integrating differential equations such as (2) forms a branch of numerical analysis called Geometric Integration. Problem (2) admits the Hamiltonian function H(y) as a first integral, namely $H(y(t)) = H(y_0)$ for all $t \ge t_0$. It may admit additional constants of motion, reflecting important physical properties of the system, that general-purpose codes would fail to reproduce in a long-time simulation. Rather than focusing on the control of the accuracy over a given time interval of finite length, geometric integrators aim at reproducing the correct global behavior of the trajectory in the phase space, trying to retain the geometric features of the system itself. We refer the reader to the monographs [2,3,4] for the fundamental theory on the numerical treatment of conservative problems. Examples of the relevant role of geometric integration in several application areas may be found in the review papers [5,6,7] and references therein.
In [8,9] we analyzed two classes of one-step symmetric Hermite–Obreshkov schemes with interesting symplectic properties, and in [1] we focused on the symplectic properties of two families of multi-derivative high-order one-step methods which contain the well-known implicit midpoint and trapezoidal methods as seed formulae. Here, we consider a proper discretization of the Lie derivative appearing in these formulae, to define Runge–Kutta schemes with geometric properties. In the following, $y_1 = \Phi_h(y_0)$ denotes a generic one-step method of order $p>0$ that provides the numerical solution of (2) with integration stepsize $h>0$. We recall a few definitions and properties which are relevant for our analysis. The one-step method $\Phi_h$ is called:
- symplectic, if its Jacobian matrix is symplectic, that is
$$\left(\frac{\partial \Phi_h(y)}{\partial y}\right)^{\!\top} J\, \frac{\partial \Phi_h(y)}{\partial y} = J, \qquad \text{for all } y \in \mathbb{R}^{2m};$$
- conjugate-symplectic, if it is topologically conjugate to a symplectic method $y_1 = \Psi_h(y_0)$, which means that a global change of coordinates $\chi_h(y) = y + O(h)$ exists such that
$$\Phi_h = \chi_h \circ \Psi_h \circ \chi_h^{-1}; \qquad (5)$$
- conjugate-symplectic up to order $p+r$, if there exists a global change of coordinates $\chi_h(y) = y + O(h^p)$ such that (5) holds true, with the map $\Psi_h$ satisfying
$$\left(\frac{\partial \Psi_h(y)}{\partial y}\right)^{\!\top} J\, \frac{\partial \Psi_h(y)}{\partial y} = J + O(h^{p+r+1}).$$
A symplectic method inherits relevant properties of the flow associated with a canonical Hamiltonian problem such as volume preservation of closed surfaces in the phase space under time evolution. For a detailed analysis of symplectic Runge–Kutta methods, see the monographs [2,3,4]. Further relevant features are the conservation of quadratic first integrals, and near conservation of the Hamiltonian function over exponentially long times ([4], page 366).
Conjugate-symplecticity leaves these properties essentially unchanged (see [4], page 222, and [10]). In fact, from (5) we have
$$y_n = \Phi_h^n(y_0) = \left(\chi_h \circ \Psi_h \circ \chi_h^{-1}\right)^n(y_0) = \chi_h \circ \Psi_h^n \circ \chi_h^{-1}(y_0).$$
The third property listed above is clearly a relaxation of conjugate-symplecticity. In this case, the method $\Phi_h$ nearly conserves all quadratic first integrals and the Hamiltonian function over time intervals of length $O(h^{-r})$ (see [11]).
The starting point of our investigation is the class of Hermite–Obreshkov (HO) linear multistep methods [12]
$$\sum_{i=0}^{k} \alpha_i\, y_{n+i} = \sum_{j=1}^{l} h^j \sum_{i=0}^{k} \beta_{ji}\, y_{n+i}^{(j)}.$$
Here $y_{n+i}^{(j)}$ denotes an approximation to the j-th derivative of the solution $y(t)$ at $t_{n+i}$, with $t_{n+i} = t_n + ih$, and is defined as
$$y_n^{(j)} := D_{j-1} f(y_n), \qquad j = 1,2,\dots,l.$$
For a given integer $k \ge 0$ and $u \in \mathbb{R}^m$, $D_k f(u)$ is the k-th order Lie derivative of the vector field f, defined as the k-th time derivative of $f(y(t))$ formally computed at $y(t) = u$, assuming that $y(t)$ satisfies the differential equation in (1) ($D_0 = I$ is the identity operator). We have used a subscript to define this operator to avoid confusion with the classical derivative operator of the same order, denoted by $D^k$. Of course, the two operators take the same values when applied to the projection of the true solution $y(t)$ on the mesh points but, in general, they will differ. Recently, we have introduced and analyzed four different one-step ($k=1$) HO methods:
-
Euler–Maclaurin methods: higher derivative collocation methods deriving their name from the well-known Euler–Maclaurin integration formula. In [9] it has been shown that these integrators are conjugate-symplectic up to order p + 2 .
-
BS Hermite–Obreshkov methods: based on a special subclass of Hermite–Obreshkov methods which admit a continuous spline extension [8], these formulae are a multi-derivative generalization of the BS linear multistep methods derived in [13] and generalized in the field of quasi-interpolation in [14,15,16]. In [8] it has been shown that the BS Hermite–Obreshkov methods are conjugate-symplectic up to order p + 2 .
-
multi-derivative midpoint and trapezoidal methods: generalizations of the classical midpoint and trapezoidal methods, these formulae are derived by a combination of the implicit Taylor and the explicit Taylor expansion up to a given order [1]. The multi-derivative midpoint (MDMP) and trapezoidal (MDTR) methods are conjugate-symplectic up to order p + 2 .
The analysis of these classes of multi-derivative methods has also been motivated by the possibility of computing the derivatives efficiently, by exploiting the Infinity Computer arithmetic as described in [17,18,19]. In fact, the analytical computation of the j-th derivative $y^{(j)}$ involves a tensor of order j, which evidently affects the computational cost associated with the implementation of the method. In this respect, the Infinity Computer is able to accurately evaluate $y^{(j)}$ without requiring its explicit expression in terms of the derivatives of f. In this paper, we consider the more natural approach that uses finite differences to approximate the Lie derivatives appearing in a given multi-derivative formula.
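To make the role of the Lie derivatives concrete, the following sketch computes $D_1 f(u) = f'(u)f(u)$ and $D_2 f(u)$ for the simple pendulum vector field; the analytic Jacobian and its derivative are hard-coded for this illustrative example only and are not part of the methods proposed in the paper.

```python
import numpy as np

def f(u):
    # Pendulum: q' = p, p' = -sin(q), written as a first-order system y' = f(y).
    q, p = u
    return np.array([p, -np.sin(q)])

def D1f(u):
    # First Lie derivative: D_1 f(u) = f'(u) f(u) (Jacobian times vector field).
    q, p = u
    Jf = np.array([[0.0, 1.0], [-np.cos(q), 0.0]])
    return Jf @ f(u)

def D2f(u):
    # Second Lie derivative: the time derivative of D_1 f(y(t)) at y(t) = u,
    # i.e. the Jacobian of D_1 f contracted with f(u).
    q, p = u
    J_D1f = np.array([[-np.cos(q), 0.0],            # gradient of (D_1 f)_1 = -sin(q)
                      [p*np.sin(q), -np.cos(q)]])   # gradient of (D_1 f)_2 = -p*cos(q)
    return J_D1f @ f(u)

u = np.array([0.7, 0.3])
# Sanity check: D_1 f equals the time derivative of f along the flow,
# approximated here by a tiny explicit Euler step.
eps = 1e-6
print(D1f(u), (f(u + eps*f(u)) - f(u)) / eps)   # the two vectors agree to O(eps)
```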
Since a certain degree of freedom is allowed in the choice of the derivative discretization stepsize, it turns out that the final fully discretized formula may exhibit more favorable geometric properties with respect to the original one. This is the case for two special methods in the classes we are going to introduce: they form a pair of symplectic and conjugate-symplectic Runge–Kutta integrators that originate from the midpoint method and its conjugate-symplectic counterpart, namely the trapezoidal method. To the best of our knowledge, no pair of symplectic and conjugate-symplectic Runge–Kutta methods of order higher than two has been devised up to now, so their existence constitutes the core result of the paper.
In addition, we introduce and analyze a new technique for solving the nonlinear systems emerging from the implementation of the methods. It consists of a block-diagonal variant of the simplified Newton scheme which requires, at each integration step, a single Jacobian evaluation of the vector field and a single LU factorization of a matrix having the same size as the continuous problem. Moreover, the diagonal structure of the linear systems involved in the iterative procedure makes it suitable for a parallel implementation. A comparison of the new integrators with other existing symplectic integrators in terms of their efficiency is a delicate question and will not be addressed here.
The paper is organized as follows. In Section 2 we illustrate the multi-derivative fourth-order extensions of the midpoint and trapezoidal methods, while their modifications obtained by approximating the Lie derivatives are introduced in Section 3 and Section 4 respectively. Section 5 analyzes the above-mentioned nonlinear systems solver needed to advance the solution in time. Some numerical illustrations are presented in Section 6. Finally, Section 7 contains some concluding remarks.

2. MDMP and MDTR Methods

The multi-derivative generalization of the midpoint (MP) and trapezoidal (TR) methods we are interested in is easily obtained via a standard Taylor approach by exploiting the property that such schemes may be viewed as composition of the Implicit (IE) and Explicit Euler (EE) methods, in direct and reverse order, applied on half the integration time-step:
MP = EE ∘ IE:
$$y_{n+1} = y_n + h\, f\!\left(\frac{y_n + y_{n+1}}{2}\right) \quad\Longleftrightarrow\quad \left\{\begin{array}{l} y_{n+1/2} = y_n + \dfrac{h}{2} f(y_{n+1/2}), \\[4pt] y_{n+1} = y_{n+1/2} + \dfrac{h}{2} f(y_{n+1/2}), \end{array}\right.$$
TR = IE ∘ EE:
$$y_{n+1} = y_n + \frac{h}{2}\left(f(y_n) + f(y_{n+1})\right) \quad\Longleftrightarrow\quad \left\{\begin{array}{l} y_{n+1/2} = y_n + \dfrac{h}{2} f(y_n), \\[4pt] y_{n+1} = y_{n+1/2} + \dfrac{h}{2} f(y_{n+1}). \end{array}\right.$$
We focus on the two conjugate classes of the MDMP and MDTR methods of order four. By denoting as ET4 (IT4) the explicit (implicit) Taylor method of order four we have that
MDMP4 =  ET4 ∘ IT4:
$$y_{n+1} = y_n + h\, f(y_{n+1/2}) + \frac{h^3}{24} D_2 f(y_{n+1/2}),$$
or, equivalently,
$$\left\{\begin{array}{l} y_{n+1/2} = y_n + \dfrac{h}{2} f(y_{n+1/2}) - \dfrac{h^2}{8} D_1 f(y_{n+1/2}) + \dfrac{h^3}{48} D_2 f(y_{n+1/2}), \\[6pt] y_{n+1} = y_{n+1/2} + \dfrac{h}{2} f(y_{n+1/2}) + \dfrac{h^2}{8} D_1 f(y_{n+1/2}) + \dfrac{h^3}{48} D_2 f(y_{n+1/2}); \end{array}\right.$$
MDTR4 =  IT4 ∘ ET4:
$$y_{n+1} = y_n + \frac{h}{2}\left(f(y_{n+1}) + f(y_n)\right) - \frac{h^2}{8}\left(D_1 f(y_{n+1}) - D_1 f(y_n)\right) + \frac{h^3}{48}\left(D_2 f(y_{n+1}) + D_2 f(y_n)\right),$$
or, equivalently,
$$\left\{\begin{array}{l} y_{n+1/2} = y_n + \dfrac{h}{2} f(y_n) + \dfrac{h^2}{8} D_1 f(y_n) + \dfrac{h^3}{48} D_2 f(y_n), \\[6pt] y_{n+1} = y_{n+1/2} + \dfrac{h}{2} f(y_{n+1}) - \dfrac{h^2}{8} D_1 f(y_{n+1}) + \dfrac{h^3}{48} D_2 f(y_{n+1}). \end{array}\right.$$
Here we introduce and analyze two families of methods depending on a real parameter, which are derived by approximating the Lie derivatives appearing in the formulae above by means of suitable difference schemes. The latter are defined with the aid of auxiliary local steps that will be exploited for this purpose. We observe that the two Lie derivatives used in the MDMP4 and MDTR4 methods are multiplied by $h^2$ and $h^3$, respectively, so we shall approximate them by means of symmetric difference schemes of order at least two to preserve the symmetry and order properties of the original methods.
For the same reason, the formulae used to approximate the solution in the additional local steps should also be at least of order two. In particular, they take the form of implicit or explicit methods of order two so that the symmetry condition of the resulting method is preserved. In the next two sections we introduce these new fourth-order methods.

3. Approximated MDMP

In this section, we show two generalizations of the MDMP4 method. All the presented extensions are based on the computation of two local approximations of the solution in the two additional steps
$$t_{n+1/2-\alpha} = t_{n+1/2} - \alpha h \qquad \text{and} \qquad t_{n+1/2+\alpha} = t_{n+1/2} + \alpha h,$$
where $\alpha$ is a real positive parameter. In all cases, to approximate the first and second Lie derivatives we use the following standard finite-difference schemes of order two:
$$\widehat D_1 f_{n+1/2} = \frac{f(y_{n+1/2+\alpha}) - f(y_{n+1/2-\alpha})}{2\alpha h}, \qquad \widehat D_2 f_{n+1/2} = \frac{f(y_{n+1/2+\alpha}) - 2 f(y_{n+1/2}) + f(y_{n+1/2-\alpha})}{\alpha^2 h^2}. \qquad (9)$$
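As a small illustration, the two central-difference formulas in (9) can be coded directly; the sketch below is a minimal Python version, where f, y_minus, y_center, y_plus, alpha and h are assumed to be supplied by the surrounding integrator.

```python
import numpy as np

def lie_derivative_fd(f, y_minus, y_center, y_plus, alpha, h):
    """Second-order central-difference approximations of the first and second
    Lie derivatives of f, as in (9); y_minus, y_center, y_plus approximate
    the solution at t - alpha*h, t and t + alpha*h."""
    f_minus, f_center, f_plus = f(y_minus), f(y_center), f(y_plus)
    D1 = (f_plus - f_minus) / (2.0 * alpha * h)
    D2 = (f_plus - 2.0 * f_center + f_minus) / (alpha**2 * h**2)
    return D1, D2
```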

3.1. Computation of the Additional Stages Using the Standard Explicit RK Method of Order 2

Let us approximate the value of $y(t)$ at $t_{n+1/2-\alpha}$ and $t_{n+1/2+\alpha}$ by using the explicit Runge–Kutta method of order two, backward and forward, starting at $t_{n+1/2}$. The obtained values are used to approximate the derivatives by the central difference formulas in (9). The resulting method is
$$\begin{array}{l}
Y_{n+\frac12-\alpha} = y_{n+\frac12} - h\alpha\, f_{n+\frac12}, \qquad y_{n+\frac12-\alpha} = y_{n+\frac12} - \dfrac{h\alpha}{2}\left(F_{n+\frac12-\alpha} + f_{n+\frac12}\right), \\[6pt]
Y_{n+\frac12+\alpha} = y_{n+\frac12} + h\alpha\, f_{n+\frac12}, \qquad y_{n+\frac12+\alpha} = y_{n+\frac12} + \dfrac{h\alpha}{2}\left(F_{n+\frac12+\alpha} + f_{n+\frac12}\right), \\[6pt]
y_{n+\frac12} = y_n + \dfrac{h}{2} f_{n+\frac12} - \dfrac{h^2}{8} \widehat D_1 f_{n+\frac12} + \dfrac{h^3}{48} \widehat D_2 f_{n+\frac12}, \qquad
y_{n+1} = y_n + h f_{n+\frac12} + \dfrac{h^3}{24} \widehat D_2 f_{n+\frac12},
\end{array}$$
where $f_{n+\frac12} = f(y_{n+\frac12})$, $Y_{n+\frac12\pm\alpha}$ are the stages of the two Runge–Kutta steps and $F_{n+\frac12\pm\alpha} = f(Y_{n+\frac12\pm\alpha})$. We call this method AMDMP4_RK2. Written as a Runge–Kutta scheme, the method is described by the tableau
$$\begin{array}{c|ccccc}
\tfrac12-\alpha & \tfrac{1}{16\alpha}+\tfrac{b_1}{2} & -\tfrac{\alpha}{2} & -\tfrac{\alpha}{2}+\tfrac{b_3}{2} & 0 & -\tfrac{1}{16\alpha}+\tfrac{b_5}{2} \\[3pt]
\tfrac12-\alpha & \tfrac{1}{16\alpha}+\tfrac{b_1}{2} & 0 & -\alpha+\tfrac{b_3}{2} & 0 & -\tfrac{1}{16\alpha}+\tfrac{b_5}{2} \\[3pt]
\tfrac12 & \tfrac{1}{16\alpha}+\tfrac{b_1}{2} & 0 & \tfrac{b_3}{2} & 0 & -\tfrac{1}{16\alpha}+\tfrac{b_5}{2} \\[3pt]
\tfrac12+\alpha & \tfrac{1}{16\alpha}+\tfrac{b_1}{2} & 0 & \alpha+\tfrac{b_3}{2} & 0 & -\tfrac{1}{16\alpha}+\tfrac{b_5}{2} \\[3pt]
\tfrac12+\alpha & \tfrac{1}{16\alpha}+\tfrac{b_1}{2} & 0 & \tfrac{\alpha}{2}+\tfrac{b_3}{2} & \tfrac{\alpha}{2} & -\tfrac{1}{16\alpha}+\tfrac{b_5}{2} \\ \hline
 & b_1 & 0 & b_3 & 0 & b_5
\end{array}$$
where $b_1 = b_5 = \frac{1}{24\alpha^2}$ and $b_3 = 1 - \frac{1}{12\alpha^2}$. The fourth-order conditions have been checked by exploiting the formulas in [12] (Table 2.2, p. 148). Applying the method to the scalar linear test problem $y' = \lambda y$, we obtain the recurrence relation $y_{n+1} = R(h\lambda)\, y_n$, where $R(z) = 1 + z\, b^\top (I_5 - z A)^{-1} e$ is the stability function ($I_5$ denotes the identity matrix of size 5 and $e = (1,1,1,1,1)^\top$). For a general introduction to the linear stability theory of Runge–Kutta schemes, we refer to [20] (Chapter IV.3) and [21]. It turns out that, for the AMDMP4_RK2 methods, the stability function actually does not depend on the parameter $\alpha$. Setting, as usual, $q = h\lambda$, it takes the form
$$R(q) = \frac{P(q)}{Q(q)} = \frac{q^3 + 6q^2 + 24q + 48}{-q^3 + 6q^2 - 24q + 48}.$$
Since $P(q) = Q(-q)$, we get $|R(q)| = 1$ on the imaginary axis and, since the poles of $R(q)$ have positive real part, the maximum modulus principle shows that all the formulae in the family are precisely A-stable, that is, the domain of absolute stability coincides with $\mathbb{C}^-$.
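Both properties are easy to verify numerically; the following sketch (a check under the stated forms of P and Q, not part of the original paper) evaluates |R(q)| on the imaginary axis and inspects the poles of R.

```python
import numpy as np

P = np.array([ 1.0, 6.0,  24.0, 48.0])   # P(q) =  q^3 + 6q^2 + 24q + 48
Q = np.array([-1.0, 6.0, -24.0, 48.0])   # Q(q) = -q^3 + 6q^2 - 24q + 48 = P(-q)

# |R(iy)| should equal 1 for every real y.
y = np.linspace(-50, 50, 2001)
R_on_axis = np.polyval(P, 1j*y) / np.polyval(Q, 1j*y)
print(np.max(np.abs(np.abs(R_on_axis) - 1.0)))   # close to machine precision

# The poles of R are the roots of Q; they must lie in the right half-plane.
print(np.roots(Q))                               # all real parts are positive
```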
In comparison with the original MDMP method, we see that the computational cost for the implementation of the new formulae decreases, because we just need to add the computation of the two explicit steps to the nonlinear iteration.
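For illustration, a minimal sketch of one AMDMP4_RK2 step is given below; it resolves the implicit relation for $y_{n+1/2}$ by plain fixed-point iteration (adequate for non-stiff problems and small h), while the solver actually advocated in the paper is the modified Newton scheme of Section 5. Function and variable names are ours.

```python
import numpy as np

def amdmp4_rk2_step(f, y_n, h, alpha=0.25, tol=1e-14, maxit=100):
    """One step of AMDMP4_RK2: the implicit midpoint-type value y_{n+1/2} is
    computed by fixed-point iteration; the auxiliary values at
    t_{n+1/2 +/- alpha*h} come from explicit RK2 substeps."""
    y_half = y_n + 0.5*h*f(y_n)                # predictor for y_{n+1/2}
    for _ in range(maxit):
        f_half = f(y_half)
        # explicit RK2 substeps of length alpha*h, backward and forward
        Y_m = y_half - h*alpha*f_half
        y_m = y_half - 0.5*h*alpha*(f(Y_m) + f_half)
        Y_p = y_half + h*alpha*f_half
        y_p = y_half + 0.5*h*alpha*(f(Y_p) + f_half)
        # finite-difference Lie derivatives (9)
        D1 = (f(y_p) - f(y_m)) / (2*alpha*h)
        D2 = (f(y_p) - 2*f_half + f(y_m)) / (alpha*h)**2
        y_new = y_n + 0.5*h*f_half - h**2/8*D1 + h**3/48*D2
        done = np.linalg.norm(y_new - y_half) < tol
        y_half = y_new
        if done:
            break
    # final update using the converged y_{n+1/2}
    f_half = f(y_half)
    Y_m = y_half - h*alpha*f_half; y_m = y_half - 0.5*h*alpha*(f(Y_m) + f_half)
    Y_p = y_half + h*alpha*f_half; y_p = y_half + 0.5*h*alpha*(f(Y_p) + f_half)
    D2 = (f(y_p) - 2*f_half + f(y_m)) / (alpha*h)**2
    return y_n + h*f_half + h**3/24*D2

# Example: one period of the harmonic oscillator y' = (y2, -y1)
f = lambda y: np.array([y[1], -y[0]])
y = np.array([1.0, 0.0])
h = 2*np.pi/50
for _ in range(50):
    y = amdmp4_rk2_step(f, y, h)
print(y)   # close to the initial point (1, 0); the error behaves like O(h^4)
```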

3.2. Approximation of the Derivative Using the Trapezoidal Scheme of Order 2

The second family of methods is derived by approximating the Lie derivatives with the aid of the backward and forward trapezoidal schemes. The resulting method is
$$\begin{array}{l}
y_{n+\frac12-\alpha} = y_{n+\frac12} - \dfrac{h\alpha}{2}\left(f_{n+\frac12-\alpha} + f_{n+\frac12}\right), \qquad
y_{n+\frac12+\alpha} = y_{n+\frac12} + \dfrac{h\alpha}{2}\left(f_{n+\frac12+\alpha} + f_{n+\frac12}\right), \\[6pt]
y_{n+\frac12} = y_n + \dfrac{h}{2} f_{n+\frac12} - \dfrac{h^2}{8} \widehat D_1 f_{n+\frac12} + \dfrac{h^3}{48} \widehat D_2 f_{n+\frac12}, \qquad
y_{n+1} = y_n + h f_{n+\frac12} + \dfrac{h^3}{24} \widehat D_2 f_{n+\frac12}.
\end{array}$$
Written as a Runge–Kutta scheme we have the following tableau
$$\begin{array}{c|ccc}
\tfrac12-\alpha & -\tfrac{\alpha}{2}+\tfrac{1}{16\alpha}+\tfrac{1}{48\alpha^2} & -\tfrac{\alpha}{2}+\tfrac12-\tfrac{1}{24\alpha^2} & -\tfrac{1}{16\alpha}+\tfrac{1}{48\alpha^2} \\[4pt]
\tfrac12 & \tfrac{1}{16\alpha}+\tfrac{1}{48\alpha^2} & \tfrac12-\tfrac{1}{24\alpha^2} & -\tfrac{1}{16\alpha}+\tfrac{1}{48\alpha^2} \\[4pt]
\tfrac12+\alpha & \tfrac{1}{16\alpha}+\tfrac{1}{48\alpha^2} & \tfrac{\alpha}{2}+\tfrac12-\tfrac{1}{24\alpha^2} & \tfrac{\alpha}{2}-\tfrac{1}{16\alpha}+\tfrac{1}{48\alpha^2} \\ \hline
 & \tfrac{1}{24\alpha^2} & 1-\tfrac{1}{12\alpha^2} & \tfrac{1}{24\alpha^2}
\end{array} \qquad (10)$$
and it is easy to check that its order is four. More interestingly, within this family we may discover a new fourth-order symplectic Runge–Kutta formula.
Theorem 1.
The Runge–Kutta scheme defined by the tableau (10) is A-stable for $\alpha < 1/\sqrt{6}$ and it is symplectic if we choose $\alpha = \sqrt{2}/4$.
Proof. 
The stability function R ( q ) is equal to the following rational function:
$$R(q) = \frac{P(q)}{Q(q)} = \frac{-(6\alpha^2-1)\,q^3 - (12\alpha^2-6)\,q^2 + 24q + 48}{(6\alpha^2-1)\,q^3 - (12\alpha^2-6)\,q^2 - 24q + 48}, \qquad (11)$$
from which $P(q) = Q(-q)$ and hence $|R(q)| = 1$ on the imaginary axis. In order to impose that the poles of R lie in the right half of the complex plane, we can apply the Routh–Hurwitz stability criterion to $Q(-q) = P(q)$. A direct computation then leads to the condition $\alpha < 1/\sqrt{6}$ for the roots of $P(q)$ to have negative real part. Consequently, for these values of $\alpha$, the method is precisely A-stable. A sufficient condition for symplecticity (see [4] (Theorem 4.3) or [22]) is $b_i a_{ij} + b_j a_{ji} - b_i b_j = 0$, $i,j = 1,2,3$. It turns out that this condition is satisfied only if we choose $\alpha = \sqrt{2}/4$. □
The new symplectic RK method is
$$\begin{array}{c|ccc}
\tfrac12-\tfrac{\sqrt2}{4} & \tfrac16 & \tfrac16-\tfrac{\sqrt2}{8} & \tfrac16-\tfrac{\sqrt2}{8} \\[4pt]
\tfrac12 & \tfrac16+\tfrac{\sqrt2}{8} & \tfrac16 & \tfrac16-\tfrac{\sqrt2}{8} \\[4pt]
\tfrac12+\tfrac{\sqrt2}{4} & \tfrac16+\tfrac{\sqrt2}{8} & \tfrac16+\tfrac{\sqrt2}{8} & \tfrac16 \\ \hline
 & \tfrac13 & \tfrac13 & \tfrac13
\end{array} \qquad (12)$$
Symplectic Runge–Kutta schemes with three stages and order four are already known in the literature; see, for example, [23]. The method (12) has the special property of being defined as the composition of two formulae: this suggests that the method obtained by the reverse composition is conjugate to (12), and thus we get a couple of symplectic/conjugate-symplectic Runge–Kutta schemes that extends to order four the well-known couple formed by the midpoint and trapezoidal methods. The conjugate-symplectic method associated with (12) is introduced in the next section.
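A minimal sketch of the scheme (12), written as a standard three-stage implicit Runge–Kutta method, is reported below; it also checks numerically the algebraic symplecticity condition $b_i a_{ij} + b_j a_{ji} - b_i b_j = 0$. The nonlinear stage equations are solved here by simple fixed-point iteration for readability; the solver actually advocated in the paper is the one of Section 5.

```python
import numpy as np

s2 = np.sqrt(2.0)
A = np.array([[1/6,        1/6 - s2/8, 1/6 - s2/8],
              [1/6 + s2/8, 1/6,        1/6 - s2/8],
              [1/6 + s2/8, 1/6 + s2/8, 1/6       ]])
b = np.array([1/3, 1/3, 1/3])

# Symplecticity condition: M_ij = b_i a_ij + b_j a_ji - b_i b_j must vanish.
M = b[:, None]*A + (b[:, None]*A).T - np.outer(b, b)
print(np.max(np.abs(M)))                  # zero up to round-off

def irk_step(f, y_n, h, A, b, tol=1e-14, maxit=200):
    """One step of an implicit RK method (A, b): fixed-point iteration on the stages."""
    s = len(b)
    Y = np.tile(y_n, (s, 1))              # stage values, one per row
    for _ in range(maxit):
        F = np.array([f(Y[i]) for i in range(s)])
        Y_new = y_n + h * (A @ F)
        done = np.max(np.abs(Y_new - Y)) < tol
        Y = Y_new
        if done:
            break
    F = np.array([f(Y[i]) for i in range(s)])
    return y_n + h * (b @ F)

# Example: the quadratic invariant y1^2 + y2^2 of the harmonic oscillator is preserved.
f = lambda y: np.array([y[1], -y[0]])
y = np.array([1.0, 0.0])
for _ in range(1000):
    y = irk_step(f, y, 0.1, A, b)
print(y[0]**2 + y[1]**2)                  # equals 1 up to the accumulated solver tolerance
```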

4. Approximated MDTR4 Methods

In this section, we devise two classes of methods obtained by approximating the Lie derivatives appearing in the MDTR4 formulae: each method in these families is conjugate to the corresponding one derived in Section 3. All the generalizations are based on the computation of two local approximations of the solution in two additional steps depending on a positive parameter $\alpha$. This time, the new additional steps are
$$t_{n+1-\alpha} = t_{n+1} - \alpha h \qquad \text{and} \qquad t_{n+1+\alpha} = t_{n+1} + \alpha h.$$
To approximate the Lie derivatives, we use the same second-order finite-difference schemes used in (9) for the AMDMP4 methods:
$$\widehat D_1 f_{n+1} = \frac{f(y_{n+1+\alpha}) - f(y_{n+1-\alpha})}{2\alpha h}, \qquad \widehat D_2 f_{n+1} = \frac{f(y_{n+1+\alpha}) - 2 f(y_{n+1}) + f(y_{n+1-\alpha})}{\alpha^2 h^2}.$$

4.1. Approximation of the Derivative Using the Standard Explicit RK Method of Order 2

As in the previous section, the first family of methods is derived by employing the second-order explicit Runge–Kutta method to approximate the solution at the two extra abscissae $t_{n+1-\alpha}$ and $t_{n+1+\alpha}$. We get
$$\begin{array}{l}
Y_{n+1-\alpha} = y_{n+1} - h\alpha\, f_{n+1}, \qquad y_{n+1-\alpha} = y_{n+1} - \dfrac{h\alpha}{2}\left(F_{n+1-\alpha} + f_{n+1}\right), \\[6pt]
Y_{n+1+\alpha} = y_{n+1} + h\alpha\, f_{n+1}, \qquad y_{n+1+\alpha} = y_{n+1} + \dfrac{h\alpha}{2}\left(F_{n+1+\alpha} + f_{n+1}\right), \\[6pt]
y_{n+1} = y_n + \dfrac{h}{2}\left(f_n + f_{n+1}\right) + \dfrac{h^2}{8}\left(\widehat D_1 f_n - \widehat D_1 f_{n+1}\right) + \dfrac{h^3}{48}\left(\widehat D_2 f_n + \widehat D_2 f_{n+1}\right).
\end{array}$$
This method, denoted by AMDTR4_RK2, is symmetric and has order four. Written as a Runge–Kutta scheme, it has s = 10 stages and the following tableau:
$$\begin{array}{c|cccccccccc}
-\alpha & 0 & -\frac{\alpha}{2} & -\frac{\alpha}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-\alpha & 0 & 0 & -\alpha & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\alpha & 0 & 0 & \alpha & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\alpha & 0 & 0 & \frac{\alpha}{2} & \frac{\alpha}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\
1-\alpha & b_1 & 0 & b_3 & 0 & b_5 & b_6 & -\frac{\alpha}{2} & -\frac{\alpha}{2}+b_8 & 0 & b_{10} \\
1-\alpha & b_1 & 0 & b_3 & 0 & b_5 & b_6 & 0 & -\alpha+b_8 & 0 & b_{10} \\
1 & b_1 & 0 & b_3 & 0 & b_5 & b_6 & 0 & b_8 & 0 & b_{10} \\
1+\alpha & b_1 & 0 & b_3 & 0 & b_5 & b_6 & 0 & \alpha+b_8 & 0 & b_{10} \\
1+\alpha & b_1 & 0 & b_3 & 0 & b_5 & b_6 & 0 & \frac{\alpha}{2}+b_8 & \frac{\alpha}{2} & b_{10} \\ \hline
 & b_1 & 0 & b_3 & 0 & b_5 & b_6 & 0 & b_8 & 0 & b_{10}
\end{array}$$
where the non-null weights $b_i$ are
$$b_1 = b_{10} = -\frac{1}{16\alpha} + \frac{1}{48\alpha^2}, \qquad b_3 = b_8 = \frac12 - \frac{1}{24\alpha^2}, \qquad b_5 = b_6 = \frac{1}{16\alpha} + \frac{1}{48\alpha^2}.$$
The coefficient matrix A has a block structure with many vanishing elements. The AMDTR4_RK2 scheme may also be cast as a parameterized implicit Runge–Kutta (PIRK) method having the general form
$$Y_i = (1-v_i)\, y_n + v_i\, y_{n+1} + h \sum_{j=1}^{s} x_{ij}\, f(t_n + c_j h, Y_j), \quad i = 1,\dots,s, \qquad
y_{n+1} = y_n + h \sum_{j=1}^{s} b_j\, f(t_n + c_j h, Y_j),$$
and represented by the tableau
$$\begin{array}{c|c|c}
c & v & X \\ \hline
 & & b^\top
\end{array}$$
with $c_i = v_i + \sum_{j=1}^{s} x_{ij}$. Notice that a PIRK method is equivalent to a RK method with $A = X + v\, b^\top$. Order results for this class of methods are reported in [24], where the special subclass of mono-implicit Runge–Kutta (MIRK) methods is analyzed. The MIRK class has been investigated by many authors since it has the interesting feature that the matrix X is strictly lower triangular. In this form, the method AMDTR4_RK2 has the following tableau:
$$\begin{array}{c|c|cccccccccc}
-\alpha & 0 & 0 & -\frac{\alpha}{2} & -\frac{\alpha}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-\alpha & 0 & 0 & 0 & -\alpha & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\alpha & 0 & 0 & 0 & \alpha & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\alpha & 0 & 0 & 0 & \frac{\alpha}{2} & \frac{\alpha}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\
1-\alpha & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -\frac{\alpha}{2} & -\frac{\alpha}{2} & 0 & 0 \\
1-\alpha & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\alpha & 0 & 0 \\
1 & 0 & b_1 & 0 & b_3 & 0 & b_5 & b_6 & 0 & b_8 & 0 & b_{10} \\
1+\alpha & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha & 0 & 0 \\
1+\alpha & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{\alpha}{2} & \frac{\alpha}{2} & 0 \\ \hline
 & & b_1 & 0 & b_3 & 0 & b_5 & b_6 & 0 & b_8 & 0 & b_{10}
\end{array}$$
Separating the block involving $t_n$ from the one involving $t_{n+1}$ leads to another representation of the AMDTR4_RK2 method as a one-step block method with $s = 5$ stages, namely
$$T Z_{n+1} = e_3 e_3^\top Z_n + h\, e_3 d^\top G_n + h B\, G_{n+1},$$
where (see (15))
$$Z_n = \begin{pmatrix} y_{n-\alpha} \\ Y_{n-\alpha} \\ y_n \\ Y_{n+\alpha} \\ y_{n+\alpha} \end{pmatrix}, \quad
G_n = \begin{pmatrix} f(y_{n-\alpha}) \\ f(Y_{n-\alpha}) \\ f(y_n) \\ f(Y_{n+\alpha}) \\ f(y_{n+\alpha}) \end{pmatrix}, \quad
d = \begin{pmatrix} b_1 \\ 0 \\ b_3 \\ 0 \\ b_5 \end{pmatrix}, \quad
e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix},$$
$$T = \begin{pmatrix} 1 & 0 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 & 1 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 & -\alpha/2 & -\alpha/2 & 0 & 0 \\ 0 & 0 & -\alpha & 0 & 0 \\ b_6 & 0 & b_8 & 0 & b_{10} \\ 0 & 0 & \alpha & 0 & 0 \\ 0 & 0 & \alpha/2 & \alpha/2 & 0 \end{pmatrix},$$
with T, B, d and $e_3$ used as linear operators, to simplify the notation and avoid the explicit use of Kronecker products. This latter expression, besides being more compact than the previous ones, leads to a better implementation in terms of computational efficiency.
Applied to the test equation $y' = \lambda y$, the AMDTR4_RK2 method defines the following recursion:
$$Z_{n+1} = \left(I - q\, T^{-1} B\right)^{-1} T^{-1}\left(e_3 e_3^\top + q\, e_3 d^\top\right) Z_n = G(q)\, Z_n,$$
where q = h λ . The matrix G ( q ) has four eigenvalues equal to zero and one equal to the following rational function:
$$R(q) = \frac{P(q)}{Q(q)} = \frac{q^3 + 6q^2 + 24q + 48}{-q^3 + 6q^2 - 24q + 48}.$$
Again we see that $P(q) = Q(-q)$, and a direct computation shows that $R(q)$ is analytic in the left half of the complex plane, so the method is precisely A-stable.

4.2. Approximation of the Derivative Using the Trapezoidal Scheme of Order 2

Approximating the solution in the additional points by the trapezoidal scheme yields
$$\begin{array}{l}
y_{n+1-\alpha} = y_{n+1} - \dfrac{h\alpha}{2}\left(f_{n+1-\alpha} + f_{n+1}\right), \qquad
y_{n+1+\alpha} = y_{n+1} + \dfrac{h\alpha}{2}\left(f_{n+1} + f_{n+1+\alpha}\right), \\[6pt]
y_{n+1} = y_n + \dfrac{h}{2}\left(f_{n+1} + f_n\right) + \dfrac{h^2}{8}\left(\widehat D_1 f_n - \widehat D_1 f_{n+1}\right) + \dfrac{h^3}{48}\left(\widehat D_2 f_n + \widehat D_2 f_{n+1}\right).
\end{array}$$
These formulae, denoted by AMDTR4_TR2, have order four. Choosing $\alpha = \sqrt{2}/4$ we get a method which is conjugate to the corresponding method AMDMP4_TR2 defined at (12). In block form, this method assumes the following shape:
$$T Z_{n+1} = e_2 e_2^\top Z_n + h\, e_2 d^\top G_n + h B\, G_{n+1},$$
where
$$Z_n = \begin{pmatrix} y_{n-\alpha} \\ y_n \\ y_{n+\alpha} \end{pmatrix}, \quad
G_n = \begin{pmatrix} f(y_{n-\alpha}) \\ f(y_n) \\ f(y_{n+\alpha}) \end{pmatrix}, \quad
d = \begin{pmatrix} -\frac{1}{16\alpha} + \frac{1}{48\alpha^2} \\[4pt] \frac12 - \frac{1}{24\alpha^2} \\[4pt] \frac{1}{16\alpha} + \frac{1}{48\alpha^2} \end{pmatrix}, \quad
e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},$$
$$T = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}, \qquad
B = \begin{pmatrix} -\alpha/2 & -\alpha/2 & 0 \\ d_3 & d_2 & d_1 \\ 0 & \alpha/2 & \alpha/2 \end{pmatrix},$$
with T, B, d and e 2 used as linear operators.
Applied to the test equation $y' = \lambda y$, the solution is defined by the following recursion:
$$Z_{n+1} = \left(I - q\, T^{-1} B\right)^{-1} T^{-1}\left(e_2 e_2^\top + q\, e_2 d^\top\right) Z_n = G(q)\, Z_n,$$
where $q = h\lambda$. The matrix $G(q)$ has two vanishing eigenvalues and one eigenvalue equal to the same rational function defined at (11). From Theorem 1, it then follows that the formulae are precisely A-stable for $\alpha < 1/\sqrt{6}$.
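The block form above translates directly into code. The sketch below advances one step of AMDTR4_TR2 by fixed-point iteration on the block $Z_{n+1} = (y_{n+1-\alpha}, y_{n+1}, y_{n+1+\alpha})$; the starting block at $t_0$ is obtained from $y_0$ through the same trapezoidal relations. This is only an illustrative implementation under these assumptions, not the authors' code.

```python
import numpy as np

def amdtr4_tr2_weights(alpha):
    d1 = -1/(16*alpha) + 1/(48*alpha**2)
    d2 = 0.5 - 1/(24*alpha**2)
    d3 = 1/(16*alpha) + 1/(48*alpha**2)
    return d1, d2, d3

def amdtr4_tr2_step(f, Z, h, alpha, tol=1e-14, maxit=200):
    """One step of AMDTR4_TR2 in block form: Z = (y_{n-a}, y_n, y_{n+a}) -> Z_{n+1}."""
    d1, d2, d3 = amdtr4_tr2_weights(alpha)
    y_m, y_c, y_p = Z
    g_old = h*(d1*f(y_m) + d2*f(y_c) + d3*f(y_p))   # contribution of the old block
    z_m, z_c, z_p = y_c.copy(), y_c.copy(), y_c.copy()
    for _ in range(maxit):
        fc = f(z_c)
        z_c_new = y_c + g_old + h*(d3*f(z_m) + d2*fc + d1*f(z_p))
        z_m_new = z_c_new - 0.5*h*alpha*(f(z_m) + fc)
        z_p_new = z_c_new + 0.5*h*alpha*(fc + f(z_p))
        err = max(np.max(np.abs(z_c_new - z_c)),
                  np.max(np.abs(z_m_new - z_m)), np.max(np.abs(z_p_new - z_p)))
        z_m, z_c, z_p = z_m_new, z_c_new, z_p_new
        if err < tol:
            break
    return (z_m, z_c, z_p)

def startup_block(f, y0, h, alpha, tol=1e-14, maxit=200):
    """Auxiliary values at t_0 +/- alpha*h from y_0 via implicit trapezoidal substeps."""
    y_m, y_p = y0.copy(), y0.copy()
    for _ in range(maxit):
        y_m_new = y0 - 0.5*h*alpha*(f(y_m) + f(y0))
        y_p_new = y0 + 0.5*h*alpha*(f(y0) + f(y_p))
        done = max(np.max(np.abs(y_m_new - y_m)), np.max(np.abs(y_p_new - y_p))) < tol
        y_m, y_p = y_m_new, y_p_new
        if done:
            break
    return (y_m, y0, y_p)

# Example: harmonic oscillator over one period with the conjugate choice alpha = sqrt(2)/4
f = lambda y: np.array([y[1], -y[0]])
alpha, h = np.sqrt(2)/4, 2*np.pi/50
Z = startup_block(f, np.array([1.0, 0.0]), h, alpha)
for _ in range(50):
    Z = amdtr4_tr2_step(f, Z, h, alpha)
print(Z[1])   # close to the initial point (1, 0)
```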

5. Solution of the Nonlinear Systems

In order to optimize the computational cost associated with each method derived in the previous sections, we employ a suitably modified Newton scheme to solve the nonlinear systems emerging from their implementation. In particular, we approximate the Jacobian matrix with a block-diagonal matrix with constant diagonal blocks. The derived iterative scheme for the AMDMP methods is the following
$$\left[I_s \otimes I - \frac{h}{\beta}\left(I_s \otimes J\right)\right]\left(Y_{n+1}^{r+1} - Y_{n+1}^{r}\right) = -Y_{n+1}^{r} + e \otimes y_n + h\left(A \otimes I\right) f\!\left(Y_{n+1}^{r}\right), \qquad (16)$$
where $A \in \mathbb{R}^{s\times s}$ denotes the coefficient matrix of the method (s is the number of stages), $J \in \mathbb{R}^{m\times m}$ is the Jacobian matrix of the vector field f evaluated at $(t_n, y_n)$, $y_n$ is the solution computed at the previous step $t_n$, $\beta$ is a positive parameter, while $I_s$ and $I$ stand for the identity matrices of size s and m, respectively.
Theorem 2.
The nonlinear iteration scheme (16) applied to the AMDMP4_TR2 method for the solution of the test equation $y' = \lambda y$ is convergent for $\mathrm{Re}(q) < 0$ if the spectral radius $\rho$ of the matrix $\beta A - I$ is less than one. When $\rho > 1$, the method converges if
$$|q| \le \frac{\beta\cos\theta + \beta\sqrt{\cos^2\theta + \rho^2 - 1}}{\rho^2 - 1}, \qquad -\frac{\pi}{2} \le \theta \le \frac{\pi}{2},$$
where $\theta$ denotes the argument of $-q$.
Proof. 
The scheme (16) applied to the test equation reduces to the following iteration:
$$Y_{n+1}^{r+1} = q\left(1 - \frac{q}{\beta}\right)^{-1}\left(A - \frac{1}{\beta} I\right) Y_{n+1}^{r} + \left(1 - \frac{q}{\beta}\right)^{-1} e\, y_n.$$
The eigenvalues of the iteration matrix are the eigenvalues of the matrix $\beta A - I$ scaled by $q/(\beta - q)$. It is straightforward to check that if $\rho < 1$ the method is convergent when $\mathrm{Re}(q) < 0$. The region of convergence when $\rho > 1$ is computed by imposing $\rho\, |q/(\beta - q)| < 1$. □
Corollary 1.
The nonlinear iteration scheme (16) with $0 < \beta \le 7$, applied to the AMDMP4_TR2 method with $\alpha = \sqrt{2}/4$ for the solution of the test equation $y' = \lambda y$, converges if $\mathrm{Re}(\lambda) < 0$. The minimum value of the spectral radius at $\infty$ is 0.5637 and is attained choosing $\beta \approx 4.6721$.
Proof. 
The eigenvalues of the matrix $\beta A - I$ have absolute value less than one for $0 < \beta \le 7$, with minimum value 0.5637 for $\beta \approx 4.6721$. □
Observe that the iterative scheme (16) only requires one LU factorization of size m per step, and the solution of the linear systems associated with the different stages can easily be performed in parallel. Depending on the structure of the problem, this is surely an interesting approach for solving the nonlinear equations.
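A minimal sketch of one step of the symplectic AMDMP4_TR2 method (12) solved with the iteration (16) is given below: the matrix $I - (h/\beta)J$ is factorized once per step and reused for all stages and all iterations. The routine names and the stopping criterion are ours; SciPy's dense LU is used for simplicity.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

s2 = np.sqrt(2.0)
A = np.array([[1/6,        1/6 - s2/8, 1/6 - s2/8],
              [1/6 + s2/8, 1/6,        1/6 - s2/8],
              [1/6 + s2/8, 1/6 + s2/8, 1/6       ]])
b = np.array([1/3, 1/3, 1/3])

def irk_step_blockdiag_newton(f, jac, y_n, h, beta=4.6721, tol=1e-12, maxit=50):
    """One implicit RK step; the simplified Newton matrix I_s x I - h A x J is replaced
    by the block-diagonal approximation I_s x (I - (h/beta) J), cf. (16)."""
    s, m = len(b), len(y_n)
    J = jac(y_n)                               # one Jacobian evaluation per step
    lu = lu_factor(np.eye(m) - (h/beta)*J)     # one LU factorization of size m per step
    Y = np.tile(y_n, (s, 1))
    for _ in range(maxit):
        F = np.array([f(Y[i]) for i in range(s)])
        residual = Y - y_n - h*(A @ F)         # stage residuals, stacked row-wise
        if np.max(np.abs(residual)) < tol:
            break
        # the s corrections share the same matrix, so they could be solved in parallel
        delta = np.array([lu_solve(lu, -residual[i]) for i in range(s)])
        Y = Y + delta
    F = np.array([f(Y[i]) for i in range(s)])
    return y_n + h*(b @ F)

# Example: a mildly stiff linear problem y' = L y
L = np.array([[0.0, 1.0], [-100.0, -1.0]])
f = lambda y: L @ y
jac = lambda y: L
y = np.array([1.0, 0.0])
for _ in range(100):
    y = irk_step_blockdiag_newton(f, jac, y, 0.05)
print(y)
```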
A similar argument may be applied to the solution of the nonlinear systems arising from the implementation of the AMDTR4_TR2 methods. The details are omitted for the sake of brevity.

6. Numerical Illustrations

In the present section, we compare the behavior of the newly-derived formulae with that of the original multi-derivative methods. These integrators have been applied to the well-known Kepler problem, a super-integrable Hamiltonian system that describes the motion of two bodies subject to Newton’s law of gravitation (see, for example [25]). By setting the origin of the coordinate system on one of the two bodies, the Hamiltonian function takes the form
$$H(q,p) = \frac12\left(p_1^2 + p_2^2\right) - \frac{1}{\sqrt{q_1^2 + q_2^2}}.$$
In particular, taking as initial conditions
$$q_1(0) = 1 - e, \qquad q_2(0) = 0, \qquad p_1(0) = 0, \qquad p_2(0) = \sqrt{\frac{1+e}{1-e}},$$
the trajectory describes an ellipse with eccentricity e in the q 1 q 2 plane and is periodic with period T = 2 π . Besides the total energy H, further relevant first integrals are the angular momentum
$$M(q,p) = q_1 p_2 - q_2 p_1,$$
and the Lenz vector A = ( A 1 , A 2 , A 3 ) , whose components are
$$A_1(q,p) = p_2\, M(q,p) - \frac{q_1}{\|q\|_2}, \qquad A_2(q,p) = -p_1\, M(q,p) - \frac{q_2}{\|q\|_2}, \qquad A_3(q,p) = 0.$$
Of the four first integrals H , M , A 1 and A 2 , only three are independent so, for example, A 1 can be neglected.
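For reference, the sketch below collects the vector field of the Kepler problem in the canonical form $y' = J\nabla H(y)$ together with the monitored invariants; eccentricity and stepsize match the values used in the experiments reported next.

```python
import numpy as np

e = 0.6                      # eccentricity
h = 2*np.pi/200              # stepsize T/200
y0 = np.array([1.0 - e, 0.0, 0.0, np.sqrt((1.0 + e)/(1.0 - e))])   # (q1, q2, p1, p2)

def f(y):
    # Canonical Hamiltonian form: q' = p, p' = -q/|q|^3.
    q1, q2, p1, p2 = y
    r3 = (q1*q1 + q2*q2)**1.5
    return np.array([p1, p2, -q1/r3, -q2/r3])

def hamiltonian(y):
    q1, q2, p1, p2 = y
    return 0.5*(p1*p1 + p2*p2) - 1.0/np.hypot(q1, q2)

def angular_momentum(y):
    q1, q2, p1, p2 = y
    return q1*p2 - q2*p1

def lenz_A2(y):
    q1, q2, p1, p2 = y
    return -p1*angular_momentum(y) - q2/np.hypot(q1, q2)

print(hamiltonian(y0), angular_momentum(y0), lenz_A2(y0))
```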
Having set $e = 0.6$ and $h = T/200$, we integrated the problem over $10^3$ periods and computed the error $\|y_n - y_0\|_1$ in the solution at times that are multiples of the period T, that is, for $n = 200k$, with $k = 1, 2, \dots$. All computations have been carried out on an Intel i7 quad-core CPU with 16 GB of RAM, running MATLAB R2020b.
Figure 1 and Figure 2 report the results for the considered methods. The top-left picture shows the absolute error of the numerical solution; the top-right picture shows the error in the Hamiltonian function; the error in the angular momentum is drawn in the bottom-left picture, while the bottom-right picture concerns the error in the second component of the Lenz vector. To gain more insight into the performance and conservation properties of the symplectic AMDMP_TR2 formula, in Figure 1 we also report the results for the Gauss method of order four. We observe that, to make the pictures more readable, the errors in the first integrals are computed at the midpoint of each period, that is, at the points $t_{n+100}$.
As expected from a symplectic or a conjugate-symplectic integrator, we can see a linear drift in the error $\|y_n - y_0\|_1$ as the time increases. The same linear growth is experienced in the Lenz invariant. The conjugate-symplectic AMDTR_TR2 scheme assures near conservation of the Hamiltonian function and angular momentum. This latter quadratic invariant is precisely conserved (up to machine precision) by the symplectic schemes.
In Table 1 and Table 2, we compare the symplectic and conjugate-symplectic formulae with other methods in the same families, in terms of their ability to conserve the angular momentum. To this end, the value $\alpha = \sqrt{2}/4$ that generates the symplectic and conjugate-symplectic schemes has been scaled by factors $\gamma$ and $1/\gamma$, for a few values of $\gamma > 0$. We observe that, when $\gamma$ increases, the errors related to the decreasing values of $\alpha$ approach the value of the corresponding multi-derivative methods. As was expected, the angular momentum is exactly preserved by the symplectic scheme, while the value attained at $t_{1/2} = t_0 + h/2$ is exactly preserved by the conjugate-symplectic one.
To show the convergence properties of the diagonal nonlinear iteration (16) introduced in Section 5, we solved the problem over $10^2$ periods with stepsize $h = T/N$, choosing $\beta = 4.6721$ (see Corollary 1). For comparison purposes, we also consider the use of the standard simplified Newton scheme defined by approximating the Jacobian matrix with $I_3 \otimes I - h\,(A \otimes J)$ (compare with (16)).
In Table 3 we show, for the AMDMP4_TR2 method with $\alpha = \sqrt{2}/4$, the absolute error, the computed convergence rate and the mean number of iterations needed to reach convergence up to machine precision for the two techniques. The scheme defined by the block-diagonal matrix requires around 1.7 times as many iterations to attain convergence, but the cost of the Jacobian factorization decreases from $27 m^3$ to $m^3$. The execution times of the two algorithms for this problem are essentially equivalent, since the computation of the factorization is a built-in optimized function in MATLAB, so the differences do not show up consistently.

7. Conclusions

Starting from two fourth-order multi-derivative extensions of the midpoint and trapezoidal formulae, we have derived one-parameter families of standard Runge–Kutta methods through the approximation of the first and second-order Lie derivatives by means of suitable finite difference schemes. The resulting formulae retain the order and the symmetry properties of the original methods while, avoiding the use of Lie derivatives, their implementation turns out to be simplified. More interestingly, for a specific choice of the parameter, a symplectic/conjugate-symplectic pair of methods may be detected. This novel result generalizes to order four the well-known conjugacy relationship between the midpoint and the trapezoidal methods. Concerning the implementation of the methods, a block-diagonal version of the simplified Newton scheme has been employed. The iteration requires, at each step, a single Jacobian evaluation of the vector field and an LU factorization of a matrix having the same size as the continuous problem and, interestingly, may be executed in parallel.

Author Contributions

These authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the INdAM-GNCS 2020 Research Project “Numerical algorithms in optimization, ODEs, and applications” (the authors are members of the INdAM Research group GNCS).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iavernaro, F.; Mazzia, F. On conjugate-symplecticity properties of a multi-derivative extension of the midpoint and trapezoidal methods. Rend. Semin. Mat. 2018, 76, 123–134. [Google Scholar]
  2. Feng, K.; Qin, M. Symplectic Geometric Algorithms for Hamiltonian Systems; Springer, Zhejiang Publishing United Group Zhejiang Science and Technology Publishing House: Zhejiang, China, 2010. [Google Scholar]
  3. Sanz-Serna, J.; Calvo, M. Numerical Hamiltonian Problems; Chapman & Hall: London, UK, 1994. [Google Scholar]
  4. Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration. Structure-Preserving Algorithms for Ordinary Differential Equations, 2nd ed.; Springer: Berlin, Germany, 2006. [Google Scholar]
  5. Budd, C.J.; Piggott, M.D. Geometric Integration and its Applications. In Handbook of Numerical Analysis; North-Holland Publishing Co.: Amsterdam, The Netherlands, 2003; Volume 11, pp. 35–139. [Google Scholar]
  6. McLachlan, R.I. Perspectives on geometric numerical integration. J. R. Soc. N. Z. 2019, 49, 114–125. [Google Scholar] [CrossRef]
  7. Diele, F.; Marangi, C. Geometric numerical integration in ecological modelling. Mathematics 2020, 8, 25. [Google Scholar] [CrossRef] [Green Version]
  8. Mazzia, F.; Sestini, A. On a class of conjugate symplectic Hermite-Obreshkov one-step methods with continuous spline extension. Axioms 2018, 7, 58. [Google Scholar] [CrossRef] [Green Version]
  9. Iavernaro, F.; Mazzia, F.; Mukhametzhanov, M.; Sergeyev, Y. Conjugate-symplecticity properties of Euler–Maclaurin methods and their implementation on the Infinity Computer. Appl. Numer. Math. 2020, 155, 58–72. [Google Scholar] [CrossRef] [Green Version]
  10. Hairer, E. Conjugate-symplecticity of linear multistep methods. J. Comput. Math. 2008, 26, 657–659. [Google Scholar]
  11. Hairer, E.; Zbinden, C.J. On conjugate symplecticity of B-series integrators. IMA J. Numer. Anal. 2013, 33, 57–79. [Google Scholar] [CrossRef] [Green Version]
  12. Hairer, E.; Nørsett, S.P.; Wanner, G. Solving Ordinary Differential Equations. I. Nonstiff Problems, 2nd ed.; Springer Series in Computational Mathematics; Springer: Berlin, Germany, 1993. [Google Scholar]
  13. Mazzia, F.; Sestini, A.; Trigiante, D. The continuous extension of the B-spline linear multistep methods for BVPs on non-uniform meshes. Appl. Numer. Math. 2009, 59, 723–738. [Google Scholar] [CrossRef]
  14. Mazzia, F.; Sestini, A. The BS class of Hermite spline quasi-interpolants on nonuniform knot distributions. BIT Numer. Math. 2009, 49, 611–628. [Google Scholar] [CrossRef]
  15. Mazzia, F.; Sestini, A. Quadrature formulas descending from BS Hermite spline quasi-interpolation. J. Comput. Appl. Math. 2012, 236, 4105–4118. [Google Scholar] [CrossRef]
  16. Bracco, C.; Giannelli, C.; Mazzia, F.; Sestini, A. Bivariate hierarchical Hermite spline quasi-interpolation. BIT Numer. Math. 2016, 56, 1165–1188. [Google Scholar] [CrossRef] [Green Version]
  17. Sergeyev, Y.; Mukhametzhanov, M.; Mazzia, F.; Iavernaro, F.; Amodio, P. Numerical methods for solving initial value problems on the infinity computer. Int. J. Unconv. Comput. 2016, 12, 3–23. [Google Scholar]
  18. Iavernaro, F.; Mazzia, F. Symplecticity properties of Euler-Maclaurin methods. AIP Conf. Proc. 2018, 1978. [Google Scholar] [CrossRef]
  19. Iavernaro, F.; Mazzia, F.; Mukhametzhanov, M.; Sergeyev, Y. Computation of higher order Lie derivatives on the Infinity Computer. J. Comput. Appl. Math. 2021, 383. [Google Scholar] [CrossRef]
  20. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II. Stiff and Differential Algebraic Problems, 2nd ed.; Springer: Berlin, Germany, 1996. [Google Scholar]
  21. Butcher, J. Runge-Kutta methods. Scholarpedia 2007, 2, 3147, revision #91735. [Google Scholar] [CrossRef]
  22. Sanz-Serna, J. Runge-Kutta schemes for Hamiltonian systems. BIT 1988, 28, 877–883. [Google Scholar] [CrossRef]
  23. Maeda, S. Certain types of Runge-Kutta-type formulas and reproduction of orbits of linear systems. Electron. Commun. Jpn. 1991, 74, 98–104. [Google Scholar] [CrossRef]
  24. Burrage, K.; Chipman, F.; Muir, P. Order Results for Mono-Implicit Runge–Kutta Methods. SIAM J. Numer. Anal. 1994, 31, 876–891. [Google Scholar] [CrossRef] [Green Version]
  25. Brugnano, L.; Iavernaro, F. Line Integral Methods for Conservative Problems; Monographs and Research Notes in Mathematics; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
Figure 1. Results for the fourth-order MDMP (solid lines), Gauss (dashed lines), AMDMP_TR2 (dash-dotted lines) and AMDMP_RK2 (dotted lines) methods applied to the Kepler problem, computed at each period.
Figure 2. Results for the fourth-order MDTR (solid lines), Gauss (dashed lines), AMDTR_TR2 (dash-dotted lines) and AMDTR_RK2 (dotted lines) methods applied to the Kepler problem, computed at each period.
Table 1. Error in the angular momentum for the methods in the MDMP family.

| γ    | AMDMP_TR2, α = √2/(4γ) | AMDMP_TR2, α = γ√2/4 | AMDMP_RK2, α = 1/(2γ) |
|------|------------------------|----------------------|-----------------------|
| 1.0  | 5.32 × 10⁻¹⁵           | 5.32 × 10⁻¹⁵         | 3.60 × 10⁻⁷           |
| 1.2  | 4.86 × 10⁻⁶            | 6.97 × 10⁻⁶          | 4.65 × 10⁻⁶           |
| 1.4  | 7.81 × 10⁻⁶            | 1.51 × 10⁻⁵          | 7.67 × 10⁻⁶           |
| 1.6  | 9.72 × 10⁻⁶            | 2.45 × 10⁻⁵          | 9.62 × 10⁻⁶           |
| 1.8  | 1.10 × 10⁻⁵            | 3.51 × 10⁻⁵          | 1.09 × 10⁻⁵           |
| 2.0  | 1.19 × 10⁻⁵            | 4.68 × 10⁻⁵          | 1.19 × 10⁻⁵           |
| 3.0  | 1.42 × 10⁻⁵            | 1.20 × 10⁻⁴          | 1.41 × 10⁻⁵           |
| MDMP | 1.60 × 10⁻⁵            |                      |                       |
Table 2. Error in the angular momentum for the methods in the MDTR family, computed at the midpoints t_{n+1/2}. As reference value, we use the one computed at t_{1/2}.

AMDTR:
| γ    | TR2, α = √2/(4γ) | TR2, α = γ√2/4 | RK2, α = 1/(2γ) |
|------|------------------|----------------|-----------------|
| 1.0  | 1.55 × 10⁻⁵      | 1.55 × 10⁻⁵    | 1.32 × 10⁻⁴     |
| 1.2  | 1.89 × 10⁻⁵      | 6.46 × 10⁻⁵    | 6.34 × 10⁻⁵     |
| 1.4  | 3.96 × 10⁻⁵      | 1.22 × 10⁻⁴    | 2.13 × 10⁻⁵     |
| 1.6  | 5.31 × 10⁻⁵      | 1.88 × 10⁻⁴    | 6.87 × 10⁻⁶     |
| 1.8  | 6.23 × 10⁻⁵      | 2.63 × 10⁻⁴    | 2.55 × 10⁻⁵     |
| 2.0  | 6.90 × 10⁻⁵      | 3.45 × 10⁻⁴    | 3.91 × 10⁻⁵     |
| 3.0  | 8.47 × 10⁻⁵      | 8.69 × 10⁻⁴    | 7.14 × 10⁻⁵     |
| MDTR | 9.730 × 10⁻⁵     |                |                 |

AMDTR, midpoints:
| γ    | TR2, α = √2/(4γ) | TR2, α = γ√2/4 | RK2, α = 1/(2γ) |
|------|------------------|----------------|-----------------|
| 1.0  | 5.88 × 10⁻¹⁵     | 5.88 × 10⁻¹⁵   | 3.40 × 10⁻⁷     |
| 1.2  | 4.73 × 10⁻⁶      | 6.78 × 10⁻⁶    | 4.52 × 10⁻⁶     |
| 1.4  | 7.59 × 10⁻⁶      | 1.47 × 10⁻⁵    | 7.45 × 10⁻⁶     |
| 1.6  | 9.45 × 10⁻⁶      | 2.38 × 10⁻⁵    | 9.35 × 10⁻⁶     |
| 1.8  | 1.07 × 10⁻⁵      | 3.41 × 10⁻⁵    | 1.06 × 10⁻⁵     |
| 2.0  | 1.16 × 10⁻⁵      | 4.55 × 10⁻⁵    | 1.15 × 10⁻⁵     |
| 3.0  | 1.38 × 10⁻⁵      | 1.17 × 10⁻⁴    | 1.37 × 10⁻⁵     |
| MDTR midpoints | 1.55 × 10⁻⁵ |            |                 |
Table 3. Convergence rate and mean number of approximated Newton iterations for the AMDMP4_TR2 method with α = √2/4.

| N   | Error         | Order | Iterations per step (simplified Newton) | Iterations per step (scheme (16)) |
|-----|---------------|-------|-----------------------------------------|-----------------------------------|
| 100 | 4.6981 × 10⁻² | –     | 5.18                                    | 9.32                              |
| 200 | 3.0275 × 10⁻³ | 3.95  | 4.52                                    | 8.12                              |
| 400 | 1.9059 × 10⁻⁴ | 3.98  | 4.21                                    | 7.24                              |
| 800 | 1.1933 × 10⁻⁵ | 3.99  | 3.83                                    | 6.48                              |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
