Some observations on preconditioning for non-self-adjoint and time-dependent problems

https://doi.org/10.1016/j.camwa.2021.05.037

Abstract

Numerical Linear Algebra—specifically the computational solution of equations—forms a significant part of Computational Methods for Partial Differential Equations. Here we discuss the contrast between the solution of symmetric systems of equations that arise from self-adjoint problems and non-symmetric systems that arise from non-self-adjoint problems when iterative methods are employed; such methods are the only feasible methods for very large scale computation with PDEs. We then go on to consider non-symmetric all-at-once systems that arise in approximation of time-dependent problems, discuss causality and the parallel-in-time paradigm, suggesting an approach that involves preconditioning initial value problems with time-periodic problems.

Introduction

Computational Partial Differential Equations is a large and vibrant research area. The computational solution of the linear(ized) equations that arise from whatever approximation scheme is employed is commonly a major task involving methods of numerical linear algebra. Sparse direct methods are applicable for many problems, but, in particular for problems on 3-dimensional domains, iterative methods in combination with preconditioning usually present the only effective solution approach [32], [8]. Almost always, very slow convergence is observed when an effective preconditioner (matrix approximation) is not used. Simple stationary iterations, including multigrid cycles, can be used effectively for some problems, but Krylov subspace methods are more generally applicable [27], [30]. (Multigrid cycles are really effective as parts of many preconditioners: see for example [10, chapter 4].)

For self-adjoint problems, symmetric (or Hermitian) matrices generally arise and a common approach is to use the Conjugate Gradient method (cg) [16] (for definite problems) or the minres method [24] (for indefinite problems) with some appropriate symmetric preconditioner. In this situation, a priori descriptive convergence bounds for the iterative method depend solely on the eigenvalue spectrum of the preconditioned matrix: thus establishing estimates of the eigenvalues is all that is required to reliably predict the number of iterations needed for solution. Fewer iterations than predicted by these bounds can occur, but never more. The mathematics thus indicates what is required of a good preconditioner for a self-adjoint problem (see [32]).

By contrast, for non-self-adjoint problems, where non-symmetric (non-Hermitian) matrices arise from approximation, iterative solution methods are widely used but no generally descriptive convergence bounds are known. There are situations where special techniques can be used, but preconditioning for non-symmetric matrices necessarily remains heuristic except in rare cases where preconditioning induces symmetry [32] or where self-adjointness in a non-standard inner product exists [25].

Time-dependent PDE problems which are first order in time are necessarily non-self-adjoint and in model situations even give rise to matrices of the form I+F where F is nilpotent of high index. Thus non-diagonalisable matrices and Jordan structure are centrally relevant for such problems. In some sense, such time-dependent problems give rise to matrices that are furthest from the ideal cases arising for self-adjoint problems as described above. Nevertheless, some effective preconditioners for such problems have been suggested, and theory guaranteeing fast convergence of certain iterations has even been established. We will describe the relevant structures in this short paper, in which we also seek to clarify the symmetric/non-symmetric dichotomy.
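How uninformative the spectrum can be in this nilpotent setting is easy to see numerically. For B = I + F with F a single sub-diagonal shift block, every eigenvalue of B equals 1, yet for the particular right-hand side e_1 the GMRES residual after k steps can be shown to be 1/sqrt(k+1), reaching zero only at step m. A sketch with a textbook Arnoldi-based GMRES (dimension m = 20 made up for illustration):

```python
import numpy as np

m = 20
# B = I + F, F nilpotent of index m: a single Jordan block, all eigenvalues 1
B = np.eye(m) + np.diag(np.ones(m - 1), k=-1)
b = np.zeros(m); b[0] = 1.0                  # right-hand side e_1, x0 = 0

def gmres_resnorms(B, b, steps):
    """Residual norms of textbook GMRES (Arnoldi + small least-squares)."""
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, steps + 1)); Q[:, 0] = b / beta
    H = np.zeros((steps + 1, steps))
    norms = []
    for k in range(steps):
        w = B @ Q[:, k]
        for i in range(k + 1):               # modified Gram-Schmidt
            H[i, k] = Q[:, i] @ w
            w -= H[i, k] * Q[:, i]
        H[k + 1, k] = np.linalg.norm(w)
        # GMRES residual norm = min || beta*e1 - H y || over the small system
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        norms.append(np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y))
        if H[k + 1, k] < 1e-12:              # (happy) breakdown: solved exactly
            break
        Q[:, k + 1] = w / H[k + 1, k]
    return norms

norms = gmres_resnorms(B, b, m)
# residuals decay only like 1/sqrt(k+1) and hit zero at step m = 20,
# even though every eigenvalue of B equals 1
```

An eigenvalue-based bound of the self-adjoint kind would predict immediate convergence here; the harmonic decay shows why such bounds cannot describe the non-normal case.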

Section snippets

The symmetric/non-symmetric dichotomy

For a linear system $Bx=c$, with $B \in \mathbb{R}^{m \times m}$, from an initial guess $x_0$, a Krylov subspace method generates iterates $x_1, x_2, \ldots, x_k, \ldots$ using one matrix–vector product at each iteration, $k$, thus giving rise to a (mathematical) basis for a Krylov subspace
$$\{r, Br, B(Br), \ldots, B^k r\}.$$
It follows that the residuals $r_k = c - Bx_k$ satisfy $r_0 = r$ and
$$r_k = p_k(B)\, r_0,$$
where $p_k$ is a polynomial of degree less than or equal to $k$ satisfying $p_k(0) = 1$. Thus if $B$ is diagonalisable and we have $B = X \Lambda X^{-1}$ then
$$\|r_k\| \le \|X\|\, \|p_k(\Lambda)\|\, \|X^{-1}\|\, \|r_0\|,$$
and if $B = B^T$ so …
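The residual polynomial relation above can be checked numerically. A sketch with a made-up nonsymmetric matrix, taking the minimum-residual (GMRES) iterate computed naively by least squares over the Krylov basis: expanding $r_k$ in $\{r_0, Br_0, \ldots, B^k r_0\}$, the coefficient of $r_0$ must be exactly $1$, which is the condition $p_k(0)=1$.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 30
B = 5 * np.eye(m) + rng.standard_normal((m, m))   # generic nonsymmetric B
c = rng.standard_normal(m)
r0 = c.copy()                                      # initial guess x0 = 0

# minimum-residual iterate x_k over span{r0, B r0, ..., B^{k-1} r0},
# computed naively by least squares (fine for tiny k and well-scaled B)
k = 3
K = np.column_stack([np.linalg.matrix_power(B, j) @ r0 for j in range(k)])
y, *_ = np.linalg.lstsq(B @ K, c, rcond=None)
rk = c - B @ (K @ y)

# rk = p_k(B) r0 with p_k(0) = 1: expand rk in {r0, B r0, ..., B^k r0}
# and read off the coefficient of r0
K1 = np.column_stack([np.linalg.matrix_power(B, j) @ r0 for j in range(k + 1)])
coef, *_ = np.linalg.lstsq(K1, rk, rcond=None)
# coef[0] is (up to rounding) equal to 1
```

In practice one never forms the monomial basis explicitly (it is hopelessly ill-conditioned for larger $k$); the Arnoldi or Lanczos process builds an orthonormal basis for the same space.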

Preconditioning for time-dependent problems

We start first with the simple linear ordinary differential equation initial value problem
$$y' = ay + f, \qquad y(0) = y_0,$$
that we discretise with a simple $\theta$-method to integrate from $0$ up to time $T$, giving the equations
$$\frac{y_k - y_{k-1}}{\tau} = \theta a y_k + (1-\theta) a y_{k-1} + f_k, \qquad k = 1, 2, \ldots, \ell,$$
with $\tau = T/\ell$, $\tau$ being the time-step. As a linear all-at-once system this gives the matrix equation
$$B \underbrace{\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_\ell \end{bmatrix}}_{y} = \underbrace{\begin{bmatrix} \tau f_1 + (1 + a(1-\theta)\tau)\, y_0 \\ \tau f_2 \\ \tau f_3 \\ \vdots \\ \tau f_\ell \end{bmatrix}}_{f},$$
where the $\ell \times \ell$ coefficient matrix $B$ is
$$B = \begin{bmatrix} b & & & \\ c & b & & \\ & \ddots & \ddots & \\ & & c & b \end{bmatrix},$$
with $b = 1 - a\theta\tau$, $c = -1 - a(1-\theta)\tau$. Notice that $B$ is a bidiagonal …
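Assembling this system and solving it all at once reproduces sequential time-stepping exactly. A small numpy sketch (the coefficient $a$, the forcing $f$, and the grid are made up; $\theta = 1/2$ gives the trapezoidal rule):

```python
import numpy as np

# made-up problem data for y' = a*y + f on [0, T] with ell theta-method steps
a, theta, T, ell = -1.0, 0.5, 1.0, 64
tau = T / ell
y0 = 1.0
t = tau * np.arange(1, ell + 1)
f = np.sin(t)                        # f_k = f(t_k), an arbitrary forcing

# bidiagonal all-at-once matrix: b on the diagonal, c on the subdiagonal
bb = 1 - a * theta * tau             # b = 1 - a*theta*tau
cc = -1 - a * (1 - theta) * tau      # c = -1 - a*(1-theta)*tau
B = bb * np.eye(ell) + cc * np.eye(ell, k=-1)

rhs = tau * f.copy()
rhs[0] += (1 + a * (1 - theta) * tau) * y0   # initial condition enters row 1

y_allatonce = np.linalg.solve(B, rhs)

# sequential time-stepping: (1 - a*theta*tau) y_k
#   = (1 + a*(1-theta)*tau) y_{k-1} + tau f_k
y_seq = np.zeros(ell)
yk = y0
for k in range(ell):
    yk = ((1 + a * (1 - theta) * tau) * yk + tau * f[k]) / (1 - a * theta * tau)
    y_seq[k] = yk
# y_allatonce and y_seq agree to rounding error
```

The forward substitution that solves this bidiagonal system is precisely time-stepping, which is why the all-at-once formulation only becomes interesting once one asks about parallelism and preconditioning.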

Acknowledgements

I thank two anonymous referees for their helpful and insightful comments that have led to improvements in this article.

References (33)

  • M. Benzi

    Preconditioning techniques for large linear systems: a survey

    J. Comput. Phys.

    (2002)
  • M. Eiermann

    Fields of values and iterative methods

    Linear Algebra Appl.

    (1993)
  • J. Pestana et al.

    On choice of preconditioner for minimum residual methods for non-Hermitian matrices

    J. Comput. Appl. Math.

    (2013)
  • R.H. Chan

    Circulant preconditioners for Hermitian Toeplitz systems

    SIAM J. Matrix Anal. Appl.

    (1989)
  • K. Chen

    Matrix Preconditioning Techniques and Applications

    (2005)
  • E. Chu et al.

    Inside the FFT Black Box: Serial and Parallel Fast Fourier Transform Algorithms

    (2019)
  • J.W. Cooley et al.

    An algorithm for the machine calculation of complex Fourier series

    Math. Comput.

    (1965)
  • F. Danieli et al.

    All-at-once solution of linear wave equations

    Numer. Linear Algebra Appl.

    (2021)
  • M. Ferronato

    Preconditioning for sparse linear systems at the dawn of the 21st century: history, current developments, and future perspectives

    Int. Sch. Res. Not.

    (2012)
  • ...
  • H.C. Elman et al.

    Finite Elements and Fast Iterative Solvers with Applications in Incompressible Fluid Dynamics

    (2014)
  • R.W. Freund

    Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

  • M.J. Gander

    50 years of time parallel time integration

  • M.J. Gander et al.

    Time parallelization for nonlinear problems based on diagonalization

  • A. Goddard et al.

    A note on parallel preconditioning for all-at-once evolutionary PDEs

    Electron. Trans. Numer. Anal.

    (2019)
  • A. Greenbaum et al.

    Any convergence curve is possible for GMRES

    SIAM J. Matrix Anal. Appl.

    (1996)