1 Introduction

Standard methods for the numerical solution of time-independent linear partial differential equations, such as finite difference, finite volume, and finite element methods, yield a sparse system of linear equations

$$\begin{aligned} Au=b. \end{aligned}$$

Since memory requirement is often critical, an iterative solution method may be the only viable choice. To achieve fast convergence, the iterative method is usually applied to a preconditioned problem

$$\begin{aligned} M^{-1}Au=M^{-1}b. \end{aligned}$$

For large-scale systems, it is desirable that the solution algorithm is of optimal order, both in terms of computational cost and computer memory. Therefore:

  1. The rate of convergence should not slow down when the grid is refined. This is not a trivial task, since differential operators are unbounded and the condition number of A consequently increases rapidly as the grid is refined.

  2. Both the number of arithmetic operations and the amount of computer memory required for performing one iteration should be proportional to the number of unknowns. This requires a fast and memory lean algorithm for applying \(M^{-1}\).

For Poisson’s equation, geometric and algebraic multigrid methods are of optimal order and among the most effective strategies available [6, 16]. However, these techniques contain many parameters and may be difficult to tune. Achieving scalability on parallel architectures may also be a challenge. Other approaches are therefore worth considering.

One alternative line of research originates from Strang [15]. He proposed an algorithm for Toeplitz systems based on the conjugate gradient method with a circulant preconditioner: If M is a circulant approximation of A, the Fast Fourier Transform (FFT) can be used when applying \(M^{-1}\), and the resulting algorithm is memory lean and has a computational cost of near optimal order. Strang also provided examples where all but a few eigenvalues of \(M^{-1}A\) clustered around 1, yielding fast convergence. This turned out to be true for a wide class of Toeplitz problems [4].
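Strang’s observation can be reproduced on a toy problem. The following sketch (Python with NumPy; the tridiagonal Toeplitz symbol is an assumed illustration, not taken from [15]) builds a symmetric Toeplitz matrix T and its circulant approximation C by wrapping the diagonals around; since T − C has rank two, all but at most two eigenvalues of \(C^{-1}T\) equal 1.

```python
import numpy as np

n = 64
# Toy symmetric Toeplitz matrix with diagonals a_0 = 4, a_{+-1} = 1 (assumed symbol)
T = 4*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

# Strang's circulant approximation: same diagonals, wrapped around periodically
C = T.copy()
C[0, -1] = 1
C[-1, 0] = 1

# T - C is nonzero only in two corner entries, so it has rank 2; hence
# C^{-1}T is a rank-2 perturbation of the identity and all but (at most)
# two of its eigenvalues equal 1.
eigs = np.linalg.eigvals(np.linalg.solve(C, T))
print(np.sum(np.abs(eigs - 1) < 1e-8))  # at least n - 2 eigenvalues cluster at 1
```

In practice neither matrix is formed explicitly; \(C^{-1}\) is applied via FFT.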

The technique can be extended to multilevel Toeplitz systems, but the rate of convergence is in general degraded [14]. Furthermore, for a discretisation of a differential equation that is ill-posed under periodic boundary conditions, a uni- or multilevel circulant approximation of A is singular, and the technique fails. One possible remedy is to use a semicirculant approximation, as suggested by Holmgren and Otto [9]. Semicirculant preconditioners have circulant structure on all levels but one, and an FFT-based algorithm is still available. This turns out to be an effective remedy for first order partial differential equations, and grid independent convergence has been demonstrated for convective flow problems, see e.g. [2].

For diffusive flow problems, the rate of convergence is significantly improved by a semicirculant preconditioner, but the required number of iterations is not bounded as the grid is refined, see e.g. [8]. This holds in particular for elliptic problems [3]. More generally, Manteuffel and Parter [12] have proved that if A and M are discrete elliptic operators, then the condition number of \(M^{-1}A\) is bounded as the grid is refined if and only if A and M have the same boundary conditions. A multilevel circulant or semicirculant approximation M of a discrete elliptic operator A is elliptic if A is elliptic, and circulant and semicirculant preconditioning therefore fits into this framework. Consequently, since circulant approximations impose periodic boundary conditions, the condition number grows as the grid is refined if A has non-periodic boundary conditions.

In this paper, the pursuit of grid independent convergence continues. We show that Strang’s idea can be extended in such a way that grid independent convergence is achieved for a multilevel elliptic problem. Our inspiration comes from Neumann [13] and the research on linear integral equations and potential theory that Neumann contributed to. In 1877, Neumann gave the first rigorous proof of the existence of a solution to Laplace’s equation by transforming the elliptic partial differential equation into a boundary integral equation in fixed point form, and by showing that the corresponding fixed point iteration converges. We show that Neumann’s fixed point iteration can be viewed as a preconditioned simple iteration for the differential equation, and our numerical experiments indicate that the corresponding procedure yields grid independent convergence in the discrete case. We also improve the convergence rate further by using a slightly more advanced iterative method.

The resulting preconditioning procedure involves an embedding of the discrete domain into an extended domain where the preconditioner has a multilevel circulant structure and can be applied using FFT. Applying our preconditioner to a d-dimensional grid function of size \(\mathcal {O}(N^d)\) requires \(\mathcal {O}(2^dN^d)\) computer memory and \(\mathcal {O}(2^dN^d\log N^d)\) arithmetic operations. Our numerical results show that the computation of an accurate solution requires about 10 to 15 iterations for a convex domain and about 30 to 35 iterations when the domain is L-shaped. Our experiments indicate that these results hold in up to at least five spatial dimensions.

2 Problem

We consider Poisson’s equation with Dirichlet boundary conditions

$$\begin{aligned} \begin{aligned} -\Delta u&= f,\quad x\in \Omega ,\\ u&= g,\quad x\in \Gamma , \end{aligned} \end{aligned}$$
(2.1)

where \(\Omega \) is a bounded domain in \(\mathbb {R}^d\) with boundary \(\Gamma \) of class \(C^2\), and f and g are given continuous functions.

A fundamental solution E of the operator \(-\Delta \) is given by

$$\begin{aligned} E(x) = {\left\{ \begin{array}{ll} \frac{1}{2\pi }\ln {\frac{1}{|x|}},&{}d=2,\\ \frac{1}{d(d-2)\gamma (d)}\frac{1}{|x|^{d-2}},&{}d=3,4,\ldots , \end{array}\right. } \end{aligned}$$

where \(\gamma (d)\) is the volume of the unit ball in \(\mathbb {R}^d\), and |x| is the Euclidean norm of x. Let \(G(x,y)=E(x-y)\) be the free space Green’s function. Then, (2.1) can be transformed into (see Sect. 4)

$$\begin{aligned} \begin{aligned} u(x) =&\int _\Omega G(x,y) f(y)\,dV(y)\\&+\int _\Gamma G(x,y)\frac{\partial u}{\partial n(y)}(y)\,ds(y)\\&-\int _\Gamma \frac{\partial G}{\partial n(y)}(x,y)g(y)\,ds(y), \quad x\in \Omega , \end{aligned} \end{aligned}$$
(2.2)

where n(y) denotes the unit normal vector at \(y\in \Gamma \), directed into the exterior of \(\Omega \). The Green’s function G(x,y) is singular at \(x=y\), but the integrals on the right hand side of (2.2) exist for points x not lying on the boundary \(\Gamma \) (see e.g. [11]). Also, it follows from the so-called jump conditions in potential theory (see e.g. [5]) that the integrals can be continuously extended from \(\Omega \) to \(\bar{\Omega }=\Omega \cup \Gamma \), if f, g and the normal derivative of u on the boundary are continuous. In this sense, (2.2) holds on \(\bar{\Omega }\).

Let F(u) denote the right hand side of (2.2), continuously extended to \(\bar{\Omega }\). Then, given some initial guess \(u^{(0)}\in C(\bar{\Omega })\), the operator F can be used to form a sequence of approximations

$$\begin{aligned} u^{(k+1)}=F(u^{(k)}),\quad k=0,1,2,\ldots , \end{aligned}$$
(2.3)

for \(x\in \bar{\Omega }\). Despite the fact that \(F:C(\bar{\Omega })\rightarrow C(\bar{\Omega })\) is not bounded, \(u^{(k)}\in C(\bar{\Omega })\) converges uniformly to the unique solution of (2.2), since the normal derivative of \(u^{(k)}\) on the boundary converges uniformly (see Sect. 3).

Two tasks are of interest to us in this paper. First, we demonstrate that it may be possible to improve the rate of convergence by using an iterative method that is slightly more advanced than the fixed point iteration (2.3). This is the topic of Sect. 3. The analysis in Sect. 3 guides us when we construct an iterative method for the discrete problem in Sect. 5.

Second, we show that it is possible to frame the transformation of (2.1) into (2.2) as a preconditioning operation that can be performed using Fourier techniques. This is the topic of Sect. 4. This framing serves as a blueprint for the derivation of an FFT-based algorithm for the discrete problem in Sect. 6.

3 Iterative method

Given some smooth initial guess \(u^{(0)}\), we consider an iterative method of the form

$$\begin{aligned} u^{(k+1)} = u^{(k)} - \alpha _{k+1}\,r(u^{(k)}),\quad k=0,1,2,\ldots , \end{aligned}$$
(3.1)

where \(\alpha _k\) are some chosen constants, and \(r(u)=u-F(u)\) is the residual. If \(\alpha _k=1\) for all k, then (3.1) is the fixed point iteration (2.3).

Let \(u^*\) denote the exact solution to (2.2). Then, from (2.2) and (3.1) it follows that the error \(e^{(k)}=u^{(k)}-u^*\) satisfies

$$\begin{aligned} e^{(k+1)}(x) = (1 - \alpha _{k+1})e^{(k)}(x)+ \alpha _{k+1}\int _\Gamma G(x,y)\frac{\partial e^{(k)}}{\partial n(y)}(y)\,ds(y), \end{aligned}$$
(3.2)

for \(k=0,1,2,\ldots ,\) and \(x\in \bar{\Omega }\). From differentiation of (3.2), it follows that the normal derivative of the error satisfies

$$\begin{aligned} \begin{aligned} \frac{\partial e^{(k+1)}}{\partial n(x)}(x-h\,n(x)) =&(1 - \alpha _{k+1})\frac{\partial e^{(k)}}{\partial n(x)}(x-h\,n(x))\\&+\alpha _{k+1}\int _\Gamma \frac{\partial G}{\partial n(x)}(x-h\,n(x),y) \frac{\partial e^{(k)}}{\partial n(y)}(y)\,ds(y), \end{aligned} \end{aligned}$$

for \(k=0,1,2,\ldots \), \(x\in \Gamma \), and some small and positive parameter h. Letting \(h\rightarrow 0^+\) uniformly on \(\Gamma \) and using the jump conditions in potential theory (see e.g. [5]) yields

$$\begin{aligned} \begin{aligned} \frac{\partial e^{(k+1)}}{\partial n(x)}(x) =&(1 - \alpha _{k+1})\frac{\partial e^{(k)}}{\partial n(x)}(x)\\&+\alpha _{k+1}\int _\Gamma \frac{\partial G}{\partial n(x)}(x,y) \frac{\partial e^{(k)}}{\partial n(y)}(y)\,ds(y)\\&+\frac{\alpha _{k+1}}{2}\frac{\partial e^{(k)}}{\partial n(x)}(x), \end{aligned} \end{aligned}$$
(3.3)

for \(k=0,1,2,\ldots ,\) and \(x\in \Gamma \). Here, the integral on the right hand side of (3.3) is singular, but exists as an improper integral if the normal derivative of the error on the boundary is continuous. For ease of notation, we introduce

$$\begin{aligned} \psi ^{(k)}(x)=\frac{\partial e^{(k)}}{\partial n}(x), \end{aligned}$$

and

$$\begin{aligned} (K\psi )(x) = 2\int _\Gamma \frac{\partial G}{\partial n(x)}(x,y)\psi (y)\,ds(y). \end{aligned}$$
(3.4)

Then, (3.3) can be written

$$\begin{aligned} \psi ^{(k+1)} =\left( 1 -\frac{\alpha _{k+1}}{2}\right) \psi ^{(k)} +\frac{\alpha _{k+1}}{2}\,K\,\psi ^{(k)}, \end{aligned}$$
(3.5)

for \(k=0,1,2,\ldots ,\) and \(x\in \Gamma \).

If (3.5) is a convergent iteration and the normal derivative of the error on the boundary tends to the zero function with a certain rate, then (3.2) can be used to estimate the rate of convergence of the error in the entire domain. This suggests that a convergence analysis of (3.1) should start with the analysis of (3.5), and, in particular, with the analysis of the integral operator K given by (3.4). Denote K’s spectrum by \(\sigma (K)\). Then (see e.g. [11]):

  • K is compact on \(C(\Gamma )\), and, as for any other compact operator, \(\lambda =0\) belongs to \(\sigma (K)\) and \(\sigma (K)\setminus \{0\}\) consists of at most a countable number of eigenvalues.

  • The nullspace of \(I+K\), where I is the identity operator, has dimension one and is spanned by a function that is not identically zero, so \(\lambda =-1\) belongs to \(\sigma (K)\).

  • The eigenvalues of K are real and \(\sigma (K)\subset [-1,1)\).

In the case of fixed point iteration, i.e. \(\alpha _k=1\) for all k, (3.5) turns into

$$\begin{aligned} \psi ^{(k+1)}=\frac{1}{2}\psi ^{(k)}+\frac{1}{2}K\psi ^{(k)},\quad k=0,1,2,\ldots \end{aligned}$$

Since \(\sigma (I/2+K/2)=1/2+\sigma (K)/2\subset [0,1)\), it follows that the spectral radius of the operator \(I/2+K/2\) is less than 1, and (3.5) is consequently a uniformly convergent iteration.

In general, fixed point iteration requires an infinite number of iterations. However, as the following theorem indicates, if K has a finite number of eigenvalues and the eigenfunctions form an orthonormal basis in \(L^2(\Gamma )\), it is possible to choose the coefficients \(\alpha _k\) in such a way that a solution can be found in a finite number of steps.

Theorem 3.1

If K has a finite number of eigenvalues, \(\lambda _1,\ldots ,\lambda _{s}\), and the eigenfunctions \(\mu _m(x)\) of K form an orthonormal basis in \(L^2(\Gamma )\), then, letting

$$\begin{aligned} \alpha _k = {\left\{ \begin{array}{ll} \frac{2}{1-\lambda _k},&{}k=1,\ldots ,s,\\ 1,&{}k=s+1, \end{array}\right. } \end{aligned}$$

yields \(u^{(s+1)}(x)=u^*(x)\), where \(u^*(x)\) is the exact solution to (2.2).

Proof

Since

$$\begin{aligned} \psi ^{(0)}(x)=\sum _m\beta _m\mu _m(x), \end{aligned}$$

for some constants \(\beta _m\), it follows from (3.5) that

$$\begin{aligned} \psi ^{(1)}(x)=\sum _m\beta _m\left( \left( 1 - \frac{\alpha _1}{2}\right) I + \frac{\alpha _1}{2}\,K\right) \mu _m(x) =\sum _m\beta _m\,c_m\,\mu _m(x), \end{aligned}$$

for some constants \(c_m\). For terms where \(K\mu _m=\lambda _1\,\mu _m\), it follows that

$$\begin{aligned} c_m= \left( 1 - \frac{\alpha _1}{2} + \frac{\alpha _1}{2}\lambda _1\right) = \left( 1 - \alpha _1\frac{1-\lambda _1}{2}\right) =0, \end{aligned}$$

and, consequently, that \(\psi ^{(1)}\) is orthogonal to all eigenfunctions corresponding to the eigenvalue \(\lambda _1\). Repeating the argument k times, \(k<s\), shows that \(\psi ^{(k+1)}\) is orthogonal to all eigenfunctions corresponding to the eigenvalues \(\lambda _1,\ldots ,\lambda _{k+1}\). From the completeness of the orthonormal set of eigenfunctions it then follows that \(\psi ^{(s)}=0\), and from (3.2) that \(e^{(s+1)}=0\), so \(u^{(s+1)}=u^*\). \(\square \)

The result in Theorem 3.1 is not surprising, since a discrete counterpart of (3.5) is a Krylov subspace method. For Krylov subspace methods, the rate of convergence is determined by the eigenvalues of the coefficient matrix, as long as the coefficient matrix has a complete set of orthonormal eigenvectors, see e.g. [7]. Consequently, results similar to the one in Theorem 3.1 can be derived for Krylov subspace methods. For example, one can use error estimates for the GMRES method to show that GMRES converges in \(s+1\) iterations if there are only s distinct eigenvalues.
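A discrete analogue of Theorem 3.1 is easily verified numerically. In the following sketch (Python with NumPy; the matrix K is an assumed stand-in with the two eigenvalues of Example 3.1), running (3.5) with \(\alpha _k=2/(1-\lambda _k)\) annihilates one eigencomponent per step, so the iterate vanishes after s = 2 steps.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
lambdas = [-1.0, 0.0]                 # the s = 2 distinct eigenvalues (assumed)

# Symmetric K = Q diag(..) Q^T with eigenvalue -1 (once) and 0 (n - 1 times)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = Q @ np.diag([-1.0] + [0.0]*(n - 1)) @ Q.T

psi = rng.standard_normal(n)          # arbitrary initial "normal derivative"
for lam in lambdas:                   # alpha_k = 2/(1 - lambda_k), k = 1, ..., s
    alpha = 2.0/(1.0 - lam)
    # one step of (3.5): each step zeroes the eigencomponents of lambda_k
    psi = (1 - alpha/2)*psi + (alpha/2)*(K @ psi)

print(np.linalg.norm(psi))            # psi^(s) = 0 up to roundoff
```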

A discrete counterpart of the following example is examined in Sect. 7.

Example 3.1

Let \(\Omega \) be the unit disc \(|x|<1\) in \(\mathbb {R}^2\) and introduce polar coordinates \(x_1=r\cos \theta , x_2=r\sin \theta , y_1=\rho \cos \varphi \), and \(y_2=\rho \sin \varphi \). Then, it follows from a simple calculation that

$$\begin{aligned} \left. \frac{\partial G}{\partial r}(r,\theta ,\rho ,\varphi )\right| _{r=\rho =1}=-\frac{1}{4\pi }, \end{aligned}$$

and, consequently, that

$$\begin{aligned} (K\psi )(\theta )=-\frac{1}{2\pi }\int _0^{2\pi }\psi (\varphi )\,d\varphi . \end{aligned}$$

Here, \(\sigma (K)\) consists of two eigenvalues (see [1]): \(\lambda =-1\) is an eigenvalue with corresponding eigenfunction 1, and \(\lambda =0\) is an eigenvalue with corresponding eigenfunctions \(\cos m\theta \) and \(\sin m\theta \) for \(m\in \mathbb {N}\).

In the case of fixed point iteration, i.e. \(\alpha _k=1\) for all k, relation (3.5) reads

$$\begin{aligned} \psi ^{(k+1)}(\theta )= \frac{1}{2}\psi ^{(k)}(\theta )-\frac{1}{4\pi }\int _0^{2\pi }\psi ^{(k)}(\varphi )\,d\varphi , \end{aligned}$$

and \(\sigma (I/2+K/2)=\{0,1/2\}\). In fact,

$$\begin{aligned} \begin{aligned} \psi ^{(1)}(\theta )&= \frac{1}{2}\psi ^{(0)}(\theta )-\frac{1}{4\pi }\int _0^{2\pi }\psi ^{(0)}(\varphi )\,d\varphi ,\\ \psi ^{(k+1)}(\theta )&=\frac{1}{2}\psi ^{(k)}(\theta ),\quad k=1,2,\ldots , \end{aligned} \end{aligned}$$

since

$$\begin{aligned} \int _0^{2\pi }\psi ^{(k+1)}(\theta )\,d\theta = \frac{1}{2}\int _0^{2\pi }\psi ^{(k)}(\theta )\,d\theta - \frac{1}{4\pi }\int _0^{2\pi }\psi ^{(k)}(\varphi )\,d\varphi \int _0^{2\pi }d\theta =0. \end{aligned}$$

Thus, with the exception of the first iteration, the normal derivative of the error on the boundary is halved in each iteration.

Now, let \(\alpha _1=1\), \(\alpha _2=2\), and \(\alpha _3=1\), as suggested by Theorem 3.1. Then, it follows from (3.5) that

$$\begin{aligned} \begin{aligned} \psi ^{(1)}(\theta )&= \frac{1}{2}\psi ^{(0)}(\theta )-\frac{1}{4\pi }\int _0^{2\pi }\psi ^{(0)}(\varphi )\,d\varphi ,\\ \psi ^{(2)}(\theta )&= -\frac{1}{2\pi }\int _0^{2\pi }\psi ^{(1)}(\varphi )\,d\varphi \\&=-\frac{1}{2\pi }\left( \frac{1}{2}\int _0^{2\pi }\psi ^{(0)}(\theta )\,d\theta - \frac{1}{4\pi }\int _0^{2\pi }\psi ^{(0)}(\varphi )\,d\varphi \int _0^{2\pi }d\theta \right) \\&=0,\\ \end{aligned} \end{aligned}$$

and from (3.2) that \(e^{(3)}(x)=0\).
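The computations in Example 3.1 can be checked numerically. The following sketch (Python with NumPy; grid size and initial data are assumed) discretises \(\psi \) on a uniform grid in \(\theta \) and evaluates the boundary integral with the trapezoidal rule, which is exact for trigonometric polynomials on a uniform periodic grid.

```python
import numpy as np

n = 256
theta = 2*np.pi*np.arange(n)/n
psi = 1.0 + np.cos(theta) + 0.3*np.sin(2*theta)   # assumed initial psi^(0)

def K(psi):
    # (K psi)(theta) = -(1/(2 pi)) * integral_0^{2 pi} psi(phi) dphi;
    # on a uniform periodic grid the trapezoidal rule gives the mean value
    return -np.full_like(psi, psi.mean())

for alpha in (1.0, 2.0):                          # alpha_1 = 1, alpha_2 = 2
    psi = (1 - alpha/2)*psi + (alpha/2)*K(psi)

print(np.max(np.abs(psi)))                        # psi^(2) vanishes up to roundoff
```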

However, as illustrated by the following example, the unit disc is a degenerate case. In general, the operator K has an infinite number of eigenvalues.

Example 3.2

Introduce the elliptic coordinates \(x_1=\cosh r\,\cos \theta \), \(x_2=\sinh r\,\sin \theta \), \(y_1=\cosh \rho \,\cos \varphi \), and \(y_2=\sinh \rho \,\sin \varphi \), and let \(\Omega \) be the ellipse \(r<1\) in \(\mathbb {R}^2\). Then, a not so simple calculation [1] shows that \(-1\), \(-e^{-2m}\), and \(e^{-2m}\) are eigenvalues of K with corresponding eigenfunctions 1, \(\cos m\theta \), and \(\sin m\theta \) for \(m\in \mathbb {N}\). Here, \(\lambda =0\) belongs to the spectrum of K, but only as a limit point of the eigenvalues as m tends to infinity; it is thus a member of the continuous spectrum rather than an eigenvalue.

4 Transformation

Equation (2.1) can be transformed into (2.2) by the use of Green’s second identity:

$$\begin{aligned} \begin{aligned} \int _{\Omega }G(x,y)f(y)\,dV(y)=&\int _{\Omega }-G(x,y)\Delta u(y)\,dV(y)\\ =&\int _{\Omega }(-\Delta G)(x,y)u(y)dV(y)\\&-\int _\Gamma G(x,y)\frac{\partial u}{\partial n(y)}(y)\,ds(y)\\&+\int _\Gamma \frac{\partial G}{\partial n(y)}(x,y)u(y)\,ds(y), \quad x\in \Omega . \end{aligned} \end{aligned}$$

Using \(\int _\Omega (-\Delta G)u\,dV=u\) for \(x\in \Omega \), and substituting u with g on the boundary \(\Gamma \), gives the result. In this section, we frame this transformation as a preconditioning operation that can be performed using Fourier techniques. The framing serves as a blueprint for the derivation of an FFT-based algorithm for the discrete problem in Sect. 6.

Let

$$\begin{aligned} \chi _\Omega = {\left\{ \begin{array}{ll} 1,&{}x\in \Omega ,\\ 0,&{}x\in \mathbb {R}^d\setminus \Omega , \end{array}\right. } \end{aligned}$$

be the characteristic function of the domain \(\Omega \) and define the distributions

$$\begin{aligned} \begin{aligned} \delta _\Gamma (w)&=\int _\Gamma w\,ds,\\ (u\,\delta _\Gamma )(w)&=\int _\Gamma u\,w\,ds,\\ \frac{\partial (u\,\delta _\Gamma )}{\partial n}(w)&= -\int _\Gamma u\frac{\partial w}{\partial n}\,ds,\\ (\Delta v)(w)&=\int _{\mathbb {R}^d}v\,\Delta w\,dV,\\ (\chi _\Omega u)(w)&=\int _\Omega u\,w\,dV, \end{aligned} \end{aligned}$$

for distributions v and continuous functions u, as linear functionals acting on infinitely differentiable test functions w with compact support. It then follows from Green’s second identity that

$$\begin{aligned} \begin{aligned} (\Delta (\chi _\Omega u))(w) =&\int _{\mathbb {R}^d}\chi _\Omega u\,\Delta w\,dV\\ =&\int _{\Omega }u\,\Delta w\,dV\\ =&\int _{\Omega }(\Delta u)\,w\,dV -\int _\Gamma \frac{\partial u}{\partial n}w\,ds +\int _\Gamma u\frac{\partial w}{\partial n}\,ds,\\ =&\left( \chi _\Omega \,\Delta u\right) (w) -\left( \frac{\partial u}{\partial n}\delta _\Gamma \right) (w) -\left( \frac{\partial (u\,\delta _\Gamma )}{\partial n}\right) (w),\\ \end{aligned} \end{aligned}$$

for every test function w and an arbitrary continuous function u. If u satisfies (2.1), this turns into

$$\begin{aligned} -\Delta (\chi _\Omega u) = \chi _\Omega f +\frac{\partial u}{\partial n}\delta _\Gamma +\frac{\partial (g\delta _\Gamma )}{\partial n}. \end{aligned}$$
(4.1)

Equation (4.1) is our blueprint for the system of linear equations \(Au=b\) in the discrete case. When using this formulation, rather than (2.1), the preconditioning procedure is simply a convolution with a fundamental solution:

Define the convolution \(u*v\), where u is a function and v is a distribution with compact support, from

$$\begin{aligned} (u*v)(w)=v(\tilde{u}*w), \end{aligned}$$

where \(\tilde{u}(x)=u(-x)\) and

$$\begin{aligned} (u*w)(x)=\int _{\mathbb {R}^d}u(x-y)w(y)\,dV(y), \end{aligned}$$

for functions u and w. Convolving both sides of (4.1) with a fundamental solution E yields

$$\begin{aligned} E*(-\Delta (\chi _\Omega u))= E*\left( \chi _\Omega f +\frac{\partial u}{\partial n}\delta _\Gamma +\frac{\partial (g\,\delta _\Gamma )}{\partial n}\right) . \end{aligned}$$
(4.2)

Since \(E*(-\Delta v)=v\) for every distribution v with compact support (see e.g. [10]), it can be shown that (4.2) holds for every test function w if and only if

$$\begin{aligned} \begin{aligned} (\chi _\Omega u)(x) =&\int _\Omega E(x-y) f(y)\,dV(y)\\&+\int _\Gamma E(x-y)\frac{\partial u}{\partial n(y)}(y)\,ds(y)\\&-\int _\Gamma \frac{\partial E}{\partial n(y)}(x-y)g(y)\,ds(y), \quad x\in \mathbb {R}^d, \end{aligned} \end{aligned}$$

which is the same as (2.2) when \(x\in \Omega \).

Equation (4.2) is our blueprint for the preconditioned system of linear equations \(M^{-1}Au=M^{-1}b\) in the discrete case. Applying the preconditioner \(M^{-1}\) is then simply a convolution with a discrete fundamental solution over a larger, but bounded, domain on which we impose periodic boundary conditions. Thus, viewed as a matrix, the preconditioner has a multilevel circulant structure and can be applied using FFT.

5 Discrete problem

Let \(i=(i_1,\ldots ,i_d)\in \mathbb {Z}^d\) be a multi-index and introduce a uniform grid \(\{x_i\}_{i\in \mathbb {Z}^d}\) in \(\mathbb {R}^d\) with step size \(h_k\) in dimension k, \(k=1,\ldots ,d\). Also, let \(\hat{e}_k\) denote unit vectors in \(\mathbb {Z}^d\),

$$\begin{aligned} (\hat{e}_k)_l = {\left\{ \begin{array}{ll} 1,&{}l=k,\\ 0,&{}\text {otherwise}, \end{array}\right. } \quad k,l=1,\ldots ,d, \end{aligned}$$

and define the discrete Laplace operator \(\Delta _h\) for grid functions u according to

$$\begin{aligned} (\Delta _h u)_i = \sum _{k=1}^d\frac{u_{i+\hat{e}_k}-2u_i+u_{i-\hat{e}_k}}{h_k^2}= \sum _{\begin{array}{c} i+m\hat{e}_k\in \mathbb {Z}^d\\ m=\pm 1 \end{array}}\frac{u_{i+m\hat{e}_k}-u_i}{h_k^2}. \end{aligned}$$
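For reference, a minimal implementation of \(\Delta _h\) for grid functions stored as NumPy arrays might look as follows (an assumed helper; np.roll makes the stencil periodic, which agrees with the periodic setting of Sect. 6 and is harmless for grid functions supported away from the array boundary).

```python
import numpy as np

def laplace_h(u, h):
    """Discrete Laplacian: sum_k (u_{i+e_k} - 2 u_i + u_{i-e_k}) / h_k^2."""
    out = np.zeros_like(u, dtype=float)
    for k, hk in enumerate(h):
        up = np.roll(u, -1, axis=k)   # u_{i + e_k} (periodic shift)
        um = np.roll(u,  1, axis=k)   # u_{i - e_k}
        out += (up - 2*u + um)/hk**2
    return out

# A grid delta at the centre of a 5x5 grid with h = (1, 1):
u = np.zeros((5, 5))
u[2, 2] = 1.0
print(laplace_h(u, (1.0, 1.0))[2, 2])   # -4.0: the centre of the 5-point stencil
```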

Consider the discrete Poisson’s equation with Dirichlet boundary conditions

$$\begin{aligned} \begin{aligned} -\Delta _h u&=f,\quad i\in \Omega ,\\ u&=g,\quad i\in \Gamma , \end{aligned} \end{aligned}$$
(5.1)

where the domain \(\Omega \) is a finite subset of \(\mathbb {Z}^d\), \(\Gamma \) is the set of points outside of \(\Omega \) that have at least one neighbour in \(\Omega \),

$$\begin{aligned} \Gamma = \left\{ i\in \mathbb {Z}^d\setminus \Omega \,|\,\exists k\in \{1,\ldots ,d\}\text { : } i+\hat{e}_k\in \Omega \vee i-\hat{e}_k\in \Omega \right\} , \end{aligned}$$

and f and g are given grid functions. Furthermore, let \(\Gamma _+\) be the set of points outside of \(\bar{\Omega }=\Omega \cup \Gamma \) that have at least one neighbour in \(\bar{\Omega }\),

$$\begin{aligned} \Gamma _+ = \left\{ i\in \mathbb {Z}^d\setminus \bar{\Omega }\,| \,\exists k\in \{1,\ldots ,d\}\text { : } i+\hat{e}_k\in \bar{\Omega }\vee i-\hat{e}_k\in \bar{\Omega }\right\} , \end{aligned}$$

and let

$$\begin{aligned} \chi _{\Omega } = {\left\{ \begin{array}{ll} 1,&{}i\in \Omega ,\\ 0,&{}i\in \mathbb {Z}^d\setminus \Omega , \end{array}\right. } \end{aligned}$$

be the characteristic function of the set \(\Omega \), with \(\chi _{\bar{\Omega }}\) defined analogously. Applying \(-\Delta _h\) to \(\chi _{\bar{\Omega }}u\) yields

$$\begin{aligned} -\Delta _h(\chi _{\bar{\Omega }}u)=\chi _{\Omega }(-\Delta _hu)+Ru, \end{aligned}$$
(5.2)

where

$$\begin{aligned} (Ru)_i = {\left\{ \begin{array}{ll} -\sum _{\begin{array}{c} i+m\hat{e}_k\in \bar{\Omega }\\ m=\pm 1 \end{array}} \frac{u_{i+m\hat{e}_k}-u_i}{h_k^2} -\sum _{\begin{array}{c} i+m\hat{e}_k\in \mathbb {Z}^d\setminus \bar{\Omega }\\ m=\pm 1 \end{array}} \frac{-u_i}{h_k^2}, &{}i\in \Gamma ,\\ -\sum _{\begin{array}{c} i+m\hat{e}_k\in \bar{\Omega }\\ m=\pm 1 \end{array}} \frac{u_{i+m\hat{e}_k}}{h_k^2}, &{}i\in \Gamma _+,\\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

Comparing (5.2) with (4.1) suggests a splitting \(R=R_1+R_2\), where \(R_1u\) and \(R_2u\) are discrete counterparts of

$$\begin{aligned} \frac{\partial u}{\partial n}\delta _\Gamma \quad \text {and}\quad \frac{\partial (u\delta _\Gamma )}{\partial n}. \end{aligned}$$

Such a splitting can be done in several different ways. In this paper, we let

$$\begin{aligned} (R_1u)_i = {\left\{ \begin{array}{ll} -\sum _{\begin{array}{c} i+m\hat{e}_k\in \Omega \\ m=\pm 1 \end{array}} \frac{u_{i+m\hat{e}_k}-u_i}{h_k^2}, &{}i\in \Gamma ,\\ 0,&{}\text {otherwise}, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} (R_2u)_i = {\left\{ \begin{array}{ll} -\sum _{\begin{array}{c} i+m\hat{e}_k\in \Gamma \\ m=\pm 1 \end{array}} \frac{u_{i+m\hat{e}_k}-u_i}{h_k^2} -\sum _{\begin{array}{c} i+m\hat{e}_k\in \mathbb {Z}^d\setminus \bar{\Omega }\\ m=\pm 1 \end{array}} \frac{-u_i}{h_k^2}, &{}i\in \Gamma ,\\ -\sum _{\begin{array}{c} i+m\hat{e}_k\in \bar{\Omega }\\ m=\pm 1 \end{array}} \frac{u_{i+m\hat{e}_k}}{h_k^2}, &{}i\in \Gamma _+,\\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

If u is a solution to (5.1), then (5.2) turns into

$$\begin{aligned} -\Delta _h(\chi _{\bar{\Omega }}u)=\chi _{\Omega }f+R_1u+R_2g, \end{aligned}$$
(5.3)

which is a discrete counterpart of (4.1).

Let E be a fundamental solution of the operator \(-\Delta _h\) (see Sect. 6) and define the convolution of two grid functions u and v according to

$$\begin{aligned} (u*v)_i = \sum _{j\in \mathbb {Z}^d}u_{i-j}v_j\,h_1\cdot \cdots \cdot h_d, \quad i\in \mathbb {Z}^d. \end{aligned}$$

Convolving both sides of (5.3) with E yields a discrete counterpart of (4.2):

$$\begin{aligned} E*(-\Delta _h(\chi _{\bar{\Omega }}u))=E*(\chi _{\Omega }f+R_1u+R_2g). \end{aligned}$$
(5.4)

Since \(E*(-\Delta _hv)=v\) for every grid function v with finite support, it follows that

$$\begin{aligned} \begin{aligned} (\chi _{\bar{\Omega }}u)_i=&\sum _{j\in \Omega }E_{i-j}f_j\,h_1\cdot \cdots \cdot h_d\\&+\sum _{j\in \Gamma }E_{i-j}(R_1u)_j\,h_1\cdot \cdots \cdot h_d\\&+\sum _{j\in \Gamma \cup \Gamma _+}E_{i-j}(R_2g)_j\,h_1\cdot \cdots \cdot h_d, \quad i\in \mathbb {Z}^d. \end{aligned} \end{aligned}$$
(5.5)

Let \(F_h(u)\) denote the right hand side of (5.4) and introduce the discrete residual

$$\begin{aligned} r_h(u)=\chi _{\bar{\Omega }}u-F_h(u). \end{aligned}$$

Also, let \(u^*\) denote the exact solution to (5.5). Then, given some initial guess \(u^{(0)}\), the iteration

$$\begin{aligned} u^{(k+1)}=u^{(k)}-\alpha _{k+1}r_h(u^{(k)}), \quad k=0,1,2,\ldots , \end{aligned}$$
(5.6)

can be used to form a sequence of approximations \(u^{(k)}\), \(k=1,2,\ldots \), where the error \(e^{(k)}=\chi _{\bar{\Omega }}u^{(k)}-\chi _{\bar{\Omega }}u^*\) satisfies

$$\begin{aligned} e^{(k+1)}_i=(1-\alpha _{k+1})e^{(k)}_i+ \alpha _{k+1}(\chi _{\bar{\Omega }})_i\sum _{j\in \Gamma }E_{i-j} (R_1e^{(k)})_j\,h_1\cdot \cdots \cdot h_d, \end{aligned}$$
(5.7)

for \(k=0,1,2,\ldots \), and \(i\in \mathbb {Z}^d\). Applying \(R_1\) to both sides of (5.7) and restricting the domain from \(\mathbb {Z}^d\) to \(\Gamma \) yields

$$\begin{aligned} (R_1e^{(k+1)})_i=(1-\alpha _{k+1})(R_1e^{(k)})_i+ \alpha _{k+1}(R_1S\,R_1e^{(k)})_i, \end{aligned}$$
(5.8)

for \(k=0,1,2,\ldots \), and \(i\in \Gamma \), where

$$\begin{aligned} (Su)_i=(\chi _{\bar{\Omega }})_i\sum _{j\in \Gamma }E_{i-j}u_j \,h_1\cdot \cdots \cdot h_d. \end{aligned}$$

For ease of notation, introduce

$$\begin{aligned} \psi ^{(k)}_i=(R_1e^{(k)})_i, \end{aligned}$$

for \(k=0,1,2,\ldots \), and \(i\in \Gamma \), and

$$\begin{aligned} (K_h\psi )_i=2(R_1S\,\psi )_i-\psi _i \end{aligned}$$
(5.9)

for \(i\in \Gamma \). Then, (5.8) can be written

$$\begin{aligned} \psi ^{(k+1)}=\left( 1-\frac{\alpha _{k+1}}{2}\right) \psi ^{(k)}+\frac{\alpha _{k+1}}{2}K_h\psi ^{(k)}, \end{aligned}$$
(5.10)

for \(k=0,1,2,\ldots \), and \(i\in \Gamma \), which is a discrete counterpart of (3.5). This suggests that the convergence properties of (5.6) depend on the convergence properties of (5.10), and, in particular, on the properties of the operator \(K_h\) given by (5.9). The spectrum \(\sigma (K_h)\) and the convergence of (5.6) are investigated numerically in Sect. 7. The results show that, despite the fact that \(F_h\) is not bounded as the grid is refined, \(u^{(k)}\) converges to the solution of (5.3) at a rate that is independent of h.

6 Preconditioning

Let \(B_{\bar{\Omega }_+}\subset \mathbb {Z}^d\) be a box of size \(N_1\times \cdots \times N_d\) such that \(\bar{\Omega }_+=\Omega \cup \Gamma \cup \Gamma _+\subset B_{\bar{\Omega }_+}\). Applying the preconditioner requires the computation of

$$\begin{aligned} w_i=\sum _{j\in B_{\bar{\Omega }_+}}E_{i-j}u_j\,h_1\cdot \cdots \cdot h_d, \quad i\in B_{\bar{\Omega }_+}, \end{aligned}$$
(6.1)

where E is a fundamental solution of the operator \(-\Delta _h\) and u is a grid function with \({{\,\mathrm{supp}\,}}(u)\subset \bar{\Omega }_+\). The sum in (6.1) can be written as a matrix-vector multiplication with a matrix T, where T has a multilevel Toeplitz structure. In this section, we rewrite the sum in (6.1) into a sum that can be written as a matrix-vector multiplication with a matrix C, where C has a multilevel circulant structure, and where the matrix-vector multiplication can therefore be computed using FFT. We also derive an expression for a fundamental solution E.

Let \(E^c\) be a periodic grid function with period \(M=2N\), i.e. \(M_k=2N_k\) for \(k=1,\ldots ,d\), such that \(E_i^c=E_i\) for \(i_k=-N_k+1,\ldots ,N_k-1\), \(k=1,\ldots ,d\), and introduce an extended box \(B_E\supset B_{\bar{\Omega }_+}\) of size \(M_1\times \cdots \times M_d\). Define periodic convolutions on the set \(B_E\) for M-periodic grid functions \(u^c\) and \(v^c\) according to

$$\begin{aligned} (u^c*_cv^c)_i=\sum _{j\in B_E}u^c_{i-j}v^c_j\,h_1\cdot \cdots \cdot h_d, \quad i\in B_E. \end{aligned}$$

Now, let \(u^c\) be a periodic grid function with period \(M=2N\) such that \(u_i^c=u_i\) for \(i\in B_E\). Then,

$$\begin{aligned} (E^c*_cu^c)_i=\sum _{j\in B_E}E_{i-j}^cu^c_j\,h_1\cdot \cdots \cdot h_d= \sum _{j\in B_{\bar{\Omega }_+}}E_{i-j}u_j\,h_1\cdot \cdots \cdot h_d, \quad i\in B_{\bar{\Omega }_+}, \end{aligned}$$

since \(E^c_{i-j}=E_{i-j}\) when \(i,j\in B_{\bar{\Omega }_+}\), \(u^c_j=u_j\) when \(j\in B_E\), and \({{\,\mathrm{supp}\,}}(u)\subset B_{\bar{\Omega }_+}\). Hence, (6.1) can be computed by computing \((E^c*_cu^c)_i\) for all \(i\in B_E\) and then sampling the result for \(i\in B_{\bar{\Omega }_+}\). Also, since

$$\begin{aligned} \widehat{E^c*_cu^c}=\widehat{E^c}\,\widehat{u^c}, \end{aligned}$$

where \(\widehat{u^c}\) denotes the discrete Fourier transform of \(u^c\), it follows that \(E^c*_cu^c\) can be computed by transforming \(u^c\), multiplying with \(\widehat{E^c}\) and taking the inverse transform of the result. Here, the discrete Fourier transform \(\widehat{u^c}\) of an M-periodic grid function \(u^c\) is given by

$$\begin{aligned} \widehat{u^c}_j=\frac{1}{M_1\cdot \cdots \cdot M_d}\sum _{i\in B_E}u^c_i e^{-2\pi \hat{i}\left( \frac{i_1j_1}{M_1}+\cdots +\frac{i_dj_d}{M_d}\right) }, \quad j\in B_E, \end{aligned}$$

where \(\hat{i}\) is the imaginary unit, and

$$\begin{aligned} u^c_i=\sum _{j\in B_E}\widehat{u^c}_j\, e^{2\pi \hat{i}\left( \frac{i_1j_1}{M_1}+\cdots +\frac{i_dj_d}{M_d}\right) }, \quad i\in B_E, \end{aligned}$$

is the inverse discrete Fourier transform. Both the transform and its inverse can be computed with the FFT algorithm in \(\mathcal {O}(M_1\cdot \cdots \cdot M_d\cdot \log (M_1\cdot \cdots \cdot M_d))\) arithmetic operations.
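The circulant embedding and the FFT-based evaluation can be tested in one dimension. In the following sketch (Python with NumPy; the sizes are assumed and a random sequence stands in for the fundamental solution E), the periodic convolution \(E^c*_cu^c\), evaluated via FFT, reproduces the direct Toeplitz sum (6.1) on the smaller box.

```python
import numpy as np

rng = np.random.default_rng(0)
N, h = 4, 0.5
M = 2*N                                  # period of the circulant embedding

E = rng.standard_normal(2*N - 1)         # stand-in for E_i, i = -N+1, ..., N-1
u = np.zeros(M)
u[:N] = rng.standard_normal(N)           # supp(u) inside the box of size N

# Direct Toeplitz sum: w_i = sum_j E_{i-j} u_j h, for i in the box
w_direct = np.array([sum(E[(i - j) + N - 1]*u[j] for j in range(N))*h
                     for i in range(N)])

# Periodic extension E^c with period M; E^c_i = E_i for |i| <= N-1
Ec = np.zeros(M)
for i in range(-N + 1, N):
    Ec[i % M] = E[i + N - 1]

# Circular convolution via FFT equals the Toeplitz sum on the smaller box
w_fft = np.real(np.fft.ifft(np.fft.fft(Ec)*np.fft.fft(u)))*h
print(np.max(np.abs(w_fft[:N] - w_direct)))   # zero up to roundoff
```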

To derive an expression for \(\widehat{E^c}\), recall that \(E^c_i=E_i\) for \(i_k=-N_k+1,\ldots ,N_k-1\), \(k=1,\ldots ,d\). Therefore,

$$\begin{aligned} (-\Delta _hE^c)_i=\frac{\delta _i}{h_1\cdot \cdots \cdot h_d}, \end{aligned}$$
(6.2)

for \(i_k=-N_k+2,\ldots ,N_k-2\), \(k=1,\ldots ,d\), where \(\delta _i=1\) if \(i=0\) and \(\delta _i=0\) otherwise. We require (6.2) to hold also for \(i_k=-N_k+1\) and \(i_k=N_k-1\), \(k=1,\ldots ,d\), but allow for a more general right hand side when \(i_k=N_k\), \(k=1,\ldots ,d\). More specifically, we look for a grid function \(E^c\) that satisfies

$$\begin{aligned} -\sum _{k=1}^d\frac{E_{i+\hat{e}_k}^c-2E_i^c+E_{i-\hat{e}_k}^c}{h_k^2} =\sum _{m\in \mathbb {Z}^d}\left( \frac{\delta _{i+(m_1M_1,\ldots ,m_dM_d)}}{h_1\cdot \cdots \cdot h_d}+ \sum _{k=1}^da_k\delta _{i_k-N_k+m_kM_k}\right) , \, i\in \mathbb {Z}^d, \end{aligned}$$
(6.3)

for some constants \(a_k\), \(k=1,\ldots ,d\). Applying the discrete Fourier transform to both sides of (6.3) and using the orthogonality of the Fourier basis yields a transformed equation

$$\begin{aligned} \widehat{P}_j\widehat{E^c}_j =\frac{1}{M_1\cdot \cdots \cdot M_d\cdot h_1\cdot \cdots \cdot h_d} +\sum _{k=1}^d\frac{a_k}{M_k} \delta _{j_1}\cdot \cdots \cdot \delta _{j_{k-1}} e^{-2\pi \hat{i}\frac{N_kj_k}{M_k}} \delta _{j_{k+1}}\cdot \cdots \cdot \delta _{j_d}, \end{aligned}$$
(6.4)

where the symbol \(\widehat{P}_j\) is given by

$$\begin{aligned} \widehat{P}_j=-\sum _{k=1}^d\frac{2(\cos 2\pi \frac{j_k}{M_k}-1)}{h_k^2}. \end{aligned}$$
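The symbol arises because each periodic Fourier mode is an eigenvector of the difference stencil. A minimal one-dimensional NumPy check (the period, step size, and mode number are illustrative values, not taken from the paper):

```python
import numpy as np

M, h, k = 16, 0.5, 3                        # period, step size, mode number
i = np.arange(M)
u = np.exp(2j * np.pi * i * k / M)          # periodic Fourier mode
# Apply the 1-D stencil -(u_{i+1} - 2 u_i + u_{i-1}) / h^2 with periodic wrap
Pu = -(np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h ** 2
# The symbol for d = 1: -2 (cos(2 pi k / M) - 1) / h^2
symbol = -2 * (np.cos(2 * np.pi * k / M) - 1) / h ** 2
print(np.allclose(Pu, symbol * u))
```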

Note that \(\widehat{P}_j=0\) if and only if \(j=(m_1M_1,\ldots ,m_dM_d)\) for some \(m\in \mathbb {Z}^d\), and, therefore, that (6.4) has a solution only if the right hand side of (6.4) is zero for such a j:

$$\begin{aligned} \frac{1}{M_1\cdot \cdots \cdot M_d\cdot h_1\cdot \cdots \cdot h_d} +\sum _{k=1}^d\frac{a_k}{M_k} =0. \end{aligned}$$

Letting

$$\begin{aligned} a_k=-\frac{1}{d\cdot M_1\cdot \cdots \cdot M_{k-1}\cdot M_{k+1}\cdot \cdots \cdot M_d\cdot h_1\cdot \cdots \cdot h_d}, \end{aligned}$$

makes (6.4) solvable, and it then follows that

$$\begin{aligned} \widehat{E^c}_j=- \frac{1-d^{-1}\sum _{k=1}^d \delta _{j_1}\cdot \cdots \cdot \delta _{j_{k-1}} e^{-2\pi \hat{i}\frac{N_kj_k}{M_k}} \delta _{j_{k+1}}\cdot \cdots \cdot \delta _{j_d}}{M_1\cdot \cdots \cdot M_d\cdot h_1\cdot \cdots \cdot h_d\sum _{k=1}^d2(\cos 2\pi \frac{j_k}{M_k}-1)h_k^{-2}}, \end{aligned}$$

when \(j\ne (m_1M_1,\ldots ,m_dM_d)\), \(m\in \mathbb {Z}^d\), and that \(\widehat{E^c}_j=c\) for some arbitrary constant c when \(j=(m_1M_1,\ldots ,m_dM_d)\), \(m\in \mathbb {Z}^d\). For simplicity, we use \(c=0\).
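Putting the pieces together, the symbol \(\widehat{E^c}\) can be tabulated directly from the closed-form expression and inverted with the FFT. The following NumPy sketch (for \(d=2\), with illustrative \(N_k\) and the step sizes \(h_k=2/(N_k-4)\) of Sect. 7) verifies that the resulting \(E^c\) satisfies (6.3): \(-\Delta _hE^c\) equals \(\delta _i/(h_1h_2)\) away from the hyperplanes \(i_k=N_k\), where the value \(a_k\) appears instead.

```python
import numpy as np

N = np.array([8, 8])            # N_k (even); the period is M_k = 2 N_k
M = 2 * N
h = 2.0 / (N - 4)               # step sizes as in Sect. 7
Mprod, hprod = M.prod(), h.prod()

j1, j2 = np.meshgrid(np.arange(M[0]), np.arange(M[1]), indexing='ij')

# Numerator of the symbol; with M_k = 2 N_k, exp(-2 pi i N_k j_k / M_k) = (-1)^j_k
num = 1.0 - 0.5 * ((j2 == 0) * (-1.0) ** j1 + (j1 == 0) * (-1.0) ** j2)

# Denominator: M_1 M_2 h_1 h_2 * sum_k 2 (cos(2 pi j_k / M_k) - 1) / h_k^2
den = Mprod * hprod * (2 * (np.cos(2 * np.pi * j1 / M[0]) - 1) / h[0] ** 2 +
                       2 * (np.cos(2 * np.pi * j2 / M[1]) - 1) / h[1] ** 2)

Ehat = np.zeros(tuple(M))
mask = den != 0                 # den vanishes only at j = 0; there we take c = 0
Ehat[mask] = -num[mask] / den[mask]

# The paper's inverse DFT carries no 1/(M_1 M_2) factor, NumPy's ifftn does
Ec = Mprod * np.fft.ifftn(Ehat).real

# Apply -Delta_h with periodic wrap-around and check (6.3)
lap = -((np.roll(Ec, -1, 0) - 2 * Ec + np.roll(Ec, 1, 0)) / h[0] ** 2 +
        (np.roll(Ec, -1, 1) - 2 * Ec + np.roll(Ec, 1, 1)) / h[1] ** 2)

print(np.isclose(lap[0, 0], 1.0 / hprod))                # delta source at i = 0
print(np.isclose(lap[1, 2], 0.0, atol=1e-10))            # zero off the hyperplanes
print(np.isclose(lap[N[0], 2], -1.0 / (2 * M[1] * hprod)))  # a_1 on i_1 = N_1
```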

7 Numerical experiments

In this section, we assume that \(N_k\) is even for all k, and use a grid \(\{x_i\}_{i\in \mathbb {Z}^d}\) with

$$\begin{aligned} (x_i)_k=\left( i_k-\frac{N_k}{2}\right) h_k, \end{aligned}$$

and \(h_k=2/(N_k-4)\) for \(k=1,\ldots ,d\). We consider four different domains. First, let

$$\begin{aligned} \Omega ^B = \{i\in \mathbb {Z}^d\,|\,|x_i|<1\}. \end{aligned}$$

This discrete d-dimensional ball is a discrete counterpart of the unit disc in Example 3.1. The second domain is given by

$$\begin{aligned} \Omega ^C = \{i\in \mathbb {Z}^d\,|\,|x_i|_\infty <1\}, \end{aligned}$$

where \(|\cdot |_\infty \) denotes the maximum norm. This domain is a discrete d-dimensional cube, and, consequently, a discrete counterpart of a domain with edges and corners. The third and fourth domains, \(\Omega ^{BL}\) and \(\Omega ^{CL}\), are L-shaped modifications of \(\Omega ^{B}\) and \(\Omega ^{C}\), where the points i satisfying \((x_i)_k>0\) for all k have been removed. Figure 1 shows the four sets and their boundary sets for the case \(d=2\) and \(N_1=N_2=16\).
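For concreteness, the four index sets are easy to generate with NumPy. This is a sketch under the assumption that the indices are restricted to the box \(0\le i_k<N_k\), which contains all four domains for the grid above; the variable names are our own.

```python
import numpy as np

d, n = 2, 16
N = np.array([n] * d)
h = 2.0 / (N - 4)

# All indices i in the box 0 <= i_k < N_k, and the grid points
# (x_i)_k = (i_k - N_k / 2) h_k
idx = np.indices(tuple(N)).reshape(d, -1).T
x = (idx - N / 2) * h

ball = np.linalg.norm(x, axis=1) < 1       # Omega^B:  |x_i| < 1
cube = np.abs(x).max(axis=1) < 1           # Omega^C:  |x_i|_inf < 1
corner = (x > 0).all(axis=1)               # points removed in the L-shapes
ball_L = ball & ~corner                    # Omega^BL
cube_L = cube & ~corner                    # Omega^CL

print(ball.sum(), cube.sum(), ball_L.sum(), cube_L.sum())
```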

Fig. 1 The sets \(\Omega ^B\), \(\Omega ^C\), \(\Omega ^{BL}\) and \(\Omega ^{CL}\) when \(d=2\) and \(N_1=N_2=16\)

Domains with non-smooth boundaries are not covered by the standard results for boundary value problems of potential theory, since the integral operator K for such a boundary is not compact [11]. Fixed point theory can still be used to prove the existence of a unique solution if the norm of the iteration operator \((I+K)/2\) is less than 1. However, this criterion is only a sufficient condition for convergence and it may be that a fixed point iteration converges even though the norm is larger than or equal to 1. Hence, the theory of boundary integral equations becomes less helpful for understanding the behaviour of our preconditioner on domains with non-smooth boundaries. Still, since structured grids are often used on rectangular patches, non-smooth cases are of interest.

We first study the spectrum of the summation operator \(K_h\) given by (5.9). A grid refinement study for the case \(d=2\) is presented in Figs. 2 and 3.

Fig. 2 The spectrum \(\sigma (K_h)\) for \(\Omega ^B\) (left) and \(\Omega ^C\) (right) when \(d=2\)

Fig. 3 The spectrum \(\sigma (K_h)\) for \(\Omega ^{BL}\) (left) and \(\Omega ^{CL}\) (right) when \(d=2\)

The results indicate that \(K_h\) has the following spectral properties:

  • The point \(\lambda =0\) is not an eigenvalue of \(K_h\). It seems to be a limiting point of the spectrum when \(\Omega =\Omega ^B\) and \(\Omega =\Omega ^{BL}\), but not when \(\Omega =\Omega ^C\) or \(\Omega =\Omega ^{CL}\).

  • The point \(\lambda =-1\) is an eigenvalue of \(K_h\).

  • The eigenvalues of \(K_h\) are real and \(\sigma (K_h)\subset [-1,1)\).

The spectral properties of \(K_h\) thus resemble those of K when \(\Omega =\Omega ^B\). The introduction of corners perturbs the spectrum of \(K_h\), but the perturbations seem to be relatively small.

Table 1 shows the results of a grid refinement study of the maximum norm of the iteration operator \((I+K_h)/2\) for the case \(d=2\). These results indicate that the maximum norm cannot be used to prove convergence of fixed point iteration, since the norm of the iteration operator is larger than 1 already for small grid sizes.

Next, we solve the discrete Poisson’s equation (5.1) on the four domains using two different versions of the iterative method (5.6). The first version is fixed point iteration, i.e. \(\alpha _k=1\) for all k. This method is denoted M1. The second version is denoted M2 and uses \(\alpha _k=1\) for k odd and \(\alpha _k=2\) for k even. This is the method introduced in Example 3.1 in restarted form. We construct the problem data f and g from a known solution \(u_i^*=\exp (-2(x_i)_1^2-\cdots -2(x_i)_d^2)\) of the difference equation, use \(u^{(0)}=0\) as initial guess, and iterate until the maximum norm of the error is decreased by a factor of \(10^6\).

Tables 2 and 3 show the number of iterations required for convergence for the cases \(d=2\) and \(d=3\), respectively. The convergence rate seems to be grid independent for all four domains.

Table 1 The maximum norm of \((I+K_h)/2\) when \(d=2\)
Table 2 The number of iterations required for convergence when \(d=2\)
Table 3 The number of iterations required for convergence when \(d=3\)

Table 4 shows the number of iterations required for convergence for different numbers of dimensions d. The convergence rate seems to be independent of the number of dimensions for all four domains.

Table 4 The number of iterations required for convergence when \(N_k=16\)

8 Summary and conclusions

In this paper, the iterative solution of the discrete Poisson’s equation in d dimensions is considered. The discrete domain is embedded into an extended domain and the resulting system of linear equations is solved using fixed point iteration combined with a multilevel circulant preconditioner. The main objective of the paper is to show that Strang’s idea of using circulant preconditioning for Toeplitz systems [15] can be successfully extended to the multilevel case for Poisson’s equation.

Our inspiration comes from Neumann [13] and the research on linear integral equations and potential theory to which Neumann contributed. He transformed the elliptic partial differential equation into a boundary integral equation in fixed point form, \(u = F(u)\), and showed that the corresponding fixed point iteration, \(u^{(k+1)}=F(u^{(k)})\), converges. The operator F is not bounded, but the normal derivative of F(u) on the boundary is bounded if the normal derivative of u on the boundary is bounded, and convergence of the normal derivative of \(u^{(k)}\) on the boundary turns out to be a sufficient condition for the convergence of \(u^{(k)}\).

We show that Neumann's findings hold also for the discrete Poisson's equation: The elliptic partial difference equation can be transformed into a boundary summation equation in fixed point form, \(u=F_h(u)\), and our numerical results show that the corresponding fixed point iteration, \(u^{(k+1)}=F_h(u^{(k)})\), converges at a rate that is independent of the grid's step sizes h, despite the fact that the operator \(F_h\) is not bounded as the grid is refined.

We also show that Neumann's transformation of Poisson's equation can be viewed as convolving both sides of a modified partial differential equation on an extended domain with a fundamental solution, and that a corresponding transformation of the discrete Poisson's equation can be viewed as a multilevel circulant preconditioning of both sides of a modified partial difference equation on an extended domain.

Our preconditioned iterative solver is consequently of near optimal order:

  1. 1.

    The rate of convergence does not slow down when the grid is refined.

  2. 2.

    Performing one iteration on a d-dimensional grid function of size \(\mathcal {O}(N^d)\) requires \(\mathcal {O}(2^dN^d)\) computer memory and \(\mathcal {O}(2^dN^d\log N^d)\) arithmetic operations.

Our main objective is thus fulfilled: We have shown that Strang’s idea of using circulant preconditioning for Toeplitz systems can be successfully extended to the multilevel case when solving the discrete Poisson’s equation.