1 Introduction

The computation of gradients is required for numerous applications, such as shape and topology optimisation, error estimation, goal-based mesh adaptation and uncertainty quantification. Algorithmic differentiation (AD) [13, 23] is a commonly used technique to automatically produce accurate derivatives for numerical codes [3, 5, 8, 16, 32]. In typical numerical models this involves the solution of large linear systems of the form

$$\begin{aligned} \mathbf{A}{x}= {b}\end{aligned}$$

which often represents the most expensive part of a computation. We assume here that \(\mathbf{A}\in {\mathbb {R}}^{n\times n}\) is a known non-singular matrix and \({x}, {b}\in {\mathbb {R}}^{n}\) are the unknown and right-hand side (RHS) vectors, respectively. Historically, linear solver methods have been categorised into two main groups, namely direct solvers and iterative solvers, even though this classification has become increasingly blurred by developments that combine solvers from both categories. Direct solvers are typically robust and widely used in scientific computing packages, but scale poorly with problem size. Because of this, applications that require the solution of large linear systems, such as CFD flow solvers, often use efficient iterative linear solvers [29]; commonly used methods include CG, BiCG, GMRES, and algebraic or geometric multigrid methods.

When AD is applied to an algorithm that uses a linear solver, the linear solver itself is typically differentiated by manually produced replacement derivative code rather than applying AD to the solver. This is often the only practical option, for example if the linear solver is part of an external library, or if an AD-differentiated solver would be computationally inefficient [6, 9] or numerically unstable [21]. A manual differentiation can take into account high-level mathematical properties of a given function, which may not be exploited by an automated AD process.

The most common way of manually differentiating calls to linear solvers (direct or iterative) is presented in [9] and hereafter referred to as Differentiated Solver Replacement (DSR). The approach assumes that a linear solver call, x=solve(A,b), is equivalent to the expression \({x}=\mathbf{A}^{-1} {b}\), which is valid if the solver computes the solution to machine precision. In this case, the derivative computation can be performed using another call to the same linear solver for a modified system, as shown in Sect. 2.

Often a numerical algorithm solves a non-linear problem and converges to a steady-state solution within a fixed-point iteration (FPI) loop. An example is the typical iterative approach to solving non-linear systems, consisting of a number of outer, non-linear iterations, each of which performs a linearisation and contains calls to linear system solves. In the early phase of convergence to the non-linear solution, it is not efficient to solve the linear system exactly for a linearisation based on a poor approximation. An example in Computational Fluid Dynamics (CFD) is the typical segregated approach to solving the incompressible Navier–Stokes equations through a sequence of linear problems for the momentum and pressure-correction equations [7]. In a straightforward application of AD to such algorithms, the gradients are accumulated from a zero initial solution and hence are not in FPI form. Different techniques have been presented to obtain an FPI discrete adjoint of such algorithms [31], for instance the reverse accumulation method [4]. More recently, some AD tools (e.g. Tapenade [15]) even offer this capability as an option for reverse differentiation. However, except for fully coupled systems, the implementation of fixed-point adjoints for algorithmically differentiated codes is complex, and accumulation of gradients is most often used.

It is to be noted that in the original problem the incomplete linear solves do not affect the accuracy of the final solution, the primal solution, provided enough outer iterations are conducted. Contrary to what one might expect, incomplete linear solves in an accumulated adjoint that uses DSR lead to inaccurate sensitivities, as the analysis and the numerical experiments in this paper show.

In this paper, we show that this is caused by neglecting the influence of the initial guess on the linear system solution, which can be significant if the system is not fully converged. The proposed C-DSR correction achieves consistency between primal and adjoint gradient computation by correctly modelling the adjoint derivatives of an algorithm that uses truncated iterative solvers with the same convergence threshold used for the primal linear systems.

A number of studies have investigated the correction of objective functionals using estimated errors and weighting with adjoint sensitivities, e.g. [12] and [33]. The approaches perform post-processing and consider error estimates derived from the converged steady-state flow solution and weight this with the converged adjoint field. This produces a correction to the objective functional computed from the converged primal. The algorithm proposed in this paper is different, in that it corrects the errors arising from incomplete linear solves in each accumulation step during the computation of the adjoint solution.

The structure of the paper is as follows. In Sect. 2, a brief introduction of AD and DSR is presented. The shortcomings of DSR in the context of reverse-mode AD of algorithms with incompletely converged linear solvers, as well as the proposed correction method C-DSR, are presented in Sect. 3. In Sect. 4 we show numerical experiments that demonstrate the effectiveness of C-DSR. Finally, a summary and conclusions are presented in Sect. 5.

2 Background

In this section, a brief background of AD is provided. Following this, the DSR in both forward and reverse-mode AD is presented.

2.1 Algorithmic differentiation

AD is a technique that evaluates the derivative of the output of a computer program with respect to its inputs. AD differentiates a given primal computer program by applying the chain rule of calculus to the program’s sequence of elementary operations (e.g. additions, subtractions, transcendental functions) [13].

AD has two basic modes of operation, namely the forward-mode (resulting in a tangent-linear model of the primal), and the reverse-mode (producing an adjoint model of the primal). The tangent-linear model computes the product of the Jacobian matrix of the primal program with a given seed vector that has the same number of dimensions as the program input. In contrast, the adjoint model computes the product of the transpose Jacobian with a seed vector that has the size of the primal output.

In the application of AD to numerical codes, the derivative of a given scalar objective function with respect to a scalar primal input variable can be computed at almost equal cost in both tangent-linear and adjoint models. However, in many applications such as gradient-based shape optimisation with CFD, the number of design parameters is much larger than the number of objective functions that are to be computed. As a consequence, the use of adjoint models is essential to compute the gradients at a computational cost that is independent of the number of control variables [10, 11, 19, 24].

A variety of AD tools have been developed in the past, which differ in the supported languages and the techniques used. Examples include Tapenade [15], ADIFOR [1], ADOL-C [14], dco/c++ [18], CoDiPack [30] and ADiMat [2]. The discussion in this study applies to all types of AD tools.

2.2 Model problem

Consider a non-linear system of the form

$$\begin{aligned} \mathbf{A}(x,\alpha ) x(\alpha ) = b(x,\alpha ) \end{aligned}$$

with \(\alpha\) as input and x the solution to the system. The problem can be re-formulated as

$$\begin{aligned} \mathbf{R}(x(\alpha ),\alpha ) = \mathbf{A}(x,\alpha ) x(\alpha ) - b(x,\alpha )\,. \end{aligned}$$

Applying a linearisation technique, the numerical solution to such a system can be obtained by an iterative algorithm

$$\begin{aligned} x_{m} = x_{m-1}+ {{\mathbb {P}}}_{m}\mathbf{R}_{m} \end{aligned}$$
(1)

where \({{\mathbb {P}}}\) is the algorithm operator and the system is considered to be fully solved when \(\mathbf{R}_{m}\) is almost zero. In each iteration of this algorithm a linear system needs to be solved:

$$\begin{aligned} \mathbf{A}_{m} x_{m}- b_{m}=0\,, \end{aligned}$$
(2)

which itself is often solved by an iterative linear solver.

In many numerical models, the objective functional, J, that is going to be differentiated is implicitly dependent on the design variable \(\alpha\) through the solution \({x}(\alpha )\) of a non-linear system of equations similar to (1). Assuming \(J=J(x(\alpha ), \alpha )\), the general form of such an algorithm is shown in Algorithm 1.

[Algorithm 1]

In Algorithm 1, the subscripts (m, M) denote non-linear (outer) iterations while the superscript N denotes the solution after N linear (inner) solver iterations. The arrows denote output. \(\mathbf{A}\) and \({b}\) are updated in the non-linear loop and \({x}^{N}_{m}\) is the approximate solution to the linear system after N inner iterations at outer iteration m. For each call to solve, \({x}\) is an input (as the initial guess, \({x}_{m}^{0}\)) and an output (as the solution, \({x}_{m}^{N}\)). The objective functional J is dependent on the final solution of the algorithm, \({x}_M^{N}\).
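
To make this structure concrete, the following is a minimal Python sketch of an algorithm of this form. The toy assemble_A and assemble_b functions and all parameter values are illustrative assumptions, not taken from the paper; each outer iteration re-linearises the system and performs only N warm-started Jacobi sweeps, mirroring Algorithm 1.

```python
# Minimal sketch of the structure of Algorithm 1 for a toy non-linear system
# A(x) x = b(x); assemble_A/assemble_b and all values are illustrative only.
import numpy as np

def assemble_A(x):
    # Toy diagonally dominant tridiagonal matrix whose diagonal depends on x.
    n = x.size
    return (np.diag(3.0 + x**2)
            + np.diag(-np.ones(n - 1), 1)
            + np.diag(-np.ones(n - 1), -1))

def assemble_b(x):
    return np.ones_like(x)              # toy right-hand side

def jacobi(A, b, x0, N):
    # N fixed inner Jacobi sweeps, warm-started from x0 (= previous outer solution).
    D = np.diag(A)                      # diagonal entries
    Q = A - np.diag(D)                  # off-diagonal part
    x = x0.copy()
    for _ in range(N):
        x = (b - Q @ x) / D
    return x

M, N = 50, 3                            # outer (non-linear) and inner (linear) iterations
x = np.zeros(6)
for m in range(M):                      # outer loop: re-linearise, partially solve
    A_m, b_m = assemble_A(x), assemble_b(x)
    x = jacobi(A_m, b_m, x, N)          # x_m^0 = x_{m-1}^N (warm start)

print("outer residual:", np.linalg.norm(assemble_A(x) @ x - assemble_b(x)))
J = 100.0 * x[0]**2                     # scalar objective of the final state
```

Despite the truncated inner solves, the outer residual reaches machine precision once enough outer iterations are performed, which is the behaviour assumed throughout this paper.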

In the following, we consider the case that the number of inner iterations N is not sufficient to fully converge the linear, inner solver to machine accuracy, so that \({x}^N_{m}\) is only an approximation to \(\mathbf{A}^{-1}_{m}{b}_{m}\). However, we assume that a sufficient number M of outer, non-linear iterations is conducted, each containing N inner, linear iterations. In this way, in the final outer iterations the linear system is sufficiently close to the non-linear system, and the error in both the non-linear and linear system solutions is close to machine precision.

2.3 Differentiated solver replacement: tangent-linear

The forward-mode computation of the gradient of the objective functional J w.r.t. \(\alpha\) requires the differentiation of the non-linear algorithm, which at iteration m reads

$$\begin{aligned} {\dot{x}}_{m} = {\dot{x}}_{m-1}+ {\dot{{{\mathbb {P}}}}}_{m}\mathbf{R}_{m}+{{\mathbb {P}}}_{m}{\dot{\mathbf{R}}}_{m} \end{aligned}$$

Knowing that the number of primal outer iterations is sufficient to drive \(\mathbf{R}_m\) to zero, the differentiated system can be simplified to

$$\begin{aligned} {\dot{x}}_{m} = {\dot{x}}_{m-1}+ {{\mathbb {P}}}_{m}{\dot{\mathbf{R}}}_{m}\,. \end{aligned}$$

This requires computing a solution to the differentiated linear system (2) in each differentiated outer iteration,

$$\begin{aligned} \mathbf{A}_{m} {\dot{x}}_{m} = \dot{b}_{m} - {\dot{\mathbf{A}}}_{m} x_{m}~~\text {or}~~{\dot{x}}_{m} = \mathbf{A}_{m}^{-1}(\dot{b}_{m} - {\dot{\mathbf{A}}}_{m} x_{m})\,. \end{aligned}$$
(3)

The forward differentiation of Algorithm 1 is shown in Algorithm 2. A function appended with the suffix ‘_d’ represents the tangent-linear derivative of that function.

[Algorithm 2]

The forward differentiation in Algorithm 2 naturally inherits the fixed-point form of the primal; hence the resulting tangent-linear solution, and the gradients computed with it, are also impervious to incomplete inner solves, as long as the number of outer iterations is sufficient. If the problem to be solved is steady, then linearisation around the converged solution to (2) is sufficient, making the entire problem linear, which means that inner and outer iterations solve the same problem.

As mentioned in the Introduction, the differentiation of linear solvers is in practice often performed using an approach that we refer to as differentiated linear solver replacement or DSR. A sample pseudo code of DSR in forward-mode for the linear solver in Algorithm 1 is illustrated in Algorithm 3.

[Algorithm 3]
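
The following is a minimal sketch of one forward-mode DSR call, in the spirit of Algorithm 3 and Eq. (3). It reuses the jacobi helper from the sketch in Sect. 2.2; the function name solve_d and its argument list are illustrative assumptions rather than the paper's exact interface.

```python
# Sketch of forward-mode DSR (cf. Algorithm 3 and Eq. (3)); A_d and b_d are the
# tangents of A_m and b_m, and jacobi() is the helper sketched in Sect. 2.2.
def solve_d(A, A_d, b, b_d, x, x_d, N):
    """One DSR call in tangent-linear mode.

    x   : initial guess on entry (x_m^0), primal solution on exit (x_m^N)
    x_d : initial guess on entry (xdot_m^0), tangent solution on exit (xdot_m^N)
    """
    x_new = jacobi(A, b, x, N)              # primal solve (truncated)
    rhs_d = b_d - A_d @ x_new               # right-hand side of Eq. (3)
    x_d_new = jacobi(A, rhs_d, x_d, N)      # same solver, same matrix A, warm start
    return x_new, x_d_new
```

Because the tangent system is warm-started from the previous tangent solution, the differentiated algorithm retains the fixed-point form discussed above.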

2.4 Differentiated solver replacement: adjoint

The objective J is assumed here to depend on the control \(\alpha\) and the state x: \(J=J(\alpha ,x)\). Its derivative is hence

$$\begin{aligned} \dot{J} = \frac{\partial J}{\partial \alpha } + \frac{\partial J}{\partial x}{\dot{x}} \end{aligned}$$

In many applications, the function \(J(\alpha )\) can be computed explicitly, without requiring linear solvers. We therefore focus in this work on the term \(\frac{\partial J}{\partial x}{\dot{x}}\) which is implicit, that is, it involves a linear solve for \({\dot{x}}\). Since x is a function of A and b, the function J also depends on A and b, or formally, \(x=x(\mathbf{A},b)\) and \(J=J(\mathbf{A},b)\), and we can expand and transpose as

$$\begin{aligned} \frac{\partial J}{\partial x}{\dot{x}}&= \frac{\partial J}{\partial \mathbf{A}}\frac{\partial \mathbf{A}}{\partial x} {\dot{x}}+ \frac{\partial J}{\partial b}\frac{\partial b}{\partial x} {\dot{x}} \end{aligned}$$
(4)
$$\begin{aligned} {\dot{x}}^T\left( \frac{\partial J}{\partial x}\right) ^T&={\dot{x}}^T \left( \frac{\partial \mathbf{A}}{\partial x}\right) ^T\left( \frac{\partial J}{\partial \mathbf{A}}\right) ^T+{\dot{x}}^T \left( \frac{\partial b}{\partial x}\right) ^T\left( \frac{\partial J}{\partial b}\right) ^T \end{aligned}$$
(5)

By definition, the adjoint of the reverse-differentiated variables is

$$\begin{aligned} {\bar{x}} = \left( \frac{\partial J}{\partial x}\right) ^T,~{\bar{\mathbf{A}}} =\left( \frac{\partial J}{\partial \mathbf{A}}\right) ^T,~{\bar{b}} = \left( \frac{\partial J}{\partial b}\right) ^T, \end{aligned}$$

which simplifies (5) to

$$\begin{aligned} {\dot{x}}^T {\bar{x}}={\dot{\mathbf{A}}}^T {\bar{\mathbf{A}}}+\dot{b}^T{\bar{b}} \,~~\text {or}~~\, \langle {\dot{x}} , {\bar{x}} \rangle =\langle {\dot{\mathbf{A}}} , {\bar{\mathbf{A}}} \rangle +\langle \dot{b} , {\bar{b}} \rangle , \end{aligned}$$
(6)

where the vectors are treated as \(n\times 1\) matrices and \(\langle ~ , ~ \rangle\) denotes an inner product between matrices. Substituting \({\dot{{x}}}\) with \(\mathbf{A}^{-1}({\dot{{b}}} - {\dot{\mathbf{A}}} {x})\), one can rearrange (6) into

$$\begin{aligned} \langle {\dot{\mathbf{A}}} , {\bar{\mathbf{A}}} \rangle +\langle \dot{b} , {\bar{b}} \rangle&=\langle \mathbf{A}^{-1}({\dot{{b}}} - {\dot{\mathbf{A}}}{x}) , {\bar{{x}}} \rangle \nonumber \\&=\langle \mathbf{A}^{-1}{\dot{{b}}} , {\bar{{x}}} \rangle - \langle \mathbf{A}^{-1}{\dot{\mathbf{A}}}{x} , {\bar{{x}}} \rangle \,. \end{aligned}$$
(7)

Then recalling from linear algebra [20], the inner product between two matrices, \(\mathbf{M}_1\) and \(\mathbf{M}_2\), reads

$$\begin{aligned} \langle \mathbf{M}_1 , \mathbf{M}_2 \rangle = \text {Tr} \big (\mathbf{M}_2^T \mathbf{M}_1\big ) \,. \end{aligned}$$

Here ‘Tr’ stands for the trace of a matrix, i.e., the sum of its diagonal elements. We can now expand and rewrite the inner products on the RHS of (7),

$$\begin{aligned} \langle \mathbf{A}^{-1}{\dot{{b}}} , {\bar{{x}}} \rangle&= \text {Tr} \big ({\bar{{x}}}^T \mathbf{A}^{-1}{\dot{{b}}}\big )\nonumber \\&= \text {Tr} \big ((\mathbf{A}^{-T}{\bar{{x}}})^T {\dot{{b}}}\big )\nonumber \\&= \langle {\dot{{b}}} , \mathbf{A}^{-T}{\bar{{x}}} \rangle \,, \end{aligned}$$
(8)
$$\begin{aligned} - \langle \mathbf{A}^{-1}{\dot{\mathbf{A}}}{x} , {\bar{{x}}} \rangle =&-\text {Tr} \big ({\bar{{x}}}^T \mathbf{A}^{-1}{\dot{\mathbf{A}}}{x}\big )\nonumber \\ =&-\text {Tr} \big ((\mathbf{A}^{-T}{\bar{{x}}})^T ({x}^T{\dot{\mathbf{A}}}^T)^T\big )\nonumber \\ =&-\text {Tr} \big ((\mathbf{A}^{-T}{\bar{{x}}}{x}^T)^T {\dot{\mathbf{A}}}\big )\nonumber \\ =&-\langle {\dot{\mathbf{A}}} , \mathbf{A}^{-T}{\bar{{x}}}{x}^T \rangle \,. \end{aligned}$$
(9)

Finally, (8) and (9) can be substituted into (7), giving

$$\begin{aligned} \langle {\dot{\mathbf{A}}} , {\bar{\mathbf{A}}} \rangle +\langle \dot{b} , {\bar{b}} \rangle = \langle {\dot{\mathbf{A}}} , -\mathbf{A}^{-T}{\bar{{x}}}{x}^T \rangle + \langle {\dot{{b}}} , \mathbf{A}^{-T}{\bar{{x}}} \rangle \,. \end{aligned}$$
(10)

Therefore, \({\bar{{b}}}\) and \({\bar{\mathbf{A}}}\) at iteration ‘m’ can be expressed as follows [9]

$$\begin{aligned}&{\bar{{b}}}_{m}=\mathbf{A}_{m} ^{-T} {\bar{{x}}}_{m}, \end{aligned}$$
(11a)
$$\begin{aligned}&{\bar{\mathbf{A}}}_{m}=- \mathbf{A}_{m} ^{-T} {\bar{{x}}}_{m} {x}_{m}^T = -{\bar{{b}}}_{m} {x}_{m}^{T}\,. \end{aligned}$$
(11b)
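
Equations (11a) and (11b) can be checked numerically with a few lines of NumPy. The sketch below uses random test data (with an assumed well-conditioned matrix, not taken from the paper) and verifies that the resulting \({\bar{{b}}}\) and \({\bar{\mathbf{A}}}\) reproduce the identity (6) for arbitrary input perturbations.

```python
# Numerical check of Eqs. (11a)-(11b): with b_bar = A^{-T} x_bar and
# A_bar = -b_bar x^T, identity (6) holds for arbitrary perturbations A_dot, b_dot.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # assumed well-conditioned test matrix
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

x_bar = rng.standard_normal(n)                    # adjoint seed, (dJ/dx)^T
b_bar = np.linalg.solve(A.T, x_bar)               # Eq. (11a)
A_bar = -np.outer(b_bar, x)                       # Eq. (11b)

A_dot = rng.standard_normal((n, n))               # arbitrary input perturbations
b_dot = rng.standard_normal(n)
x_dot = np.linalg.solve(A, b_dot - A_dot @ x)     # tangent solve, Eq. (3)

lhs = x_bar @ x_dot                               # <x_dot, x_bar>
rhs = np.sum(A_dot * A_bar) + b_dot @ b_bar       # <A_dot, A_bar> + <b_dot, b_bar>
print(abs(lhs - rhs))                             # ~ machine precision
```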

In practice, the adjoints are incremented because they may already contain previously computed sensitivities from elsewhere in the program (see Algorithm 4). One can thus derive the adjoints of \(\mathbf{A}\) and \({b}\) and obtain the reverse-mode DSR as

$$\begin{aligned} {\bar{{b}}}_{m}^{\prime }&= \mathbf{A}_{m}^{-T} {\bar{{x}}}_{m}, \end{aligned}$$
(12a)
$$\begin{aligned} {\bar{{b}}}_{m}&={\bar{{b}}}_{m}+{\bar{{b}}}_{m}^{\prime }, \end{aligned}$$
(12b)
$$\begin{aligned} {\bar{\mathbf{A}}}_{m}&={\bar{\mathbf{A}}}_{m}-{\bar{{b}}}_{m}^{\prime }{x}_{m}^{T}, \end{aligned}$$
(12c)
$$\begin{aligned} {\bar{{x}}}_{m}&=0\,. \end{aligned}$$
(12d)

The reverse-mode application of AD to Algorithm 1 and the hand-assembled reverse DSR are illustrated in Algorithms 4 and 5. A function appended with the suffix ‘_b’ represents the reverse-mode derivative of that function. For brevity, brackets are used to show the accumulation of sensitivities for matrices and vectors via reverse-differentiated functions.

[Algorithm 4]
[Algorithm 5]
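
For illustration, the following is a minimal sketch of one reverse-mode DSR call implementing (12a)–(12d), in line with Algorithm 5. It reuses the truncated jacobi helper sketched in Sect. 2.2; the interface (function name, zero initial guess for the adjoint solve) is an illustrative assumption.

```python
# Sketch of one reverse-mode DSR call, Eqs. (12a)-(12d); the adjoint system
# A^T b_bar' = x_bar is solved with the truncated jacobi() helper from Sect. 2.2.
import numpy as np

def solve_b(A, A_bar, b_bar, x, x_bar, N):
    """One reverse-mode DSR call; returns the incremented A_bar, b_bar and reset x_bar."""
    b_bar_prime = jacobi(A.T, x_bar, np.zeros_like(x_bar), N)   # Eq. (12a), truncated
    b_bar = b_bar + b_bar_prime                                 # Eq. (12b)
    A_bar = A_bar - np.outer(b_bar_prime, x)                    # Eq. (12c)
    x_bar = np.zeros_like(x_bar)                                # Eq. (12d)
    return A_bar, b_bar, x_bar
```

Note that setting \({\bar{{x}}}_{m}\) to zero in (12d) is exactly the step that the correction of Sect. 3 will revisit.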

As we will show in Sect. 3, in contrast to the primal and its forward differentiation, if the adjoint linear systems are not fully solved, i.e. \({\bar{{b}}}_{m}^{\prime } \not = \mathbf{A}_{m}^{-T} {\bar{{x}}}_{m}\), they introduce an error that does not vanish even after a large number of outer iterations. In the next section, this error and its correction are discussed.

3 Corrected differentiated solver replacement in reverse-mode

In this section, the forward and reverse differentiation of the Jacobi solver within an outer, non-linear iterative solver is discussed. Furthermore, we discuss the effect that the initial guess has on the solution, when the differentiated inner linear solver is only partially converged. The C-DSR correction method is then developed, which includes a correction term for this error. Finally, we demonstrate the benefit of this correction method. We choose the Jacobi solver because it is easy to prove properties of its differentiation and convergence. However, our C-DSR method also benefits other solvers, as we will show later in this paper.

3.1 Error correction for reverse differentiation of Jacobi solver

The system matrix \(\mathbf{A}_{m}\) can be decomposed as \(\mathbf{A}_{m} = \mathbf{D}_{m}+\mathbf{Q}_{m}\), where \(\mathbf{D}_{m}\) and \(\mathbf{Q}_{m}\) hold the diagonal and off-diagonals entries of \(\mathbf{A}_{m}\), respectively. The iterative relaxation scheme can be written as

$$\begin{aligned} {x}_{m}^{n+1} = \mathbf{D}^{-1}({b}- \mathbf{Q}{x}_{m}^{n})\,. \end{aligned}$$
(13)

The error due to incomplete Jacobi convergence can be expressed as

$$\begin{aligned} {x}_{m}^{n+1} -{x}_{m} = \delta {x}_{m}^{n+1}&= \mathbf{D}^{-1} ({b}- \mathbf{Q}{x}_{m}^{n}) - {x}_{m} \nonumber \\&= \mathbf{D}^{-1} {b}- \mathbf{D}^{-1} \mathbf{Q}{x}_{m}^{n} -{x}_{m} \nonumber \\&= \mathbf{D}^{-1} \mathbf{A}{x}_{m} - \mathbf{D}^{-1} \mathbf{Q}{x}_{m}^{n} - {x}_{m} \nonumber \\&= (\mathbf{D}^{-1} \mathbf{A}- \mathbf{I}) {x}_{m} - \mathbf{D}^{-1} \mathbf{Q}{x}_{m}^{n} \nonumber \\&= (\mathbf{D}^{-1} \mathbf{A}- \mathbf{I}) {x}_{m} - (\mathbf{D}^{-1} \mathbf{A}-\mathbf{I}) {x}_{m}^{n} \nonumber \\&= (\mathbf{I}-\mathbf{D}^{-1} \mathbf{A}) \delta {x}_{m}^{n}\,, \end{aligned}$$
(14)

where \({x}_m\) is the exact solution to the linear system at the \(m\)th outer iteration. Therefore, after N iterations starting from an initial guess \({x}_{m}^0\), the approximate solution obtained from the linear solver can be written explicitly as

$$\begin{aligned} {x}_{m}^N ={x}_{m}+(\mathbf{I}- \mathbf{D}^{-1} \mathbf{A})^{(N)} ({x}_{m}^{0}-{x}_{m})\,. \end{aligned}$$
(15)
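
Equation (15) can be verified numerically. The short sketch below uses a random, assumed diagonally dominant test matrix (not from the paper) and checks that after N Jacobi sweeps the remaining error is exactly the initial error propagated through \((\mathbf{I}- \mathbf{D}^{-1} \mathbf{A})^{(N)}\).

```python
# Numerical check of Eq. (15): the Jacobi error after N sweeps equals the
# initial error propagated through (I - D^{-1} A)^N.
import numpy as np

rng = np.random.default_rng(1)
n, N = 6, 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # toy test matrix
b = rng.standard_normal(n)
x_exact = np.linalg.solve(A, b)

D = np.diag(np.diag(A))                           # diagonal part
Q = A - D                                         # off-diagonal part
x = x0 = rng.standard_normal(n)                   # arbitrary initial guess
for _ in range(N):                                # N Jacobi sweeps, Eq. (13)
    x = np.linalg.solve(D, b - Q @ x)

G = np.eye(n) - np.linalg.solve(D, A)             # iteration matrix I - D^{-1} A
predicted = x_exact + np.linalg.matrix_power(G, N) @ (x0 - x_exact)
print(np.linalg.norm(x - predicted))              # ~ machine precision
```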

The initial guess of the system is actually the solution to the linear system in the previous outer iteration. In this context, one can write

$$\begin{aligned} {x}_{m}^N ={x}_{m}+(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N)} ({x}_{m-1}^{N}-{x}_{m}); \quad m=1, \dots , M\,. \end{aligned}$$
(16)

in which, after a sufficient number of iterations,

$$\begin{aligned} (\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N)}=0\,. \end{aligned}$$
(17)

3.1.1 Forward differentiation

The tangent-linear model has the same behaviour as its primal, that is, the initial guess for the differentiated solver replacement (DSR) is the solution to the system in the previous outer iteration (see Algorithm 2):

$$\begin{aligned} {x}_{m}^{0}&= {x}_{m-1}^{N},\\ {\dot{{x}}}_{m}^{0}&= {\dot{{x}}}_{m-1}^{N}\,. \end{aligned}$$

The tangent-linear model of (16) is given by

$$\begin{aligned} {\dot{{x}}}_{m}^N={\dot{{x}}}_{m}+ (\epsilon _{1})_{m} + (\epsilon _{2})_{m}, \end{aligned}$$
(18)

where \({\dot{{x}}}_{m}\) is the exact solution to the tangent-linear problem, and \(\epsilon _{1}\) and \(\epsilon _{2}\) are the errors due to the incomplete solves of the primal and tangent-linear problems, respectively:

$$\begin{aligned} (\epsilon _{1})_{m}=&(N)(- {\dot{\mathbf{D}}}_{m}^{-1} \mathbf{A}_{m} - \mathbf{D}_{m}^{-1} {\dot{\mathbf{A}}}_{m}) \\&\big [(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N-1)} ({x}_{m-1}^{N}-{x}_{m})\big ]\,,\\ (\epsilon _{2})_{m}=&(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N)} ({\dot{{x}}}_{m-1}^{N}-{\dot{{x}}}_{m})\,. \end{aligned}$$

Even though the linear systems are not solved to machine precision in each outer iteration, the errors vanish when the outer loop is iterated sufficiently. Note that the number of outer iterations, M, is considered large enough such that, in the final outer iterations, \(\mathbf{A}_{m} = \mathbf{A}_{m-1}\) and \({b}_{m} = {b}_{m-1}\). As a result, the initial guess and final result of the linear solver are identical to machine precision in the final outer iteration, or formally,

$$\begin{aligned} {x}^{N}_{M}={x}_{M}^{0}&= {x}_{M-1}^{N}\\ {\dot{{x}}}^{N}_{M}= {\dot{{x}}}_{M}^{0}&= {\dot{{x}}}_{M-1}^{N} \end{aligned}$$

3.1.2 Reverse differentiation

The reverse-mode application of AD to the model is shown in Algorithm 4, where the sensitivities are accumulated over the reverse loop; for clarity, the primal expressions are not depicted except in the DSR. The incomplete convergence of the adjoint linear system means

$$\begin{aligned} {{\bar{{b}}}}^{\prime N}_{m} \ne \mathbf{A}^{-T}_{m}{\bar{{x}}}_{m}^{N} \end{aligned}$$
(19)

with the residual \((\epsilon _{{\bar{b}}})_{m}\) of the system

$$\begin{aligned} (\epsilon _{{\bar{b}}})_{m}=\mathbf{A}_{m}^{-T}{\bar{{x}}}_{m}^{N} - {\bar{{b}}}^{\prime N}_{m} \end{aligned}$$
(20)

As a result, the computation of the terms \({\bar{\mathbf{A}}}\) and \({\bar{{b}}}\) is affected in each DSR call such that

$$\begin{aligned} {\bar{{b}}}_{m}&= {\bar{{b}}}_{m}+{\bar{{b}}}^{\prime N}_{m}+(\epsilon _{{\bar{b}}})_{m}, \end{aligned}$$
(21)
$$\begin{aligned} {\bar{\mathbf{A}}}_{m}&= {\bar{\mathbf{A}}}_{m} - ({\bar{{b}}}^{\prime N}_{m}+(\epsilon _{{\bar{b}}})_{m} ) ({x}^{N}_{m})^{T},\nonumber \\&= {\bar{\mathbf{A}}}_{m} - {\bar{{b}}}^{\prime N}_{m} ({x}^{N}_{m})^{T} -(\epsilon _{{\bar{b}}})_{m}({x}^{N}_{m})^{T}\,. \end{aligned}$$
(22)

It is not difficult to derive the derivative of J w.r.t. \(\alpha\) in reverse-mode from (4) and (5),

$$\begin{aligned} {\bar{\alpha }} = {\dot{x}} ^T \left( \frac{\partial \mathbf{A}}{\partial x}\right) ^T \sum _{M}^{1}{\bar{\mathbf{A}}}_{m} +{\dot{x}}^T \left( \frac{\partial b}{\partial x}\right) ^T \sum _{M}^{1}{\bar{b}}_{m}, \end{aligned}$$
(23)

which leads to an accumulated error given by

$$\begin{aligned} \epsilon _{{\bar{\alpha }}}={\dot{x}}^T\left( -\Big (\frac{\partial \mathbf{A}}{\partial {x}}\Big )^{T}\sum _{M}^{1} \big ((\epsilon _{{\bar{b}}})_{m}({x}^{N}_{m})^{T}\big )+\Big (\frac{\partial b}{\partial {x}}\Big )^{T}\sum _{M}^{1}(\epsilon _{{\bar{b}}})_{m}\right) \end{aligned}$$
(24)

The source of the error is the residual of the adjoint systems, and this error is accumulated over the outer iterations. It is important to realise that running more outer iterations does not remove the error, contrary to what might be extrapolated from the behaviour of the primal. Due to the accumulative nature of the adjoint differentiation, with standard DSR any incomplete convergence of the adjoint systems imparts an error on the gradients, which remains even if the number of outer iterations is enough for the primal algorithm to converge. To correct this error, the state of the art is to converge the inner adjoint system solves to machine precision, which makes the adjoint computation significantly more expensive than the primal. This paper proposes an alternative approach, namely an effective way to compute a correction for this error.

3.2 Reverse-DSR correction

Equation (18) can be rewritten as

$$\begin{aligned} {\dot{{x}}}_{m}^N=&\big [\mathbf{I}-(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N)}\big ]{\dot{{x}}}_{m}\nonumber \\&+ \underbrace{(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N)} {\dot{{x}}}_{m-1}^{N}}_{(\gamma )_{m}}\nonumber \\&+(\epsilon _1)_{m}\,, \end{aligned}$$
(25)

where the term \(\gamma\) is the influence of the initial guess on the approximated tangent-linear derivative in the (m)th outer iteration of the algorithm after N Jacobi steps (linear solver iterations),

$$\begin{aligned} {\gamma }_{m} =(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N)} {\dot{{x}}}_{m-1}^{N}=(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N)} {\dot{{x}}}_{m}^{0}\,. \end{aligned}$$
(26)

To derive the reverse differentiation of expression (25), we first rewrite it as the sum of three vectors:

$$\begin{aligned} {\dot{{x}}}_{m}^N={\dot{{l}}}_{1}+{\dot{{l}}}_{2}+{\dot{{l}}}_{3}\,. \end{aligned}$$
(27)

As shown in Section 2.2.1 of [9], for such an equation the following expression holds in reverse-mode:

$$\begin{aligned} {\bar{{l}}}_{1}={\bar{{l}}}_{2}={\bar{{l}}}_{3}= {\bar{{x}}}_{m}^N\,. \end{aligned}$$
(28)

Moreover, from Section 2.2.2 of [9], the adjoint of the multiplication expression \({\dot{{l}}}_{2}=(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m})^{(N)} {\dot{{x}}}_{m-1}^{N}\) gives

$$\begin{aligned} {\bar{{x}}}_{m-1}^{N}=\Big [(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}_{m} )^{(N)}\Big ]^{T}{\bar{{l}}}_{2}\,. \end{aligned}$$
(29)

From (28) and (29), the influence of the initial guess in reverse-mode can be shown to be

$$\begin{aligned} {\bar{{x}}}_{m}^{0}={\bar{{x}}}_{m-1}^{N}= (\mathbf{I}- \mathbf{A}_{m}^{T} \mathbf{D}_{m}^{-1} )^{(N)} {\bar{{x}}}_{m}^{N}\,. \end{aligned}$$
(30)

The vector \({\bar{{x}}}_{m}^{0}\) in (30) is one of the outputs of the differentiated solver (see Algorithm 4). On the other hand, DSR is based on the assumption that the linear systems are fully converged, meaning N is large enough such that

$$\begin{aligned} {\bar{{x}}}_{m}^{0}=(\mathbf{I}- \mathbf{A}_{m}^{T} \mathbf{D}_{m}^{-1} )^{(N)} {\bar{{x}}}_{m}^{N} = 0\,. \end{aligned}$$
(31)

However, the incomplete convergence causes this assumption to be violated. If the adjoint linear system is not fully solved the term \({\bar{{x}}}_{m}^{0}\) is not zero. Consequently, the sensitivity computation is inaccurate by the error shown in (24).

Computing (30) directly requires N matrix-vector products, which is essentially as expensive as the primal linear solver. However, it can be computed much more cheaply as a by-product of a computation that is already part of the DSR.

In order to solve \(\mathbf{A}_{m}^{T}{\bar{{b}}}^{\prime }_{m} = {\bar{{x}}}_{m}^N\) in DSR, each Jacobi iteration is performed as

$$\begin{aligned} {\bar{{b}}}^{\prime n+1}_{m} =\mathbf{D}_{m}^{-1} ({\bar{{x}}}_{m}^N - \mathbf{Q}_{m}^{T} {\bar{{b}}}^{\prime n}_{m}). \end{aligned}$$
(32)

If the same number of iterations N is used for (32) as for the primal system, with an initial guess of zero, then from (15) one obtains

$$\begin{aligned} {\bar{{b}}}^{\prime N}_{m}&={\bar{{b}}}^{\prime }_{m}+(\mathbf{I}- \mathbf{D}_{m}^{-1} \mathbf{A}^{T}_{m})^{(N)} (0 - {\bar{{b}}}^{\prime }_{m})\nonumber \\&= (\mathbf{I}-(\mathbf{I}-\mathbf{D}^{-1}_{m} \mathbf{A}^{T}_{m})^{(N)}) {\bar{{b}}}^{\prime }_{m}\,. \end{aligned}$$
(33)

Using (33), it can be shown that the residual after N iterations can be computed with a single matrix-vector product, which yields exactly the same result as (30):

$$\begin{aligned} {\bar{{x}}}_{m}^{N} - \mathbf{A}_{m}^{T}{\bar{{b}}}^{\prime N}_{m} = (\mathbf{I}- \mathbf{A}_{m}^{T} \mathbf{D}_{m}^{-1})^{(N)} {\bar{{x}}}_{m}^{N} = {\bar{{x}}}_{m}^{0}\,. \end{aligned}$$
(34)

Hence, if the adjoint system of DSR in reverse-mode is not fully solved, the output variable \({\bar{{x}}}_{m}^0\) can be defined as the residual of the system. We call this C-DSR, for corrected DSR. The DSR and C-DSR approaches are compared in Algorithms 6, 7 and 8.

[Algorithm 6]
[Algorithm 7]
[Algorithm 8]
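
The sketch below illustrates the C-DSR correction (cf. Algorithms 6–8) and numerically confirms that the single matrix-vector-product residual of (34) coincides with the propagated initial-guess influence of (30). The function name and the random test data are illustrative assumptions.

```python
# Sketch of a C-DSR reverse-mode call: truncated adjoint Jacobi solve (zero
# initial guess) followed by the residual correction of Eq. (34), which is
# returned as x_bar^0 instead of zero.
import numpy as np

def solve_b_corrected(A, A_bar, b_bar, x, x_bar, N):
    D = np.diag(A)                                  # diagonal of A (and of A^T)
    Q = A - np.diag(D)                              # off-diagonal part
    b_bar_prime = np.zeros_like(x_bar)
    for _ in range(N):                              # Eq. (32) with zero initial guess
        b_bar_prime = (x_bar - Q.T @ b_bar_prime) / D
    b_bar = b_bar + b_bar_prime                     # Eq. (12b)
    A_bar = A_bar - np.outer(b_bar_prime, x)        # Eq. (12c)
    x_bar0 = x_bar - A.T @ b_bar_prime              # C-DSR correction, Eq. (34)
    return A_bar, b_bar, x_bar0

# Check: the single-matvec residual equals the propagated term of Eq. (30).
rng = np.random.default_rng(2)
n, N = 6, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)
x, x_bar = rng.standard_normal(n), rng.standard_normal(n)
_, _, x_bar0 = solve_b_corrected(A, np.zeros((n, n)), np.zeros(n), x, x_bar, N)
G = np.eye(n) - A.T @ np.diag(1.0 / np.diag(A))     # I - A^T D^{-1}
print(np.linalg.norm(x_bar0 - np.linalg.matrix_power(G, N) @ x_bar))   # ~ machine zero
```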

3.3 Application of C-DSR to other solvers

In the previous section we presented a correctness proof for C-DSR with Jacobi solvers. A similar proof can be established for any other linear solver using linear operators. Linear solvers with non-linear operators, such as GMRES or CG, do not yield to this type of analysis. However, the test cases shown in the remainder of this paper demonstrate that C-DSR also leads to improved consistency for other solvers, when incomplete convergence is set at levels typical for the primal algorithm.

4 Test cases

In this section we first demonstrate the effectiveness of C-DSR using a one-dimensional heat equation solver that uses Jacobi iterations to solve the linear systems. Then with a three-dimensional CFD solver we show that the application of C-DSR to Krylov-type linear solvers also improves the gradient accuracy.

4.1 One-dimensional (1D) non-linear heat equation

The first validation study is the finite-difference (central differences in space, backward Euler in time) solution to a non-linear 1D steady-state heat conduction problem in a uniform rod lying on the x-axis from \(x_L = 0\) to \(x_R = 1\)

$$\begin{aligned} \dfrac{\partial }{\partial x} (k(T) \dfrac{\partial T}{\partial x})=0, \end{aligned}$$
(35)

where the heat conduction coefficient k is a simple linear function of temperature T, \(k = c_1 + c_2T\), with \(c_1=1.1\) and \(c_2 =0.2\).

Fig. 1: 1D non-linear steady-state heat transfer

The domain (see Fig. 1) is discretised with 12 nodes, and Dirichlet boundary conditions are imposed on both ends. The temperature at the right boundary, \(T_R\), is defined as the control variable and the objective function is evaluated as a function of the temperature at one of the internal nodes, \(J = 100\times (T_{(i=1)})^{2}\). The primal outer loop is iterated enough that in the final outer iterations the error of the linear system is close to machine zero. The Tapenade source-transformation AD tool [15] is used to differentiate the code, with checkpointing of all outer iterations in reverse-mode.

It is worth noting that this is a steady-state problem that does not require time marching; hence the adjoint solution can be computed by linearising only around the final steady state solution, without checkpointing. We solve the primal and its adjoint in this way so that it can serve as a model problem that can be extended to more complex problems such as unsteady or segregated (decoupled) solvers later in the paper.

Two different settings are considered for the Jacobi solver (see Table 1), and the results are compared in Table 2. The results confirm that when the linear systems are solved to machine precision by the Jacobi solver, the sensitivity (\(\frac{d J}{d T_R}\)) obtained by DSR (in both forward and reverse-mode AD) and the second-order finite difference computation are in good agreement. However, when the systems are not fully solved, the sensitivity computed with DSR in reverse-mode shows a relative error of 6%. C-DSR improves the accuracy of the gradient and reduces the error to machine precision.
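
For reference, the following is a minimal sketch of a primal of this type: 1D steady non-linear heat conduction with \(k = c_1 + c_2 T\), truncated Jacobi inner solves and a lagged-coefficient (Picard) outer iteration. The boundary values, the use of Picard relaxation instead of backward-Euler pseudo-time stepping, and the objective indexing are illustrative assumptions rather than the paper's exact setup; the sketch merely shows that incomplete inner solves still yield a fully converged primal.

```python
# Sketch of a 1D steady non-linear heat conduction primal in the spirit of
# Sect. 4.1: k(T) = c1 + c2*T, 12 nodes, Dirichlet BCs, lagged-coefficient
# outer iterations with truncated Jacobi inner solves (values are illustrative).
import numpy as np

n, c1, c2 = 12, 1.1, 0.2
T_L, T_R = 0.0, 1.0                       # assumed Dirichlet values; T_R is the control
T = np.linspace(T_L, T_R, n)              # initial guess

def assemble(T):
    """Central-difference system A(T) T = b with the conductivity lagged at T."""
    k = c1 + c2 * T
    kf = 0.5 * (k[:-1] + k[1:])           # face conductivities k_{i+1/2}
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0             # Dirichlet rows
    b[0], b[-1] = T_L, T_R
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -kf[i - 1], kf[i - 1] + kf[i], -kf[i]
    return A, b

N_inner, M_outer = 5, 400                 # truncated inner solves, many outer sweeps
for m in range(M_outer):
    A, b = assemble(T)
    D = np.diag(A)
    Q = A - np.diag(D)
    for _ in range(N_inner):              # warm-started Jacobi, Eq. (13)
        T = (b - Q @ T) / D

A, b = assemble(T)
print("non-linear residual:", np.linalg.norm(A @ T - b))
print("J =", 100.0 * T[1]**2)             # objective on an internal node
```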

Table 1 1D heat transfer: Jacobi solver set up
Table 2 1D heat transfer: comparison of derivative calculation accuracy

4.2 Three-dimensional (3D) S-Bend Duct

The second validation study is an adjoint CFD computation of a VW Golf air climate duct [34], a benchmark case of the AboutFlow project [26] provided by Volkswagen AG. The flow is steady, laminar and incompressible with a Reynolds number of 300 at the inlet, based on the height of the duct; the domain is discretised with 40,000 hexahedral mesh cells.

The objective function is the mass-averaged pressure drop between inlet and outlet. To solve the flow, the in-house incompressible flow solver gpde [17] is used, which is based on the finite-volume segregated SIMPLE pressure-correction method [25]. The arising linear systems for momentum and pressure correction are solved using the bi-conjugate gradient stabilised (Bi-CGSTAB) and conjugate gradient (CG) linear solvers, respectively, from the SPARSKIT library [28]. The spring analogy method [27] is implemented in gpde to deform the volume mesh following a design change. The gpde solver is written in FORTRAN 90 and is differentiated with the AD tool Tapenade [15], without checkpointing of all outer iterations.

To compare sensitivities, the surface mesh coordinates of the middle S-section of the duct, \(\mathbf {x}_{i}\), are perturbed by a cosine function,

$$\begin{aligned} \mathbf {x}_{i} = \mathbf {x}_{i} - \alpha \cos ( \dfrac{\pi d_{i}}{2})~\mathbf {n}_{0} ; \quad d_{i} = \min (\Vert \mathbf {x}_{i} - \mathbf {x}_{0}\Vert ,1.0) \end{aligned}$$
(36)

where \(\mathbf {x}_{0}\) and \(\mathbf {n}_{0}\) are the bump centre and the surface normal, respectively. The perturbation is designed to create an inward bump in the duct (see Fig. 2) and the bump height is controlled by the variable \(\alpha\).

Fig. 2: 3D duct flow with perturbed inward bump

The differentiated code computes the derivative of the objective function at the fully converged flow state w.r.t. the design variable, in this case the height of the perturbed bump.

Table 3 Linear solvers set up for the primal, adjoint and forward AD

In practice, the convergence criteria of linear solvers in non-linear numerical methods such as CFD solvers are determined from experience [7, 22]. The solver settings for this duct flow using the gpde solver are shown in Table 3.

Table 4 3D duct flow: comparison of derivative calculation accuracy
Table 5 The effect of iterative linear solver accuracy on DSR in reverse-mode

In addition, DSR in reverse-mode is tested with several convergence criteria, i.e. different accuracies of the iterative linear solvers, in order to determine when the precision of the gradients, \(\dot{J}\), computed with DSR matches that of C-DSR. The settings and the results are shown in Table 5.

The gradient comparison in Table 4 demonstrates the validity and significance of the correction for a practical application using Krylov solvers. Table 5 shows that tightening the convergence level improves the accuracy of the gradients computed with DSR, but C-DSR still achieves higher accuracy at a much smaller computational effort.

5 Summary and conclusions

The correct treatment of iterative linear solvers in forward and reverse-mode AD has been studied. The most commonly used method to differentiate linear solvers is based on the assumption that the linear systems are fully converged, which in practice is often not the case. The analysis presented in this paper identifies the exact source of the errors arising from incompletely converged linear systems used in the inner iterations of the solution of non-linear, unsteady or segregated problems. We show how this error is linked to the initial guess provided to the linear solver, and how the error accumulates to severely affect adjoint gradients of non-linear solvers. This is also demonstrated in two test cases.

The C-DSR correction term proposed in this paper is shown to be exact for relaxation-type solvers such as Jacobi iterations and, more generally, for iterative linear solvers based on linear operators. A test case with Jacobi solvers demonstrates the validity of the approach. The C-DSR correction is then applied to a test case from Computational Fluid Dynamics which uses Krylov-type solvers for the inner systems. Compared to DSR, the proposed correction shows a significant improvement in gradient accuracy at a much smaller computational cost.

Because the correction formula consists of only a single matrix-vector product and a vector subtraction, the computational cost of computing the correction is small, which makes our method affordable and beneficial for widespread practical application.