A novel modified TRSVD method for large-scale linear discrete ill-posed problems

https://doi.org/10.1016/j.apnum.2020.08.019

Abstract

The truncated singular value decomposition (TSVD) is a popular method for solving linear discrete ill-posed problems with a small to moderately sized matrix A. This method replaces the matrix A by the closest matrix $A_k$ of low rank k, and then computes the minimal-norm solution of the linear system of equations with the rank-deficient matrix so obtained. The modified TSVD (MTSVD) method improves on the TSVD method by replacing A by a matrix that is closer to A than $A_k$ in a unitarily invariant matrix norm and has the same spectral condition number as $A_k$. Approximations of the SVD of a large matrix A can be computed quite efficiently by a randomized SVD (RSVD) method. This paper presents a novel modified truncated randomized singular value decomposition (MTRSVD) method for computing approximate solutions to large-scale linear discrete ill-posed problems. The rank, k, is determined with the aid of the discrepancy principle, but other techniques for selecting a suitable rank can also be used. Numerical examples illustrate the effectiveness of the proposed method and compare it to the truncated RSVD method.

Introduction

This paper is concerned with the computation of approximate solutions of large minimization problems of the form
$$\min_{x\in\mathbb{R}^n}\|Ax-b\|_2, \qquad (1.1)$$
where $A\in\mathbb{R}^{m\times n}$ is a large matrix whose singular values gradually decay to zero without a significant gap. In particular, A is severely ill-conditioned and may be rank-deficient. Minimization problems (1.1) with a matrix of this kind often are referred to as discrete ill-posed problems. They arise, for example, from the discretization of linear ill-posed problems, such as Fredholm integral equations of the first kind with a smooth kernel. Throughout this paper $\|\cdot\|_2$ denotes the Euclidean vector norm or the spectral matrix norm. We allow $m\ge n$ as well as $m<n$.

The vector $b\in\mathbb{R}^m$ in (1.1) represents measured data that are contaminated by an error $e\in\mathbb{R}^m$, which may stem from measurement or discretization errors. Let $\hat b\in\mathbb{R}^m$ denote the unknown error-free vector associated with b, i.e.,
$$b=\hat b+e. \qquad (1.2)$$

We will assume $\hat b$ to be in the range of A, and that a fairly accurate bound
$$\frac{\|e\|_2}{\|\hat b\|_2}\le\epsilon \qquad (1.3)$$
for the relative error in b is known. Then a regularized solution of (1.1) can be determined with the aid of the discrepancy principle; see below.

We are interested in computing an approximation of the solution $\hat x$ of minimal Euclidean norm of the unknown error-free least-squares problem
$$\min_{x\in\mathbb{R}^n}\|Ax-\hat b\|_2.$$
Let $A^\dagger\in\mathbb{R}^{n\times m}$ denote the Moore-Penrose pseudoinverse of A. Then
$$\hat x=A^\dagger\hat b. \qquad (1.4)$$
Because A has many positive singular values close to zero, the matrix $A^\dagger$ is of very large norm, and the solution of the available least-squares problem (1.1), given by
$$\check x=A^\dagger b=A^\dagger(\hat b+e)=\hat x+A^\dagger e,$$
typically is dominated by the propagated error $A^\dagger e$, and then is meaningless. This difficulty can be mitigated by replacing the matrix A by a nearby matrix that does not have tiny positive singular values. This replacement commonly is referred to as regularization. One of the most popular regularization methods for discrete ill-posed problems (1.1) of small to moderate size is the truncated singular value decomposition (TSVD); see, e.g., [2], [3], [7]. Introduce the singular value decomposition
$$A=U\Sigma V^\top, \qquad (1.5)$$
where $U=[u_1,u_2,\dots,u_m]\in\mathbb{R}^{m\times m}$ and $V=[v_1,v_2,\dots,v_n]\in\mathbb{R}^{n\times n}$ are orthogonal matrices, the superscript $\top$ denotes transposition, and the nontrivial entries (known as the singular values) of the (possibly rectangular) diagonal matrix $\Sigma=\mathrm{diag}[\sigma_1,\sigma_2,\dots,\sigma_{\min\{m,n\}}]\in\mathbb{R}^{m\times n}$ are ordered according to
$$\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_r>\sigma_{r+1}=\cdots=\sigma_{\min\{m,n\}}=0. \qquad (1.6)$$
Here r is the rank of A. Define the matrix
$$\Sigma_k=\mathrm{diag}[\sigma_1,\sigma_2,\dots,\sigma_k,0,\dots,0]\in\mathbb{R}^{m\times n} \qquad (1.7)$$
for $k\le r$ by setting the singular values $\sigma_{k+1},\sigma_{k+2},\dots,\sigma_{\min\{m,n\}}$ in (1.6) to zero. Then the matrix
$$A_k=U\Sigma_k V^\top \qquad (1.8)$$
is a best rank-k approximation of A in any unitarily invariant matrix norm, such as the spectral and Frobenius norms. We have
$$\|A_k-A\|_2=\sigma_{k+1}, \qquad \|A_k-A\|_F=\Big(\sum_{i=k+1}^{r}\sigma_i^2\Big)^{1/2}, \qquad 0\le k\le r,$$
where $\|\cdot\|_F$ denotes the Frobenius norm, and we define $A_0=0$ and $\sigma_{n+1}=0$.
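The best rank-k approximation property of (1.8) is easy to check numerically; the following minimal sketch (the test matrix and k are arbitrary choices, not from the paper) verifies $\|A-A_k\|_2=\sigma_{k+1}$:

```python
# Minimal sketch: build the best rank-k approximation A_k of (1.8) from the SVD
# and verify ||A - A_k||_2 = sigma_{k+1}. Test matrix and k are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 8, 6, 3
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) Vt
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # best rank-k approximation

gap = np.linalg.norm(A - A_k, 2)                   # spectral distance to A
assert np.isclose(gap, s[k])                       # equals sigma_{k+1}
```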

The TSVD method replaces the matrix A in (1.1) by $A_k$. Let $x_k\in\mathbb{R}^n$ denote the solution of minimal Euclidean norm of
$$\min_{x\in\mathbb{R}^n}\|A_k x-b\|_2. \qquad (1.9)$$
It is given by $x_k=A_k^\dagger b$. The discrepancy principle prescribes that the truncation index $k\ge 0$ in (1.7) be chosen as the smallest integer such that
$$\|A_k x_k-b\|_2\le\tau\epsilon\|\hat b\|_2, \qquad (1.10)$$
where $\tau>1$ is a user-chosen constant that is independent of the bound $\epsilon$ in (1.3); see [3], [7]. We will use the discrepancy principle in the computed examples reported in Section 4. However, other methods for determining k, such as the L-curve criterion and generalized cross validation, can also be used when no accurate estimate of $\|e\|_2$ is available; see, e.g., [2], [7], [11], [12], [14].
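The discrepancy-principle loop can be sketched as follows; the synthetic test problem, noise level, and $\tau=1.1$ are illustrative assumptions, not the paper's setup:

```python
# Sketch of choosing the truncation index k by the discrepancy principle:
# take the smallest k with ||A_k x_k - b||_2 <= tau * eps * ||b_hat||_2.
# The synthetic problem below (decaying spectrum, noise level eps) is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 50
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)            # rapidly decaying singular values
A = Q1 @ np.diag(s) @ Q2.T                        # SVD known by construction

x_true = np.ones(n)
b_hat = A @ x_true                                # error-free data
eps = 1e-6                                        # assumed relative noise level
e = rng.standard_normal(n)
e *= eps * np.linalg.norm(b_hat) / np.linalg.norm(e)
b = b_hat + e                                     # available contaminated data

tau = 1.1
for k in range(1, n + 1):
    x_k = Q2[:, :k] @ ((Q1[:, :k].T @ b) / s[:k])              # x_k = A_k^+ b
    residual = np.linalg.norm(b - Q1[:, :k] @ (Q1[:, :k].T @ b))  # ||A_k x_k - b||
    if residual <= tau * eps * np.linalg.norm(b_hat):
        break                                     # smallest k satisfying (1.10)
```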

The modified truncated singular value decomposition (MTSVD) method proposed in [13] replaces the matrix $A_k$ in (1.8) by a matrix that (generally) is closer to A in a unitarily invariant matrix norm and has the same condition number as $A_k$. This replacement often results in more accurate approximations of $\hat x$ when the discrepancy principle is used to determine the truncation index k; see [13] for computed examples.

The TSVD and MTSVD methods are not suitable for application to the solution of large-scale discrete ill-posed problems (1.1) due to the high cost of evaluating the SVD of a large matrix A; see, e.g., [5, p. 493] for counts of arithmetic floating point operations. However, the computational effort can be reduced by computing an approximation of the singular value decomposition by a randomized method and applying the modified truncated singular value decomposition to the computed approximate SVD. This yields the modified truncated randomized singular value decomposition (MTRSVD).

In recent years several randomized algorithms have been proposed for computing approximate factorizations of a large matrix, such as an approximate SVD; see, e.g., Halko et al. [6]. These factorizations have been used to compute approximate solutions to ill-posed problems (1.1) by Tikhonov regularization; see, e.g., Jia and Yang [10] and Xiang and Zou [15], [16].
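The basic randomized scheme, in the spirit of Halko et al. [6], can be sketched as follows; the oversampling parameter p and the power-iteration count q are illustrative choices, not values prescribed by the paper:

```python
# Sketch of a basic randomized SVD (range finder + projection, cf. Halko et al.):
# sample the range of A with a Gaussian test matrix, optionally apply power
# iterations, then compute the SVD of the small projected matrix.
import numpy as np

def rsvd(A, k, p=5, q=1, seed=None):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # Gaussian test matrix, oversampled
    Y = A @ Omega                             # sample the range of A
    for _ in range(q):                        # power iteration sharpens the decay
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for the range
    B = Q.T @ A                               # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k, :]    # approximate top-k triplets

# Usage: for a matrix of exact rank k the factorization is recovered accurately.
rng = np.random.default_rng(2)
m, n, k = 100, 80, 5
A = (rng.standard_normal((m, k)) * 10) @ rng.standard_normal((k, n))
U, s, Vt = rsvd(A, k)
assert np.allclose(U @ np.diag(s) @ Vt, A, atol=1e-6 * np.abs(A).max())
```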

It is the purpose of the present paper to discuss the use of a randomized singular value decomposition (RSVD) in conjunction with the MTSVD described in [13]. We refer to this scheme as the MTRSVD method. Our reason for regularizing by a truncated singular value decomposition instead of using Tikhonov regularization is that the former method is easier to implement. Moreover, we base our method on the MTSVD instead of on the standard TSVD, because the former typically yields approximations of the desired solution (1.4) of higher quality when the required singular vectors have been computed with high enough accuracy.

The remainder of this paper is organized as follows. Section 2 provides a brief review of the MTSVD, RSVD, and TRSVD methods. The proposed MTRSVD method is described in Section 3, where also its computational complexity and error bounds are presented. Section 4 shows several numerical examples that illustrate the efficiency of the MTRSVD method. Section 5 contains concluding remarks.

Section snippets

The MTSVD method

We review the modified TSVD (MTSVD) method introduced in [13]. First consider the TSVD method [3], [7]. It replaces the matrix A in (1.1) by the matrix $A_k$ in (1.8), and determines the least-squares solution of minimal Euclidean norm. We denote this solution by $x_k$ and assume that $k\le r=\operatorname{rank}(A)$. The vector $x_k$ can be expressed as
$$x_k=A_k^\dagger b=\sum_{j=1}^{r}\phi_j^{(k)}\,\frac{u_j^\top b}{\sigma_j}\,v_j,$$
where the $u_j$ and $v_j$ are columns of the matrices U and V in (1.5), respectively, and the filter factors $\phi_j^{(k)}$ are defined by
$$\phi_j^{(k)}=\begin{cases}1, & 1\le j\le k,\\[2pt] 0, & k<j\le r.\end{cases}$$
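The filter-factor formula above transcribes directly into code; a small sketch with random test data (not the paper's), checked against the pseudoinverse of $A_k$:

```python
# Sketch of the filter-factor form of the TSVD solution:
# x_k = sum_j phi_j^(k) (u_j' b / sigma_j) v_j, phi_j^(k) = 1 for j <= k, else 0.
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 10, 7, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
phi = (np.arange(1, len(s) + 1) <= k).astype(float)   # filter factors phi_j^(k)
x_k = Vt.T @ (phi * (U.T @ b) / s)

# Agrees with the minimal-norm least-squares solution x_k = A_k^+ b
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
assert np.allclose(x_k, np.linalg.pinv(A_k) @ b)
```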

The MTRSVD method

This subsection describes a novel modified truncated randomized singular value decomposition (MTRSVD) method for solving large-scale discrete ill-posed problems (1.1). The following result gives a closest matrix to the matrix $\check A$ in (2.3) in the spectral and Frobenius norms.

Theorem 2

Let $\check A=\check U\check\Sigma\check V^\top$ be the approximate truncated SVD (2.3) of A, and let $1\le k\le\hat k$. A closest matrix $\hat A_{\hat k}$ to the matrix $\check A$ in the spectral and Frobenius norms with smallest singular value $\check\sigma_k$ is given by
$$\hat A_{\hat k}=\check U_{\hat k}\hat\Sigma_{\hat k}\check V_{\hat k}^\top,$$
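The theorem statement is cut off in this preview. Based on the MTSVD construction in [13], the modified factor $\hat\Sigma_{\hat k}$ presumably keeps $\check\sigma_1,\dots,\check\sigma_k$ and raises the discarded values $\check\sigma_{k+1},\dots,\check\sigma_{\hat k}$ to $\check\sigma_k$, so that the smallest singular value is $\check\sigma_k$. A hedged sketch of that construction (an assumption, not the paper's verbatim definition):

```python
# Hedged sketch (assumption, mirroring the MTSVD construction in [13]): given
# k_hat computed singular triplets of A, keep sigma_1..sigma_k and raise the
# trailing singular values to sigma_k, so the smallest singular value is sigma_k.
import numpy as np

def modified_truncated_matrix(U_hat, s_hat, Vt_hat, k):
    s_mod = s_hat.copy()
    s_mod[k:] = s_hat[k - 1]          # replace sigma_{k+1},...,sigma_{k_hat} by sigma_k
    return U_hat @ np.diag(s_mod) @ Vt_hat

rng = np.random.default_rng(4)
m, n, k_hat, k = 30, 20, 8, 4
A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_hat, s_hat, Vt_hat = U[:, :k_hat], s[:k_hat], Vt[:k_hat, :]

A_mod = modified_truncated_matrix(U_hat, s_hat, Vt_hat, k)

# Nonzero singular values of A_mod are sigma_1..sigma_k followed by copies of
# sigma_k, so its spectral condition number (over its range) is sigma_1 / sigma_k.
sv = np.linalg.svd(A_mod, compute_uv=False)
assert np.isclose(sv[0] / sv[k_hat - 1], s_hat[0] / s_hat[k - 1])
```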

Numerical experiments

We first consider three linear discrete ill-posed problems that arise from the discretization of linear ill-posed problems in one space-dimension. These problems stem from Hansen's Regularization Tools [8]. The performance of the MTRSVD and TRSVD methods is reported in Subsection 4.1. Linear discrete ill-posed problems that stem from the discretization of linear ill-posed problems in two space-dimensions are discussed in Subsection 4.2. One of the problems is from IR Tools [4]. The problems in

Conclusion

The application of truncated randomized singular value decomposition methods to the solution of large-scale linear discrete ill-posed problems is discussed. Several methods are described and compared. The choice of method should depend on how quickly the singular values of the problem decay to zero with increasing index. When the singular values decay slowly, application of one step of power iteration is found to be beneficial, because this enhances the accuracy of the computed approximate singular

Acknowledgements

The authors would like to thank Silvia Gazzola for helpful comments on the use of IR Tools. They also would like to thank Silvia Noschese and the referees for comments. Research by G.H. was supported in part by the Application Fundamentals Foundation of STD of Sichuan (2020YJ0366) and the Key Laboratory of Bridge Nondestructive Testing and Engineering Calculation open fund projects (2020QZJ03); research by L.R. was supported in part by NSF grants DMS-1729509 and DMS-1720259; and research by F.Y.

References (16)

  • M.L. Baart, The use of auto-correlation for pseudo-rank determination in noisy ill-conditioned least-squares problems, IMA J. Numer. Anal. (1982)
  • C. Brezinski et al., Error estimates for linear systems with applications to regularization, Numer. Algorithms (2008)
  • H.W. Engl et al., Regularization of Inverse Problems (1996)
  • S. Gazzola et al., A MATLAB package of iterative regularization methods and large-scale test problems, Numer. Algorithms (2019)
  • G.H. Golub et al., Matrix Computations (2013)
  • N. Halko et al., Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev. (2011)
  • P.C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems (1998)
  • P.C. Hansen, Regularization tools, Numer. Algorithms (2007)
