Abstract

In this paper, we equip C^n with an indefinite scalar product induced by a specific Hermitian matrix, and our aim is to develop some block Krylov methods in this indefinite setting. In fact, starting from the block Arnoldi, block FOM, and block Lanczos methods, we design the indefinite counterparts of these block Krylov methods; along with the results obtained, we discuss the application of these methods to the solution of linear systems and, as supporting evidence, we present numerical examples.

1. Introduction

First of all, we introduce the inner product space in which our work is carried out. Recall that the indefinite inner product [.,.] on C^n has all the features of a standard inner product except that [x, x] may be nonpositive. In other words, it is linear in its first argument, conjugate-symmetric, and nondegenerate. The latter means that if [x, y] = 0 for every y in C^n, then x = 0. This kind of inner product is applied in several areas of science and is commonly defined as

[x, y] = y^*Jx,  x, y ∈ C^n,  (1)

where J is a nonsingular Hermitian matrix; in some specific scientific areas, such as the theory of relativity or the study of polarized light, J may be taken exclusively as a signature matrix,

J = diag(j_1, …, j_n),  j_i ∈ {−1, +1}.

With this particular J, the indefinite inner product [.,.] is referred to as hyperbolic and takes the form

[x, y] = y^*Jx = Σ_{i=1}^{n} j_i x_i conj(y_i).  (2)
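To fix ideas, the following short Python/NumPy snippet (our illustration, not part of the original presentation; the function name indef_ip and the chosen signature are ours) evaluates the indefinite inner product (1) for a signature matrix J as in (2).

import numpy as np

def indef_ip(x, y, J):
    # Indefinite inner product [x, y] = y^* J x (linear in the first argument).
    return np.vdot(y, J @ x)      # np.vdot conjugates its first argument

# A hyperbolic example: J = diag(+1, +1, -1) on C^3.
J = np.diag([1.0, 1.0, -1.0])
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 1.0])

print(indef_ip(x, y, J))   # 1*0 + 2*1 - 3*1 = -1
print(indef_ip(x, x, J))   # 1 + 4 - 9 = -4: [x, x] can indeed be negative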

In [1] and [2], by considering C^n with the indefinite scalar product (2), a number of Krylov subspace methods have been reviewed and restructured. These methods are the indefinite Arnoldi, indefinite full orthogonalization, and indefinite Lanczos methods. In this paper, we extend these indefinite Krylov methods to their indefinite block versions, which will be discussed in the subsequent sections.

Considering C^n equipped with the indefinite scalar product (1), we will need the following definitions.

A subspace of C^n is said to be nondegenerate with respect to the indefinite inner product [.,.] if the only vector x of the subspace that satisfies [x, y] = 0 for all y in the subspace is x = 0. Otherwise, the subspace is degenerate. For example, the nondegeneracy of the indefinite inner product [.,.] ensures that C^n itself is always nondegenerate.

If a nonzero subspace of C^n is nondegenerate, then a basis {x_1, …, x_k} of it is said to be an orthogonal basis with respect to the indefinite inner product [.,.] if [x_i, x_j] = 0 for i ≠ j, and is said to be an orthonormal basis if, in addition to orthogonality, [x_i, x_i] = ±1 for every i. If the indefinite inner product [.,.] in this definition is the special indefinite inner product presented in (2), then the above orthogonal and orthonormal bases are called J-orthogonal and J-orthonormal bases, respectively.

A matrix A ∈ C^{n×n} is said to be J-symmetric when [Ax, y] = [x, Ay] for all x, y ∈ C^n, that is, when JA = A^*J, and in this case we write A^[*] = A, where A^[*] = J^{-1}A^*J denotes the adjoint of A with respect to [.,.].
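As an illustration of this definition, the sketch below (our code; the construction A = JM with M Hermitian is one convenient way to produce test matrices and is not taken from the paper) builds a J-symmetric matrix and checks the characterization JA = A^*J numerically.

import numpy as np

rng = np.random.default_rng(0)
n = 6

# Signature matrix J = diag(+1, +1, +1, -1, -1, -1), so J = J^* and J^2 = I.
J = np.diag([1.0] * 3 + [-1.0] * 3)

# Take M Hermitian (here real symmetric); then A = J M satisfies
# J A = M = A^* J, i.e., A is J-symmetric.
M = rng.standard_normal((n, n))
M = (M + M.T) / 2
A = J @ M

print(np.allclose(J @ A, A.conj().T @ J))   # True: A is J-symmetric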

This paper is organized so that the next section is devoted to recalling the basic definitions, the block Arnoldi algorithm, and Ruhe's variant of the block Arnoldi algorithm. In the third section, the indefinite versions of the algorithms of the previous section are designed and the use of these methods in solving linear systems is discussed. The fourth section offers the indefinite version of the block Lanczos method and its application to solving linear systems with J-symmetric coefficient matrices, and finally, numerical examples are given in the fifth section.

2. Block FOM Method

In some areas of computational science, there may be a need to solve large sparse linear systems with several right-hand sides that are given at once. A nonsingular linear system with p right-hand sides can be written as AX = B, with A ∈ C^{n×n} and X, B ∈ C^{n×p}. Generally, we call such n × p matrices block vectors. Block Krylov methods are iterative methods that have been designed for such problems. Note that for A ∈ C^{n×n} and X ∈ C^{n×p}, the block Krylov subspace generated by A from X is

K_m(A, X) = block span{X, AX, A^2X, …, A^{m−1}X},

where “block span” is defined such that

block span{X, AX, …, A^{m−1}X} = { Σ_{k=0}^{m−1} A^kX γ_k : γ_k ∈ C^{p×p} }.

In general, a block Krylov subspace method acts in the same manner as a Krylov subspace method, but at each iteration the operator is applied to a block of vectors instead of just one. Most Krylov methods can be generalized to block Krylov space solvers (for example, see [3–7]). Specifically, we recall the algorithms of the three block Krylov methods on which our paper focuses, i.e., the block Arnoldi, block FOM, and block Lanczos algorithms.
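For concreteness, the following small sketch (ours; names such as block_krylov_matrix are hypothetical) stacks the blocks X, AX, …, A^{m−1}X side by side; the block Krylov subspace is the column span of the resulting matrix.

import numpy as np

def block_krylov_matrix(A, X, m):
    # Return [X, AX, ..., A^{m-1}X]; its column span is the block Krylov subspace.
    blocks = [X]
    for _ in range(m - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
X = rng.standard_normal((8, 2))           # p = 2 right-hand sides
K = block_krylov_matrix(A, X, m=3)        # an 8 x 6 matrix
print(K.shape, np.linalg.matrix_rank(K))  # generically of full column rank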

The outcome of Algorithm 1 is an orthonormal basis {V_1, V_2, …, V_{m+1}} of block vectors for the block Krylov subspace, together with a band upper Hessenberg matrix with p subdiagonals.

(1) Choose a unitary matrix V_1 of dimension n × p
(2) For j = 1, 2, …, m Do
(3)  Compute H_{ij} = V_i^* A V_j,  i = 1, 2, …, j
(4)  W_j = A V_j − Σ_{i=1}^{j} V_i H_{ij}
(5)  Compute the QR-factorization of W_j:  W_j = V_{j+1} H_{j+1,j}
(6) EndDo
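A minimal NumPy sketch of this block Arnoldi process is given below (our illustration; function and variable names are ours, the blocks are stored side by side in a single array, and no breakdown handling is included).

import numpy as np

def block_arnoldi(A, X, m):
    # m block steps: returns V of size n x (m+1)p with orthonormal columns and the
    # band upper Hessenberg Hbar of size (m+1)p x mp such that A V[:, :mp] = V Hbar.
    n, p = X.shape
    V = np.zeros((n, (m + 1) * p))
    Hbar = np.zeros(((m + 1) * p, m * p))
    V[:, :p], _ = np.linalg.qr(X)                      # V_1 from a QR factorization of X
    for j in range(m):
        W = A @ V[:, j * p:(j + 1) * p]
        for i in range(j + 1):
            Hij = V[:, i * p:(i + 1) * p].T @ W        # H_{ij} = V_i^* A V_j (real case)
            Hbar[i * p:(i + 1) * p, j * p:(j + 1) * p] = Hij
            W = W - V[:, i * p:(i + 1) * p] @ Hij
        Q, R = np.linalg.qr(W)                         # W_j = V_{j+1} H_{j+1,j}
        V[:, (j + 1) * p:(j + 2) * p] = Q
        Hbar[(j + 1) * p:(j + 2) * p, j * p:(j + 1) * p] = R
    return V, Hbar

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 30))
X = rng.standard_normal((30, 3))
V, Hbar = block_arnoldi(A, X, m=5)
print(np.linalg.norm(A @ V[:, :15] - V @ Hbar))   # block Arnoldi relation, ~1e-14
print(np.linalg.norm(V.T @ V - np.eye(18)))       # orthonormality of the basis, small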

The second algorithm, which results from the work of A. Ruhe [8], is shown in Algorithm 2.

(1) Choose p initial orthonormal vectors v_1, v_2, …, v_p
(2) For j = p, p+1, …, m Do
(3)  Set k = j − p + 1
(4)  Compute w = A v_k
(5)  For i = 1, 2, …, j Do
(6)   h_{i,k} = (w, v_i)
(7)   w = w − h_{i,k} v_i
(8)  EndDo
(9)  Compute h_{j+1,k} = ‖w‖_2 and v_{j+1} = w / h_{j+1,k}
(10) EndDo

In particular, the case p = 1 coincides with the usual Arnoldi process. According to the algorithm, the vector A v_k satisfies the relation

A v_k = Σ_{i=1}^{j+1} h_{i,k} v_i,  where k = j − p + 1,

since, by line 9, the vector left after the orthogonalization loop equals h_{j+1,k} v_{j+1}, which yields the above equality.

Thus, A V_{m−p+1} = V_{m+1} H̄_{m−p+1}, in which the matrix H̄_{m−p+1} = [h_{i,k}] is the (m+1) × (m−p+1) band upper Hessenberg matrix of coefficients computed by the algorithm and V_k represents the n × k matrix with columns v_1, …, v_k.
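Below is a sketch (ours) of Ruhe's variant; indexing is zero-based, breakdown is not handled, and, to keep the code simple, the argument m here counts the generated columns of the band Hessenberg matrix (i.e., the number of matrix-vector products), so that the final basis has m + p vectors.

import numpy as np

def block_arnoldi_ruhe(A, X, m):
    # Ruhe's variant: vectors are generated one at a time.  Returns v_1, ..., v_{m+p}
    # as the columns of V and the (m+p) x m band Hessenberg Hbar with
    # A V[:, :m] = V Hbar.
    n, p = X.shape
    V = np.zeros((n, m + p))
    Hbar = np.zeros((m + p, m))
    V[:, :p], _ = np.linalg.qr(X)             # p initial orthonormal vectors
    for j in range(p, m + p):
        k = j - p                             # zero-based column index
        w = A @ V[:, k]
        for i in range(j):
            Hbar[i, k] = np.vdot(V[:, i], w)  # h_{i,k} = (w, v_i)
            w = w - Hbar[i, k] * V[:, i]
        Hbar[j, k] = np.linalg.norm(w)
        V[:, j] = w / Hbar[j, k]
    return V, Hbar

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 30))
X = rng.standard_normal((30, 3))
V, Hbar = block_arnoldi_ruhe(A, X, m=10)
print(np.linalg.norm(A @ V[:, :10] - V @ Hbar))   # ~ machine precision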

Another issue on which we will work in the indefinite mode is the solution of linear systems with multiple right-hand sides. The block generalizations of the FOM and Lanczos methods are defined in a straightforward way for solving linear systems posed in spaces with definite inner products. As a short reminder, consider the p linear systems Ax^{(i)} = b^{(i)}, i = 1, …, p, or, in matrix notation, AX = B, in which A ∈ C^{n×n} and X, B ∈ C^{n×p}; assume that the block X_0 of initial guesses is given and that the initial block residual is R_0 = B − AX_0 ∈ C^{n×p}. Recall that in the case of a single system, that is, when p = 1, the approximate solution x_m is chosen such that the correction x_m − x_0 lies in the Krylov subspace K_m(A, r_0).

A block Krylov space method for solving the block system AX = B is an iterative method that generates approximate solutions X_m such that X_m − X_0 lies in the block Krylov subspace K_m(A, R_0), where R_0 is the initial block residual defined above.

The block FOM algorithm computes the QR-factorization of R_0, namely R_0 = V_1 R, in which the n × p matrix V_1 has orthonormal columns and R is p × p upper triangular. This factorization provides the first p vectors of the block Arnoldi basis.

Each of the p approximate solutions has the form x_i^{(m)} = x_i^{(0)} + V_m y_i, where V_m is the n × mp matrix whose columns are the first mp vectors of the block Arnoldi basis and y_i ∈ C^{mp}.

Writing X_m = [x_1^{(m)}, …, x_p^{(m)}] and Y_m = [y_1, …, y_p], we have X_m = X_0 + V_m Y_m.

Let E_1 be the mp × p matrix whose upper p × p principal block is an identity matrix and whose remaining entries are zero. Then, the factorization R_0 = V_1 R results in R_0 = V_m E_1 R.

Note that each column of E_1 R is a vector whose entries are zero apart from those in positions 1 to p, which are derived from the corresponding column of the upper triangular matrix R. The band upper Hessenberg matrix produced by the block Arnoldi process is an (m+1)p × mp matrix. The block FOM approximation eliminates its last p rows, keeping the mp × mp principal part H_m, and then solves the resulting system H_m Y_m = E_1 R. Then, the approximate solution is calculated as X_m = X_0 + V_m Y_m, and finally, from the orthogonality of the column vectors of the block Arnoldi basis, an inexpensive expression for the residual norms is obtained.
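The following sketch (our code, with no breakdown or convergence tests) assembles the block FOM approximation just described: it builds the block Arnoldi basis from R_0, solves the reduced system H_m Y_m = E_1 R, and forms X_m = X_0 + V_m Y_m.

import numpy as np

def block_fom(A, B, X0, m):
    # One block FOM cycle for AX = B with block size p = B.shape[1].
    n, p = B.shape
    R0 = B - A @ X0
    V = np.zeros((n, (m + 1) * p))
    Hbar = np.zeros(((m + 1) * p, m * p))
    V[:, :p], R = np.linalg.qr(R0)                     # R0 = V_1 R
    for j in range(m):                                 # block Arnoldi loop
        W = A @ V[:, j * p:(j + 1) * p]
        for i in range(j + 1):
            Hij = V[:, i * p:(i + 1) * p].T @ W
            Hbar[i * p:(i + 1) * p, j * p:(j + 1) * p] = Hij
            W = W - V[:, i * p:(i + 1) * p] @ Hij
        Q, Rj = np.linalg.qr(W)
        V[:, (j + 1) * p:(j + 2) * p] = Q
        Hbar[(j + 1) * p:(j + 2) * p, j * p:(j + 1) * p] = Rj
    Hm = Hbar[:m * p, :]                               # delete the last p rows
    E1R = np.zeros((m * p, p))
    E1R[:p, :] = R                                     # right-hand side E_1 R
    Y = np.linalg.solve(Hm, E1R)
    return X0 + V[:, :m * p] @ Y

rng = np.random.default_rng(4)
n, p = 40, 2
A = 10 * np.eye(n) + rng.standard_normal((n, n))       # a well-conditioned test matrix
B = rng.standard_normal((n, p))
Xm = block_fom(A, B, np.zeros((n, p)), m=10)
print(np.linalg.norm(B - A @ Xm))                      # residual norm; it shrinks as m grows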

3. Indefinite Block FOM Method

In [2], the indefinite Arnoldi process builds a J-orthogonal basis for a nondegenerate Krylov subspace, as shown in Algorithm 3.

(1) Choose a vector such that
(2) Define
(3) For Do:
(4)  For Do:
(5)   Compute and
(6)   Compute
(7)   If then stop
(8)   
(9)   
(10)   
(11)  EndDo
(12) EndDo

In the following, after recalling the indefinite Gram-Schmidt orthogonalization (Algorithm 4), we will express the block analogue of this algorithm.

(1) Input vectors
(2)
(3) If stop
(4) else and
(5) For Do
(6)  Compute , for
(7)  
(8)  If [] = 0 stop else
(9)  
(10) EndDo

Algorithm 4 takes a set of vectors and produces J-orthogonal vectors spanning the same subspace, together with the coefficients used in the orthogonalization; this implies an indefinite QR-type factorization of the input block. Note that the indefinite modified Gram-Schmidt algorithm is similar to the above algorithm, except that its seventh row is replaced by a loop that subtracts the J-projections one at a time, updating the working vector after each subtraction. A short sketch of the process is given below; the block analogue is then presented in Algorithm 5.
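The sketch (ours; it follows the modified variant just mentioned, and it stops when a vector with [w, w] = 0 is met) J-orthogonalizes the columns of a block and normalizes them so that [q_i, q_i] = ±1.

import numpy as np

def indef_ip(x, y, J):
    # Indefinite inner product [x, y] = y^* J x.
    return np.vdot(y, J @ x)

def indef_gram_schmidt(W, J):
    # J-orthogonalize the columns of W: [q_i, q_j] = 0 for i != j, [q_i, q_i] = +-1.
    n, k = W.shape
    Q = np.zeros((n, k), dtype=complex)
    signs = np.zeros(k)
    for j in range(k):
        w = W[:, j].astype(complex)
        for i in range(j):
            # subtract the J-projection onto q_i (note 1/[q_i, q_i] = signs[i])
            w = w - signs[i] * indef_ip(w, Q[:, i], J) * Q[:, i]
        ww = indef_ip(w, w, J).real
        if abs(ww) < 1e-14:
            raise RuntimeError("breakdown: [w, w] = 0")
        signs[j] = np.sign(ww)
        Q[:, j] = w / np.sqrt(abs(ww))
    return Q, signs

rng = np.random.default_rng(5)
J = np.diag([1.0] * 4 + [-1.0] * 4)
W = rng.standard_normal((8, 3))
Q, signs = indef_gram_schmidt(W, J)
print(np.round(Q.conj().T @ J @ Q, 10))   # a diagonal matrix with entries +-1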

(1) Choose vectors such that the indefinite Gram-Schmidt orthogonalization gives result:
(2) Define
(3) For Do:
(4)  Compute , for
(5)  Compute
(6) EndDo
(7) Compute the indefinite Q-R factorization of

Note that J is the matrix appearing in the indefinite scalar product (2), and the matrix used in the first step is the one obtained by the indefinite Gram-Schmidt process. Now, a simple property of the algorithm is proved.

Proposition 1. Denote by the identity matrix and define the following matrices by the above algorithm: Then the following relation holds:

Proof. The relation follows directly from the following equalities, which are derived from the algorithm:

Algorithm 6 is the modified version of Algorithm 5 for which the indefinite QR-factorization is calculated by the indefinite modified Gram-Schmidt process.

(1) Choose vectors such that the indefinite Gram-Schmidt orthogonalization gives result:
(2) For Do:
(3)  Compute
(4)   For Do
(5)   
(6)   
(7)   EndDo
(8)  Compute the indefinite QR-factorization of
(9) EndDo
(1) Choose matrix such that for
(2) For Do
(3)  Set
(4)  Compute
(5)  For Do
(6)   
(7)   
(8)  endDo
(9)  Compute
(10)  If stop
(11)  Compute
(12)  
(13)  
(14) End

In Algorithm 7, we express the indefinite version of Ruhe's variant of the block Arnoldi method, as another Krylov subspace method which acts on a group of vectors instead of just one vector.

According to Algorithm 7, the vector A v_k is expressed as a combination of v_1, …, v_{j+1}, where k = j − p + 1 and the coefficients h_{i,k} are those computed by the algorithm. Thus, A V_{m−p+1} = V_{m+1} H̄_{m−p+1}, in which V_k indicates the matrix with columns v_1, …, v_k and H̄_{m−p+1} collects the coefficients h_{i,k}. Finally, the J-orthogonality of these columns gives the corresponding projected relation for the square matrix obtained from H̄_{m−p+1} by deleting its last p rows.
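Below is a sketch of how we read this indefinite block Arnoldi process in its Ruhe-type form (our code and naming; the initial indefinite QR step is carried out vector by vector, breakdown simply raises an error, and, as before, the argument m counts the generated columns of the banded Hessenberg matrix).

import numpy as np

def indef_ip(x, y, J):
    # Indefinite inner product [x, y] = y^* J x.
    return np.vdot(y, J @ x)

def j_block_arnoldi(A, X, J, m):
    # Indefinite (J-orthogonal) block Arnoldi, Ruhe-type ordering.  Returns V with
    # J-orthogonal columns ([v_i, v_i] = +-1), the signs, and the (m+p) x m matrix
    # Hbar such that A V[:, :m] = V Hbar.
    n, p = X.shape
    V = np.zeros((n, m + p), dtype=complex)
    signs = np.zeros(m + p)
    Hbar = np.zeros((m + p, m), dtype=complex)
    for j in range(m + p):
        # the first p vectors J-orthogonalize the starting block; the rest apply A
        w = (X[:, j] if j < p else A @ V[:, j - p]).astype(complex)
        for i in range(j):
            h = signs[i] * indef_ip(w, V[:, i], J)    # coefficient against v_i
            w = w - h * V[:, i]
            if j >= p:
                Hbar[i, j - p] = h
        ww = indef_ip(w, w, J).real
        if abs(ww) < 1e-14:
            raise RuntimeError("breakdown: [w, w] = 0")
        signs[j] = np.sign(ww)
        nrm = np.sqrt(abs(ww))
        V[:, j] = w / nrm
        if j >= p:
            Hbar[j, j - p] = nrm
    return V, signs, Hbar

rng = np.random.default_rng(6)
n, p, m = 30, 2, 8
J = np.diag([1.0] * 15 + [-1.0] * 15)
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, p))
V, signs, Hbar = j_block_arnoldi(A, X, J, m)
print(np.linalg.norm(A @ V[:, :m] - V @ Hbar))          # the relation above, small
print(np.round(np.diag(V.conj().T @ J @ V).real, 10))   # diagonal entries +-1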

Now, similar to the end of the previous section, consider the linear systems written in matrix notation as AX = B, in which A ∈ C^{n×n} and X, B ∈ C^{n×p}, and assume that the block X_0 of initial guesses and the initial block residual R_0 = B − AX_0 ∈ C^{n×p} are defined as in the previous section. Recall that in the case of a single system, that is, when p = 1, the approximate solution x_m is chosen such that the correction x_m − x_0 lies in the Krylov subspace K_m(A, r_0).

A block Krylov space method for solving the block system AX = B is an iterative method that generates approximate solutions X_m such that X_m − X_0 lies in the block Krylov subspace generated from R_0, where R_0 is the initial block residual.

The indefinite block FOM computes the indefinite QR-factorization of R_0, namely R_0 = V_1 R, such that the columns of V_1 are J-orthonormal and R is upper triangular.

Each of the p approximate solutions has the form x_i^{(m)} = x_i^{(0)} + V_m y_i, where V_m now denotes the matrix of the J-orthogonal basis vectors produced by the indefinite block Arnoldi process and y_i ∈ C^{mp}.

Writing X_m = [x_1^{(m)}, …, x_p^{(m)}] and Y_m = [y_1, …, y_p], we have X_m = X_0 + V_m Y_m.

Let E_1 be, as before, the mp × p matrix whose upper p × p principal block is an identity matrix. Then, the indefinite QR-factorization R_0 = V_1 R results in R_0 = V_m E_1 R.

Each column of E_1 R is a vector whose components are zero except for those in positions 1 to p, which are extracted from the corresponding column of the upper triangular matrix R. With the banded Hessenberg matrix and its mp × mp principal part H_m defined as in the previous section, the indefinite block FOM (IBFOM) deletes the last p rows of the banded Hessenberg matrix and solves the resulting system H_m Y_m = E_1 R. Then, the approximate solution is computed as X_m = X_0 + V_m Y_m.
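Under the same reading, and reusing the functions indef_ip and j_block_arnoldi from the sketch given after Algorithm 7 (both are our own constructions, not code from the paper), the IBFOM iterate can be assembled as follows; here m counts Krylov vectors, and the reduced right-hand side is obtained by applying the J-inner products of the basis vectors to R_0, which reproduces E_1 R up to rounding.

import numpy as np

# assumes indef_ip and j_block_arnoldi as defined in the sketch after Algorithm 7
rng = np.random.default_rng(7)
n, p, m = 30, 2, 20
J = np.diag([1.0] * 15 + [-1.0] * 15)
A = 8 * np.eye(n) + rng.standard_normal((n, n))        # a well-conditioned test matrix
B = rng.standard_normal((n, p))
X0 = np.zeros((n, p))

R0 = B - A @ X0
V, signs, Hbar = j_block_arnoldi(A, R0, J, m)          # basis built from R0
Vm = V[:, :m]
rhs = np.diag(signs[:m]) @ (Vm.conj().T @ (J @ R0))    # equals E_1 R; ~zero below row p
Y = np.linalg.solve(Hbar[:m, :], rhs)                  # reduced system H_m Y = E_1 R
Xm = X0 + Vm @ Y
print(np.linalg.norm(B - A @ Xm) / np.linalg.norm(B))  # relative residual of the IBFOM iterate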

4. Block Lanczos and Indefinite Block Lanczos Methods

In 1950, Lanczos proposed an algorithm [9] that constructs an orthogonal transformation of a symmetric matrix A into a tridiagonal matrix T. It can be applied within Krylov subspace methods for solving linear systems with symmetric matrices as well as for the eigenvalue problem of a symmetric matrix. The block Lanczos algorithm was developed by Peter Montgomery and published in 1995 [10]; it is based on, and bears a strong resemblance to, the Lanczos algorithm for finding the eigenvalues of large sparse real matrices.

(1) Choose initial orthonormal vectors and set
(2) For
(3)  
(4)  Compute
(5)  If
(6)   For
(7)    
(8)    
(9)   end
(10)  else
(11)   For
(12)    
(13)    
(14)   end
(15)  end
(16)  For
(17)   
(18)   If
(19)    
(20)   end
(21)   
(22)  end
(23)  If
(24)   
(25)   
(26)   
(27)   end
(28) end
(29) For
(30)  
(31)  
(32)  For
(33)   
(34)   
(35)  end
(36)  For
(37)   
(38)   if
(39)    
(40)   end
(41)  end
(42) end

Because the matrix J and the indefinite inner product enter into the IBFOM algorithm, the number of arithmetic operations grows, so the BFOM algorithm can be expected to perform better than the IBFOM algorithm, and this is observed in practice in numerous examples. There is a Ruhe-type version of the block Lanczos algorithm that is used when the coefficient matrix is symmetric. Our purpose in this section is to build its indefinite counterpart, named the indefinite block Lanczos (Ruhe's variant) algorithm, which is used for J-symmetric matrices. We first state the block Lanczos algorithm (Algorithm 8) and then design its indefinite version; a sketch of one common formulation of the symmetric block Lanczos recurrence is given below.
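As an illustration of the classical (definite) case, here is one common three-term formulation of the symmetric block Lanczos recurrence (our sketch; the listing of Algorithm 8 above may differ in details such as the handling of rank deficiency, which we ignore here).

import numpy as np

def block_lanczos(A, X, m):
    # Symmetric block Lanczos: A real symmetric, X an n x p starting block.
    # Returns V = [V_1, ..., V_{m+1}] and the block tridiagonal Tbar of size
    # (m+1)p x mp such that A V[:, :mp] = V Tbar.
    n, p = X.shape
    V = np.zeros((n, (m + 1) * p))
    Tbar = np.zeros(((m + 1) * p, m * p))
    V[:, :p], _ = np.linalg.qr(X)
    Bprev = np.zeros((p, p))
    for j in range(m):
        Vj = V[:, j * p:(j + 1) * p]
        W = A @ Vj
        if j > 0:
            W = W - V[:, (j - 1) * p:j * p] @ Bprev.T   # subtract V_{j-1} B_{j-1}^T
            Tbar[(j - 1) * p:j * p, j * p:(j + 1) * p] = Bprev.T
        Aj = Vj.T @ W                                    # diagonal block A_j
        W = W - Vj @ Aj
        Q, Bj = np.linalg.qr(W)                          # W = V_{j+1} B_j
        V[:, (j + 1) * p:(j + 2) * p] = Q
        Tbar[j * p:(j + 1) * p, j * p:(j + 1) * p] = Aj
        Tbar[(j + 1) * p:(j + 2) * p, j * p:(j + 1) * p] = Bj
        Bprev = Bj
    return V, Tbar

rng = np.random.default_rng(8)
S = rng.standard_normal((40, 40))
A = (S + S.T) / 2                                        # symmetric test matrix
X = rng.standard_normal((40, 3))
V, Tbar = block_lanczos(A, X, m=6)
print(np.linalg.norm(A @ V[:, :18] - V @ Tbar))          # three-term block relation, ~1e-14

The short recurrence relies on symmetry: in exact arithmetic, A V_j is automatically orthogonal to V_1, …, V_{j−2}, so only the two most recent blocks need to be referenced.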

In Algorithm 9, we suggest the indefinite version of Algorithm 8:

(1) Choose matrix with -orthonormal columns and set and
(2) For
(3)  
(4) end
(5) For
(6)  
(7)  Compute
(8)  If
(9)   For
(10)    
(11)    
(12)   end
(13)  else
(14)   For
(15)    
(16)    
(17)   end
(18)  end
(19)  For
(20)   
(21)   If
(22)    
(23)   end
(24)   
(25)  end
(26)  If
(27)   
(28)   
(29)   
(30)   
(31)   
(32)  end
(33) end
(34) For
(35)  
(36)  
(37)  For
(38)   
(39)   
(40)  end
(41)  For
(42)   
(43)   if
(44)    
(45)   end
(46)  end
(47) end

Algorithm 9 transforms the matrix A into a banded (block tridiagonal) matrix, and, as was seen for the IBFOM algorithm, the following relation holds between them:

As seen for the IBFOM algorithm, by considering the linear system AX = B we obtain relations similar to those at the end of the third section; by solving the reduced linear systems and substituting into X_m = X_0 + V_m Y_m, the approximate solution of the system is obtained. Here, the reduced matrix and right-hand side are defined analogously to H_m and E_1 R.
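A quick numerical illustration of why the short (Lanczos-type) recurrence is available in the J-symmetric case is given below (our experiment; it reuses the indef_ip and j_block_arnoldi sketches from Section 3): when A is J-symmetric, the matrix of orthogonalization coefficients produced by the indefinite block Arnoldi process is, up to rounding, banded with bandwidth p, which is exactly the structure that the indefinite block Lanczos recurrence exploits.

import numpy as np

# assumes indef_ip and j_block_arnoldi as defined in the sketches of Section 3
rng = np.random.default_rng(9)
n, p, m = 30, 2, 14
J = np.diag([1.0] * 15 + [-1.0] * 15)
M = rng.standard_normal((n, n))
M = (M + M.T) / 2
A = J @ M                                   # A = J M with M symmetric is J-symmetric
X = rng.standard_normal((n, p))

V, signs, Hbar = j_block_arnoldi(A, X, J, m)
# entries h_{i,k} with i < k - p would vanish in exact arithmetic
offband = max((abs(Hbar[i, k]) for k in range(m) for i in range(m + p) if i < k - p),
              default=0.0)
print(offband)                              # tiny (rounding level): Hbar is banded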

5. Numerical Examples

Suppose that A is an n × n J-symmetric matrix and B is an n × p matrix. Our purpose is to solve the linear system AX = B via three methods: block FOM (BFOM), indefinite block FOM (IBFOM), and indefinite block Lanczos (IBLAN). In the following examples, the reported time is the time it takes to run each of the algorithms, and the reported residual is the average of the residual norms of the p systems. The number of iterations m is chosen arbitrarily.
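One plausible reading of the quantities reported in the tables is sketched below (our code; the names report and Res and the use of perf_counter are our assumptions rather than the paper's definitions).

import time
import numpy as np

def report(A, B, X, t_elapsed):
    # Average of the p residual norms, plus the elapsed time of the solver.
    res = np.linalg.norm(B - A @ X, axis=0)     # column-wise residual norms
    return res.mean(), t_elapsed

# hypothetical usage with any of the solvers sketched earlier, e.g. block_fom:
# t0 = time.perf_counter()
# X = block_fom(A, B, np.zeros_like(B), m)
# Res, t = report(A, B, X, time.perf_counter() - t0)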

Example 2. Consider the matrix A built from blocks as follows, in which the blocks are all tridiagonal matrices with random entries in (0, 1) and the indicated blocks are symmetric. With this selection, and by defining J as below, the matrix A is J-symmetric. For the chosen values of n, p, and m, the BFOM, IBFOM, and IBLAN results are as shown in Table 1.

Example 3. Consider the matrices A and J as follows, in which the indicated blocks are all tridiagonal matrices with random entries in (0, 10). Besides, the remaining blocks are matrices whose entries are zero except for their diagonal and subdiagonal entries, which are randomly selected in (0, 1). Under these circumstances, A is J-symmetric. For the chosen values of n, p, and m, we obtain the results shown in Table 2.

Example 4. Consider A and J as in the previous example. For the chosen values of n, p, and m, we obtain the results shown in Table 3.

In the above examples, the initial guess and right-hand side blocks are matrices with randomly chosen entries in (0, 1).

6. Conclusion

As can be seen from the algorithms discussed in this paper, the BFOM and IBFOM algorithms can be used to solve linear systems with arbitrary coefficient matrices. But since the IBFOM algorithm uses the indefinite inner product instead of the standard inner product and the matrix J appears in the calculations, the number of arithmetic operations increases, and as a result, the IBFOM algorithm does not perform better than the BFOM algorithm. However, our goal is to solve block linear systems with J-symmetric coefficient matrices, so the performance of the IBLAN algorithm for such systems is what matters to us. Block linear systems with J-symmetric coefficient matrices can be solved by each of the BFOM, IBFOM, and IBLAN algorithms. As can be seen in the numerical examples, the performance of the IBLAN algorithm is far better than that of the IBFOM algorithm and even better than that of the BFOM algorithm, owing to the difference in the number of arithmetic operations, and this agrees with what we established theoretically. In a sentence, the best way to solve block linear systems with J-symmetric coefficient matrices is to use the IBLAN algorithm.

Data Availability

All data is available.

Conflicts of Interest

The authors declare that they have no conflicts of interest.