Abstract

In this paper, based on the work of Ke and Ma, a modified SOR-like method is presented for solving the absolute value equation (AVE), which is obtained by equivalently expressing the implicit fixed-point equation form of the AVE as a two-by-two block nonlinear equation. Convergence conditions for the modified SOR-like method are established under certain assumptions. Numerical experiments show that the computational efficiency of the modified SOR-like method is better than that of the SOR-like method.

1. Introduction

Consider the absolute value equation (AVE)
$$Ax - |x| = b, \quad (1)$$
where $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^n$, and $|x|$ denotes the vector whose components are the absolute values of the components of $x$. Replacing "$-|x|$" in (1) by "$B|x|$" with $B \in \mathbb{R}^{n \times n}$ naturally generates the general AVE $Ax + B|x| = b$ [1, 2]. At present, the AVE attracts considerable attention because some optimization problems, such as linear programming, convex quadratic programming, and the linear complementarity problem [3–7], can be formulated as the AVE (1).

In recent years, a great deal of effort has been devoted to developing iteration methods that efficiently find the numerical solution of the AVE (1). For example, a generalized Newton method for solving the AVE (1) was presented in [8] and is simply described as follows:
$$x^{(k+1)} = \big(A - D(x^{(k)})\big)^{-1} b, \quad k = 0, 1, 2, \ldots, \quad (2)$$
where $D(x^{(k)}) = \mathrm{diag}(\mathrm{sign}(x^{(k)}))$ denotes the diagonal matrix corresponding to $\mathrm{sign}(x^{(k)})$. There are other forms of the generalized Newton method; see [9–13] for more details. Clearly, at every iteration step of the generalized Newton method (2), the inverse of the matrix $A - D(x^{(k)})$ must be computed. Since this matrix changes with the iteration index $k$, the generalized Newton method may be very costly. To avoid this changing iteration matrix, the Picard iteration method in [14] is naturally considered as follows:
$$x^{(k+1)} = A^{-1}\big(|x^{(k)}| + b\big), \quad k = 0, 1, 2, \ldots \quad (3)$$
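To make scheme (2) concrete, the following MATLAB sketch implements the generalized Newton iteration; the starting vector, tolerance, and iteration cap are illustrative choices of ours, not taken from [8].

```matlab
% A minimal sketch of the generalized Newton method (2).
% The coefficient matrix A - D(x_k) changes at every step, so a fresh
% linear system must be solved each time -- the cost noted above.
function x = generalized_newton(A, b, tol, kmax)
x = zeros(size(b));                      % illustrative starting vector
for k = 1:kmax
    D = diag(sign(x));                   % D(x_k) = diag(sign(x_k))
    x = (A - D) \ b;                     % new system at every iteration
    if norm(A*x - abs(x) - b) <= tol*norm(b)
        return
    end
end
end
```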

Clearly, the Picard iteration method (3) requires computing the inverse of the matrix $A$. Similarly, by reformulating the AVE (1) as a nonlinear equation with a two-by-two block form and combining it with the classical SOR iteration method, an SOR-like iteration method was proposed in [15] to solve the AVE; it is simply described as follows:
$$\begin{cases} x^{(k+1)} = (1-\omega)x^{(k)} + \omega A^{-1}\big(y^{(k)} + b\big), \\ y^{(k+1)} = (1-\omega)y^{(k)} + \omega\,|x^{(k+1)}|, \end{cases} \quad (4)$$
where $\omega > 0$ is the iteration parameter.
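As a point of reference for what follows, here is a minimal MATLAB sketch of the SOR-like iteration (4); factoring $A$ once with `lu` reflects that the same coefficient matrix is reused at every step, and the zero starting vectors match the setting of Section 3.

```matlab
% A minimal sketch of the SOR-like method (4) of Ke and Ma [15].
% A is factored once, but a solve with A is still needed at every step.
function [x, k] = sor_like(A, b, omega, tol, kmax)
x = zeros(size(b)); y = zeros(size(b));
[L, U, P] = lu(A);                       % factor A once, reuse below
for k = 1:kmax
    x = (1 - omega)*x + omega*(U \ (L \ (P*(y + b))));
    y = (1 - omega)*y + omega*abs(x);
    if norm(A*x - abs(x) - b) <= tol*norm(b)
        return
    end
end
end
```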

Some convergence conditions of the SOR-like iteration method were given when the involved parameter $\omega$ satisfies certain conditions. Further, from the aspect of the iteration matrix of the SOR-like method, some new convergence conditions were presented in [16].

It is noted that if the matrix $A$ in (3) or (4) is ill-conditioned, then at every iteration step of the Picard and SOR-like methods an ill-conditioned linear system must be solved, and the cost of computing the inverse of $A$ may be high. To avoid inverting $A$, Li [17] extended the classical AOR iteration method to the AVE and discussed the convergence properties of the AOR method. By using the Gauss–Seidel splitting, the generalized Gauss–Seidel (GGS) iteration method was presented in [18] to solve the AVE (1).

In this paper, we focus on the SOR-like iteration method for solving the AVE (1). By equivalently expressing the implicit fixed-point equation of the AVE as a nonlinear equation with a two-by-two block form, a modified SOR-like iteration method is obtained from a concrete matrix splitting of the involved coefficient matrix. A considerable advantage of the modified SOR-like iteration method is that the inverse of the matrix $A$ is avoided. From this point of view, its computational efficiency may be better than that of the SOR-like method when both are used to solve the AVE (1).

For our later analysis, some terminology is briefly explained here. Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space, whose norm $\|\cdot\|_2$ is the 2-norm. For $x \in \mathbb{R}^n$, $\mathrm{sign}(x)$ denotes the vector whose elements are equal to $1$, $0$, or $-1$ depending on whether the corresponding element of $x$ is larger than, equal to, or less than zero, and $\mathrm{diag}(\mathrm{sign}(x))$ denotes the diagonal matrix with diagonal $\mathrm{sign}(x)$.

The rest of this paper is divided into three sections. In Section 2, the modified SOR-like iteration method is designed and its convergence conditions are presented. In Section 3, some numerical experiments are reported. In Section 4, some concluding remarks end this paper.

2. Modified SOR-Like Iteration Method

In this section, the modified SOR-like iteration method is presented. For this purpose, by using $|x| = D(x)x$ with $D(x) = \mathrm{diag}(\mathrm{sign}(x))$ for the AVE (1), we have
$$\begin{cases} Ax - y = b, \\ -D(x)x + y = 0, \end{cases} \quad (5)$$
i.e.,
$$\bar{A}z := \begin{pmatrix} A & -I \\ -D(x) & I \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}, \quad (6)$$
where $y = |x| = D(x)x$.

Let
$$A = D_A - L_A - U_A, \quad (7)$$
where $D_A = \mathrm{diag}(A)$, and $L_A$ and $U_A$ are the strictly lower and strictly upper triangular matrices obtained from $A$, respectively. If we take
$$\bar{A} = \mathcal{D} - \mathcal{L} - \mathcal{U}, \quad (8)$$
where
$$\mathcal{D} = \begin{pmatrix} D_A & 0 \\ 0 & I \end{pmatrix}, \quad \mathcal{L} = \begin{pmatrix} L_A & 0 \\ D(x) & 0 \end{pmatrix}, \quad \mathcal{U} = \begin{pmatrix} U_A & I \\ 0 & 0 \end{pmatrix}, \quad (9)$$
then we have
$$(\mathcal{D} - \omega\mathcal{L})z = \big[(1-\omega)\mathcal{D} + \omega\mathcal{U}\big]z + \omega c, \quad (10)$$
where $z = (x^T, y^T)^T$ and $c = (b^T, 0^T)^T$. Based on equation (10), the modified SOR-like iteration method is naturally obtained and described below.

The modified SOR-like iteration method: let the initial vectors $x^{(0)} \in \mathbb{R}^n$ and $y^{(0)} \in \mathbb{R}^n$ be given and $\omega > 0$. For $k = 0, 1, 2, \ldots$ until the iteration sequence $\{(x^{(k)}, y^{(k)})\}$ is convergent, calculate
$$\begin{cases} x^{(k+1)} = (D_A - \omega L_A)^{-1}\big[(1-\omega)D_A x^{(k)} + \omega\big(U_A x^{(k)} + y^{(k)} + b\big)\big], \\ y^{(k+1)} = (1-\omega)y^{(k)} + \omega\,|x^{(k+1)}|. \end{cases} \quad (11)$$
Since $D_A - \omega L_A$ is lower triangular, only a triangular solve is required at every iteration step, and the inverse of $A$ is never needed.
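A minimal MATLAB sketch of the modified SOR-like iteration (11) follows; the backslash solve with the lower-triangular matrix $D_A - \omega L_A$ reduces to forward substitution, and the stopping rule mirrors the residual test of Section 3.

```matlab
% A minimal sketch of the modified SOR-like method (11).
% Splitting: A = DA - LA - UA as in (7).
function [x, k] = modified_sor_like(A, b, omega, tol, kmax)
DA = diag(diag(A));                      % diagonal part of A
LA = -tril(A, -1);                       % strictly lower part, as in (7)
UA = -triu(A,  1);                       % strictly upper part, as in (7)
M  = DA - omega*LA;                      % lower triangular matrix
x  = zeros(size(b)); y = zeros(size(b));
for k = 1:kmax
    x = M \ ((1 - omega)*(DA*x) + omega*(UA*x + y + b));  % triangular solve
    y = (1 - omega)*y + omega*abs(x);
    if norm(A*x - abs(x) - b) <= tol*norm(b)
        return
    end
end
end
```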

Lemma 1 is quoted for later use.

Lemma 1 (see [19]). Let $\lambda$ be any root of the real quadratic equation $\lambda^2 - b\lambda + c = 0$, where $b$ and $c$ are real. Then $|\lambda| < 1$ if and only if $|c| < 1$ and $|b| < 1 + c$.
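Although not part of the analysis, Lemma 1 is easy to illustrate numerically; in the following one-off MATLAB snippet the test coefficients are arbitrary choices of ours.

```matlab
% Numerical illustration of Lemma 1: both roots of
% lambda^2 - b*lambda + c = 0 lie strictly inside the unit disc
% if and only if |c| < 1 and |b| < 1 + c.
b = 0.9; c = 0.3;                        % arbitrary test coefficients
lam = roots([1, -b, c]);
fprintf('max |lambda| = %.4f\n', max(abs(lam)));           % 0.5477 < 1
fprintf('|c| < 1 and |b| < 1 + c: %d\n', abs(c) < 1 && abs(b) < 1 + c);
```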

Let the iteration errors be
$$e_x^{(k)} = x^* - x^{(k)}, \qquad e_y^{(k)} = y^* - y^{(k)},$$
where $(x^*, y^*)$ is the solution pair of equation (6) and $(x^{(k)}, y^{(k)})$ is generated by the iteration method (11). Then, the following convergence conditions for the modified SOR-like iteration method (11) can be given (see Theorem 1).

Theorem 1. Let $A$ and $D_A$ be nonsingular and $\omega > 0$. Denote
$$\mu = \big\|(D_A - \omega L_A)^{-1}\big[(1-\omega)D_A + \omega U_A\big]\big\|_2 \quad (12)$$
and
$$\eta = \omega\big\|(D_A - \omega L_A)^{-1}\big\|_2. \quad (13)$$
If
$$\mu|1-\omega| < 1 \quad \text{and} \quad \mu + \omega\eta + |1-\omega| < 1 + \mu|1-\omega|, \quad (14)$$
then
$$\rho(W) < 1 \quad (15)$$
and
$$\lim_{k \to \infty}\|e_x^{(k)}\|_2 = \lim_{k \to \infty}\|e_y^{(k)}\|_2 = 0, \quad (16)$$
where
$$W = \begin{pmatrix} \mu & \eta \\ \omega\mu & \omega\eta + |1-\omega| \end{pmatrix}. \quad (17)$$

Proof. Let us subtract equation (11) from the corresponding equations satisfied by the solution pair $(x^*, y^*)$ of equation (6). Then
$$\begin{cases} (D_A - \omega L_A)e_x^{(k+1)} = \big[(1-\omega)D_A + \omega U_A\big]e_x^{(k)} + \omega e_y^{(k)}, \\ e_y^{(k+1)} = (1-\omega)e_y^{(k)} + \omega\big(|x^*| - |x^{(k+1)}|\big). \end{cases} \quad (18)$$
From (18), together with $\big\||x^*| - |x^{(k+1)}|\big\|_2 \le \|e_x^{(k+1)}\|_2$, we can get
$$\begin{cases} \|e_x^{(k+1)}\|_2 \le \mu\|e_x^{(k)}\|_2 + \eta\|e_y^{(k)}\|_2, \\ \|e_y^{(k+1)}\|_2 \le \omega\|e_x^{(k+1)}\|_2 + |1-\omega|\,\|e_y^{(k)}\|_2. \end{cases} \quad (19)$$
It holds that
$$\begin{pmatrix} 1 & 0 \\ -\omega & 1 \end{pmatrix}\begin{pmatrix} \|e_x^{(k+1)}\|_2 \\ \|e_y^{(k+1)}\|_2 \end{pmatrix} \le \begin{pmatrix} \mu & \eta \\ 0 & |1-\omega| \end{pmatrix}\begin{pmatrix} \|e_x^{(k)}\|_2 \\ \|e_y^{(k)}\|_2 \end{pmatrix}. \quad (20)$$
By left-multiplying (20) by the nonnegative matrix
$$\begin{pmatrix} 1 & 0 \\ \omega & 1 \end{pmatrix}, \quad (21)$$
we have
$$\begin{pmatrix} \|e_x^{(k+1)}\|_2 \\ \|e_y^{(k+1)}\|_2 \end{pmatrix} \le W\begin{pmatrix} \|e_x^{(k)}\|_2 \\ \|e_y^{(k)}\|_2 \end{pmatrix}. \quad (22)$$
Let
$$E^{(k)} = \begin{pmatrix} \|e_x^{(k)}\|_2 \\ \|e_y^{(k)}\|_2 \end{pmatrix}. \quad (23)$$
Clearly, if $\rho(W) < 1$, then $\lim_{k \to \infty} W^k = 0$. This implies
$$\lim_{k \to \infty} E^{(k)} = 0. \quad (24)$$
In this way, the iteration sequence produced by the modified SOR-like method (11) converges to the solution of equation (6).
Next, we just need to find sufficient conditions such that $\rho(W) < 1$. Assume that $\lambda$ denotes an eigenvalue of the matrix $W$. Then
$$\det(\lambda I - W) = 0, \quad (25)$$
which is equal to
$$\lambda^2 - \big(\mu + \omega\eta + |1-\omega|\big)\lambda + \mu|1-\omega| = 0. \quad (26)$$
Using Lemma 1 for equation (26), $\rho(W) < 1$ is equivalent to
$$\mu|1-\omega| < 1 \quad \text{and} \quad \mu + \omega\eta + |1-\omega| < 1 + \mu|1-\omega|. \quad (27)$$
Therefore, if condition (14) holds, then $\rho(W) < 1$.
If the idea of this proof for the modified SOR-like method (11) is extended to the SOR-like method (4), then the corresponding matrix is as follows:
$$W_1 = \begin{pmatrix} |1-\omega| & \omega\nu \\ \omega|1-\omega| & \omega^2\nu + |1-\omega| \end{pmatrix}, \quad (28)$$
where $\nu = \|A^{-1}\|_2$ (see [15]). By simple computations, we can get that if
$$0 < \omega < 2 \quad \text{and} \quad \omega^2\nu < \big(1 - |1-\omega|\big)^2, \quad (29)$$
then the SOR-like method (4) is convergent. Therefore, we obtain a new convergence condition for the SOR-like method (4); see the following result.

Theorem 2. Let the conditions of Theorem 1 be satisfied. Denote
$$\nu = \|A^{-1}\|_2. \quad (30)$$
If
$$0 < \omega < 2 \quad \text{and} \quad \omega^2\nu < \big(1 - |1-\omega|\big)^2, \quad (31)$$
then
$$\rho(W_1) < 1, \quad (32)$$
so the iteration sequence generated by the SOR-like method (4) converges to the solution pair $(x^*, y^*)$, where $W_1$ is the matrix given in (28).

Comparing Theorem 2 with Theorem 3.1 in [15], it is easy to see that the region of the parameter $\omega$ in Theorem 2 is the same as that in Theorem 3.1 in [15]: both demand $0 < \omega < 2$. The difference between Theorem 2 and Theorem 3.1 in [15] lies in the remaining condition imposed on $\omega$ and $\nu$. The former is
$$\omega^2\nu < \big(1 - |1-\omega|\big)^2, \quad (33)$$
and the latter is the corresponding condition of Theorem 3.1 in [15], which, in form, is more complicated than the former.

From Theorem 1, Corollary 1 is obtained.

Corollary 1. Let the conditions of Theorem 1 be satisfied and take $\omega = 1$. Denote
$$\mu_1 = \big\|(D_A - L_A)^{-1}U_A\big\|_2 \quad \text{and} \quad \eta_1 = \big\|(D_A - L_A)^{-1}\big\|_2. \quad (34)$$

If
$$\mu_1 + \eta_1 < 1, \quad (35)$$
then the iteration sequence generated by the modified SOR-like method (11) with $\omega = 1$ converges to the solution pair $(x^*, y^*)$, where $\mu_1$ and $\eta_1$ are the quantities (12) and (13) evaluated at $\omega = 1$.
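In practice, the sufficient condition (14) can be tested directly for a given pair $(A, \omega)$; the following MATLAB sketch does so by forming the norms $\mu$ and $\eta$ of Theorem 1 explicitly, which is only sensible for small problems.

```matlab
% Check the sufficient convergence condition (14) of Theorem 1.
% mu and eta are the quantities (12) and (13); forming them via inv and
% norm is for illustration on small matrices only.
function ok = check_condition_14(A, omega)
DA = diag(diag(A)); LA = -tril(A, -1); UA = -triu(A, 1);
Minv = inv(DA - omega*LA);
mu   = norm(Minv*((1 - omega)*DA + omega*UA));   % (12), spectral norm
eta  = omega*norm(Minv);                         % (13)
a    = abs(1 - omega);
ok   = (mu*a < 1) && (mu + omega*eta + a < 1 + mu*a);
end
```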

3. Numerical Examples

In this section, two numerical examples are provided to show the effectiveness of the modified SOR-like method from two aspects: the number of iteration steps (denoted by "IT") and the computing time in seconds (denoted by "CPU"). We compare the modified SOR-like method with the SOR-like method [15]. All initial vectors for the two tested methods are set to the zero vector, and both methods are terminated if the relative residual error (RES) satisfies
$$\mathrm{RES} := \frac{\big\|Ax^{(k)} - |x^{(k)}| - b\big\|_2}{\|b\|_2} \le 10^{-6},$$
or if the number of iteration steps exceeds 500. All the tests are performed in MATLAB 7.0.

In the following tables, "MSOR" and "SOR" denote the modified SOR-like method and the SOR-like method [15], respectively. "—" denotes that the number of iteration steps exceeds 500 or that the CPU time exceeds 500 seconds.

To obtain a fast convergence rate for the modified SOR-like method and the SOR-like method [15], the experimentally optimal parameter $\omega$ is adopted, i.e., the value that results in the smallest number of iteration steps.
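The experimentally optimal parameter can be located by a direct sweep; the sketch below assumes a uniform grid over $(0, 2)$ with step $0.01$ (the grid is our illustrative choice; the paper only states that the optimal value is found experimentally), that the data $A$ and $b$ of one of the examples are already in the workspace, and the modified_sor_like sketch from Section 2.

```matlab
% Sweep omega and keep the value giving the fewest iteration steps (IT).
best_it = inf; best_omega = NaN;
for omega = 0.01:0.01:1.99               % illustrative grid over (0, 2)
    [xk, it] = modified_sor_like(A, b, omega, 1e-6, 500);
    if it < best_it
        best_it = it; best_omega = omega;
    end
end
fprintf('experimentally optimal omega = %.2f (IT = %d)\n', best_omega, best_it);
```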

Example 1 (see [6, 7, 17]). Let the AVE in (1) be composed with
$$A = \mathrm{Tridiag}(-I, S, -I) = \begin{pmatrix} S & -I & & & \\ -I & S & -I & & \\ & \ddots & \ddots & \ddots & \\ & & -I & S & -I \\ & & & -I & S \end{pmatrix} \in \mathbb{R}^{n \times n},$$
with
$$b = Ax^* - |x^*|,$$
where $S = \mathrm{tridiag}(-1, 8, -1) \in \mathbb{R}^{m \times m}$, $I$ is the identity matrix of order $m$, $n = m^2$, and $x^* = (-1, 1, -1, 1, \ldots)^T \in \mathbb{R}^n$. In Table 1, we list some numerical results of the modified SOR-like method and the SOR-like method for Example 1. From Table 1, it is easy to see that both methods quickly converge to the unique solution for different dimensions when the experimentally optimal parameters are used. An interesting fact is that the experimentally optimal parameters of both methods are the same. Furthermore, the value of the experimentally optimal parameter is stable and unchanged as the dimension increases. We also find that the iteration steps of both methods are the same, and that they likewise remain stable and unchanged as the dimension increases. These numerical results show that both methods are suitable for solving the AVE (1).
It is noted that, from the viewpoint of the elapsed CPU time, the modified SOR-like method consumes less CPU time than the SOR-like method. That is to say, the modified SOR-like method has better computational efficiency because each of its iteration steps is much cheaper than that of the SOR-like method.
In brief, the numerical results in Table 1 show that, under certain conditions, the computational efficiency of the modified SOR-like method surpasses that of the SOR-like method.
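For reference, the data of Example 1 can be assembled in MATLAB with Kronecker products; a sketch assuming the block-tridiagonal structure stated above, with m (and hence n = m^2) chosen freely:

```matlab
% Build A = Tridiag(-I, S, -I) with S = tridiag(-1, 8, -1), n = m^2,
% and the right-hand side b = A*xstar - |xstar|.
m = 32; n = m^2;
e = ones(m, 1);
S = spdiags([-e, 8*e, -e], -1:1, m, m);          % tridiag(-1, 8, -1)
T = spdiags([-e, -e], [-1, 1], m, m);            % -1 on the off-diagonals
A = kron(speye(m), S) + kron(T, speye(m));       % Tridiag(-I, S, -I)
xstar = (-1).^(1:n)';                            % alternating exact solution
b = A*xstar - abs(xstar);
```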

Example 2. For the AVE in (1), we chose a random matrix $A \in \mathbb{R}^{n \times n}$ whose singular values all exceed 1, which guarantees that the AVE (1) has a unique solution for any right-hand side. The right-hand side is set to be $b = Ax^* - |x^*|$, where $x^*$ denotes the prescribed exact solution. For Example 2, we again compare the modified SOR-like method with the SOR-like method in [15]; see Table 2 for the concrete numerical results. Table 2 shows that both methods quickly converge to the unique solution when the experimentally optimal parameters are applied. These numerical results further confirm the observations from Table 1, i.e., under certain conditions the modified SOR-like method surpasses the SOR-like method in terms of computational efficiency.
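Since Example 2 only requires a random matrix whose singular values all exceed 1, one common construction (an assumption on our part, not necessarily the one used for Table 2) is a shifted Gram matrix, whose smallest eigenvalue, and hence smallest singular value, is at least n:

```matlab
% One way to generate a random A with all singular values above 1
% (an illustrative construction, not necessarily that of Table 2):
% A = R'*R + n*I is symmetric positive definite with eigenvalues >= n.
n = 1000;
rng(0);                                  % reproducible random data
R = rand(n, n);
A = R'*R + n*eye(n);
xstar = (-1).^(1:n)';                    % a prescribed exact solution
b = A*xstar - abs(xstar);
```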

4. Conclusion

In this paper, by equivalently expressing the absolute value equation (AVE) as a nonlinear equation with a two-by-two block form, we have presented a modified SOR-like method to solve the AVE and discussed its convergence properties under certain conditions. Numerical experiments show that, under certain conditions, the computational efficiency of the modified SOR-like method surpasses that of the SOR-like method in [15].

In addition, it is worth noting that finding the theoretical optimal parameter, which yields the smallest number of iteration steps of the modified SOR-like method, remains necessary future work, although it is a very difficult task.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (no. 11961082).