
The Picard-HSS-SOR iteration method for absolute value equations

Abstract

In this paper, we present the Picard-HSS-SOR iteration method for finding the solution of the absolute value equation (AVE), which is more efficient than the Picard-HSS iteration method for AVE. The convergence results of the Picard-HSS-SOR iteration method are proved under certain assumptions imposed on the involved parameter. Numerical experiments demonstrate that the Picard-HSS-SOR iteration method for solving absolute value equations is feasible and effective.

1 Introduction

Let \(A \in R^{n\times n}\), \(b \in R^{n}\). We consider the following absolute value equation (AVE):

$$ Ax- \vert x \vert =b, $$
(1.1)

where \(|x|\) denotes the vector in \(R^{n}\) whose components are the absolute values of the components of x. The AVE (1.1) is a special case of the generalized system of absolute value equations of the form

$$ Ax+B \vert x \vert =b, $$
(1.2)

where \(B\in R^{n\times n}\). The system of absolute value equation (1.2) was introduced in [1] and investigated in a more general context in [2].

The importance of the AVE (1.1) arises from the fact that linear programs, bimatrix games and other important problems in optimization can all be reduced to a system of absolute value equations. In recent years, the problem of finding the solution of the AVE has attracted much attention and has been studied in the literature [3–18]. For the numerical solution of the AVE (1.1), many efficient numerical methods exist, such as the SOR-like iteration method [12], the relaxed nonlinear PHSS-like iteration method [15], the Levenberg–Marquardt method [16], the generalized Newton method [17], the Gauss–Seidel iteration method [19] and so on.

Recently, Salkuyeh [18] presented the Picard-HSS iteration method for solving the AVE and established its convergence theory under suitable conditions. Numerical experiments in [18] showed that the Picard-HSS iteration method is more efficient than the Picard iteration method and the generalized Newton method.

In this paper, we present a new iteration method, the Picard-HSS-SOR iteration method, for finding the solution of the absolute value equation (AVE), which is more efficient than the Picard-HSS iteration method for the AVE. The convergence results of the Picard-HSS-SOR iteration method are proved under certain assumptions imposed on the involved parameter. Numerical experiments demonstrate that the Picard-HSS-SOR iteration method for solving absolute value equations is feasible and effective.

This article is arranged as follows. In Sect. 2, we recall the Picard-HSS iteration method and some results that will be used in the following analysis. The Picard-HSS-SOR iteration method and its convergence analysis are proposed in Sect. 3. Experimental results and conclusions are given in Sects. 4 and 5, respectively.

2 Preliminaries

Firstly, we present some notations and auxiliary results.

The symbol \(I_{n}\) denotes the \(n\times n\) identity matrix. \(\|A\|\) denotes the spectral norm defined by \(\|A\|:=\max \{\|Ax\|: x\in \mathbf{R}^{n}, \|x\|=1 \}\), where \(\|x\|\) is the 2-norm. For \(x\in \mathbf{R}^{n}\), \(\operatorname{sign}(x)\) denotes a vector with components equal to −1, 0 or 1 depending on whether the corresponding component of x is negative, zero or positive. In addition, \(\operatorname{diag}(\operatorname{sign}(x))\) is a diagonal matrix whose diagonal elements are \(\operatorname{sign}(x)\). A matrix \(A= (a_{ij} )\in R^{m\times n}\) is said to be nonnegative (positive) if its entries satisfy \(a_{ij}\geq 0 (a_{ij}>0)\) for all \(1\leq i\leq m\) and \(1\leq j\leq n\).

Proposition 2.1

([2])

Suppose that \(A \in \mathbf{R}^{n\times n}\) is invertible. If \(\|A^{-1}\|< 1\), then the AVE in (1.1) has a unique solution for any \(b\in \mathbf{R}^{n}\).
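Proposition 2.1 can be illustrated numerically: when \(\|A^{-1}\|<1\), the fixed-point map \(x \mapsto A^{-1}(|x|+b)\) is a contraction (by Lemma 2.1(I) below), so Picard iteration converges to the unique solution. The following sketch uses hypothetical random data of our own choosing.

```python
import numpy as np

# Numerical check of Proposition 2.1 with hypothetical data: if ||A^{-1}|| < 1,
# the map x -> A^{-1}(|x| + b) is a contraction, so Ax - |x| = b has a unique
# solution, reachable by simple fixed-point (Picard) iteration.
rng = np.random.default_rng(0)
n = 50
A = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # strongly diagonally dominant
assert np.linalg.norm(np.linalg.inv(A), 2) < 1           # hypothesis of Prop. 2.1

b = rng.standard_normal(n)
x = np.zeros(n)
for _ in range(200):                                     # Picard iteration
    x = np.linalg.solve(A, np.abs(x) + b)

residual = np.linalg.norm(A @ x - np.abs(x) - b)
```

After enough iterations the residual of (1.1) is reduced to roundoff level, consistent with unique solvability.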

Lemma 2.1

([20])

For any vectors \(x=(x_{1},x_{2},\ldots,x_{n})^{T} \in \mathbf{R}^{n}\) and \(y=(y_{1},y_{2},\ldots,y_{n})^{T} \in \mathbf{R}^{n}\), the following results hold:

(I) \(\||x|-|y|\| \leq \|x-y\|\); (II) if \(0 \leq x \leq y\), then \(\|x\| \leq \|y\|\);

(III) if \(x \leq y\) and P is a nonnegative matrix, then \(P x \leq P y\), where \(x\leq y\) denotes \(x_{i}\leq y_{i}, 1\leq i\leq n\).
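Part (I) of Lemma 2.1 follows from the componentwise reverse triangle inequality \(||x_{i}|-|y_{i}||\leq |x_{i}-y_{i}|\); a quick randomized sanity check (with data of our own choosing):

```python
import numpy as np

# Randomized check of Lemma 2.1(I): || |x| - |y| || <= || x - y ||.
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.standard_normal(8), rng.standard_normal(8)
    assert np.linalg.norm(np.abs(x) - np.abs(y)) <= np.linalg.norm(x - y) + 1e-12
```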

Let \(A\in \mathbf{R}^{n\times n}\) be a non-Hermitian positive definite matrix. Then the matrix A possesses a Hermitian/skew-Hermitian (HSS) splitting

$$ A=H+S, $$

where

$$ H=\frac{1}{2} \bigl(A+A^{H} \bigr) \quad\textit{and}\quad {{S= \frac{1}{2} \bigl(A-A^{H} \bigr)}}. $$

Algorithm 2.1

(The Picard-HSS iteration method [18])

Given an initial guess \(x^{(0)}\in \mathbf{R}^{n}\) and a sequence \(\{ l_{k} \} ^{\infty }_{k=0}\) of positive integers, compute \(x^{(k+1)}\) for \(k = 0, 1, 2 \),…, using the following iteration scheme until \(\{ x^{(k)} \} \) satisfies the stopping criterion:

(a) Set \(x^{(k,0)}: = x^{(k)}\);

(b) for \(l=0, 1,\ldots, l_{k}-1\), solve the following linear systems to obtain \(x^{(k,l+1)}\):

$$ \textstyle\begin{cases} ( \alpha I+H )x^{(k,l+\frac{1}{2})} = (\alpha I-S )x^{(k,l)}+ \vert x^{(k)} \vert +b, \\ ( \alpha I+S )x^{(k,l+1)} = (\alpha I-H )x^{(k,l+ \frac{1}{2})}+ \vert x^{(k)} \vert +b, \end{cases} $$
(2.1)

where α is a given positive constant;

(c) set \(x^{(k+1)}:=x^{(k,l_{k})}\).

The \((k+1)\)th iterate of the Picard-HSS method can be written as

$$ \begin{aligned} x^{(k+1)}& =T^{l_{k}}( \alpha )x^{(k)}+\sum^{l_{k}-1}_{j=0}T^{j}( \alpha )G(\alpha ) \bigl( \bigl\vert x^{(k)} \bigr\vert +b \bigr) \\ &=T^{l_{k}}(\alpha )x^{(k)}+ \bigl(I-T^{l_{k}}(\alpha ) \bigr)A^{-1} \bigl( \bigl\vert x^{(k)} \bigr\vert +b \bigr),\quad k=0,1,2,\ldots, \end{aligned} $$
(2.2)

where

$$ T(\alpha )=(\alpha I+S)^{-1}(\alpha I-H) (\alpha I+H)^{-1}( \alpha I-S) $$

and

$$ G(\alpha )=2\alpha (\alpha I+S)^{-1}(\alpha I+H)^{-1}. $$
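The scheme (2.1)–(2.2) can be sketched in dense NumPy linear algebra as follows. The function and parameter names (`alpha`, `lk`, `kmax`, `tol`) and the stopping rule are our own choices; a practical implementation would factor \(\alpha I+H\) and \(\alpha I+S\) once outside the loops.

```python
import numpy as np

# A minimal dense-matrix sketch of Algorithm 2.1 (the Picard-HSS iteration).
def picard_hss(A, b, alpha=1.0, lk=10, kmax=500, tol=1e-6):
    n = A.shape[0]
    H = 0.5 * (A + A.T)                   # Hermitian part of A
    S = 0.5 * (A - A.T)                   # skew-Hermitian part of A
    I = np.eye(n)
    x = np.zeros(n)
    for k in range(kmax):
        if np.linalg.norm(b + np.abs(x) - A @ x) <= tol * np.linalg.norm(b):
            return x, k
        rhs = np.abs(x) + b               # |x^{(k)}| + b, frozen during inner sweeps
        z = x.copy()
        for _ in range(lk):               # inner HSS half-steps (2.1)
            z_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ z + rhs)
            z = np.linalg.solve(alpha * I + S, (alpha * I - H) @ z_half + rhs)
        x = z
    return x, kmax
```

With a positive definite A satisfying \(\|A^{-1}\|<1\), Theorem 2.1 below guarantees convergence for sufficiently many inner sweeps.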

Theorem 2.1

([18])

Let \(A\in \mathbf{R}^{n\times n}\) be a positive definite matrix. If \(v=\|A^{-1}\|<1\), then the AVE (1.1) has a unique solution \(x^{*}\), and for any initial guess \(x^{(0)}\in \mathbf{R}^{n}\) and any sequence of positive integers \(l_{k}, k=0,1,2,\ldots \) , the iteration sequence \(\{ x^{(k)} \} \) generated by the Picard-HSS iteration method converges to \(x^{*}\) provided that \(\widetilde{l}=\lim \inf_{k\to +\infty }l_{k}\geq N\), where N is a natural number satisfying

$$ \bigl\Vert T^{s}(\alpha ) \bigr\Vert < \frac{1-v}{1+v},\quad \forall s\geq N. $$
(2.3)

3 The Picard-HSS-SOR iteration method

In this section, we introduce the Picard-HSS-SOR iteration method and prove the convergence of the proposed method.

Recently, Ke and Ma presented the SOR-like iteration method for solving (1.1) in [13]. Letting \(y=|x|\), the AVE (1.1) is equivalent to

$$ \textstyle\begin{cases} Ax-y= b, \\ - \vert x \vert +y= 0, \end{cases} $$
(3.1)

that is,

$$ \mathcal{A}z:= \begin{pmatrix} A & -I_{n} \\ -D(x) & I_{n} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix} =:\mathbf{b}, $$
(3.2)

where \(D(x):=\operatorname{diag}(\operatorname{sign}(x)), x\in \mathbf{R}^{n}\).

Based on Eq. (3.2), we present the Picard-HSS-SOR iteration method for solving AVE (3.1) as follows.

Algorithm 3.1

(The Picard-HSS-SOR iteration method)

Let \(A\in \mathbf{R}^{n\times n}\) be a positive definite matrix with \(H=\frac{1}{2} (A+A^{T} )\) and \(S=\frac{1}{2} (A-A^{T} )\) being the Hermitian and skew-Hermitian parts of the matrix A, respectively. Given an initial guess \(x^{(0)}\in \mathbf{R}^{n}\) and \(y^{(0)}\in \mathbf{R}^{n}\), compute \(\{ (x^{(k+1)}, y^{(k+1)} ) \} \) for \(k = 0, 1, 2 \),…, using the following iteration scheme until \(\{ (x^{(k)}, y^{(k)} ) \} \) satisfies the stopping criterion:

(i) Set \(x^{(k,0)}: = x^{(k)}\);

(ii) for \(l=0, 1,\ldots, l_{k}-1\), solve the following linear systems to obtain \(x^{(k,l+1)}\):

$$ \textstyle\begin{cases} ( \alpha I+H )x^{(k,l+\frac{1}{2})} = (\alpha I-S )x^{(k,l)}+y^{(k)}+b, \\ ( \alpha I+S )x^{(k,l+1)} = (\alpha I-H )x^{(k,l+ \frac{1}{2})}+y^{(k)}+b; \end{cases} $$
(3.3)

(iii) set

$$ \textstyle\begin{cases} x^{(k+1)} =x^{(k,l_{k})}, \\ y^{(k+1)} = (1-\tau )y^{(k)}+\tau \vert x^{(k+1)} \vert , \end{cases} $$
(3.4)

where \(\alpha > 0\) and \(0 < \tau < 2\).
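Steps (i)–(iii) can be sketched as follows; the function and parameter names (`alpha`, `tau`, `lk`, `kmax`, `tol`) and the stopping rule are our own choices. Note that with \(\tau =1\) and \(y^{(0)}=|x^{(0)}|\), the method reduces to the Picard-HSS iteration.

```python
import numpy as np

# A minimal dense-matrix sketch of Algorithm 3.1 (the Picard-HSS-SOR iteration).
def picard_hss_sor(A, b, alpha=1.0, tau=1.0, lk=10, kmax=500, tol=1e-6):
    n = A.shape[0]
    H = 0.5 * (A + A.T)                   # Hermitian part of A
    S = 0.5 * (A - A.T)                   # skew-Hermitian part of A
    I = np.eye(n)
    x = np.zeros(n)
    y = np.zeros(n)                       # y^{(0)}, the auxiliary variable y = |x|
    for k in range(kmax):
        if np.linalg.norm(b + np.abs(x) - A @ x) <= tol * np.linalg.norm(b):
            return x, k
        rhs = y + b                       # y^{(k)} + b, frozen during inner sweeps
        z = x.copy()
        for _ in range(lk):               # inner HSS half-steps (3.3)
            z_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ z + rhs)
            z = np.linalg.solve(alpha * I + S, (alpha * I - H) @ z_half + rhs)
        x = z
        y = (1 - tau) * y + tau * np.abs(x)   # SOR-like relaxation (3.4)
    return x, kmax
```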

Let \((x^{*}, y^{*})\) be the solution pair of the nonlinear equation (3.1) and \((x^{(k)}, y^{(k)})\) be produced by Algorithm 3.1. Define the iteration errors

$$ e^{x}_{k}=x^{*}-x^{(k)},\qquad e^{y}_{k}=y^{*}-y^{(k)}. $$

Next, we will prove the main result of this paper.

Theorem 3.1

Let \(v=\|A^{-1}\|\), \(\beta =|1-\tau |\) and \(\widetilde{l}=\lim \inf_{k\to +\infty }l_{k}\geq N\), where N is a natural number satisfying (2.3). If \(0<\tau <2\) and

$$ 4\beta \tau v + \bigl(1+\tau ^{2} \bigr) \bigl(1+4v^{2} \bigr)< 1, $$
(3.5)

then the inequality

$$ \left \Vert \begin{pmatrix} e^{x}_{k+1} \\ e^{y}_{k+1} \end{pmatrix} \right \Vert < \left \Vert \begin{pmatrix} e^{x}_{k} \\ e^{y}_{k} \end{pmatrix} \right \Vert $$
(3.6)

holds for \(k=0,1,2,\ldots \) .

Proof

The \((k+1)\)th iterate of the Picard-HSS-SOR iteration method can be written as

$$ \textstyle\begin{cases} x^{(k+1)}=T^{l_{k}}(\alpha )x^{(k)}+(I-T^{l_{k}}( \alpha ))A^{-1} (y^{(k)}+b ), \\ y^{(k+1)}=(1-\tau )y^{(k)}+\tau \vert x^{(k+1)} \vert , \end{cases} $$
(3.7)

where

$$ T(\alpha )=(\alpha I+S)^{-1}(\alpha I-H) (\alpha I+H)^{-1}( \alpha I-S). $$

Since \((x^{*}, y^{*})\) is the solution pair of the nonlinear equation (3.1), from (3.7), we can obtain

$$\begin{aligned} &e^{x}_{k+1}=T^{l_{k}}(\alpha )e^{x}_{k}+ \bigl(I-T^{l_{k}}(\alpha ) \bigr)A^{-1}e^{y}_{k}, \end{aligned}$$
(3.8)
$$\begin{aligned} &e^{y}_{k+1}=(1-\tau )e^{y}_{k}+\tau \bigl( \bigl\vert x^{*} \bigr\vert - \bigl\vert x^{(k+1)} \bigr\vert \bigr). \end{aligned}$$
(3.9)

From Lemma 2.1 and (3.9), we can obtain

$$\begin{aligned} \bigl\Vert e_{k+1}^{y} \bigr\Vert & \leq \vert 1-\tau \vert \cdot \bigl\Vert e_{k}^{y} \bigr\Vert +\tau \bigl\Vert \bigl\vert x^{*} \bigr\vert - \bigl\vert x^{(k+1)} \bigr\vert \bigr\Vert \\ & \leq \vert 1-\tau \vert \cdot \bigl\Vert e_{k}^{y} \bigr\Vert +\tau \bigl\Vert x^{*}-x^{(k+1)} \bigr\Vert \\ &=\beta \cdot \bigl\Vert e_{k}^{y} \bigr\Vert +\tau \bigl\Vert e_{k+1}^{x} \bigr\Vert . \end{aligned}$$
(3.10)

From Theorem 2.1 and (3.8), we have

$$ \begin{aligned} \bigl\Vert e^{x}_{k+1} \bigr\Vert & \leq \bigl\Vert T^{l_{k}}(\alpha ) \bigr\Vert \bigl\Vert e^{x}_{k} \bigr\Vert + \bigl(1+ \bigl\Vert T^{l_{k}}( \alpha ) \bigr\Vert \bigr) \bigl\Vert A^{-1} \bigr\Vert \bigl\Vert e^{y}_{k} \bigr\Vert \\ & \leq \bigl\Vert e^{x}_{k} \bigr\Vert +2v \bigl\Vert e^{y}_{k} \bigr\Vert . \end{aligned} $$
(3.11)

Therefore, from (3.10) and (3.11), we have

$$ \begin{pmatrix} 1 & 0 \\ -\tau & 1 \end{pmatrix} \begin{pmatrix} \Vert e_{k+1}^{x} \Vert \\ \Vert e_{k+1}^{y} \Vert \end{pmatrix} \leq \begin{pmatrix} 1 & 2v \\ 0 & \beta \end{pmatrix} \begin{pmatrix} \Vert e_{k}^{x} \Vert \\ \Vert e_{k}^{y} \Vert \end{pmatrix}. $$
(3.12)

Let

$$ P= \begin{pmatrix} 1 & 0 \\ \tau & 1 \end{pmatrix}, $$

which is the inverse of the coefficient matrix on the left-hand side of (3.12). Clearly P is nonnegative, i.e. \(P\geq 0\).

According to Lemma 2.1, multiplying (3.12) from the left by the nonnegative matrix P, we obtain

$$ \begin{pmatrix} \Vert e_{k+1}^{x} \Vert \\ \Vert e_{k+1}^{y} \Vert \end{pmatrix} \leq \begin{pmatrix} 1 & 2v \\ \tau & \beta +2\tau v \end{pmatrix} \begin{pmatrix} \Vert e_{k}^{x} \Vert \\ \Vert e_{k}^{y} \Vert \end{pmatrix}.$$
(3.13)

Let

$$ W= \begin{pmatrix} 1& 2v \\ \tau & \beta +2\tau v \end{pmatrix}, $$

thus, we get

$$ \begin{pmatrix} \Vert e_{k+1}^{x} \Vert \\ \Vert e_{k+1}^{y} \Vert \end{pmatrix} \leq \Vert W \Vert \begin{pmatrix} \Vert e_{k}^{x} \Vert \\ \Vert e_{k}^{y} \Vert \end{pmatrix}. $$
(3.14)

Next, we consider the choice of the parameter τ such that \(\|W\|<1\), so that the inequality (3.6) holds. Since

$$ W^{T}W= \begin{pmatrix} 1+\tau ^{2} & \tau \beta + 2v(1+\tau ^{2}) \\ \tau \beta + 2v(1+\tau ^{2}) & 4v^{2}+(\beta +2\tau v)^{2} \end{pmatrix}, $$

we can obtain

$$ \det \bigl(W^{T}W \bigr)=\beta ^{2} $$
(3.15)

and

$$ \operatorname{tr} \bigl(W^{T}W \bigr)=\beta ^{2}+4\tau v \beta + \bigl(1+\tau ^{2} \bigr) \bigl(1+4v^{2} \bigr). $$
(3.16)

Suppose λ is an eigenvalue of the matrix \(W^{T}W\); since \(W^{T}W\) is symmetric positive semidefinite, \(\lambda \geq 0\). Then λ satisfies

$$ \lambda ^{2}-\operatorname{tr} \bigl(W^{T}W \bigr)\lambda +\det \bigl(W^{T}W \bigr)=0. $$
(3.17)

Thus, we can obtain the following relations:

$$ \lambda _{1}+\lambda _{2}=\operatorname{tr} \bigl(W^{T}W \bigr), \qquad\lambda _{1} \lambda _{2}= \det \bigl(W^{T}W \bigr), $$

where \(\lambda _{1}\) and \(\lambda _{2}\) are eigenvalues of the matrix \(W^{T}W\).

If \(0<\tau <2\), then \(\beta =|1-\tau |<1\), so \(\det (W^{T}W )=\beta ^{2}<1\), that is, \(0\leq \lambda _{1}\lambda _{2}<1\).

From (3.5), (3.15) and (3.16), we have \(\lambda _{1}+\lambda _{2}<1+\lambda _{1}\lambda _{2}\), that is, \((\lambda _{1}-1)(\lambda _{2}-1)>0\). Together with \(0\leq \lambda _{1}\lambda _{2}<1\), this yields

$$ 0\leq \lambda _{1}< 1 \quad\text{and}\quad 0\leq \lambda _{2}< 1. $$

Therefore \(\|W\|<1\). The proof is completed. □
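The identities (3.15) and (3.16) used in the proof can be checked numerically for sample values of v and τ (values of our own choosing):

```python
import numpy as np

# Check det(W^T W) = beta^2 and
# tr(W^T W) = beta^2 + 4*tau*v*beta + (1 + tau^2)(1 + 4*v^2)
# for the 2x2 matrix W from the proof of Theorem 3.1.
v, tau = 0.2, 0.8
beta = abs(1 - tau)
W = np.array([[1.0, 2 * v],
              [tau, beta + 2 * tau * v]])
M = W.T @ W
assert abs(np.linalg.det(M) - beta**2) < 1e-12
assert abs(np.trace(M) - (beta**2 + 4*tau*v*beta + (1+tau**2)*(1+4*v**2))) < 1e-12
```

The determinant identity is immediate from \(\det (W^{T}W)=(\det W)^{2}=(\beta +2\tau v-2\tau v)^{2}=\beta ^{2}\).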

4 Numerical results

To illustrate the implementation and efficiency of the Picard-HSS-SOR iteration method, we consider the following test problem. All experiments are performed in MATLAB R2019a on a personal computer with a 2.4 GHz central processing unit (Intel(R) Core(TM) i5-3210M) and 8 GB memory. We use a null vector as the initial guess, and all experiments are terminated if the current iterates satisfy

$$ \frac{ \Vert b+ \vert x^{(k)} \vert -Ax^{(k)} \Vert _{2}}{ \Vert b \Vert _{2}}\leq 10^{-6}, $$

or if the number of the prescribed iteration steps \(k_{\max }=500\) is exceeded. In addition, the stopping criterion for the inner iterations is set to be

$$ \frac{ \Vert b^{(k)}-As^{(k,l_{k})} \Vert _{2}}{ \Vert b^{(k)} \Vert _{2}}\leq 0.01, $$

where \(b^{(k)}=|x^{(k)}|+b-Ax^{(k,l_{k})}\), \(s^{(k,l_{k})}=x^{(k,l_{k})}-x^{(k,l_{k}-1)}\), and \(l_{k}\) is the number of inner iteration steps; the maximum number of inner iterations is set to 10.

Next, we consider the two-dimensional convection-diffusion equation

$$ \textstyle\begin{cases} { - ({u_{xx}} + {u_{yy}}) + q({u_{x}} + {u_{y}}) + pu = f(x,y),\quad(x,y) \in \Omega,} \\ {u(x,y) = 0, \qquad (x,y) \in \partial \Omega, } \end{cases} $$

where \(\Omega = (0,1)\times (0,1)\), ∂Ω is its boundary, q is a positive constant used to measure the magnitude of the convective terms, and p is a real number. We apply the five-point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms. Let \(h=1/(m+ 1)\) and \(\operatorname{Re}=(qh)/2\) denote the equidistant step size and the mesh Reynolds number, respectively. Then we get a system of linear equations \(Bx = d\), where B is a matrix of order \(n=m^{2}\) of the form

$$ B=T_{x}\otimes I_{m}+I_{m}\otimes T_{y}+pI_{n}, $$

with

$$ T_{x}=\operatorname{tridiag}(t_{2}, t_{1}, t_{3})_{m\times m} \quad\text{and} \quad T_{y}= \operatorname{tridiag}(t_{2}, 0, t_{3})_{m\times m}, $$

where \(t_{1} = 4\), \(t_{2}= -1-\operatorname{Re} \), \(t_{3}=-1+\operatorname{Re}\); \(I_{m}\) and \(I_{n}\) are the identity matrices of order m and n, respectively, and ⊗ denotes the Kronecker product.

For our numerical experiments, we set \(A=B+\frac{1}{2}(L-L^{T})\), where L is the strictly lower triangular part of B, and the right-hand side vector b of the AVE (1.1) is taken such that the vector \(x=(x_{1},x_{2},\ldots,x_{n})^{T}\) with \(x_{k}=(-1)^{k}\), \(k=1,2,\ldots,n\), is the exact solution. It is easy to see that the matrix A is nonsymmetric positive definite.
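The construction of this test problem can be sketched as follows; the function name and the dense construction are our own choices (for large m a sparse implementation would be preferable):

```python
import numpy as np

# Build the convection-diffusion test problem of Sect. 4:
# B = T_x (x) I_m + I_m (x) T_y + p*I_n, A = B + (L - L^T)/2 with L the
# strictly lower triangular part of B, and b chosen so that x_k = (-1)^k
# is the exact solution of Ax - |x| = b.
def build_test_problem(m, q, p):
    h = 1.0 / (m + 1)
    Re = q * h / 2.0                          # mesh Reynolds number
    t1, t2, t3 = 4.0, -1.0 - Re, -1.0 + Re
    Tx = (np.diag(t1 * np.ones(m)) + np.diag(t2 * np.ones(m - 1), -1)
          + np.diag(t3 * np.ones(m - 1), 1))  # tridiag(t2, t1, t3)
    Ty = (np.diag(t2 * np.ones(m - 1), -1)
          + np.diag(t3 * np.ones(m - 1), 1))  # tridiag(t2, 0, t3)
    n = m * m
    Im, In = np.eye(m), np.eye(n)
    B = np.kron(Tx, Im) + np.kron(Im, Ty) + p * In
    L = np.tril(B, -1)                        # strictly lower part of B
    A = B + 0.5 * (L - L.T)
    x_star = np.array([(-1.0) ** k for k in range(1, n + 1)])
    b = A @ x_star - np.abs(x_star)           # so x_star solves (1.1) exactly
    return A, b, x_star
```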

The optimal parameters are often problem dependent and generally difficult to determine. The parameters α and τ employed in each method are determined experimentally such that they result in the least number of iterations.

In Tables 1 and 2, we present the numerical results for the Picard-HSS (PHSS) and the Picard-HSS-SOR (PHSSR) iteration methods. We report the elapsed CPU time in seconds (denoted CPU), the norm of the absolute residual vector (denoted RES), and the number of iteration steps (denoted IT).

Table 1 Numerical results for the test problem with different values of m and q (\(p=0\))
Table 2 Numerical results for the test problem with different values of m and q (\(p=0.5\))

From Tables 1 and 2, we can see that the Picard-HSS-SOR (PHSSR) iteration method requires fewer iterations and less CPU time than the Picard-HSS iteration method. This indicates that the PHSSR iteration method is feasible and effective for solving absolute value equations.

5 Conclusions

In this paper, the Picard-HSS-SOR iteration method is presented for solving the absolute value equation; it is more efficient than the Picard-HSS iteration method. We proved convergence results for the Picard-HSS-SOR iteration method under certain assumptions. Finally, numerical experiments were implemented to verify the effectiveness of the proposed method.

Availability of data and materials

Not applicable.

References

  1. Rohn, J.: A theorem of the alternatives for the equation \(Ax+B|x|=b\). Linear Multilinear Algebra 52, 421–426 (2004)

  2. Mangasarian, O.L., Meyer, R.R.: Absolute value equations. Linear Algebra Appl. 419, 359–367 (2006)

  3. Rohn, J.: An algorithm for computing all solutions of an absolute value equation. Optim. Lett. 6, 851–856 (2012)

  4. Rohn, J., Hooshyarbakhsh, V., Farhadsefat, R.: An iterative method for solving absolute value equations and sufficient conditions for unique solvability. Optim. Lett. 8, 35–44 (2014)

  5. Noor, M.A., Iqbal, J., Noor, K.I.: On an iterative method for solving absolute value equations. Optim. Lett. 6, 1027–1033 (2012)

  6. Zainali, N., Lotfi, T.: On developing a stable and quadratic convergent method for solving absolute value equation. J. Comput. Appl. Math. 330, 742–747 (2018)

  7. Mangasarian, O.L.: Sufficient conditions for the unsolvability and solvability of the absolute value equation. Optim. Lett. 11, 1–7 (2017)

  8. Wu, S.L., Li, C.X.: The unique solution of the absolute value equations. Appl. Math. Lett. 76, 195–200 (2018)

  9. Haghani, F.K.: On generalized Traub’s method for absolute value equations. J. Optim. Theory Appl. 166, 619–625 (2015)

  10. Li, C.X.: A modified generalized Newton method for absolute value equations. J. Optim. Theory Appl. 170, 1055–1059 (2016)

  11. Tang, J.Y., Zhou, J.C.: A quadratically convergent descent method for the absolute value equation \(Ax+B|x|=b\). Oper. Res. Lett. 47, 229–234 (2019)

  12. Guo, P., Wu, S.L., Li, C.X.: On the SOR-like iteration method for solving absolute value equations. Appl. Math. Lett. 97, 107–113 (2019)

  13. Ke, Y.F., Ma, C.F.: SOR-like iteration method for solving absolute value equations. Appl. Math. Comput. 311, 195–202 (2017)

  14. Ke, Y.F.: The new iteration algorithm for absolute value equation. Appl. Math. Lett. 99, 105990 (2020)

  15. Zhang, J.J.: The relaxed nonlinear PHSS-like iteration method for absolute value equations. Appl. Math. Comput. 265, 266–274 (2015)

  16. Iqbal, J., Iqbal, A., Arif, M.: Levenberg–Marquardt method for solving systems of absolute value equations. J. Comput. Appl. Math. 282, 134–138 (2015)

  17. Mangasarian, O.L.: A generalized Newton method for absolute value equations. Optim. Lett. 3, 101–108 (2009)

  18. Salkuyeh, D.K.: The Picard-HSS iteration method for absolute value equations. Optim. Lett. 8, 2191–2202 (2014)

  19. Edalatpour, V., Hezari, D., Salkuyeh, D.K.: A generalization of the Gauss–Seidel iteration method for solving absolute value equations. Appl. Math. Comput. 293, 156–167 (2017)

  20. Berman, A., Plemmons, R.: Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York (1979)


Funding

This work was supported by the Natural Science Foundation of the Education Bureau of Anhui Province (KJ2020A0017, KJ2017A432).

Author information


Contributions

The author carried out the results, and read and approved the current version of the manuscript.

Corresponding author

Correspondence to Lin Zheng.

Ethics declarations

Competing interests

The author declares that there are no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Zheng, L. The Picard-HSS-SOR iteration method for absolute value equations. J Inequal Appl 2020, 258 (2020). https://doi.org/10.1186/s13660-020-02525-3
