Research article

On Picard-SHSS iteration method for absolute value equation

  • Received: 23 September 2020; Accepted: 22 November 2020; Published: 30 November 2020
  • MSC : 65H10, 47H10

  • Picard-type methods are efficient for solving the absolute value equation Ax-|x| = b . To further improve the performance of the Picard iteration method, a new inexact Picard iteration method, named the Picard-SHSS iteration method, is proposed for solving the absolute value equation. A sufficient condition for the convergence of the proposed method is given, and a numerical example demonstrates the effectiveness of the new method.

    Citation: Shu-Xin Miao, Xiang-Tuan Xiong, Jin Wen. On Picard-SHSS iteration method for absolute value equation[J]. AIMS Mathematics, 2021, 6(2): 1743-1753. doi: 10.3934/math.2021104



    The absolute value equation (AVE) of the form

    \begin{equation} Ax-|x| = b, \end{equation} (1.1)

    where A\in \mathbb{R}^{n\times n} , x, b\in \mathbb{R}^n , and |x| denotes the component-wise absolute value of the vector x , i.e., |x| = (|x_1|, \cdots, |x_n|)^T , arises in a variety of optimization problems, e.g., the linear complementarity problem, linear programming and convex quadratic programming problems; see for example [7,17,18,20,21]. It is a special case of the generalized absolute value equation of the type

    \begin{equation} Ax+B|x| = b, \end{equation} (1.2)

    where B\in \mathbb{R}^{n\times n} . The generalized absolute value equation (1.2) was introduced in [21] and investigated in a more general context in [17,18,20].

    Conditions for the unique solvability of AVE (1.1) and the generalized absolute value equation (1.2) have been given in [10,12,13,14,18,21,22,25]. For example, Mangasarian and Meyer [18] have shown that the AVE (1.1) is uniquely solvable for any right-hand side vector b when the smallest singular value of the involved matrix A is greater than one. When AVE (1.1) has a unique solution, how to find that solution is a major research topic, and in this study we consider iteration methods for solving the AVE (1.1). In recent years, a large variety of methods for solving AVE (1.1) has appeared in the literature [1,4,5,6,8,15,16,19,22,24,25]. Among these methods, Picard-type methods capture one's attention. Rohn et al. [22] proposed the Picard iteration method to solve AVE (1.1):

    \begin{equation} x^{(k+1)} = A^{-1}(|x^{(k)}|+b), \; k = 0, 1, 2, \cdots, \end{equation} (1.3)

    where x^{(0)} is the initial guess. From (1.3), we can see that a linear system with the constant coefficient matrix A needs to be solved in each iteration of the Picard method. To improve the performance of the Picard method, the linear system with matrix A can be solved by an inner iteration; this leads to an inexact Picard iteration method. As an example, Salkuyeh [24] suggested using the Hermitian and skew-Hermitian splitting (HSS) iteration method [2] to approximate the solution of the linear system with A at each Picard iteration, and proposed the Picard-HSS method for solving AVE (1.1). In fact, the Picard-HSS method was proposed originally by Bai and Yang for weakly nonlinear systems in [3]. Sufficient conditions guaranteeing the convergence of the Picard-HSS method, together with numerical experiments showing its effectiveness for solving AVE (1.1), are given in [24].
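    The basic Picard iteration (1.3) is easy to state in code. The following is a minimal NumPy sketch (the paper's experiments use MATLAB; the 2×2 test problem, function name and tolerances here are illustrative assumptions), with the linear system solved directly rather than by an inner iteration:

```python
import numpy as np

def picard_ave(A, b, tol=1e-7, max_iter=500):
    # Picard iteration (1.3) for the AVE  Ax - |x| = b:
    #   x^{(k+1)} = A^{-1}(|x^{(k)}| + b),
    # convergent when ||A^{-1}||_2 < 1.  The inner linear system is
    # solved directly here; the inexact variants replace this solve
    # by an inner iteration.
    x = np.zeros_like(b)
    for k in range(max_iter):
        x = np.linalg.solve(A, np.abs(x) + b)
        # stop on the relative AVE residual
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

# Illustrative 2x2 problem: the smallest singular value of A is 3 > 1,
# so the AVE has a unique solution and the iteration converges.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
x, it = picard_ave(A, b)
```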

    Note that each step of the inner HSS iteration of the Picard-HSS method [3,24] requires solving two linear subsystems, one characterized by a Hermitian coefficient matrix and the other by a skew-Hermitian coefficient matrix. The solution of the linear subsystem with the Hermitian coefficient matrix can easily be obtained by the CG method; however, the solution of the linear subsystem with the skew-Hermitian coefficient matrix is not easy to obtain, and in some cases is as difficult to compute as that of the original linear system. To avoid solving a linear subsystem with a skew-Hermitian coefficient matrix in the inner iteration of the inexact Picard method, we use the single-step HSS (SHSS) method [9] to approximate the solution of the linear system with coefficient matrix A , and present a new inexact Picard method, abbreviated as the Picard-SHSS iteration method, in this paper.

    The rest of this paper is organized as follows. In Section 2, after reviewing some notation and the SHSS iteration method, the Picard-SHSS iteration method for solving AVE (1.1) is described, and its convergence properties are studied. Numerical experiments are presented in Section 3 to show the feasibility and effectiveness of the Picard-SHSS method.

    For convenience, some notation, definitions and results that will be used in the following parts are given below. For a matrix A\in \mathbb{R}^{n\times n} , A^T represents the transpose of A , and \rho(A) denotes the spectral radius of A . A is said to be positive definite if its symmetric part H = \frac{1}{2}(A+A^T) is positive definite; see [11] for the definition of a positive definite matrix in the complex case.

    Let A\in \mathbb{R}^{n\times n} be a positive definite matrix, and let A = H+S be its Hermitian and skew-Hermitian splitting with H = \frac{1}{2}(A+A^T) and S = \frac{1}{2}(A-A^T) . Based on the Hermitian and skew-Hermitian splitting of A , Bai et al. [2] presented the HSS method

    \begin{equation} \left\{ \begin{array}{l} (\alpha I+H)x^{(l+\frac{1}{2})} = (\alpha I-S)x^{(l)}+q, \\ (\alpha I+S)x^{(l+1)} = (\alpha I-H)x^{(l+\frac{1}{2})}+q \end{array}\right. \end{equation} (2.1)

    to solve the positive definite system of linear equations Ax = q , where \alpha is a positive iteration parameter. Two linear subsystems need to be solved at each step of the HSS iteration method: one with coefficient matrix \alpha I+H and the other with coefficient matrix \alpha I+S , where \alpha is any positive constant and I is the identity matrix; see [2] for more details. The challenge of the HSS iteration method lies in solving the linear subsystem with \alpha I+S , which in some cases is as difficult as solving the original linear system. To avoid solving a linear subsystem with \alpha I+S , the SHSS method was proposed recently [9]. The iteration scheme of the SHSS method for solving the system of linear equations Ax = q can be written as

    \begin{equation} (\alpha I+H)x^{(l+1)} = (\alpha I-S)x^{(l)}+q. \end{equation} (2.2)

    It has been proved in [9] that, under a loose restriction on the iteration parameter \alpha , the SHSS method converges to the unique solution of the linear system Ax = q for any initial guess x^{(0)}\in \mathbb{R}^n .
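    As an illustration, here is a NumPy sketch of the SHSS scheme (2.2) for a linear system Ax = q (the test matrix, parameter \alpha and tolerances are illustrative assumptions; a dense solve stands in for the Cholesky or CG solve one would apply to \alpha I+H in practice):

```python
import numpy as np

def shss(A, q, alpha, tol=1e-10, max_iter=2000):
    # SHSS iteration (2.2):
    #   (alpha*I + H) x^{(l+1)} = (alpha*I - S) x^{(l)} + q,
    # with H = (A + A^T)/2 and S = (A - A^T)/2.  Only the symmetric
    # positive definite matrix alpha*I + H is ever solved with.
    n = A.shape[0]
    H = (A + A.T) / 2.0
    S = (A - A.T) / 2.0
    M = alpha * np.eye(n) + H
    N = alpha * np.eye(n) - S
    x = np.zeros(n)
    for _ in range(max_iter):
        x_new = np.linalg.solve(M, N @ x + q)
        if np.linalg.norm(x_new - x) <= tol:   # stagnation-based stop
            return x_new
        x = x_new
    return x

# Illustrative positive definite, non-symmetric system; here
# lambda_min(H) = 3 and sigma_max(S) = 1, so any alpha > 0 satisfies
# the convergence condition of [9].
A = np.array([[3.0, 1.0], [-1.0, 3.0]])
q = np.array([1.0, 2.0])
x = shss(A, q, alpha=1.0)
```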

    When A is a positive definite matrix, using the HSS method (2.1) as an inner iteration in the Picard method (1.3), Salkuyeh [24] proposed the following Picard-HSS method for solving the AVE (1.1).

    Method 2.1. (The Picard-HSS iteration method) Let A\in \mathbb{R}^{n\times n} be a positive definite matrix, and let H = \frac{1}{2}(A+A^T) and S = \frac{1}{2}(A-A^T) be the Hermitian and skew-Hermitian parts of A , respectively. Given an initial guess x^{(0)}\in \mathbb{R}^n and a sequence \{l_k\}_{k = 0}^\infty of positive integers, compute x^{(k+1)} for k = 0, 1, 2, \cdots using the following iteration scheme until \{x^{(k)}\} satisfies the stopping criterion:

    (a). Set x^{(k, 0)} = x^{(k)} ;

    (b). For l = 0, 1, \cdots, l_k-1 , solve the following linear system to obtain x^{(k, l+1)} :

    \left\{ \begin{array}{l} (\alpha I+H)x^{(k, l+\frac{1}{2})} = (\alpha I-S)x^{(k, l)}+|x^{(k)}|+b, \\ (\alpha I+S)x^{(k, l+1)} = (\alpha I-H)x^{(k, l+\frac{1}{2})}+|x^{(k)}|+b, \end{array}\right.

    where \alpha is a positive constant and I is the identity matrix;

    (c). Set x^{(k+1)} = x^{(k, l_k)} .

    The next iterate x^{(k+1)} can be computed approximately by the SHSS iteration, making use of the splitting A = M(\alpha)-N(\alpha) , as follows (see [9]):

    \begin{equation} M(\alpha)x^{(k, l+1)} = N(\alpha)x^{(k, l)}+|x^{(k)}|+b, \; l = 0, 1, \cdots, l_k-1, \; k = 0, 1, 2, \cdots, \end{equation} (2.3)

    where M(\alpha) = \alpha I+H and N(\alpha) = \alpha I-S , \alpha is a positive constant, \{l_k\}_{k = 0}^\infty is a prescribed sequence of positive integers, and x^{(k, 0)} = x^{(k)} is the starting point of the inner SHSS iteration at the k -th outer Picard iteration. This leads to the following inexact Picard iteration method, called the Picard-SHSS iteration method, for solving the AVE (1.1).

    Method 2.2. (The Picard-SHSS iteration method) Let A\in \mathbb{R}^{n\times n} be a positive definite matrix, and let H = \frac{1}{2}(A+A^T) and S = \frac{1}{2}(A-A^T) be the Hermitian and skew-Hermitian parts of A , respectively. Given an initial guess x^{(0)}\in \mathbb{R}^n and a sequence \{l_k\}_{k = 0}^\infty of positive integers, compute x^{(k+1)} for k = 0, 1, 2, \cdots using the following iteration scheme:

    (a). Set x^{(k, 0)} = x^{(k)} ;

    (b). For l = 0, 1, \cdots, l_k-1 , solve the following linear system to obtain x^{(k, l+1)} :

    (\alpha I+H)x^{(k, l+1)} = (\alpha I-S)x^{(k, l)}+|x^{(k)}|+b,

    where \alpha is a positive constant and I is the identity matrix;

    (c). Set x^{(k+1)} = x^{(k, l_k)} ;

    (d). If x^{(k+1)} satisfies \frac{\|Ax^{(k+1)}-|x^{(k+1)}|-b\|_2}{\|b\|_2}\leq 10^{-7} , then stop; otherwise, let x^{(k)} = x^{(k+1)} and go back to (a).

    Compared with the Picard-HSS iteration method [24], the linear subsystem with \alpha I+S is avoided in the inner iteration of the Picard-SHSS iteration method. The linear subsystem with \alpha I+H involved in the Picard-SHSS iteration method can be solved efficiently, either exactly by a sparse Cholesky factorization or inexactly by a preconditioned conjugate gradient method [23].
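    To make the procedure concrete, here is a minimal NumPy sketch of Method 2.2 (not the authors' MATLAB implementation); a dense solve with \alpha I+H stands in for the sparse Cholesky or PCG solve that would be used in practice, and the 2×2 test problem is illustrative only:

```python
import numpy as np

def picard_shss(A, b, alpha, lk=10, tol=1e-7, max_outer=500):
    # Method 2.2: outer Picard sweep with inner SHSS sweeps of length lk.
    # Only alpha*I + H (symmetric positive definite) is ever solved with.
    n = A.shape[0]
    H = (A + A.T) / 2.0
    S = (A - A.T) / 2.0
    M = alpha * np.eye(n) + H          # M(alpha)
    N = alpha * np.eye(n) - S          # N(alpha)
    x = np.zeros(n)
    nb = np.linalg.norm(b)
    for k in range(max_outer):
        y = x.copy()                   # step (a): x^{(k,0)} = x^{(k)}
        rhs = np.abs(x) + b            # |x^{(k)}| + b, frozen during the sweep
        for _ in range(lk):            # step (b): inner SHSS sweeps
            y = np.linalg.solve(M, N @ y + rhs)
        x = y                          # step (c)
        # step (d): relative AVE residual
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * nb:
            return x, k + 1
    return x, max_outer

# Illustrative non-symmetric positive definite test problem.
A = np.array([[4.0, -1.0], [1.0, 4.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
x, it = picard_shss(A, b, alpha=1.0)
```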

    The next theorem provides a sufficient condition for the convergence of the Picard-SHSS method for solving the AVE (1.1).

    Theorem 2.1. Let A\in \mathbb{R}^{n\times n} be a positive definite matrix and let H = \frac{1}{2}(A+A^T) and S = \frac{1}{2}(A-A^T) be its Hermitian and skew-Hermitian parts, respectively. Let \eta = \|A^{-1}\|_2 < 1 , and let \alpha be a constant such that \alpha > \max\left\{0, \frac{\sigma_{\max}^2-\lambda_{\min}^2}{2\lambda_{\min}}\right\} , where \lambda_{\min} is the smallest eigenvalue of H and \sigma_{\max} is the largest singular value of S . Then the AVE (1.1) has a unique solution x^* , and for any initial guess x^{(0)}\in \mathbb{R}^n and any sequence of positive integers l_k , k = 0, 1, \cdots , the iteration sequence \{x^{(k)}\}_{k = 0}^\infty produced by the Picard-SHSS iteration method converges to x^* provided that l_k\geq N for all k = 0, 1, \cdots , where N is a natural number satisfying

    \|T(\alpha)^s\|_2 < \frac{1-\eta}{1+\eta}\; \; \forall s\geq N

    with T(\alpha) = M(\alpha)^{-1}N(\alpha) .

    Proof. The proof is similar to that of [24, Theorem 1]; to keep the paper self-contained, we give it here. Based on the iteration scheme (2.3), we can express the (k+1) -th iterate x^{(k+1)} of the Picard-SHSS iteration method as

    \begin{equation} x^{(k+1)} = T(\alpha)^{l_k}x^{(k)}+\sum\limits_{j = 0}^{l_k-1}T(\alpha)^{j}M(\alpha)^{-1}(|x^{(k)}|+b), \; k = 0, 1, 2, \cdots. \end{equation} (2.4)

    Note that \eta < 1 ; then AVE (1.1) has a unique solution x^*\in \mathbb{R}^n [18] such that

    \begin{equation} x^* = T(\alpha)^{l_k}x^*+\sum\limits_{j = 0}^{l_k-1}T(\alpha)^{j}M(\alpha)^{-1}(|x^*|+b), \; k = 0, 1, 2, \cdots. \end{equation} (2.5)

    By subtracting (2.5) from (2.4) we have

    \begin{equation} x^{(k+1)}-x^* = T(\alpha)^{l_k}(x^{(k)}-x^*)+\sum\limits_{j = 0}^{l_k-1}T(\alpha)^{j}M(\alpha)^{-1}(|x^{(k)}|-|x^*|). \end{equation} (2.6)

    It follows from [9, Theorem 2.1] that \rho(T(\alpha)) < 1 when \alpha satisfies \alpha > \max\left\{0, \frac{\sigma_{\max}^2-\lambda_{\min}^2}{2\lambda_{\min}}\right\} . In this case, some calculations yield \sum_{j = 0}^{l_k-1}T(\alpha)^{j}M(\alpha)^{-1} = \left(I-T(\alpha)^{l_k}\right)A^{-1}. Now (2.6) becomes

    \begin{eqnarray*} x^{(k+1)}-x^*& = &T(\alpha)^{l_k}(x^{(k)}-x^*)+\left(I-T(\alpha)^{l_k}\right)A^{-1}(|x^{(k)}|-|x^*|)\\ & = &T(\alpha)^{l_k}\left[(x^{(k)}-x^*)-A^{-1}(|x^{(k)}|-|x^*|)\right]+A^{-1}(|x^{(k)}|-|x^*|). \end{eqnarray*}

    Note that \||x|-|y|\|_2\leq\|x-y\|_2 for any x, y\in \mathbb{R}^n ; it then follows that

    \left\|x^{(k+1)}-x^*\right\|_2\leq\left(\left\|T(\alpha)^{l_k}\right\|_2(1+\eta)+\eta\right)\left\|x^{(k)}-x^*\right\|_2.

    Since \rho(T(\alpha)) < 1 when \alpha satisfies \alpha > \max\left\{0, \frac{\sigma_{\max}^2-\lambda_{\min}^2}{2\lambda_{\min}}\right\} , \|T(\alpha)^s\|_2 tends to 0 as s tends to infinity. Therefore, there is a natural number N such that

    \left\|T(\alpha)^{s}\right\|_2 < \varepsilon : = \frac{1-\eta}{1+\eta}\; \; \; \forall s\geq N.

    Now, if l_k\geq N for all k = 0, 1, \cdots , then \left\|x^{(k+1)}-x^*\right\|_2 < \left\|x^{(k)}-x^*\right\|_2 , hence the iteration sequence \{x^{(k)}\}_{k = 0}^\infty produced by the Picard-SHSS iteration method converges to x^* .

    In actual computation, the residual-updating form of the Picard-SHSS iteration method is more convenient; it can be written as follows.

    The Picard-SHSS iteration method (residual-updating variant): Let A\in \mathbb{R}^{n\times n} be non-Hermitian positive definite, and let H = \frac{1}{2}(A+A^T) and S = \frac{1}{2}(A-A^T) be the Hermitian and skew-Hermitian parts of A , respectively. Given an initial guess x^{(0)}\in \mathbb{R}^n and a sequence \{l_k\}_{k = 0}^\infty of positive integers, compute x^{(k+1)} for k = 0, 1, 2, \cdots using the following iteration scheme until \{x^{(k)}\} satisfies the stopping criterion:

    (a). Set s^{(k, 0)} = 0 and b^{(k)} = |x^{(k)}|+b-Ax^{(k)} ;

    (b). For l = 0, 1, \cdots, l_k-1 , solve the following linear system to obtain s^{(k, l+1)} :

    (\alpha I+H)s^{(k, l+1)} = (\alpha I-S)s^{(k, l)}+b^{(k)},

    where \alpha is a positive constant and I is the identity matrix;

    (c). Set x^{(k+1)} = x^{(k)}+s^{(k, l_k)} ;

    (d). If x^{(k+1)} satisfies \frac{\|Ax^{(k+1)}-|x^{(k+1)}|-b\|_2}{\|b\|_2}\leq 10^{-7} , then stop; otherwise, let x^{(k)} = x^{(k+1)} and go back to (a).
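    The residual-updating steps (a)-(d) above can be sketched in NumPy as follows (an illustrative translation, not the authors' MATLAB code); the inner iteration now acts on the correction s^{(k, l)} with right-hand side b^{(k)} and zero initial guess, which is algebraically equivalent to Method 2.2:

```python
import numpy as np

def picard_shss_ru(A, b, alpha, lk=10, tol=1e-7, max_outer=500):
    # Residual-updating Picard-SHSS: each outer step solves for the
    # correction s with b^{(k)} = |x^{(k)}| + b - A x^{(k)}.
    n = A.shape[0]
    H = (A + A.T) / 2.0
    S = (A - A.T) / 2.0
    M = alpha * np.eye(n) + H
    N = alpha * np.eye(n) - S
    x = np.zeros(n)
    nb = np.linalg.norm(b)
    for k in range(max_outer):
        bk = np.abs(x) + b - A @ x     # step (a): b^{(k)}, s^{(k,0)} = 0
        s = np.zeros(n)
        for _ in range(lk):            # step (b): inner SHSS sweeps
            s = np.linalg.solve(M, N @ s + bk)
        x = x + s                      # step (c)
        # step (d): relative AVE residual
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * nb:
            return x, k + 1
    return x, max_outer

# Illustrative non-symmetric positive definite test problem.
A = np.array([[4.0, -1.0], [1.0, 4.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
x, it = picard_shss_ru(A, b, alpha=1.0)
```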

    In this section, we give an example with numerical experiments to show the effectiveness of the Picard-SHSS iteration method for solving AVE (1.1). To do this, the numerical properties of the Picard-HSS and Picard-SHSS methods are examined and compared experimentally. We use the residual-updating versions of the Picard-HSS iteration method [24] and the Picard-SHSS iteration method.

    All the numerical experiments presented in this section have been computed in double precision using MATLAB R2012b on an Intel(R) Core(TM) i5-2400 CPU at 3.10 GHz with 4.00 GB of RAM. All runs are started from the initial zero vector and terminated if the current relative residual satisfies

    {\rm RES}: = \frac{\|Ax^{(k)}-|x^{(k)}|-b\|_2}{\|b\|_2}\leq 10^{-7},

    where x^{(k)} is the computed solution of each method at iteration k , and a maximum number of 500 iterations is used. In addition, the stopping criterion for the inner iterations of the Picard-HSS and Picard-SHSS methods is set to

    \frac{\|b^{(k)}-As^{(k, l)}\|_2}{\|b^{(k)}\|_2}\leq 0.01,

    and a maximum number of 10 inner iterations ( l_k = 10, \; k = 0, 1, 2, \cdots ) is used.

    The coefficient matrix A of AVE (1.1) is given by

    \begin{equation} A = T_x\otimes I_m+I_m\otimes T_y+pI_n, \end{equation} (3.1)

    where I_m and I_n are the identity matrices of order m and n with n = m^2 , \otimes denotes the Kronecker product, and T_x and T_y are the tridiagonal matrices T_x = \mbox{tridiag}(t_2, \; t_1, \; t_3)_{m\times m} and T_y = \mbox{tridiag}(t_2, \; 0, \; t_3)_{m\times m} with t_1 = 4 , t_2 = -1-Re , t_3 = -1+Re . Here Re = (qh)/2 and h = 1/(m+1) are the mesh Reynolds number and the equidistant step size, respectively, and q is a positive constant. In fact, the matrix A arises from the finite difference approximation of the two-dimensional convection-diffusion equation

    \left\{ \begin{array}{ll} -(u_{xx}+u_{yy})+q(u_x+u_y)+pu = f(x, y), &\; (x, y)\in\Omega, \\ u(x, y) = 0, &\; (x, y)\in\partial\Omega, \end{array}\right.

    where \Omega = (0, \; 1)\times(0, \; 1) , \partial\Omega is its boundary, q is a positive constant used to measure the magnitude of the convective terms, and p is a real number. If we apply the five-point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms, we obtain the matrix A . It is easy to see that for every nonnegative q the matrix A is in general a non-symmetric positive definite matrix [24]. The right-hand side vector of AVE (1.1) is taken in such a way that the vector x = (x_1, \; x_2, \; \cdots, \; x_n)^T with x_i = (-1)^ii, \; \; i = 1, 2, \cdots, n , is the exact solution.
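    The test matrix (3.1) and the corresponding right-hand side can be assembled with Kronecker products. Below is a NumPy sketch (the function name is ours; dense storage is used for brevity, whereas a practical code would use sparse matrices):

```python
import numpy as np

def build_ave(m, q, p):
    # Test problem (3.1): A = Tx (x) Im + Im (x) Ty + p*In, n = m^2,
    # where tridiag(t2, t1, t3) has subdiagonal t2, diagonal t1 and
    # superdiagonal t3.
    h = 1.0 / (m + 1)
    Re = q * h / 2.0                   # mesh Reynolds number
    t1, t2, t3 = 4.0, -1.0 - Re, -1.0 + Re
    sub = np.full(m - 1, t2)
    sup = np.full(m - 1, t3)
    Tx = np.diag(np.full(m, t1)) + np.diag(sub, -1) + np.diag(sup, 1)
    Ty = np.diag(sub, -1) + np.diag(sup, 1)
    n = m * m
    A = np.kron(Tx, np.eye(m)) + np.kron(np.eye(m), Ty) + p * np.eye(n)
    # right-hand side such that x_i = (-1)^i * i is the exact solution
    x_star = np.array([(-1.0) ** i * i for i in range(1, n + 1)])
    b = A @ x_star - np.abs(x_star)
    return A, b, x_star

A, b, x_star = build_ave(4, 1.0, 0.0)
```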

    In our numerical experiments, the matrix A in AVE (1.1) is defined by (3.1) with different values of q ( q = 0, \; 1, \; 10, \; \mbox{and}\; 100 ) and different values of p ( p = 0\; \mbox{and}\; -1 ). The parameters used in the Picard-HSS and Picard-SHSS iteration methods are chosen to be the ones that result in the fewest iteration steps; see Tables 1 and 2. In Tables 3 and 4, we present the numerical results for the Picard-HSS and Picard-SHSS iteration methods. We give the elapsed CPU time in seconds for convergence (denoted by CPU), the number of iterations for convergence (denoted by IT) and the relative residuals (denoted by RES).

    Table 1.  The values of \alpha for Picard-HSS and Picard-SHSS methods ( p = 0 ).

            Method        m=10    m=20    m=40    m=80
    q=0     Picard-HSS    11.69   12.6    13.4    13
            Picard-SHSS   5.745   6.5     6.4     6.6
    q=1     Picard-HSS    12.01   13.6    14      13
            Picard-SHSS   5.926   6.6     6.75    6.6
    q=10    Picard-HSS    6.76    10.99   13.43   15.8
            Picard-SHSS   3.594   5.52    6.63    8.0
    q=100   Picard-HSS    23.3    23.1    8.7     9.2
            Picard-SHSS   81.5    26.4    4.99    4.57

    Table 2.  The values of \alpha for Picard-HSS and Picard-SHSS methods ( p = -1 ).

            Method        m=10    m=20    m=40    m=80
    q=0     Picard-HSS    13.8    11.2    10.36   10.1
            Picard-SHSS   6.96    5.7     5.21    5.1
    q=1     Picard-HSS    14      11.29   10.4    11
            Picard-SHSS   7.05    5.72    5.2     5.1
    q=10    Picard-HSS    20.55   13.6    11.1    11
            Picard-SHSS   11.2    7.0     5.65    5.2
    q=100   Picard-HSS    27.2    22      11.1    21
            Picard-SHSS   108     33.1    11.57   10.65

    Table 3.  Numerical results for different values of m and q ( p = 0 ).

            Method                 m=10       m=20       m=40       m=80
    q=0     Picard-HSS    IT       36         32         30         28
                          CPU      0.0280     0.0371     0.1266     0.9486
                          RES      9.8815e-8  9.3643e-8  9.9900e-8  9.9043e-8
            Picard-SHSS   IT       37         32         30         29
                          CPU      0.0137     0.0245     0.1075     0.9060
                          RES      9.4432e-8  9.6857e-8  9.7569e-8  9.9256e-8
    q=1     Picard-HSS    IT       35         32         31         28
                          CPU      0.0220     0.0457     0.2377     1.9978
                          RES      9.1372e-8  9.8614e-8  9.3177e-8  9.7012e-8
            Picard-SHSS   IT       36         32         31         29
                          CPU      0.0136     0.0273     0.1292     1.1940
                          RES      9.8685e-8  9.4248e-8  9.4655e-8  9.6233e-8
    q=10    Picard-HSS    IT       29         66         33         36
                          CPU      0.0163     0.0939     0.2636     2.5265
                          RES      9.5635e-8  9.9755e-8  9.8395e-8  9.9024e-8
            Picard-SHSS   IT       29         63         33         37
                          CPU      0.0105     0.0528     0.1372     1.4083
                          RES      9.6650e-8  9.8439e-8  9.7901e-8  9.8170e-8
    q=100   Picard-HSS    IT       11         17         35         146
                          CPU      0.0117     0.0342     0.2826     9.6469
                          RES      9.8396e-8  9.9503e-8  9.3177e-8  9.7165e-8
            Picard-SHSS   IT       44         25         35         140
                          CPU      0.0161     0.0246     0.1471     4.9182
                          RES      9.8783e-8  9.8253e-8  9.8972e-8  9.7299e-8

    Table 4.  Numerical results for different values of m and q ( p = -1 ).

            Method                 m=10       m=20       m=40       m=80
    q=0     Picard-HSS    IT       24         62         206        769
                          CPU      0.0186     0.0944     0.8224     25.7551
                          RES      9.6164e-8  9.5608e-8  9.8436e-8  9.8404e-8
            Picard-SHSS   IT       21         52         166        614
                          CPU      0.0099     0.0444     0.5608     18.5361
                          RES      9.6537e-8  9.7425e-8  9.9905e-8  9.8795e-8
    q=1     Picard-HSS    IT       24         61         203        908
                          CPU      0.0193     0.0877     1.4320     57.8838
                          RES      9.7842e-8  9.9013e-8  9.3292e-8  9.9324e-8
            Picard-SHSS   IT       21         51         166        598
                          CPU      0.0102     0.0421     0.6038     18.9311
                          RES      9.5087e-8  9.5556e-8  9.5308e-8  9.8338e-8
    q=10    Picard-HSS    IT       14         27         74         297
                          CPU      0.0175     0.0519     0.5532     18.6613
                          RES      9.8754e-8  9.1215e-8  9.8943e-8  9.6129e-8
            Picard-SHSS   IT       14         23         61         204
                          CPU      0.0084     0.0283     0.2427     6.8279
                          RES      9.0191e-8  9.9463e-8  9.3622e-8  9.7081e-8
    q=100   Picard-HSS    IT       14         20         64         65
                          CPU      0.0151     0.0372     0.4811     4.2747
                          RES      9.8364e-8  9.8128e-8  9.4024e-8  9.8372e-8
            Picard-SHSS   IT       83         44         71         63
                          CPU      0.0508     0.0384     0.2824     2.2035
                          RES      9.9168e-8  9.9270e-8  9.8733e-8  9.5253e-8


    From Tables 3 and 4, we see that both the Picard-HSS and Picard-SHSS iteration methods successfully produce approximate solutions to the AVE (1.1) for all of the problem sizes n = m^2 and convection measures q . For the convergent cases, the CPU time increases rapidly with the problem size for all tested iteration methods. Moreover, the numerical results in the two tables show that the Picard-SHSS iteration method performs better than the Picard-HSS iteration method in most cases, as the former costs the least CPU time to reach the stopping criterion, except in the cases q = 100 , m = 10 and q = 100, \; m = 20 . In addition, for p = -1 , the Picard-SHSS iteration method costs the fewest iteration steps and the least CPU time to reach the stopping criterion. In summary, the Picard-SHSS iteration method is useful and effective for solving the NP-hard AVE (1.1).

    In this paper, the Picard-SHSS method is proposed for solving the absolute value equation. The sufficient condition for the convergence of the proposed method for solving the absolute value equation is given. A numerical example is given to confirm our theoretical results and to verify the effectiveness of the new method.

    We would like to thank Professor Davod Khojasteh Salkuyeh of the University of Guilan for providing us with the MATLAB code of the Picard-HSS method. Many thanks to the Editor and the anonymous referees for their helpful comments on the revision of this article.

    This work was supported by the Natural Science Foundation of Northwest Normal University (No. NWNU-LKQN-17-5).

    The authors declare that they have no conflicts of interest.



    [1] L. Abdallah, M. Haddou, T. Migot, Solving absolute value equation using complementarity and smoothing functions, J. Comput. Appl. Math., 327 (2018), 196-207. doi: 10.1016/j.cam.2017.06.019
    [2] Z. Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), 603-626. doi: 10.1137/S0895479801395458
    [3] Z. Z. Bai, X. Yang, On HSS-based iteration methods for weakly nonlinear systems, Appl. Numer. Math., 59 (2009), 2923-2936. doi: 10.1016/j.apnum.2009.06.005
    [4] L. Caccetta, B. Qu, G. Zhou, A globally and quadratically convergent method for absolute value equations, Comput. Optim. Appl., 48 (2011), 45-58. doi: 10.1007/s10589-009-9242-9
    [5] J. M. Feng, S. Y. Liu, A three-step iterative method for solving absolute value equations, J. Math., 2020 (2020), 1-7.
    [6] X. M. Gu, T. Z. Huang, H. B. Li, S. F. Wang, L. Li, Two CSCS-based iteration methods for solving absolute value equations, J. Appl. Anal. Comput., 7 (2017), 1336-1356.
    [7] S. L. Hu, Z. H. Huang, A note on absolute value equations, Optim. Lett., 4 (2010), 417-424. doi: 10.1007/s11590-009-0169-y
    [8] Y. F. Ke, C. F. Ma, SOR-like iteration method for solving absolute value equations, Appl. Math. Comput., 311 (2017), 195-202.
    [9] C. X. Li, S. L. Wu, A single-step HSS method for non-Hermitian positive definite linear systems, Appl. Math. Lett., 44 (2015), 26-29. doi: 10.1016/j.aml.2014.12.013
    [10] F. Mezzadri, On the solution of general absolute value equations, Appl. Math. Lett., 107 (2020), Article 106462.
    [11] C. L. Wang, Z. Z. Bai, Sufficient conditions for the convergent splittings of non-Hermitian positive definite matrices, Linear Algebra Appl., 330 (2001), 215-218. doi: 10.1016/S0024-3795(01)00275-0
    [12] S. L. Wu, P. Guo, On the unique solvability of the absolute value equation, J. Optim. Theory Appl., 169 (2016), 705-712. doi: 10.1007/s10957-015-0845-2
    [13] S. L. Wu, C. X. Li, The unique solution of the absolute value equations, Appl. Math. Lett., 76 (2018), 195-200. doi: 10.1016/j.aml.2017.08.012
    [14] S. L. Wu, C. X. Li, A note on unique solvability of the absolute value equation, Optim. Lett., 14 (2020), 1957-1960. doi: 10.1007/s11590-019-01478-x
    [15] O. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett., 3 (2009), 101-108. doi: 10.1007/s11590-008-0094-5
    [16] O. Mangasarian, Knapsack feasibility as an absolute value equation solvable by successive linear programming, Optim. Lett., 3 (2009), 161-170. doi: 10.1007/s11590-008-0102-9
    [17] O. Mangasarian, Primal-dual bilinear programming solution of the absolute value equation, Optim. Lett., 6 (2012), 1527-1533. doi: 10.1007/s11590-011-0347-6
    [18] O. Mangasarian, R. Meyer, Absolute value equations, Linear Algebra Appl., 419 (2006), 359-367. doi: 10.1016/j.laa.2006.05.004
    [19] M. Noor, J. Iqbal, K. Noor, E. Al-Said, On an iterative method for solving absolute value equations, Optim. Lett., 6 (2012), 1027-1033. doi: 10.1007/s11590-011-0332-0
    [20] O. Prokopyev, On equivalent reformulations for absolute value equations, Comput. Optim. Appl., 44 (2009), 363-372. doi: 10.1007/s10589-007-9158-1
    [21] J. Rohn, A theorem of the alternatives for the equation Ax+B|x| = b, Linear Multilinear Algebra, 52 (2004), 421-426. doi: 10.1080/0308108042000220686
    [22] J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett., 8 (2014), 35-44. doi: 10.1007/s11590-012-0560-y
    [23] Y. Saad, Iterative methods for sparse linear systems, 2 Eds., Society for Industrial and Applied Mathematics, Philadelphia, 2003.
    [24] D. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8 (2014), 2191-2202. doi: 10.1007/s11590-014-0727-9
    [25] C. Zhang, Q. Wei, Global and finite convergence of a generalized Newton method for absolute value equations, J. Optim. Theory Appl., 143 (2009), 391-403. doi: 10.1007/s10957-009-9557-9
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)