Research article

Two effective inexact iteration methods for solving the generalized absolute value equations

  • Received: 23 May 2022 Revised: 03 August 2022 Accepted: 08 August 2022 Published: 22 August 2022
  • MSC : 47H10, 65H10

  • Modified Newton-type methods are efficient for solving the generalized absolute value equations. In this paper, to further accelerate the modified Newton-type methods, two new inexact modified Newton-type iteration methods are proposed. Sufficient conditions for the convergence of the two proposed inexact iteration methods are given. Moreover, several numerical examples are provided to demonstrate the efficacy of the new methods.

    Citation: Miao Guo, Qingbiao Wu. Two effective inexact iteration methods for solving the generalized absolute value equations[J]. AIMS Mathematics, 2022, 7(10): 18675-18689. doi: 10.3934/math.20221027




    The main research of this article concerns the generalized absolute value equation (GAVE):

    Ax − B|x| = b, (1.1)

    where A, B ∈ R^{n×n}, b ∈ R^n and |x| := (|x_1|, |x_2|, …, |x_n|)^T. When B is the identity matrix, the GAVE (1.1) reduces to the absolute value equation (AVE):

    Ax − |x| = b. (1.2)

    The GAVE (1.1) was first proposed by Rohn as a class of nonlinear, nondifferentiable optimization problems [1]. Since then, the AVE (1.2) and GAVE (1.1) have found applications in many areas of optimization, including the linear complementarity problem, bimatrix games, constrained least squares problems, and so on; see [1,2,3,4,5,6]. For instance, consider the well-known linear complementarity problem (LCP) [5]: determine z such that

    Mz + q ≥ 0,  z ≥ 0  and  z^T(Mz + q) = 0,  with M ∈ R^{n×n}, q ∈ R^n. (1.3)

    In [3], Mangasarian proved that the GAVE (1.1) and the LCP (1.3) can be transformed into each other under certain conditions, and he also established conditions for the unique solvability of the AVE. Many scholars have since further studied the solvability and uniqueness theory of the AVE (1.2) [7,8,9]. In early algorithmic research, several efficient iteration methods were proposed for the GAVE (1.1) and AVE (1.2); see Mangasarian [2,10,11], Rohn [12], and so on. Then in [13], in order to overcome the difficulty of constructing the gradient of the absolute value term, Mangasarian presented the generalized Newton (GN) algorithm by using the generalized Jacobian matrix. This method may be summarized as follows:

    x^{(k+1)} = (A − D(x^{(k)}))^{-1} b,  k = 0,1,2,…, (1.4)

    where D(x) := diag(sgn(x)), x ∈ R^n, and sgn(x) denotes the vector whose components equal 1, 0 or −1 depending on whether the corresponding component of x is positive, zero or negative. The GN algorithm can also be applied to the GAVE (1.1) by using BD(x^{(k)}) in place of D(x^{(k)}). Based on the GN method, Li [14] proposed a modified generalized Newton method. It should be noticed that the generalized Jacobian matrix changes with the iteration index k, which greatly increases the amount of computation. To avoid the changing Jacobian matrix, many methods with fixed coefficient matrices have been proposed for the AVE (1.2) and GAVE (1.1). For instance, Rohn et al. [15] presented the Picard method for invertible A:

    x^{(k+1)} = A^{-1}(|x^{(k)}| + b),  k = 0,1,2,…. (1.5)
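As an illustration, the Picard step can be sketched in a few lines (a minimal dense sketch of our own, not the authors' code; a practical implementation would factorize A once instead of solving from scratch each sweep):

```python
import numpy as np

def picard_gave(A, B, b, x0, tol=1e-7, kmax=500):
    """Picard iteration x^(k+1) = A^{-1}(B|x^(k)| + b) for Ax - B|x| = b;
    for the AVE (1.2) simply take B = I."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(kmax):
        x = np.linalg.solve(A, B @ np.abs(x) + b)
        res = np.linalg.norm(A @ x - B @ np.abs(x) - b) / np.linalg.norm(b)
        if res <= tol:
            return x, k + 1
    return x, kmax
```

The scheme contracts whenever ∥A^{-1}∥_2 ∥B∥_2 < 1, the same condition that appears in the convergence theory of this paper.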

    In [16], Wang et al. added a term Ωx (where Ω is positive semi-definite, which guarantees that A + Ω is invertible) to both sides of the problem, noting that Ax + Ωx is differentiable while Ωx + B|x| + b is non-differentiable, and proposed the modified Newton-type (MN) iteration method:

    x^{(k+1)} = (A + Ω)^{-1}(Ωx^{(k)} + B|x^{(k)}| + b),  k = 0,1,2,…. (1.6)

    To further accelerate the AOR iteration method, Li [17] presented a generalized version of that algorithm. Actually, the above algorithms all belong to the family of Newton-based matrix splitting methods and their relaxed versions proposed in [18]. Sometimes the coefficient matrix A has special properties, for example A being an M-matrix, under which condition Ali et al. [19] proposed two new generalized Gauss-Seidel iteration methods. In [20], a new iteration method was presented by equivalently recasting the AVE (1.2) as a 2×2 block nonlinear equation. Also using matrix blocking, the block-diagonal and anti-block-diagonal splitting (BAS) method [21] and the modified BAS method [22] were proposed.

    Recently, in order to further improve the efficiency of solving the AVE (1.2), some inexact iteration algorithms have been studied. Each step of the Picard method can be viewed as solving a linear system with coefficient matrix A. Therefore, Salkuyeh [23] presented the Picard-HSS method for the AVE (1.2), which uses the HSS method [24] to approximate x^{(k+1)} at each Picard step. On this basis, Miao et al. [25] used the single-step HSS (SHSS) method [26] to solve the linear system in the Picard iteration, obtaining the Picard-SHSS method, a new Picard-type method. Inspired by these works, we adopt the modified Newton-type iteration, which converges faster, as the outer iteration, with HSS and SHSS respectively as the inner iteration, and obtain two new inexact iterative algorithms, abbreviated as the MN-HSS method and the MN-SHSS method.

    The remainder of this article is laid out as follows. In Section 2, the MN-HSS method is described, and we investigate its convergence property. Section 3 is devoted to introducing the MN-SHSS method and analyzing its convergence. Section 4 contains several numerical experiments, showing the feasibility and efficiency of the two new inexact iteration methods. Section 5 draws some conclusions.

    At the beginning of this section, some notations are given below. ones(n,1) represents the vector (1,1,…,1)^T ∈ R^n. ρ(A) indicates the spectral radius of A. The symbol (·)^T represents the transpose of a vector or matrix. Moreover, lim_{k→+∞} A^k = 0 if and only if ρ(A) < 1 (see [27,28]).

    Let A ∈ R^{n×n} be a non-Hermitian positive definite matrix. Then H = (1/2)(A + A^T) and S = (1/2)(A − A^T) are the Hermitian and skew-Hermitian parts of A, and A = H + S. The HSS iteration method was presented by Bai et al. [24] for solving the positive definite system Ax = b. The iterative process of the HSS method can be expressed as follows:

    x^{(l+1/2)} = (αI + H)^{-1}((αI − S)x^{(l)} + b),
    x^{(l+1)} = (αI + S)^{-1}((αI − H)x^{(l+1/2)} + b), (2.1)

    where α is a positive constant.
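A minimal dense sketch of the HSS sweep (2.1) (our own illustrative code, assuming A is positive definite and α > 0):

```python
import numpy as np

def hss(A, b, alpha, x0, lmax=500, tol=1e-10):
    """HSS iteration of Bai, Golub and Ng for the positive definite system
    Ax = b: alternate the Hermitian and skew-Hermitian half-steps of (2.1)."""
    n = A.shape[0]
    H = (A + A.T) / 2                      # Hermitian part
    S = (A - A.T) / 2                      # skew-Hermitian part
    I = np.eye(n)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(lmax):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x
```

For positive definite A the iteration converges for any α > 0; in practice the two shifted subsystems are solved by factorizations rather than the dense solves used here.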

    Section 1 has introduced the MN method, which is equivalent to the following form

    (A + Ω)x^{(k+1)} = Ωx^{(k)} + B|x^{(k)}| + b,  k = 0,1,2,…. (2.2)

    Similar to the Picard-HSS method, we can choose a suitable Ω to ensure the positive definiteness of A + Ω. The HSS method (2.1) can then be applied as the inner iteration of the MN method for solving the GAVE (1.1); the resulting scheme is called the MN-HSS method.

    Algorithm (The MN-HSS iteration method):

    For k = 0,1,…, until convergence, do:

    Set x^{(k,0)} := x^{(k)}

    For l = 0,1,…, until convergence, do:

    x^{(k,l+1/2)} = (αI + H)^{-1}((αI − S)x^{(k,l)} + Ωx^{(k)} + B|x^{(k)}| + b),
    x^{(k,l+1)} = (αI + S)^{-1}((αI − H)x^{(k,l+1/2)} + Ωx^{(k)} + B|x^{(k)}| + b), (2.3)

    where H = (1/2)((A+Ω) + (A+Ω)^T), S = (1/2)((A+Ω) − (A+Ω)^T), and α is a positive constant.

    Endfor

    Set x^{(k+1)} := x^{(k,l_k)}.

    Endfor
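The two nested loops above can be sketched as follows (our own dense illustration; the function name, the default tolerances and the fixed inner sweep count l_k are assumptions for the sketch, not the authors' implementation):

```python
import numpy as np

def mn_hss(A, B, b, Omega, alpha, x0, tol=1e-7, kmax=500, lk=10):
    """MN-HSS sketch: outer modified Newton-type step (2.2), approximated by
    lk inner HSS sweeps (2.3) on the coefficient matrix A + Omega."""
    n = A.shape[0]
    M = A + Omega
    H = (M + M.T) / 2                      # Hermitian part of A + Omega
    S = (M - M.T) / 2                      # skew-Hermitian part of A + Omega
    I = np.eye(n)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(kmax):
        rhs = Omega @ x + B @ np.abs(x) + b    # frozen within the outer step
        y = x.copy()
        for _ in range(lk):
            y_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ y + rhs)
            y = np.linalg.solve(alpha * I + S, (alpha * I - H) @ y_half + rhs)
        x = y
        if np.linalg.norm(A @ x - B @ np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, kmax
```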

    The following theorem gives sufficient conditions under which the MN-HSS algorithm converges.

    Theorem 2.1. Let A, B ∈ R^{n×n} and choose a suitable Ω ∈ R^{n×n} such that A + Ω is a positive definite matrix, so that H = (1/2)((A+Ω) + (A+Ω)^T) and S = (1/2)((A+Ω) − (A+Ω)^T) are the Hermitian and skew-Hermitian parts of A + Ω, respectively. Let also ∥A^{-1}∥_2 ∥B∥_2 < 1 and ∥(A+Ω)^{-1}∥_2 < 1/(∥B∥_2 + ∥Ω∥_2). Then the GAVE (1.1) has a unique solution x^*, and for any starting vector x^{(0)} ∈ R^n and any sequence of positive integers l_k, k = 0,1,…, the iteration sequence {x^{(k)}}_{k=0}^∞ produced by the MN-HSS iteration method converges to x^* provided that l = liminf_{k→∞} l_k ≥ N, where N is a natural number satisfying

    ∥T_α^s∥_2 < (1 − ∥(A+Ω)^{-1}∥_2(∥B∥_2 + ∥Ω∥_2)) / (∥(A+Ω)^{-1}∥_2(∥B∥_2 + ∥A∥_2)),  for all s ≥ N,

    where T_α = M_α^{-1} N_α, M_α = (1/(2α))(αI + H)(αI + S), N_α = (1/(2α))(αI − H)(αI − S).

    Proof. Let G_α = M_α^{-1}. According to the iteration formula (2.3), the (k+1)th iterate x^{(k+1)} of the MN-HSS iteration method can be written as

    x^{(k+1)} = T_α^{l_k} x^{(k)} + Σ_{j=0}^{l_k−1} T_α^j G_α (Ωx^{(k)} + B|x^{(k)}| + b),  k = 0,1,2,…. (2.4)

    Since ∥A^{-1}∥_2 ∥B∥_2 < 1, the GAVE (1.1) has a unique solution x^* [3], which satisfies

    x^* = T_α^{l_k} x^* + Σ_{j=0}^{l_k−1} T_α^j G_α (Ωx^* + B|x^*| + b),  k = 0,1,2,…. (2.5)

    By subtracting (2.5) from (2.4), we have

    x^{(k+1)} − x^* = T_α^{l_k}(x^{(k)} − x^*) + Σ_{j=0}^{l_k−1} T_α^j G_α (Ω(x^{(k)} − x^*) + B|x^{(k)}| − B|x^*|). (2.6)

    It follows from [24] that ρ(T_α) < 1, so

    Σ_{j=0}^{l_k−1} T_α^j G_α = (I − T_α^{l_k})(I − T_α)^{-1} G_α = (I − T_α^{l_k})(I − M_α^{-1}N_α)^{-1} M_α^{-1} = (I − T_α^{l_k})(M_α − N_α)^{-1} = (I − T_α^{l_k})(A + Ω)^{-1}.

    Then Eq (2.6) becomes

    x^{(k+1)} − x^* = T_α^{l_k}(x^{(k)} − x^*) + (I − T_α^{l_k})(A + Ω)^{-1}(Ω(x^{(k)} − x^*) + B|x^{(k)}| − B|x^*|) = T_α^{l_k}((A + Ω)^{-1}A(x^{(k)} − x^*) − (A + Ω)^{-1}B(|x^{(k)}| − |x^*|)) + (A + Ω)^{-1}(Ω(x^{(k)} − x^*) + B|x^{(k)}| − B|x^*|).

    It is obvious that ∥|x| − |y|∥_2 ≤ ∥x − y∥_2 for any x, y ∈ R^n. Therefore,

    ∥x^{(k+1)} − x^*∥_2 ≤ [∥T_α^{l_k}∥_2(∥(A+Ω)^{-1}∥_2∥A∥_2 + ∥(A+Ω)^{-1}∥_2∥B∥_2) + ∥(A+Ω)^{-1}∥_2∥Ω∥_2 + ∥(A+Ω)^{-1}∥_2∥B∥_2] ∥x^{(k)} − x^*∥_2.

    On the other hand, since ρ(T_α) < 1, lim_{s→∞} T_α^s = 0. Therefore, there is a natural number N such that

    ∥T_α^s∥_2 < (1 − ∥(A+Ω)^{-1}∥_2(∥B∥_2 + ∥Ω∥_2)) / (∥(A+Ω)^{-1}∥_2(∥B∥_2 + ∥A∥_2)),  for all s ≥ N.

    Thus, if l = liminf_{k→∞} l_k ≥ N, the bracketed factor above is less than one, and the iteration sequence converges to x^*. This completes the proof.

    The MN-HSS method can also be expressed in the residual-updating form:

    Algorithm (The residual-updating variant of the MN-HSS iteration method):

    For k = 0,1,…, until convergence, do:

    Set d^{(k,0)} = 0 and b^{(k)} = B|x^{(k)}| + b − Ax^{(k)}

    For l = 0,1,…, until convergence, do:

    d^{(k,l+1/2)} = (αI + H)^{-1}((αI − S)d^{(k,l)} + b^{(k)}),
    d^{(k,l+1)} = (αI + S)^{-1}((αI − H)d^{(k,l+1/2)} + b^{(k)}),

    where H = (1/2)((A+Ω) + (A+Ω)^T), S = (1/2)((A+Ω) − (A+Ω)^T), and α is a positive constant.

    Endfor

    Set x^{(k+1)} = x^{(k)} + d^{(k,l_k)}.

    Endfor
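In code, the residual-updating variant only changes the inner loop's right-hand side and initial guess (again an illustrative sketch of our own, with assumed defaults; this is the form used in the experiments of Section 4):

```python
import numpy as np

def mn_hss_residual(A, B, b, Omega, alpha, x0, tol=1e-7, kmax=500, lk=10):
    """Residual-updating MN-HSS sketch: each outer step applies lk HSS sweeps
    to (A + Omega) d = b^(k) = B|x^(k)| + b - A x^(k), starting from d = 0,
    and then updates x^(k+1) = x^(k) + d."""
    n = A.shape[0]
    M = A + Omega
    H = (M + M.T) / 2
    S = (M - M.T) / 2
    I = np.eye(n)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(kmax):
        bk = B @ np.abs(x) + b - A @ x         # outer residual b^(k)
        d = np.zeros(n)
        for _ in range(lk):
            d_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ d + bk)
            d = np.linalg.solve(alpha * I + S, (alpha * I - H) @ d_half + bk)
        x = x + d
        if np.linalg.norm(A @ x - B @ np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, kmax
```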

    From the iteration scheme (2.3) we can see that each step of the HSS iteration method is equivalent to solving two linear subsystems, whose coefficient matrices are shifted Hermitian and shifted skew-Hermitian, respectively. The first subsystem, with Hermitian coefficient matrix, can be solved quickly by the conjugate gradient method or a Cholesky factorization. However, the other subsystem requires much more computation. To accelerate the algorithm, we replace the inner iteration with the single-step HSS (SHSS) method, which yields the new inexact method presented in this section, named the MN-SHSS iteration method.

    Compared with the HSS method, the SHSS method omits the second subsystem with the skew-Hermitian matrix, and can be expressed as

    (αI + H)x^{(l+1)} = (αI − S)x^{(l)} + b. (3.1)

    By restricting the iteration parameter α, Li et al. [26] proved that the SHSS method converges to the unique solution for any initial guess x^{(0)}.

    Algorithm (The MN-SHSS iteration method):

    For k = 0,1,…, until convergence, do:

    Set x^{(k,0)} := x^{(k)}

    For l = 0,1,…, until convergence, do:

    x^{(k,l+1)} = (αI + H)^{-1}((αI − S)x^{(k,l)} + Ωx^{(k)} + B|x^{(k)}| + b), (3.2)

    where H = (1/2)((A+Ω) + (A+Ω)^T), S = (1/2)((A+Ω) − (A+Ω)^T), and α is a positive constant.

    Endfor

    Set x^{(k+1)} := x^{(k,l_k)}.

    Endfor
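A sketch of the MN-SHSS loop (our own illustrative code, with assumed defaults): compared with MN-HSS, each inner sweep now costs only one Hermitian solve.

```python
import numpy as np

def mn_shss(A, B, b, Omega, alpha, x0, tol=1e-7, kmax=500, lk=10):
    """MN-SHSS sketch: the outer step is the same modified Newton-type
    iteration, but each inner sweep applies only the SHSS half-step (3.2),
    skipping the skew-Hermitian solve."""
    n = A.shape[0]
    M = A + Omega
    H = (M + M.T) / 2
    S = (M - M.T) / 2
    I = np.eye(n)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(kmax):
        rhs = Omega @ x + B @ np.abs(x) + b
        y = x.copy()
        for _ in range(lk):
            y = np.linalg.solve(alpha * I + H, (alpha * I - S) @ y + rhs)
        x = y
        if np.linalg.norm(A @ x - B @ np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, kmax
```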

    The next theorem shows that the MN-SHSS method is convergent under a restriction on the parameter α.

    Theorem 3.1. Let A, B ∈ R^{n×n} and choose a suitable Ω ∈ R^{n×n} such that A + Ω is a positive definite matrix, so that H = (1/2)((A+Ω) + (A+Ω)^T) and S = (1/2)((A+Ω) − (A+Ω)^T) are the Hermitian and skew-Hermitian parts of A + Ω, respectively. Let also ∥A^{-1}∥_2 ∥B∥_2 < 1, ∥(A+Ω)^{-1}∥_2 < 1/(∥B∥_2 + ∥Ω∥_2), and let α be a constant such that α > max{0, (δ_max^2 − λ_min^2)/(2λ_min)}, where δ_max is the largest singular value of S and λ_min is the smallest eigenvalue of H. Then the GAVE (1.1) has a unique solution x^*, and for any starting vector x^{(0)} ∈ R^n and any sequence of positive integers l_k, k = 0,1,…, the iteration sequence {x^{(k)}}_{k=0}^∞ produced by the MN-SHSS iteration method converges to x^* provided that l = liminf_{k→∞} l_k ≥ N, where N is a natural number satisfying

    ∥K_α^s∥_2 < (1 − ∥(A+Ω)^{-1}∥_2(∥B∥_2 + ∥Ω∥_2)) / (∥(A+Ω)^{-1}∥_2(∥B∥_2 + ∥A∥_2)),  for all s ≥ N,

    where K_α = M_α^{-1} N_α, M_α = αI + H, N_α = αI − S.

    Proof. According to the iteration formula (3.2), the (k+1)th iterate x^{(k+1)} of the MN-SHSS method can be written as

    x^{(k+1)} = K_α^{l_k} x^{(k)} + Σ_{j=0}^{l_k−1} K_α^j M_α^{-1} (Ωx^{(k)} + B|x^{(k)}| + b). (3.3)

    On the other hand, the unique solution x^* of the GAVE (1.1) satisfies

    x^* = K_α^{l_k} x^* + Σ_{j=0}^{l_k−1} K_α^j M_α^{-1} (Ωx^* + B|x^*| + b). (3.4)

    Subtracting (3.4) from (3.3), we have

    x^{(k+1)} − x^* = K_α^{l_k}(x^{(k)} − x^*) + Σ_{j=0}^{l_k−1} K_α^j M_α^{-1} (Ω(x^{(k)} − x^*) + B|x^{(k)}| − B|x^*|). (3.5)

    In [26], Li et al. proved that ρ(K_α) < 1 when α > max{0, (δ_max^2 − λ_min^2)/(2λ_min)}; hence Σ_{j=0}^{l_k−1} K_α^j M_α^{-1} = (I − K_α^{l_k})(A + Ω)^{-1}. Then (3.5) becomes

    x^{(k+1)} − x^* = K_α^{l_k}(x^{(k)} − x^*) + (I − K_α^{l_k})(A + Ω)^{-1}(Ω(x^{(k)} − x^*) + B|x^{(k)}| − B|x^*|). (3.6)

    Therefore, we can calculate that

    ∥x^{(k+1)} − x^*∥_2 ≤ [∥K_α^{l_k}∥_2(∥(A+Ω)^{-1}∥_2∥A∥_2 + ∥(A+Ω)^{-1}∥_2∥B∥_2) + ∥(A+Ω)^{-1}∥_2∥Ω∥_2 + ∥(A+Ω)^{-1}∥_2∥B∥_2] ∥x^{(k)} − x^*∥_2. (3.7)

    Since ρ(K_α) < 1, we have K_α^s → 0 as s → ∞. So there is a natural number N such that

    ∥K_α^s∥_2 < (1 − ∥(A+Ω)^{-1}∥_2(∥B∥_2 + ∥Ω∥_2)) / (∥(A+Ω)^{-1}∥_2(∥B∥_2 + ∥A∥_2)),  for all s ≥ N.

    Then, if l = liminf_{k→∞} l_k ≥ N, it is clear that ∥x^{(k+1)} − x^*∥_2 < ∥x^{(k)} − x^*∥_2; that is, the iteration sequence {x^{(k)}}_{k=0}^∞ produced by the MN-SHSS iteration method converges to x^*. This completes the proof.

    In the same way, the MN-SHSS method also has a residual-updating form:

    Algorithm (The residual-updating variant of the MN-SHSS iteration method):

    For k = 0,1,…, until convergence, do:

    Set d^{(k,0)} = 0 and b^{(k)} = B|x^{(k)}| + b − Ax^{(k)}

    For l = 0,1,…, until convergence, do:

    d^{(k,l+1)} = (αI + H)^{-1}((αI − S)d^{(k,l)} + b^{(k)}),

    where H = (1/2)((A+Ω) + (A+Ω)^T), S = (1/2)((A+Ω) − (A+Ω)^T), and α is a positive constant.

    Endfor

    Set x^{(k+1)} = x^{(k)} + d^{(k,l_k)}.

    Endfor

    In this section, the effectiveness of the MN-HSS and MN-SHSS methods is shown by comparing them with existing inexact iteration methods for solving the GAVE (1.1): the Picard-HSS method and the Picard-SHSS method. In addition, we also compare our methods with a descent method [29]. For this purpose, we compare the following quantities: the number of outer iteration steps (denoted by 'IT'), the elapsed CPU time (denoted by 'CPU') and the residual error (denoted by 'RES'), where 'RES' is defined as

    RES := ∥Ax^{(k)} − B|x^{(k)}| − b∥_2 / ∥b∥_2.

    In our computations, we use the residual-updating versions with the optimal parameters α (those yielding the fewest iteration steps for each algorithm) and the zero vector as the starting iterate. We use the Cholesky factorization to solve the systems with coefficient matrix αI + H and the LU factorization for the other subsystem. All runs are terminated once the residual error satisfies RES ≤ 10^{-7} or the maximum iteration number k_max = 500 is exceeded. As for the inner iterations, the stopping criterion of the different methods is set as

    ∥b^{(k)} − A d^{(k,l)}∥_2 / ∥b^{(k)}∥_2 ≤ 0.01,

    and a limit of 10 inner iteration steps (l_k = 10, k = 1,2,…) is used. The parameters of the descent method are chosen as σ = 0.2, δ = 0.8, γ = 0.001, p = 3 and ε_0 = 0.01.

    The two numerical experiments below are performed in MATLAB R2020b.

    Example 1. [30] The first example comes from the LCP (1.3). In [3,30], it is shown that the LCP (1.3) can be transformed into the following form:

    (M + I)x − (M − I)|x| = q,    with x = (1/2)((M − I)z + q), (4.1)

    which obviously belongs to the GAVE (1.1). We let A = M + I and B = M − I, where M = M̂ + μI, and the right-hand vector q is determined by q = −Mz^*, in which

    M̂ = Tridiag(−I, S, −I) ∈ R^{n×n}

    is a block tridiagonal matrix with the tridiagonal matrix

    S = tridiag(−1, 4, −1) ∈ R^{m×m}

    on its diagonal blocks and −I on its sub- and super-diagonal blocks, n = m^2. Then z^* = 1.2·ones(n,1) is the unique solution of the LCP (1.3), and the exact solution of the GAVE (4.1) is x^* = −0.6·ones(n,1).
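For concreteness, the Example 1 data can be assembled with Kronecker products as follows (a small dense sketch of our own, not the authors' MATLAB code; the sign convention q = −Mz^* makes z^* solve the LCP, and the identity Ax^* − B|x^*| = q can be checked numerically):

```python
import numpy as np

def example1(m, mu):
    """Assemble A = M + I, B = M - I with M = Mhat + mu*I, where Mhat is the
    block tridiagonal matrix Tridiag(-I, S, -I) and S = tridiag(-1, 4, -1)
    of order m, n = m^2. The right-hand side is q = -M z* with z* = 1.2*ones,
    so that z* solves the LCP and x* = -z*/2 solves the GAVE (4.1)."""
    T = np.eye(m, k=1) + np.eye(m, k=-1)        # super/sub-diagonal pattern
    S = 4 * np.eye(m) - T                       # tridiag(-1, 4, -1)
    n = m * m
    Mhat = np.kron(np.eye(m), S) - np.kron(T, np.eye(m))
    M = Mhat + mu * np.eye(n)
    A, B = M + np.eye(n), M - np.eye(n)
    z_star = 1.2 * np.ones(n)
    q = -M @ z_star
    x_star = -0.5 * z_star                      # = -0.6*ones(n,1)
    return A, B, q, x_star
```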

    In this experiment, we consider three scenarios for the parameter μ: μ = 4, −1, −4. The matrices A and M are both symmetric positive definite when μ = 4, and both symmetric indefinite when μ = −4. The matrix A is symmetric positive definite when μ = −1, but M is then symmetric indefinite. We choose four different sizes n (n = 3600, 4900, 6400, 8100) for each parameter μ. For the simplicity and efficiency of our experiments, we use the same matrix Ω = 5I in the MN-HSS and MN-SHSS methods. For each tested method, the value of α resulting in the fewest iteration steps is chosen; see Table 1. In Tables 2–4, we list the numerical results of the tested methods for μ = 4, −1, −4. In these tables, "-" indicates that the corresponding algorithm fails to converge within the maximum number k_max of iteration steps, or diverges.

    Table 1.  The values of α for the testing methods in Example 1.
    Method n=3600 n=4900 n=6400 n=8100
    μ=4 Picard-HSS 7.6 7.5 7.6 7.5
    Picard-SHSS 0.8 0.1 0.1 0.1
    MN-HSS 11.4 11.6 11.2 11.4
    MN-SHSS 0.1 0.1 0.1 0.1
    μ=−1 Picard-HSS - - - -
    Picard-SHSS - - - -
    MN-HSS 10.4 10.4 10.4 10.4
    MN-SHSS 0.4 0.1 0.3 0.1
    μ=−4 Picard-HSS - - - -
    Picard-SHSS - - - -
    MN-HSS 8.7 7.9 7.8 7
    MN-SHSS 0.1 0.1 0.1 0.5

    Table 2.  Numerical results of the testing methods for Example 1 with μ=4.
    Methods n=3600 n=4900 n=6400 n=8100
    Picard-HSS IT 63 62 63 62
    CPU 7.2267 13.2534 23.4046 37.3651
    RES 7.8433e-8 8.7166e-8 7.8872e-8 8.7574e-8
    Picard-SHSS IT 72 73 73 72
    CPU 4.8826 7.5158 13.4642 20.8528
    RES 9.6906e-8 8.8579e-8 8.4765e-8 9.938e-8
    The descent method IT 3 3 3 3
    CPU 3.1777 7.5666 16.1594 31.5073
    RES 1.1654e-9 1.1643e-9 1.1635e-9 1.1628e-9
    MN-HSS IT 12 12 12 12
    CPU 1.8227 3.5410 6.6261 11.3629
    RES 3.7154e-8 3.9922e-8 4.8592e-8 3.8995e-8
    MN-SHSS IT 12 12 12 12
    CPU 0.7606 1.4825 2.6502 4.4347
    RES 4.9155e-8 4.9827e-8 5.0337e-8 5.0737e-8

    Table 3.  Numerical results of the testing methods for Example 1 with μ = −1.
    Methods n=3600 n=4900 n=6400 n=8100
    Picard-HSS IT - - - -
    CPU - - - -
    RES - - - -
    Picard-SHSS IT - - - -
    CPU - - - -
    RES - - - -
    The descent method IT 6 6 6 6
    CPU 6.1136 14.769 31.1904 64.9632
    RES 2.4038e-9 2.0506e-9 1.7879e-9 1.5849e-9
    MN-HSS IT 69 68 68 68
    CPU 7.9967 14.7294 25.6464 43.1252
    RES 8.7811e-8 9.9034e-8 9.3516e-8 8.8168e-8
    MN-SHSS IT 68 68 68 68
    CPU 4.5587 7.8019 14.1982 22.0090
    RES 9.9988e-8 9.5126e-8 8.7701e-8 8.5202e-8

    Table 4.  Numerical results of the testing methods for Example 1 with μ = −4.
    Methods n=3600 n=4900 n=6400 n=8100
    Picard-HSS IT - - - -
    CPU - - - -
    RES - - - -
    Picard-SHSS IT - - - -
    CPU - - - -
    RES - - - -
    The descent method IT - - - -
    CPU - - - -
    RES - - - -
    MN-HSS IT 46 46 46 46
    CPU 6.9628 13.1563 21.5587 36.8080
    RES 9.8167e-8 9.3627e-8 8.8073e-8 8.5249e-8
    MN-SHSS IT 46 46 46 46
    CPU 3.1611 5.7271 9.9884 16.9793
    RES 9.7483e-8 9.0358e-8 8.4596e-8 8.2290e-8


    From Table 2, we can see that when μ = 4, all five tested methods successfully produce an approximate solution of the GAVE (4.1) for every matrix size. For each tested method, the CPU time grows with the size n of the coefficient matrix. The numerical results show that the two inexact iteration methods proposed here outperform the Picard-HSS and Picard-SHSS methods in both the number of iteration steps and the CPU time. Although the descent method needs fewer iteration steps, our methods consume less CPU time. When μ = −1, −4, the convergence of the Picard-HSS and Picard-SHSS iteration methods cannot be guaranteed; accordingly, Tables 3 and 4 show that the two Picard-type methods converge slowly or do not converge at all, while the MN-HSS and MN-SHSS methods still converge quickly. The freedom in choosing the matrix Ω gives the proposed algorithms greater stability. When μ = −1, our methods generally outperform the descent method in elapsed CPU time, and when μ = −4 the descent method fails to converge within the maximum number of iterations.

    Example 2. The coefficient matrix A of the GAVE (1.1) comes from the finite difference approximation of the two-dimensional convection-diffusion equation

    −(u_xx + u_yy) + q(u_x + u_y) + pu = f(x,y),  (x,y) ∈ S,
    u(x,y) = 0,  (x,y) ∈ ∂S,

    where S = (0,1)×(0,1), ∂S is its boundary, q is a positive constant and p is a real number. For the discretization, the five-point finite difference scheme is applied to the diffusive terms and the central difference scheme to the convective terms, which yields the coefficient matrix

    A = T_x ⊗ I_m + I_m ⊗ T_y + pI_n,

    where I_m and I_n are the identity matrices of order m and n with n = m^2, ⊗ denotes the Kronecker product, and T_x = tridiag(a_2, a_1, a_3) ∈ R^{m×m}, T_y = tridiag(a_2, 0, a_3) ∈ R^{m×m} with a_1 = 4, a_2 = −1 − Re, a_3 = −1 + Re. Here Re = qh/2 and h = 1/(m+1) are the mesh Reynolds number and the equidistant step size, respectively. Let the coefficient matrix B = I and the exact solution x^* = (−1, 2, −3, …, (−1)^i i, …, (−1)^n n)^T; then the right-hand side vector is determined accordingly. In this instance, the GAVE actually becomes an AVE (1.2).

    In our experiments, we test different values of p (p = 0, −1), q (q = 0, 1) and n (n = 100, 400, 1600, 6400). The parameters α of the four tested methods resulting in the fewest iteration steps are chosen, while the matrix Ω in our methods is taken as Ω = ωI; see Tables 5 and 6. We refer to [25] for the parameters of the Picard-HSS and Picard-SHSS methods.

    Table 5.  The values of α and ω for the testing methods in Example 2 (p=0).
    Methods n=100 n=400 n=1600 n=6400
    q=0 Picard-HSS 11.69 12.6 13.4 13
    Picard-SHSS 5.745 6.5 6.4 6.6
    MN-HSS 2.2 2.5 2.2 2.2
    Ω=0.5I Ω=0.7I Ω=0.7I Ω=0.7I
    MN-SHSS 0.1 0.1 0.1 0.1
    Ω=0.5I Ω=0.7I Ω=0.7I Ω=0.7I
    q=1 Picard-HSS 12.01 13.6 14 13
    Picard-SHSS 5.926 6.6 6.75 6.6
    MN-HSS 2.2 2.4 2.2 2
    Ω=0.5I Ω=0.7I Ω=0.7I Ω=0.7I
    MN-SHSS 0.3 0.1 0.1 0.1
    Ω=0.5I Ω=0.7I Ω=0.7I Ω=0.7I

    Table 6.  The values of α and ω for the testing methods in Example 2 (p=−1).
    Methods n=100 n=400 n=1600 n=6400
    q=0 Picard-HSS 13.8 11.2 10.36 10.1
    Picard-SHSS 6.96 5.7 5.21 5.1
    MN-HSS 1.1 0.6 0.5 0.3
    Ω=I Ω=I Ω=I Ω=I
    MN-SHSS 0.1 0.1 0.01 0.01
    Ω=I Ω=I Ω=I Ω=I
    q=1 Picard-HSS 14 11.29 10.4 11
    Picard-SHSS 7.05 5.72 5.2 5.1
    MN-HSS 1 0.6 0.3 0.1
    Ω=I Ω=I Ω=I Ω=I
    MN-SHSS 0.1 0.1 0.1 0.1
    Ω=I Ω=I Ω=I Ω=I


    From Tables 7 and 8, the iteration counts reveal that the Picard-type methods and the descent method need fewer outer steps than the MN-HSS method, but in terms of elapsed CPU time, the newly proposed MN-SHSS method generally has the best computational efficiency. In Tables 9 and 10, that is, when p = −1, the iteration counts and CPU times of the Picard-type methods grow dramatically with the matrix size, and the deterioration is even more pronounced for the descent method. In contrast, our methods still maintain low iteration counts and high computational efficiency. Therefore, the newly proposed methods are very effective.

    Table 7.  Numerical results of the testing methods for Example 2 with p=0 and q=0.
    Methods n=100 n=400 n=1600 n=6400
    Picard-HSS IT 36 32 30 28
    CPU 0.0149 0.2109 2.8908 41.9898
    RES 9.8815e-8 9.3643e-8 9.9900e-8 9.9043e-8
    Picard-SHSS IT 37 32 30 29
    CPU 0.0086 0.0747 1.5120 23.6302
    RES 9.4432e-8 9.6857e-8 9.7569e-8 9.9256e-8
    The descent method IT 5 5 5 5
    CPU 0.0160 0.0334 0.7701 26.7046
    RES 1.5532e-12 8.5090e-11 3.6631e-16 2.0878e-16
    MN-HSS IT 52 44 41 38
    CPU 0.0120 0.1491 2.4287 32.2431
    RES 9.3740e-8 8.0719e-8 7.6775e-8 9.9322e-8
    MN-SHSS IT 51 44 41 39
    CPU 0.0036 0.0294 0.5693 9.2022
    RES 8.1308e-8 8.0474e-8 7.9437e-8 7.6008e-8

    Table 8.  Numerical results of the testing methods for Example 2 with p=0 and q=1.
    Methods n=100 n=400 n=1600 n=6400
    Picard-HSS IT 35 32 31 28
    CPU 0.0131 0.2037 3.0079 42.4078
    RES 9.1372e-8 9.8614e-8 9.3177e-8 9.7012e-8
    Picard-SHSS IT 36 32 31 29
    CPU 0.0072 0.0782 1.5742 23.1650
    RES 9.8685e-8 9.4248e-8 9.4655e-8 9.6233e-8
    The descent method IT 6 5 5 5
    CPU 0.0048 0.0278 0.7420 26.1576
    RES 1.5836e-11 3.7231e-14 1.8960e-14 2.1382e-16
    MN-HSS IT 41 44 40 38
    CPU 0.0087 0.1642 2.1150 33.1043
    RES 8.5432e-8 9.9138e-8 9.6349e-8 9.4551e-8
    MN-SHSS IT 41 44 40 38
    CPU 0.0028 0.0299 0.5588 9.0679
    RES 8.4236e-8 9.8677e-8 9.9416e-8 9.8603e-8

    Table 9.  Numerical results of the testing methods for Example 2 with p=−1 and q=0.
    Methods n=100 n=400 n=1600 n=6400
    Picard-HSS IT 24 62 206 -
    CPU 0.0096 0.3811 19.5610 -
    RES 9.6164e-8 9.5608e-8 9.8436e-8 -
    Picard-SHSS IT 21 52 166 -
    CPU 0.0047 0.1255 8.2082 -
    RES 9.6537e-8 9.7425e-8 9.9905e-8 -
    The descent method IT 19 - - -
    CPU 0.0116 - - -
    RES 3.1376e-12 - - -
    MN-HSS IT 5 10 32 73
    CPU 0.0031 0.0700 3.0579 103.9036
    RES 9.5409e-8 7.5888e-8 6.886e-8 9.7832e-8
    MN-SHSS IT 4 5 4 6
    CPU 0.0007 0.0160 0.1692 5.6192
    RES 2.6017e-8 3.1319e-8 3.9399e-9 3.1310e-8

    Table 10.  Numerical results of the testing methods for Example 2 with p=−1 and q=1.
    Methods n=100 n=400 n=1600 n=6400
    Picard-HSS IT 24 61 203 -
    CPU 0.0095 0.3904 22.6934 -
    RES 9.7842e-8 9.9013e-8 9.3292e-8 -
    Picard-SHSS IT 21 51 166 -
    CPU 0.0043 0.1228 8.1885 -
    RES 9.5087e-8 9.5556e-8 9.5308e-8 -
    The descent method IT - - - -
    CPU - - - -
    RES - - - -
    MN-HSS IT 6 10 20 42
    CPU 0.0026 0.0735 1.9553 69.7912
    RES 1.0635e-8 6.2950e-8 5.5025e-8 9.4565e-8
    MN-SHSS IT 3 5 13 48
    CPU 0.0008 0.0138 0.6755 37.6225
    RES 8.1615e-8 6.7032e-8 7.7787e-8 9.9529e-8


    In this article, to accelerate the MN iteration method, we have proposed the MN-HSS and MN-SHSS iteration algorithms for solving the generalized absolute value equation. Moreover, we have given sufficient conditions under which our methods converge for the GAVE. Finally, we have provided several numerical experiments to confirm the feasibility and effectiveness of the newly presented methods.

    This work is supported by the National Natural Science Foundation of China (Grant no. 1227010254, Research on some efficient and fast algorithms for complex nonlinear problems).

    The authors declare that they have no conflicts of interest.



    [1] J. Rohn, A theorem of the alternatives for the equation Ax+B|x|=b, Linear Multilinear A., 52 (2004), 421–426. https://doi.org/10.1080/0308108042000220686 doi: 10.1080/0308108042000220686
    [2] O. L. Mangasarian, Absolute value programming, Comput. Optim. Appl., 36 (2007), 43–53. https://doi.org/10.1007/s10589-006-0395-5 doi: 10.1007/s10589-006-0395-5
    [3] O. L. Mangasarian, R. R. Meyer, Absolute value equations, Linear Algebra Appl., 419 (2006), 359–367. https://doi.org/10.1016/j.laa.2006.05.004 doi: 10.1016/j.laa.2006.05.004
    [4] S. L. Wu, P. Guo, Modulus-based matrix splitting algorithms for the quasi-complementarity problems, Appl. Numer. Math., 132 (2018), 127–137. https://doi.org/10.1016/j.apnum.2018.05.017 doi: 10.1016/j.apnum.2018.05.017
    [5] R. W. Cottle, J. S. Pang, R. E. Stone, The linear complementarity problem, Boston: Academic Press, 1992.
    [6] N. Zheng, K. Hayami, J. F. Yin, Modulus-type inner outer iteration methods for nonnegative constrained least squares problems, SIAM J. Matrix Anal. Appl., 37 (2016), 1250–1278. https://doi.org/10.1137/141002220 doi: 10.1137/141002220
    [7] J. Rohn, On unique solvability of the absolute value equation, Optim. Lett., 3 (2009), 603–606. https://doi.org/10.1007/s11590-009-0129-6 doi: 10.1007/s11590-009-0129-6
    [8] S. L. Wu, C. X. Li, The unique solution of the absolute value equations, Appl. Math. Lett., 76 (2018), 195–200. https://doi.org/10.1016/j.aml.2017.08.012 doi: 10.1016/j.aml.2017.08.012
    [9] S. L. Wu, C. X. Li, A note on unique solvability of the absolute value equation, Optim. Lett., 14 (2020), 1957–1960. https://doi.org/10.1007/s11590-019-01478-x doi: 10.1007/s11590-019-01478-x
    [10] O. L. Mangasarian, Absolute value equation solution via concave minimization, Optim. Lett., 1 (2007), 3–8. https://doi.org/10.1007/s11590-006-0005-6 doi: 10.1007/s11590-006-0005-6
    [11] O. L. Mangasarian, Linear complementarity as absolute value equation solution, Optim. Lett., 8 (2014), 1529–1534. https://doi.org/10.1007/s11590-013-0656-z doi: 10.1007/s11590-013-0656-z
    [12] J. Rohn, An algorithm for solving the absolute value equations, ELA, 18 (2009), 589–599. https://doi.org/10.13001/1081-3810.1332 doi: 10.13001/1081-3810.1332
    [13] O. L. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett., 3 (2009), 101–108. https://doi.org/10.1007/s11590-008-0094-5 doi: 10.1007/s11590-008-0094-5
    [14] C. X. Li, A modified generalized Newton method for absolute value equations, J. Optim. Theory Appl., 170 (2016), 1055–1059. https://doi.org/10.1007/s10957-016-0956-4 doi: 10.1007/s10957-016-0956-4
    [15] J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett., 8 (2014), 35–44. https://doi.org/10.1007/s11590-012-0560-y doi: 10.1007/s11590-012-0560-y
    [16] A. Wang, Y. Cao, J. X. Chen, Modified Newton-type iteration methods for generalized absolute value equations, J. Optim. Theory Appl., 181 (2019), 216–230. https://doi.org/10.1007/s10957-018-1439-6 doi: 10.1007/s10957-018-1439-6
    [17] C. X. Li, A generalization of the AOR iteration method for solving absolute value equations, Electron. Res. Arch., 30 (2022), 1062–1074. https://doi.org/10.3934/era.2022056 doi: 10.3934/era.2022056
    [18] H. Y. Zhou, S. L. Wu, C. X. Li, Newton-based matrix splitting method for generalized absolute value equation, J. Comput. Appl. Math., 394 (2021), 113578. https://doi.org/10.1016/j.cam.2021.113578 doi: 10.1016/j.cam.2021.113578
    [19] R. Ali, I. Khan, A. Ali, A. Mohamed, Two new generalized iteration methods for solving absolute value equations using M-matrix, AIMS Mathematics, 7 (2022), 8176–8187. https://doi.org/10.3934/math.2022455 doi: 10.3934/math.2022455
    [20] Y. F. Ke, The new iteration algorithm for absolute value equation, Appl. Math. Lett., 99 (2020), 105990. https://doi.org/10.1016/j.aml.2019.07.021 doi: 10.1016/j.aml.2019.07.021
    [21] C. X. Li, S. L. Wu, Block-diagonal and anti-block-diagonal splitting iteration method for absolute value equation, In: Simulation tools and techniques, Springer, Cham, 369 (2021), 572–581. https://doi.org/10.1007/978-3-030-72792-5_45
    [22] C. X. Li, L. Q. Yong, Modified BAS iteration method for absolute value equation, AIMS Mathematics, 7 (2021), 606–616. https://doi.org/10.3934/math.2022038 doi: 10.3934/math.2022038
    [23] D. K. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8 (2014), 2191–2202. https://doi.org/10.1007/s11590-014-0727-9 doi: 10.1007/s11590-014-0727-9
    [24] Z. Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), 603–626. https://doi.org/10.1137/S0895479801395458 doi: 10.1137/S0895479801395458
    [25] S. X. Miao, X. T. Xiong, J. Wen, On Picard-SHSS iteration method for absolute value equation, AIMS Mathematics, 6 (2021), 1743–1753. https://doi.org/10.3934/math.2021104 doi: 10.3934/math.2021104
    [26] C. X. Li, S. L. Wu, A single-step HSS method for non-Hermitian positive definite linear systems, Appl. Math. Lett., 44 (2015), 26–29. https://doi.org/10.1016/j.aml.2014.12.013 doi: 10.1016/j.aml.2014.12.013
    [27] O. Axelsson, Iterative solution methods, Cambridge: Cambridge University Press, 1994. https://doi.org/10.1017/CBO9780511624100
    [28] Y. Saad, Iterative methods for sparse linear systems, 2 Eds., Philadelphia: SIAM, 2003. https://doi.org/10.1137/1.9780898718003
    [29] J. Y. Tang, J. C. Zhou, A quadratically convergent descent method for the absolute value equation Ax+B|x|=b, Oper. Res. Lett., 47 (2019), 229–234. https://doi.org/10.1016/j.orl.2019.03.014 doi: 10.1016/j.orl.2019.03.014
    [30] Z. Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 17 (2010), 917–933. https://doi.org/10.1002/nla.680 doi: 10.1002/nla.680
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
