
A generalization of the AOR iteration method for solving absolute value equations

  • In this paper, based on the accelerated overrelaxation (AOR) iteration method, a generalization of the AOR iteration method, called the GAOR method, is presented to solve the absolute value equations (AVE). The convergence conditions of the GAOR method are obtained, and numerical experiments are presented to verify its feasibility.

    Citation: Cui-Xia Li. A generalization of the AOR iteration method for solving absolute value equations[J]. Electronic Research Archive, 2022, 30(3): 1062-1074. doi: 10.3934/era.2022056




    In this paper, based on the AOR iteration method, we consider the numerical solution of the absolute value equations (AVE)

    $$Ax - |x| = b, \qquad (1.1)$$

    where $A \in \mathbb{R}^{n\times n}$, $b \in \mathbb{R}^n$, and $|x|$ denotes the vector whose components are the absolute values of the components of $x \in \mathbb{R}^n$. If $|x|$ is replaced by $B|x|$ in (1.1), the general AVE is obtained; see [1,2]. Many optimization problems, including linear programming, convex quadratic programming and the linear complementarity problem [3,4,5], can be formulated as the AVE (1.1), so the AVE (1.1) has attracted considerable attention.

    To efficiently solve the AVE (1.1), a great deal of effort has been devoted in recent years to developing iteration methods. For example, in [6], Mangasarian presented a generalized Newton method for solving the AVE (1.1), which can be written as

    $$x^{(k+1)} = \big(A - D(x^{(k)})\big)^{-1} b, \qquad k = 0, 1, \ldots, \qquad (1.2)$$

    where $D(x^{(k)})$ is the diagonal matrix $D(x^{(k)}) = \mathrm{diag}(\mathrm{sign}(x^{(k)}))$. Other versions of the generalized Newton method can be found in [7,8,9,10]. Clearly, when the generalized Newton method is used to solve the AVE (1.1), a linear system with the matrix $A - D(x^{(k)})$ must be solved at every step. Since $A - D(x^{(k)})$ changes with the iteration index $k$, the computational cost of the generalized Newton method may be very high. To keep the iteration matrix unchanged, the following Picard iteration method was considered in [11]:

    $$x^{(k+1)} = A^{-1}\big(|x^{(k)}| + b\big), \qquad k = 0, 1, \ldots. \qquad (1.3)$$
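    For illustration, here is a minimal NumPy/SciPy sketch of the Picard iteration (1.3); the function name, tolerance and iteration cap are our own choices rather than anything prescribed in [11]. Factorizing $A$ once makes explicit that the coefficient matrix stays fixed over all iterations:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def picard(A, b, tol=1e-6, kmax=500):
    """Sketch of the Picard iteration (1.3) for Ax - |x| = b.

    A is factorized once, so every step reuses the same LU factors
    instead of re-solving with a changing matrix as in (1.2).
    """
    lu_piv = lu_factor(A)                      # factorize A once
    x = np.zeros_like(b, dtype=float)
    for k in range(kmax):
        x = lu_solve(lu_piv, np.abs(x) + b)    # x^{k+1} = A^{-1}(|x^k| + b)
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1                    # converged
    return x, kmax
```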

    When the matrix $A$ in (1.3) is ill-conditioned, each Picard iteration requires solving an ill-conditioned linear system. To avoid inverting $A$, Zhu et al. [13] used the Hermitian and skew-Hermitian splitting (HSS) of $A$ from [12] to construct the nonlinear HSS-like method and discussed its convergence; for other versions of the nonlinear HSS-like method, see [14,15,16]. In addition, in [17], the classical AOR iteration method was extended to solve the AVE. Other related works are [18,19,20,21,22,23,24].

    Recently, based on the methodology of the Gauss-Seidel method, together with the matrix splitting $A = D - L - U$, where $D = \mathrm{diag}(A)$ and $L$ and $U$ are the strictly lower and strictly upper triangular matrices obtained from $A$, respectively, the generalized Gauss-Seidel (GGS) iteration method was proposed in [25] to solve the AVE (1.1); it takes the form

    $$(D - L)x^{(k+1)} - |x^{(k+1)}| = Ux^{(k)} + b, \qquad k = 0, 1, \ldots. \qquad (1.4)$$

    Numerical experiments showed the efficiency of the GGS method.

    In this paper, inspired by the work in [25] and based on the AOR iteration method, a generalization of the AOR iteration method (GAOR) is presented to solve the AVE (1.1), and its convergence conditions are discussed in detail. Numerical experiments demonstrate the effectiveness of the GAOR method.

    The remainder of the paper is organized as follows: Section 2 reviews some preliminaries. Section 3 presents the GAOR iteration method and its convergence conditions. Section 4 reports some numerical results to show the efficiency of the GAOR method. Finally, Section 5 draws some conclusions.

    Let $C = (c_{ij}) \in \mathbb{R}^{n\times n}$; $\mathrm{diag}(C)$ denotes the diagonal part of the matrix $C$. For $A = (a_{ij}), B = (b_{ij}) \in \mathbb{R}^{n\times n}$, we write $A \ge B$ if $a_{ij} \ge b_{ij}$ for $i, j = 1, 2, \ldots, n$. The matrix $A$ is called nonnegative if $A \ge 0$; further, $A - B \ge 0$ if and only if $A \ge B$. These definitions carry over immediately to vectors by identifying them with $n \times 1$ matrices. A matrix $A = (a_{ij}) \in \mathbb{R}^{n\times n}$ is called a Z-matrix if $a_{ij} \le 0$ for $i \ne j$; an L-matrix if $A$ is a Z-matrix and $a_{ii} > 0$ for $i = 1, \ldots, n$; an M-matrix if $A$ is a Z-matrix and $A^{-1} \ge 0$; and an H-matrix if its comparison matrix $\langle A\rangle = (\langle a\rangle_{ij}) \in \mathbb{R}^{n\times n}$ is an M-matrix, where

    $$\langle a\rangle_{ij} = \begin{cases} |a_{ij}|, & i = j, \\ -|a_{ij}|, & i \ne j, \end{cases} \qquad i, j = 1, 2, \ldots, n.$$

    If $A$ is an M-matrix and $B$ is a Z-matrix, then $A \le B$ implies that $B$ is an M-matrix [26]. A matrix $A$ is irreducible if the directed graph associated with $A$ is strongly connected [27].

    Let $A = M - N$. This is called a matrix splitting of $A$ if $\det(M) \ne 0$; a regular splitting if $M^{-1} \ge 0$ and $N \ge 0$; and an M-splitting of $A$ if $M$ is an M-matrix and $N \ge 0$. Evidently, if $A = M - N$ is an M-splitting and $A$ is a nonsingular M-matrix, then $\rho(M^{-1}N) < 1$, where $\rho(\cdot)$ denotes the spectral radius of a matrix.

    Lemma 2.1. [28] Let $A$ be an H-matrix. Then $|A^{-1}| \le \langle A\rangle^{-1}$.

    Lemma 2.2. [29] Let $x, y \in \mathbb{R}^n$. Then $\big\||x| - |y|\big\|_2 \le \|x - y\|_2$.

    In this section, the generalized AOR (GAOR) method is introduced to solve the AVE (1.1). For this purpose, we split A into

    $$A = \frac{1}{\omega}(M - N), \qquad (3.1)$$

    with

    $$M = \Omega + D - rL \quad \text{and} \quad N = \Omega + (1-\omega)D + (\omega - r)L + \omega U,$$

    where $\Omega$ is a positive diagonal matrix, $\omega$ and $r$ are real parameters with $\omega \ne 0$, $D = \mathrm{diag}(A)$, and $L$ and $U$ are as defined previously. Indeed, $M - N = \omega D - \omega L - \omega U = \omega A$, so (3.1) holds. Based on the matrix splitting (3.1), the AVE (1.1) can be rewritten as the fixed-point equation

    $$\frac{1}{\omega}(\Omega + D - rL)x - |x| = \frac{1}{\omega}\big[\Omega + (1-\omega)D + (\omega - r)L + \omega U\big]x + b$$

    or

    $$(\Omega + D - rL)x - \omega|x| = \big[\Omega + (1-\omega)D + (\omega - r)L + \omega U\big]x + \omega b, \qquad (3.2)$$

    then, for $k = 0, 1, \ldots$, we can define the GAOR method for solving (1.1) as

    $$(\Omega + D - rL)x^{(k+1)} - \omega|x^{(k+1)}| = \big[\Omega + (1-\omega)D + (\omega - r)L + \omega U\big]x^{(k)} + \omega b. \qquad (3.3)$$

    Lemma 3.1. Let $\alpha > \beta > 0$. Then the scalar equation $\alpha x - \beta|x| = b$ with $b \in \mathbb{R}$ has exactly one solution in $\mathbb{R}$. If $b \ge 0$, the solution is $x = \frac{b}{\alpha - \beta}$; otherwise, $x = \frac{b}{\alpha + \beta}$.

    Proof. If $x \ge 0$, the equation reduces to $(\alpha - \beta)x = b$, whose solution $x = b/(\alpha - \beta)$ is nonnegative exactly when $b \ge 0$; if $x < 0$, it reduces to $(\alpha + \beta)x = b$, whose solution $x = b/(\alpha + \beta)$ is negative exactly when $b < 0$. Hence the solution exists and is unique.

    Next, based on Lemma 3.1, we present Algorithm 1 to carry out each step of the GAOR iteration method (3.3), provided that all diagonal entries of the matrix $\Omega + D$ are greater than $\omega$. Here $a_{ii}$ and $\omega_{ii}$ denote the diagonal entries of $A$ and $\Omega$, respectively, so that the $i$-th equation of (3.3) has the scalar form of Lemma 3.1 with $\alpha = a_{ii} + \omega_{ii}$ and $\beta = \omega$.

    Algorithm 1:

    For $k = 0, 1, \ldots$, until convergence, do

    For $i = 1, 2, \ldots, n$, do

    Set

    $$s := \omega b_i + \big(\omega_{ii} + (1-\omega)a_{ii}\big)x_i^{(k)} - (\omega - r)\sum_{j=1}^{i-1} a_{ij}x_j^{(k)} - \omega\sum_{j=i+1}^{n} a_{ij}x_j^{(k)} - r\sum_{j=1}^{i-1} a_{ij}x_j^{(k+1)}$$

    (with the initial guess $x^{(0)} = 0$, so that $s = \omega b_1$ at the very first step);

    If $s \ge 0$, then

    $$x_i^{(k+1)} := \frac{s}{a_{ii} + \omega_{ii} - \omega};$$

    Else

    $$x_i^{(k+1)} := \frac{s}{a_{ii} + \omega_{ii} + \omega};$$

    Endif

    Enddo

    Enddo
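    For concreteness, the following NumPy sketch implements Algorithm 1; the function name, the zero initial guess and the residual-based stopping test are our own choices, not part of the paper.

```python
import numpy as np

def gaor(A, b, Omega, omega, r, tol=1e-6, kmax=500):
    """Sketch of the GAOR iteration (3.3) carried out as in Algorithm 1.

    Assumes every diagonal entry of Omega + D exceeds omega, so that each
    scalar equation (a_ii + w_ii) t - omega*|t| = s is uniquely solvable
    by Lemma 3.1 with alpha = a_ii + w_ii and beta = omega.
    """
    n = A.shape[0]
    d = np.diag(A)                 # diagonal entries a_ii
    w = np.diag(Omega)             # diagonal entries w_ii
    x = np.zeros(n)                # x^{(k)}, starting from zero
    for k in range(kmax):
        x_new = np.empty(n)        # x^{(k+1)}
        for i in range(n):
            # Right-hand side s of the i-th scalar equation of (3.3)
            s = (omega * b[i]
                 + (w[i] + (1.0 - omega) * d[i]) * x[i]
                 - (omega - r) * (A[i, :i] @ x[:i])
                 - omega * (A[i, i + 1:] @ x[i + 1:])
                 - r * (A[i, :i] @ x_new[:i]))
            # Componentwise solve by Lemma 3.1
            if s >= 0:
                x_new[i] = s / (d[i] + w[i] - omega)
            else:
                x_new[i] = s / (d[i] + w[i] + omega)
        x = x_new
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1        # converged
    return x, kmax
```

    Setting $r = \omega$ in this sketch yields the GSOR method discussed below, and $\omega = r = 1$ with $\Omega = 0$ formally recovers the GGS iteration (1.4).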

    Theorem 3.1. Let $\Omega$ be a positive diagonal matrix and let all singular values of the matrix $\Omega + D - rL$ exceed $\omega$, with $\omega, r > 0$. If

    $$\big\|I - \omega(\Omega + D - rL)^{-1}A\big\|_2 < 1 - \omega\big\|(\Omega + D - rL)^{-1}\big\|_2, \qquad (3.4)$$

    then the GAOR method (3.3) converges to the unique solution of the AVE (1.1) for an arbitrary initial guess $x^{(0)} \in \mathbb{R}^n$.

    Proof. Since all singular values of the matrix $\Omega + D - rL$ exceed $\omega$ with $\omega, r > 0$, we have

    $$\omega\big\|(\Omega + D - rL)^{-1}\big\|_2 = \omega\,\sigma_{\max}\big((\Omega + D - rL)^{-1}\big) = \frac{\omega}{\sigma_{\min}(\Omega + D - rL)} < 1.$$

    Let $x^*$ be a solution of the AVE (1.1). Then from (3.2) we have

    $$(\Omega + D - rL)x^* - \omega|x^*| = \big[\Omega + (1-\omega)D + (\omega - r)L + \omega U\big]x^* + \omega b. \qquad (3.5)$$

    Subtracting (3.5) from (3.3), we obtain

    $$(\Omega + D - rL)(x^{(k+1)} - x^*) = \omega\big(|x^{(k+1)}| - |x^*|\big) + \big(\Omega + (1-\omega)D + (\omega - r)L + \omega U\big)(x^{(k)} - x^*),$$

    which is equivalent to

    $$x^{(k+1)} - x^* = (\Omega + D - rL)^{-1}\Big[\omega\big(|x^{(k+1)}| - |x^*|\big) + \big(\Omega + (1-\omega)D + (\omega - r)L + \omega U\big)(x^{(k)} - x^*)\Big].$$

    Taking the 2-norm in the latter equation yields

    $$\begin{aligned} \|x^{(k+1)} - x^*\|_2 &= \Big\|(\Omega + D - rL)^{-1}\big[\omega(|x^{(k+1)}| - |x^*|) + (\Omega + (1-\omega)D + (\omega - r)L + \omega U)(x^{(k)} - x^*)\big]\Big\|_2 \\ &\le \omega\big\|(\Omega + D - rL)^{-1}(|x^{(k+1)}| - |x^*|)\big\|_2 + \big\|(\Omega + D - rL)^{-1}(\Omega + (1-\omega)D + (\omega - r)L + \omega U)(x^{(k)} - x^*)\big\|_2 \\ &\le \omega\big\|(\Omega + D - rL)^{-1}\big\|_2\,\big\||x^{(k+1)}| - |x^*|\big\|_2 + \big\|(\Omega + D - rL)^{-1}(\Omega + (1-\omega)D + (\omega - r)L + \omega U)\big\|_2\,\|x^{(k)} - x^*\|_2 \\ &\le \omega\big\|(\Omega + D - rL)^{-1}\big\|_2\,\|x^{(k+1)} - x^*\|_2 + \big\|(\Omega + D - rL)^{-1}(\Omega + (1-\omega)D + (\omega - r)L + \omega U)\big\|_2\,\|x^{(k)} - x^*\|_2 \\ &= \omega\big\|(\Omega + D - rL)^{-1}\big\|_2\,\|x^{(k+1)} - x^*\|_2 + \big\|I - \omega(\Omega + D - rL)^{-1}A\big\|_2\,\|x^{(k)} - x^*\|_2, \end{aligned}$$

    where the third inequality uses Lemma 2.2 and the last equality uses $(\Omega + D - rL) - \omega A = \Omega + (1-\omega)D + (\omega - r)L + \omega U$. Further,

    $$\|x^{(k+1)} - x^*\|_2 \le \frac{\big\|I - \omega(\Omega + D - rL)^{-1}A\big\|_2}{1 - \omega\big\|(\Omega + D - rL)^{-1}\big\|_2}\,\|x^{(k)} - x^*\|_2.$$

    By (3.4), the factor multiplying $\|x^{(k)} - x^*\|_2$ is less than one, so the sequence $\{x^{(k)}\}$ converges to $x^*$.

    To show the uniqueness of the solution, let y be another solution of the AVE (1.1). Then, from Eq (3.2) we have

    $$(\Omega + D - rL)y - \omega|y| = \big[\Omega + (1-\omega)D + (\omega - r)L + \omega U\big]y + \omega b. \qquad (3.6)$$

    Subtracting (3.6) from (3.5) and proceeding as above, we obtain

    $$\|y - x^*\|_2 \le \frac{\big\|I - \omega(\Omega + D - rL)^{-1}A\big\|_2}{1 - \omega\big\|(\Omega + D - rL)^{-1}\big\|_2}\,\|y - x^*\|_2 < \|y - x^*\|_2,$$

    which is a contradiction whenever $y \ne x^*$. Therefore, $y = x^*$. This completes the proof.
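    In practice, condition (3.4) of Theorem 3.1 can be checked numerically for a concrete choice of $A$, $\Omega$, $\omega$ and $r$. The helper below is our own sketch, using the splitting $A = D - L - U$ of Section 1:

```python
import numpy as np

def gaor_condition(A, Omega, omega, r):
    """Check the sufficient convergence condition (3.4) of Theorem 3.1.

    Note that rhs > 0 is equivalent to the hypothesis that all singular
    values of Omega + D - rL exceed omega.
    """
    n = A.shape[0]
    L = -np.tril(A, -1)                      # A = D - L - U
    M = Omega + np.diag(np.diag(A)) - r * L  # M = Omega + D - rL
    Minv = np.linalg.inv(M)
    lhs = np.linalg.norm(np.eye(n) - omega * Minv @ A, 2)
    rhs = 1.0 - omega * np.linalg.norm(Minv, 2)
    return lhs < rhs
```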

    When $\omega = r$ in (3.3), the GAOR method reduces to a generalized SOR (GSOR) method, whose convergence theory is given in Corollary 3.1.

    Corollary 3.1. Let $\Omega$ be a positive diagonal matrix and let all singular values of the matrix $\Omega + D - \omega L$ exceed $\omega$, with $\omega > 0$. If

    $$\big\|I - \omega(\Omega + D - \omega L)^{-1}A\big\|_2 < 1 - \omega\big\|(\Omega + D - \omega L)^{-1}\big\|_2, \qquad (3.7)$$

    then the GSOR method (i.e., (3.3) with $r = \omega$) converges to the unique solution of the AVE (1.1) for an arbitrary initial guess $x^{(0)} \in \mathbb{R}^n$.

    Further, if $\omega = 1$ in Corollary 3.1, then the convergence theory of the GGS method is obtained, as described in Corollary 3.2.

    Corollary 3.2. Let $\Omega$ be a positive diagonal matrix and let all singular values of the matrix $\Omega + D - L$ exceed 1. If

    $$\big\|I - (\Omega + D - L)^{-1}A\big\|_2 < 1 - \big\|(\Omega + D - L)^{-1}\big\|_2, \qquad (3.8)$$

    then the GGS method (i.e., (3.3) with $\omega = r = 1$) converges to the unique solution of the AVE (1.1) for an arbitrary initial guess $x^{(0)} \in \mathbb{R}^n$.

    Remark 3.1. In Corollary 3.2, if $\Omega$ is the zero matrix, then the results of Corollary 3.2 are similar to Theorem 3 in [25]. In fact, if the assumption 'all singular values of the matrix $D - L$ exceed 1' is used instead of 'the diagonal entries of $A$ are greater than one and the matrix $D - L - I$ is strictly row diagonally dominant' in Theorem 3 of [25], then Theorem 3 in [25] still holds.

    It is not difficult to see that the above theoretical results (Theorem 3.1, Corollaries 3.1 and 3.2) are a little too general and not easy to verify. To remedy this, an effective approach is to restrict the class of matrices $A$. Next, we focus on the case where $A$ is an H-matrix with positive diagonal entries, for which a simple convergence condition for the GAOR method (3.3) can be obtained; see Theorem 3.2.

    Theorem 3.2. Let $A$ be an H-matrix with positive diagonal entries, and let the positive diagonal matrix $\Omega$ satisfy $\Omega \ge \omega I$. If $\omega$ and $r$ satisfy

    $$0 < r \le \omega \le 1, \qquad (3.9)$$

    then the GAOR method (3.3) converges to the unique solution of (1.1) for an arbitrary initial guess $x^{(0)} \in \mathbb{R}^n$.

    Proof. Since $A$ is an H-matrix with positive diagonal entries, we have

    $$\langle A\rangle = |D| - |L| - |U|.$$

    Further, since $0 < r \le 1$,

    $$\langle A\rangle \le \langle D - L\rangle \le \langle D - rL\rangle.$$

    It follows that the matrix $\Omega + D - rL$ is an H-matrix with positive diagonal entries. By Lemma 2.1,

    $$\big|(\Omega + D - rL)^{-1}\big| \le \langle\Omega + D - rL\rangle^{-1}.$$

    From the proof of Theorem 3.1, we get

    $$x^{(k+1)} - x^* = (\Omega + D - rL)^{-1}\Big[\omega\big(|x^{(k+1)}| - |x^*|\big) + \big(\Omega + (1-\omega)D + (\omega - r)L + \omega U\big)(x^{(k)} - x^*)\Big].$$

    Further,

    $$\begin{aligned} |x^{(k+1)} - x^*| &= \Big|(\Omega + D - rL)^{-1}\big[\omega(|x^{(k+1)}| - |x^*|) + (\Omega + (1-\omega)D + (\omega - r)L + \omega U)(x^{(k)} - x^*)\big]\Big| \\ &\le \omega\big|(\Omega + D - rL)^{-1}\big|\,\big||x^{(k+1)}| - |x^*|\big| + \big|(\Omega + D - rL)^{-1}\big|\,\big|\Omega + (1-\omega)D + (\omega - r)L + \omega U\big|\,|x^{(k)} - x^*| \\ &\le \omega\langle\Omega + D - rL\rangle^{-1}|x^{(k+1)} - x^*| + \langle\Omega + D - rL\rangle^{-1}\big|\Omega + (1-\omega)D + (\omega - r)L + \omega U\big|\,|x^{(k)} - x^*|. \end{aligned} \qquad (3.10)$$

    Since the matrix $\Omega + D - rL$ is an H-matrix with positive diagonal entries, the matrix $\langle\Omega + D - rL\rangle$ is an M-matrix and $\langle\Omega + D - rL\rangle^{-1} \ge 0$. Noting that $\Omega \ge \omega I$ and that $\langle\Omega + D - rL\rangle$ is lower triangular, it is easy to obtain

    $$\rho\big(\omega\langle\Omega + D - rL\rangle^{-1}\big) = \max_i \frac{\omega}{a_{ii} + \omega_{ii}} < 1,$$

    where $a_{ii} + \omega_{ii}$ denote the diagonal entries of the matrix $\Omega + D$. Hence, from (3.10), we get

    $$\begin{aligned} |x^{(k+1)} - x^*| &\le \big(I - \omega\langle\Omega + D - rL\rangle^{-1}\big)^{-1}\langle\Omega + D - rL\rangle^{-1}\big|\Omega + (1-\omega)D + (\omega - r)L + \omega U\big|\,|x^{(k)} - x^*| \\ &= \big(\langle\Omega + D - rL\rangle - \omega I\big)^{-1}\big|\Omega + (1-\omega)D + (\omega - r)L + \omega U\big|\,|x^{(k)} - x^*|. \end{aligned}$$

    Let

    $$\bar M = \langle\Omega + D - rL\rangle, \quad \bar N = \omega I + \big|\Omega + (1-\omega)D + (\omega - r)L + \omega U\big| \quad \text{and} \quad \bar A = \bar M - \bar N.$$

    Evidently, the GAOR method (3.3) converges to the unique solution of the AVE (1.1) if $\rho\big((\bar M - \omega I)^{-1}(\bar N - \omega I)\big) < 1$. Noting that $\bar A$ is an M-matrix, the matrix $\bar M - \omega I$ is an M-matrix and $\bar N - \omega I \ge 0$. Then $\bar A = (\bar M - \omega I) - (\bar N - \omega I)$ is an M-splitting. Therefore, $\rho\big((\bar M - \omega I)^{-1}(\bar N - \omega I)\big) < 1$.
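    The hypothesis of Theorem 3.2 can likewise be tested numerically. The sketch below is our own; it uses the definitions of Section 2, where a Z-matrix with nonnegative inverse is an M-matrix:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off it."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def satisfies_theorem_32(A):
    """Check that A has positive diagonal entries and is an H-matrix,
    i.e., <A> is an M-matrix (tested via nonnegativity of <A>^{-1})."""
    if np.any(np.diag(A) <= 0):
        return False
    try:
        return bool(np.all(np.linalg.inv(comparison_matrix(A)) >= -1e-12))
    except np.linalg.LinAlgError:
        return False
```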

    From Theorem 3.2, the convergence conditions of the corresponding GSOR and GGS methods for $A$ being an H-matrix with positive diagonal entries can be obtained; they are omitted here.

    In this section, we give some numerical experiments to assess the efficiency of the GAOR method for solving the AVE (1.1). We compare GAOR with the AOR method in [17] from two aspects: the number of iterations (denoted IT) and the CPU time in seconds (denoted CPU). Meanwhile, we also investigate the generalized Newton method [6], the nonlinear HSS-like method [13] and the Picard-HSS method [14]. The initial iterate is the zero vector, and all iterations are terminated once the relative residual satisfies

    $$\frac{\big\|Ax^{(k)} - |x^{(k)}| - b\big\|_2}{\|b\|_2} \le 10^{-6}$$

    or once the prescribed maximum number of iterations $k_{\max} = 500$ is exceeded. All tests are performed in MATLAB R2016b.

    Example 4.1. ([25,30]) Let $m$ be a prescribed positive integer and $n = m^2$. Consider the AVE (1.1), in which $A \in \mathbb{R}^{n\times n}$ is given by $A = \hat M + \mu I$, where

    $$\hat M = \mathrm{tridiag}(-I, S, -I) = \begin{bmatrix} S & -I & 0 & \cdots & 0 & 0 \\ -I & S & -I & \cdots & 0 & 0 \\ 0 & -I & S & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & S & -I \\ 0 & 0 & 0 & \cdots & -I & S \end{bmatrix} \in \mathbb{R}^{n\times n}$$

    with

    $$S = \mathrm{tridiag}(-1, 4, -1) = \begin{bmatrix} 4 & -1 & 0 & \cdots & 0 & 0 \\ -1 & 4 & -1 & \cdots & 0 & 0 \\ 0 & -1 & 4 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 4 & -1 \\ 0 & 0 & 0 & \cdots & -1 & 4 \end{bmatrix} \in \mathbb{R}^{m\times m},$$

    and $b = Ax^* - |x^*|$ with $x^* = (-1, 1, -1, 1, \ldots, -1, 1)^T \in \mathbb{R}^n$.

    Example 4.1 arises from a finite difference discretization, on an equidistant grid, of the problem of flow of water through a porous dam [30].
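    For reference, the coefficient matrix of Example 4.1 (and, by changing the tridiagonal entries, of Example 4.2 below) can be assembled with Kronecker products. This sketch merely restates the definitions above, with $\mu = 2$ as in the experiments below:

```python
import numpy as np

def example_matrix(m, mu=2.0, lower=-1.0, upper=-1.0):
    """Build A = M_hat + mu*I with n = m^2.

    Example 4.1: lower = upper = -1.0, so S = tridiag(-1, 4, -1)
                 and the off-diagonal blocks are -I, -I.
    Example 4.2: lower = -1.5, upper = -0.5, so S = tridiag(-1.5, 4, -0.5)
                 and the off-diagonal blocks are -1.5I and -0.5I.
    """
    I = np.eye(m)
    E_low = np.eye(m, k=-1)     # subdiagonal pattern
    E_up = np.eye(m, k=1)       # superdiagonal pattern
    S = 4.0 * I + lower * E_low + upper * E_up
    M_hat = np.kron(I, S) + lower * np.kron(E_low, I) + upper * np.kron(E_up, I)
    return M_hat + mu * np.eye(m * m)

# Right-hand side from the prescribed solution x* = (-1, 1, -1, 1, ...)^T
m = 32
A = example_matrix(m)
x_star = np.array([(-1.0) ** (i + 1) for i in range(m * m)])
b = A @ x_star - np.abs(x_star)

# Combined with the gaor sketch above, this corresponds to the
# omega = 0.9, r = 0.9, Omega = 0.1*I setting of Table 1:
# x, it = gaor(A, b, 0.1 * np.eye(m * m), omega=0.9, r=0.9)
```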

    Example 4.2. ([31]) Let $m$ be a prescribed positive integer and $n = m^2$. Consider the AVE (1.1), in which $A \in \mathbb{R}^{n\times n}$ is given by $A = \hat M + \mu I$, where

    $$\hat M = \mathrm{tridiag}(-1.5I, S, -0.5I) = \begin{bmatrix} S & -0.5I & 0 & \cdots & 0 & 0 \\ -1.5I & S & -0.5I & \cdots & 0 & 0 \\ 0 & -1.5I & S & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & S & -0.5I \\ 0 & 0 & 0 & \cdots & -1.5I & S \end{bmatrix} \in \mathbb{R}^{n\times n}$$

    with

    $$S = \mathrm{tridiag}(-1.5, 4, -0.5) = \begin{bmatrix} 4 & -0.5 & 0 & \cdots & 0 & 0 \\ -1.5 & 4 & -0.5 & \cdots & 0 & 0 \\ 0 & -1.5 & 4 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 4 & -0.5 \\ 0 & 0 & 0 & \cdots & -1.5 & 4 \end{bmatrix} \in \mathbb{R}^{m\times m},$$

    and $b = Ax^* - |x^*|$ with $x^* = (-1, 1, -1, 1, \ldots, -1, 1)^T \in \mathbb{R}^n$.

    To compare the GAOR method with the AOR method fairly, we test both with the same parameters. Under this setting, Tables 1–4 list iteration results for Examples 4.1 and 4.2 with $\mu = 2$, $\Omega = 0.1I$ and $\Omega = 0.5I$, illustrating the convergence behavior of the GAOR and AOR methods for different problem sizes $n$. We observe that both methods compute a satisfactory approximation to the solution of the AVE. Fixing $\omega$ and $n$, the iteration counts of the GAOR and AOR methods decrease as $r$ increases. Fixing $\omega$ and $r$, the iteration counts are almost insensitive to the problem size $n$. Further, the GAOR method requires fewer iteration steps than the AOR method, but more CPU time. The longer CPU time of the GAOR method stems from its componentwise implementation (Algorithm 1), which loops over the individual entries of the matrix. Making the GAOR method more competitive by optimizing this implementation is an interesting topic for future work.

    Table 1.  IT and CPU for Example 4.1 with ω = 0.9 and Ω = 0.1I.

                              r = 0.9    r = 0.5    r = 0.1
    n = 32²    GAOR  IT            20         23         27
                     CPU       0.3092     0.3764     0.4366
               AOR   IT            25         30         35
                     CPU       0.0065     0.0083     0.0095
    n = 64²    GAOR  IT            20         23         27
                     CPU       4.0596     4.6910     5.4559
               AOR   IT            26         30         35
                     CPU       0.0122     0.0150     0.0163
    n = 128²   GAOR  IT            20         24         27
                     CPU      58.8267    70.3705    79.2503
               AOR   IT            26         30         35
                     CPU       0.0474     0.0551     0.0719

    Table 2.  IT and CPU for Example 4.1 with ω = 0.9 and Ω = 0.5I.

                              r = 0.9    r = 0.5    r = 0.1
    n = 32²    GAOR  IT            23         26         30
                     CPU       0.4073     0.4193     0.5326
               AOR   IT            25         30         35
                     CPU       0.0038     0.0096     0.0098
    n = 64²    GAOR  IT            23         27         31
                     CPU       4.6584     5.4570     6.2490
               AOR   IT            26         30         35
                     CPU       0.0113     0.0132     0.0150
    n = 128²   GAOR  IT            23         27         31
                     CPU      67.7754    79.5942    93.9553
               AOR   IT            26         30         35
                     CPU       0.0400     0.0546     0.0738

    Table 3.  IT and CPU for Example 4.2 with ω = 0.9 and Ω = 0.1I.

                              r = 0.9    r = 0.5    r = 0.1
    n = 32²    GAOR  IT            15         20         25
                     CPU       0.2489     0.3082     0.4191
               AOR   IT            20         26         33
                     CPU       0.0021     0.0040     0.0046
    n = 64²    GAOR  IT            15         21         26
                     CPU       3.1600     4.3562     5.3924
               AOR   IT            20         27         34
                     CPU       0.0080     0.0114     0.0207
    n = 128²   GAOR  IT            15         21         27
                     CPU      46.3959    68.4617    79.7561
               AOR   IT            20         27         34
                     CPU       0.0330     0.0689     0.1058

    Table 4.  IT and CPU for Example 4.2 with ω = 0.9 and Ω = 0.5I.

                              r = 0.9    r = 0.5    r = 0.1
    n = 32²    GAOR  IT            17         23         29
                     CPU       0.3110     0.3740     0.4757
               AOR   IT            20         26         33
                     CPU       0.0021     0.0031     0.0060
    n = 64²    GAOR  IT            18         24         30
                     CPU       3.6997     4.8956     6.0457
               AOR   IT            20         27         34
                     CPU       0.0081     0.0114     0.0164
    n = 128²   GAOR  IT            18         24         30
                     CPU      52.7953    70.6259    97.6267
               AOR   IT            20         27         34
                     CPU       0.0332     0.0523     0.0686


    Tables 5 and 6 list the numerical results of the generalized Newton method, denoted 'GN'. Compared with the GAOR and AOR methods, the generalized Newton method requires the fewest iteration steps but takes the most time; among the three methods, its computational efficiency is the worst.

    Table 5.  Numerical results of GN for Example 4.1.

    n         32²       64²       128²
    IT          3         3          3
    CPU    0.2344    4.5520   235.1104

    Table 6.  Numerical results of GN for Example 4.2.

    n         32²       64²       128²
    IT          2         3          3
    CPU    0.5260    8.2994   508.0766


    Finally, we compare the GAOR method with the nonlinear HSS-like method and the Picard-HSS method under the same parameter; see Table 7. Since the nonlinear HSS-like method and the Picard-HSS method contain only one parameter, we consider the GAOR method with $\omega = r$, i.e., the GSOR method. When the Picard-HSS method is applied, the stopping criterion for its inner iterations is the one adopted in [13]. In Table 7, 'GSOR', 'NHSS' and 'PHSS' denote the GSOR method, the nonlinear HSS-like method and the Picard-HSS method, respectively; '–' denotes that the number of iteration steps exceeds 500, and an entry of the form IT(IT_inner) reports the outer and total inner iteration steps. The numerical results in Table 7 show that the GSOR method is more effective than the nonlinear HSS-like method and the Picard-HSS method.

    Table 7.  IT and CPU for Example 4.2 with Ω = 0.5I.

                              ω = 1      ω = 0.9    ω = 0.8
    n = 32²    GSOR  IT            13         17         23
                     CPU       0.2578     0.3110     0.4375
               NHSS  IT           112        346          –
                     CPU       0.4957     1.5667          –
               PHSS  IT       12(240)    26(520)          –
                     CPU       0.9727     2.2767          –
    n = 64²    GSOR  IT            13         18         24
                     CPU       3.0469     3.6997     5.2188
               NHSS  IT           170        471          –
                     CPU       3.3429     9.0310          –
               PHSS  IT       20(400)          –          –
                     CPU       2.8327          –          –
    n = 128²   GSOR  IT            13         18         24
                     CPU      40.0088    52.7953    78.6563
               NHSS  IT           286          –          –
                     CPU      36.3121          –          –
               PHSS  IT       30(600)          –          –
                     CPU      54.0620          –          –


    Overall, based on the numerical results, the GAOR method displays good performance among the five methods tested for solving the absolute value equations.

    In this paper, we have designed a generalized AOR (GAOR) method to solve the absolute value equations. Some convergence properties of the proposed GAOR method are obtained. Numerical experiments have been reported to verify the efficiency of the proposed method.

    The author thanks the five anonymous referees for their constructive suggestions and helpful comments, which led to significant improvement of the original manuscript of this paper. The author is partially supported by the National Natural Science Foundation of China (11961082).

    The author declares there is no conflict of interest.



    [1] J. Rohn, A theorem of the alternatives for the equation Ax+B|x|=b, Linear Multilinear A., 52 (2004), 421–426. https://doi.org/10.1080/0308108042000220686 doi: 10.1080/0308108042000220686
    [2] O. L. Mangasarian, Absolute value programming, Comput. Optim. Appl., 36 (2007), 43–53. https://doi.org/10.1007/s10589-006-0395-5 doi: 10.1007/s10589-006-0395-5
    [3] O. L. Mangasarian, Absolute value equations via concave minimization, Optim. Lett., 1 (2007), 1–8. https://doi.org/10.1007/s11590-006-0005-6 doi: 10.1007/s11590-006-0005-6
    [4] R. W. Cottle, G. B. Dantzig, Complementary pivot theory of mathematical programming, Linear Algebra Appl., 1 (1968), 103–125. https://doi.org/10.1016/0024-3795(68)90052-9 doi: 10.1016/0024-3795(68)90052-9
    [5] R. W. Cottle, J. S. Pang, R. E. Stone, The Linear Complementarity Problem, Academic, San Diego, 1992. https://doi.org/10.1007/978-94-015-8330-5_3
    [6] O. L. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett., 3 (2009), 101–108. https://doi.org/10.1007/s11590-008-0094-5 doi: 10.1007/s11590-008-0094-5
    [7] L. Caccetta, B. Qu, G. L. Zhou, A globally and quadratically convergent method for absolute value equations, Comput. Optim. Appl., 48 (2011), 45–58. https://doi.org/10.1007/s10589-009-9242-9 doi: 10.1007/s10589-009-9242-9
    [8] S. L. Hu, Z. H. Huang, Q. Zhang, A generalized Newton method for absolute value equations associated with second order cones, J. Comput. Appl. Math., 235 (2011), 1490–1501. https://doi.org/10.1016/j.cam.2010.08.036 doi: 10.1016/j.cam.2010.08.036
    [9] S. Ketabchi, H. Moosaei, An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side, Comput. Math. Appl., 64 (2012), 1882–1885. https://doi.org/10.1016/j.camwa.2012.03.015 doi: 10.1016/j.camwa.2012.03.015
    [10] C. Zhang, Q. J. Wei, Global and finite convergence of a generalized Newton method for absolute value equations, J. Optim. Theory. Appl., 143 (2009), 391–403. https://doi.org/10.1007/s10957-009-9557-9 doi: 10.1007/s10957-009-9557-9
    [11] J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett., 8 (2014), 35–44. https://doi.org/10.1007/s11590-012-0560-y doi: 10.1007/s11590-012-0560-y
    [12] Z. Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), 603–626. https://doi.org/10.1137/S0895479801395458 doi: 10.1137/S0895479801395458
    [13] M. Z. Zhu, Y. E. Qi, The nonlinear HSS-like iteration method for absolute value equations, IAENG Inter. J. Appl. Math., 48 (2018), 312–316.
    [14] D. K. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8 (2014), 2191–2202. https://doi.org/10.1007/s11590-014-0727-9 doi: 10.1007/s11590-014-0727-9
    [15] J. J. Zhang, The relaxed nonlinear PHSS-like iteration method for absolute value equations, Appl. Math. Comput., 265 (2015), 266–274. https://doi.org/10.1016/j.amc.2015.05.018 doi: 10.1016/j.amc.2015.05.018
    [16] C. X. Li, On the modified Hermitian and skew-Hermitian splitting iteration methods for a class of the weakly absolute value equations, J. Inequal. Appl., 2016 (2016), 260. https://doi.org/10.1186/s13660-016-1202-1 doi: 10.1186/s13660-016-1202-1
    [17] C. X. Li, A preconditioned AOR iterative method for the absolute value equations, Int. J. Comp. Methods, 14 (2017), 1750016. https://doi.org/10.1142/S0219876217500165 doi: 10.1142/S0219876217500165
    [18] X. M. Gu, T. Z. Huang, H. B. Li, S. F. Wang, L. Li, Two CSCS-based iteration methods for solving absolute value equations, J. Appl. Anal. Comput., 7 (2017), 1336–1356. https://doi.org/10.11948/2017082 doi: 10.11948/2017082
    [19] J. He, Y. M. Liu, J. K. Tian, Two numerical iteration methods for solving absolute value equations, ScienceAsia, 44 (2018), 40–45. https://doi.org/10.2306/scienceasia1513-1874.2018.44.040 doi: 10.2306/scienceasia1513-1874.2018.44.040
    [20] Z. S. Yu, L. Li, Y. Yuan, A modified multivariate spectral gradient algorithm for solving absolute value equations, Appl. Math. Lett., 121 (2021), 107461. https://doi.org/10.1016/j.aml.2021.107461 doi: 10.1016/j.aml.2021.107461
    [21] C. R. Chen, Y. N. Yang, D. M. Yu, D. R. Han, An inverse-free dynamical system for solving the absolute value equations, Appl. Numer. Math., 168 (2021), 170–181. https://doi.org/10.1016/j.apnum.2021.06.002 doi: 10.1016/j.apnum.2021.06.002
    [22] Y. F. Ke, C. F. Ma, On SOR-like iteration methods for solving weakly nonlinear systems, Optim. Method. Softw., 2020. https://doi.org/10.1080/10556788.2020.1755861
    [23] Y. F. Ke, The new iteration algorithm for absolute value equation, Appl. Math. Lett., 99 (2020), 105990. https://doi.org/10.1016/j.aml.2019.07.021 doi: 10.1016/j.aml.2019.07.021
    [24] Y. F. Ke, C. F. Ma, SOR-like iteration method for solving absolute value equations, Appl. Math. Comput., 311 (2017), 195–202. https://doi.org/10.1016/j.amc.2017.05.035 doi: 10.1016/j.amc.2017.05.035
    [25] V. Edalatpour, D. Hezari, D. K. Salkuyeh, A generalization of the Gauss-Seidel iteration method for solving absolute value equations, Appl. Math. Comput., 293 (2017), 156–167. https://doi.org/10.1016/j.amc.2016.08.020 doi: 10.1016/j.amc.2016.08.020
    [26] A. Berman, R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic, New York, 1979. https://doi.org/10.1016/C2013-0-10361-3
    [27] R. S. Varga, Matrix Iterative Analysis, Springer, Berlin, 2000. https://doi.org/10.2307/3620826
    [28] S. L. Wu, C. X. Li, Two-sweep modulus-based matrix splitting iteration methods for linear complementarity problems, J. Comput. Appl. Math., 302 (2016), 327–339. https://doi.org/10.1016/j.cam.2016.02.011 doi: 10.1016/j.cam.2016.02.011
    [29] J. L. Dong, M. Q. Jiang, A modified modulus method for symmetric positive-definite linear complementarity problems, Numer. Linear Algebra Appl., 16 (2009), 129–143. https://doi.org/10.1002/nla.609 doi: 10.1002/nla.609
    [30] C. M. Elliott, J. R. Ockendon, Weak and Variational Methods for Moving Boundary Problems, Pitman, London, 1982. https://doi.org/10.1137/1026023
    [31] Z. Z. Bai, Modulus-based matrix splitting iteration methods for linear complementarity problems, Numer. Linear Algebra Appl., 17 (2010), 917–933. https://doi.org/10.1002/nla.680 doi: 10.1002/nla.680
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
