Research article

Modified BAS iteration method for absolute value equation

  • In this paper, to improve the convergence speed of the block-diagonal and anti-block-diagonal splitting (BAS) iteration method, we design a modified BAS (MBAS) method to obtain the numerical solution of the absolute value equation. Theoretical analysis shows that under certain conditions the MBAS method is convergent. Numerical experiments show that the MBAS method is feasible.

    Citation: Cui-Xia Li, Long-Quan Yong. Modified BAS iteration method for absolute value equation[J]. AIMS Mathematics, 2022, 7(1): 606-616. doi: 10.3934/math.2022038




    To establish an efficient iteration method for computing the numerical solution of the absolute value equation (AVE)

    \[ Ax - |x| = b, \quad \text{with } A \in \mathbb{R}^{n\times n} \text{ and } b \in \mathbb{R}^{n}, \tag{1.1} \]

    the following equivalent two-by-two block nonlinear system of the AVE (1.1) is considered in [1]:

    \[ \begin{cases} Ax - y = b, \\ -|x| + y = 0, \end{cases} \]

    i.e.,

    \[ \bar{A}z = \begin{bmatrix} A & -I \\ -\hat{D} & I \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix} = \bar{b}, \tag{1.2} \]

    where $\hat{D} = D(x) = \mathrm{diag}(\mathrm{sign}(x))$, $x \in \mathbb{R}^{n}$. Then, based on the following splitting of the matrix $\bar{A}$,

    \[ \bar{A} = \begin{bmatrix} A & -I \\ -\hat{D} & I \end{bmatrix} = \begin{bmatrix} \alpha I + A & 0 \\ 0 & \alpha I + I \end{bmatrix} - \begin{bmatrix} \alpha I & I \\ \hat{D} & \alpha I \end{bmatrix}, \]

    where $\alpha$ is a given appropriate constant, the block-diagonal and anti-block-diagonal splitting (BAS) method for the nonlinear system (1.2) was designed and described as follows:

    The BAS method: Given initial vectors $x^{(0)} \in \mathbb{R}^{n}$ and $y^{(0)} \in \mathbb{R}^{n}$. For $k = 0, 1, \ldots$ until the iteration sequence $\{x^{(k)}, y^{(k)}\}$ converges, compute

    \[ \begin{cases} x^{(k+1)} = (\alpha I + A)^{-1}\big(\alpha x^{(k)} + y^{(k)} + b\big), \\[2pt] y^{(k+1)} = \dfrac{1}{1+\alpha}\big(\hat{D} x^{(k)} + \alpha y^{(k)}\big), \end{cases} \tag{1.3} \]

    or

    \[ \begin{bmatrix} \alpha I + A & 0 \\ 0 & \alpha I + I \end{bmatrix} \begin{bmatrix} x^{(k+1)} \\ y^{(k+1)} \end{bmatrix} = \begin{bmatrix} \alpha I & I \\ \hat{D} & \alpha I \end{bmatrix} \begin{bmatrix} x^{(k)} \\ y^{(k)} \end{bmatrix} + \begin{bmatrix} b \\ 0 \end{bmatrix}, \tag{1.4} \]

    where α is a given appropriate constant.
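    As an illustrative sketch of the iteration (1.3) (our own dense NumPy rendering, not the authors' MATLAB code; the function name and the direct `solve` call are our choices), one BAS run can be written as:

```python
import numpy as np

def bas_solve(A, b, alpha, tol=1e-6, max_it=500):
    """BAS iteration (1.3) for Ax - |x| = b; dense-matrix sketch."""
    n = A.shape[0]
    x, y = np.zeros(n), np.zeros(n)
    M = alpha * np.eye(n) + A                 # block (1,1) of the splitting
    nb = np.linalg.norm(b)
    for k in range(max_it):
        x_new = np.linalg.solve(M, alpha * x + y + b)
        y_new = (np.abs(x) + alpha * y) / (1.0 + alpha)  # D_hat * x = |x|, OLD x
        x, y = x_new, y_new
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * nb:
            return x, k + 1                   # solution and iteration count
    return x, max_it
```

    Note that the y-update uses the old iterate $x^{(k)}$, so both $x^{(k)}$ and $x^{(k+1)}$ must be kept in storage simultaneously.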

    In [1], some conditions guaranteeing the convergence of the BAS method were given. Numerical experiments showed that the convergence behavior of the BAS method is better than that of the generalized Newton (GN) method in [2] and the nonlinear HSS-like (NHSS) method [3,4].

    As is well known, the AVE (1.1) has attracted wide attention because it serves as a useful tool in a number of practical problems, including linear programming, quasi-complementarity problems, and bimatrix games; see [5,6,7,8,9]. Recently, many authors have developed feasible iteration methods for the numerical solution of the AVE (1.1); see [2,6,10,11,12,13,15,16,17,18,19]. In addition, the AVE (1.1) is also related to the basis pursuit problem and the computation of generalized inverses; see [23,24,25].

    In this paper, a new iteration method is designed to solve the AVE (1.1). Specifically, to improve the convergence speed of the BAS iteration method, a modified BAS (MBAS) iteration method is developed to solve the AVE (1.1). The convergence property of the MBAS method is studied under certain conditions. The feasibility of the MBAS method is verified by numerical experiments.

    The remainder of the paper is organized as follows. In Section 2, we establish the modified BAS (MBAS) method for the numerical solution of the AVE (1.1) and present some conditions that guarantee the convergence of the MBAS iteration method. In Section 3, the effectiveness of the MBAS method is verified by numerical experiments. In Section 4, we conclude the paper.

    In this section, we establish the MBAS iteration method for the numerical solution of the AVE (1.1). The idea is to use $x^{(k+1)}$ from the first equation in (1.3) in place of $x^{(k)}$ in the second equation of (1.3). The goal of this approach is twofold: it enhances the convergence behavior of the BAS iteration (1.3), and it reduces the storage requirements of the BAS iteration (1.3). Based on this idea, we obtain the modified BAS (MBAS) iteration method, described as follows.

    The MBAS iteration method: Given initial vectors $x^{(0)} \in \mathbb{R}^{n}$ and $y^{(0)} \in \mathbb{R}^{n}$. For $k = 0, 1, \ldots$ until the iteration sequence $\{x^{(k)}, y^{(k)}\}$ converges, compute

    \[ \begin{cases} x^{(k+1)} = (\alpha I + A)^{-1}\big(\alpha x^{(k)} + y^{(k)} + b\big), \\[2pt] y^{(k+1)} = \dfrac{1}{1+\alpha}\big(\hat{D} x^{(k+1)} + \alpha y^{(k)}\big), \end{cases} \tag{2.1} \]

    where α is a given nonnegative constant.
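    In the same dense-NumPy sketch style (again our own rendering, not the authors' code), the MBAS sweep (2.1) changes a single line relative to (1.3): the y-update consumes the freshly computed $x^{(k+1)}$, so $x$ can simply be overwritten in place:

```python
import numpy as np

def mbas_solve(A, b, alpha, tol=1e-6, max_it=500):
    """MBAS iteration (2.1): the y-update reuses the new x^(k+1)."""
    n = A.shape[0]
    x, y = np.zeros(n), np.zeros(n)
    M = alpha * np.eye(n) + A
    nb = np.linalg.norm(b)
    for k in range(max_it):
        x = np.linalg.solve(M, alpha * x + y + b)      # x^(k+1) overwrites x
        y = (np.abs(x) + alpha * y) / (1.0 + alpha)    # uses x^(k+1): the MBAS change
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * nb:
            return x, k + 1
    return x, max_it
```

    This Gauss-Seidel-style reuse is also why no copy of the previous iterate $x^{(k)}$ needs to be stored.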

    The system (1.2) has a two-by-two block structure resembling that of saddle point problems [26]. Moreover, from the form of the MBAS iteration (2.1), it can in a sense be viewed as a special case of the parameterized inexact Uzawa method; see [27] for more details.

    Lemma 2.1 is introduced to obtain some conditions to guarantee the convergence of the MBAS iteration method.

    Lemma 2.1. [14] Let $x^{2} - ax + b = 0$ with $a, b \in \mathbb{R}$, and let $\lambda$ be any root of this equation. Then $|\lambda| < 1$ if and only if $|b| < 1$ and $|a| < 1 + b$.
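    Lemma 2.1 is easy to spot-check numerically. The following sketch (the helper names are ours) compares the direct root-modulus test with the stated criterion over random coefficients:

```python
import random, cmath

def roots_inside_unit_disk(a, b):
    """True iff both roots of x^2 - a*x + b = 0 have modulus < 1."""
    d = cmath.sqrt(complex(a * a - 4 * b))
    return abs((a + d) / 2) < 1 and abs((a - d) / 2) < 1

def lemma_criterion(a, b):
    """The condition of Lemma 2.1: |b| < 1 and |a| < 1 + b."""
    return abs(b) < 1 and abs(a) < 1 + b

# The two tests agree on a large random sample of (a, b) pairs.
random.seed(1)
samples = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(10000)]
assert all(roots_inside_unit_disk(a, b) == lemma_criterion(a, b) for a, b in samples)
```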

    Let the iteration errors be

    \[ e_x^{k} = x^{*} - x^{(k)}, \qquad e_y^{k} = y^{*} - y^{(k)}, \]

    where $x^{*}$ and $y^{*}$ satisfy Eq (1.2). Then the following main result with respect to the MBAS iteration method (2.1) can be obtained. Throughout, $\|\cdot\|$ denotes the Euclidean norm.

    Theorem 2.2. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular. Denote

    \[ \beta = \big\|(\alpha I + A)^{-1}\big\|, \]

    where $\alpha$ is a given nonnegative constant. When

    \[ \alpha^{2}\beta < 1 + \alpha < \frac{1}{\beta}, \tag{2.2} \]

    the MBAS iteration method (2.1) is convergent.

    Proof. Combining (1.2) with (2.1), we have

    \[ \begin{cases} e_x^{k+1} = \alpha(\alpha I + A)^{-1} e_x^{k} + (\alpha I + A)^{-1} e_y^{k}, \\[2pt] e_y^{k+1} = \dfrac{1}{1+\alpha}\big(\hat{D} e_x^{k+1} + \alpha e_y^{k}\big). \end{cases} \tag{2.3} \]

    From (2.3), we can get

    \[ \begin{aligned} \|e_x^{k+1}\| &= \big\|\alpha(\alpha I + A)^{-1} e_x^{k} + (\alpha I + A)^{-1} e_y^{k}\big\| \\ &\le \alpha\big\|(\alpha I + A)^{-1}\big\|\,\|e_x^{k}\| + \big\|(\alpha I + A)^{-1}\big\|\,\|e_y^{k}\| = \alpha\beta\|e_x^{k}\| + \beta\|e_y^{k}\| \end{aligned} \]

    and

    \[ \begin{aligned} \|e_y^{k+1}\| &= \Big\|\frac{1}{1+\alpha}\big(\hat{D} e_x^{k+1} + \alpha e_y^{k}\big)\Big\| \le \frac{1}{1+\alpha}\big\|\hat{D} e_x^{k+1}\big\| + \frac{\alpha}{1+\alpha}\|e_y^{k}\| \\ &\le \frac{1}{1+\alpha}\|e_x^{k+1}\| + \frac{\alpha}{1+\alpha}\|e_y^{k}\| \le \frac{\alpha\beta}{1+\alpha}\|e_x^{k}\| + \frac{\alpha+\beta}{1+\alpha}\|e_y^{k}\|. \end{aligned} \]

    Further, let $z_k = (\|e_x^{k}\|, \|e_y^{k}\|)^{T}$. Then, entrywise,

    \[ z_{k+1} \le \begin{pmatrix} \alpha\beta & \beta \\ \frac{\alpha\beta}{1+\alpha} & \frac{\alpha+\beta}{1+\alpha} \end{pmatrix} z_{k} \le \begin{pmatrix} \alpha\beta & \beta \\ \frac{\alpha\beta}{1+\alpha} & \frac{\alpha+\beta}{1+\alpha} \end{pmatrix}^{2} z_{k-1} \le \cdots \le \begin{pmatrix} \alpha\beta & \beta \\ \frac{\alpha\beta}{1+\alpha} & \frac{\alpha+\beta}{1+\alpha} \end{pmatrix}^{k+1} z_{0}. \]

    Let

    \[ T = \begin{pmatrix} \alpha\beta & \beta \\ \frac{\alpha\beta}{1+\alpha} & \frac{\alpha+\beta}{1+\alpha} \end{pmatrix}. \]

    Clearly, if $\rho(T) < 1$, then $\lim_{k\to\infty} T^{k} = 0$. This implies

    \[ \lim_{k\to\infty} \|e_x^{k}\| = 0 \quad \text{and} \quad \lim_{k\to\infty} \|e_y^{k}\| = 0. \]

    In this way, the MBAS iteration method (2.1) converges to the solution of the AVE (1.1).

    Next, we just need a sufficient condition for $\rho(T) < 1$. Let $\lambda$ be an eigenvalue of the matrix $T$. Then $\lambda$ satisfies

    \[ \Big(\lambda - \alpha\beta\Big)\Big(\lambda - \frac{\alpha+\beta}{1+\alpha}\Big) - \frac{\alpha\beta^{2}}{1+\alpha} = 0, \]

    equivalently,

    \[ \lambda^{2} - \Big(\alpha\beta + \frac{\alpha+\beta}{1+\alpha}\Big)\lambda + \frac{\alpha^{2}\beta}{1+\alpha} = 0. \tag{2.4} \]

    Applying Lemma 2.1 to Eq (2.4), $|\lambda| < 1$ if and only if

    \[ \Big|\frac{\alpha^{2}\beta}{1+\alpha}\Big| < 1 \]

    and

    \[ \Big|\alpha\beta + \frac{\alpha+\beta}{1+\alpha}\Big| < 1 + \frac{\alpha^{2}\beta}{1+\alpha}. \]

    Therefore, $\rho(T) < 1$ when the condition (2.2) holds.
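    As a quick numerical sanity check of the proof (the sampled $(\alpha, \beta)$ pairs below are our own illustrative choices), one can confirm that $\rho(T) < 1$ for values satisfying condition (2.2):

```python
import numpy as np

def spectral_radius_T(alpha, beta):
    """Spectral radius of the error-propagation matrix T from the proof."""
    T = np.array([[alpha * beta, beta],
                  [alpha * beta / (1 + alpha), (alpha + beta) / (1 + alpha)]])
    return max(abs(np.linalg.eigvals(T)))

# Pairs satisfying (2.2): alpha^2 * beta < 1 + alpha < 1 / beta.
for alpha, beta in [(0.02, 0.3), (0.5, 0.6), (1.0, 0.4)]:
    assert alpha ** 2 * beta < 1 + alpha < 1 / beta
    assert spectral_radius_T(alpha, beta) < 1
```

    Since (2.2) is only a sufficient condition, pairs violating it may still yield $\rho(T) < 1$.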

    Theorem 2.3. Let $A \in \mathbb{R}^{n\times n}$ be symmetric positive definite, let $\lambda_{\min}$ denote the minimum eigenvalue of the matrix $A$, and let $\alpha \ge 0$. When

    \[ 1 < \lambda_{\min}, \]

    the MBAS iteration method (2.1) is convergent.

    Proof. By direct calculation, we have

    \[ \beta(1+\alpha) = (1+\alpha)\big\|(\alpha I + A)^{-1}\big\| = \frac{1+\alpha}{\alpha + \lambda_{\min}} \]

    and

    \[ \beta\alpha^{2} = \alpha^{2}\big\|(\alpha I + A)^{-1}\big\| = \frac{\alpha^{2}}{\alpha + \lambda_{\min}} < 1 + \alpha. \]

    Obviously, when $1 < \lambda_{\min}$, we have $\beta(1+\alpha) < 1$, so the condition (2.2) holds.
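    The identity $\beta = 1/(\alpha + \lambda_{\min})$ used above can be checked numerically for a concrete symmetric positive definite matrix (the test matrix and the $\alpha$ grid below are our illustrative choices):

```python
import numpy as np

# SPD test matrix A = tridiag(-1, 4, -1); its smallest eigenvalue exceeds 1.
n = 50
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam_min = np.linalg.eigvalsh(A)[0]
assert lam_min > 1

for alpha in [0.0, 0.02, 0.5, 2.0]:
    # Spectral norm of (alpha*I + A)^{-1} equals 1/(alpha + lam_min) for SPD A.
    beta = np.linalg.norm(np.linalg.inv(alpha * np.eye(n) + A), 2)
    assert abs(beta - 1.0 / (alpha + lam_min)) < 1e-10
    # Both parts of condition (2.2) follow from lam_min > 1:
    assert beta * (1 + alpha) < 1 and beta * alpha ** 2 < 1 + alpha
```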

    Corollary 2.4. Let $A \in \mathbb{R}^{n\times n}$ be nonsingular and $\alpha \ge 0$. When

    \[ \|A^{-1}\| < \frac{1}{1+2\alpha}, \tag{2.5} \]

    the MBAS iteration method (2.1) is convergent.

    Proof. Utilizing the Banach perturbation lemma in [20], we obtain

    \[ \beta(1+\alpha) \le \frac{(1+\alpha)\|A^{-1}\|}{1 - \alpha\|A^{-1}\|} \quad \text{and} \quad \beta\alpha^{2} \le \frac{\alpha^{2}\|A^{-1}\|}{1 - \alpha\|A^{-1}\|}. \]

    To make the condition (2.2) valid, we only need

    \[ \frac{(1+\alpha)\|A^{-1}\|}{1 - \alpha\|A^{-1}\|} < 1 \quad \text{and} \quad \frac{\alpha^{2}\|A^{-1}\|}{1 - \alpha\|A^{-1}\|} < 1 + \alpha. \]

    Simple computations show that both inequalities hold under the condition (2.5).

    Corollary 2.4 is the same as Corollary 1 in [1]. That is to say, the convergence conditions of Corollary 2.4 are also suitable for the BAS method.

    In this section, to verify the feasibility of the MBAS method (2.1) for computing the numerical solution of the AVE (1.1), we present some numerical experiments. In terms of the number of iteration steps (IT) and the CPU time in seconds (CPU), we compare the MBAS method (2.1) with the SOR-like method in [21], the BAS method (1.3), and the following new iteration (NI) method in [22]:

    \[ \begin{cases} x^{(k+1)} = A^{-1}\big(y^{(k)} + b\big), \\[2pt] y^{(k+1)} = \alpha\big|x^{(k+1)}\big| + (1-\alpha) y^{(k)}, \end{cases} \tag{3.1} \]

    where $\alpha > 0$.

    For all four tested methods, we take the zero vector as the initial guess, and all iterations are stopped when $\mathrm{RES} \le 10^{-6}$, where 'RES' denotes the relative residual error

    \[ \mathrm{RES} = \frac{\big\|Ax^{(k)} - |x^{(k)}| - b\big\|}{\|b\|}, \]

    or when the number of iterations exceeds 500. The right-hand side vector $b$ of the AVE (1.1) is chosen such that the vector $x^{*} = (x_1, x_2, \ldots, x_n)^{T}$ with

    \[ x_i = (-1)^{i}, \quad i = 1, 2, \ldots, n, \]

    is the exact solution. The iteration parameters used in the four methods are the experimentally optimal ones, which minimize the number of iteration steps. If the experimentally optimal parameters form an interval, they are further selected to minimize the CPU time. Accordingly, in the following tables, $\alpha_{\exp}$ denotes the experimentally optimal parameter for each of the four tested methods. All tests were run in MATLAB R2016b.

    Example 1. Let the AVE in (1.1) be given by

    \[ A = \mathrm{tridiag}(-1, 4, -1) \in \mathbb{R}^{n\times n}, \qquad x^{*} = (-1, 1, -1, 1, \ldots)^{T} \in \mathbb{R}^{n}, \]

    and $b = Ax^{*} - |x^{*}|$, as in [21].

    For Example 1, the numerical results (IT, CPU, and $\alpha_{\exp}$) of the four tested methods are listed in Table 1. Clearly, all four methods converge rapidly under the corresponding experimentally optimal parameters. Moreover, the results in Table 1 show that the number of iterations of the four tested methods is nearly insensitive to changes in the problem size. According to Table 1, the MBAS method has better computational efficiency, in terms of both iteration steps and CPU times, than the BAS method, the SOR-like method, and the NI method.

    Table 1.  Numerical comparison of Example 1.

    n                  3000     4000     5000     6000     7000
    MBAS      IT         11       11       11       11       11
              CPU    0.0019   0.0029   0.0031   0.0056   0.0071
              α_exp    0.02     0.02     0.02     0.02     0.02
    BAS       IT         21       21       21       21       21
              CPU    0.2632   0.4424   0.6071   0.8973   1.1976
              α_exp    0.02     0.02     0.02     0.01     0.02
    SOR-like  IT         16       16       16       17       17
              CPU    0.0042   0.0066   0.0079   0.0099   0.0117
              α_exp       1        1        1     1.02     1.01
    NI        IT         16       16       16       17       17
              CPU    0.0048   0.0066   0.0076   0.0088   0.0126
              α_exp       1        1        1     1.01     1.01


    Example 2. Let the AVE in (1.1) be given by $A = \bar{A} + \mu I$, where

    \[ \bar{A} = \begin{bmatrix} W & -I & 0 & \cdots & 0 \\ -I & W & -I & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & -I & W & -I \\ 0 & \cdots & 0 & -I & W \end{bmatrix} \in \mathbb{R}^{n\times n}, \]

    with

    \[ W = \mathrm{tridiag}(-1, 4, -1) = \begin{bmatrix} 4 & -1 & 0 & \cdots & 0 \\ -1 & 4 & -1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & -1 & 4 & -1 \\ 0 & \cdots & 0 & -1 & 4 \end{bmatrix} \in \mathbb{R}^{m\times m}, \]

    and

    \[ x^{*} = (-1, 1, -1, 1, \ldots)^{T} \in \mathbb{R}^{n} \quad \text{and} \quad b = Ax^{*} - |x^{*}|. \]

    Similar to Table 1, Tables 2 and 3 list, for different values of $\mu$, the numerical results (IT, CPU, and $\alpha_{\exp}$) of the four tested methods for Example 2. In our experiments, we take $\mu = 2, 4$. The results in Tables 2 and 3 further confirm the observation made from Table 1, i.e., the MBAS method outperforms the BAS method [1], the SOR-like method [21], and the NI method [22] in terms of computational efficiency under certain conditions.

    Table 2.  Numerical comparison of Example 2 with μ = 2.

    m                    60       70       80       90
    MBAS      IT         11       11       11       11
              CPU    0.2154   0.3107   0.4328   0.5799
              α_exp    0.01     0.03     0.01     0.02
    BAS       IT         21       21       21       21
              CPU    1.7489   3.2116   5.5408   8.9070
              α_exp    0.03     0.03     0.01     0.01
    SOR-like  IT         16       16       16       17
              CPU    0.3434   0.4983   0.7140   0.8760
              α_exp       1        1        1     1.02
    NI        IT         16       16       16       17
              CPU    0.3706   0.5125   0.8135   0.8948
              α_exp       1        1        1     1.01

    Table 3.  Numerical comparison of Example 2 with μ = 4.

    m                    60       70       80       90
    MBAS      IT          8        8        8        8
              CPU    0.1630   0.2465   0.3345   0.4238
              α_exp    0.01     0.01     0.01     0.01
    BAS       IT         15       15       15       15
              CPU    1.2469   2.2818   3.9165   6.3493
              α_exp    0.01     0.02     0.01     0.01
    SOR-like  IT         12       12       12       12
              CPU    0.2430   0.4105   0.4834   0.7020
              α_exp    1.01     1.01     1.01        1
    NI        IT         12       12       12       12
              CPU    0.2617   0.3308   0.5424   0.6945
              α_exp       1        1        1        1


    Example 3. [22] Consider the AVE in (1.1) with $A = \hat{A} + \mu I_{m^2}$ ($\mu \ge 0$), where

    \[ \hat{A} = \mathrm{Tridiag}(-I_m, S_m, -I_m) \in \mathbb{R}^{m^2 \times m^2}, \qquad S_m = \mathrm{tridiag}(-1, 4, -1) \in \mathbb{R}^{m\times m}, \]

    and

    \[ x^{*} = (-1, 1, -1, 1, \ldots)^{T} \in \mathbb{R}^{m^2} \quad \text{and} \quad b = Ax^{*} - |x^{*}|. \]
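    The block matrix $\hat{A} = \mathrm{Tridiag}(-I_m, S_m, -I_m)$ is conveniently assembled with Kronecker products; the sketch below (the helper name is our own) builds $A = \hat{A} + \mu I_{m^2}$ as a dense array:

```python
import numpy as np

def example3_matrix(m, mu):
    """A = Tridiag(-I, S, -I) + mu*I of order m^2, with S = tridiag(-1, 4, -1)."""
    S = 4 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    T = np.eye(m, k=1) + np.eye(m, k=-1)   # block sub/super-diagonal pattern
    return np.kron(np.eye(m), S) - np.kron(T, np.eye(m)) + mu * np.eye(m * m)
```

    For large $m$, a sparse assembly (e.g., with `scipy.sparse.kron`) would be preferable; the dense form is kept here only for readability.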

    For Example 3, we again investigate the efficiency of the above four methods. The corresponding numerical results are listed in Tables 4 and 5; the results in Table 4 correspond to $\mu = 4$ and those in Table 5 to $\mu = 8$.

    Table 4.  Numerical comparison of Example 3 with μ = 4.

    m                    30       50       70       90
    MBAS      IT          8        8        8        8
              CPU    0.0082   0.0223   0.0576   0.0967
              α_exp    0.02     0.01     0.01     0.01
    BAS       IT         15       15       15       15
              CPU    0.0156   0.0755   0.2196   0.5442
              α_exp    0.01     0.01     0.01     0.01
    SOR-like  IT         11       12       12       12
              CPU    0.0121   0.0345   0.1028   0.1775
              α_exp       1     1.01     1.01        1
    NI        IT         11       12       12       12
              CPU    0.0128   0.0340   0.1210   0.1736
              α_exp       1        1        1        1

    Table 5.  Numerical comparison of Example 3 with μ = 8.

    m                    30       50       70       90
    MBAS      IT          7        7        7        7
              CPU    0.0061   0.0178   0.0477   0.0792
              α_exp    0.06     0.06     0.03     0.04
    BAS       IT         13       13       13       13
              CPU    0.0129   0.0612   0.2009   0.5094
              α_exp   0.008    0.008    0.005    0.006
    SOR-like  IT          9        9        9        9
              CPU    0.0087   0.0253   0.1080   0.1155
              α_exp    1.01        1        1     1.02
    NI        IT          9        9        9        9
              CPU    0.0106   0.0265   0.0816   0.1277
              α_exp       1        1        1     1.01


    The numerical results in Tables 4 and 5 again confirm the observation made from Table 1. Under the corresponding experimentally optimal parameters, the four tested methods converge rapidly to the unique solution of Example 3. Among the four, the computational efficiency of the MBAS method is the best, compared with the BAS method [1], the SOR-like method [21], and the NI method [22], under certain conditions.

    In this paper, to accelerate the block-diagonal and anti-block-diagonal splitting (BAS) iteration method, we have presented a modified BAS (MBAS) iteration method for solving the absolute value equation (AVE). Some sufficient conditions guaranteeing the convergence of the MBAS method are given. Numerical experiments show that, compared with some existing numerical methods (such as the SOR-like method [21] and the NI method [22]), the MBAS method is feasible and efficient for the AVE under certain conditions.

    The authors would like to thank two anonymous referees for providing helpful suggestions, which greatly improved the paper. This research was supported by National Natural Science Foundation of China (No.11961082) and Key Project of Shaanxi Provincial Education Department under grant 20JS021.

    The authors declare that they have no competing interests.



    [1] C. X. Li, S. L. Wu, Block-diagonal and anti-block-diagonal splitting iteration method for absolute value equation, In: Simulation tools and techniques, 12th EAI International Conference, SIMUtools 2020, Guiyang, China, 369 (2021), 572–581. doi: 10.1007/978-3-030-72792-5_45.
    [2] O. L. Mangasarian, A generalized Newton method for absolute value equations, Optim. Lett., 3 (2009), 101–108. doi: 10.1007/s11590-008-0094-5.
    [3] Z. Z. Bai, X. Yang, On HSS-based iteration methods for weakly nonlinear systems, Appl. Numer. Math., 59 (2009), 2923–2936. doi: 10.1016/j.apnum.2009.06.005.
    [4] M. Z. Zhu, Y. E. Qi, The nonlinear HSS-like iteration method for absolute value equations, arXiv. Available from: https://arXiv.org/abs/1403.7013v4.
    [5] J. Rohn, A theorem of the alternatives for the equation Ax+B|x|=b, Linear Multilinear A., 52 (2004), 421–426. doi: 10.1080/0308108042000220686.
    [6] O. L. Mangasarian, Absolute value programming, Comput. Optim. Applic., 36 (2007), 43–53. doi: 10.1007/s10589-006-0395-5.
    [7] O. L. Mangasarian, R. R. Meyer, Absolute value equations, Linear Algebra Appl., 419 (2006), 359–367. doi: 10.1016/j.laa.2006.05.004.
    [8] S. L. Wu, P. Guo, Modulus-based matrix splitting algorithms for the quasi-complementarity problems, Appl. Numer. Math., 132 (2018), 127–137. doi: 10.1016/j.apnum.2018.05.017.
    [9] R. W. Cottle, J. S. Pang, R. E. Stone, The linear complementarity problem, Society for Industrial and Applied Mathematics, 2009. doi: 10.1137/1.9780898719000.
    [10] J. Rohn, An algorithm for solving the absolute value equations, Electron. J. Linear Algebra, 18 (2009), 589–599. doi: 10.13001/1081-3810.1332.
    [11] J. Rohn, V. Hooshyarbakhsh, R. Farhadsefat, An iterative method for solving absolute value equations and sufficient conditions for unique solvability, Optim. Lett., 8 (2014), 35–44. doi: 10.1007/s11590-012-0560-y.
    [12] D. K. Salkuyeh, The Picard-HSS iteration method for absolute value equations, Optim. Lett., 8 (2014), 2191–2202. doi: 10.1007/s11590-014-0727-9.
    [13] O. L. Mangasarian, A hybrid algorithm for solving the absolute value equation, Optim. Lett., 9 (2015), 1469–1474. doi: 10.1007/s11590-015-0893-4.
    [14] S. L. Wu, T. Z. Huang, X. L. Zhao, A modified SSOR iterative method for augmented systems, J. Comput. Appl. Math., 228 (2009), 424–433. doi: 10.1016/j.cam.2008.10.006.
    [15] C. X. Li, S. L. Wu, Modified SOR-like iteration method for absolute value equations, Math. Probl. Eng., 2020 (2020), 9231639. doi: 10.1155/2020/9231639.
    [16] A. X. Wang, H. J. Wang, Y. K. Deng, Interval algorithm for absolute value equations, Cent. Eur. J. Math., 9 (2011), 1171–1184. doi: 10.2478/s11533-011-0067-2.
    [17] S. Ketabchi, H. Moosaei, An efficient method for optimal correcting of absolute value equations by minimal changes in the right hand side, Comput. Math. Appl., 64 (2012), 1882–1885. doi: 10.1016/j.camwa.2012.03.015.
    [18] C. Zhang, Q. J. Wei, Global and finite convergence of a generalized Newton method for absolute value equations, J. Optim. Theory. Appl., 143 (2009), 391–403. doi: 10.1007/s10957-009-9557-9.
    [19] C. X. Li, A preconditioned AOR iterative method for the absolute value equations, Inter. J. Comput. Meth., 14 (2017), 1750016. doi: 10.1142/S0219876217500165.
    [20] J. M. Ortega, W. C. Rheinboldt, Iterative solution of nonlinear equations in several variables, Academic Press, 1970. doi: 10.1016/C2013-0-11263-9.
    [21] Y. F. Ke, C. F. Ma, SOR-like iteration method for solving absolute value equations, Appl. Math. Comput., 311 (2017), 195–202. doi: 10.1016/j.amc.2017.05.035.
    [22] Y. F. Ke, The new iteration algorithm for absolute value equation, Appl. Math. Lett., 99 (2020), 105990. doi: 10.1016/j.aml.2019.07.021.
    [23] T. Saha, S. Srivastava, S. Khare, P. S. Stanimirovic, M. D. Petkovic, An improved algorithm for basis pursuit problem and its applications, Appl. Math. Comput., 355 (2019), 385–398. doi: 10.1016/j.amc.2019.02.073.
    [24] M. D. Petkovic, Generalized Schultz iterative methods for the computation of outer inverses, Comput. Math. Appl., 67 (2014), 1837–1847. doi: 10.1016/j.camwa.2014.03.019.
    [25] M. D. Petkovic, M. A. Krstic, K. P. Rajkovic, Rapid generalized Schultz iterative methods for the computation of outer inverses, J. Comput. Appl. Math., 344 (2018), 572–584. doi: 10.1016/j.cam.2018.05.048.
    [26] M. Benzi, G. H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer., 14 (2005), 1–137. doi: 10.1017/S0962492904000212.
    [27] Z. Z. Bai, Z. Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl., 428 (2008), 2900–2932. doi: 10.1016/j.laa.2008.01.018.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
