1. Introduction
The standard absolute value equation (AVE) is of the form
Ax−|x|=b, (1.1)
where A∈Rn×n is an M-matrix, |x|∈Rn denotes the componentwise absolute value of the vector x∈Rn, and b∈Rn. If |x| is replaced by B|x| in (1.1), the general AVE is obtained; see [24,30]. The AVE has received considerable attention recently because a wide variety of optimization problems can be cast in this form, e.g., linear programming, linear complementarity problems (LCPs) and convex quadratic programming [1,2,3,4,5,6,7,9,10,11,12,13,14,15,16,23,25,26].
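As a simple illustration (our own example), consider the scalar case n=1 with A=(2), which is trivially an M-matrix, and b=3:

```latex
% A worked scalar instance of (1.1): 2x - |x| = 3.
\[
  2x - |x| = 3 \quad\Longrightarrow\quad
  \begin{cases}
    x = 3,  & \text{if } x \ge 0 \quad (\text{consistent, since } 3 \ge 0),\\
    3x = 3, & \text{if } x < 0  \quad (\text{gives } x = 1, \text{ a contradiction}),
  \end{cases}
\]
% so x* = 3 is the unique solution of this instance.
```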
In recent years, a wide variety of procedures have been developed for solving the AVE (1.1). For example, Wu and Li [34] presented a special shift splitting technique for solving (1.1) and performed a convergence analysis. Ke and Ma [19] established an SOR-like method for the AVE (1.1). Chen et al. [8] modified the approach of [19] and analyzed the SOR-like method with optimal parameters. Fakharzadeh and Shams [12] recommended a mixed-type splitting iterative scheme for (1.1) and established its convergence properties. Hu and Huang [17] reformulated the AVE as an LCP without any additional assumption and demonstrated existence and convexity properties. Caccetta et al. [7] studied a smoothing Newton procedure for solving (1.1) and established that it is globally convergent when ‖A−1‖<1. Ning and Zhou [40] evaluated an improved adaptive differential evolution algorithm for AVEs that combines local and global search. Salkuyeh [41] addressed the Picard-HSS iteration method and provided sufficient conditions for its convergence, while Edalatpour et al. [11] offered a generalization of the Gauss-Seidel (GGS) method for the AVE (1.1). Cruz et al. [39] utilized an inexact non-smooth Newton approach and established its global linear convergence. Moosaei et al. [22] proposed two techniques for solving the AVE (1.1), namely, the Newton technique with the Armijo step and the homotopy perturbation technique. For more details, see [18,20,27,28,29,31,32,33,34,35,36,37,38,43].
In this article, inspired by the work in [11] and based on the GGS iteration method, two new generalized Gauss-Seidel (NGGS) iteration methods are presented for solving the AVE (1.1), and their convergence conditions are discussed in detail. Numerical tests demonstrate the efficacy of the newly developed methods.
The rest of the article is organized as follows: Section 2 discusses some preliminary information. Section 3 provides details of the proposed methods and their convergence conditions. Section 4 reports some tests indicating the efficiency of the offered methods. Finally, Section 5 draws some conclusions.
2. Preliminaries
Here, we provide some notation, the definition of an M-matrix, and some helpful lemmas for the later analysis.
Let A=(aij)∈Rn×n. We denote the absolute value and the infinity norm of A by |A|=(|aij|) and ∥A∥∞, respectively, and write Trd(a,b,c) for the tridiagonal matrix with constant subdiagonal, diagonal and superdiagonal entries a, b and c. The matrix A∈Rn×n is called a Z-matrix if aij≤0 for i≠j, and an M-matrix if it is a nonsingular Z-matrix with A−1≥0.
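For instance (our own illustrative example):

```latex
% A 2x2 example: the off-diagonal entries are nonpositive (Z-matrix),
% A is nonsingular, and A^{-1} >= 0 entrywise, so A is an M-matrix.
\[
  A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}, \qquad
  A^{-1} = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \ge 0 .
\]
```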
Lemma 2.1. [33] The matrix A=(aij)∈Rn×n is said to be strictly diagonally dominant when
|aii|>∑j≠i|aij|, i=1,2,…,n.
Furthermore, if A is strictly diagonally dominant, then A is invertible.
Lemma 2.2. [33] Consider z,x∈Rn. Then ‖|z|−|x|‖∞≤‖z−x‖∞.
3. Two NGGS methods
Here, we present the two NGGS methods, referred to as Method Ⅰ and Method Ⅱ.
3.1. Method Ⅰ for AVE
By rewriting the AVE (1.1) as
Ax=|x|+b
and multiplying both sides by λ, we obtain
λAx=λ|x|+λb. (3.1)
Let
A=DA−L−U, (3.2)
where DA, L and U are, respectively, the diagonal, the strictly lower-triangular and the strictly upper-triangular parts of A. Moreover, ˉΩ=Ψ(2−Ψ)(I−D)−1, where 0≤Ψ≤2 and I stands for the identity matrix. Using Eqs (3.1) and (3.2), Method Ⅰ is suggested as:
(ˉΩ+DA−λL)xi+1=λ|xi|+[(1−λ)(ˉΩ+DA)+λ(ˉΩ+U)]xi+λb. (3.3)
Solving Eq (3.3) for xi+1, the scheme can be written as
xi+1=λ(ˉΩ+DA−λL)−1|xi|+(ˉΩ+DA−λL)−1[((1−λ)(ˉΩ+DA)+λ(ˉΩ+U))xi+λb], (3.4)
where i=0,1,2,…, and 0<λ≤1. Note that if λ=1 and ˉΩ=0, then Eq (3.4) reduces to xi+1=(DA−L)−1(|xi|+Uxi+b), which is the GGS method [11].
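For concreteness, the splitting in Eq (3.2) and one sweep of Eq (3.4) can be formed in MATLAB as follows (a minimal sketch of ours; A, b, lam=λ and Obar=ˉΩ are assumed given):

```matlab
% Splitting A = DA - L - U and one sweep of Eq (3.4).
DA = diag(diag(A));                    % diagonal part of A
L  = -tril(A, -1);                     % strictly lower part, so A = DA - L - U
U  = -triu(A,  1);                     % strictly upper part
M  = Obar + DA - lam*L;                % coefficient matrix of Eq (3.4)
N  = (1 - lam)*(Obar + DA) + lam*(Obar + U);
x  = M \ (lam*abs(x) + N*x + lam*b);   % x_{i+1} computed from x_i
```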
In order to demonstrate the convergence of Method Ⅰ, we prove the following theorem.
Theorem 3.1. Assume that the diagonal elements of the matrix A are all greater than one and that DA−L−I is strictly diagonally dominant by rows. If condition (3.5) holds, then the sequence {xi} generated by Method Ⅰ converges to the unique solution x⋆ of the AVE (1.1).
Proof. We first show that ‖(ˉΩ+DA−λL)−1‖∞<1. Clearly, if we put L=0, then
If we consider that L≠0, we get
if we take
Multiplying both sides by (ˉΩ+DA)−1, we get
where Q=λ(ˉΩ+DA)−1L and t=(1,1,...,1)T. Also, we have
Thus, from Eqs (3.6) and (3.7), we get
So, we obtain
To show the uniqueness of the solution, let x⋆ and z⋆ be two distinct solutions of the AVE (1.1). Using Eq (3.4), we get
From Eqs (3.8) and (3.9), we get
Using Lemma 2.2 and Eq (3.5), the above equation can be written as
This is a contradiction, and therefore x⋆=z⋆.
In order to verify the convergence, let x⋆ be the unique solution of (1.1). Then, from Eq (3.8) and
we deduce
Based on Lemma 2.2 and the infinity norm, we get
and since ‖(ˉΩ+DA−λL)−1‖∞<1, it follows that
According to this inequality, Method Ⅰ converges when condition (3.5) is fulfilled.
3.2. Method Ⅱ for AVE
Here, we outline Method Ⅱ. Using Eqs (3.1) and (3.2), we formulate Method Ⅱ for solving the AVE (1.1) as follows:
In order to demonstrate the convergence of Method Ⅱ, we prove the following theorem.
Theorem 3.2. Assume that the diagonal elements of the matrix A are all greater than one and that DA−L−I is strictly diagonally dominant by rows. Then the sequence {xi} generated by Method Ⅱ converges to the unique solution x⋆ of the AVE (1.1).
Proof. The uniqueness result follows from Theorem 3.1. To demonstrate the convergence, suppose
By using Eqs (3.2) and (3.10), we get
Therefore, xi+1 solves the AVE (1.1).
4. Numerical tests
The purpose of this section is to present a number of numerical tests that demonstrate the effectiveness of the new approaches from three perspectives: the iteration steps (Itr), the computing time (Time), and the norm of the absolute residual vectors (RVS), where 'RVS' is defined by
All calculations are run on an Intel(R) Core(TM) i5-3337U with 4 GB RAM at 1.80 GHz, using MATLAB R2016a. Furthermore, the zero vector is the initial vector for Problem 4.1.
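Assuming 'RVS' denotes the Euclidean norm of the residual at the current iterate (the common choice in the AVE literature; an assumption here), it can be computed in MATLAB as:

```matlab
% Assumed residual measure: RVS = || A*x - |x| - b ||_2
% evaluated at the current iterate x.
rvs = norm(A*x - abs(x) - b, 2);
```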
Problem 4.1. Let
Calculate b=Ax⋆−|x⋆|, where x⋆=((−1)^1,(−1)^2,…,(−1)^n)T∈Rn. We compare the suggested methods with the optimal-parameter SOR-like algorithm given in [8] (written as SLM, using ω=0.825), the special shift splitting algorithm presented in [34] (written as SSM), and the GGS technique of [11]. Table 1 reports the results.
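As a minimal sketch (the test matrix A of Problem 4.1 is assumed to be already formed), x⋆ and b can be generated as:

```matlab
% Build x* = ((-1)^i), i = 1,...,n, and b = A*x* - |x*| for a given A.
n  = size(A, 1);
xs = (-1).^(1:n)';       % exact solution x*
b  = A*xs - abs(xs);     % right-hand side of the AVE
```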
Table 1 reports the results of all methods for various values of n. Clearly, Method Ⅰ is more effective than the SLM and SSM procedures, and the 'Time' of Method Ⅰ is less than that of the GGS method. Moreover, Method Ⅱ demonstrates high computational performance in terms of both 'Itr' and 'Time'.
Problem 4.2. Let A=M+I∈Rn×n and the vector b=Ax⋆−|x⋆|∈Rn, such that
where Hn=Trd(−1.5,4,−0.5)∈Rv×v and n=v2. Here, we use the same initial vector and stopping criteria as described in [12]. We compare the presented techniques with the AOR method [21], the mixed-type splitting (MT) iterative scheme [12] and the GGS method [11]. Table 2 provides the numerical data.
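As a sketch, reading Trd(a,b,c) as the tridiagonal matrix with constant subdiagonal a, diagonal b and superdiagonal c, the block Hn can be assembled in sparse form as follows (v is an illustrative size; the assembly of M from Hn follows [12]):

```matlab
% Tridiagonal block Hn = Trd(-1.5, 4, -0.5) of order v, in sparse form.
v  = 64;                                   % illustrative block size, n = v^2
e  = ones(v, 1);
Hn = spdiags([-1.5*e, 4*e, -0.5*e], -1:1, v, v);
```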
In Table 2, we present the numerical results of the AOR method, the MT method, the GGS method, Method Ⅰ and Method Ⅱ. We can conclude from these results that our proposed methods are more efficient than the AOR, MT and GGS techniques.
Problem 4.3. Let A=M+4I∈Rn×n and the vector b=Ax⋆−|x⋆|∈Rn, such that
where Hn=Trd(−1,4,−1)∈Rv×v, I∈Rv×v is the identity matrix and n=v2. In this problem, we use the same initial vector and stopping criteria as described in [12]. We compare the offered procedures with the AOR method [21], the mixed-type splitting (MT) iterative scheme [12], and the technique presented in [14] (denoted SISA). The computational results are listed in Table 3.
Table 3 reports the results of all methods for various values of n. Clearly, Method Ⅰ is more effective than the AOR and MT procedures, and the 'Time' of Method Ⅰ is less than that of the SISA method. Moreover, Method Ⅱ demonstrates high computational performance in terms of both 'Itr' and 'Time'.
Problem 4.4. Let
and b=Ax⋆−|x⋆|∈Rn. We use the same initial vector and stopping criteria described in [14]. We compare the novel approaches with the technique offered in [14] (denoted SISA, using ω=1.0455), the SOR-like method proposed in [19] (written as SOR) and the modulus-based SOR method presented in [42] (written as MSOR). The results are listed in Table 4.
It is clear from Table 4 that all the tested methods solve the AVE (1.1) quickly. We observe that the 'Itr' and 'Time' of the recommended methods are smaller than those of the existing techniques. These results indicate that our suggested methods for AVEs are feasible and highly effective.
5. Conclusions
In this work, two NGGS methods (Method Ⅰ and Method Ⅱ) have been presented for solving AVEs. The convergence properties of the methods were examined, and a number of experiments were conducted to establish the effectiveness of the new approaches.
The GGS technique has been successfully extended by two additional parameters when A is an M-matrix. Cases with more general coefficient matrices are the next issue to be considered.
Appendix
The following is an explanation of how our proposed techniques can be implemented. From Ax−|x|=b, we have
x=A−1(|x|+b).
Thus, we can approximate xi+1 as follows,
xi+1=A−1(|xi|+b), i=0,1,2,…
This process is known as the Picard technique [31]. Now, we examine the procedure for Method Ⅰ.
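A minimal MATLAB sketch of this Picard technique (our illustration; tol and maxit are illustrative choices, and the zero starting vector matches Section 4):

```matlab
function x = picard_ave(A, b, tol, maxit)
% Picard technique for the AVE: x_{i+1} = A \ (|x_i| + b).
% A and b come from (1.1); tol and maxit are illustrative choices.
    x = zeros(size(b));                % zero starting vector
    for i = 1:maxit
        xnew = A \ (abs(x) + b);       % one Picard step
        if norm(xnew - x, inf) < tol   % stop when the iterates stagnate
            x = xnew;
            return
        end
        x = xnew;
    end
end
```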
Algorithm for Method Ⅰ. (1) Choose the parameters and a starting vector x0∈Rn, and set i=0.
(2) Compute yi≈A−1(|xi|+b) (the Picard approximation of xi+1).
(3) Calculate xi+1=λ(ˉΩ+DA−λL)−1|yi|+(ˉΩ+DA−λL)−1[((1−λ)(ˉΩ+DA)+λ(ˉΩ+U))xi+λb].
(4) If xi+1=xi, then stop. Otherwise, set i=i+1 and return to step (2).
For Method Ⅱ, follow the same steps.
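A minimal MATLAB sketch of the algorithm above (our illustration; A, b, λ and ˉΩ are assumed given, and the exact test xi+1=xi of step (4) is replaced by a small tolerance for floating-point arithmetic):

```matlab
function x = nggs_method1(A, b, lam, Obar, tol, maxit)
% Sketch of Method I following steps (1)-(4); Obar denotes the matrix
% \bar{Omega} and lam the parameter lambda from Eq (3.4).
    n  = size(A, 1);
    DA = diag(diag(A));
    L  = -tril(A, -1);                            % A = DA - L - U
    U  = -triu(A,  1);
    M  = Obar + DA - lam*L;                       % coefficient matrix of Eq (3.4)
    N  = (1 - lam)*(Obar + DA) + lam*(Obar + U);  % matrix multiplying x_i
    x  = zeros(n, 1);                             % step (1): starting vector
    for i = 1:maxit
        y    = A \ (abs(x) + b);                  % step (2): Picard step y_i
        xnew = M \ (lam*abs(y) + N*x + lam*b);    % step (3): update with |y_i|
        if norm(xnew - x, inf) < tol              % step (4): stopping test
            x = xnew;
            return
        end
        x = xnew;
    end
end
```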
Conflict of interest
The authors declare there is no conflict of interest.