1. Introduction
We focus on the following generalized absolute value equations (GAVE):
Ax − B|x| = b, (1.1)
where A,B∈Rn×n and b∈Rn. Here, the notation "|⋅|" denotes the componentwise absolute value. In particular, if B=I (or, more generally, B is invertible), the GAVE (1.1) reduces to the absolute value equations (AVE):
Ax − |x| = b. (1.2)
When B=0, the GAVE (1.1) reduces to the linear system Ax=b, which plays a significant role in scientific computing.
The main significance of the GAVE (1.1) and the AVE (1.2) is that many problems arising in different fields can be transformed into absolute value equations, such as linear programming, linear complementarity problems (LCP), quadratic programming, mixed integer programming, the bimatrix game, and so on; see, e.g., [1,2,3,4,5,6] for more details. For example, given a matrix Q∈Rn×n and a vector q∈Rn, the linear complementarity problem (LCP) is to find z∈Rn such that
z ≥ 0, w = Qz + q ≥ 0, zᵀw = 0. (1.3)
Recently, to obtain numerical solutions of the GAVE (1.1) and the AVE (1.2), various numerical methods have been proposed, such as the Newton-type methods [7,8,9,10,11,12], the sign accord method [13], the SOR-like iteration method [14,15,16], the neural network method [17,18] and so on.
The GAVE (1.1) can be transformed into a system of nonlinear equations, so several Newton-type methods have been developed to solve the GAVE (1.1) and the AVE (1.2). In [9], using the generalized Jacobian matrix and building on the classical Newton iteration, the generalized Newton (GN) method was established for solving the AVE (1.2). In [19], the GN method was extended to the GAVE (1.1). The Jacobian matrix of the GN method changes from iteration to iteration, which is costly and often infeasible for large-scale problems, especially when the Jacobian matrix is ill-conditioned. To overcome this shortcoming, a modified Newton-type (MN) method was proposed in [8]. Building on the MN method, SSMN methods were established in [20,21]. Subsequently, a more general Newton-based matrix splitting (NMS) method was established in [11], in which the matrix A is split as A=M−N. At every iteration, a linear system with coefficient matrix Ω+M must be solved, where Ω is a given positive semi-definite matrix. However, if Ω+M is ill-conditioned, solving this linear system may be costly or even impossible in practice. This motivates us to introduce a nonnegative real parameter θ≥0 into the MN iteration frame based on matrix splitting and to propose a new relaxed Newton-based matrix splitting (RNMS) method for solving the GAVE (1.1). By selecting different splittings, the new RNMS method reduces to the methods above, and a series of new methods can be generated.
The rest of this paper is organized as follows. In Section 2, we establish the new relaxed method for solving the GAVE (1.1). In Section 3, the associated convergence analysis is given. The numerical results are reported in Section 4, and some conclusions are given in the final section.
2. The relaxed Newton-based matrix splitting iteration method
Owing to the nonlinear term B|x|, solving the GAVE (1.1) can be recast as solving the nonlinear equation F(x) = Ax − B|x| − b = 0.
Let F(x)=H(x)+G(x), where H(x) is a differentiable function and G(x) is a Lipschitz continuous function, and let Ω be a positive semi-definite matrix. Setting
H(x) = Ax − b
and
G(x) = −B|x|,
we can substitute them into the MN iteration frame proposed in [22]. If the resulting coefficient matrix is invertible, the modified Newton-type (MN) iteration method for solving the GAVE (1.1) is established as follows:
Algorithm 2.1. [8] The modified Newton-type (MN) iteration method.
Step 1. Choose an arbitrary initial guess x(0)∈Rn and a stopping tolerance "RES", and let k:=0;
Step 2. For k=0,1,2,⋯, compute x(k+1)∈Rn by
x(k+1) = (Ω+A)⁻¹(Ωx(k) + B|x(k)| + b),
where Ω+A is nonsingular and Ω is a given positive semi-definite matrix;
Step 3. If ‖x(k+1)−x(k)‖ < RES, stop. Otherwise, set k:=k+1 and go to Step 2.
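Since the displayed formulas of the original are not reproduced here, the MN iteration can be sketched as follows. This is a minimal numpy sketch assuming the commonly cited MN update x(k+1) = (Ω+A)⁻¹(Ωx(k) + B|x(k)| + b); the function name and the small AVE test problem are illustrative.

```python
import numpy as np

def mn_iteration(A, B, b, Omega, x0=None, tol=1e-10, kmax=500):
    """Modified Newton-type (MN) sketch for A x - B|x| = b.

    Assumed update: x^{k+1} = (A + Omega)^{-1} (Omega x^k + B |x^k| + b).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    C = A + Omega                       # fixed coefficient matrix
    for k in range(kmax):
        x_new = np.linalg.solve(C, Omega @ x + B @ np.abs(x) + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, kmax

# Small AVE test problem (B = I) with a known solution x_star.
n = 4
A = 4.0 * np.eye(n)                     # contraction factor (1 + 1)/5 < 1
B = np.eye(n)
x_star = np.array([1.0, -2.0, 3.0, -4.0])
b = A @ x_star - B @ np.abs(x_star)     # right-hand side from the exact solution
x, its = mn_iteration(A, B, b, Omega=np.eye(n))
```

With this diagonal choice of A and Ω the convergence condition of Section 3 is comfortably satisfied, so the iterates contract toward x_star.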
Based on the MN method, Zhou [11] proposed the Newton-based matrix splitting (NMS) iteration method for solving the GAVE (1.1) by combining the Newton method with the matrix splitting technique.
Let A = M − N and set
H(x) = Mx − b
and
G(x) = −Nx − B|x|.
Zhou substituted them into the MN iteration frame proposed in [22] and put forward the Newton-based matrix splitting iteration method as follows:
Algorithm 2.2. [11] The Newton-based matrix splitting (NMS) iteration method.
Step 1. Choose an arbitrary initial guess x(0)∈Rn and a stopping tolerance "RES", and let k:=0;
Step 2. For k=0,1,2,⋯, compute x(k+1)∈Rn by
x(k+1) = (Ω+M)⁻¹((Ω+N)x(k) + B|x(k)| + b),
where Ω+M is nonsingular and Ω is a given positive semi-definite matrix;
Step 3. If ‖x(k+1)−x(k)‖ < RES, stop. Otherwise, set k:=k+1 and go to Step 2.
In this paper, we give a new method for solving the GAVE (1.1). Let A = M − N. Introducing a nonnegative relaxation parameter θ≥0 into the MN iteration frame proposed in [22], we can set
and
Therefore, we obtain the new RNMS method, described below:
Algorithm 2.3. The relaxed Newton-type matrix splitting (RNMS) iteration method.
Step 1. Choose an arbitrary initial guess x(0)∈Rn and a stopping tolerance "RES", and let k:=0;
Step 2. For k=0,1,2,⋯, compute x(k+1)∈Rn by
where Ω+θM is nonsingular, Ω is a given positive semi-definite matrix and θ≥0 is a nonnegative relaxation parameter;
Step 3. If ‖x(k+1)−x(k)‖ < RES, stop. Otherwise, set k:=k+1 and go to Step 2.
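The precise update (2.4) is displayed only as an equation in the original. The sketch below therefore *assumes* the realization (Ω+θM)x(k+1) = (Ω+θN)x(k) + θ(B|x(k)| + b), which is one form consistent with the requirement that θ=1 recovers the NMS update and that Ω+θM is the coefficient matrix; the function name and test problem are illustrative.

```python
import numpy as np

def rnms_iteration(M, N, B, b, Omega, theta, x0=None, tol=1e-10, kmax=500):
    """RNMS sketch for A x - B|x| = b with A = M - N.

    Assumed update (theta = 1 reduces to NMS):
        (Omega + theta*M) x^{k+1} = (Omega + theta*N) x^k + theta*(B |x^k| + b).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    C = Omega + theta * M                     # fixed coefficient matrix
    for k in range(kmax):
        rhs = (Omega + theta * N) @ x + theta * (B @ np.abs(x) + b)
        x_new = np.linalg.solve(C, rhs)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, kmax

# Jacobi splitting M = D, N = D - A of a diagonally dominant A, with B = I.
n = 4
A = np.array([[5.0, 1.0, 0.0, 0.0],
              [1.0, 5.0, 1.0, 0.0],
              [0.0, 1.0, 5.0, 1.0],
              [0.0, 0.0, 1.0, 5.0]])
D = np.diag(np.diag(A))
M, N = D, D - A                               # so that A = M - N
x_star = np.array([1.0, -1.0, 2.0, -2.0])
b = A @ x_star - np.abs(x_star)               # right-hand side from the exact solution
x, its = rnms_iteration(M, N, np.eye(n), b, Omega=np.eye(n), theta=0.9)
```

At a fixed point the assumed update gives θ(Ax − B|x| − b) = 0, so any limit with θ>0 solves the GAVE.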
Remark 2.1. Obviously, if we set θ=1, the RNMS method (2.4) reduces to the NMS method [11]. Since the Picard method [22,23] and the MN method [8] are special cases of the NMS method [11], they are also special cases of the new RNMS method (2.4).
Algorithm 2.3 gives a general framework of relaxed Newton-type matrix splitting iteration methods for solving the GAVE (1.1). If the matrix A is split as A = D−L−U, where D is the diagonal part of A, and −L and −U are the strictly lower and strictly upper triangular parts of A, respectively, the following series of relaxed Newton-type matrix splitting methods is generated:
(a) Letting M=A, N=Ω=0 and θ=1, Algorithm 2.3 reduces to the Picard method [22,23];
(b) Letting M=A, N=0 and θ=1, Algorithm 2.3 reduces to the MN method [8];
(c) Letting M=D and N=L+U, Algorithm 2.3 gives the relaxed Newton-based Jacobi (RNJ) method;
(d) Letting M=D−L and N=U, Algorithm 2.3 gives the relaxed Newton-based Gauss-Seidel (RNGS) method;
(e) Letting M=(1/α)D−L and N=(1/α−1)D+U, Algorithm 2.3 gives the relaxed Newton-type SOR (RNSOR) method;
(f) Letting M=(1/α)(D−βL) and N=(1/α)((1−α)D+(α−β)L+αU), Algorithm 2.3 gives the relaxed Newton-type AOR (RNAOR) method;
(g) Letting M=H and N=−S, where H=(1/2)(A+Aᵀ) and S=(1/2)(A−Aᵀ), Algorithm 2.3 gives the relaxed Newton-type Hermitian and skew-Hermitian (RNHSS) method.
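The splittings (c)–(g) above can be written down directly; the snippet below builds each pair (M, N) for a small sample matrix and checks the defining identity A = M − N. The matrix A and the parameters α, β are illustrative.

```python
import numpy as np

# A = D - L - U: D diagonal, -L / -U the strictly lower / upper triangular parts.
A = np.array([[4.0, -1.0, 2.0],
              [1.0, 5.0, -1.0],
              [-2.0, 1.0, 6.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)                 # note the sign convention A = D - L - U
U = -np.triu(A, 1)
H = 0.5 * (A + A.T)                 # symmetric (Hermitian) part
S = 0.5 * (A - A.T)                 # skew-symmetric part
alpha, beta = 1.2, 0.8              # sample SOR/AOR parameters

splittings = {
    "RNJ":   (D, L + U),
    "RNGS":  (D - L, U),
    "RNSOR": (D / alpha - L, (1.0 / alpha - 1.0) * D + U),
    "RNAOR": ((D - beta * L) / alpha,
              ((1.0 - alpha) * D + (alpha - beta) * L + alpha * U) / alpha),
    "RNHSS": (H, -S),
}
```

Each pair satisfies M − N = A, so all five variants fit the single RNMS framework of Algorithm 2.3.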
Remark 2.2. Letting M=(1/2)(A+Ω) and N=−(1/2)(A−Ω), we obtain the relaxed SSMN (RSSMN) method.
If θ=1, the RSSMN method reduces to the SSMN method [20,21].
3. Convergence property
In this section, we discuss the convergence of the RNMS method for solving the GAVE (1.1). First, two convergence conditions for the general case are given in terms of the matrix 2-norm. Second, some special convergence theorems for the RNMS method are provided for the cases where A is positive definite or an H+-matrix. Since the RNMS method reduces to the NMS method, convergence conditions for the latter follow immediately.
Before that, we briefly introduce the following notation and definitions. Let A=(aij)∈Rn×n. If aij≤0 for all i≠j, then A is a Z-matrix. Let 0 represent a zero matrix. If A is a Z-matrix and A⁻¹≥0, then A is a nonsingular M-matrix. If the comparison matrix ⟨A⟩ of A is a nonsingular M-matrix, then A is an H-matrix, where ⟨A⟩=(⟨a⟩ij) is defined by ⟨a⟩ii=|aii| and ⟨a⟩ij=−|aij| for i≠j.
If A is an H-matrix with positive diagonal entries, then A is an H+-matrix. If A is symmetric and satisfies xᵀAx>0 for all nonzero vectors x, then A is symmetric positive definite. Finally, |A|=(|aij|) and ρ(A) denote the entrywise absolute value and the spectral radius of A, respectively.
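The definitions above are directly checkable. The sketch below builds the comparison matrix ⟨A⟩ and tests the H+-matrix property by the characterization "⟨A⟩ is a Z-matrix with ⟨A⟩⁻¹ ≥ 0, and A has positive diagonal"; the helper names and sample matrix are illustrative.

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: <a>_ii = |a_ii|, <a>_ij = -|a_ij| for i != j."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_nonsingular_M_matrix(Z):
    """Z-matrix (nonpositive off-diagonal entries) whose inverse is entrywise >= 0."""
    off = Z - np.diag(np.diag(Z))
    if not np.all(off <= 0):
        return False
    try:
        return bool(np.all(np.linalg.inv(Z) >= -1e-12))  # tolerance for rounding
    except np.linalg.LinAlgError:
        return False

def is_H_plus_matrix(A):
    """H-matrix (<A> is a nonsingular M-matrix) with positive diagonal entries."""
    return bool(np.all(np.diag(A) > 0)) and is_nonsingular_M_matrix(comparison_matrix(A))

# A strictly diagonally dominant matrix with positive diagonal is an H+-matrix.
A = np.array([[4.0, -1.0, 0.0],
              [2.0, 5.0, -1.0],
              [0.0, -2.0, 6.0]])
```

Inverting ⟨A⟩ is only a small-scale diagnostic; for large matrices one would use an iterative criterion instead.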
3.1. General sufficient convergence property
Theorems 3.1 and 3.2 present general sufficient convergence conditions for Algorithm 2.3 when the relevant matrix is invertible.
Theorem 3.1. Let A=M−N be a splitting of A, A,B∈Rn×n, b∈Rn, let θ≥0 be a nonnegative relaxation parameter and let Ω be a positive semi-definite matrix such that Ω+θM is invertible.
If
then the iterative sequence {x(k)} generated by Algorithm 2.3 is convergent.
Proof. Suppose that the GAVE (1.1) has a solution x∗; then x∗ satisfies the following equation:
which is equivalent to
Subtracting (3.3) from (2.4) gives the following error expression:
Noticing that Ω+θM is invertible, we have
Taking the 2-norm in (3.5), we get
By condition (3.1), the sequence {x(k)} generated by Algorithm 2.3 is convergent.
Theorem 3.2. Let A=M−N be a splitting of the nonsingular matrix A, where M is also nonsingular, A,B∈Rn×n, b∈Rn, let θ>0 be a positive relaxation parameter and let Ω be a positive semi-definite matrix such that Ω+θM is nonsingular.
If
then the iterative sequence {x(k)} generated by Algorithm 2.3 is convergent.
Proof. By the Banach lemma [25], we have
Then the conclusion follows from Theorem 3.1.
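The Banach (perturbation) lemma invoked here states that if M is nonsingular and ‖M⁻¹N‖ < 1, then M−N is nonsingular with ‖(M−N)⁻¹‖ ≤ ‖M⁻¹‖/(1 − ‖M⁻¹N‖). A quick numerical illustration of this bound, with an arbitrary well-conditioned M and a small perturbation N:

```python
import numpy as np

# Illustration of the Banach perturbation lemma:
# if ||M^{-1} N||_2 < 1, then M - N is nonsingular and
#   ||(M - N)^{-1}||_2 <= ||M^{-1}||_2 / (1 - ||M^{-1} N||_2).
rng = np.random.default_rng(0)
M = 5.0 * np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # strongly nonsingular
N = 0.5 * rng.standard_normal((4, 4))                     # small perturbation

Minv = np.linalg.inv(M)
contraction = np.linalg.norm(Minv @ N, 2)                 # must be < 1
lhs = np.linalg.norm(np.linalg.inv(M - N), 2)             # actual inverse norm
rhs = np.linalg.norm(Minv, 2) / (1.0 - contraction)       # Banach upper bound
```

This is precisely the mechanism by which Theorem 3.2 converts a condition on M⁻¹N into the 2-norm condition of Theorem 3.1.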
When B=I, the GAVE (1.1) reduces to the AVE (1.2); therefore, the RNMS method is also applicable to the AVE (1.2).
Corollary 3.1. Let A=M−N be a splitting of A, A∈Rn×n, b∈Rn, let θ≥0 be a nonnegative relaxation parameter and let Ω be a positive semi-definite matrix such that Ω+θM is invertible.
If
then the iterative sequence {x(k)} generated by Algorithm 2.3 to solve the AVE (1.2) is convergent.
Corollary 3.2. Let A=M−N be a splitting of the nonsingular matrix A, where M is also nonsingular, A∈Rn×n, b∈Rn, let θ>0 be a positive relaxation parameter and let Ω be a positive semi-definite matrix such that Ω+θM is nonsingular.
If
then the iterative sequence {x(k)} generated by Algorithm 2.3 to solve the AVE (1.2) is convergent.
Based on Remark 2.2, we can draw the following corollaries.
Corollary 3.3. Let A,B∈Rn×n, b∈Rn, let θ≥0 be a nonnegative relaxation parameter and let Ω be a positive semi-definite matrix such that Ω+θA is invertible.
If
then the iterative sequence {x(k)} generated by the RSSMN method is convergent.
Corollary 3.4. Let A∈Rn×n be nonsingular, B∈Rn×n, b∈Rn, let θ>0 be a positive relaxation parameter and let Ω be a positive semi-definite matrix such that Ω+θA is nonsingular.
If
then the iterative sequence {x(k)} generated by the RSSMN method is convergent.
Remark 3.1. If we set θ=1, the RSSMN method reduces to the SSMN method [20,21]. Therefore, the convergence conditions of the SSMN method in [20,21] follow from Corollaries 3.3 and 3.4.
3.2. Special sufficient convergence property
If A is positive definite or an H+-matrix, we can obtain Theorems 3.3–3.5 for Ω=ωI with ω>0, respectively.
Theorem 3.3. Let A=H+S be positive definite, where H=(1/2)(A+Aᵀ) and S=(1/2)(A−Aᵀ), let θ>0 be a positive relaxation parameter and Ω=ωI with ω>0.
Further, let λmin and λmax denote the minimum and maximum eigenvalues of H, respectively, let σmax denote the largest modulus of the eigenvalues of S, and assume ‖B‖2=τ. If
then the iterative sequence {x(k)} generated by Algorithm 2.3 is convergent.
Proof. In fact, according to Theorem 3.1, we only need to show
By the assumptions, we have
It follows that, if
then the sequence {x(k)} generated by Algorithm 2.3 is convergent.
When B=I, a similar corollary can be obtained.
Corollary 3.5. Let A=H+S be positive definite, where H=(1/2)(A+Aᵀ) and S=(1/2)(A−Aᵀ), let θ>0 be a positive relaxation parameter and Ω=ωI with ω>0.
Further, let λmin and λmax denote the minimum and maximum eigenvalues of H, respectively, and let σmax denote the largest modulus of the eigenvalues of S. If
then the iterative sequence {x(k)} generated by Algorithm 2.3 to solve the AVE (1.2) is convergent.
Moreover, letting A=M−N with M symmetric positive definite, we obtain Theorem 3.4.
Theorem 3.4. Let A=M−N be a splitting of A with M symmetric positive definite, A,B∈Rn×n, b∈Rn, let θ>0 be a positive relaxation parameter and Ω=ωI with ω>0.
Further, let λmin and λmax denote the minimum and maximum eigenvalues of M, respectively, and assume ‖M⁻¹N‖2=σ and ‖B‖2=τ. If
then the iterative sequence {x(k)} generated by Algorithm 2.3 is convergent.
Proof. According to (3.6), we only need to show
This reduces to proving the following inequality:
Since
and
we only need
This implies that, under condition (3.18), the sequence {x(k)} generated by Algorithm 2.3 is convergent.
Theorem 3.5. Let A∈Rn×n be an H+-matrix, let A=M−N be an H-splitting, let θ>0 be a positive relaxation parameter and Ω=ωI with ω>0. If
then the iterative sequence {x(k)} generated by Algorithm 2.3 converges.
Proof. By the assumptions and [26], we have
Taking absolute values in (3.5), we get
Since
it follows that, if condition (3.24) is satisfied, then the sequence {x(k)} generated by Algorithm 2.3 converges.
Corollary 3.6. Let A∈Rn×n be an H+-matrix, let A=M−N be an H-splitting, let θ>0 be a positive relaxation parameter and Ω=ωI with ω>0. If
then the sequence {x(k)} generated by Algorithm 2.3 to solve the AVE (1.2) converges.
4. Numerical experiments
This section provides two numerical examples to compare the Picard method [23,24], the MN method [8], the NMS method [11] and the RNMS method in terms of the number of iteration steps (denoted "IT"), the CPU time (denoted "CPU") and the norm of the relative residual vector (denoted "RES"). Here, "RES" is defined as
All experiments were carried out in MATLAB R2020b. All numerical computations are started from the initial vector
The iteration is terminated once RES < 10−6 or the number of iteration steps exceeds kmax = 500. In the following tables, "−" indicates that the number of iteration steps exceeds 500 or the CPU time exceeds 500 s.
References [3,9] show that if 1 is not an eigenvalue of Q, the LCP (1.3) can be reduced to
(I+Q)x − (Q−I)|x| = q. (4.1)
Letting A=I+Q, B=Q−I and b=q, (4.1) takes the form of the GAVE (1.1). Accordingly, we give the following examples.
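This reduction can be verified numerically. The sketch below assumes the standard substitution z = |x| − x, w = |x| + x (so x = (w − z)/2): a complementary pair (z*, w*) of the LCP then yields a solution x* of the GAVE with A = I+Q, B = Q−I, b = q. The matrix Q and the pair (z*, w*) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
Q = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n))

# Build an LCP instance with a known complementary pair (z*, w*):
# z* >= 0, w* >= 0 and z*_i * w*_i = 0 componentwise.
z_star = np.array([1.0, 0.0, 2.0, 0.0, 3.0])
w_star = np.array([0.0, 1.5, 0.0, 2.5, 0.0])
q = w_star - Q @ z_star                 # choose q so that w* = Q z* + q

# Assumed substitution: z = |x| - x, w = |x| + x  =>  x = (w - z)/2.
x_star = 0.5 * (w_star - z_star)        # then |x*| = (w* + z*)/2
A, B, b = np.eye(n) + Q, Q - np.eye(n), q
residual = np.linalg.norm(A @ x_star - B @ np.abs(x_star) - b)
```

Expanding (I+Q)x − (Q−I)|x| with x = (w−z)/2 gives (x+|x|) + Q(x−|x|) = w − Qz = q, which is why the residual vanishes.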
Example 4.1. [2] Let n=m². We consider the LCP (1.3), where Q=Q̂+μI∈Rn×n and q=−Qz∗∈Rn with
and
is the unique solution of the LCP. In this situation, the unique solution of the GAVE (1.1) is
In [11], the NJ, NGS and NSOR methods are taken as representatives of the NMS method, and it is concluded that, compared with the Picard method and the MN method, the NMS method is computationally more efficient in some cases. In our experiments, we take Ω=Q̂, 0.5Q̂ and μ=4, −1. Since different values of α affect the performance of the NSOR method, the best experimental parameter, recorded as αexp, is the one minimizing the number of iteration steps of the NSOR method.
To demonstrate the superiority of the new RNMS method, we take Ω=θQ̂, 0.5θQ̂ and μ=4, −1 in our experiments. Obviously, the NMS method is the special case θ=1 of the new RNMS method. By computation, when μ=4 the matrices Q and A are both symmetric positive definite, so Cholesky decomposition can be used to assist the inversion. When μ=−1, the former is symmetric indefinite and the latter is symmetric positive definite, so LU decomposition can be used to assist the inversion.
Here, the RNJ, RNGS and RNSOR methods are used as representatives of the new RNMS method. Since the efficiency of the RNSOR method is affected by both α and θ, the best experimental parameters, recorded as αexp and θexp, are those minimizing the number of iteration steps of the RNSOR method. Similarly, different Ω affect the performance of the MN method, and the best experimental parameter is recorded as θexp. When the iteration step numbers are equal, the minimum value of RES is taken. Tables 1–4 list the numerical results (IT, CPU and RES) for the eight test methods: the Picard method, the MN method, the NJ method, the RNJ method, the NGS method, the RNGS method, the NSOR method and the RNSOR method.
From the experimental data in Tables 1–4, it is easy to see that as the problem size n increases, the number of iteration steps and the CPU time of all eight methods also increase. The RNSOR method requires the fewest iteration steps and the least CPU time, while the Picard method requires the most. Moreover, the RNJ, RNGS and RNSOR methods cost less than the NJ, NGS and NSOR methods, respectively.
Example 4.2. [2,27] Let n=m². We consider the LCP (1.3), where Q=Q̂+μI∈Rn×n and q=−Qz∗∈Rn with
and
is the unique solution of the LCP. In this situation, the unique solution of the GAVE (1.1) is
For comparison, we can take
and
in our experiments. Therefore, Ω in the NMS method equals Q̂ and 0.5Q̂.
For Example 4.2, we still compare the above eight methods: the Picard method, the MN method, the NJ method, the RNJ method, the NGS method, the RNGS method, the NSOR method and the RNSOR method.
By computation, when μ=4 the matrix A is a strictly diagonally dominant H+-matrix, and when μ=−1 it is an irreducible, weakly diagonally dominant H+-matrix. In the implementation, sparse Cholesky decomposition or sparse LU decomposition can be used to assist the inversion.
From Tables 5 and 6, we can draw the same conclusion as from Tables 1 and 2: the RNSOR method requires the fewest iteration steps and the least CPU time among the test methods. The numerical results in Tables 7 and 8 show that the Picard method and the MN method do not converge, while the NJ, NGS and NSOR methods converge but require many iteration steps.
The number of iteration steps of our new methods is much smaller than that of the previous methods. From these experimental data, we again conclude that the new relaxed Newton-type matrix splitting (RNMS) method is superior to the Picard method, the MN method and the NMS method for solving the GAVE (1.1).
5. Conclusions
In this paper, a class of relaxed Newton-type matrix splitting (RNMS) iteration methods has been established for solving the generalized absolute value equations by introducing a relaxation parameter and using the matrix splitting technique. Some sufficient convergence theorems for the RNMS method are given. Two numerical experiments arising from the LCP show that, compared with the existing Picard method [23,24], the MN method [8] and the NMS method [11], the new RNMS method is feasible and efficient under certain conditions.
Acknowledgments
The authors would like to thank the three anonymous referees for providing helpful suggestions, which greatly improved the paper. This research was supported by the Fundamental Research Funds for the Central Universities (N2224005-1).
Conflict of interest
The authors declare that they have no competing interests.