1. Introduction
To establish an efficient iteration method for obtaining the numerical solution of the absolute value equation (AVE)
the following equivalent two-by-two block nonlinear equation of the AVE (1.1) was considered in [1],
i.e.,
where D̂=D(x)=diag(sign(x)), x∈Rn. Afterwards, using the following matrix splitting of the matrix Ā, that is,
where α is a given appropriate constant, the block-diagonal and anti-block-diagonal splitting (BAS) method for the nonlinear equation (1.2) was designed and described as follows:
The BAS method: Given initial vectors x(0)∈Rn and y(0)∈Rn, for k=0,1,⋯ until the iteration sequence {x(k),y(k)} is convergent, calculate
or
where α is a given appropriate constant.
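For the reader's convenience, we note that the AVE (1.1) is of the form Ax−|x|=b with A∈Rn×n and b∈Rn, and that a block reformulation in the spirit of (1.2), obtained by introducing the auxiliary vector y=|x|=D̂x, can be written as

\[
\begin{pmatrix} A & -I \\ -\hat{D} & I \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} b \\ 0 \end{pmatrix},
\qquad \hat{D}=D(x)=\operatorname{diag}(\operatorname{sign}(x)).
\]

This display is included only as a reminder of the block structure; the precise forms of (1.1)–(1.3) are those given in the corresponding displayed equations above.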
In [1], some conditions were given to guarantee the convergence of the BAS method. Numerical experiments showed that the convergence behavior of the BAS method is better than that of the generalized Newton (GN) method in [2] and the nonlinear HSS-like (NHSS) method [3,4].
As is well known, the AVE (1.1) has attracted wide attention because it serves as a very useful tool in a number of practical problems, including linear programming, quasi-complementarity problems, and bimatrix games; see [5,6,7,8,9]. Recently, many authors have developed feasible iteration methods to obtain the numerical solution of the AVE (1.1); see [2,6,10,11,12,13,15,16,17,18,19]. In addition, the AVE (1.1) is also related to the basis pursuit problem and the computation of generalized inverses; see [23,24,25].
In this paper, a new iteration method is designed to solve the AVE (1.1). Specifically, to improve the convergence speed of the BAS iteration method, a modified BAS (MBAS) iteration method is developed to solve the AVE (1.1). The convergence property of the MBAS method is studied under certain conditions. The feasibility of the MBAS method is verified by numerical experiments.
The remainder of the paper is organized as follows. In Section 2, we establish the modified BAS (MBAS) method for obtaining the numerical solution of the AVE (1.1) and present some conditions that guarantee the convergence of the MBAS iteration method. In Section 3, the effectiveness of the MBAS method is verified by numerical experiments. In Section 4, we conclude the paper.
2. The MBAS method
In this section, we establish the MBAS iteration method for obtaining the numerical solution of the AVE (1.1). For this purpose, our approach is to replace x(k) in the second equation of (1.3) with the newly computed x(k+1) from the first equation of (1.3). Clearly, the goal of this approach is not only to enhance the convergence behavior of the BAS iteration method (1.3), but also to reduce its storage requirements. Based on this approach, we obtain the modified BAS (MBAS) iteration method, described as follows; a schematic sketch of this updating strategy is given after the method statement.
The MBAS iteration method: Given initial vectors x(0)∈Rn and y(0)∈Rn, for k=0,1,⋯ until the iteration sequence {x(k),y(k)} is convergent, calculate
where α is a given nonnegative constant.
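To make the updating strategy visible at a glance, the following Python sketch contrasts a BAS-style update, in which both blocks are advanced from (x(k),y(k)), with an MBAS-style update, in which the freshly computed x(k+1) is used immediately in the y-block. The concrete update formulas (solving Ax=y+b for the x-block and relaxing the y-block toward |x| with weight 1/α), together with the test matrix, are illustrative assumptions and are not taken from (1.3) or (2.1); only the reuse of x(k+1) reflects the modification introduced here.

import numpy as np

def two_block_iteration(A, b, alpha=1.0, tol=1e-6, max_it=500, use_new_x=True):
    # Illustrative two-block iteration for the AVE A x - |x| = b.
    # The update formulas below are ASSUMED for illustration only; the point
    # shown is the MBAS idea of reusing the freshly computed x-block
    # (use_new_x=True) instead of the previous one (use_new_x=False).
    n = b.size
    x = np.zeros(n)
    y = np.zeros(n)
    norm_b = np.linalg.norm(b)
    for k in range(1, max_it + 1):
        x_new = np.linalg.solve(A, y + b)       # x-block: solve A x = y + b
        x_used = x_new if use_new_x else x      # MBAS-style vs BAS-style choice
        y = y + (np.abs(x_used) - y) / alpha    # damped y-block update toward |x|
        x = x_new
        res = np.linalg.norm(A @ x - np.abs(x) - b) / norm_b
        if res <= tol:
            return x, k
    return x, max_it

if __name__ == "__main__":
    # Small test: A = tridiag(-1, 8, -1) has all eigenvalues in [6, 10], so the
    # AVE has a unique solution and the assumed iteration converges quickly.
    n = 100
    A = 8.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    x_star = np.linspace(-1.0, 1.0, n)
    b = A @ x_star - np.abs(x_star)
    for flag in (False, True):                  # BAS-style first, then MBAS-style
        x, it = two_block_iteration(A, b, alpha=1.0, use_new_x=flag)
        print(flag, it, np.linalg.norm(x - x_star))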
The system (1.2) has a two-by-two block structure and resembles a saddle point problem [26]. Further, from the form of the MBAS iteration method (2.1), it can, in a sense, also be viewed as a special case of the parameterized inexact Uzawa method; see [27] for more details.
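To make this analogy concrete, recall the classical Uzawa iteration for a generic saddle point system (the symbols B, E, f, g, and τ below are generic and are not notation from this paper):

\[
\begin{pmatrix} B & E \\ E^{T} & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad
\begin{cases}
u^{(k+1)} = B^{-1}\bigl(f - E\,p^{(k)}\bigr),\\
p^{(k+1)} = p^{(k)} + \tau\bigl(E^{T}u^{(k+1)} - g\bigr),
\end{cases}
\]

where τ>0 is a relaxation parameter; exactly as in the MBAS method, the second block is advanced with the freshly computed first block.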
Lemma 2.1 below is introduced to obtain some conditions that guarantee the convergence of the MBAS iteration method.
Lemma 2.1. [14] Let x²−ax+b=0 with a,b∈R, and let λ be any root of this equation. Then |λ|<1 if and only if |b|<1 and |a|<1+b.
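As a quick numerical check of Lemma 2.1, take a=1 and b=1/2; then

\[
|b|=\tfrac{1}{2}<1,
\qquad
|a|=1<\tfrac{3}{2}=1+b,
\]

and indeed the roots of x²−x+1/2=0 are (1±i)/2, each of modulus √2/2≈0.707<1.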
Let the iteration errors be
where x∗ and y∗ satisfy Eq (1.2). Then the following main result with respect to the MBAS iteration method (2.1) can be obtained, where ‖⋅‖ denotes the Euclidean norm.
Theorem 2.2. Let A∈Rn×n be nonsingular. Denote
where α is a given nonnegative constant. When
the MBAS iteration method (2.1) is convergent.
Proof. Combining (1.2) with (2.1), we have
From (2.3), we can get
and
Further, let $z_k=e_k=(e_x^k, e_y^k)^T$; then
Let
Clearly, if ρ(T)<1, then limk→∞Tk=0. This implies
In this way, the MBAS iteration method (2.1) can converge to the solution of the AVE (1.1).
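The argument here follows the standard pattern: assuming, as is usual for such proofs, that the error relations above yield a componentwise bound with a nonnegative 2×2 matrix T (whose precise entries are those displayed above), one obtains

\[
\begin{pmatrix} \|e_x^{k+1}\| \\ \|e_y^{k+1}\| \end{pmatrix}
\le T
\begin{pmatrix} \|e_x^{k}\| \\ \|e_y^{k}\| \end{pmatrix}
\;\Longrightarrow\;
\begin{pmatrix} \|e_x^{k}\| \\ \|e_y^{k}\| \end{pmatrix}
\le T^{k}
\begin{pmatrix} \|e_x^{0}\| \\ \|e_y^{0}\| \end{pmatrix}
\longrightarrow 0
\quad (k\to\infty),
\]

where the inequalities hold componentwise and the limit follows from ρ(T)<1.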
Next, we only need to derive a sufficient condition for ρ(T)<1. Let λ be an eigenvalue of the matrix T. Then λ satisfies
equivalently,
Using Lemma 2.1 for Eq (2.4), |λ|<1 if and only if
and
Therefore, ρ(T)<1 when the condition (2.2) holds.
Theorem 2.3. Let A∈Rn×n be symmetric positive definite, let λmin denote the minimum eigenvalue of the matrix A, and let α≥0. When
the MBAS iteration method (2.1) is convergent.
Proof. By calculation, we have
and
Obviously, when λmin>1, we have β(1+α)<1.
Corollary 2.4. Let A∈Rn×n be nonsingular and α≥0. When
the MBAS iteration method (2.1) is convergent.
Proof. Utilizing the Banach perturbation lemma in [20], we obtain
To make the condition (2.2) valid, we only need
By simple computations, it is easy to see that the result of Corollary 2.4 holds under the condition (2.5).
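For completeness, the Banach perturbation lemma invoked above states, in its standard form, that if M is nonsingular and ‖M⁻¹‖‖N‖<1, then M+N is nonsingular and

\[
\|(M+N)^{-1}\| \le \frac{\|M^{-1}\|}{1-\|M^{-1}\|\,\|N\|};
\]

here M and N are generic matrices, not notation from this paper.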
Corollary 2.4 is the same as Corollary 1 in [1]. That is to say, the convergence condition of Corollary 2.4 is also suitable for the BAS method.
3. Numerical experiments
In this section, to examine the feasibility of the MBAS method (2.1) for obtaining the numerical solution of the AVE (1.1), we report some numerical experiments. In terms of the number of iteration steps (IT) and the CPU time in seconds (CPU), we compare the MBAS method (2.1) with the SOR-like method in [21], the BAS method (1.3), and the following new iteration (NI) method in [22]:
where α>0.
For the four tested methods, the zero vector is chosen as the initial vector, and all iterations are stopped when RES≤10−6, where 'RES' denotes the relative residual error and is of the form
or when the number of iterations exceeds 500. The right-hand side vector b of the AVE (1.1) is chosen such that the vector x=(x1,x2,…)T with
is the exact solution. The iteration parameters used in the above four iteration methods are the experimentally optimal ones, which minimize the number of iteration steps. If the experimentally optimal iteration parameters form an interval, they are further selected according to the least CPU time. Naturally, in the following tables, αexp denotes the experimentally optimal parameter for each of the four tested methods. All tests were carried out in MATLAB R2016b.
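For concreteness, the stopping test can be implemented as in the short Python sketch below; the residual formula RES = ‖Ax(k)−|x(k)|−b‖/‖b‖ used here is the standard choice for the AVE (1.1) and is assumed to coincide with the definition displayed above.

import numpy as np

def relative_residual(A, x, b):
    # RES = ||A x - |x| - b|| / ||b|| in the Euclidean norm (assumed to match
    # the relative residual error 'RES' defined above).
    return np.linalg.norm(A @ x - np.abs(x) - b) / np.linalg.norm(b)

# Stopping rule used for all four tested methods:
# stop when relative_residual(A, x, b) <= 1e-6 or after 500 iterations.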
Example 1. Let the AVE in (1.1) be
and b=Ax∗−|x∗|, as in [21].
For Example 1, the numerical results (including IT, CPU, and αexp) for the four tested methods are listed in Table 1. Clearly, all four methods converge rapidly under the corresponding experimentally optimal parameters. Further, on the basis of the numerical results in Table 1, we find that the number of iterations of the four tested methods is nearly insensitive to changes in the mesh size. According to the numerical results in Table 1, the MBAS method has better computational efficiency, in terms of both iteration steps and CPU times, than the BAS method, the SOR-like method, and the NI method.
Example 2. Let the AVE in (1.1) be A=Ā+μI, where
with
and
Similar to Table 1, for Example 2, Tables 2 and 3 list the numerical results (including IT, CPU, and αexp) of the four tested methods for different values of μ. In our numerical experiments, we take μ=2,4. The numerical results in Tables 2 and 3 further confirm the observation from Table 1, i.e., the MBAS method outperforms the BAS method [1], the SOR-like method [21], and the NI method [22] in terms of computational efficiency under certain conditions.
Example 3. [22] Consider the AVE in (1.1), where the matrix A=Â+μIm² (μ≥0) with
and
For Example 3, we again investigate the efficiency of the above four methods. The corresponding numerical results are listed in Tables 4 and 5: the results in Table 4 correspond to μ=4, and those in Table 5 correspond to μ=8.
The numerical results in Tables 4 and 5 again confirm the observation from Table 1. Under the corresponding experimentally optimal parameters, the four tested methods converge rapidly to the unique solution of Example 3. Among the four tested methods, the MBAS method achieves the best computational efficiency, compared with the BAS method [1], the SOR-like method [21], and the NI method [22], under certain conditions.
4. Conclusions
In this paper, to accelerate the block-diagonal and anti-block-diagonal splitting (BAS) iteration method, we have presented a modified BAS (MBAS) iteration method for solving the absolute value equation (AVE). We have given some sufficient conditions that guarantee the convergence of the MBAS method. Numerical experiments demonstrate that, compared with some existing numerical methods (such as the SOR-like method [21] and the NI method [22]), the MBAS method is feasible and efficient for the AVE under certain conditions.
Acknowledgments
The authors would like to thank two anonymous referees for providing helpful suggestions, which greatly improved the paper. This research was supported by National Natural Science Foundation of China (No.11961082) and Key Project of Shaanxi Provincial Education Department under grant 20JS021.
Conflict of interest
The authors declare that they have no competing interests.