Picard-type methods are efficient for solving the absolute value equation Ax − |x| = b. To further improve the performance of the Picard iteration method, a new inexact Picard iteration method, named the Picard-SHSS iteration method, is proposed to solve the absolute value equation. A sufficient condition for the convergence of the proposed method is given. A numerical example is presented to demonstrate the effectiveness of the new method.
1.
Introduction
The absolute value equation (AVE) of the form

Ax − |x| = b,  (1.1)

where A ∈ R^{n×n}, x, b ∈ R^n, and |x| denotes the component-wise absolute value of the vector x, i.e., |x| = (|x_1|, ⋯, |x_n|)^T, arises in a variety of optimization problems, e.g., the linear complementarity problem, linear programming, and convex quadratic programming; see, for example, [7,17,18,20,21]. It is a special case of the generalized absolute value equation of the type

Ax + B|x| = b,  (1.2)

where B ∈ R^{n×n}. The generalized absolute value equation (1.2) was introduced in [21] and investigated in a more general context in [17,18,20].
Conditions for the unique solvability of AVE (1.1) and the generalized absolute value equation (1.2) have been given in [10,12,13,14,18,21,22,25]. For example, Mangasarian and Meyer [18] showed that AVE (1.1) has a unique solution for any right-hand side b when the smallest singular value of A is greater than one. When AVE (1.1) has a unique solution, how to find it is a major research topic. In this study, we consider iteration methods for solving AVE (1.1). In recent years, a large variety of methods for solving AVE (1.1) have appeared in the literature [1,4,5,6,8,15,16,19,22,24,25]. Among these methods, Picard-type methods have attracted particular attention. Rohn et al. [22] proposed the Picard iteration method

x^{(k+1)} = A^{-1}(|x^{(k)}| + b), k = 0, 1, 2, ⋯,  (1.3)
where x^{(0)} is the initial guess. From (1.3), we can see that a linear system with the constant coefficient matrix A needs to be solved in each iteration of the Picard method. To improve the performance of the Picard method, this linear system can be solved by an inner iteration, which leads to an inexact Picard iteration method. For example, Salkuyeh [24] suggested using the Hermitian and skew-Hermitian splitting (HSS) iteration method [2] to approximate the solution of the linear system with A at each Picard iteration, and proposed the Picard-HSS method for solving AVE (1.1). In fact, the Picard-HSS method was proposed originally by Bai and Yang for weakly nonlinear systems in [3]. Sufficient conditions guaranteeing the convergence of the Picard-HSS method, together with numerical experiments showing its effectiveness for solving AVE (1.1), are given in [24].
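As an illustration of the outer scheme (1.3), the following minimal sketch runs the Picard iteration with an exact inner solve (Python/NumPy is used here in place of MATLAB; the 2×2 test data, the tolerance, and the name `picard_ave` are illustrative assumptions, not taken from [22]):

```python
import numpy as np

def picard_ave(A, b, tol=1e-10, max_iter=500):
    """Picard iteration x^{(k+1)} = A^{-1}(|x^{(k)}| + b) for the AVE
    Ax - |x| = b, stopped on the relative AVE residual."""
    x = np.zeros(A.shape[0])
    for k in range(max_iter):
        x = np.linalg.solve(A, np.abs(x) + b)   # exact inner solve with A
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

# Illustrative 2x2 problem: the smallest singular value of A is 3 > 1,
# so the AVE has a unique solution.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)   # chosen so that x_true is the exact solution
x, its = picard_ave(A, b)
```

Replacing the exact solve by an inner iterative solver is precisely what turns this into an inexact Picard method.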
Note that each step of the inner HSS iteration of the Picard-HSS method [3,24] requires solving two linear subsystems, one with a Hermitian coefficient matrix and the other with a skew-Hermitian coefficient matrix. The subsystem with the Hermitian coefficient matrix can be solved easily by the CG method; however, the subsystem with the skew-Hermitian coefficient matrix is not easy to solve and, in some cases, is as difficult as the original linear system. To avoid solving a linear subsystem with a skew-Hermitian coefficient matrix in the inner iteration of the inexact Picard method, we use the single-step HSS (SHSS) method [9] to approximate the solution of the linear system with coefficient matrix A, and present a new inexact Picard method, abbreviated as the Picard-SHSS iteration method, in this paper.
The rest of this paper is organized as follows. In Section 2, after reviewing some notation and the SHSS iteration method, the Picard-SHSS iteration method for solving AVE (1.1) is described, and its convergence properties are studied. Numerical experiments are presented in Section 3 to show the feasibility and effectiveness of the Picard-SHSS method.
2.
The Picard-SHSS method
For convenience, some notation, definitions and results that will be used in the following parts are given below. For a matrix A ∈ R^{n×n}, A^T represents the transpose of A, and ρ(A) denotes the spectral radius of A. A is said to be positive definite if its symmetric part H = (1/2)(A + A^T) is positive definite; see [11] for the definition of a positive definite matrix in the complex case.
Let A ∈ R^{n×n} be a positive definite matrix, and let A = H + S be its Hermitian and skew-Hermitian splitting with H = (1/2)(A + A^T) and S = (1/2)(A − A^T). Based on the Hermitian and skew-Hermitian splitting of A, Bai et al. [2] presented the HSS method

(αI + H) x^{(k+1/2)} = (αI − S) x^{(k)} + q,
(αI + S) x^{(k+1)} = (αI − H) x^{(k+1/2)} + q,  (2.1)

to solve the positive definite system of linear equations Ax = q, where α is a positive iteration parameter and I is the identity matrix. Two linear subsystems need to be solved at each step of the HSS iteration method: one with coefficient matrix αI + H and the other with coefficient matrix αI + S; see [2] for more details. The main challenge of the HSS iteration method lies in solving the linear subsystem with αI + S, which in some cases is as difficult as solving the original linear system. To avoid solving a linear subsystem with αI + S, the SHSS method was proposed recently [9]. The iteration scheme of the SHSS method for solving the linear system Ax = q can be written as

(αI + H) x^{(k+1)} = (αI − S) x^{(k)} + q, k = 0, 1, 2, ⋯.  (2.2)
It has been proved in [9] that, under a loose restriction on the iteration parameter α, the SHSS method converges to the unique solution of the linear system Ax = q for any initial guess x^{(0)} ∈ R^n.
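A small sketch of the SHSS scheme for a generic positive definite system Ax = q may look as follows (NumPy is used in place of MATLAB; the 2×2 data and the stopping rule on the step difference are illustrative assumptions):

```python
import numpy as np

def shss(A, q, alpha, tol=1e-12, max_iter=1000):
    """SHSS iteration: (alpha*I + H) x^{(k+1)} = (alpha*I - S) x^{(k)} + q,
    with H = (A + A^T)/2 and S = (A - A^T)/2."""
    n = A.shape[0]
    H = (A + A.T) / 2            # Hermitian (symmetric) part
    S = (A - A.T) / 2            # skew-Hermitian part
    M = alpha * np.eye(n) + H    # only this SPD matrix is ever "inverted"
    N = alpha * np.eye(n) - S
    x = np.zeros(n)
    for k in range(max_iter):
        x_new = np.linalg.solve(M, N @ x + q)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

A = np.array([[3.0, 1.0], [-1.0, 3.0]])   # positive definite, non-symmetric
q = np.array([2.0, 4.0])
x, its = shss(A, q, alpha=1.0)            # exact solution is (0.2, 1.4)
```

Note that, in contrast to HSS, only the shifted symmetric matrix αI + H is factored or solved against; the skew part S appears solely in a matrix-vector product.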
When A is a positive definite matrix, using the HSS method (2.1) as an inner iteration in the Picard method (1.3), Salkuyeh [24] proposed the following Picard-HSS method for solving AVE (1.1).
Method 2.1. (The Picard-HSS iteration method) Let A ∈ R^{n×n} be a positive definite matrix, and let H = (1/2)(A + A^T) and S = (1/2)(A − A^T) be the Hermitian and skew-Hermitian parts of A, respectively. Given an initial guess x^{(0)} ∈ R^n and a sequence {l_k}_{k=0}^∞ of positive integers, compute x^{(k+1)} for k = 0, 1, 2, ⋯ using the following iteration scheme until {x^{(k)}} satisfies the stopping criterion:
(a). Set x^{(k,0)} = x^{(k)};
(b). For l = 0, 1, ⋯, l_k − 1, solve the following linear systems to obtain x^{(k,l+1)}:

(αI + H) x^{(k,l+1/2)} = (αI − S) x^{(k,l)} + |x^{(k)}| + b,
(αI + S) x^{(k,l+1)} = (αI − H) x^{(k,l+1/2)} + |x^{(k)}| + b,

where α is a positive constant and I is the identity matrix;
(c). Set x^{(k+1)} = x^{(k,l_k)}.
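For later comparison, one inner sweep of Method 2.1 can be sketched as below (an illustrative NumPy version; the name `hss_inner_sweep` and the dense solves are assumptions for exposition, since in practice the two shifted subsystems would be handled by structured solvers):

```python
import numpy as np

def hss_inner_sweep(A, x_k, b, alpha, lk):
    """Return x^{(k+1)} = x^{(k,l_k)} from steps (a)-(c) of Method 2.1.
    Each inner step solves one subsystem with alpha*I + H and one with
    alpha*I + S, both with the frozen right-hand data |x^{(k)}| + b."""
    n = A.shape[0]
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    c = np.abs(x_k) + b                 # frozen over the inner sweep
    y = x_k.copy()                      # x^{(k,0)} = x^{(k)}
    for _ in range(lk):
        y_half = np.linalg.solve(alpha*np.eye(n) + H, (alpha*np.eye(n) - S) @ y + c)
        y = np.linalg.solve(alpha*np.eye(n) + S, (alpha*np.eye(n) - H) @ y_half + c)
    return y

# With many inner steps, the sweep approximates the solution of A y = |x_k| + b.
A = np.array([[4.0, 1.0], [-1.0, 4.0]])
b = np.array([1.0, 2.0])
x_k = np.zeros(2)
y = hss_inner_sweep(A, x_k, b, alpha=2.0, lk=100)
```

The second solve in each inner step involves the shifted skew-Hermitian matrix αI + S, which is exactly the cost the Picard-SHSS method below removes.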
The next iterate x^{(k+1)} can be computed approximately by the SHSS iteration, making use of the splitting A = M(α) − N(α), as follows (see [9]):

M(α) x^{(k,l+1)} = N(α) x^{(k,l)} + |x^{(k)}| + b, l = 0, 1, ⋯, l_k − 1,  (2.3)

where M(α) = αI + H and N(α) = αI − S, α is a positive constant, {l_k}_{k=0}^∞ is a prescribed sequence of positive integers, and x^{(k,0)} = x^{(k)} is the starting point of the inner SHSS iteration at the k-th outer Picard iteration. This leads to the following inexact Picard iteration method, called the Picard-SHSS iteration method, for solving AVE (1.1).
Method 2.2. (The Picard-SHSS iteration method) Let A ∈ R^{n×n} be a positive definite matrix, and let H = (1/2)(A + A^T) and S = (1/2)(A − A^T) be the Hermitian and skew-Hermitian parts of A, respectively. Given an initial guess x^{(0)} ∈ R^n and a sequence {l_k}_{k=0}^∞ of positive integers, compute x^{(k+1)} for k = 0, 1, 2, ⋯ using the following iteration scheme:
(a). Set x^{(k,0)} = x^{(k)};
(b). For l = 0, 1, ⋯, l_k − 1, solve the following linear system to obtain x^{(k,l+1)}:

(αI + H) x^{(k,l+1)} = (αI − S) x^{(k,l)} + |x^{(k)}| + b,

where α is a positive constant and I is the identity matrix;
(c). Set x^{(k+1)} = x^{(k,l_k)};
(d). If x^{(k+1)} satisfies ‖Ax^{(k+1)} − |x^{(k+1)}| − b‖_2 / ‖b‖_2 ≤ 10^{−7}, then stop; otherwise, set x^{(k)} = x^{(k+1)} and go back to (a).
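Putting steps (a)–(d) together, a compact sketch of Method 2.2 might read as follows (NumPy in place of MATLAB; the dense solve with αI + H stands in for a Cholesky or CG solver, and the 2×2 example data are illustrative assumptions):

```python
import numpy as np

def picard_shss(A, b, alpha, lk=10, tol=1e-7, max_iter=500):
    """Picard-SHSS: each outer Picard step is approximated by l_k inner
    SHSS steps (alpha*I + H) y_{l+1} = (alpha*I - S) y_l + |x^{(k)}| + b."""
    n = A.shape[0]
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    M = alpha * np.eye(n) + H       # SPD: amenable to Cholesky / CG
    N = alpha * np.eye(n) - S
    x = np.zeros(n)
    for k in range(max_iter):
        rhs = np.abs(x) + b         # frozen over the inner sweep
        y = x.copy()                # x^{(k,0)} = x^{(k)}
        for _ in range(lk):         # inner SHSS iterations
            y = np.linalg.solve(M, N @ y + rhs)
        x = y                       # x^{(k+1)} = x^{(k,l_k)}
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

A = np.array([[4.0, 1.0], [-1.0, 4.0]])   # positive definite, sigma_min > 1
x_true = np.array([-1.0, 2.0])
b = A @ x_true - np.abs(x_true)           # makes x_true the exact solution
x, its = picard_shss(A, b, alpha=1.0)
```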
Compared with the Picard-HSS iteration method [24], the linear subsystem with αI + S is avoided in the inner iteration of the Picard-SHSS iteration method. The remaining linear subsystem with αI + H can be solved efficiently, either exactly by a sparse Cholesky factorization or inexactly by a preconditioned conjugate gradient method [23].
The next theorem provides a sufficient condition for the convergence of the Picard-SHSS method for solving AVE (1.1).
Theorem 2.1. Let A\in \mathbb{R}^{n\times n} be a positive definite matrix, and let H = \frac{1}{2}(A+A^T) and S = \frac{1}{2}(A-A^T) be its Hermitian and skew-Hermitian parts, respectively. Let \eta = \|A^{-1}\|_2 < 1 , and let \alpha be a positive constant such that \alpha > \max\left\{0, \frac{\sigma_{\max}^2-\lambda_{\min}^2}{2\lambda_{\min}}\right\} , where \lambda_{\min} is the smallest eigenvalue of H and \sigma_{\max} is the largest singular value of S . Then the AVE (1.1) has a unique solution x^* , and for any initial guess x^{(0)}\in \mathbb{R}^n and any sequence of positive integers l_k , k = 0, 1, \cdots , the iteration sequence \{x^{(k)}\}_{k = 0}^\infty produced by the Picard-SHSS iteration method converges to x^* provided that l_k\geq N for all k = 0, 1, \cdots , where N is a natural number satisfying

\left\|T(\alpha)^{s}\right\|_2 < \frac{1-\eta}{1+\eta}, \quad \forall\, s\geq N,

with T(\alpha) = M(\alpha)^{-1}N(\alpha) .
Proof. The proof is similar to that of [24, Theorem 1]; we include it to keep the paper self-contained. Based on the iteration scheme (2.3), we can express the (k+1) th iterate x^{(k+1)} of the Picard-SHSS iteration method as

x^{(k+1)} = T(\alpha)^{l_k}x^{(k)} + \sum_{j = 0}^{l_k-1}T(\alpha)^{j}M(\alpha)^{-1}\left(|x^{(k)}|+b\right). \quad (2.4)
Note that \eta < 1 , so AVE (1.1) has a unique solution x^*\in \mathbb{R}^n [18], which satisfies

x^* = T(\alpha)^{l_k}x^* + \sum_{j = 0}^{l_k-1}T(\alpha)^{j}M(\alpha)^{-1}\left(|x^*|+b\right). \quad (2.5)
By subtracting (2.5) from (2.4) we have

x^{(k+1)}-x^* = T(\alpha)^{l_k}\left(x^{(k)}-x^*\right) + \sum_{j = 0}^{l_k-1}T(\alpha)^{j}M(\alpha)^{-1}\left(|x^{(k)}|-|x^*|\right). \quad (2.6)
It follows from [9, Theorem 2.1] that \rho(T(\alpha)) < 1 when \alpha satisfies \alpha > \max\left\{0, \frac{\sigma_{\max}^2-\lambda_{\min}^2}{2\lambda_{\min}}\right\} . In this case, some calculations yield \sum_{j = 0}^{l_k-1}T(\alpha)^{j}M(\alpha)^{-1} = \left(I-T(\alpha)^{l_k}\right)A^{-1}. Now (2.6) becomes

x^{(k+1)}-x^* = T(\alpha)^{l_k}\left(x^{(k)}-x^*\right) + \left(I-T(\alpha)^{l_k}\right)A^{-1}\left(|x^{(k)}|-|x^*|\right).
Note that \||x|-|y|\|_2\leq\|x-y\|_2 for any x, y\in \mathbb{R}^n ; it then follows that

\left\|x^{(k+1)}-x^*\right\|_2 \leq \left(\left\|T(\alpha)^{l_k}\right\|_2 + \left(1+\left\|T(\alpha)^{l_k}\right\|_2\right)\eta\right)\left\|x^{(k)}-x^*\right\|_2.
Since \rho(T(\alpha)) < 1 , T(\alpha)^{s} tends to 0 as s tends to infinity. Therefore, there is a natural number N such that

\left\|T(\alpha)^{s}\right\|_2 < \frac{1-\eta}{1+\eta}, \quad \forall\, s\geq N.
Now, if l_k\geq N for all k = 0, 1, \cdots , then \left\|T(\alpha)^{l_k}\right\|_2 + \left(1+\left\|T(\alpha)^{l_k}\right\|_2\right)\eta < 1 , so \left\|x^{(k+1)}-x^*\right\|_2 < \left\|x^{(k)}-x^*\right\|_2 , and the iteration sequence \{x^{(k)}\}_{k = 0}^\infty produced by the Picard-SHSS iteration method converges to x^* .
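The convergence condition of Theorem 2.1 can be checked numerically on a small example; the 2×2 matrix below, with \lambda_{\min}(H) = 2 and \sigma_{\max}(S) = 3 , is an illustrative choice:

```python
import numpy as np

def shss_spectral_radius(A, alpha):
    """Spectral radius of T(alpha) = (alpha*I + H)^{-1} (alpha*I - S)."""
    n = A.shape[0]
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    T = np.linalg.solve(alpha * np.eye(n) + H, alpha * np.eye(n) - S)
    return max(abs(np.linalg.eigvals(T)))

A = np.array([[2.0, 3.0], [-3.0, 2.0]])  # H = 2I, S = [[0, 3], [-3, 0]]
lam_min, sig_max = 2.0, 3.0              # smallest eig of H, largest s.v. of S
threshold = max(0.0, (sig_max**2 - lam_min**2) / (2.0 * lam_min))  # = 1.25
# For any alpha above the threshold, rho(T(alpha)) < 1 is expected.
rho_near = shss_spectral_radius(A, threshold + 0.5)
rho_far = shss_spectral_radius(A, threshold + 2.0)
```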
In actual computation, the residual-updating form of the Picard-SHSS iteration method is more convenient; it can be written as follows.
The Picard-SHSS iteration method (residual-updating variant): Let A\in \mathbb{R}^{n\times n} be a non-Hermitian positive definite matrix, and let H = \frac{1}{2}(A+A^T) and S = \frac{1}{2}(A-A^T) be the Hermitian and skew-Hermitian parts of A , respectively. Given an initial guess x^{(0)}\in \mathbb{R}^n and a sequence \{l_k\}_{k = 0}^\infty of positive integers, compute x^{(k+1)} for k = 0, 1, 2, \cdots using the following iteration scheme until \{x^{(k)}\} satisfies the stopping criterion:
(a). Set s^{(k, 0)} = 0 and b^{(k)} = |x^{(k)}|+b-Ax^{(k)} ;
(b). For l = 0, 1, \cdots, l_k-1 , solve the following linear system to obtain s^{(k, l+1)} :

(\alpha I + H)s^{(k, l+1)} = (\alpha I - S)s^{(k, l)} + b^{(k)},

where \alpha is a positive constant and I is the identity matrix;
(c). Set x^{(k+1)} = x^{(k)}+s^{(k, l_k)} ;
(d). If x^{(k+1)} satisfies \frac{\|Ax^{(k+1)}-|x^{(k+1)}|-b\|_2}{\|b\|_2}\leq 10^{-7} , then stop; otherwise, let x^{(k)} = x^{(k+1)} and go back to (a).
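The residual-updating steps (a)–(d) above can be sketched as follows (again an illustrative NumPy version; the inner systems with \alpha I + H are solved densely here for brevity):

```python
import numpy as np

def picard_shss_residual(A, b, alpha, lk=10, tol=1e-7, max_iter=500):
    """Residual-updating Picard-SHSS: the inner SHSS sweep is applied to the
    correction equation A s = b^{(k)}, b^{(k)} = |x^{(k)}| + b - A x^{(k)},
    starting from s^{(k,0)} = 0."""
    n = A.shape[0]
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    M = alpha * np.eye(n) + H
    N = alpha * np.eye(n) - S
    x = np.zeros(n)
    for k in range(max_iter):
        bk = np.abs(x) + b - A @ x       # AVE residual at x^{(k)}
        s = np.zeros(n)                  # s^{(k,0)} = 0
        for _ in range(lk):
            s = np.linalg.solve(M, N @ s + bk)
        x = x + s                        # x^{(k+1)} = x^{(k)} + s^{(k,l_k)}
        if np.linalg.norm(A @ x - np.abs(x) - b) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

A = np.array([[4.0, 1.0], [-1.0, 4.0]])
x_true = np.array([1.0, -3.0])
b = A @ x_true - np.abs(x_true)          # makes x_true the exact solution
x, its = picard_shss_residual(A, b, alpha=1.0)
```

Starting each inner sweep from the zero correction makes this variant algebraically equivalent to Method 2.2 while only ever forming residual vectors.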
3.
Numerical experiments
In this section, we give an example with numerical experiments to show the effectiveness of the Picard-SHSS iteration method for solving AVE (1.1). To this end, the numerical properties of the Picard-HSS and Picard-SHSS methods are examined and compared experimentally. We use the residual-updating versions of the Picard-HSS iteration method [24] and the Picard-SHSS iteration method.
All the numerical experiments presented in this section have been computed in double precision using MATLAB R2012b on a PC with an Intel(R) Core(TM) i5-2400 CPU (3.10 GHz) and 4.00 GB of RAM. All runs are started from the initial zero vector and terminated if the current relative residual satisfies

\frac{\left\|Ax^{(k)}-\left|x^{(k)}\right|-b\right\|_2}{\|b\|_2}\leq 10^{-7},
where x^{(k)} is the computed solution by each of the methods at iteration k , and a maximum of 500 iterations is allowed. In addition, the inner iterations of the Picard-HSS and Picard-SHSS methods are terminated either when a prescribed inner stopping criterion is met or after a maximum of 10 inner iterations ( l_k = 10, \; k = 0, 1, 2, \cdots ).
The coefficient matrix A of AVE (1.1) is given by
where I_m and I_n are the identity matrices of order m and n with n = m^2 , \otimes denotes the Kronecker product, and T_x and T_y are the tridiagonal matrices T_x = \mbox{tridiag}(t_2, \; t_1, \; t_3)_{m\times m} and T_y = \mbox{tridiag}(t_2, \; 0, \; t_3)_{m\times m} with t_1 = 4 , t_2 = -1-Re , and t_3 = -1+Re . Here Re = (qh)/2 and h = 1/(m+1) are the mesh Reynolds number and the equidistant step size, respectively, and q is a positive constant. In fact, the matrix A arises from the finite difference approximation of the two-dimensional convection-diffusion equation
where \Omega = (0, \; 1)\times(0, \; 1) , \partial\Omega is its boundary, q is a positive constant measuring the magnitude of the convective terms, and p is a real number. If we apply the five-point finite difference scheme to the diffusive terms and the central difference scheme to the convective terms, we obtain the matrix A . For every nonnegative number q , the matrix A is in general non-symmetric positive definite [24]. The right-hand side vector of AVE (1.1) is taken in such a way that the vector x = (x_1, \; x_2, \; \cdots, \; x_n)^T with x_i = (-1)^ii, \; \; i = 1, 2, \cdots, n , is the exact solution.
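A sketch of one plausible assembly of the test matrix from the data above follows; the Kronecker-product form A = I_m \otimes T_x + T_y \otimes I_m + pI_n , with p entering as a diagonal shift, is an assumption for illustration (the precise display of (3.1) may differ):

```python
import numpy as np

def build_test_matrix(m, q, p):
    """Assemble an m^2 x m^2 convection-diffusion-type matrix from the
    tridiagonal blocks T_x = tridiag(t2, t1, t3), T_y = tridiag(t2, 0, t3)."""
    h = 1.0 / (m + 1)
    Re = q * h / 2.0                      # mesh Reynolds number
    t1, t2, t3 = 4.0, -1.0 - Re, -1.0 + Re
    Tx = (np.diag(np.full(m, t1))
          + np.diag(np.full(m - 1, t2), -1)
          + np.diag(np.full(m - 1, t3), 1))
    Ty = (np.diag(np.full(m - 1, t2), -1)
          + np.diag(np.full(m - 1, t3), 1))
    # Assumed assembly: x-differences within each grid line, y-differences
    # across lines, plus a diagonal shift by p.
    return np.kron(np.eye(m), Tx) + np.kron(Ty, np.eye(m)) + p * np.eye(m * m)

m = 4
A = build_test_matrix(m, q=1.0, p=0.0)
# Right-hand side chosen so that x_i = (-1)^i * i is the exact solution.
x_exact = np.array([(-1.0)**i * i for i in range(1, m * m + 1)])
b = A @ x_exact - np.abs(x_exact)
lam_min_H = np.linalg.eigvalsh((A + A.T) / 2).min()   # positive definiteness check
```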
In our numerical experiments, the matrix A in AVE (1.1) is defined by (3.1) with different values of q ( q = 0, \; 1, \; 10, \; \mbox{and}\; 100 ) and different values of p ( p = 0\; \mbox{and}\; -1 ). The parameters used in the Picard-HSS and Picard-SHSS iteration methods are chosen as the ones that result in the fewest iteration steps; see Tables 1 and 2. In Tables 3 and 4, we present the numerical results for the Picard-HSS and Picard-SHSS iteration methods: the elapsed CPU time in seconds (denoted by CPU), the number of iterations to convergence (denoted by IT), and the relative residuals (denoted by RES).
From Tables 3 and 4, we see that both the Picard-HSS and Picard-SHSS iteration methods successfully produce approximate solutions to AVE (1.1) for all of the problem scales n = m^2 and convection measures q . For the convergent cases, the CPU time increases rapidly with the problem scale for all tested iteration methods. Moreover, the numerical results in the two tables show that the Picard-SHSS iteration method performs better than the Picard-HSS iteration method in most cases, as the former costs less CPU time to reach the stopping criterion, except for the cases q = 100, \; m = 10 and q = 100, \; m = 20 . In addition, for p = -1 , the Picard-SHSS iteration method requires the fewest iteration steps and the least CPU time to reach the stopping criterion. In summary, the Picard-SHSS iteration method is useful and effective for solving the NP-hard AVE (1.1).
4.
Conclusions
In this paper, the Picard-SHSS method is proposed for solving the absolute value equation. A sufficient condition for the convergence of the proposed method is given. A numerical example is given to confirm our theoretical results and to verify the effectiveness of the new method.
Acknowledgements
We would like to thank Professor Davod Khojasteh Salkuyeh of the University of Guilan for providing us with the MATLAB code of the Picard-HSS method. Many thanks to the Editor and the anonymous referees for their helpful comments on the revision of this article.
This work was supported by Natural Science Foundation of Northwest Normal University (No. NWNU-LKQN-17-5).
Conflicts of interest
The authors declare that they have no conflicts of interest.