
In this paper, we consider the solution of the linear weighted complementarity problem (LWCP). First, we introduce a new class of weighted complementarity functions and show that they are continuously differentiable. On this basis, the LWCP is reformulated as a smooth system of equations, which is then solved by a Levenberg-Marquardt method. The convergence of the algorithm is established theoretically, and numerical experiments are carried out. Comparative experiments show that the algorithm has advantages in computing time and number of iterations.
Citation: Panjie Tian, Zhensheng Yu, Yue Yuan. A smoothing Levenberg-Marquardt algorithm for linear weighted complementarity problem[J]. AIMS Mathematics, 2023, 8(4): 9862-9876. doi: 10.3934/math.2023498
The weighted complementarity problem (WCP) is to find a triple (x,s,y)∈Rn×Rn×Rm such that
x⩾0,s⩾0,xs=w,F(x,s,y)=0, | (1.1) |
where F:R2n+m→Rn+m is a continuously differentiable function, w∈Rn+ is a given weight vector, and xs:=x∘s denotes the componentwise product of the vectors x and s. When w=0, WCP (1.1) reduces to the classical nonlinear complementarity problem (NCP). Many effective algorithms [1,2,3,4,5] are available for the NCP, for example, the Newton method [1], the quasi-Newton method [2], the Levenberg-Marquardt (L-M) method [3,4], and neural-network methods [5]. If
F(x,s,y)=Px+Qs+Ry−a, | (1.2) |
problem (1.1) becomes the linear weighted complementarity problem (LWCP) studied in this paper, which is to find a triple (x,s,y)∈Rn×Rn×Rm such that
x⩾0,s⩾0,xs=w,Px+Qs+Ry=a, | (1.3) |
where P∈R(n+m)×n, Q∈R(n+m)×n, R∈R(n+m)×m are given matrices and a∈Rn+m is a given vector. In addition, when
F(x,s,y)=(∇f(x)−s−ATyAx−b), | (1.4) |
problem (1.1) is the perturbed Karush-Kuhn-Tucker (KKT) condition of the following nonlinear programming (NLP) problem
min f(x), s.t. Ax=b, x≥0. | (1.5) |
Problem (1.3) was introduced by Potra [6] in 2012 and has been widely studied for its important applications in management, market equilibrium, etc. Many equilibrium problems can be transformed into an LWCP, such as the famous Fisher market equilibrium problem [7] and the quadratic programming and weighted centering problems [6].
In recent years, many effective algorithms have been proposed for problem (1.1) or (1.3) [8,9,10,11,12,13]. For example, Chi et al. [9] proposed a full-Newton step infeasible interior-point method for the LWCP, Zhang [12] proposed a smoothing Newton-type method for the LWCP, and Tang et al. [13] proposed a nonmonotone L-M method for the nonlinear WCP. Interior-point methods depend on the choice of the initial point, and the classical Newton method requires positive definiteness of the Hessian matrix; otherwise, it is difficult to guarantee that the Newton direction is a descent direction. The L-M method depends neither on the choice of the initial point nor on the positive definiteness of the Hessian matrix. Therefore, motivated by [13], this paper uses a nonmonotone L-M method to solve problem (1.3).
The LWCP is a rather general complementarity model. To solve it, we would like to use WCP functions obtained by extending NCP functions. However, owing to the weighting term, not all NCP functions can be extended directly to WCP functions. NCP functions of Fischer-Burmeister (FB) type have been extended to WCP functions by many scholars. In this paper, motivated by the smoothed penalty function in [14], we construct a smoothing function for the WCP and then use an L-M method to solve the equivalent reformulated system of equations of problem (1.3). Comparative experiments on randomly generated problems show the feasibility and effectiveness of our algorithm.
The following notation is used throughout this paper. The superscript T denotes transposition, R denotes the set of real numbers, and Rn denotes the set of all n-dimensional real column vectors. The matrix I is the identity matrix and ‖⋅‖ denotes the 2-norm. All vectors in this paper are column vectors.
In this section, we study a class of weighted complementarity functions and discuss their properties. Based on this weighted complementarity function, an equivalent reformulation of problem (1.3) as a system of equations is given.
Definition 2.1. For a fixed c⩾0, a function ϕc:R2→R is called a weighted complementarity function [13] if it satisfies
ϕc(a,b)=0⇔a⩾0,b⩾0,ab=c. | (2.1) |
When c=0, ϕc(a,b) reduces to the NCP function.
In this paper, to solve the LWCP (1.3), we would like to use WCP functions obtained by extending NCP functions. However, owing to the weighting term, not all NCP functions can be directly generalized to WCP functions. For example, consider the two piecewise NCP functions given in [2]:
ϕ(a,b)={3a−a2/b, if b⩾a>0 or 3b>−a⩾0; 3a−b2/a, if a>b>0 or 3a>−b⩾0; 9a+9b, else. | (2.2) |
ϕ(a,b)={k2a, if b⩾k|a|; 2kb−b2/a, if a>|b|/k; 2k2a+2kb+b2/a, if a<−|b|/k; k2a+4kb, if b⩽−k|a|. | (2.3) |
The FB function, by contrast, has been extended to a WCP function by many scholars. For example, based on the symmetrically perturbed FB function in [15], Liu et al. [11] constructed:
ϕc(μ,a,b)=(1+μ)(a+b)−√(a+μb)2+(μa+b)2+2c+2μ2, | (2.4) |
where, c is a given nonnegative vector.
Zhang [12] proposed:
ϕθ(μ,a,b,c)=√a2+b2−2θab+2(1+θ)c+2μ−a−b, | (2.5) |
where θ∈(−1,1] and c is a given nonnegative vector.
In addition, [13] provides another smooth WCP function:
ϕcτ,q(a,b)=(a+b)q−(√a2+b2+(τ−2)ab+(4−τ)c)q, | (2.6) |
where c is a given nonnegative vector, τ∈[0,4) is a constant, and q>1 is an odd integer. Compared with (2.4) and (2.5), (2.6) does not need to introduce a smoothing factor μ; smoothness is achieved by controlling the value of q. This smoothing technique will be applied to the new WCP function given below:
ϕcτ(a,b)=a+b−√τ(a−b)2+(1−τ)(a2+b2)+2(1+τ)c, | (2.7) |
where c is a given nonnegative vector and τ∈[0,1] is a constant.
Since the function (2.7) is not smooth, we apply the following smoothing treatment:
ϕcτ,q(a,b)=(a+b)q−(√τ(a−b)2+(1−τ)(a2+b2)+2(1+τ)c)q, | (2.8) |
where c is a given nonnegative vector, τ∈[0,1] is a constant, and q>1 is an odd integer.
Theorem 2.1. Let ϕcτ,q be defined by (2.8) with τ∈[0,1] and q>1 being a positive odd integer. Then ϕcτ,q is a family of WCP functions, i.e.,
ϕcτ,q(a,b)=0⇔a⩾0,b⩾0,ab=c. | (2.9) |
Proof. Since for any α,β∈R and any positive odd integer q we have αq=βq⇔α=β, it follows that
ϕcτ,q(a,b)=0⇔(a+b)q=(√τ(a−b)2+(1−τ)(a2+b2)+2(1+τ)c)q⇔a+b=√τ(a−b)2+(1−τ)(a2+b2)+2(1+τ)c⇔ϕcτ(a,b)=0. | (2.10) |
That is, it suffices to prove that ϕcτ(a,b) is a family of WCP functions. On the one hand, suppose that a,b∈R satisfy ϕcτ(a,b)=0, i.e.,
√τ(a−b)2+(1−τ)(a2+b2)+2(1+τ)c=a+b. | (2.11) |
By squaring both sides of (2.11), we obtain 2(1+τ)ab=2(1+τ)c, which together with τ∈[0,1] yields ab=c. Substituting ab=c into (2.11), we have √(a2+b2+2ab)=a+b⩾0. Since c=ab⩾0, it follows that a⩾0 and b⩾0. On the other hand, suppose that a⩾0, b⩾0, and ab=c. Then a+b⩾0 and
√τ(a−b)2+(1−τ)(a2+b2)+2(1+τ)c=√a2+b2+2ab=|a+b|=a+b. | (2.12) |
which implies that ϕcτ(a,b)=0. This completes the proof.
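To make the equivalence in Theorem 2.1 concrete, the function (2.8) can be evaluated directly. The sketch below is our own Python (the names phi and h are ours, not from the paper):

```python
import numpy as np

def phi(a, b, c, tau=0.5, q=3):
    """Smoothed WCP function (2.8): (a+b)^q - h(a,b)^q."""
    # h(a,b) is the square-root term defined in (2.13)
    h = np.sqrt(tau*(a - b)**2 + (1 - tau)*(a**2 + b**2) + 2*(1 + tau)*c)
    return (a + b)**q - h**q

# At a complementary pair (a >= 0, b >= 0, ab = c) the radicand
# collapses to (a+b)^2, so phi vanishes:
a, b = 2.0, 0.75
print(phi(a, b, c=a*b))  # → 0.0
# Away from complementarity it is nonzero:
print(phi(-1.0, 2.0, c=0.5) != 0.0)  # → True
```

This mirrors the squaring argument in the proof: with ab=c the radicand equals τ(a−b)²+(1−τ)(a²+b²)+2(1+τ)ab=(a+b)².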
Lemma 2.1. Let ϕcτ,q be defined by (2.8) with τ∈[0,1] and q>1 being a positive odd integer, and let
hcτ(a,b)=√τ(a−b)2+(1−τ)(a2+b2)+2(1+τ)c. | (2.13) |
Then
(ⅰ) When q>1, ϕcτ,q is continuously differentiable at any (a,b)∈R2 with
∇ϕcτ,q=[∂ϕcτ,q∂a∂ϕcτ,q∂b], | (2.14) |
where
∂ϕcτ,q∂a=q[(a+b)q−1−hcτ(a,b)q−2(a−τb)],∂ϕcτ,q∂b=q[(a+b)q−1−hcτ(a,b)q−2(b−τa)]. |
(ⅱ) When q>3, ϕcτ,q is twice continuously differentiable at any (a,b)∈R2 with
∇2ϕcτ,q(a,b)=[∂2ϕcτ,q∂a2∂2ϕcτ,q∂a∂b∂2ϕcτ,q∂b∂a∂2ϕcτ,q∂b2], | (2.15) |
where
∂2ϕcτ,q∂a2=q{(q−1)(a+b)q−2−hcτ(a,b)q−4[(q−2)(a−τb)2+hcτ(a,b)2]},∂2ϕcτ,q∂b2=q{(q−1)(a+b)q−2−hcτ(a,b)q−4[(q−2)(b−τa)2+hcτ(a,b)2]}, |
∂2ϕcτ,q∂a∂b=∂2ϕcτ,q∂b∂a=q{(q−1)(a+b)q−2−hcτ(a,b)q−4[(q−2)(a−τb)(b−τa)−τhcτ(a,b)2]}. |
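The closed-form partial derivatives in Lemma 2.1(i) can be cross-checked against central finite differences. The following is a sketch in Python (our own code and names, not part of the paper):

```python
import numpy as np

def h(a, b, c, tau):
    # the square-root term (2.13)
    return np.sqrt(tau*(a - b)**2 + (1 - tau)*(a**2 + b**2) + 2*(1 + tau)*c)

def phi(a, b, c, tau, q):
    # the smoothed WCP function (2.8)
    return (a + b)**q - h(a, b, c, tau)**q

def grad_phi(a, b, c, tau, q):
    # partial derivatives from Lemma 2.1(i)
    hv = h(a, b, c, tau)
    da = q*((a + b)**(q - 1) - hv**(q - 2)*(a - tau*b))
    db = q*((a + b)**(q - 1) - hv**(q - 2)*(b - tau*a))
    return np.array([da, db])

# central-difference check at a generic point
a, b, c, tau, q = 0.8, -0.3, 0.2, 0.5, 3
eps = 1e-6
fd = np.array([
    (phi(a + eps, b, c, tau, q) - phi(a - eps, b, c, tau, q)) / (2*eps),
    (phi(a, b + eps, c, tau, q) - phi(a, b - eps, c, tau, q)) / (2*eps),
])
print(np.allclose(grad_phi(a, b, c, tau, q), fd, atol=1e-5))  # → True
```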
Lemma 2.2. Let ϕcτ,q be defined by (2.8) with τ∈[0,1] and q>1 being a positive odd integer. Define the closed convex set Ω:={u∈R2 | ‖u‖⩽θ}, where θ is a positive constant. Then:
(ⅰ) When q>1, ϕcτ,q is Lipschitz continuous on Ω for any θ>0.
(ⅱ) When q>3, ∇ϕcτ,q is Lipschitz continuous on Ω for any θ>0.
Proof. Since ∇ϕcτ,q and ∇2ϕcτ,q are bounded on the set Ω, conclusions (ⅰ) and (ⅱ) follow from the mean value theorem.
Given a weight vector w∈Rn+, let z:=(x,s,y)∈R2n+m and
H(z)=H(x,s,y):=(F(x,s,y)Φwτ,q(x,s)), | (2.16) |
where
Φwτ,q(x,s)=(ϕw1τ,q(x1,s1)⋮ϕwnτ,q(xn,sn)). | (2.17) |
Then solving the LWCP (1.3) is equivalent to solving the system of equations H(z)=0.
Lemma 2.3. Let H:R2n+m→R2n+m and Φwτ,q:R2n→Rn be defined by (2.16) and (2.17), respectively. Then:
(ⅰ) Φwτ,q(x,s) is continuously differentiable at any z=(x,s,y)∈R2n+m.
(ⅱ) H(z) is continuously differentiable at any z=(x,s,y)∈R2n+m with its Jacobian
H′(z)=(F′xF′sF′yD1D20), | (2.18) |
where
D1=diag{q[(xi+si)q−1−hwiτ(xi,si)q−2(xi−τsi)]},i=1,2,⋯,n.D2=diag{q[(xi+si)q−1−hwiτ(xi,si)q−2(si−τxi)]},i=1,2,⋯,n.hwiτ(xi,si)=√τ(xi−si)2+(1−τ)(xi2+si2)+2(1+τ)wi,i=1,2,⋯,n. |
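For the linear map (1.2), H(z) in (2.16) and the Jacobian (2.18) can be assembled explicitly. The sketch below is our own dense-NumPy illustration (function and variable names are ours):

```python
import numpy as np

def build_H_and_jacobian(x, s, y, P, Q, R, a, w, tau=0.5, q=3):
    """H(z) of (2.16) and H'(z) of (2.18) for F(x,s,y) = Px + Qs + Ry - a."""
    n, m = x.size, R.shape[1]
    h = np.sqrt(tau*(x - s)**2 + (1 - tau)*(x**2 + s**2) + 2*(1 + tau)*w)
    Phi = (x + s)**q - h**q                            # componentwise (2.17)
    H = np.concatenate([P @ x + Q @ s + R @ y - a, Phi])
    d1 = q*((x + s)**(q - 1) - h**(q - 2)*(x - tau*s))  # diagonal of D1
    d2 = q*((x + s)**(q - 1) - h**(q - 2)*(s - tau*x))  # diagonal of D2
    J = np.block([
        [P, Q, R],                                      # F'_x, F'_s, F'_y
        [np.diag(d1), np.diag(d2), np.zeros((n, m))],
    ])
    return H, J
```

At a solution of (1.3) the first block of H vanishes by construction and the second vanishes by Theorem 2.1.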
Let H(z) be defined by (2.16); then its merit function M:R2n+m→R+ is defined as:
M(z):=12‖H(z)‖2. | (2.19) |
Obviously, since M(z)⩾0 for all z, solving the LWCP (1.3) is also equivalent to finding a global solution of M(z)=0. In addition, the following conclusion can be obtained from Lemma 2.3.
Lemma 2.4. Let M:R2n+m→R+ be defined by (2.19), then M(z) is continuously differentiable at any z∈R2n+m, and ∇M(z)=H′(z)TH(z).
In this section, based on the WCP function in Section 2, we will give the smooth L-M type algorithm and its convergence.
Algorithm 3.1 (A smoothing L-M method)
Step 0: Choose θ,σ,γ,δ∈(0,1), z0:=(x0,s0,y0)∈R2n+m, and a tolerance ε with 0⩽ε≪1. Set Q0:=1 and C0:=M(z0). Choose a sequence {ηk | ηk∈(0,1), ∀k⩾0}, and set k:=0.
Step 1: Compute H(zk). If ‖H(zk)‖⩽ε then stop.
Step 2: Let μk:=θ‖H(zk)‖2. Compute the search direction dk∈R2n+m by
∇M(zk)+(H′(zk)TH′(zk)+μkI)dk=0. | (3.1) |
Step 3: If dk satisfies
‖H(zk+dk)‖⩽σ‖H(zk)‖, | (3.2) |
then set αk:=1 and go to Step 5. Otherwise, go to Step 4.
Step 4: Let jk be the smallest nonnegative integer j satisfying
M(zk+δjdk)⩽Ck−γ‖δjdk‖2. | (3.3) |
Set αk:=δjk and go to Step 5.
Step 5: Set zk+1:=zk+αkdk and
Qk+1:=ηkQk+1,Ck+1:=(ηkQkCk+M(zk+1))/Qk+1. | (3.4) |
Step 6: Let k:=k+1, and go to step 1.
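The steps above translate almost line-for-line into code. The following is a minimal Python sketch of Algorithm 3.1 (our own transcription, not the authors' MATLAB implementation); for simplicity a constant eta stands in for the sequence ηk, and the backtracking loop carries a small floor on the step size as a safeguard:

```python
import numpy as np

def smoothing_lm(H, jac, z0, theta=1e-4, sigma=0.5, gamma=0.01,
                 delta=0.8, eta=0.2, eps=1e-10, max_iter=200):
    """Smoothing L-M method (Algorithm 3.1) for H(z) = 0.
    H, jac: callables returning H(z) and the Jacobian H'(z)."""
    z = z0.astype(float).copy()
    M = lambda v: 0.5 * np.dot(H(v), H(v))        # merit function (2.19)
    C, Qk = M(z), 1.0                             # Step 0: C0 = M(z0), Q0 = 1
    for _ in range(max_iter):
        Hz = H(z)
        if np.linalg.norm(Hz) <= eps:             # Step 1: stopping test
            break
        J = jac(z)
        mu = theta * np.dot(Hz, Hz)               # Step 2: mu_k
        g = J.T @ Hz                              # = grad M(z), Lemma 2.4
        d = np.linalg.solve(J.T @ J + mu * np.eye(z.size), -g)      # (3.1)
        if np.linalg.norm(H(z + d)) <= sigma * np.linalg.norm(Hz):  # (3.2)
            alpha = 1.0                           # Step 3: full step
        else:
            alpha = 1.0                           # Step 4: backtrack on (3.3)
            while (M(z + alpha * d) > C - gamma * np.linalg.norm(alpha * d)**2
                   and alpha > 1e-14):
                alpha *= delta
        z = z + alpha * d                         # Step 5
        Q_next = eta * Qk + 1.0                   # nonmonotone update (3.4)
        C = (eta * Qk * C + M(z)) / Q_next
        Qk = Q_next
    return z
```

Setting eta to 0 recovers a monotone line search, as noted below.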
Existing L-M type methods [16,17,18] are usually designed based on the Armijo line search, while Algorithm 3.1 adopts a nonmonotone derivative-free line search. The choice of ηk controls the degree of nonmonotonicity; if ηk≡0, the line search is monotone.
Theorem 3.1. Let {zk} be the sequence generated by Algorithm 3.1. Then {zk} satisfies M(zk)⩽Ck for all k⩾0.
Proof. By Algorithm 3.1, C0=M(z0). We proceed by induction and assume that M(zk)⩽Ck. If ∇M(zk)=0, then Algorithm 3.1 terminates. Otherwise, ∇M(zk)≠0, which implies H(zk)≠0, and hence μk=θ‖H(zk)‖2>0. So the matrix H′(zk)TH′(zk)+μkI is positive definite, and thus the search direction dk in Step 2 is well-defined with dk≠0. Since ∇M(zk)≠0, we have
∇M(zk)Tdk=−dkT(H′(zk)TH′(zk)+μkI)dk<0. | (3.5) |
This implies that dk is a descent direction of M at the point zk. Next we prove that Step 4 yields a step size after finitely many trials. To the contrary, assume that M(zk+δjdk)>Ck−γ‖δjdk‖2 for every nonnegative integer j. Then
M(zk+δjdk)>Ck−γ‖δjdk‖2⩾M(zk)−γ‖δjdk‖2, | (3.6) |
thereby
[M(zk+δjdk)−M(zk)+γ‖δjdk‖2]/δj>0. | (3.7) |
By letting j→∞ in (3.7), we obtain ∇M(zk)Tdk⩾0, which contradicts (3.5). Therefore, we can always obtain zk+1 by Step 3 or Step 4. If zk+1 is generated by Step 3, i.e., ‖H(zk+dk)‖⩽σ‖H(zk)‖, then (1/2)‖H(zk+dk)‖2⩽(1/2)σ2‖H(zk)‖2, so M(zk+1)⩽σ2M(zk). Since σ∈(0,1), we have M(zk+1)⩽σ2M(zk)<M(zk)⩽Ck. If zk+1 is generated by Step 4, we get M(zk+1)⩽Ck directly. So, from (3.4), Ck+1=(ηkQkCk+M(zk+1))/Qk+1⩾(ηkQkM(zk+1)+M(zk+1))/Qk+1=M(zk+1). Hence, we conclude that M(zk)⩽Ck for all k⩾0.
In the following, we suppose that ∇M(zk)≠0 for all k⩾0. To discuss the convergence of Algorithm 3.1, we need the following lemma.
Lemma 3.1. Let {zk} be the sequence generated by Algorithm 3.1, then there exists a nonnegative constant C∗ such that
limk→∞M(zk)=limk→∞Ck=C∗. | (3.8) |
Proof. By Theorem 3.1, we have 0 \leqslant M\left( {{z^k}} \right) \leqslant {C_k} for all k \geqslant 0 and {C_{k + 1}} \leqslant \frac{{{\eta _k}{Q_k}{C_k} + {C_k}}}{{{Q_{k + 1}}}} = {C_k}. Hence, by the monotone convergence theorem, there exists a nonnegative constant {C^ * } such that \mathop {\lim }\limits_{k \to \infty } {C_k} = {C^ * } . By the definition of {Q_k} , with {\eta _{\max }} \in \left( {0, 1} \right) denoting an upper bound of \left\{ {{\eta _k}} \right\} , we have
{Q_{k + 1}} = 1 + \mathop \Sigma \limits_{i = 0}^k \mathop \Pi \limits_{j = 0}^i {\eta _{k - j}} \leqslant 1 + \mathop \Sigma \limits_{i = 0}^k \eta _{\max }^{i + 1} \leqslant \mathop \Sigma \limits_{i = 0}^\infty \eta _{\max }^i = \frac{1}{{1 - {\eta _{\max }}}}. | (3.9) |
Hence, we conclude that {\eta _k}{Q_k} \leqslant \frac{{{\eta _{\max }}}}{{1 - {\eta _{\max }}}} is bounded, which together with \mathop {\lim }\limits_{k \to \infty } {C_k} = {C^ * } yields \mathop {\lim }\limits_{k \to \infty } {\eta _{k - 1}}{Q_{k - 1}}\left( {{C_k} - {C_{k - 1}}} \right) = 0. So, it follows from (3.4) that
\begin{array}{l} M\left( {{z^{k + 1}}} \right) = {Q_{k + 1}}{C_{k + 1}} - {\eta _k}{Q_k}{C_k} = \left( {{\eta _k}{Q_k} + 1} \right){C_{k + 1}} - {\eta _k}{Q_k}{C_k} \\ \;\;\;\;\;\;\;\;\;\;\;\;\;= {\eta _k}{Q_k}\left( {{C_{k + 1}} - {C_k}} \right) + {C_{k + 1}}. \end{array} | (3.10) |
Hence
\mathop {\lim }\limits_{k \to \infty } M\left( {{z^k}} \right) = \mathop {\lim }\limits_{k \to \infty } \left[ {{\eta _{k - 1}}{Q_{k - 1}}\left( {{C_k} - {C_{k - 1}}} \right) + {C_k}} \right] = {C^ * }. | (3.11) |
We complete the proof.
Theorem 3.2. Let \left\{ {{z^k}} \right\} be the sequence generated by Algorithm 3.1. Then any accumulation point {z^ * } of \left\{ {{z^k}} \right\} is a stationary point of M\left( z \right) .
Proof. By Lemma 3.1, we have \mathop {\lim }\limits_{k \to \infty } M\left( {{z^k}} \right) = \mathop {\lim }\limits_{k \to \infty } {C_k} = {C^ * }, {C^ * } \geqslant 0 . If {C^ * } = 0 , then \mathop {\lim }\limits_{k \to \infty } H\left( {{z^k}} \right) = 0 which together with Lemma 2.4 yields \nabla M\left( {{z^ * }} \right) = 0 . In the following, we discuss the case of {C^ * } > 0 . Set N: = \left\{ {k\left| {\left\| {H\left( {{z^k} + {d_k}} \right)} \right\| \leqslant \sigma \left\| {H\left( {{z^k}} \right)} \right\|} \right.} \right\} . Then N must be a finite set, otherwise M\left( {{z^{k + 1}}} \right) \leqslant {\sigma ^2}M\left( {{z^k}} \right) holds for infinitely many k . By letting k \to \infty with k \in N , we can have {C^ * } \leqslant {\sigma ^2}{C^ * } and 1 \leqslant {\sigma ^2} which contradicts \sigma \in \left( {0, 1} \right) . Therefore, we can suppose that there exists an index \bar k > 0 such that \left\| {H\left( {{z^k} + {d_k}} \right)} \right\| > \sigma \left\| {H\left( {{z^k}} \right)} \right\| for all k \geqslant \bar k . Thereby, there exists a {j_k} such that M\left( {{z^{k + 1}}} \right) \leqslant {C_k} - \gamma {\left\| {{\delta ^{{j_k}}}{d_k}} \right\|^2} , i.e.,
\gamma {\left\| {{\delta ^{{j_k}}}{d_k}} \right\|^2} \leqslant {C_k} - M\left( {{z^{k + 1}}} \right). | (3.12) |
Next, we suppose that {z^ * } is the limit of the subsequence {\left\{ {{z^k}} \right\}_{k \in K}} \subset \left\{ {{z^k}} \right\} , where K \subset \left\{ {0, 1, 2, \cdots } \right\} , i.e., \mathop {\lim }\limits_{k\left( { \in K} \right) \to \infty } {z^k} = {z^ * } . Hence, by continuity, we have {C^ * } = M\left( {{z^ * }} \right) = \frac{1}{2}{\left\| {H\left( {{z^ * }} \right)} \right\|^2} . By \mathop {\lim }\limits_{k \to \infty } {\mu _k} = \mathop {\lim }\limits_{k \to \infty } \theta {\left\| {H\left( {{z^k}} \right)} \right\|^2} = \mathop {\lim }\limits_{k \to \infty } 2\theta M\left( {{z^k}} \right) = 2\theta {C^ * } , we can get that
\mathop {\lim }\limits_{k\left( { \in K} \right) \to \infty } \left[ {H'{{\left( {{z^k}} \right)}^T}H'\left( {{z^k}} \right) + {\mu _k}I} \right] = H'{\left( {{z^ * }} \right)^T}H'\left( {{z^ * }} \right) + 2\theta {C^ * }I. | (3.13) |
According to the proof of Theorem 3.1, the matrix H'{\left( {{z^k}} \right)^T}H'\left( {{z^k}} \right) + {\mu _k}I is symmetric positive definite. In addition, since {C^ * } > 0 , the matrix H'{\left( {{z^ * }} \right)^T}H'\left( {{z^ * }} \right) + 2\theta {C^ * }I is also symmetric positive definite. Hence, we have
\mathop {\lim }\limits_{k\left( { \in K} \right) \to \infty } {\left[ {H'{{\left( {{z^k}} \right)}^T}H'\left( {{z^k}} \right) + {\mu _k}I} \right]^{ - 1}} = {\left[ {H'{{\left( {{z^ * }} \right)}^T}H'\left( {{z^ * }} \right) + 2\theta {C^ * }I} \right]^{ - 1}}. | (3.14) |
and
\mathop {\lim }\limits_{k\left( { \in K} \right) \to \infty } {d_k} = {d^ * } = - {\left[ {H'{{\left( {{z^ * }} \right)}^T}H'\left( {{z^ * }} \right) + 2\theta {C^ * }I} \right]^{ - 1}}\nabla M\left( {{z^ * }} \right). | (3.15) |
By (3.5), we can get
\nabla M{\left( {{z^ * }} \right)^T}{d^ * } = \mathop {\lim }\limits_{k\left( { \in K} \right) \to \infty } \nabla M{\left( {{z^k}} \right)^T}{d^k} \leqslant 0. | (3.16) |
By letting k\left( { \in K} \right) \to \infty in (3.12), we have \mathop {\lim }\limits_{k\left( { \in K} \right) \to \infty } \left\| {{\delta ^{{j_k}}}{d_k}} \right\| = 0 . If \left\{ {{\delta ^{{j_k}}}} \right\} does not tend to zero, then \mathop {\lim }\limits_{k\left( { \in K} \right) \to \infty } {d_k} = {d^ * } = 0 , which together with (3.15) yields \nabla M\left( {{z^ * }} \right) = 0 . Otherwise, \mathop {\lim }\limits_{k\left( { \in K} \right) \to \infty } {\delta ^{{j_k}}} = 0 . From Step 4 and Theorem 3.1,
M({z^k} + {\delta ^{{j_k} - 1}}{d_k}) > {C_k} - \gamma {\left\| {{\delta ^{{j_k} - 1}}{d_k}} \right\|^2} \geqslant M({z^k}) - \gamma {\left\| {{\delta ^{{j_k} - 1}}{d_k}} \right\|^2}, | (3.17) |
i.e.,
\frac{{M({z^k} + {\delta ^{{j_k} - 1}}{d_k}) - M({z^k})}}{{{\delta ^{{j_k} - 1}}}} + \gamma {\left\| {{\delta ^{{j_k} - 1}}{d_k}} \right\|^2} > 0. | (3.18) |
Since M\left( z \right) is continuously differentiable at {z^ * } , letting k\left( { \in K} \right) \to \infty in (3.18) yields
\nabla M{\left( {{z^ * }} \right)^T}{d^ * } \geqslant 0. | (3.19) |
Then, from (3.16), we can get \nabla M{\left( {{z^ * }} \right)^T}{d^ * } = 0 and
{\left( {{d^ * }} \right)^T}\left( {H'{{\left( {{z^ * }} \right)}^T}H'\left( {{z^ * }} \right) + 2\theta {C^ * }I} \right){d^ * } = - \nabla M{\left( {{z^ * }} \right)^T}{d^ * } = 0. |
Since the matrix H'{\left( {{z^ * }} \right)^T}H'\left( {{z^ * }} \right) + 2\theta {C^ * }I is positive definite, we have
{d^ * } = - {\left[ {H'{{\left( {{z^ * }} \right)}^T}H'\left( {{z^ * }} \right) + 2\theta {C^ * }I} \right]^{ - 1}}\nabla M\left( {{z^ * }} \right) = 0. | (3.20) |
Since the matrix {\left[ {H'{{\left( {{z^ * }} \right)}^T}H'\left( {{z^ * }} \right) + 2\theta {C^ * }I} \right]^{ - 1}} is nonsingular, we get \nabla M\left( {{z^ * }} \right) = 0. This completes the proof.
In this section, we report numerical experiments on the LWCP solved by Algorithm 3.1. All experiments were conducted on a ThinkPad480 with a 1.8 GHz CPU and 8.0 GB of RAM; the codes are run in MATLAB R2018b under Windows 10.
We first generate the matrices P, Q, R and the vector a as follows:
P = \left( {\begin{array}{*{20}{c}} A \\ M \end{array}} \right), Q = \left( {\begin{array}{*{20}{c}} 0 \\ I \end{array}} \right), R = \left( {\begin{array}{*{20}{c}} 0 \\ { - {A^T}} \end{array}} \right), a = \left( {\begin{array}{*{20}{c}} b \\ { - f} \end{array}} \right), | (4.1) |
where A \in {R^{m \times n}} is a full row rank matrix with m < n , M is an n \times n symmetric positive semidefinite matrix, b \in {R^m} , and f \in {R^n} . In our algorithm we set \gamma = 0.01, \sigma = 0.5, \delta = 0.8, \theta = {10^{ - 4}} . The initial points are chosen as {x^0} = \left( {1, \cdots , 1} \right), {s^0} = \left( {1, \cdots , 1} \right), {y^0} = \left( {0, \cdots , 0} \right).
In the course of experiments, we generate LWCP (1.3) by the following two ways.
(ⅰ) We take A = randn\left( {m, n} \right) with rank\left( A \right) = m , and M = \frac{{B{B^T}}}{{\left\| {B{B^T}} \right\|}} with B = rand\left( {n, n} \right) . We first generate \hat x = rand\left( {n, 1} \right) and f = rand\left( {n, 1} \right) , and then set \hat b: = A\hat x, \hat s = M\hat x + f, w = \hat x\hat s .
(ⅱ) We choose a = \left( {\begin{array}{*{20}{c}} b \\ { - f} \end{array}} \right) - \xi , where \xi \in {R^{n + m}} is a noise vector. We choose M = diag(v) with v = rand\left( {n, 1} \right) . The matrix A and the vectors b, f are generated in the same way as in (ⅰ). In the experiments, we take \xi = {10^{ - 4}}rand(1, 1)p with p: = {\left( {1, 1, 0, \cdots , 0} \right)^T} \in {R^{n + m}} .
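For reference, the data generation in case (ⅰ) can be sketched in Python/NumPy as follows (our own transcription of the MATLAB-style recipe; we read the third block matrix in (4.1) as R, since the second "P" there is evidently a typo, and all function names are ours):

```python
import numpy as np

def generate_lwcp_i(n, m, seed=0):
    """Random LWCP instance in the style of Section 4, case (i).
    Returns (P, Q, R, a, w) following (4.1)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))            # full row rank w.p. 1 (m < n)
    B = rng.random((n, n))
    M = B @ B.T / np.linalg.norm(B @ B.T)      # symmetric PSD, normalized
    x_hat = rng.random(n)
    f = rng.random(n)
    b = A @ x_hat
    s_hat = M @ x_hat + f
    w = x_hat * s_hat                          # componentwise product
    P = np.vstack([A, M])
    Q = np.vstack([np.zeros((m, n)), np.eye(n)])
    R = np.vstack([np.zeros((m, m)), -A.T])
    a = np.concatenate([b, -f])                # a = (b; -f) as in (4.1)
    return P, Q, R, a, w
```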
First, in order to observe the local convergence behavior of Algorithm 3.1, we conducted two sets of random tests on LWCP (ⅰ) with n = 1000, m = 500 . Figure 1 gives the convergence curve of \left\| {H\left( {{z^k}} \right)} \right\| at the k -th iteration. We can clearly see that Algorithm 3.1 converges fast locally, at least superlinearly.
Next, we conducted comparative experiments with the method of [13]. The parameters in the WCP function \phi _{\tau , q}^w are taken as \tau = 0.5, q = 3 and \tau = 1, q = 3 (Tables 1 and 2), and as \tau = 0.3, q = 3 and \tau = 0.8, q = 3 (Figures 2 and 3). In the tables, AIT, ACPU, and ANH denote, respectively, the average number of iterations, the average CPU time (in seconds), and the average value of \left\| {H\left( {{z^k}} \right)} \right\| at termination over 10 random runs. LM denotes our method and TLM the method of [13].
Table 1. Numerical results for LWCP (ⅰ) with τ = 0.5, q = 3.

| m | n | AIT (LM) | ACPU (LM) | ANH (LM) | AIT (TLM) | ACPU (TLM) | ANH (TLM) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 200 | 500 | 7.9 | 0.6960 | 5.0131×10-12 | 8.0 | 0.7015 | 6.0903×10-13 |
| 200 | 500 | 7.8 | 0.6974 | 5.5794×10-12 | 7.7 | 0.6906 | 1.2630×10-11 |
| 200 | 500 | 7.6 | 0.6703 | 8.5289×10-12 | 7.9 | 0.7025 | 3.5098×10-13 |
| 400 | 800 | 8.1 | 2.4705 | 5.5548×10-13 | 8.8 | 3.1241 | 5.2707×10-13 |
| 400 | 800 | 8.2 | 2.5097 | 7.6171×10-13 | 8.9 | 2.6100 | 2.2961×10-13 |
| 400 | 800 | 8.2 | 2.6300 | 2.4813×10-13 | 8.1 | 2.4039 | 3.6750×10-12 |
| 500 | 1000 | 8.1 | 4.4569 | 1.2136×10-12 | 8.1 | 4.3590 | 2.2894×10-12 |
| 500 | 1000 | 8.4 | 4.7697 | 3.1192×10-13 | 8.4 | 4.4993 | 4.5153×10-12 |
| 500 | 1000 | 8.2 | 4.8820 | 2.7039×10-12 | 8.4 | 4.4767 | 9.3738×10-13 |
| 600 | 1500 | 7.9 | 11.2160 | 9.7961×10-12 | 8.0 | 11.6639 | 1.0240×10-12 |
| 600 | 1500 | 8.0 | 11.4230 | 1.0008×10-13 | 8.0 | 11.6522 | 9.3154×10-13 |
| 600 | 1500 | 8.0 | 11.5575 | 1.0238×10-12 | 7.9 | 11.4497 | 1.0559×10-11 |
| 1000 | 1500 | 9.6 | 18.4934 | 5.6351×10-12 | 9.5 | 18.6699 | 1.6880×10-11 |
| 1000 | 1500 | 9.9 | 19.0396 | 5.2759×10-12 | 11.1 | 21.6384 | 5.9206×10-11 |
| 1000 | 1500 | 8.4 | 16.3751 | 1.2735×10-11 | 10.9 | 21.3177 | 7.6313×10-12 |
Table 2. Numerical results for LWCP (ⅰ) with τ = 1, q = 3.

| m | n | AIT (LM) | ACPU (LM) | ANH (LM) | AIT (TLM) | ACPU (TLM) | ANH (TLM) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 200 | 500 | 7.5 | 0.6642 | 1.5973×10-11 | 8.0 | 0.7155 | 9.7275×10-12 |
| 200 | 500 | 7.4 | 0.6675 | 8.8485×10-12 | 8.4 | 0.7429 | 8.7482×10-13 |
| 200 | 500 | 7.6 | 0.6661 | 2.5321×10-12 | 8.1 | 0.7167 | 6.7422×10-13 |
| 400 | 800 | 8.0 | 2.3642 | 2.4919×10-13 | 8.8 | 2.6212 | 4.6000×10-12 |
| 400 | 800 | 8.0 | 2.3791 | 4.5892×10-13 | 8.2 | 2.5740 | 2.4604×10-13 |
| 400 | 800 | 8.2 | 2.4293 | 9.2368×10-13 | 9.0 | 2.6885 | 1.3216×10-12 |
| 500 | 1000 | 8.0 | 4.3592 | 6.2691×10-13 | 8.3 | 4.5328 | 3.5736×10-12 |
| 500 | 1000 | 8.1 | 4.3174 | 3.1221×10-13 | 8.2 | 4.3540 | 3.2290×10-13 |
| 500 | 1000 | 7.9 | 4.2089 | 9.7440×10-12 | 9.9 | 5.3469 | 6.5691×10-12 |
| 600 | 1500 | 7.9 | 11.3807 | 9.2057×10-12 | 8.9 | 12.9567 | 9.7825×10-13 |
| 600 | 1500 | 7.8 | 11.2766 | 1.3435×10-11 | 8.0 | 11.6116 | 9.9437×10-13 |
| 600 | 1500 | 8.0 | 11.5494 | 9.8875×10-13 | 9.2 | 13.3792 | 1.0247×10-12 |
| 1000 | 1500 | 9.3 | 17.6422 | 7.8120×10-12 | 8.9 | 17.2609 | 3.4824×10-12 |
| 1000 | 1500 | 8.7 | 16.3247 | 4.9019×10-12 | 8.8 | 17.3407 | 4.7999×10-11 |
| 1000 | 1500 | 9.3 | 18.1968 | 7.8112×10-12 | 9.4 | 18.4024 | 1.3738×10-11 |
Tables 1 and 2 show the numerical results for LWCP (ⅰ), with the parameters taken as \tau = 0.5, q = 3 and \tau = 1, q = 3 , respectively. The tables show that, regardless of the value of \tau , Algorithm 3.1 requires less iteration time or achieves higher accuracy than Algorithm 1 in [13].
Figures 2 and 3 show the numerical results for LWCP (ⅱ), with the parameters taken as \tau = 0.3, q = 3, m = \frac{n}{2} and \tau = 0.8, q = 3, m = \frac{n}{2} , respectively. As the dimension increases, the AIT of Algorithm 3.1 fluctuates slightly but is always smaller than the AIT in [13], while the ACPU increases steadily and also remains smaller than the ACPU in [13].
When \tau = 0.6, q = 3, m = \frac{n}{2} , Figure 4 shows line graphs comparing the ACPU and AIT of Algorithm 3.1 and the method of [13] on LWCP (ⅰ) and LWCP (ⅱ). After adding noise to LWCP (ⅰ), the solution speed of both algorithms decreases, but our algorithm retains an advantage.
In general, the randomly generated problems converge within a few iterations, and the number of iterations varies only slightly with the problem dimension. Our algorithm is effective for the LWCP (1.3), since each problem is solved in a very short time with a small number of iterations. The numerical results show the feasibility and effectiveness of Algorithm 3.1.
Based on the idea of the L-M method and with the help of a new class of WCP functions \phi _{\tau , q}^{c}(a, b) , we propose Algorithm 3.1 for solving the LWCP (1.3). Under certain conditions, the algorithm obtains an approximate solution of the LWCP (1.3). Numerical experiments show the feasibility and effectiveness of Algorithm 3.1.
The authors declare no conflicts of interest.
[1] Z. S. Yu, Y. Qin, A cosh-based smoothing Newton method for P0 nonlinear complementarity problem, Nonlinear Anal. Real., 12 (2011), 875–884. https://doi.org/10.1016/j.nonrwa.2010.08.012
[2] Z. S. Yu, Z. L. Wang, K. Su, A double nonmonotone Quasi-Newton method for nonlinear complementarity problem based on piecewise NCP functions, Math. Probl. Eng., 2020 (2020). https://doi.org/10.1155/2020/6642725
[3] K. Ueda, N. Yamashita, Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem, J. Optim. Theory Appl., 152 (2012), 450–467. https://doi.org/10.1007/s10957-011-9907-2
[4] J. L. Zhang, X. S. Zhang, A smoothing Levenberg-Marquardt method for NCP, Appl. Math. Comput., 178 (2006), 212–228. https://doi.org/10.1016/j.amc.2005.11.036
[5] J. H. Alcantara, J. S. Chen, A new class of neural networks for NCPs using smooth perturbations of the natural residual function, J. Comput. Appl. Math., 407 (2022), 114092. https://doi.org/10.1016/j.cam.2022.114092
[6] F. A. Potra, Weighted complementarity problems–a new paradigm for computing equilibria, SIAM J. Optim., 22 (2012), 1634–1654. https://doi.org/10.1137/110837310
[7] Y. Y. Ye, A path to the Arrow-Debreu competitive market equilibrium, Math. Program., 111 (2008), 315–348. https://doi.org/10.1007/s10107-006-0065-5
[8] S. Asadi, Z. Darvay, G. Lesaja, N. Mahdavi-Amiri, F. A. Potra, A full-Newton step interior-point method for monotone weighted linear complementarity problems, J. Optim. Theory Appl., 186 (2020), 864–878. https://doi.org/10.1007/s10957-020-01728-4
[9] X. N. Chi, G. Q. Wang, A full-Newton step infeasible interior-point method for the special weighted linear complementarity problem, J. Optim. Theory Appl., 190 (2021), 108–129. https://doi.org/10.1007/s10957-021-01873-4
[10] J. Y. Tang, H. C. Zhang, A nonmonotone smoothing Newton algorithm for weighted complementarity problem, J. Optim. Theory Appl., 189 (2021), 679–715. https://doi.org/10.1007/s10957-021-01839-6
[11] Z. Y. Liu, J. Y. Tang, A new smoothing-type algorithm for nonlinear weighted complementarity problem, J. Appl. Math. Comput., 64 (2020), 215–226. https://doi.org/10.1007/s12190-020-01352-5
[12] J. Zhang, A smoothing Newton algorithm for weighted linear complementarity problem, Optim. Lett., 10 (2016), 499–509. https://doi.org/10.1007/s11590-015-0877-4
[13] J. Y. Tang, J. C. Zhou, Quadratic convergence analysis of a nonmonotone Levenberg-Marquardt type method for the weighted nonlinear complementarity problem, Comput. Optim. Appl., 80 (2021), 213–244. https://doi.org/10.1007/s10589-021-00300-8
[14] X. H. Liu, W. Wu, Coerciveness of some merit functions over symmetric cones, J. Ind. Manag. Optim., 5 (2009), 603–613. https://doi.org/10.3934/jimo.2009.5.603
[15] Z. H. Huang, J. Y. Han, D. C. Xu, L. P. Zhang, The non-interior continuation methods for solving the P0 function nonlinear complementarity problem, Sci. China Ser. A Math., 44 (2001), 1107–1114. https://doi.org/10.1007/BF02877427
[16] P. Jin, C. Ling, H. F. Shen, A smoothing Levenberg-Marquardt algorithm for semi-infinite programming, Comput. Optim. Appl., 60 (2015), 675–695. https://doi.org/10.1007/s10589-014-9698-0
[17] J. L. Zhang, J. Chen, A smoothing Levenberg-Marquardt type method for LCP, J. Comput. Math., 22 (2004), 735–752.
[18] W. A. Liu, C. Y. Wang, A smoothing Levenberg-Marquardt method for generalized semi-infinite programming, Comput. Appl. Math., 32 (2013), 89–105. https://doi.org/10.1007/s40314-013-0013-y
| m | n | LM AIT | LM ACPU | LM ANH | TLM AIT | TLM ACPU | TLM ANH |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 200 | 500 | 7.9 | 0.6960 | 5.0131×10⁻¹² | 8.0 | 0.7015 | 6.0903×10⁻¹³ |
| | | 7.8 | 0.6974 | 5.5794×10⁻¹² | 7.7 | 0.6906 | 1.2630×10⁻¹¹ |
| | | 7.6 | 0.6703 | 8.5289×10⁻¹² | 7.9 | 0.7025 | 3.5098×10⁻¹³ |
| 400 | 800 | 8.1 | 2.4705 | 5.5548×10⁻¹³ | 8.8 | 3.1241 | 5.2707×10⁻¹³ |
| | | 8.2 | 2.5097 | 7.6171×10⁻¹³ | 8.9 | 2.6100 | 2.2961×10⁻¹³ |
| | | 8.2 | 2.6300 | 2.4813×10⁻¹³ | 8.1 | 2.4039 | 3.6750×10⁻¹² |
| 500 | 1000 | 8.1 | 4.4569 | 1.2136×10⁻¹² | 8.1 | 4.3590 | 2.2894×10⁻¹² |
| | | 8.4 | 4.7697 | 3.1192×10⁻¹³ | 8.4 | 4.4993 | 4.5153×10⁻¹² |
| | | 8.2 | 4.8820 | 2.7039×10⁻¹² | 8.4 | 4.4767 | 9.3738×10⁻¹³ |
| 600 | 1500 | 7.9 | 11.2160 | 9.7961×10⁻¹² | 8.0 | 11.6639 | 1.0240×10⁻¹² |
| | | 8.0 | 11.4230 | 1.0008×10⁻¹³ | 8.0 | 11.6522 | 9.3154×10⁻¹³ |
| | | 8.0 | 11.5575 | 1.0238×10⁻¹² | 7.9 | 11.4497 | 1.0559×10⁻¹¹ |
| 1000 | 1500 | 9.6 | 18.4934 | 5.6351×10⁻¹² | 9.5 | 18.6699 | 1.6880×10⁻¹¹ |
| | | 9.9 | 19.0396 | 5.2759×10⁻¹² | 11.1 | 21.6384 | 5.9206×10⁻¹¹ |
| | | 8.4 | 16.3751 | 1.2735×10⁻¹¹ | 10.9 | 21.3177 | 7.6313×10⁻¹² |
| m | n | LM AIT | LM ACPU | LM ANH | TLM AIT | TLM ACPU | TLM ANH |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 200 | 500 | 7.5 | 0.6642 | 1.5973×10⁻¹¹ | 8.0 | 0.7155 | 9.7275×10⁻¹² |
| | | 7.4 | 0.6675 | 8.8485×10⁻¹² | 8.4 | 0.7429 | 8.7482×10⁻¹³ |
| | | 7.6 | 0.6661 | 2.5321×10⁻¹² | 8.1 | 0.7167 | 6.7422×10⁻¹³ |
| 400 | 800 | 8.0 | 2.3642 | 2.4919×10⁻¹³ | 8.8 | 2.6212 | 4.6000×10⁻¹² |
| | | 8.0 | 2.3791 | 4.5892×10⁻¹³ | 8.2 | 2.5740 | 2.4604×10⁻¹³ |
| | | 8.2 | 2.4293 | 9.2368×10⁻¹³ | 9.0 | 2.6885 | 1.3216×10⁻¹² |
| 500 | 1000 | 8.0 | 4.3592 | 6.2691×10⁻¹³ | 8.3 | 4.5328 | 3.5736×10⁻¹² |
| | | 8.1 | 4.3174 | 3.1221×10⁻¹³ | 8.2 | 4.3540 | 3.2290×10⁻¹³ |
| | | 7.9 | 4.2089 | 9.7440×10⁻¹² | 9.9 | 5.3469 | 6.5691×10⁻¹² |
| 600 | 1500 | 7.9 | 11.3807 | 9.2057×10⁻¹² | 8.9 | 12.9567 | 9.7825×10⁻¹³ |
| | | 7.8 | 11.2766 | 1.3435×10⁻¹¹ | 8.0 | 11.6116 | 9.9437×10⁻¹³ |
| | | 8.0 | 11.5494 | 9.8875×10⁻¹³ | 9.2 | 13.3792 | 1.0247×10⁻¹² |
| 1000 | 1500 | 9.3 | 17.6422 | 7.8120×10⁻¹² | 8.9 | 17.2609 | 3.4824×10⁻¹² |
| | | 8.7 | 16.3247 | 4.9019×10⁻¹² | 8.8 | 17.3407 | 4.7999×10⁻¹¹ |
| | | 9.3 | 18.1968 | 7.8112×10⁻¹² | 9.4 | 18.4024 | 1.3738×10⁻¹¹ |