Research article

A smooth Levenberg-Marquardt method without nonsingularity condition for wLCP

  • Received: 12 September 2021; Revised: 16 February 2022; Accepted: 22 February 2022; Published: 07 March 2022
  • MSC : 65K05, 90C33

  • In this paper we consider the weighted linear complementarity problem (wLCP). By using a smooth weighted complementarity function, we reformulate the wLCP as a smooth nonlinear equation and propose a Levenberg-Marquardt method to solve it. The proposed method differs from current Levenberg-Marquardt type methods in that it adopts a simple derivative-free line search technique. It is shown that the proposed method is well-defined and globally convergent without requiring the wLCP to be monotone. Moreover, the method has a local sub-quadratic convergence rate under the local error bound condition, which is weaker than the nonsingularity condition. Some numerical results are reported.

Citation: Xiaorui He, Jingyong Tang. A smooth Levenberg-Marquardt method without nonsingularity condition for wLCP. AIMS Mathematics, 2022, 7(5): 8914-8932. doi: 10.3934/math.2022497




This paper considers the weighted linear complementarity problem (wLCP) introduced by Potra [14], which consists in finding vectors $x\in\mathbb{R}^n$, $s\in\mathbb{R}^n$, $y\in\mathbb{R}^m$ such that

$$(\mathrm{wLCP})\qquad x,s\ge 0,\quad Px+Qs+Ry=d,\quad xs=w. \tag{1.1}$$

Here $P\in\mathbb{R}^{(n+m)\times n}$, $Q\in\mathbb{R}^{(n+m)\times n}$, $R\in\mathbb{R}^{(n+m)\times m}$ are given matrices, $d\in\mathbb{R}^{n+m}$ is a given vector, $w\ge 0$ is a given weight vector (the data of the problem) and $xs$ is the componentwise product of the vectors $x$ and $s$. The matrix $R$ is assumed to have full column rank. If the weight vector $w$ is chosen to be the zero vector, then the wLCP reduces to the general LCP studied in [21]. The wLCP is called monotone if

$$P\Delta x+Q\Delta s+R\Delta y=0 \quad\text{implies}\quad \Delta x^T\Delta s\ge 0.$$

The wLCP can be used to model a large class of problems from science and engineering. For example, Fisher's competitive market equilibrium model can be formulated as a wLCP that can be efficiently solved by interior-point methods [14]. Moreover, the Quadratic Programming and Weighted Centering problem, which generalizes the Linear Programming and Weighted Centering problem proposed by Anstreicher [1], can be formulated as a monotone wLCP [14]. Recently, Chi et al. [3] and Gowda [10] studied the wLCP on Euclidean Jordan algebras.

Since Potra introduced the notion of the wLCP [14], many numerical algorithms have been proposed for solving this problem. One class of effective algorithms is interior-point methods. For example, Potra [14] proposed two interior-point methods for solving general monotone wLCPs. In [15], Potra proposed a corrector-predictor interior-point method for solving the sufficient wLCP. Asadi et al. [2] introduced a full-Newton step interior-point method for solving the monotone wLCP. Chi et al. [4] proposed a full-modified-Newton infeasible interior-point method for solving a special wLCP. It is worth pointing out that, to establish the computational complexity of interior-point methods, one usually requires that the wLCP is monotone (e.g., [2,14]). Another class of effective algorithms for solving the wLCP is smoothing Newton-type algorithms. These algorithms use a smoothing function to reformulate the wLCP as a system of smooth nonlinear equations and then solve it by Newton's method. For example, Zhang [22] presented a smoothing Newton algorithm for solving the wLCP. Tang [16] proposed a variant nonmonotone smoothing algorithm for solving the wLCP with improved numerical results. Tang and Zhang [17] proposed a nonmonotone smoothing Newton algorithm for solving general wCPs. Notice that, to ensure that the Newton step is well-defined, the smoothing Newton-type algorithms in [16,17,22] also require the wLCP to be monotone. Moreover, to obtain a local fast convergence rate, the smoothing Newton-type algorithms in [16,22] need the nonsingularity condition.

Recently, based on a nonsmooth weighted complementarity function, Tang and Zhou [19] reformulated the wLCP as a nonsmooth nonlinear equation and proposed a damped Gauss-Newton method to solve it. Their method can be used to solve nonmonotone wLCPs and has local quadratic convergence under the local error bound condition, which is weaker than the nonsingularity condition. Motivated by their work, in this paper we introduce a weighted complementarity function which is smooth everywhere. By using this function, we reformulate the wLCP as a smooth nonlinear equation and propose a Levenberg-Marquardt method to solve it. Different from current Levenberg-Marquardt type methods (e.g., [7,8,9]), the proposed method is designed based on a simple derivative-free line search technique. Compared with the smoothing Newton-type algorithms in [16,17,22], the proposed method has two advantages. (i) It is well-defined and globally convergent without any additional condition; hence it can be used to solve nonmonotone wLCPs. (ii) It has a local sub-quadratic convergence rate under the local error bound condition. It is worth pointing out that, to obtain the local fast convergence rate, classical Levenberg-Marquardt methods (e.g., [7,8,9]) also need the Jacobian to be Lipschitz continuous. In this paper we show that this condition holds for our method (see Lemma 4.1 below). We also report some numerical results which indicate that our method is more effective for solving monotone and nonmonotone wLCPs than the damped Gauss-Newton method studied in [19].

    This paper is organized as follows. In Section 2, we reformulate the wLCP as a smooth nonlinear equation and propose a Levenberg-Marquardt method to solve it. In Section 3, we give its global convergence. In Section 4, we analyze its local sub-quadratic convergence under the local error bound condition. Numerical results are reported in Section 5. Some conclusions are given in Section 6.

Throughout this paper, $\mathbb{R}^n$ denotes the set of all $n$-dimensional real vectors. All vectors are column vectors and, for simplicity, the column vector $(u_1^T,\dots,u_n^T)^T$ is written as $(u_1,\dots,u_n)$, where $u_i\in\mathbb{R}^{n_i}$. $\|\cdot\|$ denotes the 2-norm. For any vector $x\in\mathbb{R}^n$, we denote $x$ by $\mathrm{vec}(x_i)$ and the diagonal matrix whose $i$th diagonal element is $x_i$ by $\mathrm{diag}(x_i)$. For a given set $S\subseteq\mathbb{R}^n$ and any $u\in\mathbb{R}^n$, $\mathrm{dist}(u,S)=\inf_{v\in S}\{\|u-v\|\}$. For any $\alpha,\beta>0$, $\alpha=O(\beta)$ (respectively, $\alpha=o(\beta)$) means that $\limsup_{\beta\to 0}\frac{\alpha}{\beta}<\infty$ (respectively, $\limsup_{\beta\to 0}\frac{\alpha}{\beta}=0$).

The weighted complementarity function plays an important role in designing Newton-type methods for solving the wLCP. For a fixed $c\ge 0$, a function $\phi_c:\mathbb{R}^2\to\mathbb{R}$ is called a weighted complementarity function if it satisfies

$$\phi_c(a,b)=0 \iff a\ge 0,\ b\ge 0,\ ab=c.$$

One popular weighted complementarity function is

$$\phi_c(a,b):=a+b-\sqrt{a^2+b^2+2c},\qquad (a,b)\in\mathbb{R}^2.$$

When $c=0$, $\phi_c(a,b)$ is the well-known Fischer-Burmeister function for nonlinear complementarity problems. Obviously, $\phi_c(a,b)$ is not continuously differentiable (smooth) everywhere. By using $\phi_c(a,b)$, Tang and Zhou [19] reformulated the wLCP as the following nonsmooth nonlinear equation:

$$\Phi(x,s,y):=\begin{pmatrix} Px+Qs+Ry-d\\ \phi_{w_1}(x_1,s_1)\\ \vdots\\ \phi_{w_n}(x_n,s_n)\end{pmatrix}=0,$$

where $w=(w_1,\dots,w_n)^T\ge 0$ is the weight vector given in the wLCP. Tang and Zhou [19] presented a damped Gauss-Newton method to solve $\Phi(x,s,y)=0$ and established its local quadratic convergence under the local error bound condition.
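As a quick numerical illustration (a sketch in Python/NumPy, not part of the paper's MATLAB code), the defining property of $\phi_c$ can be checked directly:

```python
import numpy as np

def phi(a, b, c):
    """phi_c(a, b) = a + b - sqrt(a^2 + b^2 + 2c)."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * c)

# phi_c(a, b) = 0 exactly when a >= 0, b >= 0 and ab = c:
print(phi(2.0, 3.0, 6.0))    # a, b >= 0 and ab = c = 6  -> 0.0
print(phi(-2.0, -3.0, 6.0))  # ab = c but a, b < 0       -> -10.0
print(phi(1.0, 1.0, 0.0))    # c = 0 (Fischer-Burmeister), ab != 0 -> nonzero
```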

In this paper, for a fixed $c\ge 0$, we consider the following nonnegative function:

$$\psi_c(a,b):=\frac{1}{2}(\phi_c(a,b))^2=\frac{1}{2}\left(a+b-\sqrt{a^2+b^2+2c}\right)^2,\qquad (a,b)\in\mathbb{R}^2. \tag{2.1}$$

    The following lemma shows that ψc is a weighted complementarity function and it is smooth everywhere.

Lemma 2.1. Let $\psi_c$ be defined by (2.1). Then the following results hold.

(i) $\psi_c$ is a weighted complementarity function, i.e.,

$$\psi_c(a,b)=0 \iff a\ge 0,\ b\ge 0,\ ab=c.$$

(ii) $\psi_c$ is continuously differentiable at any $(a,b)\in\mathbb{R}^2$ and

$$\nabla\psi_c(a,b)=\begin{bmatrix}\partial_a\psi_c(a,b)\\ \partial_b\psi_c(a,b)\end{bmatrix},$$

where $\partial_a\psi_c(0,0)=\partial_b\psi_c(0,0)=-\sqrt{2c}$ and for any $(a,b)\ne(0,0)$,

$$\partial_a\psi_c(a,b)=\left(1-\frac{a}{\sqrt{a^2+b^2+2c}}\right)\phi_c(a,b),\qquad \partial_b\psi_c(a,b)=\left(1-\frac{b}{\sqrt{a^2+b^2+2c}}\right)\phi_c(a,b).$$

(iii) For any $(a,b)\in\mathbb{R}^2$, one has

$$\partial_a\psi_c(a,b)\,\partial_b\psi_c(a,b)\ge 0.$$

(iv) For any $(a,b)\in\mathbb{R}^2$, one has

$$\psi_c(a,b)=0 \iff \partial_a\psi_c(a,b)=0 \iff \partial_b\psi_c(a,b)=0.$$

Proof. The result (i) obviously holds. By a direct computation, we can obtain the result (ii). Moreover, by the result (ii), we have $\partial_a\psi_c(0,0)\,\partial_b\psi_c(0,0)=2c\ge 0$, and if $(a,b)\ne(0,0)$, then we also have

$$\partial_a\psi_c(a,b)\,\partial_b\psi_c(a,b)=\left(1-\frac{a}{\sqrt{a^2+b^2+2c}}\right)\left(1-\frac{b}{\sqrt{a^2+b^2+2c}}\right)(\phi_c(a,b))^2\ge 0,$$

where the inequality holds because

$$0\le 1-\frac{a}{\sqrt{a^2+b^2+2c}}\le 2,\qquad 0\le 1-\frac{b}{\sqrt{a^2+b^2+2c}}\le 2.$$

This proves the result (iii). Now we prove the result (iv). First, we prove the equivalence

$$\psi_c(a,b)=0 \iff \partial_a\psi_c(a,b)=0.$$

Since "$\Longrightarrow$" obviously holds, we only need to prove "$\Longleftarrow$". By the result (ii), $\partial_a\psi_c(0,0)=0$ gives $c=0$ and so $\psi_c(0,0)=0$. For $(a,b)\ne(0,0)$, $\partial_a\psi_c(a,b)=0$ gives $\phi_c(a,b)=0$ or $1-\frac{a}{\sqrt{a^2+b^2+2c}}=0$. If $\phi_c(a,b)=0$, then $\psi_c(a,b)=0$ and we are done. If $1-\frac{a}{\sqrt{a^2+b^2+2c}}=0$, then $a=\sqrt{a^2+b^2+2c}$, which yields $b=c=0$ and $a=|a|$. This implies that $\phi_c(a,b)=0$ and so $\psi_c(a,b)=0$. In the same way, we can prove $\psi_c(a,b)=0 \iff \partial_b\psi_c(a,b)=0$. The proof is completed.
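The smoothness claims in Lemma 2.1 can also be verified numerically. The following sketch (assuming NumPy; not part of the paper's code) compares the closed-form gradient of $\psi_c$ with central finite differences and checks property (iii):

```python
import numpy as np

def phi(a, b, c):
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * c)

def psi(a, b, c):
    return 0.5 * phi(a, b, c) ** 2

def grad_psi(a, b, c):
    # Closed-form gradient from Lemma 2.1 (ii); for c > 0 the same
    # formula applies at the origin and gives (-sqrt(2c), -sqrt(2c)).
    r = np.sqrt(a**2 + b**2 + 2.0 * c)
    f = phi(a, b, c)
    return np.array([(1.0 - a / r) * f, (1.0 - b / r) * f])

c, h = 1.0, 1e-6
for (a, b) in [(0.7, -1.3), (0.0, 0.0), (2.0, 0.5)]:
    g = grad_psi(a, b, c)
    g_fd = np.array([(psi(a + h, b, c) - psi(a - h, b, c)) / (2 * h),
                     (psi(a, b + h, c) - psi(a, b - h, c)) / (2 * h)])
    assert np.allclose(g, g_fd, atol=1e-5)   # (ii): psi_c is smooth
    assert g[0] * g[1] >= 0                  # (iii): partials never differ in sign
```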

Figures 1 and 2 illustrate the graphs of $\phi_c$ and $\psi_c$ with $c=1$, which show that the two functions are very different.

Figure 1. $z=\phi_c(a,b)$.
Figure 2. $z=\psi_c(a,b)$.

By using the weighted complementarity function $\psi_c$ given in (2.1), we now reformulate the wLCP as the following smooth nonlinear equation:

$$H(x,s,y):=\begin{pmatrix} Px+Qs+Ry-d\\ \psi_{w_1}(x_1,s_1)\\ \vdots\\ \psi_{w_n}(x_n,s_n)\end{pmatrix}=0. \tag{2.2}$$

    By Lemma 2.1, we have the following result.

Lemma 2.2. (i) $H(x,s,y)=0$ if and only if $(x,s,y)$ is a solution of the wLCP.

(ii) $H(x,s,y)$ is continuously differentiable at any $(x,s,y)\in\mathbb{R}^{2n+m}$ with the Jacobian

$$H'(x,s,y)=\begin{pmatrix} P & Q & R\\ \mathrm{diag}(\partial_{x_i}\psi_{w_i}(x_i,s_i)) & \mathrm{diag}(\partial_{s_i}\psi_{w_i}(x_i,s_i)) & 0\end{pmatrix}, \tag{2.3}$$

in which $\partial_{x_i}\psi_{w_i}(\cdot,\cdot)$ and $\partial_{s_i}\psi_{w_i}(\cdot,\cdot)$ are given in Lemma 2.1 (ii).

Let $z:=(x,s,y)$ and $H(z)$ be given in (2.2). We define the merit function $\Psi:\mathbb{R}^{2n+m}\to\mathbb{R}$ as

$$\Psi(z):=\frac{1}{2}\|H(z)\|^2. \tag{2.4}$$

    Then, by Lemma 2.2, we have the following result.

Lemma 2.3. (i) $\Psi(z)=0$ if and only if $z=(x,s,y)$ is a solution of the wLCP.

(ii) $\Psi(z)$ is continuously differentiable at any $z\in\mathbb{R}^{2n+m}$ with its gradient $\nabla\Psi(z)=H'(z)^T H(z)$.

    Now we describe our method as follows.

Algorithm 2.1. (A smooth Levenberg-Marquardt method)

Step 0: Choose parameters $\theta,\rho,\gamma\in(0,1)$ and an initial point $z_0:=(x_0,s_0,y_0)$. Set $k:=0$.

Step 1: If $\Psi(z_k)=0$, then stop.

Step 2: Set $\mu_k:=\theta\|H(z_k)\|^{\delta}$, where $\delta\in[1,2]$ is a constant. Compute the search direction $d_k\in\mathbb{R}^{2n+m}$ by solving

$$[H'(z_k)^T H'(z_k)+\mu_k I]d_k=-\nabla\Psi(z_k). \tag{2.5}$$

Step 3: Find a step-size $\alpha_k:=\rho^{m_k}$, where $m_k$ is the smallest nonnegative integer $m$ satisfying

$$\|H(z_k+\rho^m d_k)\|\le\|H(z_k)\|-\gamma\|\rho^m d_k\|^2. \tag{2.6}$$

Step 4: Set $z_{k+1}:=z_k+\alpha_k d_k$. Set $k:=k+1$ and go to Step 1.

The algorithmic framework of Algorithm 2.1 is much simpler than that of many Levenberg-Marquardt type methods (e.g., [7,8,9,13,18,23]). The main feature of Algorithm 2.1 is that it adopts a simple derivative-free line search in Step 3, which does not involve the gradient $\nabla\Psi(z_k)$.
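A minimal sketch of Algorithm 2.1 (in Python/NumPy rather than the authors' MATLAB implementation) may clarify the interplay of Steps 2 and 3. The tiny one-dimensional instance below, with constraint $s=2x+1$ and weight $w=3$, is a hypothetical example, not from the paper:

```python
import numpy as np

def lm_solve(H, J, z0, theta=1e-4, rho=0.8, gamma=1e-4, delta=1.0,
             tol=1e-12, max_iter=100):
    """Sketch of Algorithm 2.1: smooth Levenberg-Marquardt method
    with the derivative-free line search (2.6)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        Hz = H(z)
        if 0.5 * Hz @ Hz <= tol:                        # Step 1
            break
        Jz = J(z)
        mu = theta * np.linalg.norm(Hz) ** delta        # Step 2
        grad = Jz.T @ Hz                                # gradient of Psi
        d = np.linalg.solve(Jz.T @ Jz + mu * np.eye(z.size), -grad)  # (2.5)
        alpha = 1.0                                     # Step 3: backtrack on (2.6)
        while (np.linalg.norm(H(z + alpha * d))
               > np.linalg.norm(Hz) - gamma * np.linalg.norm(alpha * d) ** 2):
            alpha *= rho
        z = z + alpha * d                               # Step 4
    return z

# Hypothetical tiny instance (n = 1, m = 0): find x, s >= 0 with
# s - 2x = 1 and xs = w; its unique solution is x = 1, s = 3.
w = 3.0

def H(z):
    x, s = z
    return np.array([s - 2.0 * x - 1.0,
                     0.5 * (x + s - np.sqrt(x**2 + s**2 + 2.0 * w)) ** 2])

def J(z):
    x, s = z
    r = np.sqrt(x**2 + s**2 + 2.0 * w)
    f = x + s - r                                       # phi_w(x, s)
    return np.array([[-2.0, 1.0],
                     [(1.0 - x / r) * f, (1.0 - s / r) * f]])

z = lm_solve(H, J, [1.0, 1.0])
print(z)    # close to the solution (1, 3)
```

Note that the row of the Jacobian coming from $\psi_w$ vanishes at the solution, so the regularization $\mu_k I$ in (2.5) is essential for the linear system to stay well-posed.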

Theorem 2.1. Algorithm 2.1 is well-defined.

Proof. For some $k$, if $\Psi(z_k)\ne 0$, then $\|H(z_k)\|\ne 0$ and hence $\mu_k=\theta\|H(z_k)\|^{\delta}>0$. So, $H'(z_k)^T H'(z_k)+\mu_k I$ is positive definite and the search direction $d_k$ in Step 2 is well-defined. Since $\nabla\Psi(z_k)\ne 0$, by (2.5) we have $d_k\ne 0$ and

$$\nabla\Psi(z_k)^T d_k=-d_k^T[H'(z_k)^T H'(z_k)+\mu_k I]d_k<0. \tag{2.7}$$

This implies that $d_k$ is a descent direction of the merit function $\Psi(z)$ at $z_k$. Next, we show that there exists at least one nonnegative integer $m$ satisfying (2.6). On the contrary, we suppose that for any nonnegative integer $m$,

$$\|H(z_k+\rho^m d_k)\|-\|H(z_k)\|>-\gamma\|\rho^m d_k\|^2.$$

Multiplying both sides of the above inequality by $\frac{1}{2}[\|H(z_k+\rho^m d_k)\|+\|H(z_k)\|]$ and dividing by $\rho^m$, we have

$$\frac{\Psi(z_k+\rho^m d_k)-\Psi(z_k)}{\rho^m}>-\frac{1}{2}\gamma\rho^m\|d_k\|^2\big[\|H(z_k+\rho^m d_k)\|+\|H(z_k)\|\big]. \tag{2.8}$$

Since $\Psi$ is continuously differentiable at $z_k$, by letting $m\to\infty$ in (2.8), we have $\nabla\Psi(z_k)^T d_k\ge 0$, which contradicts (2.7). So, we can find a step-size $\alpha_k$ in Step 3 and obtain the $(k+1)$-th iterate $z_{k+1}=z_k+\alpha_k d_k$ in Step 4. Hence, Algorithm 2.1 is well-defined.

In the following, we assume $\Psi(z_k)\ne 0$ for all $k\ge 0$, so that Algorithm 2.1 generates an infinite sequence $\{z_k\}$. To establish the global convergence, we need the following lemma.

Lemma 3.1. Let $\{z_k\}$ be the sequence generated by Algorithm 2.1. Then one has:

(i) $\|d_k\|\le\frac{1}{2\sqrt{\theta}}\|H(z_k)\|^{1-\frac{\delta}{2}}$ for all $k\ge 0$;

(ii) $\lim_{k\to\infty}\alpha_k\|d_k\|=0$.

Proof. For any $k\ge 0$, we suppose that the singular value decomposition of $H'(z_k)$ is

$$H'(z_k)=U_k^T\Sigma_k V_k,$$

where $U_k$ and $V_k$ are orthogonal matrices and $\Sigma_k=\mathrm{diag}(\sigma_1^k,\dots,\sigma_r^k,0,\dots,0)$ with $\sigma_1^k\ge\cdots\ge\sigma_r^k>0$. Then, by (2.5) we have for any $k\ge 0$,

$$\|d_k\|=\left\|[H'(z_k)^T H'(z_k)+\mu_k I]^{-1}H'(z_k)^T H(z_k)\right\|=\left\|V_k^T\mathrm{diag}\left(\frac{\sigma_1^k}{(\sigma_1^k)^2+\mu_k},\dots,\frac{\sigma_r^k}{(\sigma_r^k)^2+\mu_k},0,\dots,0\right)U_k H(z_k)\right\|\le\frac{1}{2\sqrt{\mu_k}}\|H(z_k)\|,$$

where the inequality holds because $\frac{\sigma_i^k}{(\sigma_i^k)^2+\mu_k}\le\frac{1}{2\sqrt{\mu_k}}$ for all $i=1,\dots,r$. This together with $\mu_k=\theta\|H(z_k)\|^{\delta}$ gives the result (i). Moreover, by (2.6) we have

$$\gamma\|\alpha_k d_k\|^2\le\|H(z_k)\|-\|H(z_{k+1})\|. \tag{3.1}$$

Since $\{\|H(z_k)\|\}$ is a monotonically decreasing sequence by (2.6), there exists a constant $H^*\ge 0$ such that $\lim_{k\to\infty}\|H(z_k)\|=H^*$. This together with (3.1) proves the result (ii).
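The key inequality in the proof of Lemma 3.1 (i), $\|d_k\|\le\|H(z_k)\|/(2\sqrt{\mu_k})$, rests on the scalar bound $\sigma/(\sigma^2+\mu)\le 1/(2\sqrt{\mu})$ and can be sanity-checked on random data (an illustrative sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, delta = 1e-4, 1.0
for _ in range(100):
    A = rng.standard_normal((5, 5))   # stands in for H'(z_k)
    h = rng.standard_normal(5)        # stands in for H(z_k)
    mu = theta * np.linalg.norm(h) ** delta
    d = np.linalg.solve(A.T @ A + mu * np.eye(5), -(A.T @ h))
    # sigma/(sigma^2 + mu) <= 1/(2 sqrt(mu)) for every singular value sigma,
    # hence ||d|| <= ||h|| / (2 sqrt(mu)), which is Lemma 3.1 (i).
    assert np.linalg.norm(d) <= np.linalg.norm(h) / (2.0 * np.sqrt(mu)) + 1e-12
```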

Theorem 3.1. Let $z^*$ be any accumulation point of the sequence $\{z_k\}$ generated by Algorithm 2.1. Then $z^*$ is a stationary point of the merit function $\Psi(z)$, i.e., $\nabla\Psi(z^*)=0$. Moreover, if $H'(z^*)$ is nonsingular, then $H(z^*)=0$ and so $z^*=(x^*,s^*,y^*)$ is a solution of the wLCP.

Proof. Without loss of generality, we may assume that $z^*$ is the limit of the subsequence $\{z_k\}_{k\in K}$ where $K\subseteq\{0,1,\dots\}$, i.e., $\lim_{k\in K,\,k\to\infty}z_k=z^*$. Then, by the continuity of $H$ and $H'$,

$$\lim_{k\in K,\,k\to\infty}H(z_k)=H(z^*),\qquad \lim_{k\in K,\,k\to\infty}H'(z_k)=H'(z^*),$$

and consequently

$$\lim_{k\in K,\,k\to\infty}\Psi(z_k)=\Psi(z^*)=\frac{1}{2}\|H(z^*)\|^2,\qquad \lim_{k\in K,\,k\to\infty}\mu_k=\theta\|H(z^*)\|^{\delta},$$
$$\lim_{k\in K,\,k\to\infty}\nabla\Psi(z_k)=\lim_{k\in K,\,k\to\infty}H'(z_k)^T H(z_k)=H'(z^*)^T H(z^*)=\nabla\Psi(z^*).$$

Obviously, $\nabla\Psi(z^*)=0$ when $H(z^*)=0$. Now we assume $\|H(z^*)\|>0$. Since $\{\|H(z_k)\|\}$ is a monotonically decreasing sequence, by Lemma 3.1 (i), we have for all $k\in K$,

$$\|d_k\|\le\frac{1}{2\sqrt{\theta}}\|H(z_k)\|^{1-\frac{\delta}{2}}\le\frac{1}{2\sqrt{\theta}}\|H(z_0)\|^{1-\frac{\delta}{2}}.$$

Thus, $\{d_k\}_{k\in K}$ has a convergent subsequence and we may assume $\lim_{k\in K_1,\,k\to\infty}d_k=d^*$ where $K_1\subseteq K$. In the following, we show that $d^*=0$. In fact, if $d^*\ne 0$, then by Lemma 3.1 (ii), we have $\lim_{k\in K_1,\,k\to\infty}\alpha_k=0$. Moreover, from Step 3 we have for all sufficiently large $k\in K_1$,

$$\|H(z_k+\rho^{-1}\alpha_k d_k)\|-\|H(z_k)\|>-\gamma\|\rho^{-1}\alpha_k d_k\|^2.$$

Multiplying both sides of this inequality by $\frac{1}{2}[\|H(z_k+\rho^{-1}\alpha_k d_k)\|+\|H(z_k)\|]$ and dividing by $\rho^{-1}\alpha_k$, we have for all sufficiently large $k\in K_1$,

$$\frac{\Psi(z_k+\rho^{-1}\alpha_k d_k)-\Psi(z_k)}{\rho^{-1}\alpha_k}>-\frac{1}{2}\gamma\rho^{-1}\alpha_k\|d_k\|^2\big[\|H(z_k+\rho^{-1}\alpha_k d_k)\|+\|H(z_k)\|\big]. \tag{3.2}$$

Since $\Psi$ is continuously differentiable at $z^*$, by letting $k\to\infty$ with $k\in K_1$ in (3.2), we have

$$\nabla\Psi(z^*)^T d^*\ge 0. \tag{3.3}$$

On the other hand, by (2.7) we have

$$\lim_{k\in K_1,\,k\to\infty}\nabla\Psi(z_k)^T d_k=\nabla\Psi(z^*)^T d^*\le 0. \tag{3.4}$$

So, from (3.3) and (3.4) we have $\nabla\Psi(z^*)^T d^*=0$, which together with (2.5) gives

$$(d^*)^T[H'(z^*)^T H'(z^*)+\theta\|H(z^*)\|^{\delta}I]d^*=-\nabla\Psi(z^*)^T d^*=0.$$

Since $\|H(z^*)\|>0$, the matrix $H'(z^*)^T H'(z^*)+\theta\|H(z^*)\|^{\delta}I$ is positive definite. Hence, we have $d^*=0$, which contradicts $d^*\ne 0$. Therefore, $d^*=0$. Then, by letting $k\to\infty$ with $k\in K_1$ in (2.5), we have

$$\nabla\Psi(z^*)=-[H'(z^*)^T H'(z^*)+\theta\|H(z^*)\|^{\delta}I]d^*=0.$$

This proves the first result. The second result follows from $\nabla\Psi(z^*)=H'(z^*)^T H(z^*)=0$ and the nonsingularity of $H'(z^*)$. We complete the proof.

Theorem 3.2. Let $\{z_k\}$ be the sequence generated by Algorithm 2.1. If $\{z_k\}$ has an isolated accumulation point $z^*$, then the whole sequence $\{z_k\}$ converges to $z^*$.

Proof. By Lemma 3.1 (ii), we have $\lim_{k\to\infty}\|z_{k+1}-z_k\|=0$. This together with [6, Proposition 8.3.10] proves the theorem.

At the end of this section, we consider a special wLCP which consists in finding vectors $x\in\mathbb{R}^n$, $s\in\mathbb{R}^n$ such that

$$(P_0\text{ wLCP})\qquad x,s\ge 0,\quad s=Mx+q,\quad xs=w, \tag{3.5}$$

where $q\in\mathbb{R}^n$ and $M\in\mathbb{R}^{n\times n}$ is a $P_0$ matrix, that is, for any $\xi\in\mathbb{R}^n$ with $\xi\ne 0$, there exists an index $i_0\in\{1,\dots,n\}$ such that $\xi_{i_0}\ne 0$ and $\xi_{i_0}(M\xi)_{i_0}\ge 0$.

When $w$ is chosen to be the zero vector, the $P_0$ wLCP (3.5) reduces to the following standard $P_0$ LCP:

$$(P_0\text{ LCP})\qquad x,s\ge 0,\quad s=Mx+q,\quad \langle x,s\rangle=0,$$

which has many applications in economics and engineering and has been extensively studied in the literature (e.g., [11,12,24,25,26]). By using the weighted complementarity function $\psi_c(a,b)$ defined by (2.1), we can reformulate the $P_0$ wLCP as the smooth nonlinear equation

$$H(x,s)=\begin{pmatrix} Mx+q-s\\ \psi_{w_1}(x_1,s_1)\\ \vdots\\ \psi_{w_n}(x_n,s_n)\end{pmatrix}=0 \tag{3.6}$$

    and apply Algorithm 2.1 to solve it. For the P0 wLCP, we have the following global convergence result.

Theorem 3.3. Let $\{(x_k,s_k)\}$ be the sequence generated by Algorithm 2.1 for solving the nonlinear equation (3.6). Then any accumulation point $(x^*,s^*)$ of $\{(x_k,s_k)\}$ is a solution of the $P_0$ wLCP.

Proof. Since the $P_0$ wLCP is a special case of the wLCP, Theorem 3.1 still holds. So, we have $\nabla\Psi(x^*,s^*)=H'(x^*,s^*)^T H(x^*,s^*)=0$. By (3.6), we have

$$H'(x^*,s^*)=\begin{pmatrix} M & -I\\ \mathrm{diag}(\partial_{x_i}\psi_{w_i}(x_i^*,s_i^*)) & \mathrm{diag}(\partial_{s_i}\psi_{w_i}(x_i^*,s_i^*))\end{pmatrix}, \tag{3.7}$$

in which $\partial_{x_i}\psi_{w_i}(\cdot,\cdot)$ and $\partial_{s_i}\psi_{w_i}(\cdot,\cdot)$ are given in Lemma 2.1 (ii). Then, it follows from $H'(x^*,s^*)^T H(x^*,s^*)=0$ that

$$M^T(Mx^*+q-s^*)+\mathrm{vec}\big(\partial_{x_i}\psi_{w_i}(x_i^*,s_i^*)\,\psi_{w_i}(x_i^*,s_i^*)\big)=0, \tag{3.8}$$
$$-(Mx^*+q-s^*)+\mathrm{vec}\big(\partial_{s_i}\psi_{w_i}(x_i^*,s_i^*)\,\psi_{w_i}(x_i^*,s_i^*)\big)=0. \tag{3.9}$$

Now we assume $Mx^*+q-s^*\ne 0$. Since $M$ is a $P_0$ matrix and so is $M^T$, there exists an index $i_0\in\{1,\dots,n\}$ such that $(Mx^*+q-s^*)_{i_0}\ne 0$ and

$$(Mx^*+q-s^*)_{i_0}\big(M^T(Mx^*+q-s^*)\big)_{i_0}\ge 0. \tag{3.10}$$

By (3.8) and (3.9), we have

$$\big(M^T(Mx^*+q-s^*)\big)_{i_0}=-\partial_{x_{i_0}}\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\,\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*), \tag{3.11}$$
$$(Mx^*+q-s^*)_{i_0}=\partial_{s_{i_0}}\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\,\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*), \tag{3.12}$$

which together with (3.10) gives

$$\partial_{x_{i_0}}\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\,\partial_{s_{i_0}}\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\,\big(\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\big)^2\le 0. \tag{3.13}$$

On the other hand, by Lemma 2.1 (iii), we have

$$\partial_{x_{i_0}}\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\,\partial_{s_{i_0}}\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\,\big(\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\big)^2\ge 0. \tag{3.14}$$

From (3.13) and (3.14) it holds that

$$\partial_{x_{i_0}}\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\,\partial_{s_{i_0}}\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\,\big(\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)\big)^2=0.$$

This together with Lemma 2.1 (iv) gives $\psi_{w_{i_0}}(x_{i_0}^*,s_{i_0}^*)=0$. Then, by (3.12) we have $(Mx^*+q-s^*)_{i_0}=0$, which contradicts the choice of the index $i_0$ with $(Mx^*+q-s^*)_{i_0}\ne 0$. Hence, $Mx^*+q-s^*=0$. Furthermore, by (3.8) we have $\partial_{x_i}\psi_{w_i}(x_i^*,s_i^*)\,\psi_{w_i}(x_i^*,s_i^*)=0$, which together with Lemma 2.1 (iv) yields $\psi_{w_i}(x_i^*,s_i^*)=0$ for all $i=1,\dots,n$. Therefore, $H(x^*,s^*)=0$ and $(x^*,s^*)$ is a solution of the $P_0$ wLCP.

Let $H(z)$ be given in (2.2). Denote

$$Z:=\{z\in\mathbb{R}^{2n+m}\,|\,H(z)=0\}.$$

In this section, we assume that the whole sequence $\{z_k\}$ generated by Algorithm 2.1 converges to some point $z^*\in Z$. Then $\lim_{k\to\infty}\|H(z_k)\|=\|H(z^*)\|=0$. Now we make the following assumption.

Assumption 4.1. $H(z)$ provides a local error bound on some neighbourhood of $z^*$, i.e., there exist constants $\xi>0$ and $\epsilon>0$ such that

$$\|H(z)\|\ge\xi\,\mathrm{dist}(z,Z),\qquad \forall z\in N(z^*,\epsilon)=\{z\in\mathbb{R}^{2n+m}\,|\,\|z-z^*\|\le\epsilon\}. \tag{4.1}$$

As is well known, Assumption 4.1 is the local error bound condition, which is weaker than the nonsingularity condition. It is worth pointing out that, to obtain local quadratic convergence, classical Levenberg-Marquardt methods (e.g., [7,8,9]) also need to assume that the Jacobian is Lipschitz continuous on the set $N(z^*,\epsilon)$. In the following, we show that this assumption holds for our method.

Lemma 4.1. The Jacobian $H'(z)$ given in (2.3) is Lipschitz continuous on $\mathbb{R}^{2n+m}$, i.e., there exists a constant $M>0$ such that

$$\|H'(z)-H'(\tilde z)\|\le M\|z-\tilde z\|,\qquad \forall z,\tilde z\in\mathbb{R}^{2n+m}. \tag{4.2}$$

Proof. Obviously, we only need to prove that the gradient $\nabla\psi_c(a,b)$ given in Lemma 2.1 (ii) is Lipschitz continuous on $\mathbb{R}^2$. First, by [5, Lemma 3.1], $\nabla\psi_c(a,b)$ is Lipschitz continuous on $\mathbb{R}^2$ when $c=0$. Now we consider $c>0$. Then $\psi_c(a,b)$ is twice continuously differentiable at any $(a,b)\in\mathbb{R}^2$ with

$$\nabla^2\psi_c(a,b)=\begin{bmatrix}\partial_{aa}^2\psi_c(a,b) & \partial_{ab}^2\psi_c(a,b)\\ \partial_{ba}^2\psi_c(a,b) & \partial_{bb}^2\psi_c(a,b)\end{bmatrix}, \tag{4.3}$$

where

$$\partial_{aa}^2\psi_c(a,b)=-\frac{b^2+2c}{\sqrt{(a^2+b^2+2c)^3}}\,\phi_c(a,b)+\left(1-\frac{a}{\sqrt{a^2+b^2+2c}}\right)^2,$$
$$\partial_{bb}^2\psi_c(a,b)=-\frac{a^2+2c}{\sqrt{(a^2+b^2+2c)^3}}\,\phi_c(a,b)+\left(1-\frac{b}{\sqrt{a^2+b^2+2c}}\right)^2,$$
$$\partial_{ab}^2\psi_c(a,b)=\partial_{ba}^2\psi_c(a,b)=\frac{ab}{\sqrt{(a^2+b^2+2c)^3}}\,\phi_c(a,b)+\left(1-\frac{a}{\sqrt{a^2+b^2+2c}}\right)\left(1-\frac{b}{\sqrt{a^2+b^2+2c}}\right).$$

Since

$$\frac{b^2+2c}{\sqrt{(a^2+b^2+2c)^3}}\le\frac{1}{\sqrt{a^2+b^2+2c}},\qquad \frac{a^2+2c}{\sqrt{(a^2+b^2+2c)^3}}\le\frac{1}{\sqrt{a^2+b^2+2c}},\qquad \frac{|ab|}{\sqrt{(a^2+b^2+2c)^3}}\le\frac{1}{2\sqrt{a^2+b^2+2c}},$$

and noticing that

$$|\phi_c(a,b)|\le|a+b|+\sqrt{a^2+b^2+2c}\le(\sqrt{2}+1)\sqrt{a^2+b^2+2c},$$

we have

$$\left|\frac{b^2+2c}{\sqrt{(a^2+b^2+2c)^3}}\,\phi_c(a,b)\right|\le\sqrt{2}+1,\qquad \left|\frac{a^2+2c}{\sqrt{(a^2+b^2+2c)^3}}\,\phi_c(a,b)\right|\le\sqrt{2}+1,\qquad \left|\frac{ab}{\sqrt{(a^2+b^2+2c)^3}}\,\phi_c(a,b)\right|\le\frac{\sqrt{2}+1}{2}.$$

Moreover, it is clear that

$$0\le 1-\frac{a}{\sqrt{a^2+b^2+2c}}\le 2,\qquad 0\le 1-\frac{b}{\sqrt{a^2+b^2+2c}}\le 2.$$

Thus, we have

$$|\partial_{aa}^2\psi_c(a,b)|\le\sqrt{2}+5,\qquad |\partial_{bb}^2\psi_c(a,b)|\le\sqrt{2}+5,\qquad |\partial_{ab}^2\psi_c(a,b)|=|\partial_{ba}^2\psi_c(a,b)|\le\frac{\sqrt{2}+9}{2}.$$

This implies that there exists a constant $C>0$ independent of $(a,b)$ such that

$$\|\nabla^2\psi_c(a,b)\|\le C,\qquad \forall(a,b)\in\mathbb{R}^2.$$

Then, by the Mean-Value Theorem,

$$\|\nabla\psi_c(a,b)-\nabla\psi_c(\tilde a,\tilde b)\|\le C\|(a,b)-(\tilde a,\tilde b)\|$$

holds for all $(a,b),(\tilde a,\tilde b)\in\mathbb{R}^2$. The proof is completed.
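The entrywise Hessian bounds derived above hold uniformly on $\mathbb{R}^2$. The following sketch (assuming NumPy, with $c=2$ chosen arbitrarily) evaluates the closed-form second derivatives at random points and checks the bounds $\sqrt{2}+5$ and $(\sqrt{2}+9)/2$:

```python
import numpy as np

def hess_psi(a, b, c):
    """Closed-form second derivatives of psi_c (c > 0) from the proof above."""
    r = np.sqrt(a**2 + b**2 + 2.0 * c)
    f = a + b - r                                  # phi_c(a, b)
    haa = -((b**2 + 2.0 * c) / r**3) * f + (1.0 - a / r) ** 2
    hbb = -((a**2 + 2.0 * c) / r**3) * f + (1.0 - b / r) ** 2
    hab = (a * b / r**3) * f + (1.0 - a / r) * (1.0 - b / r)
    return haa, hbb, hab

rng = np.random.default_rng(1)
c = 2.0                                            # arbitrary positive weight
for _ in range(1000):
    a, b = 100.0 * rng.standard_normal(2)          # bounds hold on all of R^2
    haa, hbb, hab = hess_psi(a, b, c)
    assert abs(haa) <= np.sqrt(2.0) + 5.0
    assert abs(hbb) <= np.sqrt(2.0) + 5.0
    assert abs(hab) <= (np.sqrt(2.0) + 9.0) / 2.0
```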

By Lemma 4.1, we directly have

$$\|H(u)-H(v)-H'(v)(u-v)\|\le M\|u-v\|^2,\qquad \forall u,v\in N(z^*,\epsilon), \tag{4.4}$$

and there exists a constant $L>0$ such that

$$\|H(u)-H(v)\|\le L\|u-v\|,\qquad \forall u,v\in N(z^*,\epsilon). \tag{4.5}$$

In the following, we denote by $\bar z_k$ the vector in $Z$ that satisfies

$$\|z_k-\bar z_k\|=\mathrm{dist}(z_k,Z).$$

Lemma 4.2. Let $\{z_k\}$ be the sequence generated by Algorithm 2.1. If Assumption 4.1 holds, then for all sufficiently large $k$,

$$\|H(z_k)+H'(z_k)d_k\|\le\sqrt{M^2+\theta L^{\delta}}\,\mathrm{dist}(z_k,Z)^{1+\frac{\delta}{2}}, \tag{4.6}$$
$$\|d_k\|\le\sqrt{\frac{M^2+\theta L^{\delta}}{\theta\xi^{\delta}}}\,\mathrm{dist}(z_k,Z). \tag{4.7}$$

Proof. Notice that for all sufficiently large $k$,

$$\|\bar z_k-z^*\|\le\|z_k-\bar z_k\|+\|z_k-z^*\|\le 2\|z_k-z^*\|,$$

which implies that $\bar z_k$ is sufficiently close to $z^*$. So, by (4.5) and $H(\bar z_k)=0$, for all sufficiently large $k$,

$$\mu_k=\theta\|H(z_k)\|^{\delta}=\theta\|H(z_k)-H(\bar z_k)\|^{\delta}\le\theta L^{\delta}\|z_k-\bar z_k\|^{\delta}. \tag{4.8}$$

For any $k\ge 0$, we consider the following optimization problem:

$$\min_{d\in\mathbb{R}^{2n+m}}\ \varphi_k(d):=\|H(z_k)+H'(z_k)d\|^2+\mu_k\|d\|^2. \tag{4.9}$$

Then, the search direction $d_k$ generated by (2.5) is the solution of (4.9), because $\varphi_k(d)$ is a strictly convex quadratic function and $d_k$ is a stationary point of $\varphi_k(d)$. Hence, for all sufficiently large $k$, by (4.4) and (4.8) we have

$$\varphi_k(d_k)\le\varphi_k(\bar z_k-z_k)=\|H(z_k)+H'(z_k)(\bar z_k-z_k)\|^2+\mu_k\|z_k-\bar z_k\|^2\le M^2\|z_k-\bar z_k\|^4+\theta L^{\delta}\|z_k-\bar z_k\|^{2+\delta}\le(M^2+\theta L^{\delta})\|z_k-\bar z_k\|^{2+\delta}=(M^2+\theta L^{\delta})\,\mathrm{dist}(z_k,Z)^{2+\delta}. \tag{4.10}$$

This together with (4.9) yields

$$\|H(z_k)+H'(z_k)d_k\|\le\sqrt{\varphi_k(d_k)}\le\sqrt{M^2+\theta L^{\delta}}\,\mathrm{dist}(z_k,Z)^{1+\frac{\delta}{2}}.$$

Moreover, for all sufficiently large $k$, by Assumption 4.1, we have

$$\mu_k=\theta\|H(z_k)\|^{\delta}\ge\theta\xi^{\delta}\,\mathrm{dist}(z_k,Z)^{\delta},$$

which together with (4.9) and (4.10) gives

$$\|d_k\|\le\sqrt{\frac{\varphi_k(d_k)}{\mu_k}}\le\sqrt{\frac{M^2+\theta L^{\delta}}{\theta\xi^{\delta}}}\,\mathrm{dist}(z_k,Z).$$

The proof is completed.

Theorem 4.1. Let $\{z_k\}$ be the sequence generated by Algorithm 2.1. If Assumption 4.1 holds, then for all sufficiently large $k$, we have

$$z_{k+1}=z_k+d_k, \tag{4.11}$$
$$\mathrm{dist}(z_{k+1},Z)=O\big(\mathrm{dist}(z_k,Z)^{1+\frac{\delta}{2}}\big). \tag{4.12}$$

Proof. By (4.7), for all sufficiently large $k$,

$$\|z_k+d_k-z^*\|\le\|z_k-z^*\|+\|d_k\|\le c_1\|z_k-z^*\|, \tag{4.13}$$

where $c_1:=1+\sqrt{\frac{M^2+\theta L^{\delta}}{\theta\xi^{\delta}}}$. This implies that $z_k+d_k$ is sufficiently close to $z^*$. Hence, by (4.4), for all sufficiently large $k$,

$$\|H(z_k+d_k)-H(z_k)-H'(z_k)d_k\|\le M\|d_k\|^2, \tag{4.14}$$

which together with (4.6) and (4.7) gives

$$\|H(z_k+d_k)\|\le\|H(z_k)+H'(z_k)d_k\|+M\|d_k\|^2\le\sqrt{M^2+\theta L^{\delta}}\,\mathrm{dist}(z_k,Z)^{1+\frac{\delta}{2}}+\frac{M(M^2+\theta L^{\delta})}{\theta\xi^{\delta}}\,\mathrm{dist}(z_k,Z)^2\le c_2\,\mathrm{dist}(z_k,Z)^{1+\frac{\delta}{2}}, \tag{4.15}$$

where $c_2:=\sqrt{M^2+\theta L^{\delta}}+\frac{M(M^2+\theta L^{\delta})}{\theta\xi^{\delta}}$. Thus, by (4.1) and (4.15), for all sufficiently large $k$,

$$\mathrm{dist}(z_k+d_k,Z)\le\frac{1}{\xi}\|H(z_k+d_k)\|\le\frac{c_2}{\xi}\,\mathrm{dist}(z_k,Z)^{1+\frac{\delta}{2}}. \tag{4.16}$$

Let $\tilde z_k$ be the vector in $Z$ that satisfies $\|z_k+d_k-\tilde z_k\|=\mathrm{dist}(z_k+d_k,Z)$. Then, by (4.13) and (4.16), for all sufficiently large $k$,

$$\|\tilde z_k-z^*\|\le\|z_k+d_k-\tilde z_k\|+\|z_k+d_k-z^*\|=\mathrm{dist}(z_k+d_k,Z)+\|z_k+d_k-z^*\|\le\frac{c_2}{\xi}\,\mathrm{dist}(z_k,Z)^{1+\frac{\delta}{2}}+c_1\|z_k-z^*\|\le\left(\frac{c_2}{\xi}+c_1\right)\|z_k-z^*\|,$$

which shows that $\tilde z_k$ is sufficiently close to $z^*$. So, by (4.1), (4.5) and (4.16), and noticing that $H(\tilde z_k)=0$ as $\tilde z_k\in Z$, for all sufficiently large $k$,

$$\|H(z_k+d_k)\|=\|H(z_k+d_k)-H(\tilde z_k)\|\le L\|z_k+d_k-\tilde z_k\|=L\,\mathrm{dist}(z_k+d_k,Z)\le\frac{Lc_2}{\xi}\,\mathrm{dist}(z_k,Z)^{1+\frac{\delta}{2}}\le\frac{Lc_2}{\xi^{2+\frac{\delta}{2}}}\|H(z_k)\|^{1+\frac{\delta}{2}}. \tag{4.17}$$

Moreover, by (4.1) and (4.7), for all sufficiently large $k$,

$$\|d_k\|^2\le\frac{M^2+\theta L^{\delta}}{\theta\xi^{\delta}}\,\mathrm{dist}(z_k,Z)^2\le\frac{M^2+\theta L^{\delta}}{\theta\xi^{\delta+2}}\|H(z_k)\|^2. \tag{4.18}$$

Hence, by (4.17) and (4.18) we have

$$\lim_{k\to\infty}\frac{\|H(z_k+d_k)\|+\gamma\|d_k\|^2}{\|H(z_k)\|}=0,$$

which implies that for all sufficiently large $k$,

$$\|H(z_k+d_k)\|+\gamma\|d_k\|^2\le\|H(z_k)\|.$$

This shows that the step-size $\alpha_k=1$ is accepted in Step 3 for all sufficiently large $k$. Consequently, for all sufficiently large $k$, we have $z_{k+1}=z_k+d_k$, which together with (4.16) proves the theorem.

By Theorem 4.1 and arguments similar to the proof of [19, Theorem 5.2], we can obtain the following sub-quadratic convergence property.

Theorem 4.2. Let $\{z_k\}$ be the sequence generated by Algorithm 2.1. If Assumption 4.1 holds, then one has

$$\|z_{k+1}-z^*\|=O\big(\|z_k-z^*\|^{1+\frac{\delta}{2}}\big).$$

In this section, we report some numerical results of Algorithm 2.1. All experiments were carried out on a PC with an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz and 8.00 GB of RAM. The codes are written in MATLAB and run in the MATLAB R2018a environment. The parameters used in Algorithm 2.1 are chosen as $\theta=10^{-4}$, $\rho=0.8$, $\gamma=10^{-4}$, $\delta=1$.

We apply Algorithm 2.1 to solve the wLCP (1.1) in which

$$P=\begin{pmatrix}A\\ M\end{pmatrix},\qquad Q=\begin{pmatrix}0\\ -I\end{pmatrix},\qquad R=\begin{pmatrix}0\\ -A^T\end{pmatrix},\qquad d=\begin{pmatrix}b\\ -f\end{pmatrix}, \tag{5.1}$$

where $A\in\mathbb{R}^{m\times n}$ is a full row rank matrix with $m<n$, $M\in\mathbb{R}^{n\times n}$, $b\in\mathbb{R}^m$ and $f\in\mathbb{R}^n$. This example comes from [14]. For the purposes of comparison, we also apply the damped Gauss-Newton method studied by Tang and Zhou [19] to this test problem. In our experiments, we test the following two classes of wLCPs.

(The monotone wLCP) We choose $A=\mathrm{randn}(m,n)$ with the rank of $A$ being $m$ and $M=BB^T/\|BB^T\|$ with $B=\mathrm{rand}(n,n)$. Then we choose $\hat x=\mathrm{rand}(n,1)$, $f=\mathrm{rand}(n,1)$ and set $b:=A\hat x$, $\hat s:=M\hat x+f$ and $w:=\hat x\hat s$. This wLCP is monotone. For each problem size $n\,(=2m)$, we generate ten instances and solve them by using the following three starting points:

(i) $x_0=s_0=(1,\dots,1)^T$, $y_0=(0,\dots,0)^T$;

(ii) $x_0=s_0=(1,0,\dots,0)^T$, $y_0=(0,\dots,0)^T$;

(iii) $x_0=\mathrm{rand}(n,1)$, $s_0=\mathrm{rand}(n,1)$, $y_0=\mathrm{rand}(m,1)$.
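For readers who wish to reproduce the setup outside MATLAB, the generation of the monotone test problem can be sketched as follows (a Python/NumPy translation; the signs in the blocks of (5.1) are chosen so that $(\hat x,\hat s,y=0)$ solves the wLCP, and `np.linalg.norm` uses the Frobenius norm rather than MATLAB's spectral norm):

```python
import numpy as np

def make_monotone_wlcp(n, seed=0):
    """Generate a monotone test wLCP with the block structure (5.1), m = n/2."""
    m = n // 2
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))          # randn(m, n); full row rank a.s.
    B = rng.random((n, n))                   # rand(n, n)
    M = B @ B.T / np.linalg.norm(B @ B.T)    # symmetric PSD, so the wLCP is monotone
    xhat, f = rng.random(n), rng.random(n)
    b = A @ xhat
    shat = M @ xhat + f
    w = xhat * shat                          # componentwise product
    # Data of (1.1): the signs below make (xhat, shat, y = 0) a solution.
    P = np.vstack([A, M])
    Q = np.vstack([np.zeros((m, n)), -np.eye(n)])
    R = np.vstack([np.zeros((m, m)), -A.T])
    d = np.concatenate([b, -f])
    return P, Q, R, d, w, xhat, shat

P, Q, R, d, w, xhat, shat = make_monotone_wlcp(8)
assert np.allclose(P @ xhat + Q @ shat, d)   # y = 0: P x + Q s + R*0 = d
assert np.allclose(xhat * shat, w)           # x s = w
```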

We use $\|H(z_k)\|\le 10^{-5}$ as the stopping criterion. Numerical results are listed in Table 1, where SLMM denotes the smooth Levenberg-Marquardt method studied in this paper, DGNM denotes the damped Gauss-Newton method studied in [19], SP denotes the starting point, and AIT and ACPU denote the average number of iterations and the average CPU time in seconds, respectively. From Table 1, we can see that SLMM has an advantage over DGNM, especially for large-scale test problems.

    Table 1.  Numerical results of solving the monotone wLCP.
    SLMM DGNM
    SP n AIT ACPU AIT ACPU
    (i) 200 8.9 0.08 6.5 0.07
    400 9.0 0.50 8.0 0.48
    600 9.0 1.13 9.0 1.20
    800 9.5 2.55 9.6 2.59
    1000 10.0 4.39 10.3 4.78
    1200 10.0 6.86 11.0 7.80
    1400 10.0 10.28 12.0 12.86
    1600 10.0 14.23 12.3 18.08
    1800 10.0 19.29 13.0 25.58
    2000 10.0 26.40 13.9 35.92
    (ii) 200 12.0 0.10 8.3 0.08
    400 12.0 0.64 10.1 0.58
    600 12.0 1.57 12.0 1.65
    800 12.0 3.06 13.3 3.51
    1000 12.0 5.50 15.0 6.63
    1200 12.2 9.23 16.3 11.55
    1400 12.4 12.82 18.0 19.75
    1600 12.9 18.67 19.1 28.52
    1800 13.0 25.22 20.6 41.08
    2000 13.0 32.59 21.8 56.01
    (iii) 200 10.4 0.11 7.8 0.07
    400 11.0 0.54 9.0 0.45
    600 11.0 1.39 10.1 1.37
    800 11.1 2.86 11.7 3.15
    1000 11.0 4.99 12.3 5.69
    1200 11.1 7.57 13.7 9.78
    1400 11.0 11.05 14.2 15.34
    1600 11.5 15.99 15.5 22.71
    1800 11.8 23.89 16.1 32.25
    2000 11.9 30.73 17.0 45.10


(The nonmonotone wLCP) We choose $M=B_1/\|B_1\|-B_2/\|B_2\|$ with $B_1=\mathrm{rand}(n,n)$ and $B_2=\mathrm{rand}(n,n)$. The matrix $A$ and the vectors $b,f,w$ are generated in the same way as before. Since the matrix $M$ is not symmetric positive semidefinite, this class of wLCPs may be nonmonotone. For each problem size $n\,(=2m)$, we also generate ten instances and solve them by using the three starting points given above. Moreover, we stop when $\|H(z_k)\|\le 10^{-5}$ or when iter reaches 50, where iter denotes the number of iterations. Numerical results are listed in Table 2, where a dash indicates that the method fails to solve some instances because the iteration number exceeds 50, and the averages are taken over the successfully solved instances. From Table 2, we can see that, although both SLMM and DGNM can be applied to solve nonmonotone wLCPs, the former has better numerical performance than the latter.

    Table 2.  Numerical results of solving the nonmonotone wLCP.
    SLMM DGNM
    SP n AIT ACPU AIT ACPU
    (i) 200 9.0 0.08 9.0 0.08
    400 9.2 0.62 11.4 1.12
    600 9.3 1.43 12.1 2.12
    800 9.7 3.05 13.3 5.50
    1000 10.0 4.44 14.6 6.70
    1200 10.5 7.78 15.8 11.72
    1400 10.3 13.19 17.1 19.08
    1600 10.3 18.23 18.2 28.91
    1800 10.0 24.32 18.9 43.12
    2000 10.2 27.42 20.1 57.39
    (ii) 200 11.4 0.12 8.8 0.07
    400 12.1 1.42 10.7 1.03
    600 12.0 3.76 11.1 3.52
    800 12.4 4.59 12.0 5.30
    1000 12.3 7.97 13.2 8.70
    1200 12.3 12.14 14.1 15.72
    1400 12.2 18.50 15.0 34.08
    1600 13.3 30.11 15.5 45.08
    1800 12.1 38.27 16.4 50.12
    2000 12.3 48.22 17.1 66.39
    (iii) 200 10.0 0.15 8.3 0.12
    400 10.0 0.65 9.7 0.59
    600 10.3 3.45 11.1 3.69
    800 10.4 4.19 12.1 5.91
    1000 10.6 8.60 12.7 9.79
    1200 11.0 12.46 13.7 14.18
    1400 10.9 17.84 14.1 21.96
    1600 11.0 20.61 15.1 25.85
    1800 11.0 25.55 15.6 40.53
    2000 11.0 32.49 16.2 49.71


Based on a smooth weighted complementarity function, we reformulated the wLCP as a smooth nonlinear equation and proposed a Levenberg-Marquardt method to solve it. The proposed method is well-defined and globally convergent without any additional condition. Moreover, we proved that the proposed method has a local sub-quadratic convergence rate under the local error bound condition, which is weaker than the nonsingularity condition. Numerical results show that our method is very effective for solving monotone and nonmonotone wLCPs. In Algorithm 2.1, Eq (2.5) is solved exactly in each iteration, which may be expensive for large-scale wLCPs. As a future research issue, it is worth investigating smooth inexact Levenberg-Marquardt methods for solving wLCPs. Moreover, Wang and Fan [20] recently established the local convergence rate of the Levenberg-Marquardt method under the Hölderian local error bound condition, which is more general than Assumption 4.1 in this paper. Thus, another interesting issue is whether Algorithm 2.1 has a local fast convergence rate under the Hölderian local error bound condition.

This work was partly supported by the Natural Science Foundation of Henan Province (222300420520) and the Key Scientific Research Projects of Higher Education of Henan Province (22A110020).

    All authors declare no conflicts of interest in this paper.



    [1] K. M. Anstreicher, Interior-point algorithms for a generalization of linear programming and weighted centering, Optim. Method. Softw., 27 (2012), 605–612. https://doi.org/10.1080/10556788.2011.644791
    [2] S. Asadi, Z. Darvay, G. Lesaja, N. Mahdavi-Amiri, F. Potra, A full-Newton step interior-point method for monotone weighted linear complementarity problems, J. Optim. Theory Appl., 186 (2020), 864–878. https://doi.org/10.1007/s10957-020-01728-4
    [3] X. N. Chi, M. S. Gowda, J. Tao, The weighted horizontal linear complementarity problem on a Euclidean Jordan algebra, J. Glob. Optim., 73 (2019), 153–169. https://doi.org/10.1007/s10898-018-0689-z
    [4] X. N. Chi, Z. P. Wan, Z. J. Hao, A full-modified-Newton step O(n) infeasible interior-point method for the special weighted linear complementarity problem, J. Ind. Manag. Optim., 2021. https://doi.org/10.3934/jimo.2021082
    [5] J. S. Chen, The semismooth-related properties of a merit function and a descent method for the nonlinear complementarity problem, J. Glob. Optim., 36 (2006), 565–580. https://doi.org/10.1007/s10898-006-9027-y
    [6] F. Facchinei, J. S. Pang, Finite-dimensional variational inequalities and complementarity problems, New York: Springer, 2003.
    [7] J. Y. Fan, J. Y. Pan, Convergence properties of a self-adaptive Levenberg-Marquardt algorithm under local error bound condition, Comput. Optim. Appl., 34 (2006), 47–62. https://doi.org/10.1007/s10589-005-3074-z
    [8] J. Y. Fan, Accelerating the modified Levenberg-Marquardt method for nonlinear equations, Math. Comput., 83 (2014), 1173–1187. https://doi.org/10.1090/S0025-5718-2013-02752-4
    [9] J. Y. Fan, Y. X. Yuan, On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption, Computing, 74 (2005), 23–39. https://doi.org/10.1007/s00607-004-0083-1
    [10] M. S. Gowda, Weighted LCPs and interior point systems for copositive linear transformations on Euclidean Jordan algebras, J. Glob. Optim., 74 (2019), 285–295. https://doi.org/10.1007/s10898-019-00760-7
    [11] Z. H. Huang, J. Sun, A non-interior continuation algorithm for the P0 or P* LCP with strong global and local convergence properties, Appl. Math. Optim., 52 (2005), 237–262. https://doi.org/10.1007/s00245-005-0827-0
    [12] Z. H. Huang, L. P. Zhang, J. Y. Han, A hybrid smoothing-nonsmooth Newton-type algorithm yielding an exact solution of the P0-LCP, J. Comput. Math., 22 (2004), 797–806.
    [13] W. L. Liu, C. Y. Wang, A smoothing Levenberg-Marquardt method for generalized semi-infinite programming, Comput. Appl. Math., 32 (2013), 89–105. https://doi.org/10.1007/s40314-013-0013-y
    [14] F. A. Potra, Weighted complementarity problems—A new paradigm for computing equilibria, SIAM J. Optim., 22 (2012), 1634–1654. https://doi.org/10.1137/110837310
    [15] F. A. Potra, Sufficient weighted complementarity problems, Comput. Optim. Appl., 64 (2016), 467–488. https://doi.org/10.1007/s10589-015-9811-z
    [16] J. Y. Tang, A variant nonmonotone smoothing algorithm with improved numerical results for large-scale LWCPs, Comput. Appl. Math., 37 (2018), 3927–3936. https://doi.org/10.1007/s40314-017-0554-6
    [17] J. Y. Tang, H. C. Zhang, A nonmonotone smoothing Newton algorithm for weighted complementarity problems, J. Optim. Theory Appl., 189 (2021), 679–715. https://doi.org/10.1007/s10957-021-01839-6
    [18] J. Y. Tang, J. C. Zhou, Quadratic convergence analysis of a nonmonotone Levenberg-Marquardt type method for the weighted nonlinear complementarity problem, Comput. Optim. Appl., 80 (2021), 213–244. https://doi.org/10.1007/s10589-021-00300-8
    [19] J. Y. Tang, J. C. Zhou, A modified damped Gauss-Newton method for non-monotone weighted linear complementarity problems, Optim. Method. Softw., 2021. https://doi.org/10.1080/10556788.2021.1903007
    [20] H. Y. Wang, J. Y. Fan, Convergence rate of the Levenberg-Marquardt method under Hölderian local error bound, Optim. Method. Softw., 35 (2020), 767–786. https://doi.org/10.1080/10556788.2019.1694927
    [21] Y. Y. Ye, A fully polynomial-time approximation algorithm for computing a stationary point of the general linear complementarity problem, Math. Oper. Res., 18 (1993), 334–345. https://doi.org/10.1287/moor.18.2.334
    [22] J. Zhang, A smoothing Newton algorithm for weighted linear complementarity problem, Optim. Lett., 10 (2016), 499–509. https://doi.org/10.1007/s11590-015-0877-4
    [23] J. L. Zhang, J. Chen, A smoothing Levenberg-Marquardt type method for LCP, J. Comput. Math., 22 (2004), 735–752.
    [24] Y. B. Zhao, D. Li, A globally and locally superlinearly convergent non-interior-point algorithm for P0 LCPs, SIAM J. Optim., 13 (2003), 1195–1221. https://doi.org/10.1137/S1052623401384151
    [25] J. L. Zhang, J. Chen, A new noninterior predictor-corrector method for the P0 LCP, Appl. Math. Optim., 53 (2006), 79–100. https://doi.org/10.1007/s00245-005-0836-z
    [26] L. P. Zhang, X. S. Zhang, Global linear and quadratic one-step smoothing Newton method for P0-LCP, J. Glob. Optim., 25 (2003), 363–376. https://doi.org/10.1023/A:1022528320719
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
