Processing math: 93%
Research article

An active-set with barrier method and trust-region mechanism to solve a nonlinear Bilevel programming problem

  • Received: 17 March 2022 Revised: 26 April 2022 Accepted: 14 May 2022 Published: 01 July 2022
  • MSC : 49N10, 49N35, 65K05, 93D22, 93D52

  • Nonlinear Bilevel programming (NBLP) problem is a hard problem and very difficult to be resolved by using the classical method. In this paper, Karush-Kuhn-Tucker (KKT) condition is used with Fischer-Burmeister function to convert NBLP problem to an equivalent smooth single objective nonlinear programming (SONP) problem. An active-set strategy is used with Barrier method and trust-region technique to solve the smooth SONP problem effectively and guarantee a convergence to optimal solution from any starting point. A global convergence theory for the active-set barrier trust-region (ACBTR) algorithm is studied under five standard assumptions. An applications to mathematical programs are introduced to clarify the effectiveness of ACBTR algorithm. The results show that ACBTR algorithm is stable and capable of generating approximal optimal solution to the NBLP problem.

    Citation: B. El-Sobky, G. Ashry, Y. Abo-Elnaga. An active-set with barrier method and trust-region mechanism to solve a nonlinear Bilevel programming problem[J]. AIMS Mathematics, 2022, 7(9): 16112-16146. doi: 10.3934/math.2022882

    Related Papers:

    [1] B. El-Sobky, G. Ashry . An interior-point trust-region algorithm to solve a nonlinear bilevel programming problem. AIMS Mathematics, 2022, 7(4): 5534-5562. doi: 10.3934/math.2022307
    [2] B. El-Sobky, Y. Abo-Elnaga, G. Ashry, M. Zidan . A nonmonton active interior point trust region algorithm based on CHKS smoothing function for solving nonlinear bilevel programming problems. AIMS Mathematics, 2024, 9(3): 6528-6554. doi: 10.3934/math.2024318
    [3] B. El-Sobky, M. F. Zidan . A trust-region based an active-set interior-point algorithm for fuzzy continuous Static Games. AIMS Mathematics, 2023, 8(6): 13706-13724. doi: 10.3934/math.2023696
    [4] Habibe Sadeghi, Fatemeh Moslemi . A multiple objective programming approach to linear bilevel multi-follower programming. AIMS Mathematics, 2019, 4(3): 763-778. doi: 10.3934/math.2019.3.763
    [5] Bothina El-Sobky, Yousria Abo-Elnaga, Gehan Ashry . A nonmonotone trust region technique with active-set and interior-point methods to solve nonlinearly constrained optimization problems. AIMS Mathematics, 2025, 10(2): 2509-2540. doi: 10.3934/math.2025117
    [6] Xuejie Ma, Songhua Wang . A hybrid approach to conjugate gradient algorithms for nonlinear systems of equations with applications in signal restoration. AIMS Mathematics, 2024, 9(12): 36167-36190. doi: 10.3934/math.20241717
    [7] Yueting Yang, Hongbo Wang, Huijuan Wei, Ziwen Gao, Mingyuan Cao . An adaptive simple model trust region algorithm based on new weak secant equations. AIMS Mathematics, 2024, 9(4): 8497-8515. doi: 10.3934/math.2024413
    [8] Adisak Hanjing, Panadda Thongpaen, Suthep Suantai . A new accelerated algorithm with a linesearch technique for convex bilevel optimization problems with applications. AIMS Mathematics, 2024, 9(8): 22366-22392. doi: 10.3934/math.20241088
    [9] Zhensheng Yu, Peixin Li . An active set quasi-Newton method with projection step for monotone nonlinear equations. AIMS Mathematics, 2021, 6(4): 3606-3623. doi: 10.3934/math.2021215
    [10] Mouhamad Al Sayed Ali, Miloud Sadkane . Acceleration of implicit schemes for large systems of nonlinear differential-algebraic equations. AIMS Mathematics, 2020, 5(1): 603-618. doi: 10.3934/math.2020040
  • Nonlinear Bilevel programming (NBLP) problem is a hard problem and very difficult to be resolved by using the classical method. In this paper, Karush-Kuhn-Tucker (KKT) condition is used with Fischer-Burmeister function to convert NBLP problem to an equivalent smooth single objective nonlinear programming (SONP) problem. An active-set strategy is used with Barrier method and trust-region technique to solve the smooth SONP problem effectively and guarantee a convergence to optimal solution from any starting point. A global convergence theory for the active-set barrier trust-region (ACBTR) algorithm is studied under five standard assumptions. An applications to mathematical programs are introduced to clarify the effectiveness of ACBTR algorithm. The results show that ACBTR algorithm is stable and capable of generating approximal optimal solution to the NBLP problem.



    The NBLP problem is a nonlinear optimization problem that is constrained by another nonlinear optimization problem. This mathematical programming model arises when two independent decision makers, ordered within a hierarchical structure, have conflicting objectives. The decision maker at the lower level has to optimize her objective under the given parameters from the upper level decision maker, who, in return, with complete information on the possible reactions of the lower, selects the parameters so as to optimize her own objective. The decision maker with the upper level objective, fu(t,v) takes the lead, and chooses her decision vector t. The decision maker with lower level objective, fl(t,v), reacts accordingly by choosing her decision vector v to optimize her objective, parameterized in t. Note that the upper level decision maker is limited to influencing, rather than controlling, the lower level's outcome. In fact, the problem has been proved to be NP-hard [5]. However, the NBLP problem is used so extensively in transaction network, finance budget, resource allocation, price control etc. Various approaches have been devoted to study this field, which leads to a a speedy development in theories and algorithms, see [1,3,30,32,41]. For detailed exposition, the reader can review [23,25,35].

    A mathematical formulation for the NBLP problem is

    minfu(t,v)s.t.gu(t,v)0,minfl(t,v),s.t.gl(t,v)0,t0,v0, (1.1)

    where tn1 and vn2.

    Let n=n1+n2, and assume that the functions fu:n, fl:n, gu:nm1, and gl:nm2 are at least twice continuously differentiable function in our method.

    Several approaches have been proposed to solve the NBLP problem 1.1, see [2,13,14,25,27,37,40,42]. KKT conditions one of these approaches and used in this paper to convert the original NBLP problem 1.1 to the following one-level programming problem:

    mint,vfu(t,v)s.t.gu(t,v)0,vfl(t,v)+vgl(t,v)λl=0,gl(t,v)0,(λl)jglj(t,v)=0,j=1,...,m2,(λl)j0,j=1,...,m2,t0andv0, (1.2)

    where λlm2 is a multiplier vector associated with inequality constraint gl(t,v). problem 1.2 is non-convex and non-differentiable, moreover the regularity assumptions which are needed to successfully handle smooth optimization problems are never satisfied and it is not good to use our approach to solve problem 1.2. Dempe [13] presents smoothing method for the NBLP problem and the same method is also presented in [28] for programming with complementary constraints. Following this smoothing method we can propose our approach for the NBLP problem. Before presenting our approach for the NBLP problem, we give some definitions firstly.

    Definition 1.1. The Fischer-Burmeister function is Ψ(˜a,˜b):2 defined by Ψ(˜a,˜b)=˜a+˜b˜a2+˜b2 and the perturbed Fischer-Burmeister function is Ψ(˜a,˜b,ˆε):3 defined by Ψ(˜a,˜b,ϵ)=˜a+˜b˜a2+˜b2+ϵ.

    The Fischer-Burmeister function has the property that Ψ(˜a,˜b)=0 if and only if ˜a0, ˜b0, and ˜a˜b=0. It is non-differentiable at ˜a=˜b=0. Its perturbed variant satisfies Ψ(˜a,˜b,ϵ)=0 if and only if ˜a>0, ˜b>0, and ˜a˜b=ϵ2 for ϵ>0. This function is smooth with respect to ˜a, ˜b, for ϵ>0. for more details see [8,9,10,28].

    In this paper, to allow the proposed algorithm ACBTR solve the NBLP problem 1.1 and satisfy the asymptotic stability conditions, we use the following changed perturbed Fischer-Burmeister function:

    ˜Ψ(˜a,˜b,ϵ)=˜a2+˜b2+ϵ˜a˜b. (1.3)

    It is obvious that the changed perturbed Fischer-Burmeister function ˜Ψ(˜a,˜b,ϵ) has the same property with the function Ψ(˜a,˜b,ϵ). Using the Fischer-Burmeister function 1.3, problem 1.2 equivalents to the following single objective constrained optimization problem

    mint,vfu(t,v)s.t.gu(t,v)0,vfl(t,v)+vgl(t,v)μ=0,g2lj+(λl)j2+ϵ(λl)j+glj=0,j=1,...,m2,t0andv0. (1.4)

    Let x=(t,v)T, m=n2+m2 then the above problem can be written as SONP problem as follows

    minimizefu(x)subjecttohl(x)=0,gu(x)0,x0, (1.5)

    where fu:n, hl:nm, and gu:nm1 are at least twice continuously differentiable functions.

    Various approaches have been proposed to solve the SONP problem 1.5, see [4,7,16,17,18,19,24]. In this paper, we use an active-set with barrier method to reduce SONP problem 1.5 to equivalent equality constrained optimization problem. So, we can use one of methods which are used for solving equality constrained optimization problem.

    In this paper, we use a trust-region technique which is successful approach for solving SONP problem and is very important to ensure global convergence from any starting point. The trust-region strategy can induce strongly global convergence. It is more robust when it deals with rounding errors. It does not require the Hessian of the objective function must be positive definite or the objective function of the model must be convex. Also, some criteria are used to test the trial step is acceptable or no. If it is not acceptable, then the subproblem must be resolved with a reduced the trust-region radius. For the detailed expositions, the reader review [17,20,21,22,23,24,33,36,45,46,47,48].

    A projected Hessian method which is suggested by [6,38] and used by [19,20,22], utilizes in this paper to treat the difficulty of having an infeasible trust-region subproblem. In this method, the trial step is decomposed into two components and each component is computed by solving a trust-region unconstrained subproblem.

    Under standard five assumptions, a global convergence theory for ACBTR algorithm is introduced. Moreover, numerical experiments display that ACBTR algorithm performers effectively and efficiently in pursuance.

    The balance of this paper is organized as follows. A detailed description for the proposed method to solve SONP problem 1.5 is introduced in the next section. Section 3 is devoted to analysis of the global convergence of ACBTR algorithm. In Section 4, we report preliminary numerical results. Finally, some further remark is given in Section 5.

    Notations: We use . to denote the Euclidean norm .2. The ith component of any vector such as x is written as x(i). The jth trial iterate of iteration k is denoted by kj. Subscript k refers to iteration indices. For example, fukfu(xk), hlkhl(xk), gukgu(xk), WkW(xk), xLskxLs(xk,λk;σk), and so on to denote the function value at a particular point.

    In this section, firstly, we will introduce the detailed description for the active-set strategy with barrier method to reduce SONP problem 1.5 to equality constrained optimization problem. Secondly, to solve the equality constrained optimization problem and guarantee convergence from any starting point, we will introduce the detailed description for the trust-region algorithm. Finally, we will introduce the main steps for the main algorithm ACBTR to solve NBLP problem 1.1.

    Motivated by the active-set strategy which is introduced by [12] and used by [17,18,19,20,21], we define a 0-1 diagonal matrix W(x)m2×m2 whose diagonal entries are

    wi(x)={1,if gui(x)0,0,if gui(x)<0, (2.1)

    where i=1,...,m2. By Using the diagonal matrix W(x)m2×m2, we can transform problem 1.5 to the following equality constrained optimization problem with positive variables

    minimizexfu(x)subjecttohl(x)=0,gu(x)TW(x)gu(x)=0,x0.

    Penalty methods usually more suitable on problems with equality constraints. These methods are usually generate a sequence of points that converges to a solution of the problem from the exterior of the feasible region. An advantage of penalty methods is that they do not request the iterates to be strictly feasible. In this paper we use the penalty method to reduce the above problem to the following equality constrained optimization problem with positive variables

    minimizexfu(x)+σ2W(x)gu(x)2subjecttohl(x)=0,x0, (2.2)

    where σ is a positive parameter. Let F+(x)={x|x>0}.

    Motivated by the barrier method which is discussed in [[7,26,43], problem 2.2, for any xF+ can be written as follows

    minimizexfu(x)sni=1ln(x(i))+σ2W(x)gu(x)2subjecttohl(x)=0, (2.3)

    for decreasing sequence of barrier parameters s converging to zero, see [26].

    The Lagrangian function associated with problem 2.3 is

    Ls(x,λ;σ)=fu(x)sni=1ln(x(i))+λThl(x)+σ2W(x)gu(x)2, (2.4)

    where λm is a multiplier vector associated with the equality constraint hl(x)=0.

    The first-order necessary condition for the strictly positive point x to be a local minimizer of problem 2.3 is that there exists a Lagrange multiplier vector λm, such that (x,λ) satisfies the following nonlinear system

    fu(x)sX1e+hl(x)λ+σgu(x)W(x)gu(x)=0hl(x)=0,

    where X is diagonal matrix whose diagonal entries are (x1,...,xn)F+. Let sX1e=yn be an auxiliary variable, then the above system can be written as follows

    fu(x)y+hl(x)λ+σgu(x)W(x)gu(x)=0, (2.5)
    Xyse=0, (2.6)
    hl(x)=0, (2.7)

    where xF+. The conditions [2.5–2.7] are called the barrier KKT conditions. For more details see [26].

    Applying Newton's method to the nonlinear system (2.5)–(2.7), we have

    (H+σgu(x)W(x)gu(x)Thl(x)IY0Xhl(x)T00)(dxdλdy)=(xLs(x,λ;σ)Xysehl(x)), (2.8)

    where H is the Hessian matrix of the following function or an approximation to it

    s(x,λ)=fu(x)+λThl(x)sni=1ln(x(i)).

    The matrix Y is a diagonal matrix whose diagonal entries are (y1,...,yn) and xLs(x,λ;σ)=fu(x)y+hl(x)λ+σgu(x)W(x)gu(x).

    From the second equation of the system (2.8) we have

    dy=y+sX1eX1Ydx. (2.9)

    To decrease the dimension of system 2.8, we eliminate dy from the first equation of the system 2.8 by using Eq 2.9 as follows

    (H+σgu(x)W(x)gu(x)T)dx+hl(x)dλI(y+sX1eX1Ydx)=xLs(x,λ;σ)

    Using Eq 2.6, we have the following system

    (Bhl(x)hl(x)T0)(dxdλ)=(xLs(x,λ;σ)hl(x)). (2.10)

    where, B=H+X1Y+σgu(x)W(x)gu(x)T.

    We notice that, the system 2.10 is equivalent to the first order necessary condition for the following sequential quadratic programming problem

    minimizeLs(x,λ;σ)+xLs(x,λ;σ)Td+12dTBdsubjecttohl(x)+hl(x)Td=0. (2.11)

    That is, the point (x,λ) that satisfies the KKT conditions for subproblem 2.11 will satisfy the KKT conditions for problem 1.5. A methods which are used to solve subproblem 2.11 is a local methods. That is, it may not converge to a stationary point if the starting point is far away from the solution. To guarantee convergence from any starting point, we use the trust-region technique.

    By using trust-region technique to ensure convergence of subproblem 2.11 and estimate the step dk, we solve the following subproblem

    minimizexLskTd+12dTBkdsubjecttohl(xk)+hl(xk)Td=0,dδk, (2.12)

    where 0<δk represents the radius of the trust-region. The subproblem 2.12 may be infeasible because there may be no intersecting points between hyperplane of the linearized constraints hl(x)+hl(x)Td and the constraint dδk. Even if they intersect, there is no guarantee that this will keep true if δk is reduced, see [11]. So, a projected Hessian technique is used in our approach to overcome this problem. This technique was suggested by [6,38] and used by [19,20,22]. In this technique, the trial step dk is decomposed into two orthogonal components: the normal component dnk to improve feasibility and the tangential component dtk to improve optimality. Each of dnk and dtk is evaluated by solving unconstrained trust-region subproblem.

    To compute the normal component dn

    minimizehlk+hTlkdn2subjecttodnζδk, (2.13)

    for some ζ(0,1). To solve the subproblem 2.13, we use a conjugate gradient method which is introduced by [39] and used by [23], see Algorithm 2.1 in [23]. It is very cheap if the problem is large-scale and the Hessian is indefinite. By using the conjugate gradient method, the following condition is hold

    hlk2hlk+hTlkdnk2ϑ1{hlk2hlk+hTlkdncpk2}, (2.14)

    for some ϑ1(0,1]. That is, the normal predicted decrease obtained by the normal component dnk is greater than or equal to a fraction of the normal predicted decrease obtained by the normal Cauchy step dncpk. The normal Cauchy step dncpk is defined as

    dncpk=αncpkhlkhlk, (2.15)

    where the parameter αncpk is given by

    αncpk={hlkhlk2(hlk)Thlkhlk2if hlkhlk3hTlkhlkhlk)2δk and hTlkhlkhlk)>0,δkhlkhlkotherwise. (2.16)

    Once dnk is estimated, we will compute dtk=Zkˉdtk. A matrix Zk is the matrix whose columns form a basis for the null space of (hlk)T.

    To compute the tangential component dtk

    To estimate the tangential component dtk, let

    q(d)=Ls(x,λ;σ)+xLs(x,λ;σ)Td+12dTBd. (2.17)

    and using the conjugate gradient method [23] to solve the following trust-region subproblem

    minimize[ZTkqk(dnk)]Tˉdt+12ˉdtTZTkBkZkˉdtsubjecttoZkˉdtΔk, (2.18)

    where qk(dnk)=xLsk+Bkdnk and Δk=δ2kdnk2.

    Let a tangential predicted decrease which is obtained by the tangential component dtk be

    Tpredk(ˉdtk)=qk(dnk)qk(dnk+Zkˉdtk). (2.19)

    Since the conjugate gradient method is used to solve subproblem (2.18) and estimate dtk, then the following condition holds

    Tpredk(ˉdtk)ϑ2Tpredk(ˉdtcpk), (2.20)

    for some ϑ2(0,1]. This condition clarified that the tangential predicted decrease which is obtained by tangential step ˉdtk is greater than or equal to a fraction of the tangential predicted decrease obtained by a tangential Cauchy step ˉdtcpk. The tangential Cauchy step ˉdtcpk is defined as follows

    ˉdtcpk=αtcpkZTkqk(dnk), (2.21)

    where the parameter αtcpk is given by

    αtcpk={ZTkqk(dnk)2(ZTkqk(dnk))TˉBkZTkqk(dnk)if ZTkqk(dnk)3(ZTkqk(dnk))TˉBkZTkqk(dnk)Δk and (ZTkqk(dnk))TˉBkZTkqk(dnk)>0,ΔkZTkqk(dnk)otherwise, (2.22)

    such that ˉBk=ZTkBkZk.

    Once estimating dtk, we set dk=dnk+dtk and xk+1=xk+dk. To guarantee that xk+1F+ at every iteration k, we need to evaluate the damping parameter μk.

    To estimate the damping parameter μk

    The damping parameter μk is defined as follows:

    μk=min{mini{θ(i)k},1}, (2.23)

    where

    θ(i)k={x(i)kd(i)k,if d(i)k<01otherwise.

    To be decided whether the scale step μkdk will be accepted or no, we need to a merit function. The merit function is the function which is tie the objective function fu(x) with the constraints hl(x) and gu(x) in such a way that progress in the merit function means progress in solving problem. In the proposed algorithm, we use the following an augmented Lagrange function as a merit function, see [31].

    Φs(x,λ;σ;ρ)=s(x,λ)+σ2W(x)gu(x)2+ρhl(x)2, (2.24)

    where ρ>0 is a penalty parameter.

    To be decided whether the point (xk+μkdk,λk+1) will be taken as a next iterate or no, we need to define the actual reduction Aredk and the predicted reduction Predk in the merit function Φs(x,λ;σ;ρ).

    In the proposed algorithm, Aredk is defined as follows

    Aredk=Φs(xk,λk;σk;ρk)Φs(xk+μkdk,λk+1;σk;ρk).

    Also Aredk can be written as follows,

    Aredk=s(xk,λk)s(xk+1,λk)ΔλTkhlk+1+σk2[Wkgu(xk)2Wk+1guk+12]+ρk[hlk2hlk+12], (2.25)

    where Δλk=(λk+1λk).

    In the proposed algorithm, Predk is defined as follows

    Predk=xs(xk,λk)Tμkdk12μ2kdTk˜HkdkΔλTk(hlk+hTlkμkdk)+σk2[Wkgu(xk)2Wk(gu(xk)+gu(xk)Tμkdk)2]+ρk[hlk2hlk+hTlkμkdk2]. (2.26)

    where xls(x,λ)=fu(x)y+hl(x)λ and ˜H=H+X1Y.

    Also, Predk can be written as follows

    Predk=qk(0)qk(μkdk)ΔλTk(hlk+hTlkμkdk)+ρk[hlk2hlk+hTlkμkdk2]. (2.27)

    where the quadratic form q(d) in 2.17 can be written as follows

    q(d)=s(x,λ)+xs(x,λ)Td+12dT˜Hd+σ2[W(x)gu(x)2W(x)(gu(x)+gu(x)Td)2]. (2.28)

    To update ρk

    To ensure that Predk0, we update the penalty parameter ρk utilizing the following scheme.

    Algorithm 2.1. If

    Predkρk2[hlk2hlk+hTlkμkdk2], (2.29)

    then, set

    ρk=2[qk(μkdk)qk(0)+ΔλTk(hlk+hTlkμkdk)]hlk2hlk+hTlkμkdk2+β0, (2.30)

    where β0>0 is a small fixed constant.

    Else, set

    ρk+1=max{ρk,σ2k}. (2.31)

    End if.

    For more details, see [15,16,17,18,19].

    To test the scaling step μkdk and update δk

    The framework to test the scaling step μkdk and update δk is presented in the following algorithm.

    Algorithm 2.2. Choose 0<γ1<γ2<1, 0<α1<1<α2, and δminδ0δmax.

    While AredkPredk(0,γ1) or Predk0.

    Set δk=α1dk and return to evaluate a new trial step and end while.

    If AredkPredk[γ1,γ2). Set xk+1=xk+μkdk and δk+1=max(δk,δmin).

    End if.

    If AredkPredk[γ2,1]. Set xk+1=xk+μkdk and δk+1=min{δmax,max{δmin,α2δk}}.

    End if.

    To update the positive parameter σk

    To update the positive parameter σk, we use the following scheme.

    Algorithm 2.3. If

    12Tpredk(ˉdtk)gu(xk)Wkgu(xk)min{gu(xk)Wkgu(xk),Δk}. (2.32)

    Set σk+1=σk.

    Else, set σk+1=2σk. End if.

    For more details see [18,23].

    Finally, the algorithm is stopped when ZTkxsk+gu(xk)Wkgu(xk)+hlkε1 or dkε2, for some ε1,ε2>0.

    A trust-region algorithm

    The framework of the trust-region algorithm to solve subproblem 2.12 are summarized as follows.

    Algorithm 2.4. (Trust-region algorithm)

    Step 0. Starting with x0F+. Evaluate y0 and λ0. Set s0=0.1, ρ0=1, σ0=1, and β0=0.1.

    Choose ε1, ε2, α1, α2, γ1, and γ2 such that 0<ε1, 0<ε2, 0<α1<1<α2, and 0<γ1<γ2<1.

    Choose δmin, δmax, and δ0 such that δminδ0δmax. Set k=0.

    Step 1. If ZTkxsk+gu(xk)Wkgu(xk)+hlkε1, then stop.

    Step 2. (How to compute dk)

    a). Evaluate the normal component dnk by solving subproblem (2.13).

    b). Evaluate the tangential component ˉdtk by solving subproblem (2.18).

    c). Set dk=dnk+Zkˉdtk.

    Step 3. If dkε2, then stop.

    Step 4. (How to compute μk)

    a). Compute the damping parameter μk using (2.23).

    b). Set xk+1=xk+μkdk.

    Step 5. Compute the vector yk+1, by using the following equation

    yk+1=skXk1eXk1Ykμkdk. (2.33)

    The above equation is obtained from (2.9).

    Step 6. Compute Wk+1 given by (2.1).

    Step 7. Evaluate λk+1 by solving the following subproblem

    minimizefk+1yk+1+hlk+1λ+ρkguk+1Wk+1guk+12. (2.34)

    Step 8. Using scheme 2.1 to update the penalty parameter ρk.

    Step 9. Using Algorithm (2.2) to test the scaled step μkdk and update the radius δk.

    Step 10. Update the positive parameter σk using scheme 2.3.

    Step 11. To Update the barrier parameter sk, set sk+1=sk10.

    Step 12. Set k=k+1 and go to Step 1.

    In the following subsection we will clarify the main steps for solving NBLP problem 1.1.

    The framework to solve NBLP problem 1.1 are summarized in the following algorithm.

    Algorithm 2.5. (An active-set-barrier-trust-region (ACBTR) algorithm)

    Step 1. Using KKT optimality conditions for the lower level problem 1.1 to reduce problem 1.1 to one-level problem 1.2.

    Step 2. Using Fischer-Burmeister function 1.3 with ϵ=0.001 to obtain the smooth problem 1.4 and which is equivalent problem 1.5.

    Step 3. Using An active set strategy with Barrier method to obtain subproblem 2.11.

    Step 4. Using trust-region Algorithm 2.4 to solve subproblem 2.11 and obtained approximate solution for problem 1.5.

    In the following section we will introduce a global convergence analysis for ACBTR algorithm.

    let Ω be a convex subset of n that contains all iterates xkF+ and (xk+μkdk)F+. To prove the global convergence theory of ACBTR algorithm on Ω, we assume that the following assumptions are hold.

    Assumptions

    [A1]. The functions fu(x), hl(x), and gu(x) are twice continuously differentiable function for all 0<xS.

    [A2]. All of fu(x), fu(x), 2fu(x), gu(x), gu(x), hl(x), hl(x), 2hli(x) for i=1,...,m, and (hlk)[(hlk)T(hlk)]1 are uniformly bounded in S.

    [A3]. The columns of the matrix hl(x) are linearly independent.

    [A4]. The sequence {λk} is bounded.

    [A5]. The sequence of matrices {˜Hk} is bounded.

    In the above assumptions, even though we assume that hl(x) has full column rank for all xkF+, we do not require gu(x) has full column rank for all xkF+. So, we may have other kinds of stationary points which are presented in the following definitions.

    Definition 3.1. A point xF+ is called a Fritz John (FJ) point if there exist γ, λ, and ν, not all zeros, such that

    τf(x)+hl(x)λ+gu(x)ν=0, (3.1)
    hl(x)=0, (3.2)
    Wgu(x)=0, (3.3)
    (ν)i(gu(x))i=0,i=1,...,m2, (3.4)
    τ,(ν)i0,i=1,...,m2. (3.5)

    Equations (3.1)–(3.5) are called FJ conditions. More details see [4].

    If τ0, then the point (x,1,λτ,ντ) is called a KKT point and FJ conditions are called the KKT conditions.

    Definition 3.2. A point xF+ is called an infeasible Fritz John (IFJ) point if there exist τ, λ, and ν such that

    τfu(x)+hl(x)λ+gu(x)ν=0, (3.6)
    hl(x)=0, (3.7)
    gu(x)Wgu(x)=0butWgu(x)>0, (3.8)
    (ν)i(gu(x))i0,i=1,...,m2, (3.9)
    τ,(ν)i0,i=1,...,m2. (3.10)

    Equations (3.6)–(3.10) are called IFJ conditions.

    If τ0, then the point (x,1,λτ,ντ) is called an infeasible KKT point and IFJ conditions are called infeasible KKT conditions.

    Lemma 3.1. Under assumptions A1A5, a subsequence {xki} of the iteration sequence asymptotically satisfies IFJ conditions if it satisfies:

    1). limkihl(xki)=0.

    2). limkiWkigu(xki)>0.

    3). limki{mindnm2Wki(guki+gTukiZkiμkiˉdt)2}=limkiWkiguki2.

    Proof. To simplify the notations, let the subsequence {ki} be renamed to {k}. Let ˆdk be a minimizer of minimizeˉdtWk(gu(xk)+gu(xk)TZkμkˉdt)2, then it satisfies

    ZTkgu(xk)Wkgu(xk)μk+ZTkgu(xk)Wkgu(xk)TZkμ2kˆdk=0. (3.11)

    From condition 3, we have

    limk{2μkˆdkTZTkgu(xk)Wkgu(xk)+μ2kˆdkTZTkgu(xk)Wkgu(xk)TZkˆdk}=0. (3.12)

    Now, we will consider two cases:

    Firstly, if limkˆdk=0, then from (3.11) we have limkμkZTkgu(xk)Wkgu(xk)=0.

    Secondly, if limkˆdk0, then multiplying (3.11) from the left by 2ˆdTk and subtract it from the limit (3.12), we have limkWkgu(xk)TZkμkˆdk2=0. This implies limkμkZTkgu(xk)Wkgu(xk)=0.

    That is, in either case, we have

    limkZTkgu(xk)Wkgu(xk)=0. (3.13)

    Take (νk)i=(Wkgu(xk))i, i=1,...,p. Since limkWkgu(xk)>0, then limk(νk)i0, for i=1,...,p and limk(νk)i>0, for some i. Therefore limkZTkgu(xk)νk=0. But this implies the existence of a sequence {λk} such that limk{hlkλk+gu(xk)νk}=0. Thus IFJ conditions are hold in the limit with τ=0.

    The following lemma clarify that, for any subsequence {xki} of the iteration sequence that asymptotically satisfies the FJ conditions, the corresponding subsequence of smallest singular values of {ZTkgu(xk)Wk} is not bounded away from zero. That is, asymptotically the gradient of the active constraints are linearly dependent.

    Lemma 3.2. Under assumptions A1A5, a subsequence {xki} of the iteration sequence asymptotically satisfies FJ conditions if it satisfies:

    1). limkih(xki)=0.

    2). For all ki, Wkiguki>0 and limkiWkiguki=0.

    3). limki{mindnpWki(guki+gTukiZkiμkiˉdt)2Wkiguki2}=1.

    Proof. The proof of this lemma is similar to the proof of Lemma 4.4 in [19].

    In the following section, we introduce some basic lemmas which are requisite to prove global convergence analysis for ACBTR algorithm.

    In this section, we introduce some significant lemmas which are required to prove global convergence theory for ACBTR algorithm.

    Lemma 3.3. Under assumptions A1 and A3, W(x)gu(x) is Lipschitz continuous in Ω.

    Proof. The proof of this lemma is similar to the proof of Lemma 4.1 of [12].

    From the above lemma, we conclude that gu(x)TW(x)gu(x) is differentiable and gu(x)W(x)gu(x) is Lipschitz continuous in Ω.

    Lemma 3.4. At any iteration k, let E(xk)m2×m2 be a diagonal matrix whose diagonal entries are

    (ek)i={1if (gu(xk))i<0 and (guk+1)i0,1if (gu(xk))i0 and (guk+1)i<0,0otherwise, (3.14)

    where i=1,2,...,m2. Then

    Wk+1=Wk+Ek. (3.15)

    Proof. See Lemma 6.2 of [17].

    Lemma 3.5. Under assumptions A1A3, there exists at any iteration k, a constant C1>0 independent of k such that

    Ekgu(xk)C1dk, (3.16)

    where Ekm2×m2 is the diagonal matrix whose diagonal entries are defined in (3.14).

    Proof. See Lemma 6.3 of [17].

    Lemma 3.6. Under assumptions A1A3, there exists at any iteration k, a constant 0<C2 independent of k such that

    dnkC2hlk. (3.17)

    Proof. Since dnk is normal to the tangent space, then we have

    dnk=hlk(hTlkhlk)1hTlkdk=hlk(hTlkhlk)1[hlk+hTlkdkhlk]hlk(hTlkhlk)1[hlk+hTlkdk+hlk]hlk(hTlkhlk)1hlk.

    where hlk+hTlkdkhlk. Using the assumptions A1A5, we have the desired result.

    The next lemma clarifies how delicate the definition of Aredk is as an approximation to Predk.

    Lemma 3.7. Under assumptions A1A5, there exists a constant 0<C3, such that

    |AredkPredk|C3μkρkdk2. (3.18)

    Proof. From the definition of Aredk (2.25) and using (3.15), we have

    Aredk=s(xk,λk)s(xk+1,λk)ΔλTkhlk+1+σk2[gu(xk)TWkgu(xk)gTuk+1(Wk+Ek)guk+1]+ρk[hlk2hlk+12].

    From the above equation, the definition of Predk (2.26), and using the inequality of Cauchy-Schwarz, we have

    |AredkPredk|12μ2k|dTk[Hlk2s(xk+ξ1dk)]dk|+μ2k2dTkX1kYkdk+σkμk(gu(xk)g(xk+ξ2μkdk))Wkgu(xk)dk+σkμ2k2dTk[gu(xk)Wkgu(xk)Tg(xk+ξ2dk)Wkg(xk+ξ2μkdk)T]dk+σkμ2k2Ekgu(xk)2+σkμkg(xk+ξ2μkdk)Ekgu(xk)dk+σkμ2k2dTk[g(xk+ξ2dk)Ekg(xk+ξ2μkdk)T]dk+μk|Δλk[hlkh(xk+ξ2dk)]Tdk|+2ρkμk|[(hlkh(xk+ξ2μkdk))hlk]Tdk|+ρkμ2k|dTk[hlkhTlkh(xk+ξ2μkdk)h(xk+ξ2μkdk)T]dk|,

    for some ξ1 and ξ2(0,1). By using assumptions A1A5, ρkσk, ρk1, and inequality (3.16), we have

    |AredkPredk|μk[κ1dk2+κ2ρkdk3+κ3ρkdk2hlk], (3.19)

    where κ1, κ2, and κ3 are positive constants. Since ρk1, dkδmax, and hlk is uniformly bounded, then inequality (3.18) hold.

    The proof of the following two lemmas depends on the fact that dnk and ˉdtk satisfy the condition of the fraction of Cauchy decrease.

    Lemma 3.8. Under assumptions A1A5, there exists a constant 0<C4 such that

    hlk2hlk+hTlkdnk2C4hlkmin{hlk,δk}. (3.20)

    Proof. From the definition of the normal Cauchy step (2.15), we will consider two cases:

    Firstly, if dncpk=δkhlkhlk(hlkhlk) and δkhTlkhlkhlk2hlkhlk3, then we have

    hlk2hlk+hTlkdncpk2=2(hlkhlk)TdncpkdncpTkhlkhTlkdncpk=2δkhlkhlkδ2khTlkhlkhlk2hlkhlk22δkhlkhlkδkhlkhlkδkhlkhlk. (3.21)

    Secondly, if dncpk=hlkhlk2hTlkhlkhlk2(hlkhlk) and δkhTlkhlkhlk2hlkhlk3, then we have

    hlk2hlk+hTlkdncpk2=2(hlkhlk)TdncpkdncpTkhlkhTlkdncpk=2hlkhlk4hTlkhlkhlk2hlkhlk4hTlkhlkhlk2=hlkhlk4hTlkhlkhlk2hlkhlk2hTlkhlkhlk2. (3.22)

    Using assumption A3, we have hlkhlkhlk(hTlkhlk)1hlk. Hence, from inequalities (2.14), (3.21), (3.22), and using assumption A2, we obtain the inequality (3.20).

    From the above lemma and the fact that

    hlk2hlk+hTlkμkdnk2μk[hlk2hlk+hTlkdnk2],

    where μk(0,1], then we have

    hlk2hlk+hTlkμkdnk2C4μkhlkmin{hlk,δk}. (3.23)

    From the way of updating ρk shown in Step 8 in Algorithm (2.4) and above inequality, we have

    Predk12C4μkρkhlkmin{hlk,δk}. (3.24)

    Lemma 3.9. Under assumptions A1A5, there exists a constant 0<C5, such that

    Tpredk(ˉdtk)C5ZTkqk(dnk)min{ZTkqk(dnk)ˉBk,Δk}. (3.25)

    Proof. From the definition of the tangential Cauchy step (2.21), we will consider two cases:

    Firstly, if ˉdtcpk=ΔkZTkqk(dnk)ZTkqk(dnk) and Δk(ZTkqk(dnk))TˉBkZTkqk(dnk)ZTkqk(dnk)3, then we have

    Tpredk(ˉdtcpk)=qk(dnk)qk(dnk+Zkˉdtcpk)=(ZTkqk(dnk))Tˉdtcpk12ˉdtcpTkˉBkˉdtcpk=ΔkZTkqk(dnk)Δ2k2ZTkqk(dnk)2[(ZTkqk(dnk))TˉBkZTkqk(dnk)]ΔkZTkqk(dnk)12ΔkZTkqk(dnk)12ΔkZTkqk(dnk). (3.26)

    Secondly, if ˉdtcpk=ZTkqk(dnk)2ZTkqk(dnk))TˉBkZTkqk(dnk)ZTkqk(dnk) and Δk(ZTkqk(dnk))TˉBkZTkqk(dnk)ZTkqk(dnk)3, then we have

    Tpredk(ˉdtcpk)=qk(dnk)qk(dnk+Zkˉdtcpk)=(ZTkqk(dnk))Tˉdtcpk12ˉdtcpTkˉBkˉdtcpk=ZTkqk(dnk)4(ZTkqk(dnk))TˉBkZTkqk(dnk)ZTkqk(dnk)42(ZTkqk(dnk))TˉBkZTkqk(dnk)=ZTkqk(dnk)42(ZTkqk(dnk))TˉBkZTkqk(dnk)ZTkqk(dnk)22ˉBk. (3.27)

    Hence, from inequalities (2.20), (3.26), (3.27), and using assumptions A1A5, we obtain the desired result.

    From (2.19), (3.25), and the fact that

    qk(μkdnk)qk(μk(dnk+Zkˉdtk))μk[qk(dnk)qk(dnk+Zkˉdtk)],

    where μk(0,1], then we have

    qk(μkdnk)qk(μkdk)C5μkZTkqk(dnk)min{ZTkqk(dnk)ˉBk,Δk}. (3.28)

    That is

    Tpredk(μkˉdtk)C5μkZTkqk(dnk)min{ZTkqk(dnk)ˉBk,Δk}. (3.29)

    The following lemma clarifies that if at any iteration k, the point xkF+ is not feasible, then algorithm ACBTR can not loop infinitely without finding an acceptable step.

    Lemma 3.10. Under assumptions A1A5. If hlkε>0, then the condition AredkjPredkjγ1 will be satisfied for some finite j.

    Proof. From inequalities (3.18), (3.24), and the condition hlkε, we have

    AredkPredk1∣=AredkPredkPredk2C3δ2kC4εmin{ε,δk}.

    Now as the trial step dkj gets rejected, δkj becomes small and eventually we will have

    |AredkjPredkj1|2C3δkjC4ε.

    For j finite, this inequality implies that, the acceptance rule will be met. This completes the proof.

    Lemma 3.11. Under assumptions A1A5 and the jth trial step of iteration k satisfies,

    dkjmin{(1γ1)C44C3,1}hlk, (3.30)

    then the step accepted.

    Proof. The proof of this lemma by contradiction. Assume that the inequality (3.30) holds and the step dkj is rejected. From inequalities (3.18), (3.24), and using inequality (3.30), we have

    (1γ1)<|AredkjPredkj|Predkj<2C3dkjC4hlk12(1γ1).

    This is a contradiction and this completes the proof.

    Lemma 3.12. Under assumptions A1A5 and for all jth trial step of any iteration k, then δkj satisfies

    δkjmin{δminb1,α1(1γ1)C44C3,α1}hlk, (3.31)

    where b1>0 is a constant.

    Proof. For all jth trial step of any iteration k, we will consider tow cases:

    Firstly, if j=1 and the step accepted, then δkδmin. Hence,

    δkδminδminb1hlk, (3.32)

    where b1=supxShlk. Then (3.31) holds in this case.

    Secondly, if j>1, then there exists at least one rejected trial step and hence from Lemma (3.11) we have

    dki>min{(1γ1)C44C3,1}hlk,

    for all i=1,2,...j1. From Algorithm 2.2 and dki is a rejected trial step, then we have

    δkj=α1dkj1>α1min{(1γ1)C44C3,1}hlk. (3.33)

    From inequalities (3.32) and (3.32) the desired result is obtained.

    The next lemma prove that as long as hlk is bound away from zero, the trust-region radius is also bound away from zero.

    Lemma 3.13. Under assumptions A1A5. If hlkε>0, then

    δkjC6,

    where C6>0 is a constant.

    Proof. The proof follows directly by taking

    C6=εmin{δminb1,α1(1γ1)C44C3,α1}, (3.34)

    in inequality (3.31).

    In this section, we clarify the convergence of the sequence of iteration when the positive parameter σk.

    Lemma 3.14. Under assumptions A1A5. If ρk is increased at any iteration k, then

    σkμkhlk2C7, (3.35)

    where C7 is a positive constant.

    Proof. From the way of updating the positive penalty parameter ρk, we notice that ρk is increased at a given iteration k according to one of the two rules (2.31) or (2.30). Suppose that ρk is increased according to the rule (2.30), then

    ρk2[hlk2hlk+hTlkμkdk2]=[qk(μkdk)qk(0)+ΔλTk(hlk+hTlkμkdk)]+b02[hlk2hlk+hTlkμkdk2].

    Using inequalities (3.23) and (3.31), then we have

    ρk2C4μkhlk2min{δminb1,α1(1γ1)C44C3,α1}xs(xk,λk))Tμkdk+12μ2kdTk˜Hkdk+ΔλTk(hlk+hTlkμkdk)+σk2[Wk(gu(xk)+gu(xk)Tμkdk)2Wkgu(xk)2]+b02[hlk2hlk+hTlkμkdk2].

    According to rule (2.31), we have ρkσ2k. Hence

    σ2k2C4μkhlk2min{δminb1,α1(1γ1)C34C4,α1}(xs(xk,λk))Tμkdk+12μ2kdTk˜Hkdk+ΔλTk(hlk+hTlkμkdk)+σk2[Wk(gu(xk)+gu(xk)Tμkdk)2+b02hlk2.

    Then,

    σk2C4μkhlk2min{δminb1,α1(1γ1)C34C4,α1}1σk[(xs(xk,λk))Tμkdk+12μ2kdTk˜Hkdk+ΔλTk(hlk+hTlkμkdk)+b02hlk2]+12Wk(gu(xk)+gu(xk)Tμkdk)21σk[|xs(xk,λk))Tdk|+12|dTk˜Hkdk|+|ΔλTk(hlk+hTlkμkdk)|+b02hlk2]+12Wk(gu(xk)+gu(xk)Tμkdk)2,

    where μk1. Using the Cauchy-Schwarz inequality, assumptions A3A5, and the fact that dkδmax, the proof is completed.

    Lemma 3.15. Under assumptions A1A5. If σk and there exists an infinite subsequence {ki}of the iteration sequence at which ρk is increased, then

    limkihki=0. (3.36)

    Proof. The proof follows directly from limkiμki=1, σk, and Lemma (3.14).

    Theorem 3.1. Under assumptions A1A5. If σk, then

    limkhlk=0. (3.37)

    Proof. The proof similar to the proof of Theorem 4.18 [19].

    We notice from the way of updating σ that, the sequence {σk} is unbounded only when there exist an infinite subsequence of indices {ki}, at which

    12Tpredk(μkˉdtk)<gu(xk)Wkgu(xk)min{gu(xk)Wkgu(xk),Δk}. (3.38)

    The following lemma shows that, if σk and lim supkWkgu(xk)>0, then the iteration sequence generated by the algorithm ACBTR has a subsequence that satisfies IFJ conditions in the limit.

    Lemma 3.16. Under assumptions A1A5. If σk and there exists a subsequence {kj} of indices indexing iterates that satisfy Wkgu(xk)ε>0 for all k{kj}, then a subsequence of the iteration sequence indexed {kj} satisfies the IFJ conditions as k.

    Proof. The proof is by contradiction. Let the subsequence {kj} be renamed to {k} to simplify the notation. Suppose that there is no a subsequence of the sequence of iterates that satisfies IFJ conditions in the limit. Then we have |Wkgu(xk)2Wk(gu(xk)+gu(xk)TZkμkˉdtk)2|ε1>0 from Lemma (3.1). Also we have Zkgu(xk)Wkgu(xk)ε2>0 from (3.13). Since

    ZTkgu(xk)Wk(gu(xk)+gu(xk)Tdnk)ZTkgu(xk)Wkgu(xk)ZTkgu(xk)Wkgu(xk)Tdnk,

    and using (3.17), then we have

    ZTkgu(xk)Wk(gu(xk)+gu(xk)Tdnk)ε2C2ZTkgu(xk)Wkgu(xk)Thlk.

    But {hlk} convergence to zero and ZTkgu(xk)Wkgu(xk)T is bounded. Then ZTkgu(xk)Wk(gu(xk)+gu(xk)Tdnk)ε22 and therefore

    ZTkqk(dnk)σkZTkgu(xk)Wk(gu(xk)+gu(xk)Tdnk)ZTk(xsk+˜Hkdnk)σkε22ZTk(xsk+˜Hkdnk).

    Hence inequality (3.29) can be written as follows

    Tpredk(μkˉdtk)12C5μkσk[ε221σkZTk[xsk+˜Hkdnk]min{Δk,ε221σkZTk[xsk+˜Hkdnk]ZTkgu(xk)Wkgu(xk)TZk+1σkZTk˜HkZk}.

    That is for k sufficiently large we have

    Tpredk(μkˉdtk)ε24C5μkσkmin{Δk,ε22ZTkgu(xk)Wkgu(xk)TZk}.

    Since σk, then there exists infinite number of acceptable iterates at which (3.38) holds. That is, there exists a contradiction unless σkΔk is bounded. Hence Δk0 and therefore dk0. Now we will consider two cases:

    Firstly, if Wkgu(xk)2Wk(gu(xk)+gu(xk)TZkμkˉdtk)2>ε1, we have

    σk{Wkgu(xk)2Wk(gu(xk)+gu(xk)TZkμkˉdtk)2|}>σkε1. (3.39)

    Thus, from (2.19), (3.39), and using assumptions A3A5, we have Tpredk(μkˉdtk). That is, the left hand side of inequality (3.38) goes to infinity while the right hand side of the same inequality goes to zero. That is, there exists a contradiction in this case.

    Secondly, if Wkgu(xk)2Wk(gu(xk)+gu(xk)TZkμkˉdtk)2<ε1, then

    σk{Wkgu(xk)2Wk(gu(xk)+gu(xk)TZkμkˉdtk)2|}<σkε1,

    where σk as k. Similar to the above case, Tpredk(μkˉdtk). This gives a contradiction in this case with Tpredk(μkˉdtk)>0. This two contradictions prove the lemma.

    The following lemma shows that if limkσk and lim infkWkgu(xk)=0, then the iteration sequence generated by the algorithm ACBTR has a subsequence that satisfies FJ conditions in the limit.

    Lemma 3.17. Under assumptions A1A5. Let {kj} be a subsequence of iterates that satisfy Wkgu(xk)>0 for all k{kj} and limkjWkjgkj=0. If limkσk=, then a subsequence of {kj} satisfies FJ conditions in the limit.

    Proof. The proof of this lemma is similar to the proof of Lemma 4.20 [19].

    In this section, we will continue our discussion assuming that the parameter σk is bounded. We mean that there exists an integer ˉk such that for all kˉk, σk=ˉσ<, and

    12Tpredk(μkˉdtk)gu(xk)Wkgu(xk)min{gu(xk)Wkgu(xk),Δk}. (3.40)

    From assumptions A3, A5, and assumption (3.40), we can say that there exists a constant 0<b2 such that for all kˉk

    Bkb2,ZTkBkb2,andZTkBkZkb2, (3.41)

    where Bk=˜Hk+ˉσgu(xk)Wkgu(xk)T.

    Lemma 3.18. Under assumptions A1A5, there exists a constant C8>0 such that

    qk(0)qk(μkdnk)ΔλTk(hlk+hTlkμkdk)C8μkhlk, (3.42)

    for all kˉk.

    Proof. By using the definition (2.28), we have

    qk(0)qk(μkdnk)=(xs(xk,λk))Tμkdnk12μ2kdnTk˜Hkdnk+ˉσ2[Wkgu(xk)2Wk(gu(xk)+gu(xk)Tμkdnk)2]=(xs(xk,λk)+ˉσgu(xk)Wkgu(xk))Tμkdnk12μ2kdnkT(˜Hk+ˉσgu(xk)Wkgu(xk)T)dnk=(xs(xk,λk)+ˉσgu(xk)Wkgu(xk))Tμkdnk12μ2kdnkTBkdnk.

    That is,

    qk(0)qk(μkdnk)ΔλTk(hlk+hTlkμkdk)=(xs(xk,λk)+ˉσgu(xk)Wkgu(xk))Tμkdnk12μ2kdnkTBkdnkΔλTk(hlk+hTlkμkdk)μkxs(xk,λk)dnkˉσμkgu(xk)Wkgu(xk)dnkμ2kBkdnk2Δλkhlk+hTlkμkdkμk[xs(xk,λk)+ˉσgu(xk)Wkgu(xk)+Bkdnk]dnkμkΔλkhlkdnk.

    By using inequality (3.17), we can obtain the following inequality

    qk(0)qk(μkdnk)ΔλTk(hlk+hTlkμkdk)μk[(xs(xk,λk)+ˉσgu(xk)Wkgu(xk)+Bkdnk+Δλkhlk)C2]hlk.

    From assumptions A3A5, the fact that dnkδmax, and using (3.41), then for all kˉk there exists a constant C8>0 such that inequality (3.42) hold. This completes the proof.

    Lemma 3.19. Under assumptions A1A5, we have

    Predk12C5μkZTkqk(dnk)min{Δk,ZTkqk(dnk)ˉBk}+gu(xk)Wkgu(xk)min{gu(xk)Wkgu(xk),Δk}C8μkhlk+ρk[hlk2hlk+hTlkμkdk2], (3.43)

    for all kˉk.

    Proof. Since the definition of Predk (2.27) can be written as follows

    Predk=[qk(μkdnk)qk(μkdk)]+[qk(0)qk(μkdnk)ΔλTk(hlk+hTlkμkdk)]+ρk[hlk2hlk+hTlkμkdk2].

    and by using (2.19), we have

    Predk=12Tpredk(μkˉdtk)+12Tpredk(μkˉdtk)+[qk(0)qk(μkdnk)ΔλTk(hlk+hTlkμkdk)]+ρk[hlk2hlk+hTlkμkdk2].

    Using inequalities (3.29), (3.40), and (3.42), we can obtain the desired result.

    Lemma 3.20. Under assumptions A1A5. If ρk increased at iteration k, then there exists a constant C9>0 such that

    ρkμkmin{hlk,δk}C9. (3.44)

    Proof. Since ρk is increased at iteration k, then from (2.30) we have

    ρk2[hlk2hlk+hTlkμkdk2]=[qk(μkdk)qk(μkdnk)]+[qk(μkdnk)qk(0)]+ΔλTk(hlk+hTlkμkdk)+b02[hlk2hlk+hTlkμkdk2]=12Tpredk(μkˉdtk)12Tpredk(μkˉdtk)+[qk(μkdnk)qk(0)+ΔλTk(hlk+hTlkμkdk)]+b02[hlk2hlk+hTlkμkdk2].

    Applying inequality (3.23) to the left hand side and inequalities (3.29), (3.40), and (3.42) to the right hand side, we obtain

    ρk2C4μkhlkmin{δk,hlk}C52μkZTkqk(dnk)min{Δk,ZTkqk(dnk)ˉBk}gu(xk)Wkgu(xk)min{gu(xk)Wkgu(xk),Δk}+C8μkhlk+b02hlk2C8μkhlk+b02hlk2.

    The rest of the proof follows using the fact that μk1 and assumption A3.

    Lemma 3.21. Under assumptions A1A5. If ZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk))+gu(xk)Wkgu(xk)ε>0 and hlkηδk where η>0 is given by

    ηmin{ε6b2C2δmax,32C2,C5ε12C8min{2ε3δmax,1},ε4C8min{εδmax,1}}. (3.45)

    then there exists a constant C10>0, such that

    PredkC10μkδk+ρk[hlk2hlk+hTlkμdk2]. (3.46)

    Proof. Suppose that ZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk))ε2, then gu(xk)Wkgu(xk)ε2. From inequality (3.17) and using (3.41), we have

    ZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk)+Bkdnk)ZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk))ZTkBkdnkZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk))b2C2hlkε2b2C2ηδk.

    Since ηε6b2C2δmax, then we have

    ZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk)+Bkdnk)ε2ε6ε3. (3.47)

    Because Δk=δk2dnk2 and dnkC2hlkC2ηδkC232C2δk=32δk, hence Δ2k=δ2kdnk2δ2k34δ2k=14δ2k. Thus,

    Δk12δk. (3.48)

    From inequalities (3.43), (3.47) and (3.48), we have

    Predk12C5μkZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk)+Bkdnk)min{ZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk)+Bkdnk),12δk}+gu(xk)Wkgu(xk)min{gu(xk)Wkgu(xk),12δk}C8μkhlk+ρk[hlk2hlk+hTlkμdk2]C5μkε12δkmin{2ε3δmax,1}+με4min{εδmax,1}δk12C8ημkδk12C8ημkδk+ρk[hlk2hlk+hTlkμkdk2].

    Since ηmin{C5ε12C8min{2ε3δmax,1},ε4C8min{εδmax,1}}, then we have

    PredkC5μkε24min{2ε3δmax,1}δk+με8min{2εδmax,1}δk+ρk[hlk2hlk+hTlkμkdk2].

    The result follows if we take C10=min{C5ε24min{2ε3δmax,1},ε8min{2εδmax,1}}.

    We can easily see from lemma 3.21 that, at any iteration at which ZTk(xs(xk,λk)+ˉσgu(xk)Wkgu(xk))+gu(xk)Wkgu(xk)ε and hlkηδk, where η is given by (3.45), there is no need to increase the value of ρk. It is only increased when hlkηδk.

    Lemma 3.22. Under assumptions A1A5. If ρkj increased at the jth trial iterate of any iteration k, then

    ρkjμkjhlkC11, (3.49)

    where C11>0 is a constant.

    Proof. The proof of this lemma follows directly from inequalities (3.31) and (3.44).

    Lemma 3.23. Under assumptions A1A5. If ρk, then

    limkihlki=0, (3.50)

    where {ki} is a subsequence of iterates at which the penalty parameter is increased.

    Proof. The proof of this lemma follows directly from Lemma 3.22 and limkμk=1.

    In this section, we will prove the main global convergence theorems for the proposed algorithm ACBTR.

    Theorem 3.2. Assume that assumptions A1A5 hold, then the sequence of iterates generated by ACBTR algorithmsatisfies

    limkhlk=0. (3.51)

    Proof. Suppose that lim supkhlkε, where ε>0 is a constant. Then there exists an infinite subsequence of indices {kj} indexing iterates that satisfy hkjε2. From Lemma (3.10), we know that there exists an infinite sequence of acceptable steps, so to simplify, we assume that all members of the sequence {kj} are acceptable iterates. Now we will consider two cases:

    Firstly, we consider that, if {ρk} is unbounded. Then there exists an infinite number of iterates {ki} at which ρk is increased. From Lemma (3.23) and for k sufficiently large, we can say {ki}{kj}=. Let kζ1 and kζ2 be two consecutive iterates at which ρk is increased and kζ1<k<kζ2, for any k{kj}. Notice that, ρk is the same for all iterates between kζ1 and kζ2. Since all the iterates of {kj} are acceptable, then

    ΦkΦk+1=Aredkγ1Predk,

    for all k{kj}. Using inequality (3.24), we have

    ΦkΦk+1ρkγ1C4μk2hlkmin{hlk,δk}.

    Summing over all acceptable iterates that lie between kζ1 and kζ2, we have

    kζ21k=kζ1ΦkΦk+1ρkγ1C4μkε4min{^C6,ε2},

    where ^C6 is as C6 in (3.34), with ε is replaced by ε2. Hence,

    s(xkζ1,μkζ1;ˉσ)s(xkζ2,μkζ2;ˉσ)ρkζ1+[hlkζ12hkζ22]γ1C4ε4min{^C6,ε2}.

    Since ρk, then for kζ1 sufficiently large, we have

    s(xkζ1,λkζ1;ˉσ)s(xkζ2,λkζ2;ˉσ)ρkζ1<γ1C4ε8min{^C6,ε2}.

    Therefore,

    hlkζ12hlkζ22γ1C4ε8min{^C6,ε2}.

    But this leads to a contradiction with Lemma (3.23) unless ε=0.

    Secondly, if {ρk} is bounded, then there exists an integer ˜k such that for all k˜k, ρk=˜ρ. Hence from inequality (3.24), we have for any ˆk{kj} and ˆk˜k

    Predˆk˜ρC4μˆk2hlˆkmin{δˆk,hlˆk}ε˜ρC4μˆk4min{ε2δmax,1}δˆk. (3.52)

    Since all the iterates of {kj} are acceptable, then for any ˆk{kj}, we have

    ΦˆkΦˆk+1=Aredˆkγ1Predˆk.

    Using inequality (3.52), we have

    ΦˆkΦˆk+1γ1ε˜ρC4μˆk4min{ε2δmax,1}δˆk.

    Using Lemma (3.13), we have

    ΦˆkΦˆk+1γ1ε˜ρC4μˆk4min{ε2δmax,1}^C6>0.

    Thus there exists a contradiction with the fact that {Φk} is bounded when the sequence of the penalty parameter {ρk} is bounded. Hence, in both cases the supposition is not correct and the theorem is proved.

    Theorem 3.3. Under assumptions A1A5, the sequence of iterates generated by ACBTR algorithm satisfies

    lim infk[ZTkxsk+gu(xk)Wkgu(xk)]=0. (3.53)

    Proof. To prove this theorem we will prove

    lim infk[ZTk(xsk+ˉσgu(xk)Wkgu(xk))+gu(xk)Wkgu(xk)]=0, (3.54)

    by contradiction. That is, we assume ZTk(xsk+ˉσgu(xk)Wkgu(xk))+gu(xk)Wkgu(xk)>ε and there exists an infinite subsequence {ki} of the iteration sequence such that hlki>ηδki. Since hlki0 as ki0, then

    limkiδki=0.

    Let kj be any iteration in {ki}. Then we will consider two cases:

    Firstly, if {ρk} is unbounded and the trial step j1 of iteration k is rejected. Thus hlk>ηδkj=α1ηdkj1. Hence, from inequalities (3.24), (3.19), and dkj1 was rejected, we have

    (1γ1)|Aredkj1Predkj1|Predkj1[2κ1dkj1+2κ2ρkj1dkj1hlk+2κ3ρkj1dkj12]ρkj1C4min(α1η,1)hlk2κ1ρkj1C4α1ηmin(α1η,1)+2κ2+2κ3α1ηC4α1ηmin(α1η,1)dkj1.

    Since {ρk} is unbounded, then there exists an iterate ˆk sufficiently large such that for all kˆk, we have

    ρkj1<4κ1C4α1ηmin(α1η,1)(1γ1).

    and

    dkj1C4α1ηmin(α1η,1)(1γ1)4(κ2+κ3α1η).

    From the way of updating the radius of the trust region, we have

    δkj=α1dkj1C4α21ηmin(α1η,1)(1γ1)4(κ2+κ3α1η).

    But this is a contradiction and this means that δkj can not go to zero in this case.

    Secondly, if {ρk} is bounded and there exists an integer ˉk and a constant ˉρ such that for all kˉk, ρk=ˉρ. Let j be a trial step of iteration k at which hk>ηδkj. Now we will consider the following two cases:

    I). If j=1, then from our way of updating the radius of the trust-region, we have δkjδmin. That is, δkj is bounded in this case.

    II). If j>1 and hlk>ηδkl for all l=1,,j, then for all rejected trial steps l=1,,j1 of iteration k, we have

    (1γ1)|AredklPredkl|Predkl2C3dklC4min(η,1)hlk.

    That is

    δkj=α1dkj1α1C4min(η,1)(1γ1)hlk2C3α1C4min(η,1)(1γ1)η2C3δk1α1C4min(η,1)(1γ1)η2C3δmin.

    This means that, δkj is bounded.

    Otherwise, if j>1 and hlk>ηδkl holds for some l, then there exists an integer β1 such that hlk>ηδkl holds for l=β1+1,...,j and hlkηδkl for l=1,...,β1. As in the above case, we can write

    δkjα1C4min(α,1)(1γ1)2C3hlkα1C4min(η,1)(1γ1)η2C3δkβ1+1. (3.55)

    But from the way of updating the radius of the trust-region, we have

    δkβ1+1α1dkβ1. (3.56)

    Since hlkηδkl for l=1,...,β1, then from Lemma (3.21) and the fact that dkβ1 is rejected, we have

    (1γ1)|Aredkβ1Predkβ1|Predkβ12C3ˉρdkβ1C10.

    This implies

    dkβ1C10(1γ1)2C3ˉρ.

    This implies that, dkβ1 is bounded. Hence, δkj is bounded in this case too. But this is a contradiction. That is hlkηδkj for all kj sufficiently large.

    Letting kjˉk and using Lemma (3.21), we have

    ΦkjΦkj+1=Aredkjγ1Predkjγ1C10δkj.

    As k, then

    limkδkj=0. (3.57)

    That is δkj is not bounded below. But this leads to a contradiction and to prove this contradiction we will consider the following two cases:

    i). If kj>ˉk and the step was accepted at j=1, then δkδmin. Hence δkj is bounded in this case.

    ii). If j>1 and there exists at least one rejected trial step dkj1. Then from Lemmas (3.7) and (3.21), we have

    (1γ1)<ˉρC3dkj12C10δkj1.

    From the way of updating δkj we have

    δkj=α1dkj1>α1C10(1γ1)ˉρC3.

    Hence δkj is bounded in this case too. But this contradicts (3.57). This means that, the supposition is incorrect. Hence,

    lim infk[ZTk(xsk+ˉσgu(xk)Wkgu(xk))+gu(xk)Wkgu(xk)]=0.

    But this also implies (3.53). This completes the proof of the theorem.

    From the above two theorems, we conclude that, given any ε>0, the algorithm terminates because ZTkxsk+gu(xk)Wkgu(xk)+hlk<ε, for some finite k.

    Algorithm ACBTR was implemented as a MATLAB code and run under MATLAB version 8.2.701 (R2013b) 64-bit(win64). We begin by a starting point x0F+ and the following parameter setting is used: δmin=104, δ0=max(dcp0,δmin), δmax=104δ0, γ1=104, γ2=0.75, α1=0.5, α2=2 and ε=108.

    Secondly, an extensive variety of possible numeric NBLP problems are introduced to clarify the effectiveness of the proposed ACBTR algorithm.

    For each test problem, 10 independent runs with different initial starting point are proceeded to observe the matchmaking of the results. Statistical results of all test problems are summarized in Table 1. The results in Table 1 show that the resuls by the ACBTR Algorithm (2.5) are approximate or equal to those by the compared algorithms in the literature.

    Table 1.  Comparisons of the results by ACBTR Algorithm 2.5 and Methods in reference.
    Problem (t,v) fu iter CPUs (t,v) fu
    fl nfunc time fl
    name ACBTR ACBTR ACBTR ACBTR Ref. Ref.
    prob1 [34] (0.8438, 0.7657, -2.0769 14 1.77 (0.8438, 0.7657, 0) -2.0769
    1.121e-8) -0.5863 16 -0.5863
    prob2 [34] (0.609, 0.391, 0, 0.6086 12 2.1 (0.609, 0.391, 0, 0.6426
    0, 1.828) 1.6713 15 0, 1.828) 1.6708
    prob3 [34] (0.97, 3.14, -8.92 9 3.09 (0.97, 3.14, -8.92
    2.6, 1.8) -6.05 10 2.6, 1.8) -6.05
    prob4 [34] (.5, .5, .5, .5) -1 13 1.87 (0.5, 0.5, 0.5, 0.5) -1
    0 15 0
    prob5 [34] (10.03, 9.9691) 100.58 6 1.8 (10.03, 9.969) 100.58
    0.0012 8 0.001
    prob6 [34] (1.6879, 0.8805, 0) -1.3519 8 4.5 NA 3.57
    7.4991 12 2.4
    prob7 [34] (1, 0) 17 10 2.05 (1, 0) 17
    1 11 1
    prob8 [34] (0.75, 0.75, -2.25 9 1.05 (3/2, 3/2, 3/2, -2.1962
    0.75, 0.75) 0 11 3/2) 0
    prob9 [29] (11.138, 5) 2209.8 11 1.85 (11.25, 5) 2250
    222.52 13 197.753
    prob10 [29] (1, 0, 7.6287e-08) 7.6287e-08 7 3.34 (1, 0, 1) 1
    -7.6287e-08 9 -1
    prob11 [44] (0, 0.9, 0, 0.6, 0.4, 0, 0, 0) -29.2 8 42.311 (0, 0.9, 0, 0.6, 0.4, 0, 0, 0) -29.2
    0.3148 11 0.3148
    prob12 [29] (3, 5) 9 10 2.23 (3, 5) 9
    0 14 0
    prob13 [44] (0, 1.7405, -15.548 6 2.5 (0, 2, 1.875, 0.9063) -12.68
    1.8497, 0.9692) -1.4247 7 -1.016
    prob14 [44] (10.016, 0.81967) 81.328 8 2.15 (10.04, 0.1429) 82.44
    -0.3359 11 0.271

     | Show Table
    DownLoad: CSV

    In Table 1, we adding the average of number of iterations (iter), the average of number of function evaluations (nfunc), the average of value of CPU time (CPUs) per seconds.

    For comparison, we have included the corresponding results of the avarge value of CPU time (CPUs) which are obtained by Methods in [34] (Table 2), [29] (Table 3), and [44] (Table 4) respectively. It is obviously from the results that our algorithm ACBTR is qualified for treating NBLP problems even the upper and the lower levels are convex or not and the results converge to the optimal solution which is similarly or approximate to the optimal that reported in literature. Finally, it is obviously from the comparison between the solutions obtained by using ACBTR algorithm with literature, that ACBTR is able to find the optimal solution of all problems by a small number of iterations, small number of function evaluations, and less time.

    Table 2.  Comparisons of the results by ACBTR (2.5) and Method [34].
    Problem (t,v) fu CPUs (t,v) fu CPUs
    fl fl
    name ACBTR ACBTR ACBTR method [34] method [34] method [34].
    prob1 (0.8438, 0.7657, -2.0769 1.77 (0.8462, 0.769 2, 0) -2.0769 1.734
    1.121e-8) -0.5863 -0.5917
    prob2 (0.609, 0.391, 0, 0.6086 2.1 (0.6111, 0.3889, 0, 0.6389 2.375
    0, 1.828) 1.6713 0, 1.8333) 1.6806
    prob3 (0.97, 3.14, -8.92 3.9 (1.031 6, 3.097 8, -8.9172 3.315
    2.6, 1.8) -6.05 2.597 0, 1.792 9) -6.137 0
    prob4 (0.5, 0.5, 0.5, 0.5) -1 1.87 (0.5, 0.5, 0.5, 0.5) -1 1.576
    0 0
    prob5 (10.03, 9.9691) 100.58 1.8 (10, 10) 100 1.825
    0.0012 0
    prob6 (1.6879, 0.8805, 0) -1.3519 4.5 (1.8889, 0.8889, 0) -1.2099 4.689
    7.4991 7.6173
    prob7 (1, 0) 17 2.05 (1, 0) 17 1.769
    1 1
    prob8 (0.75, 0.75, -2.25 1.05 (0.75, 0.75, -2.25 1.124
    0.75, 0.75) 0 0.75, 0.75) 0

     | Show Table
    DownLoad: CSV
    Table 3.  Comparisons of the results by ACBTR (2.5) and Method [29].
    Problem (t,v) fu CPUs (t,v) fu CPUs
    fl fl
    name ACBTR ACBTR ACBTR method [29] method [29] method [29].
    prob9 (11.138, 5) 2209.8 1.85 (11.25, 5) 2250 2.21
    222.52 197.753
    prob10 (1, 0, 7.6287e-08) 7.6287e-08 3.34 (1, 0, -1) -1 3.38
    -7.6287e-08 1
    prob12 (3, 5) 9 2.23 (3, 5) 9 -
    0 0

     | Show Table
    DownLoad: CSV
    Table 4.  Comparisons of the results by ACBTR (2.5) and Method [44].
    Problem (t,y) fu CPUs (t,y) fu CPUs
    fl fl
    name ACBTR ACBTR ACBTR method [44] method [44] method [44].
    prob3 (0.97, 3.14, -8.92 3.9 (1.03, 3.097, -8.92 11.854
    2.6, 1.8) -6.05 2.59, 1.79 -6.14
    prob5 (10.03, 9.9691) 100.58 1.8 (10, 10) 100.014 5.888
    0.0012 4.93e-7
    prob6 (1.6879, 0.8805, 0) -1.3519 4.5 (1.8888, 0.888) -1.2091 25.332
    7.4991 7.6145
    prob11 (0, 0.9, 0, 0.6, 0.4, 0, 0, 0) -29.2 42.311 (0, 0.9, 0, 0.6, 0.4, 0, 0, 0) -29.2 107.55
    0.3148 0.3148
    prob13 (0, 1.7405, -15.548 2.5 (4.4e-7, 2, -12.65 14.42
    1.8497, 0.9692) -1.4247 1.875, 0.9063) -1.021
    prob14 (10.016, 0.81967) 81.328 2.15 (10.0164, 0.8197) 18.3279 4.218
    -0.3359 -0.3359

     | Show Table
    DownLoad: CSV

    Problem 1 [34]:

    mintfu=v21+v22+t24ts.t.0t2,minvfl=v21+0.5v22+v1v2+(13t)v1+(1+t)v2,s.t.2v1+v22t1,v10,v20.

    Problem 2 [34]:

    mintfu=v21+v23v1v34v27t1+4t2s.t.t1+t21,t10,t20minvfl=v21+0.5v22+0.5v23+v1v2+(13t1)v1+(1+t2)v2,s.t.2v1+v2v3+t12t2+20,v10;v20v30.

    Problem 3 [34]:

    mintfu=0.1(t21+t22)3v14v2+0.5(v21+v22)s.t.minvfl=0.5(v21+5v22)2v1v2t1v1t2v2,s.t.0.333v1+v220,v10.333v220,v10,v20,

    Problem 4 [34]:

    mintfu=t212t1+t222t2+v21+v22s.t.t10,t20minvfl=(v1t1)2+(v2t2)2,s.t.0.5v11.5,0.5v21.5,

    Problem 5 [34]:

    mintfu=t2+(v10)2s.t.t+v0,0t15,minvfl=(t+2v30)2,s.t.t+v20,0v20,

    Problem 6 [34]:

    mintfu=(t1)2+2v212ts.t.t0,minvfl=(2v14)2+(2v21)2+tv1,s.t.4t+5v1+4v212,4t5v1+4v24,4t4v1+5v24,4t+4v1+5v24,v10,v20,

    Problem 7 [34]:

    mintfu=(t5)2+(2v+1)2s.t.t0,minvfl=(2v1)21.5tv,s.t.3t+v3,t0.5v4,t+v7,v0.

    Problem 8 [34]:

    mintfu=t213t1+t223t2+v21+v22s.t.t10,t20,minvfl=(v1t1)2+(v2t2)2,s.t.0.5v11.5,0.5v21.5,

    Problem 9 [29]:

    mintfu=16t2+9v2s.t.4t+v0,t0,minvfl=(t+v20)4,s.t.4t+v500,v0.

    Problem 10 [29]:

    mintfu=t3v1+v2s.t.0t1,minvfl=v2s.t.tv110,v21+tv21,v20.

    Problem 11 [44]:

    mintfu=8t14t2+4v140v24v3s.t.t10,t20minvfl=1+t1+t2+2v1v2+v36+2t1+v1+v23v3,s.t.v1+v2+v3+v4=1,2t1v1+2v20.5v3+v5=1,2t2+2v1v20.5v3+v6=1,vi0,i=1,...,6.

    Problem 12 [29]:

    \begin{array}{ll} \min \nolimits _{t} & f_u = (t-3)^2+(v-2)^2\\ s.t. \; & -2t+v-1\leq 0,\\ & t-2v+2\leq 0,\\ & t+2v-14\leq 0,\\ & 0\leq t\leq 8,\\ \;\;\;\;\;\min \nolimits _{v} &f_l = (v-5)^2,\\ s.t. \;& v\geq 0. \end{array}
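    Problem 12 also admits a simple hand check: for every feasible $t$ the lower-level problem $\min_{v\geq 0}(v-5)^2$ has the unique solution $v = 5$, so the bilevel problem collapses to minimizing $(t-3)^2+(5-2)^2$ over the upper-level constraints. Since $t = 3$ is feasible ($-2(3)+5-1\leq 0$, $3-2(5)+2\leq 0$, and $3+2(5)-14\leq 0$), the optimum is $(t,v) = (3,5)$ with $f_u = 9$ and $f_l = 0$, exactly as the prob12 row of Table 3 reports.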

    Problem 13 [44]:

    \begin{array}{ll} \min \nolimits _{t} & f_u = -t_1^2-3t_2^2-4v_1+v_2^2\\ s.t. \; & t_1^2+2t_2\leq 4, \; t_1\geq 0, \; t_2\geq 0,\\ \;\;\;\;\;\min \nolimits _{v} &f_l = 2t_1^2+v_1^2-5v_2,\\ s.t. \;& t_1^2-2t_1+2t_2^2-2v_1+v_2\geq -3,\\ & t_2+3v_1-4v_2\geq 4,\\ & v_1\geq 0, \; v_2\geq 0. \end{array}

    Problem 14 [44]:

    \begin{array}{ll} \min \nolimits _{t} & f_u = (t-1)^2+(v-1)^2\\ s.t. \; & t \geq 0,\\ \;\;\;\;\;\min \nolimits _{v} &f_l = 0.5v^2+500v-50tv\\ s.t. \;&v\geq 0. \end{array}

    In this paper, we introduced an effective algorithm for solving the NBLP problem with positive variables. The algorithm is based on using the KKT conditions with the Fischer-Burmeister function to transform the NBLP problem into an equivalent smooth SONP problem. An active-set strategy with a barrier method and a trust-region mechanism is then used to solve the smooth SONP problem and to ensure global convergence from any starting point. The ACBTR algorithm reduces the number of iterations and the number of function evaluations. The projected Hessian technique is used in the ACBTR algorithm to overcome the difficulty of an infeasible trust-region subproblem. A global convergence theory for the ACBTR algorithm is established under five standard assumptions.

    A preliminary numerical experiment with the algorithm was presented and its performance was reported. The numerical results show that our approach is of value and merits further investigation. For future work, several questions remain to be answered:

    ● Our approach is used to transform problem (1.2), which is not smooth, into a smooth problem.

    ● Using the interior-point method guarantees quadratic convergence to a stationary point.

    The authors declare that there is no conflict of interest in this paper.



    [1] D. Aksen, S. Akca, N. Aras, A bilevel partial interdiction problem with capacitated facilities and demand outsourcing, Comput. Oper. Res., 41 (2014), 346–358. https://doi.org/10.1016/j.cor.2012.08.013 doi: 10.1016/j.cor.2012.08.013
    [2] Y. Abo-Elnaga, M. El-Shorbagy, Multi-sine cosine algorithm for solving nonlinear bilevel programming problems, Int. J. Comput. Int. Sys., 13 (2020), 421–432. https://doi.org/10.2991/ijcis.d.200411.001 doi: 10.2991/ijcis.d.200411.001
    [3] A. Burgard, P. Pharkya, C. Maranas, Optknock: A bilevel programming framework for identifying gene knockout strategies formicrobial strain optimization, Biotechnol. Bioeng., 84 (2003), 647–657. https://doi.org/10.1002/bit.10803 doi: 10.1002/bit.10803
    [4] M. Bazaraa, H. Sherali, C. Shetty, Nonlinear programming theory and algorithms, Hoboken: John Wiley and Sons, 2006.
    [5] O. Ben-Ayed, O. Blair, Computational difficulty of bilevel linear programming, Oper. Res., 38 (1990), 556–560. https://doi.org/10.1287/opre.38.3.556 doi: 10.1287/opre.38.3.556
    [6] R. Byrd, E. Omojokun, Robust trust-region methods for nonlinearly constrained optimization, The second SIAM conference on optimization, 1987.
    [7] R. Byrd, J. Gilbert, J. Nocedal, A trust region method based on interior point techniques for nonlinear programming, Math. Program., 89 (2000), 149–185. https://doi.org/10.1007/PL00011391 doi: 10.1007/PL00011391
    [8] J. Chen, The semismooth-related properties of a merit function and a descent method for the nonlinear complementarity problem, J. Glob. Optim., 36 (2006), 565–580. https://doi.org/10.1007/s10898-006-9027-y doi: 10.1007/s10898-006-9027-y
    [9] J. Chen, On some NCP-functions based on the generalized Fischer-Burmeister function, Asia Pac. J. Oper. Res., 24 (2007), 401–420. https://doi.org/10.1142/S0217595907001292 doi: 10.1142/S0217595907001292
    [10] J. Chen, S. Pan, A family of NCP-functions and a descent method for the nonlinear complementarity problem, Comput. Optim. Appl., 40 (2008), 389–404. https://doi.org/10.1007/s10589-007-9086-0 doi: 10.1007/s10589-007-9086-0
    [11] J. Dennis, M. Heinkenschloss, L. Vicente, Trust-region interior-point SQP algorithms for a class of nonlinear programming problems, SIAM J. Control Optim., 36 (1998), 1750–1794. https://doi.org/10.1137/S0363012995279031
    [12] J. Dennis, M. El-Alem, K. Williamson, A trust-region approach to nonlinear systems of equalities and inequalities, SIAM J. Optim., 9 (1999), 291–315. https://doi.org/10.1137/S1052623494276208 doi: 10.1137/S1052623494276208
    [13] S. Dempe, Foundation of bilevel programming, London: Kluwer Academic Publishers, 2002.
    [14] T. Edmunds, J. Bard, Algorithms for nonlinear bilevel mathematical programs, IEEE T. Syst. Man Cy., 21 (1991), 83–89. https://doi.org/10.1109/21.101139 doi: 10.1109/21.101139
    [15] B. El-Sobky, A robust trust-region algorithm for general nonlinear constrained optimization problems, PhD thesis, Alexandria University, 1998.
    [16] B. El-Sobky, A global convergence theory for an active trust region algorithm for solving the general nonlinear programming problem, Appl. Math. Comput., 144 (2003), 127–157. https://doi.org/10.1016/S0096-3003(02)00397-1 doi: 10.1016/S0096-3003(02)00397-1
    [17] B. El-Sobky, A Multiplier active-set trust-region algorithm for solving constrained optimization problem, Appl. Math. Comput., 219 (2012), 928–946. https://doi.org/10.1016/j.amc.2012.06.072 doi: 10.1016/j.amc.2012.06.072
    [18] B. El-Sobky, An interior-point penalty active-set trust-region algorithm, J. Egypt. Math. Soc., 24 (2016), 672–680. https://doi.org/10.1016/j.joems.2016.04.003 doi: 10.1016/j.joems.2016.04.003
    [19] B. El-Sobky, An active-set interior-point trust-region algorithm, Pac. J. Optim., 14 (2018), 125–159.
    [20] B. El-Sobky, A. Abotahoun, An active-set algorithm and a trust-region approach in constrained minimax problem, Comp. Appl. Math., 37 (2018), 2605–2631. https://doi.org/10.1007/s40314-017-0468-3 doi: 10.1007/s40314-017-0468-3
    [21] B. El-Sobky, A. Abotahoun, A trust-region algorithm for solving mini-max problem, J. Comput. Math., 36 (2018), 776–791. https://doi.org/10.4208/jcm.1705-m2016-0735 doi: 10.4208/jcm.1705-m2016-0735
    [22] B. El-Sobky, Y. Abouel-Naga, Multi-objective optimal load flow problem with interior-point trust-region strategy, Electr. Pow. Syst. Res., 148 (2017), 127–135. https://doi.org/10.1016/j.epsr.2017.03.014 doi: 10.1016/j.epsr.2017.03.014
    [23] B. El-Sobky, Y. Abouel-Naga, A penalty method with trust-region mechanism for nonlinear bilevel optimization problem, J. Comput. Appl. Math., 340 (2018), 360–374. https://doi.org/10.1016/j.cam.2018.03.004 doi: 10.1016/j.cam.2018.03.004
    [24] B. El-Sobky, Y.Abo-Elnaga, A. Mousa, A. El-Shorbagy, Trust-region based penalty barrier algorithm for constrained nonlinear programming problems: An application of design of minimum cost canal sections, Mathematics, 9 (2021), 1551. https://doi.org/10.3390/math9131551 doi: 10.3390/math9131551
    [25] B. El-Sobky, G. Ashry, An interior-point trust-region algorithm to solve a nonlinear bilevel programming problem, AIMS Mathematics, 7 (2022), 5534–5562. https://doi.org/10.3934/math.2022307 doi: 10.3934/math.2022307
    [26] A. Fiacco, G. McCormick, Nonlinear programming: Sequential unconstrained minimization techniques, New York: John Wiley and Sons, 1968.
    [27] J. Falk, J. M. Liu, On bilevel programming, Part I: General nonlinear cases, Math. Program., 70 (1995), 47–72. https://doi.org/10.1007/BF01585928 doi: 10.1007/BF01585928
    [28] F. Facchinei, H. Y. Jiang, L. Q. Qi, A smoothing method for mathematical programming with equilibrium constraints, Math. Program., 85 (1999), 107–134. https://doi.org/10.1007/s10107990015a doi: 10.1007/s10107990015a
    [29] H. Gumus, A. Flouda, Global optimization of nonlinear bilevel programming problems, J. Global Optim., 20 (2001), 1–31. https://doi.org/10.1023/A:1011268113791 doi: 10.1023/A:1011268113791
    [30] J. L. Gonzalez Velarde, J. F. Camacho-Vallejo, G. Pinto Serranoo, A scatter search algorithm for solving a bilevel optimization model for determining highway tolls, Comput. Syst., 19 (2015), 5–16. https://doi.org/10.13053/CyS-19-1-1916 doi: 10.13053/CyS-19-1-1916
    [31] M. Hestenes, Multiplier and gradient methods, J. Optim. Theory Appl., 4 (1969), 303–320. https://doi.org/10.1007/BF00927673 doi: 10.1007/BF00927673
    [32] G. Hibino, M. Kainuma, Y. Matsuoka, Two-level mathematical programming for analyzing subsidy options to reduce greenhouse-gas emissions, IIASA Working Paper, 1996.
    [33] D. Kouri, M. Heinkenschloss, D. Ridzal, B. van Bloemen Waanders, A trust-region algorithm with adaptive stochastic collocation for PDE optimization under uncertainty, SIAM J. Sci. Comput., 35 (2013), A1847–A1879. https://doi.org/10.1137/120892362 doi: 10.1137/120892362
    [34] H. Li, Y. C. Jiao, L. Zhang, Orthogonal genetic algorithm for solving quadratic bilevel programming problems, J. Syst. Eng. Electron., 21 (2010), 763–770. https://doi.org/10.3969/j.issn.1004-4132.2010.05.008
    [35] Y. B. Lv, T. S. Hu, G. M. Wang, Z. P. Wan, A neural network approach for solving nonlinear bilevel programming problem, Comput. Math. Appl., 55 (2008), 2823–2829. https://doi.org/10.1016/j.camwa.2007.09.010 doi: 10.1016/j.camwa.2007.09.010
    [36] N. N. Li, D. Xue, W. Y. Sun, J. Wang, A stochastic trust-region method for unconstrained optimization problems, Math. Probl. Eng., 2019 (2019), 8095054. https://doi.org/10.1155/2019/8095054 doi: 10.1155/2019/8095054
    [37] L. M. Ma, G. M. Wang, A Solving algorithm for nonlinear bilevel programing problems based on human evolutionary model, Algorithms, 13 (2020), 260. https://doi.org/10.3390/a13100260 doi: 10.3390/a13100260
    [38] E. Omojokun, Trust-region strategies for optimization with nonlinear equality and inequality constraints, PhD thesis, University of Colorado, 1989.
    [39] T. Steihaug, The conjugate gradient method and trust-region in large scale optimization, SIAM J. Numer. Anal., 20 (1983), 626–637. https://doi.org/10.1137/0720042 doi: 10.1137/0720042
    [40] G. Savard, J. Gauvin, The steepest descent direction for the nonlinear bilevel programming problem, Oper. Res. Lett., 15 (1994), 265–272. https://doi.org/10.1016/0167-6377(94)90086-8 doi: 10.1016/0167-6377(94)90086-8
    [41] S. Sadatrasou, M. Gholamian, K. Shahanaghi, An application of data mining classification and bi-level programming for optimal credit allocation, Decis. Sci. Lett., 4 (2015), 35–50. https://doi.org/10.5267/j.dsl.2014.9.005 doi: 10.5267/j.dsl.2014.9.005
    [42] N. Thoai, Y. Yamamoto, A. Yoshise, Global optimization method for solving mathematical programs with linear complementarity constraints, Mathematical programs with complementarity, 2002.
    [43] M. Ulbrich, S. Ulbrich, L. N. Vicente, A globally convergent primal-dual interior-point filter method for nonlinear programming, Math. Program., 100 (2004), 379–410. https://doi.org/10.1007/s10107-003-0477-4 doi: 10.1007/s10107-003-0477-4
    [44] Y. L. Wang, Y. C. Jiao, H. Li, An evolutionary algorithm for solving nonlinear bilevel programming based on a new constraint-handling scheme, IEEE T. Syst. Man Cy. C, 35 (2005), 221–232. https://doi.org/10.1109/TSMCC.2004.841908 doi: 10.1109/TSMCC.2004.841908
    [45] X. Wang, Y. X. Yuan, A trust region method based on a new affine scaling technique for simple bounded optimization, Optim. Method. Softw., 28 (2013), 871–888. https://doi.org/10.1080/10556788.2011.622378 doi: 10.1080/10556788.2011.622378
    [46] X. Wang, Y. X. Yuan, An augmented Lagrangian trust region method for equality constrained optimization, Optim. Method. Softw., 30 (2015), 559–582. https://doi.org/10.1080/10556788.2014.940947 doi: 10.1080/10556788.2014.940947
    [47] Y. X. Yuan, Recent advances in trust region algorithms, Math. Program., 151 (2015), 249–281. https://doi.org/10.1007/s10107-015-0893-2
    [48] M. Zeng, Q. Ni, A new trust region method for nonlinear equations involving fractional model, Pac. J. Optim., 15 (2019), 317–329.
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)