Research article

An interior-point trust-region algorithm to solve a nonlinear bilevel programming problem

  • Received: 22 September 2021 Revised: 16 December 2021 Accepted: 29 December 2021 Published: 10 January 2022
  • MSC : 93D52, 49N35, 93D22, 49N10, 65K05

  • In this paper, a nonlinear bilevel programming (NBLP) problem is transformed into an equivalent smooth single-objective nonlinear programming (SONP) problem by using slack variables together with the Karush-Kuhn-Tucker (KKT) conditions. To solve the equivalent smooth SONP problem effectively, an interior-point Newton's method with the Das scaling matrix is used. This method is a local method, so a trust-region strategy is used to guarantee convergence from any starting point. The proposed algorithm is proved to be stable and capable of generating an approximate optimal solution to the nonlinear bilevel programming problem.

    A global convergence theory of the proposed algorithm is introduced and applications to mathematical programs with equilibrium constraints are given to clarify the effectiveness of the proposed approach.

    Citation: B. El-Sobky, G. Ashry. An interior-point trust-region algorithm to solve a nonlinear bilevel programming problem[J]. AIMS Mathematics, 2022, 7(4): 5534-5562. doi: 10.3934/math.2022307




    The bilevel programming problem has increasingly been addressed in the literature, from both the theoretical and the computational points of view [14]. This model has been widely applied to decentralized planning problems involving a decision process with a hierarchical structure. It is characterized by the existence of two optimization problems in which the constraint region of the first-level problem is implicitly determined by another optimization problem. The NBLP problem is hard to solve; in fact, it has been proved to be NP-hard [8]. Nevertheless, the NBLP problem is used so extensively in resource allocation, finance budgeting, price control, transaction networks, etc. [1,7,28,29,39] that much research has been devoted to this field, leading to rapid development in theory and algorithms. For detailed expositions, the reader may consult [21,33].

    In this paper we will consider the following NBLP problem

    $$\begin{array}{rl}
    \min\limits_{t} & f_u(t,y)\\
    \mbox{s.t.} & g_u(t,y)\le 0,\\
    & \min\limits_{y} f_l(t,y),\\
    & \mbox{s.t. } g_l(t,y)\le 0,
    \end{array}\tag{1.1}$$

    where $t\in\mathbb{R}^{n_1}$ and $y\in\mathbb{R}^{n_2}$. The functions $f_u:\mathbb{R}^{n_1+n_2}\to\mathbb{R}$, $f_l:\mathbb{R}^{n_1+n_2}\to\mathbb{R}$, $g_u:\mathbb{R}^{n_1+n_2}\to\mathbb{R}^{m_1}$, and $g_l:\mathbb{R}^{n_1+n_2}\to\mathbb{R}^{m_2}$ are assumed to be at least twice continuously differentiable.

    Several approaches have been proposed to solve problem (1.1); see [2,3,25,35,40]. One of these approaches, the one used in this paper, converts the original two-level problem to a single-level one by replacing the lower-level optimization problem with its Karush-Kuhn-Tucker (KKT) conditions; see [24,41]. Applying the KKT optimality conditions to the lower-level problem reduces the NBLP problem (1.1) to a one-level programming problem. This problem is non-convex and non-differentiable; moreover, the regularity assumptions that are needed to successfully handle smooth optimization problems are never satisfied, so our approach cannot be applied to it directly. Therefore, we add slack variables to the inequality constraints in problem (1.1).

    By adding slack variables $s_u\in\mathbb{R}^{m_1}$ and $s_l\in\mathbb{R}^{m_2}$ to the upper inequality constraint $g_u(t,y)\le 0$ and the lower inequality constraint $g_l(t,y)\le 0$ respectively, the NBLP problem (1.1) can be written as follows

    $$\begin{array}{rl}
    \min\limits_{t} & f_u(t,y)\\
    \mbox{s.t.} & g_u(t,y)+s_u=0,\\
    & \min\limits_{y} f_l(t,y),\\
    & \mbox{s.t. } g_l(t,y)+s_l=0,\\
    & s_u\ge 0,\ s_l\ge 0.
    \end{array}$$

    The above NBLP problem can be simplified as follows

    $$\begin{array}{rl}
    \min\limits_{t} & f_u(t,y)\\
    \mbox{s.t.} & \tilde g_u(t,y,s_u)=0,\\
    & \min\limits_{y} f_l(t,y),\\
    & \mbox{s.t. } \tilde g_l(t,y,s_l)=0,\ s\ge 0,
    \end{array}\tag{1.2}$$

    where $\tilde g_u(t,y,s_u)=g_u(t,y)+s_u\in\mathbb{R}^{m_1}$, $\tilde g_l(t,y,s_l)=g_l(t,y)+s_l\in\mathbb{R}^{m_2}$, and $s=(s_u,s_l)^T\in\mathbb{R}^{m_1+m_2}$.

    Applying the KKT conditions only to the lower-level problem, without the constraint $s\ge 0$, we can reduce the NBLP problem (1.2) to the following smooth SONP problem:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u(t,y)\\
    \mbox{s.t.} & \tilde g_u(t,y,s_u)=0,\\
    & \nabla_y f_l(t,y)+\nabla_y \tilde g_l(t,y,s_l)\,\mu_l=0,\\
    & \tilde g_l(t,y,s_l)=0,\\
    & s\ge 0,
    \end{array}\tag{1.3}$$

    where $\mu_l\in\mathbb{R}^{m_2}$ is a Lagrange multiplier vector associated with the equality constraint $\tilde g_l(t,y,s_l)=0$; see [5].

    Problem (1.3) overcomes the difficulty that problem (1.1) does not satisfy the regularity assumptions needed for successfully handling smooth optimization problems, and it paves the way for using the proposed approach to solve problem (1.1). To simplify our discussion, we introduce the following notation: $x=(t,y,s)^T\in\mathbb{R}^n$ with $n=n_1+n_2+m_1+m_2$, and $h(x)\in\mathbb{R}^m$ denotes the vector of equality constraints, where $m=m_1+m_2+n_2$. Then problem (1.3) can be written as follows

    $$\begin{array}{rl}
    \mbox{minimize} & f_u(x)\\
    \mbox{subject to} & h(x)=0,\\
    & v\le x\le w,
    \end{array}\tag{1.4}$$

    where $v\in\{\mathbb{R}\cup\{-\infty\}\}^n$, $w\in\{\mathbb{R}\cup\{+\infty\}\}^n$, and $v<w$.
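    As an illustration of this reformulation pipeline, the following minimal sketch (our own illustration; the packing order, the function names, and passing the multiplier $\mu_l$ as a separate argument are assumptions, not the paper's code) assembles the equality-constraint vector $h(x)$ of (1.4) from the NBLP data:

```python
import numpy as np

# A minimal sketch of assembling h(x) for problem (1.4); x stacks (t, y, s_u, s_l)
# and mu_l is the lower-level multiplier vector appearing in (1.3).
def equality_constraints(x, mu_l, n1, n2, m1, m2, g_u, g_l, grad_y_fl, jac_y_gl):
    t, y = x[:n1], x[n1:n1 + n2]
    s_u = x[n1 + n2:n1 + n2 + m1]
    s_l = x[n1 + n2 + m1:n1 + n2 + m1 + m2]
    return np.concatenate([
        g_u(t, y) + s_u,                             # g~_u(t, y, s_u) = 0   (m1 rows)
        grad_y_fl(t, y) + jac_y_gl(t, y).T @ mu_l,   # lower-level stationarity (n2 rows)
        g_l(t, y) + s_l,                             # g~_l(t, y, s_l) = 0   (m2 rows)
    ])
```

    The remaining conditions of (1.3), namely $s\ge 0$ together with any simple bounds on $t$ and $y$, are what become the box constraints $v\le x\le w$ of (1.4).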

    Various approaches have been proposed to solve the SONP problem (1.4); see [5,9,10,11,15,16,17,18,19]. In this paper, we use Newton's interior-point method with the Das scaling matrix [12] to solve problem (1.4). Newton's method converges quadratically to a stationary point under reasonable assumptions if the starting point is sufficiently close to that stationary point; it may not converge if the starting point is far away. To guarantee convergence from any starting point, a trust-region strategy is used. The trust-region strategy induces strong global convergence, which makes it a very important method for solving the SONP problem, and it is more robust in dealing with rounding errors. It requires neither that the objective function of the model be convex nor that the Hessian of the objective function be positive definite. Moreover, certain criteria are used to test whether a trial step is acceptable; if it is not, the subproblem is resolved with a reduced trust-region radius. For detailed expositions, the reader may consult [4,17,20,21,22,23,30,32,36,42,43,45,46].

    A reduced Hessian technique is used in this paper to overcome some difficulties in the trust-region subproblem. This technique was suggested by [6,37] and used by [19,20].

    In this paper, we use the symbols $f_{u_k}\equiv f_u(x_k)$, $h_k\equiv h(x_k)$, $P_k\equiv P(x_k)$, $\ell_k\equiv\ell(x_k,\lambda_k)$, $\nabla_x\ell_k\equiv\nabla_x\ell(x_k,\lambda_k)$, and so on to denote the function values at a particular point. Finally, all norms are $l_2$-norms.

    The rest of the paper is organized as follows. In section 2, we give a detailed description of the proposed method for solving problem (1.4). Section 3 is devoted to the analysis of the global convergence of the proposed algorithm. Section 4 contains the implementation of the proposed algorithm and the results of test problems. Section 5 contains concluding remarks.

    In this section, we first give a detailed description of Newton's interior-point method with the Das scaling matrix for solving the SONP problem (1.4). Secondly, to guarantee convergence from any starting point, we describe the trust-region strategy in detail. Finally, we summarize the main steps of the general algorithm for solving the NBLP problem (1.1).

    Motivated by the impressive computational performance of Newton's interior-point method for solving SONP problem 1.4, let

    $$\ell(x,\lambda)=f_u(x)+\lambda^T h(x),\tag{2.1}$$

    be the Lagrangian function associated with problem (1.4) without the constraints $v\le x\le w$, and let

    $$L(x,\lambda,\mu_v,\mu_w)=\ell(x,\lambda)-\mu_v^T(x-v)-\mu_w^T(w-x),\tag{2.2}$$

    be the Lagrangian function associated with problem (1.4) with the constraints $v\le x\le w$. The vectors $\lambda\in\mathbb{R}^m$, $\mu_v\in\mathbb{R}^n$, and $\mu_w\in\mathbb{R}^n$ represent Lagrange multiplier vectors associated with the constraints $h(x)=0$, $0\le(x-v)$, and $0\le(w-x)$ respectively. Let $\hat G=\{x: v\le x\le w\}$ and $int(\hat G)=\{x: v<x<w\}$.

    The first-order necessary conditions for a point $x$ to be a local minimizer of problem (1.4) are the existence of multipliers $\lambda\in\mathbb{R}^m$, $\mu_v\in\mathbb{R}^n_+$, and $\mu_w\in\mathbb{R}^n_+$, such that $(x,\lambda,\mu_v,\mu_w)$ satisfies

    $$\nabla_x\ell(x,\lambda)-\mu_v+\mu_w=0,\tag{2.3}$$
    $$h(x)=0,\tag{2.4}$$
    $$v\le x\le w,\tag{2.5}$$

    and for all $i$ corresponding to a component $x^{(i)}$ with a finite bound, we have

    $$(\mu_v)^{(i)}(x^{(i)}-v^{(i)})=0,\tag{2.6}$$
    $$(\mu_w)^{(i)}(w^{(i)}-x^{(i)})=0,\tag{2.7}$$

    where $\nabla_x\ell(x,\lambda)=\nabla f_u(x)+\nabla h(x)\lambda$.

    The proposed algorithm here, like its predecessors in [12,18,19], starts at a point strictly feasible with respect to the bounds on the variables and produces iterates that remain strictly feasible with respect to the bounds (i.e., 'in the interior'). Define a diagonal scaling matrix $P(x)=diag(p(x))$ whose diagonal elements $p(x)$ are given by

    $$p^{(i)}(x)=\left\{\begin{array}{ll}
    \sqrt{x^{(i)}-v^{(i)}}, & \mbox{if } v^{(i)}>-\infty \mbox{ and } (\nabla_x\ell(x,\lambda))^{(i)}\ge 0,\\
    \sqrt{w^{(i)}-x^{(i)}}, & \mbox{if } w^{(i)}<+\infty \mbox{ and } (\nabla_x\ell(x,\lambda))^{(i)}<0,\\
    1, & \mbox{otherwise}.
    \end{array}\right.\tag{2.8}$$

    Using the matrix $P(x)$, the point $(x,\lambda,\mu_v,\mu_w)$ satisfies the KKT conditions (2.3)-(2.7) if and only if

    $$P^2(x)\nabla_x\ell(x,\lambda)=0,\tag{2.9}$$
    $$h(x)=0.\tag{2.10}$$

    For more details about the proof, see [12].

    Applying Newton's method to the nonlinear system (2.9)-(2.10), we have

    $$[P^2(x)\nabla^2_x\ell(x,\lambda)+diag(\nabla_x\ell(x,\lambda))\,diag(\theta(x))]\Delta x+P^2(x)\nabla h(x)\Delta\lambda=-P^2(x)\nabla_x\ell(x,\lambda),\tag{2.11}$$
    $$\nabla h(x)^T\Delta x=-h(x),\tag{2.12}$$

    where θ(x) is a vector whose components are given by

    $$\theta^{(i)}(x)=\left\{\begin{array}{rl}
    1, & \mbox{if } v^{(i)}>-\infty \mbox{ and } (\nabla_x\ell(x,\lambda))^{(i)}\ge 0,\\
    -1, & \mbox{if } w^{(i)}<+\infty \mbox{ and } (\nabla_x\ell(x,\lambda))^{(i)}<0,\\
    0, & \mbox{otherwise}.
    \end{array}\right.\tag{2.13}$$

    For more details see [18].
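    As a concrete illustration, a minimal sketch of (2.8) and (2.13) in Python follows (our own code; the square roots and the $-1$ entry follow our reconstruction of the formulas, which is what makes the Newton system (2.11) come out as stated):

```python
import numpy as np

# A minimal sketch of the Das scaling vector p(x) of (2.8) and the sign
# vector theta(x) of (2.13); v and w may contain -inf / +inf entries.
def das_scaling(x, v, w, grad_lag):
    p = np.ones_like(x)
    theta = np.zeros_like(x)
    for i in range(x.size):
        if np.isfinite(v[i]) and grad_lag[i] >= 0:
            p[i] = np.sqrt(x[i] - v[i])    # distance to the lower bound
            theta[i] = 1.0                 # d(p_i^2)/dx_i = 1
        elif np.isfinite(w[i]) and grad_lag[i] < 0:
            p[i] = np.sqrt(w[i] - x[i])    # distance to the upper bound
            theta[i] = -1.0                # d(p_i^2)/dx_i = -1
    return p, theta
```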

    In our method, the matrix $P(x)$ must be nonsingular, so we restrict the iterates to $x\in int(\hat G)$. Multiplying both sides of equation (2.11) by $P^{-1}(x)$, we have

    $$[P(x)\nabla^2_x\ell(x,\lambda)+P^{-1}(x)diag(\nabla_x\ell(x,\lambda))\,diag(\theta(x))]\Delta x+P(x)\nabla h(x)\Delta\lambda=-P(x)\nabla_x\ell(x,\lambda),$$
    $$\nabla h(x)^T\Delta x=-h(x).$$

    Substituting $\Delta x=P(x)d$ in the above system, we have

    $$[P(x)H(x,\lambda)P(x)+diag(\nabla_x\ell(x,\lambda))\,diag(\theta(x))]d+P(x)\nabla h(x)\Delta\lambda=-P(x)\nabla_x\ell(x,\lambda),\tag{2.14}$$
    $$(P(x)\nabla h(x))^T d=-h(x),\tag{2.15}$$

    where $H(x,\lambda)=\nabla^2_x\ell(x,\lambda)$ represents the Hessian of the Lagrangian function (2.1) or an approximation to it. It is easy to see that the step generated by the above system coincides with the solution of the following quadratic programming subproblem

    $$\begin{array}{rl}
    \mbox{minimize} & \ell(x,\lambda)+(P(x)\nabla_x\ell(x,\lambda))^T d+\frac{1}{2}d^T B d\\
    \mbox{subject to} & h(x)+(P(x)\nabla h(x))^T d=0,
    \end{array}\tag{2.16}$$

    where $B=P(x)H(x,\lambda)P(x)+diag(\nabla_x\ell(x,\lambda))\,diag(\theta(x))$. This means that a point $(x,\lambda)$ that satisfies the KKT conditions for subproblem (2.16) also satisfies the KKT conditions for problem (1.4).
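    Equivalently, (2.14)-(2.15) form one symmetric indefinite linear system in $(d,\Delta\lambda)$. The following sketch (ours, assuming $P(x)\nabla h(x)$ has full column rank, as in assumption GS2 below) solves it with a dense factorization:

```python
import numpy as np

# A minimal sketch of one scaled Newton step: solve (2.14)-(2.15) for (d, Δλ).
# p is the diagonal of P(x), A = ∇h(x) is n x m, H approximates ∇²ℓ,
# grad_lag = ∇_x ℓ(x, λ), and theta is the sign vector (2.13).
def scaled_newton_step(p, H, A, grad_lag, h_val, theta):
    n, m = A.shape
    P = np.diag(p)
    B = P @ H @ P + np.diag(grad_lag * theta)   # the matrix B of (2.16)
    PA = P @ A                                  # P(x) ∇h(x)
    K = np.block([[B, PA], [PA.T, np.zeros((m, m))]])
    rhs = -np.concatenate([P @ grad_lag, h_val])
    sol = np.linalg.solve(K, rhs)
    return P @ sol[:n], sol[n:]                 # Δx = P(x) d, and Δλ
```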

    Although Newton's method converges quadratically to a stationary point under reasonable assumptions, it may not converge to a stationary point if the starting point is far away from the solution. To overcome this disadvantage and to guarantee convergence from any starting point, we use the trust-region technique.

    Trust-region methods can induce strong global convergence, which makes them very important methods for solving a smooth nonlinear programming problem, and they are more robust in dealing with rounding errors. They require neither that the objective function of the model be convex nor that the Hessian of the objective function be positive definite.

    The trust-region subproblem associated with problem 2.16 is

    $$\begin{array}{rl}
    \mbox{minimize} & q_k(P_k d)=\ell_k+(P_k\nabla_x\ell_k)^T d+\frac{1}{2}d^T B_k d\\
    \mbox{subject to} & h_k+(P_k\nabla h_k)^T d=0,\\
    & \|d\|\le\delta_k,
    \end{array}\tag{2.17}$$

    where $\delta_k>0$ is the radius of the trust region.

    Subproblem (2.17) may be infeasible, because there may be no intersection between the constraint $\|d\|\le\delta_k$ and the linearized constraints $h_k+(P_k\nabla h_k)^T d=0$. Even if they intersect, there is no guarantee that this remains true when $\delta_k$ is decreased; for more details see [13]. To overcome this difficulty, we use a reduced Hessian technique, which was suggested by [6,37] and used by [19,20]. In this technique, the trial step $d_k$ is decomposed into two orthogonal components: the normal component $d^n_k$, to improve feasibility, and the tangential component $d^t_k$, to improve optimality. Each component is computed by solving an unconstrained trust-region subproblem.

    ● How to estimate the normal component dnk

    The normal component dnk is computed by solving the following trust-region subproblem

    $$\begin{array}{rl}
    \mbox{minimize} & \|h_k+(P_k\nabla h_k)^T d^n\|^2\\
    \mbox{subject to} & \|d^n\|\le\zeta\delta_k,
    \end{array}\tag{2.18}$$

    for some $0<\zeta<1$. To solve subproblem (2.18), we use the conjugate gradient method that was introduced by [38] and used by [21]; see Algorithm 2.1 in [21]. It is very cheap when the problem is large-scale and the Hessian is indefinite. By using the conjugate gradient method, the normal predicted decrease obtained by $d^n_k$ is greater than or equal to a fraction of the normal predicted decrease obtained by the Cauchy step $d^{ncp}_k$. This means that

    $$\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T d^n_k\|^2\ge\vartheta_1\{\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T d^{ncp}_k\|^2\},\tag{2.19}$$

    where the Cauchy step $d^{ncp}_k$ is defined as follows

    $$d^{ncp}_k=-\varphi^{ncp}_k P_k\nabla h_k h_k,\tag{2.20}$$

    where the parameter $\varphi^{ncp}_k$ is given by

    $$\varphi^{ncp}_k=\left\{\begin{array}{ll}
    \frac{\|P_k\nabla h_k h_k\|^2}{\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2}, & \mbox{if } \frac{\|P_k\nabla h_k h_k\|^3}{\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2}\le\delta_k \mbox{ and } \|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|>0,\\[6pt]
    \frac{\delta_k}{\|P_k\nabla h_k h_k\|}, & \mbox{otherwise}.
    \end{array}\right.\tag{2.21}$$

    Once $d^n_k$ is obtained, we evaluate $d^t_k=Z_k\bar d^t_k$, where $Z_k$ is a matrix whose columns form a basis for the null space of $(P_k\nabla h_k)^T$.
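    For illustration, a minimal sketch of the Cauchy point (2.20)-(2.21) (our own code, assuming $P_k\nabla h_k h_k\ne 0$; the caller passes the radius $\zeta\delta_k$) is:

```python
import numpy as np

# A minimal sketch of the normal Cauchy step (2.20)-(2.21): steepest descent
# for ||h_k + (P_k ∇h_k)^T d||^2, truncated at the radius zeta * delta_k.
def normal_cauchy_step(PA, h_val, radius):
    g = PA @ h_val                  # P_k ∇h_k h_k: gradient direction at d = 0
    Ag = PA.T @ g                   # (P_k ∇h_k)^T P_k ∇h_k h_k
    gn, Agn = np.linalg.norm(g), np.linalg.norm(Ag)
    if Agn > 0 and gn**3 <= radius * Agn**2:
        phi = gn**2 / Agn**2        # exact minimizer along -g lies inside
    else:
        phi = radius / gn           # otherwise stop at the trust-region boundary
    return -phi * g                 # d_ncp = -phi * P_k ∇h_k h_k
```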

    ● How to estimate the tangential component dtk

    To obtain the tangential component dtk, we use the conjugate gradient method [21] to solve the following trust-region subproblem

    $$\begin{array}{rl}
    \mbox{minimize} & [Z_k^T\nabla q_k(P_k d^n_k)]^T\bar d^t+\frac{1}{2}\bar d^{tT} Z_k^T B_k Z_k\bar d^t\\
    \mbox{subject to} & \|Z_k\bar d^t\|\le\Delta_k,
    \end{array}\tag{2.22}$$

    where $\nabla q_k(P_k d^n_k)=P_k\nabla_x\ell_k+B_k d^n_k$ and $\Delta_k=\sqrt{\delta_k^2-\|d^n_k\|^2}$.

    By using the conjugate gradient method, the tangential predicted decrease which is obtained by tangential step ˉdtk is greater than or equal to a fraction of the tangential predicted decrease obtained by a tangential Cauchy step ˉdtcpk. This means that

    $$q_k(P_k d^n_k)-q_k(P_k(d^n_k+Z_k\bar d^t_k))\ge\vartheta_2[q_k(P_k d^n_k)-q_k(P_k(d^n_k+Z_k\bar d^{tcp}_k))],\tag{2.23}$$

    for some $0<\vartheta_2\le 1$, where $\bar d^{tcp}_k$ is defined as follows

    $$\bar d^{tcp}_k=-\varphi^{tcp}_k Z_k^T\nabla q_k(P_k d^n_k),\tag{2.24}$$

    where the parameter φtcpk is given by

    $$\varphi^{tcp}_k=\left\{\begin{array}{ll}
    \frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|^2}{(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)}, & \mbox{if } \frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|^3}{(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)}\le\Delta_k\\
    & \mbox{and } (Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)>0,\\[6pt]
    \frac{\Delta_k}{\|Z_k^T\nabla q_k(P_k d^n_k)\|}, & \mbox{otherwise},
    \end{array}\right.\tag{2.25}$$

    where $\bar B_k=Z_k^T B_k Z_k$.
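    A matching sketch of the tangential Cauchy point (2.24)-(2.25) (ours; computing the null-space basis with scipy.linalg.null_space is one possible choice for $Z_k$, not the paper's) is:

```python
import numpy as np
from scipy.linalg import null_space

# A minimal sketch of (2.24)-(2.25): Z spans null((P_k ∇h_k)^T) and
# gbar = Z^T ∇q_k(P_k d^n_k) is the reduced gradient of the model.
def tangential_cauchy_step(PA, B, grad_q_dn, radius_t):
    Z = null_space(PA.T)
    gbar = Z.T @ grad_q_dn
    curv = gbar @ (Z.T @ B @ Z) @ gbar   # (Z^T ∇q)^T B̄_k (Z^T ∇q)
    gn = np.linalg.norm(gbar)
    if curv > 0 and gn**3 <= radius_t * curv:
        phi = gn**2 / curv               # exact minimizer of the reduced model
    else:
        phi = radius_t / gn              # otherwise stop at the boundary
    return Z, -phi * gbar                # tangential step is d^t = Z @ (result)
```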

    ● How to estimate a parameter γk

    Once $d^t_k$ is obtained, we set $d_k=d^n_k+d^t_k$ and $x_{k+1}=x_k+P_k d_k$. To ensure $x_{k+1}\in int(\hat G)$, we need to evaluate the damping parameter $\gamma_k$. To do this, evaluate

    $$a^{(i)}_k=\left\{\begin{array}{ll}
    \frac{v^{(i)}-x^{(i)}_k}{P^{(i)}_k d^{(i)}_k}, & \mbox{if } v^{(i)}>-\infty \mbox{ and } P^{(i)}_k d^{(i)}_k<0,\\
    1, & \mbox{otherwise},
    \end{array}\right.$$

    and

    $$b^{(i)}_k=\left\{\begin{array}{ll}
    \frac{w^{(i)}-x^{(i)}_k}{P^{(i)}_k d^{(i)}_k}, & \mbox{if } w^{(i)}<+\infty \mbox{ and } P^{(i)}_k d^{(i)}_k>0,\\
    1, & \mbox{otherwise}.
    \end{array}\right.$$

    Compute

    $$\gamma_k=\min\{1,\min_i\{a^{(i)}_k,b^{(i)}_k\}\}.\tag{2.26}$$
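    A minimal sketch of (2.26) follows (our code; in practice a small pull-back, such as the $\hat\varepsilon$ listed in the Section 4 parameter setting, may be applied so that the iterate stays strictly interior):

```python
import numpy as np

# A minimal sketch of the damping factor gamma_k of (2.26): the largest
# fraction of the step Pd in [0, 1] keeping x + gamma * Pd inside [v, w].
def boundary_fraction(x, Pd, v, w):
    gamma = 1.0
    for i in range(x.size):
        if np.isfinite(v[i]) and Pd[i] < 0:
            gamma = min(gamma, (v[i] - x[i]) / Pd[i])
        if np.isfinite(w[i]) and Pd[i] > 0:
            gamma = min(gamma, (w[i] - x[i]) / Pd[i])
    return gamma
```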

    Once the trial step $\gamma_k P_k d_k$ is evaluated, it needs to be tested to decide whether it will be accepted or not. To do this, we need a merit function that ties the objective function and the constraints together in such a way that progress in the merit function means progress in solving the problem. In our method, we use the following merit function, which was introduced by [26] and is known as the augmented Lagrangian function

    $$\Phi(x,\lambda;\rho)=\ell(x,\lambda)+\rho\|h(x)\|^2,\tag{2.27}$$

    where (x,λ) is defined in 2.1 and ρ>0 represents the penalty parameter.

    ● How to estimate λk+1

    The Lagrange multiplier vector λk+1 will be estimated as follows

    $$\mbox{minimize } \|\nabla f_{u_{k+1}}+\nabla h_{k+1}\lambda\|^2.\tag{2.28}$$
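    Since (2.28) is a linear least-squares problem in $\lambda$, it can be solved directly with a standard solver; a minimal sketch (ours):

```python
import numpy as np

# A minimal sketch of the multiplier update (2.28): lambda_{k+1} minimizes
# || grad f_u(x_{k+1}) + grad h(x_{k+1}) @ lam ||^2 over lam.
def update_multipliers(grad_fu, grad_h):
    lam, *_ = np.linalg.lstsq(grad_h, -grad_fu, rcond=None)  # grad_h is n x m
    return lam
```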

    To test whether the point $(x_{k+1},\lambda_{k+1})$ will be accepted as the next iterate, we need to define the following actual reduction $Ared_k$ and predicted reduction $Pred_k$.

    The actual reduction Aredk in the merit function 2.27 in moving from (xk,λk) to (xk+γkPkdk,λk+1) is defined as follows

    $$Ared_k=\Phi(x_k,\lambda_k;\rho_k)-\Phi(x_k+\gamma_k P_k d_k,\lambda_{k+1};\rho_k).$$

    Also, we can write the actual reduction $Ared_k$ as follows:

    $$Ared_k=\ell(x_k,\lambda_k)-\ell(x_{k+1},\lambda_k)-\Delta\lambda_k^T h_{k+1}+\rho_k[\|h_k\|^2-\|h_{k+1}\|^2],\tag{2.29}$$

    where $\Delta\lambda_k=(\lambda_{k+1}-\lambda_k)$.

    The predicted reduction Predk is defined as follows

    $$Pred_k=-(P_k\nabla_x\ell(x_k,\lambda_k))^T\gamma_k d_k-\frac{1}{2}\gamma_k^2 d_k^T B_k d_k-\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d_k)+\rho_k[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d_k\|^2].\tag{2.30}$$

    Since $q_k(\gamma_k P_k d_k)=\ell_k+(P_k\nabla_x\ell_k)^T\gamma_k d_k+\frac{1}{2}\gamma_k^2 d_k^T B_k d_k$, $Pred_k$ can be written as follows:

    $$Pred_k=q_k(0)-q_k(\gamma_k P_k d_k)-\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d_k)+\rho_k[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d_k\|^2].\tag{2.31}$$

    ● How to update the penalty parameter ρk

    To ensure Predk is strictly positive, we use the following scheme to update the positive penalty parameter ρk

    Algorithm 2.1.: (Updating the penalty parameter ρk)

    Set ρk+1=ρk.

    If

    $$Pred_k\le\frac{\rho_k}{2}[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d_k\|^2],\tag{2.32}$$

    then set

    $$\rho_k=\frac{2[q_k(\gamma_k P_k d_k)-q_k(0)+\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d_k)]}{\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d_k\|^2}+c_0.\tag{2.33}$$

    End if.
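    A minimal sketch of Algorithm 2.1 (ours; the scalar arguments are the pieces of (2.31) evaluated at the current trial step, and a positive denominator is assumed):

```python
# A minimal sketch of Algorithm 2.1. q0 = q_k(0), q_step = q_k(γ_k P_k d_k),
# dlam_term = Δλ_k^T (h_k + (P_k ∇h_k)^T γ_k d_k), and vred > 0 is the decrease
# ||h_k||^2 - ||h_k + (P_k ∇h_k)^T γ_k d_k||^2 of the linearized violation.
def update_penalty(q0, q_step, dlam_term, vred, rho, c0=0.1):
    pred = q0 - q_step - dlam_term + rho * vred            # Pred_k as in (2.31)
    if pred <= 0.5 * rho * vred:                           # test (2.32)
        rho = 2.0 * (q_step - q0 + dlam_term) / vred + c0  # update (2.33)
        pred = q0 - q_step - dlam_term + rho * vred        # recompute Pred_k
    return rho, pred
```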

    ● How to test the step γkPkdk and update δk

    To decide whether the trial step $\gamma_k P_k d_k$ will be accepted in the next iteration or not, we use the following algorithm.

    Algorithm 2.2.: (Testing the step γkPkdk and updating δk)

    Step 0. Choose $0<\tau_1<\tau_2<1$, $0<\beta_1<1<\beta_2$, and $\delta_{\min}\le\delta_0\le\delta_{\max}$.

    Step 1. While $\frac{Ared_k}{Pred_k}<\tau_1$ or $Pred_k\le 0$.

    Do not accept the step and set $\delta_k=\beta_1\|d_k\|$.

    Compute a new trial step.

    End while.

    Step 2. If $\tau_1\le\frac{Ared_k}{Pred_k}<\tau_2$.

    Accept the step: xk+1=xk+γkPkdk.

    Set δk+1=max(δk,δmin).

    End if.

    Step 3. If $\frac{Ared_k}{Pred_k}\ge\tau_2$.

    Accept the step: xk+1=xk+γkPkdk.

    Set δk+1=min{δmax,max{δmin,β2δk}}.

    End if.

    Finally, the algorithm is stopped when either $\|Z_k^T P_k\nabla_x\ell_k\|+\|h_k\|\le\varepsilon_1$ for some $\varepsilon_1>0$, or $\|d_k\|\le\varepsilon_2$ for some $\varepsilon_2>0$.
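    A minimal sketch of Algorithm 2.2 for one trial step (ours; the default parameter values follow the Section 4 settings, with $\delta_{\max}$ fixed here for simplicity rather than set to $10^3\delta_0$):

```python
# A minimal sketch of Algorithm 2.2: returns (accepted, new trust-region radius).
def test_step(ared, pred, d_norm, delta, tau1=1e-4, tau2=0.75,
              beta1=0.5, beta2=2.0, delta_min=1e-3, delta_max=1e3):
    if pred <= 0 or ared / pred < tau1:
        return False, beta1 * d_norm                            # Step 1: reject, shrink
    if ared / pred < tau2:
        return True, max(delta, delta_min)                      # Step 2: accept, keep
    return True, min(delta_max, max(delta_min, beta2 * delta))  # Step 3: accept, enlarge
```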

    Main steps of the trust-region algorithm for solving subproblem 2.17 are summarized in the following algorithm.

    Algorithm 2.3. (Trust-region algorithm)

    Step 0. Start with $x_0\in int(\hat G)$. Evaluate $\lambda_0$, $P_0$, and $B_0$. Set $\rho_0=1$ and $c_0=0.1$.

    Choose ε1>0, ε2>0, and set k=0.

    Step 1. If $\|Z_k^T P_k\nabla_x\ell_k\|+\|h_k\|\le\varepsilon_1$, then stop.

    Step 2. (To compute dk)

    a) Compute dnk by solving trust-region subproblem 2.18.

    b) Compute ˉdtk by solving trust-region subproblem 2.22.

    c) Set dk=dnk+Zkˉdtk.

    Step 3. If $\|d_k\|\le\varepsilon_2$, then stop.

    Step 4. Compute γk using equation 2.26.

    Step 5. Update λk+1 using subproblem 2.28.

    Step 6. Update the penalty parameter using scheme 2.1.

    Step 7. Test the step γkPkdk and update δk by using algorithm 2.2.

    Step 8. Compute $P_{k+1}$ and $\theta_{k+1}$ using definitions (2.8) and (2.13) respectively.

    Step 9. Set k=k+1 and go to Step 1.

    The main steps for solving the NBLP problem (1.1) are summarized in the following algorithm.

    Algorithm 2.4. (Interior-point trust-region (IPTR) algorithm)

    Step 1. Add slack variables to the inequalities in NBLP problem (1.1) and convert it to problem (1.2).

    Step 2. By the KKT optimality conditions for the lower-level problem, NBLP problem (1.2) is equivalent to the one-level problem (1.3), which can be written in the form (1.4).

    Step 3. Use Newton's method and the Das scaling strategy to transform problem (1.4) into subproblem (2.16).

    Step 4. Use trust-region Algorithm 2.3 to solve subproblem (2.16).

    The following section is devoted to global convergence analysis for IPTR algorithm 2.4.

    Let $\Omega$ be a convex subset of $\mathbb{R}^n$ that contains all points $x_k\in int(\hat G)$ and $(x_k+\gamma_k P_k d_k)\in int(\hat G)$. On the set $\Omega$ we state the following general assumptions, under which the global convergence theory of the IPTR algorithm is proved.

    [GS1.] The functions $f_u(x)$ and $h(x)$ are in $C^2$ for all $x\in\Omega$.

    [GS2.] The matrix $P_k\nabla h_k$ has full column rank.

    [GS3.] All of $f_u(x)$, $\nabla f_u(x)$, $\nabla^2 f_u(x)$, $h(x)$, $\nabla h(x)$, $\nabla^2 h_i(x)$ for $i=1,...,m$, and $(P_k\nabla h_k)((P_k\nabla h_k)^T(P_k\nabla h_k))^{-1}$ are uniformly bounded in $\Omega$.

    [GS4.] The sequence of Lagrange multiplier vectors {λk} is bounded.

    [GS5.] The sequence of approximate Hessian matrices {Hk} is bounded.

    An immediate consequence of the above general assumptions is the existence of a positive constant $b_1$ such that

    $$\|Z_k^T B_k\|\le b_1,\qquad \|Z_k^T B_k Z_k\|\le b_1.\tag{3.1}$$

    In this section, we introduce some important results which are needed in the subsequent proof.

    The following lemma shows how accurate the definition of Predk is as an approximation to Aredk.

    Lemma 3.1. Under assumptions GS1-GS5, there exists a positive constant K1, such that

    $$|Ared_k-Pred_k|\le K_1\rho_k\gamma_k\|d_k\|^2.\tag{3.2}$$

    Proof. From Equations (2.29) and (2.30), and using the Cauchy-Schwarz inequality, we have

    $$\begin{array}{rl}
    |Ared_k-Pred_k|\le & \frac{1}{2}|\gamma_k^2 (P_k d_k)^T[H_k-\nabla^2_x\ell(x_k+\xi_1\gamma_k P_k d_k)]P_k d_k|\\
    & +\frac{1}{2}|\Delta\lambda_k^T\gamma_k^2 (P_k d_k)^T\nabla^2 h(x_k+\xi_2\gamma_k P_k d_k)P_k d_k|\\
    & +\frac{1}{2}|\gamma_k^2 d_k^T diag(\nabla_x\ell_k)\,diag(\theta_k)d_k|\\
    & +|\Delta\lambda_k^T[\nabla h_k-\nabla h(x_k+\xi_2\gamma_k P_k d_k)]^T P_k\gamma_k d_k|\\
    & +2\rho_k|h_k^T[\nabla h_k-\nabla h(x_k+\xi_2\gamma_k P_k d_k)]^T P_k\gamma_k d_k|\\
    & +\rho_k|\gamma_k^2 (P_k d_k)^T[\nabla h_k\nabla h_k^T-\nabla h(x_k+\xi_2\gamma_k P_k d_k)\nabla h(x_k+\xi_2\gamma_k P_k d_k)^T]P_k d_k|,
    \end{array}$$

    for some $\xi_1$, $\xi_2\in(0,1)$. Using the general assumptions GS1-GS5 and $0<\gamma_k\le 1$, we have

    $$|Ared_k-Pred_k|\le\gamma_k[\kappa_1\|d_k\|^2+\kappa_2\rho_k\|d_k\|^3+\kappa_3\rho_k\|d_k\|^2\|h_k\|],\tag{3.3}$$

    where $\kappa_1$, $\kappa_2$, and $\kappa_3$ are positive constants. Since $\rho_k\ge\rho_0=1$, $\|d_k\|\le\delta_{\max}$, and $\|h_k\|$ is uniformly bounded, inequality (3.2) holds.

    The following lemma shows that the predicted reduction at any iteration $k$ is at least proportional to the decrease in the 2-norm of the linearized constraints attained by the Cauchy step.

    Lemma 3.2. Under assumptions GS1-GS5, there exists a constant $K_2>0$, such that

    $$Pred_k\ge K_2\gamma_k\frac{\rho_k}{2}\|h_k\|\min\{\|h_k\|,\delta_k\}.\tag{3.4}$$

    Proof. Since $d^n_k$ is obtained by approximately solving subproblem (2.18) using the conjugate gradient method [21], the fraction of Cauchy decrease condition (2.19) holds. We consider two cases.

    Firstly, if $d^{ncp}_k=-\frac{\delta_k}{\|P_k\nabla h_k h_k\|}(P_k\nabla h_k h_k)$ and $\delta_k\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2\le\|P_k\nabla h_k h_k\|^3$, then

    $$\begin{array}{rl}
    \|h_k\|^2-\|h_k+(P_k\nabla h_k)^T d^{ncp}_k\|^2 &=-2(P_k\nabla h_k h_k)^T d^{ncp}_k-d^{ncpT}_k(P_k\nabla h_k)(P_k\nabla h_k)^T d^{ncp}_k\\
    &=2\delta_k\|P_k\nabla h_k h_k\|-\delta_k^2\frac{\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2}{\|P_k\nabla h_k h_k\|^2}\\
    &\ge 2\delta_k\|P_k\nabla h_k h_k\|-\delta_k\|P_k\nabla h_k h_k\|\\
    &=\delta_k\|P_k\nabla h_k h_k\|.
    \end{array}\tag{3.5}$$

    Secondly, if $d^{ncp}_k=-\frac{\|P_k\nabla h_k h_k\|^2}{\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2}(P_k\nabla h_k h_k)$ and $\delta_k\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2\ge\|P_k\nabla h_k h_k\|^3$, then

    $$\begin{array}{rl}
    \|h_k\|^2-\|h_k+(P_k\nabla h_k)^T d^{ncp}_k\|^2 &=-2(P_k\nabla h_k h_k)^T d^{ncp}_k-d^{ncpT}_k(P_k\nabla h_k)(P_k\nabla h_k)^T d^{ncp}_k\\
    &=2\frac{\|P_k\nabla h_k h_k\|^4}{\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2}-\frac{\|P_k\nabla h_k h_k\|^4}{\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2}\\
    &=\frac{\|P_k\nabla h_k h_k\|^4}{\|(P_k\nabla h_k)^T P_k\nabla h_k h_k\|^2}\\
    &\ge\frac{\|P_k\nabla h_k h_k\|^2}{\|P_k\nabla h_k(P_k\nabla h_k)^T\|}.
    \end{array}\tag{3.6}$$

    Using assumption GS2, we have

    $$\|P_k\nabla h_k h_k\|\ge\frac{\|h_k\|}{\|((P_k\nabla h_k)^T P_k\nabla h_k)^{-1}(P_k\nabla h_k)^T\|}.$$

    Then, from the above inequality, inequalities 3.5, 3.6, and using assumption GS3, we have

    $$\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T d^{ncp}_k\|^2\ge K_2\|h_k\|\min\{\|h_k\|,\delta_k\}.$$

    From the above inequality and 2.19, we have

    $$\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T d^n_k\|^2\ge K_2\|h_k\|\min\{\|h_k\|,\delta_k\}.\tag{3.7}$$

    Since $0<\gamma_k\le 1$, we have

    $$\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d^n_k\|^2\ge\gamma_k[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T d^n_k\|^2].$$

    From inequality (3.7) and the above inequality, we have

    $$\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d^n_k\|^2\ge K_2\gamma_k\|h_k\|\min\{\|h_k\|,\delta_k\}.\tag{3.8}$$

    From inequalities 2.32 and 3.8 we have

    $$Pred_k\ge K_2\gamma_k\frac{\rho_k}{2}\|h_k\|\min\{\|h_k\|,\delta_k\}.$$

    Lemma 3.3. Under assumptions GS1-GS5, there exists a positive constant K3, such that

    $$q_k(\gamma_k P_k d^n_k)-q_k(\gamma_k P_k d_k)\ge K_3\gamma_k\|Z_k^T\nabla q_k(P_k d^n_k)\|\min\left\{\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|}{\|\bar B_k\|},\Delta_k\right\}.\tag{3.9}$$

    Proof. Since the conjugate gradient method is used to solve subproblem (2.22) and obtain an approximate solution $\bar d^t_k$, the fraction of Cauchy decrease condition (2.23) holds, and we consider two cases.

    Firstly, if $\bar d^{tcp}_k=-\frac{\Delta_k}{\|Z_k^T\nabla q_k(P_k d^n_k)\|}(Z_k^T\nabla q_k(P_k d^n_k))$ and $\Delta_k(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)\le\|Z_k^T\nabla q_k(P_k d^n_k)\|^3$, then

    $$\begin{array}{rl}
    q_k(P_k d^n_k)-q_k(P_k(d^n_k+Z_k\bar d^{tcp}_k)) &=-(Z_k^T\nabla q_k(P_k d^n_k))^T\bar d^{tcp}_k-\frac{1}{2}\bar d^{tcpT}_k\bar B_k\bar d^{tcp}_k\\
    &=\Delta_k\|Z_k^T\nabla q_k(P_k d^n_k)\|-\frac{\Delta_k^2}{2\|Z_k^T\nabla q_k(P_k d^n_k)\|^2}[(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)]\\
    &\ge\Delta_k\|Z_k^T\nabla q_k(P_k d^n_k)\|-\frac{1}{2}\Delta_k\|Z_k^T\nabla q_k(P_k d^n_k)\|\\
    &=\frac{1}{2}\Delta_k\|Z_k^T\nabla q_k(P_k d^n_k)\|.
    \end{array}\tag{3.10}$$

    Secondly, if $\bar d^{tcp}_k=-\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|^2}{(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)}Z_k^T\nabla q_k(P_k d^n_k)$ and $\Delta_k(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)\ge\|Z_k^T\nabla q_k(P_k d^n_k)\|^3$, then

    $$\begin{array}{rl}
    q_k(P_k d^n_k)-q_k(P_k(d^n_k+Z_k\bar d^{tcp}_k)) &=-(Z_k^T\nabla q_k(P_k d^n_k))^T\bar d^{tcp}_k-\frac{1}{2}\bar d^{tcpT}_k\bar B_k\bar d^{tcp}_k\\
    &=\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|^4}{(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)}-\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|^4}{2(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)}\\
    &=\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|^4}{2(Z_k^T\nabla q_k(P_k d^n_k))^T\bar B_k Z_k^T\nabla q_k(P_k d^n_k)}\\
    &\ge\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|^2}{2\|\bar B_k\|}.
    \end{array}\tag{3.11}$$

    From inequalities (3.10) and (3.11), and using the necessary assumptions, we have

    $$q_k(P_k d^n_k)-q_k(P_k(d^n_k+Z_k\bar d^{tcp}_k))\ge K_3\|Z_k^T\nabla q_k(P_k d^n_k)\|\min\left\{\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|}{\|\bar B_k\|},\Delta_k\right\}.$$

    From condition (2.23) and the above inequality, we have

    $$q_k(P_k d^n_k)-q_k(P_k(d^n_k+Z_k\bar d^t_k))\ge K_3\|Z_k^T\nabla q_k(P_k d^n_k)\|\min\left\{\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|}{\|\bar B_k\|},\Delta_k\right\}.\tag{3.12}$$

    Since $0<\gamma_k\le 1$, we have

    $$q_k(\gamma_k P_k d^n_k)-q_k(\gamma_k P_k(d^n_k+Z_k\bar d^t_k))\ge\gamma_k[q_k(P_k d^n_k)-q_k(P_k(d^n_k+Z_k\bar d^t_k))].$$

    From the above inequality and inequality (3.12), we have

    $$q_k(\gamma_k P_k d^n_k)-q_k(\gamma_k P_k d_k)\ge K_3\gamma_k\|Z_k^T\nabla q_k(P_k d^n_k)\|\min\left\{\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|}{\|\bar B_k\|},\Delta_k\right\}.$$

    This completes the proof.

    The following lemma is needed in many forthcoming lemmas. In what follows, we use implicitly that $h_k+(P_k\nabla h_k)^T d^n_k=h_k+(P_k\nabla h_k)^T d_k$, since $(P_k\nabla h_k)^T Z_k=0$.

    Lemma 3.4. Under assumptions GS1-GS5, there exists a positive constant K4, such that

    $$q_k(0)-q_k(\gamma_k P_k d^n_k)-\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d_k)\ge -K_4\gamma_k\|h_k\|.\tag{3.13}$$

    Proof. Since $d^n_k$ is normal to the tangent space, we have

    $$\begin{array}{rl}
    \|d^n_k\| &=\|(P_k\nabla h_k)[(P_k\nabla h_k)^T(P_k\nabla h_k)]^{-1}(P_k\nabla h_k)^T d_k\|\\
    &=\|(P_k\nabla h_k)[(P_k\nabla h_k)^T(P_k\nabla h_k)]^{-1}[h_k+(P_k\nabla h_k)^T d_k-h_k]\|\\
    &\le\|(P_k\nabla h_k)[(P_k\nabla h_k)^T(P_k\nabla h_k)]^{-1}\|[\|h_k+(P_k\nabla h_k)^T d_k\|+\|h_k\|].
    \end{array}$$

    Using the fact that $\|h_k+(P_k\nabla h_k)^T d_k\|\le\|h_k\|$, we have

    $$\|d^n_k\|\le 2\|(P_k\nabla h_k)[(P_k\nabla h_k)^T(P_k\nabla h_k)]^{-1}\|\|h_k\|.$$

    From the above inequality and the necessary assumptions, we have

    $$\|d^n_k\|\le\kappa_4\|h_k\|.\tag{3.14}$$

    Since

    $$\begin{array}{rl}
    q_k(0)-q_k(\gamma_k P_k d^n_k)-\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d_k) &=-(P_k\nabla_x\ell_k)^T\gamma_k d^n_k-\frac{1}{2}\gamma_k^2 d^{nT}_k B_k d^n_k-\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d^n_k)\\
    &\ge-\gamma_k\|P_k\nabla_x\ell_k\|\|d^n_k\|-\frac{1}{2}\gamma_k^2\|B_k\|\|d^n_k\|^2-\|\Delta\lambda_k\|\|h_k+(P_k\nabla h_k)^T\gamma_k d^n_k\|\\
    &\ge-\gamma_k\left[\|P_k\nabla_x\ell_k\|+\frac{1}{2}\gamma_k\|B_k\|\|d^n_k\|\right]\|d^n_k\|-\|\Delta\lambda_k\|\|h_k\|.
    \end{array}$$

    From the above inequality, inequality (3.14), and the fact that $\|d^n_k\|\le\delta_{\max}$, we have

    $$q_k(0)-q_k(\gamma_k P_k d^n_k)-\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d_k)\ge -K_4\gamma_k\|h_k\|.$$

    This completes the proof.

    Lemma 3.5. Under assumptions GS1-GS5,

    $$Pred_k\ge K_3\gamma_k\|Z_k^T\nabla q_k(P_k d^n_k)\|\min\left\{\frac{\|Z_k^T\nabla q_k(P_k d^n_k)\|}{\|\bar B_k\|},\Delta_k\right\}-K_4\gamma_k\|h_k\|+\rho_k[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d_k\|^2].\tag{3.15}$$

    Proof. From Equation (2.31), we have

    $$\begin{array}{rl}
    Pred_k &=q_k(0)-q_k(\gamma_k P_k d_k)-\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d_k)+\rho_k[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d_k\|^2]\\
    &=[q_k(\gamma_k P_k d^n_k)-q_k(\gamma_k P_k d_k)]+[q_k(0)-q_k(\gamma_k P_k d^n_k)-\Delta\lambda_k^T(h_k+(P_k\nabla h_k)^T\gamma_k d_k)]\\
    &\quad+\rho_k[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_k d_k\|^2].
    \end{array}$$

    Using inequalities 3.9 and 3.13, we obtain the desired result.

    The following lemma shows that, if $\|Z_k^T P_k\nabla_x\ell_k\|\ge\varepsilon_1$ and $\|h_k\|\le\eta\delta_{k_i}$ at any trial iteration $k_i$, then the penalty parameter $\rho_k$ is not increased.

    Lemma 3.6. Under assumptions GS1-GS5, if $\|Z_k^T P_k\nabla_x\ell_k\|\ge\varepsilon_1$ and $\|h_k\|\le\eta\delta_{k_i}$ at any trial iteration $k_i$, then there exists a positive constant $K_5$, such that

    $$Pred_{k_i}\ge K_5\gamma_{k_i}\delta_{k_i}+\rho_{k_i}[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_{k_i}d_{k_i}\|^2],\tag{3.16}$$

    where η is given by

    $$0<\eta\le\min\left\{\frac{\sqrt 3}{2\kappa_4},\ \frac{\varepsilon_1}{2b_1\kappa_4\delta_{\max}},\ \frac{K_3\varepsilon_1}{8K_4}\min\left\{\frac{\varepsilon_1}{b_1\delta_{\max}},1\right\}\right\}.$$

    Proof. Since $\|Z_k^T P_k\nabla_x\ell_k\|\ge\varepsilon_1$ and $\|h_k\|\le\eta\delta_{k_i}$, and using inequalities (3.1) and (3.14), we have

    $$\|Z_k^T\nabla q_k(P_k d^n_{k_i})\|=\|Z_k^T(P_k\nabla_x\ell_k+B_k d^n_{k_i})\|\ge\|Z_k^T P_k\nabla_x\ell_k\|-\|Z_k^T B_k\|\|d^n_{k_i}\|\ge\varepsilon_1-b_1\kappa_4\|h_{k_i}\|\ge\varepsilon_1-b_1\kappa_4\eta\delta_{k_i}.$$

    But $\eta\le\frac{\varepsilon_1}{2b_1\kappa_4\delta_{\max}}$, hence

    $$\|Z_k^T\nabla q_k(P_k d^n_{k_i})\|\ge\frac{1}{2}\varepsilon_1.\tag{3.17}$$

    From inequality (3.14), the assumption $\|h_k\|\le\eta\delta_{k_i}$, and $\eta\le\frac{\sqrt 3}{2\kappa_4}$, we have $\|d^n_{k_i}\|\le\kappa_4\eta\delta_{k_i}\le\kappa_4\frac{\sqrt 3}{2\kappa_4}\delta_{k_i}=\frac{\sqrt 3}{2}\delta_{k_i}$. Since $\Delta_{k_i}=\sqrt{\delta_{k_i}^2-\|d^n_{k_i}\|^2}$, then $\Delta_{k_i}\ge\frac{1}{2}\delta_{k_i}$. Hence, from inequalities (3.15) and (3.17), we have

    $$Pred_{k_i}\ge\frac{K_3\gamma_{k_i}\varepsilon_1}{4}\min\left\{\frac{\varepsilon_1}{b_1\delta_{\max}},1\right\}\delta_{k_i}-K_4\gamma_{k_i}\eta\delta_{k_i}+\rho_{k_i}[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_{k_i}d_{k_i}\|^2].$$

    But $\eta\le\frac{K_3\varepsilon_1}{8K_4}\min\{\frac{\varepsilon_1}{b_1\delta_{\max}},1\}$, hence

    $$Pred_{k_i}\ge\frac{K_3\gamma_{k_i}\varepsilon_1}{8}\min\left\{\frac{\varepsilon_1}{b_1\delta_{\max}},1\right\}\delta_{k_i}+\rho_{k_i}[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_{k_i}d_{k_i}\|^2].$$

    The result follows by taking $K_5=\frac{K_3\varepsilon_1}{8}\min\{\frac{\varepsilon_1}{b_1\delta_{\max}},1\}$.

    The following lemma shows that, at any iteration $k$, we can find an acceptable step after a finite number of trials; equivalently, the condition $Ared_{k^j}/Pred_{k^j}\ge\tau_1$ will be satisfied for some finite $j$.

    Lemma 3.7. Under assumptions GS1-GS5, if $\|h_k\|>\varepsilon_1$, where $\varepsilon_1>0$, then $Ared_{k^j}/Pred_{k^j}\ge\tau_1$ will be satisfied for some finite $j$.

    Proof. From inequalities (3.2) and (3.4), and the assumption $\|h_k\|>\varepsilon_1$, we have

    $$\left|\frac{Ared_k}{Pred_k}-1\right|=\frac{|Ared_k-Pred_k|}{Pred_k}\le\frac{2K_1\gamma_k\delta_k^2}{K_2\gamma_k\varepsilon_1\min\{\varepsilon_1,\delta_k\}}.$$

    If the trial step $d_{k^j}$ gets rejected, then $\delta_{k^j}$ becomes small, and hence we have

    $$\left|\frac{Ared_{k^j}}{Pred_{k^j}}-1\right|\le\frac{2K_1\delta_{k^j}}{K_2\varepsilon_1}.$$

    That is, the acceptance rule will be met after a finite number of trials (i.e., for finite $j$), and this completes the proof.

    Lemma 3.8. Under assumptions GS1-GS5 and at any iteration $k$, if

    $$\|d_{k^j}\|\le\min\left\{\frac{(1-\tau_1)K_2}{4K_1},1\right\}\|h_k\|,\tag{3.18}$$

    at the jth trial step, then the step must be accepted.

    Proof. Assume that inequality (3.18) holds and the step $d_{k^j}$ is rejected. From the way of updating the trust-region radius, which is clarified in Algorithm 2.2, we have

    $$(1-\tau_1)<\frac{|Ared_{k^j}-Pred_{k^j}|}{Pred_{k^j}}.$$

    From the above inequality and inequalities (3.2), (3.4), and (3.18), we have

    $$(1-\tau_1)<\frac{|Ared_{k^j}-Pred_{k^j}|}{Pred_{k^j}}\le\frac{2K_1\|d_{k^j}\|^2}{K_2\|h_k\|\|d_{k^j}\|}\le\frac{1}{2}(1-\tau_1).$$

    This contradicts the assumption that $d_{k^j}$ was rejected. Hence the step must be accepted.

    Lemma 3.9. Under assumptions GS1-GS5 and for all trial iterates $j$ of any iteration $k$, we have

    $$\delta_{k^j}\ge\min\left\{\frac{\delta_{\min}}{b_2},\ \beta_1\frac{(1-\tau_1)K_2}{4K_1},\ \beta_1\right\}\|h_k\|,\tag{3.19}$$

    where $b_2=\sup_{x\in\Omega}\|h(x)\|$.

    Proof. Consider any trial iterate $k^j$. If $j=1$, then the previous step was accepted, and hence

    $$\delta_{k^j}=\delta_{k^1}\ge\delta_{\min}\ge\frac{\delta_{\min}}{b_2}\|h_k\|,\tag{3.20}$$

    where $b_2=\sup_{x\in\Omega}\|h(x)\|$.

    Now, if j>1, then there exists at least one rejected trial step. For all rejected trial steps, we have from Lemma 3.8,

    $$\|d_{k^i}\|>\min\left\{\frac{(1-\tau_1)K_2}{4K_1},1\right\}\|h_k\|,$$

    for all $i=1,2,...,j-1$. Since $d_{k^{j-1}}$ is a rejected trial step, then from the way of updating the trust-region radius, we have

    $$\delta_{k^j}=\beta_1\|d_{k^{j-1}}\|>\beta_1\min\left\{\frac{(1-\tau_1)K_2}{4K_1},1\right\}\|h_k\|.$$

    From the above inequality and inequality (3.20), we obtain the desired result.

    The following lemma shows that the sequence of trust-region radii $\{\delta_{k^j}\}$ is bounded away from zero if $\{\|h_k\|\}$ is bounded away from zero.

    Lemma 3.10. Under assumptions GS1-GS5, if $\|h_k\|\ge\varepsilon_1$ where $\varepsilon_1>0$, then there exists a constant $K_6>0$ such that

    $$\delta_{k^j}>K_6,\tag{3.21}$$

    for all trial iterates j of any iteration k.

    Proof. From Lemma 3.9 and the condition $\|h_k\|\ge\varepsilon_1$, the proof follows directly by taking $K_6=\min\{\frac{\delta_{\min}}{b_2},\beta_1\frac{(1-\tau_1)K_2}{4K_1},\beta_1\}\varepsilon_1$.

    Lemma 3.11. Under assumptions GS1-GS5, there exists a subsequence $\{k_i\}$ of the iteration sequence at which $\rho_k$ is increased, such that at any trial step $j$ of any iteration $k\in\{k_i\}$, we have

    $$\rho_{k^j}\|h_k\|\le K_7,\tag{3.22}$$

    where K7 is a positive constant.

    Proof. At any trial step $j$ of any iteration $k$, if $\rho_{k^j}$ is increased, then from equation (2.33), we have

    $$\begin{array}{rl}
    \frac{\rho_{k^j}}{2}[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_{k^j}d_{k^j}\|^2] &=[q_k(\gamma_{k^j}P_k d_{k^j})-q_k(\gamma_{k^j}P_k d^n_{k^j})]\\
    &\quad+[q_k(\gamma_{k^j}P_k d^n_{k^j})-q_k(0)+\Delta\lambda_{k^j}^T(h_k+(P_k\nabla h_k)^T\gamma_{k^j}d^n_{k^j})]\\
    &\quad+\frac{c_0}{2}[\|h_k\|^2-\|h_k+(P_k\nabla h_k)^T\gamma_{k^j}d_{k^j}\|^2].
    \end{array}$$

    Applying inequality 3.8 on the left hand side and inequalities 3.9, 3.13, and 3.14 on the right hand side, we have

    $$\begin{array}{rl}
    \frac{K_2\rho_{k^j}\gamma_{k^j}}{2}\|h_k\|\min\{\|h_k\|,\delta_{k^j}\} &\le -K_3\gamma_{k^j}\|Z_k^T\nabla q_k(P_k d^n_{k^j})\|\min\left\{\frac{\|Z_k^T\nabla q_k(P_k d^n_{k^j})\|}{\|\bar B_k\|},\Delta_{k^j}\right\}+K_4\gamma_{k^j}\|h_k\|\\
    &\quad+c_0\gamma_{k^j}\|P_k\nabla h_k h_k\|\|d^n_{k^j}\|+\frac{c_0\gamma_{k^j}^2}{2}\|(P_k\nabla h_k)^T\|^2\|d^n_{k^j}\|^2\\
    &\le\left[K_4+c_0\kappa_4\|P_k\nabla h_k h_k\|+\frac{c_0\kappa_4\gamma_{k^j}}{2}\|(P_k\nabla h_k)^T\|^2\|d^n_{k^j}\|\right]\gamma_{k^j}\|h_k\|.
    \end{array}$$

    From assumptions GS2 and GS3, and using the fact that $\|d^n_{k^j}\|\le\delta_{k^j}\le\delta_{\max}$, we have

    $$\rho_{k^j}\|h_k\|\min\{\|h_k\|,\delta_{k^j}\}\le\tilde K_7\|h_k\|.\tag{3.23}$$

    From inequalities 3.19 and 3.23, there exists a constant K7>0 such that

    $$\rho_{k^j}\|h_k\|\le K_7,$$

    at any trial step $j$ of any iteration $k\in\{k_i\}$.

    In the following lemma, we prove that the sequence $\{\|h_k\|\}$ is not bounded away from zero when $\{\rho_k\}$ is an unbounded sequence.

    Lemma 3.12. Under assumptions GS1-GS5, there exists a subsequence $\{k_i\}$ of the iteration sequence at which $\rho_k$ is increased, such that

    $$\lim_{k_i\to\infty}\|h_{k_i}\|=0.\tag{3.24}$$

    Proof. From Lemma 3.11 and the assumption that $\rho_k$ is increased without bound, we obtain the desired result.

    In the following section, we prove the main global convergence results for IPTR algorithm 2.4.

    In the following theorem we prove that the sequence of the iterates generated by algorithm 2.4 converges to the feasible set.

    Theorem 3.1. Under assumptions GS1-GS5, the sequence of iterates generated by the IPTR algorithm satisfies

    $$\lim_{k\to\infty}\|h_k\|=0.$$

    Proof. The proof is by contradiction, so we assume that $\limsup_{k\to\infty}\|h_k\|\ge\varepsilon_1$ where $\varepsilon_1>0$. This implies the existence of an infinite subsequence of indices $\{k_j\}$ indexing iterates that satisfy $\|h_k\|\ge\frac{\varepsilon_1}{2}$ for all $k\in\{k_j\}$. From Lemma 3.7, an acceptable step is found after a finite number of trials, so without loss of generality we assume that all members of the sequence $\{k_j\}$ are acceptable iterates. Now we consider two cases.

    Firstly, if the sequence of penalty parameters $\{\rho_k\}$ is unbounded, then there exists a subsequence $\{k_i\}$ of the iteration sequence at which $\rho_k$ is increased. Using Lemma 3.12, we have $\lim_{k_i\to\infty}\|h_{k_i}\|=0$. Therefore, there are no common elements between $\{k_i\}$ and $\{k_j\}$ for $k$ sufficiently large.

    From inequality 3.4 and Lemma 3.10, we have

    $$\frac{Ared_k}{\rho_k}\ge\tau_1\frac{Pred_k}{\rho_k}\ge\frac{\tau_1\varepsilon_1 K_2\gamma_k}{4}\min\left\{\frac{\varepsilon_1}{2},\delta_k\right\}\ge\frac{\tau_1\varepsilon_1 K_2\gamma_k}{4}\min\left\{\frac{\varepsilon_1}{2},\bar K_6\right\},\tag{3.25}$$

    for all $k\in\{k_j\}$, where $\bar K_6=\frac{\varepsilon_1}{2}\min\{\frac{\delta_{\min}}{b_2},\beta_1\frac{(1-\tau_1)K_2}{4K_1},\beta_1\}$. Since

    $$Ared_k=\Phi(x_k,\lambda_k;\rho_k)-\Phi(x_k+\gamma_k P_k d_k,\lambda_{k+1};\rho_k)=\ell(x_k,\lambda_k)-\ell(x_{k+1},\lambda_{k+1})+\rho_k[\|h_k\|^2-\|h_{k+1}\|^2],$$

    then from 3.25 we have

    $$\frac{Ared_k}{\rho_k}=\frac{\ell_k-\ell_{k+1}}{\rho_k}+\|h_k\|^2-\|h_{k+1}\|^2\ge\frac{\tau_1\varepsilon_1 K_2\gamma_k}{4}\min\left\{\frac{\varepsilon_1}{2},\bar K_6\right\}>0.\tag{3.26}$$

    Hence

    $$\frac{\ell_k-\ell_{k+1}}{\rho_k}+\|h_k\|^2-\|h_{k+1}\|^2\ge 0,\tag{3.27}$$

    for all acceptable steps generated by IPTR algorithm 2.4. Let $k\in\{k_j\}$ be an element between the two consecutive elements $k_{\hat i}$ and $k_{\hat i+1}$ of the sequence $\{k_i\}$. From inequality (3.26), we have

    $$\sum_{k=k_{\hat i}}^{k_{\hat i+1}-1}\frac{\ell_k-\ell_{k+1}}{\rho_k}+\|h_{k_{\hat i}}\|^2-\|h_{k_{\hat i+1}}\|^2\ge\frac{\tau_1\varepsilon_1 K_2\gamma_k}{4}\min\left\{\frac{\varepsilon_1}{2},\bar K_6\right\}>0.$$

    Since the value of $\rho_k$ is the same for all iterates $k_{\hat i},\ldots,k_{\hat i+1}-1$, we have

    $$\frac{\ell_{k_{\hat i}}-\ell_{k_{\hat i+1}}}{\rho_{k_{\hat i}}}+\|h_{k_{\hat i}}\|^2-\|h_{k_{\hat i+1}}\|^2\ge\frac{\tau_1\varepsilon_1 K_2\gamma_k}{4}\min\left\{\frac{\varepsilon_1}{2},\bar K_6\right\}.$$

    Since $\rho_k\to\infty$ as $k\to\infty$, and $|\ell_k|$ is bounded, we can write

    $$\|h_{k_{\hat i}}\|^2-\|h_{k_{\hat i+1}}\|^2\ge\frac{\tau_1\varepsilon_1 K_2\gamma_k}{8}\min\left\{\frac{\varepsilon_1}{2},\bar K_6\right\}>0,$$

    for $k_{\hat i}$ sufficiently large. But this contradicts Lemma 3.12.

    Secondly, if the sequence of penalty parameters $\{\rho_k\}$ is bounded, then there exists an integer $\bar k$ such that $\rho_k=\bar\rho$ for all $k\ge\bar k$. Since all the iterates of $\{k_j\}$ are acceptable, then for any $\tilde k\in\{k_j\}$ we have

    $$\Phi_{\tilde k}-\Phi_{\tilde k+1}=Ared_{\tilde k}\ge\tau_1 Pred_{\tilde k}.\tag{3.28}$$

    From Lemma 3.10 and inequality (3.4), we have for any $\tilde k\in\{k_j\}$ with $\tilde k\ge\bar k$

    $$Pred_{\tilde k}\ge\frac{K_2\gamma_{\tilde k}\bar\rho}{2}\|h_{\tilde k}\|\min\{\|h_{\tilde k}\|,\delta_{\tilde k}\}\ge\frac{K_2\gamma_{\tilde k}\bar\rho\varepsilon_1}{4}\min\left\{\frac{\varepsilon_1}{2\delta_{\max}},1\right\}\delta_{\tilde k}\ge K_8\delta_{\tilde k}\ge K_6 K_8,\tag{3.29}$$

    where $K_8=\frac{K_2\gamma_{\tilde k}\bar\rho\varepsilon_1}{4}\min\{\frac{\varepsilon_1}{2\delta_{\max}},1\}$. From inequalities (3.28) and (3.29), we have

    $$\Phi_{\tilde k}-\Phi_{\tilde k+1}\ge\tau_1 K_6 K_8>0.$$

    This contradicts the fact that $\{\Phi_k\}$ is bounded below when $\{\rho_k\}$ is bounded. Hence, in both cases we have a contradiction. Thus, the supposition is false and the theorem is proved.

    Theorem 3.2. Under assumptions GS1-GS5, the algorithm terminates because

    $$\lim_{k\to\infty}[\|Z_k^T P_k\nabla_x\ell_k\|+\|h_k\|]=0.$$

    Proof. Assume that IPTR algorithm 2.4 does not terminate. If some subsequence of $\{\|Z_k^T P_k\nabla_x\ell_k\|\}$ converges to zero, then the nontermination is immediately contradicted by Theorem 3.1.

    Now assume that, for $\bar k$ sufficiently large, there exists an index $\tilde k>\bar k$ such that $\|Z_{\tilde k}^T P_{\tilde k}\nabla_x\ell_{\tilde k}\|\ge\varepsilon_1$. Let $\{k_j\}$ be a subsequence of iterates satisfying $\|h_{k_j}\|>\eta\delta_{k_j}$; then $\lim_{k_j\to\infty}\delta_{k_j}=0$, since $\lim_{k_j\to\infty}\|h_{k_j}\|=0$. This implies the existence of an infinite sequence $\{k_j\}$ of rejected trial steps, which leads to a contradiction. To show this, we consider two cases.

    Firstly, if the sequence of penalty parameters $\{\rho_k\}$ is unbounded, then from inequalities (3.3) and (3.4), we have

    $$\begin{array}{rl}
    \frac{|Ared_{k_j}-Pred_{k_j}|}{Pred_{k_j}} &\le\frac{\kappa_1\|d_{k_j}\|^2+\kappa_2\rho_{k_j}\|d_{k_j}\|^3+\kappa_3\rho_{k_j}\|d_{k_j}\|^2\|h_{k_j}\|}{\frac{K_2}{2}\rho_{k_j}\|h_{k_j}\|\min\{\|h_{k_j}\|,\delta_{k_j}\}}\\
    &\le\frac{\kappa_1\|d_{k_j}\|^2+\kappa_2\rho_{k_j}\|d_{k_j}\|^3+\kappa_3\rho_{k_j}\|d_{k_j}\|^2\|h_{k_j}\|}{\frac{K_2}{2}\rho_{k_j}\|h_{k_j}\|\|d_{k_j}\|\min\{\eta,1\}}\\
    &\le\frac{2\kappa_1}{K_2\rho_{k_j}\eta\min\{\eta,1\}}+\left[\frac{2\kappa_2}{K_2}+\frac{2\kappa_3}{K_2\eta}\right]\frac{\delta_{k_j}}{\min\{\eta,1\}}.
    \end{array}$$

    As $\rho_{k_j}\to\infty$ and $\delta_{k_j}\to 0$, we get $\frac{|Ared_{k_j}-Pred_{k_j}|}{Pred_{k_j}}\to 0$. This means that for $k_j$ large enough, all trial steps $d_{k_j}$ must be accepted. This leads to a contradiction, so $\delta_{k_j}$ must be bounded away from zero in this case.

    Secondly, if the sequence of penalty parameters $\{\rho_k\}$ is bounded, then there exists an integer $\bar k$ such that $\rho_k=\bar\rho$ for all $k\ge\bar k$. Now we discuss three cases:

    1] If the previous step was accepted ($j=1$), then from the way of updating the trust-region radius in Algorithm 2.2, we have $\delta_{k^j}\ge\delta_{\min}$. That is, $\delta_{k^j}$ is bounded away from zero in this case.

    2] If $j>1$ and $\|h_{k^r}\|>\eta\delta_{k^r}$ for all $r=1,\ldots,j-1$, then

    $$(1-\tau_1)<\frac{|Ared_{k^r}-Pred_{k^r}|}{Pred_{k^r}},$$

    since all the trial steps on $\{k^j\}$ are rejected. From the above inequality and inequalities (3.2) and (3.4), we have

    $$(1-\tau_1)<\frac{|Ared_{k^r}-Pred_{k^r}|}{Pred_{k^r}}\le\frac{2K_1\|d_{k^r}\|}{K_2\|h_{k^r}\|\min\{1,\eta\}}.$$

    Hence

    $$\|d_{k^r}\|>\frac{K_2(1-\tau_1)\min\{1,\eta\}}{2K_1}\|h_{k^r}\|.$$

    But from the way of updating the trust-region radius in Algorithm 2.2, all the rejected trial steps satisfy $\delta_{k^r}=\beta_1\|d_{k^{r-1}}\|$, hence

    $$\delta_{k^r}=\beta_1\|d_{k^{r-1}}\|\ge\frac{K_2\beta_1\eta(1-\tau_1)\min\{1,\eta\}}{2K_1}\delta_{k^{r-1}}\ge\frac{K_2\beta_1\eta(1-\tau_1)\min\{1,\eta\}}{2K_1}\delta_{\min}.$$

    This means that δkr is bounded away from zero in this case.

    3] If $j>1$ and $\|h_{k^r}\|>\eta\delta_{k^r}$ does not hold for all $r$, then there exists an integer $i$ such that $\|h_{k^r}\|\le\eta\delta_{k^r}$ for all $r=1,\ldots,i$, and $\|h_{k^r}\|>\eta\delta_{k^r}$ for all $r=i+1,\ldots,j-1$. Since $\|h_{k^r}\|>\eta\delta_{k^r}$ for all $r=i+1,\ldots,j-1$, then as in the above case we can prove that $\delta_{k^r}$ is bounded away from zero.

    In the case where $\|h_{k^r}\|\le\eta\delta_{k^r}$ for all $r=1,\ldots,i$, for all rejected trial steps we have

    $$(1-\tau_1)<\frac{|Ared_{k^r}-Pred_{k^r}|}{Pred_{k^r}}.$$

    From inequality (3.2), Lemma 3.6, and the above inequality, we have

    $$(1-\tau_1)<\frac{|Ared_{k^r}-Pred_{k^r}|}{Pred_{k^r}}\le\frac{K_1\bar\rho\|d_{k^r}\|}{K_5}.$$

    Hence

    $$\|d_{k^r}\|>\frac{K_5(1-\tau_1)}{K_1\bar\rho}.$$

    From the way of updating the trust-region radius, we have for all rejected trial steps

    $$\delta_{k^r}=\beta_1\|d_{k^{r-1}}\|>\frac{\beta_1 K_5(1-\tau_1)}{K_1\bar\rho}.$$

    Hence, $\delta_{k^r}$ is bounded away from zero. This leads to a contradiction, and therefore for $k_j$ sufficiently large all the iterates satisfy $\|h_k\|\le\eta\delta_{k_j}$.

    For all successful steps, from the way of updating the trust-region radius and Lemma 3.6, we have for all $k\in\{k_j\}$ and $k\ge\bar k$

    $$\Phi_k-\Phi_{k+1}=Ared_k\ge\tau_1 Pred_k\ge\tau_1 K_5\gamma_k\delta_k.$$

    We proved in the above cases that $\delta_{k_j}$ is bounded away from zero; then $\Phi_k-\Phi_{k+1}>0$. This contradicts the fact that $\{\Phi_k\}$ is bounded below when $\{\rho_k\}$ is bounded. Hence, in both cases we have a contradiction. Thus, the supposition is false and the theorem is proved.

    In this section, the proposed IPTR algorithm is first applied to an engineering application: a two-echelon supply chain system with one manufacturer and one retailer.

    The manufacturer first purchases raw materials from the supplier; then, after the manufacturer's production and processing, the end products are sold to the retailer. This problem is formulated as a bilevel model for joint pricing and lot-sizing decisions; see [34].

    $$\begin{array}{rl}
    \max\limits_{t_1,t_2} & f_u=(t_2-\tilde P_s-\tilde T_c-\tilde M_c)t_1 t_3 y_1-0.5\,\tilde c_m\tilde T\tilde P_s t_3(y_1-1)-\tilde O_m t_1\\
    \mbox{s.t.} & \tilde P_s+\tilde T_c+\tilde M_c\le t_2\le 10,\quad t_1\ge 0,\\
    & \max\limits_{y_1,y_2} f_l=t_1 t_2 t_3 y_1(y_2-1)-0.5\,\tilde c_r\tilde T t_2 t_3-\tilde O_r t_1 y_1\\
    & \mbox{s.t. } 1\le y_2\le 5,\quad y_1\ge 0,
    \end{array}$$

    where $\tilde T=52$, $\tilde P_s=4$, $\tilde T_c=0.5$, $\tilde M_c=1$, $\tilde c_m=\tilde c_r=0.001$, $\tilde O_m=400$, and $\tilde O_r=200$. For more details about this application and its notation, see [34].

    We solve this model in the case where the manufacturer is the leader, who makes the first decision, and the retailer is the follower. Our results when applying Algorithm 2.4 are $t_1=5.8778$, $t_2=6.002$, $t_3=19710.195$, $y_1=7.691$, $y_2=2.6007$, $f_u=431230$, and $f_l=8548300$, which is close to the results reported in [34].

    Secondly, we introduce an extensive variety of numerical bilevel nonlinear programming test problems to clarify the effectiveness of our IPTR algorithm. Problems 1, 2, 6, 7, 13, and 14 have quadratic functions in both levels; in Problems 3, 4, 5, 8, and 9 all the inner-level functions are convex; and in Problem 10 [27] the inner problem is convex for fixed $t$. These problems are solved numerically with Algorithm 2.4. For each test example, 10 independent runs with different initial starting points are performed to observe the consistency of the outcome. Statistical results of all examples are summarized in Table 1, which shows that the results found by the IPTR algorithm 2.4 are equal or approximately equal to those of the compared algorithms in the literature.

    Table 1. Comparisons of the results by IPTR algorithm 2.4 and methods in the references.

    | Problem | (t,y) IPTR | fu / fl IPTR | iter / nfunc | CPUs | (t,y) Ref. | fu / fl Ref. |
    |---|---|---|---|---|---|---|
    | prob(1) | (0.8503, 0.0227, 0.03589) | -2.6764 / 0.0332 | 11 / 12 | 1.43 | (0.8438, 0.7657, 0) | -2.0769 / -0.5863 |
    | prob(2) | (0.609, 0.391, 0, 0, 1.828) | 0.6086 / 1.6713 | 10 / 14 | 1.987 | (0.609, 0.391, 0, 0, 1.828) | 0.6426 / 1.6708 |
    | prob(3) | (0.97, 3.14, 2.6, 1.8) | -8.92 / -6.05 | 6 / 8 | 2.9 | (0.97, 3.14, 2.6, 1.8) | -8.92 / -6.05 |
    | prob(4) | (0.5, 0.5, 0.5, 0.5) | -1 / 0 | 10 / 14 | 1.68 | (0.5, 0.5, 0.5, 0.5) | -1 / 0 |
    | prob(5) | (9.839, 10.059) | 96.809 / 0.0019 | 6 / 9 | 1.635 | (10.03, 9.969) | 100.58 / 0.001 |
    | prob(6) | (1.6879, 0.8805, 0) | -1.3519 / 7.4991 | 6 / 11 | 4.1 | NA | 3.57 / 2.4 |
    | prob(7) | (1, 0) | 17 / 1 | 12 / 13 | 1.9 | (1, 0) | 17 / 1 |
    | prob(8) | (0.75, 0.75, 0.75, 0.75) | -2.25 / 0 | 10 / 11 | 1.002 | (3/2, 3/2, 3/2, 3/2) | -2.1962 / 0 |
    | prob(9) | (11.138, 5) | 2209.8 / 222.52 | 10 / 13 | 1.95 | (11.25, 5) | 2250 / 197.753 |
    | prob(10) | (1, 0, 6.6387e-06) | 6.6387e-06 / -6.6387e-06 | 5 / 7 | 2.987 | (1, 0, 1) | 1 / -1 |
    | prob(11) | (24.972, 29.653, 5.0238, 9.7565) | 4.9101 / 0.01332 | 9 / 12 | 3.742 | (25, 30, 5, 10) | 5 / 0 |
    | prob(12) | (3, 5) | 9 / 0 | 8 / 9 | 1.23 | (3, 5) | 9 / 0 |
    | prob(13) | (0, 1.7405, 1.8497, 0.9692) | -15.548 / -1.4247 | 5 / 7 | 2.1 | (0, 2, 1.875, 0.9063) | -12.68 / -1.016 |
    | prob(14) | (10.016, 0.81967) | 81.328 / -0.3359 | 6 / 8 | 2.12 | (10.04, 0.1429) | 82.44 / 0.271 |
    | prob(15) | (0, 0.9, 0, 0.6, 0.4) | -29.2 / 3.2 | 5 / 6 | 20.512 | (0, 0.9, 0, 0.6, 0.4) | -29.2 / 3.2 |
    | prob(16) | (0, 0.9, 0, 0.6, 0.4, 0, 0, 0) | -29.2 / 0.3148 | 5 / 7 | 40.319 | (0, 0.9, 0, 0.6, 0.4, 0, 0, 0) | -29.2 / 0.3148 |


    Table 1 also includes the mean number of iterations (iter), the mean number of function evaluations (nfunc), and the mean CPU time in seconds (CPUs).

    For comparison, we have included the corresponding results and mean CPU times obtained by the methods in [31] (Table 2), [27] (Table 3), and [44] (Table 4), respectively. It is clear from the results that our approach is capable of treating nonlinear bilevel programming problems whether or not the upper and lower levels are convex, and that the computed results converge to optimal solutions similar or approximately equal to those reported in the literature. Finally, it is clear from the comparison of the solutions obtained by the IPTR algorithm with the literature that IPTR is able to find the optimal solution of all problems with a small number of iterations, a small number of function evaluations, and less time.

    Table 2. Comparisons of the results by IPTR (2.4) and the method in [31].

    | Problem | (t,y) IPTR | fu / fl IPTR | CPUs IPTR | (t,y) method [31] | fu / fl method [31] | CPUs method [31] |
    |---|---|---|---|---|---|---|
    | prob(1) | (0.8503, 0.0227, 0.03589) | -2.6764 / 0.0332 | 1.43 | (0.8462, 0.7692, 0) | -2.0769 / -0.5917 | 1.734 |
    | prob(2) | (0.609, 0.391, 0, 0, 1.828) | 0.6086 / 1.6713 | 1.987 | (0.6111, 0.3889, 0, 0, 1.8333) | 0.6389 / 1.6806 | 2.375 |
    | prob(3) | (0.97, 3.14, 2.6, 1.8) | -8.92 / -6.05 | 2.9 | (1.0316, 3.0978, 2.5970, 1.7929) | -8.9172 / -6.1370 | 3.315 |
    | prob(4) | (0.5, 0.5, 0.5, 0.5) | -1 / 0 | 1.68 | (0.5, 0.5, 0.5, 0.5) | -1 / 0 | 1.576 |
    | prob(5) | (9.839, 10.059) | 96.809 / 0.0019 | 1.635 | (10, 10) | 100 / 0 | 1.825 |
    | prob(6) | (1.6879, 0.8805, 0) | -1.3519 / 7.4991 | 4.1 | (1.8889, 0.8889, 0) | -1.2099 / 7.6173 | 4.689 |
    | prob(7) | (1, 0) | 17 / 1 | 1.9 | (1, 0) | 17 / 1 | 1.769 |
    | prob(8) | (0.75, 0.75, 0.75, 0.75) | -2.25 / 0 | 1.002 | (0.75, 0.75, 0.75, 0.75) | -2.25 / 0 | 1.124 |

    Table 3. Comparisons of the results by IPTR (2.4) and the method in [27].

    | Problem | (t,y) IPTR | fu / fl IPTR | CPUs IPTR | (t,y) method [27] | fu / fl method [27] | CPUs method [27] |
    |---|---|---|---|---|---|---|
    | prob(9) | (11.138, 5) | 2209.8 / 222.52 | 1.95 | (11.25, 5) | 2250 / 197.753 | 2.21 |
    | prob(10) | (1, 0, 6.6387e-06) | 6.6387e-06 / -6.6387e-06 | 1.9 | (1, 0, -1) | -1 / 1 | 3.38 |
    | prob(12) | (3, 5) | 9 / 0 | 1.23 | (3, 5) | 9 / 0 | - |

    Table 4. Comparisons of the results by IPTR (2.4) and the method in [44].

    | Problem | (t,y) IPTR | fu / fl IPTR | CPUs IPTR | (t,y) method [44] | fu / fl method [44] | CPUs method [44] |
    |---|---|---|---|---|---|---|
    | prob(3) | (0.97, 3.14, 2.6, 1.8) | -8.92 / -6.05 | 2.9 | (1.03, 3.097, 2.59, 1.79) | -8.92 / -6.14 | 11.854 |
    | prob(5) | (9.839, 10.059) | 96.809 / 0.0019 | 1.635 | (10, 10) | 100.014 / 4.93e-7 | 5.888 |
    | prob(6) | (1.6879, 0.8805, 0) | -1.3519 / 7.4991 | 4.1 | (1.8888, 0.888) | -1.2091 / 7.6145 | 25.332 |
    | prob(11) | (24.972, 29.653, 5.0238, 9.7565) | 4.9101 / 0.01332 | 3.742 | (0, 30, -10, 10) | 0 / 100 | 37.308 |
    | prob(13) | (0, 1.7405, 1.8497, 0.9692) | -15.548 / -1.4247 | 2.1 | (4.4e-7, 2, 1.875, 0.9063) | -12.65 / -1.021 | 14.42 |
    | prob(14) | (10.016, 0.81967) | 81.328 / -0.3359 | 2.12 | (10.0164, 0.8197) | 18.3279 / -0.3359 | 4.218 |
    | prob(15) | (0, 0.9, 0, 0.6, 0.4) | -29.2 / 3.2 | 20.512 | (0, 0.9, 0, 0.6, 0.4) | -29.2 / 3.2 | 45.39 |
    | prob(16) | (0, 0.9, 0, 0.6, 0.4, 0, 0, 0) | -29.2 / 0.3148 | 40.319 | (0, 0.9, 0, 0.6, 0.4, 0, 0, 0) | -29.2 / 0.3148 | 107.55 |


    Problem 1 [31]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=y_1^2+y_2^2+t^2-4t\\
    \mbox{s.t.} & 0\le t\le 2,\\
    & \min\limits_{y} f_l=y_1^2+0.5y_2^2+y_1y_2+(1-3t)y_1+(1+t)y_2,\\
    & \mbox{s.t. } 2y_1+y_2-2t\le 1,\ y_1\ge 0,\ y_2\ge 0.
    \end{array}$$
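    To make Steps 1 and 2 of Algorithm 2.4 concrete for this example (our illustration; the paper does not display this intermediate form), write the lower constraints as $g_l=(-2t+2y_1+y_2-1,\,-y_1,\,-y_2)^T\le 0$, add slacks $s=(s_1,s_2,s_3)\ge 0$, and introduce the lower-level multipliers $\mu=(\mu_1,\mu_2,\mu_3)$ as additional unknowns. The resulting smooth single-level problem of the form (1.3) is

    $$\begin{array}{rl}
    \min & y_1^2+y_2^2+t^2-4t\\
    \mbox{s.t.} & 2y_1+y_2+(1-3t)+2\mu_1-\mu_2=0,\\
    & y_1+y_2+(1+t)+\mu_1-\mu_3=0,\\
    & -2t+2y_1+y_2-1+s_1=0,\quad -y_1+s_2=0,\quad -y_2+s_3=0,\\
    & s\ge 0,\quad 0\le t\le 2,
    \end{array}$$

    which matches the form (1.4), with the bounds on $t$ and $s$ collected into $v\le x\le w$ and $m=n_2+m_2=5$ equality constraints.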

    Problem 2 [31]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=y_1^2+y_3^2-y_1y_3-4y_2-7t_1+4t_2\\
    \mbox{s.t.} & t_1+t_2\le 1,\ t_1\ge 0,\ t_2\ge 0,\\
    & \min\limits_{y} f_l=y_1^2+0.5y_2^2+0.5y_3^2+y_1y_2+(1-3t_1)y_1+(1+t_2)y_2,\\
    & \mbox{s.t. } 2y_1+y_2-y_3+t_1-2t_2+2\le 0,\ y_1\ge 0,\ y_2\ge 0,\ y_3\ge 0.
    \end{array}$$

    Problem 3 [31]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=0.1(t_1^2+t_2^2)-3y_1-4y_2+0.5(y_1^2+y_2^2)\\
    \mbox{s.t.} & \min\limits_{y} f_l=0.5(y_1^2+5y_2^2)-2y_1y_2-t_1y_1-t_2y_2,\\
    & \mbox{s.t. } -0.333y_1+y_2-2\le 0,\ y_1-0.333y_2-2\le 0,\ y_1\ge 0,\ y_2\ge 0.
    \end{array}$$

    Problem 4 [31]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=t_1^2-2t_1+t_2^2-2t_2+y_1^2+y_2^2\\
    \mbox{s.t.} & t_1\ge 0,\ t_2\ge 0,\\
    & \min\limits_{y} f_l=(y_1-t_1)^2+(y_2-t_2)^2,\\
    & \mbox{s.t. } 0.5\le y_1\le 1.5,\ 0.5\le y_2\le 1.5.
    \end{array}$$

    We obtained the numerical results of our algorithm using MATLAB (R2013a) (8.2.0.701) 64-bit (win64) and a starting point $x_0\in int(\hat G)$. The following parameter setting is used: $\delta_{\min}=10^{-3}$, $\delta_0=\max(\|s^{cp}_0\|,\delta_{\min})$, $\delta_{\max}=10^3\delta_0$, $\tau_1=10^{-4}$, $\tau_2=0.75$, $\beta_1=0.5$, $\beta_2=2$, $\hat\varepsilon=0.01$, $\varepsilon_1=10^{-8}$, and $\varepsilon_2=10^{-10}$.

    Problem 5 [31]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=t^2+(y-10)^2\\
    \mbox{s.t.} & -t+y\le 0,\ 0\le t\le 15,\\
    & \min\limits_{y} f_l=(t+2y-30)^2,\\
    & \mbox{s.t. } t+y\le 20,\ 0\le y\le 20.
    \end{array}$$

    Problem 6 [31]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=(t-1)^2+2y_1^2-2t\\
    \mbox{s.t.} & t\ge 0,\\
    & \min\limits_{y} f_l=(2y_1-4)^2+(2y_2-1)^2+ty_1,\\
    & \mbox{s.t. } 4t+5y_1+4y_2\le 12,\ -4t-5y_1+4y_2\le -4,\ 4t-4y_1+5y_2\le 4,\ -4t+4y_1+5y_2\le 4,\ y_1\ge 0,\ y_2\ge 0.
    \end{array}$$

    Problem 7 [31]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=(t-5)^2+(2y+1)^2\\
    \mbox{s.t.} & t\ge 0,\\
    & \min\limits_{y} f_l=(2y-1)^2-1.5ty,\\
    & \mbox{s.t. } -3t+y\le -3,\ t-0.5y\le 4,\ t+y\le 7,\ y\ge 0.
    \end{array}$$

    Problem 8 [31]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=t_1^2-3t_1+t_2^2-3t_2+y_1^2+y_2^2\\
    \mbox{s.t.} & t_1\ge 0,\ t_2\ge 0,\\
    & \min\limits_{y} f_l=(y_1-t_1)^2+(y_2-t_2)^2,\\
    & \mbox{s.t. } 0.5\le y_1\le 1.5,\ 0.5\le y_2\le 1.5.
    \end{array}$$

    Problem 9 [27]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=16t^2+9y^2\\
    \mbox{s.t.} & -4t+y\le 0,\ t\ge 0,\\
    & \min\limits_{y} f_l=(t+y-20)^4,\\
    & \mbox{s.t. } 4t+y-50\le 0,\ y\ge 0.
    \end{array}$$

    Problem 10 [27]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=t^3y_1+y_2\\
    \mbox{s.t.} & 0\le t\le 1,\\
    & \min\limits_{y} f_l=-y_2\\
    & \mbox{s.t. } ty_1\le 10,\ y_1^2+ty_2\le 1,\ y_2\ge 0.
    \end{array}$$

    Problem 11 [44]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=2t_1+2t_2-3y_1-3y_2-60\\
    \mbox{s.t.} & t_1+t_2+y_1-2y_2\le 40,\ 0\le t_1\le 50,\ 0\le t_2\le 50,\\
    & \min\limits_{y} f_l=(y_1-t_1+20)^2+(y_2-t_2+20)^2,\\
    & \mbox{s.t. } t_1-2y_1\ge 10,\ t_2-2y_2\ge 10,\ -10\le y_1\le 20,\ -10\le y_2\le 20.
    \end{array}$$

    Problem 12 [27]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=(t-3)^2+(y-2)^2\\
    \mbox{s.t.} & -2t+y-1\le 0,\ t-2y+2\le 0,\ t+2y-14\le 0,\ 0\le t\le 8,\\
    & \min\limits_{y} f_l=(y-5)^2\\
    & \mbox{s.t. } y\ge 0.
    \end{array}$$

    Problem 13 [44]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=-t_1^2-3t_2^2-4y_1+y_2^2\\
    \mbox{s.t.} & t_1^2+2t_2\le 4,\ t_1\ge 0,\ t_2\ge 0,\\
    & \min\limits_{y} f_l=2t_1^2+y_1^2-5y_2,\\
    & \mbox{s.t. } t_1^2-2t_1+2t_2^2-2y_1+y_2\ge -3,\ t_2+3y_1-4y_2\le 4,\ y_1\ge 0,\ y_2\ge 0.
    \end{array}$$

    Problem 14 [44]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=(t-1)^2+(y-1)^2\\
    \mbox{s.t.} & t\ge 0,\\
    & \min\limits_{y} f_l=0.5y^2+500y-50ty\\
    & \mbox{s.t. } y\ge 0.
    \end{array}$$

    Problem 15 [44]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=-8t_1-4t_2+4y_1-40y_2-4y_3\\
    \mbox{s.t.} & t_1\ge 0,\ t_2\ge 0,\\
    & \min\limits_{y} f_l=t_1+2t_2+y_1+y_2+2y_3,\\
    & \mbox{s.t. } -y_1+y_2+y_3\le 1,\ 2t_1-y_1+2y_2-0.5y_3\le 1,\ 2t_2+2y_1-y_2-0.5y_3\le 1,\ y_i\ge 0,\ i=1,2,3.
    \end{array}$$

    Problem 16 [44]:

    $$\begin{array}{rl}
    \min\limits_{t} & f_u=-8t_1-4t_2+4y_1-40y_2-4y_3\\
    \mbox{s.t.} & t_1\ge 0,\ t_2\ge 0,\\
    & \min\limits_{y} f_l=\dfrac{1+t_1+t_2+2y_1-y_2+y_3}{6+2t_1+y_1+y_2-3y_3},\\
    & \mbox{s.t. } -y_1+y_2+y_3+y_4=1,\ 2t_1-y_1+2y_2-0.5y_3+y_5=1,\ 2t_2+2y_1-y_2-0.5y_3+y_6=1,\ y_i\ge 0,\ i=1,...,6.
    \end{array}$$

    This paper presented a new technique for solving a nonlinear bilevel optimization problem, based on using slack variables with the KKT conditions to transform the NBLP problem into an equivalent smooth SONP problem. A Newton interior-point method with the Das scaling matrix is utilized to solve the equivalent smooth SONP problem effectively. Newton's method is a local method, so a trust-region technique is utilized to ensure global convergence from any starting point. In applying this methodology, we overcome some known difficulties in treating such problems:

    ● The trust-region technique induces strong global convergence, which is very important for solving smooth optimization problems, and it is more robust in dealing with rounding errors.

    ● Our approach transforms the nonsmooth one-level reformulation of problem (1.1) into the smooth problem (1.3).

    ● Using the interior-point Newton method yields fast local (quadratic) convergence to a stationary point.

    On the other hand, the global convergence theorems for the IPTR algorithm are presented, and the numerical results reflect the good behavior of our algorithm: the computed results converge to the optimal solutions. Finally, it is clear from the comparison of the solutions obtained by the IPTR algorithm with the literature that IPTR is able to find the optimal solution of all problems in a small number of iterations.

    The authors would like to thank the anonymous referees for their valuable comments and suggestions which have helped to greatly improve this paper.

    The authors declare that there is no conflict of interest in this paper.



    [1] D. Aksen, S. Akca, N. Aras, A bilevel partial interdiction problem with capacitated facilities and demand outsourcing, Comput. Oper. Res., 41 (2014), 346–358. https://doi.org/10.1016/j.cor.2012.08.013 doi: 10.1016/j.cor.2012.08.013
    [2] Y. Abo-Elnaga, M. El-Shorbagy, Multi-Sine Cosine Algorithm for Solving Nonlinear Bilevel Programming Problems, Int. J. Comput. Int. Sys., 13 (2020), 421–432. https://doi.org/10.2991/ijcis.d.200411.001 doi: 10.2991/ijcis.d.200411.001
    [3] Y. Abo-Elnaga, S. Nasr, Modified Evolutionary Algorithm and Chaotic Search for Bilevel Programming Problems, Symmetry, 12 (2020), 1–29. https://doi.org/10.3390/sym12050767 doi: 10.3390/sym12050767
    [4] Y. Abo-Elnaga, S. Nasr, K-means cluster interactive algorithm-based evolutionary approach for solving bilevel multi-objective programming problems, Alexandria Engineering Journal, 61 (2022), 811–827. https://doi.org/10.1016/j.aej.2021.04.098 doi: 10.1016/j.aej.2021.04.098
    [5] M. Bazaraa, H. Sherali, C. Shetty, Nonlinear programming theory and algorithms, John Wiley and Sons, 2006. https://doi.org/10.1002/0471787779
    [6] R. Byrd, Omojokun, Robust trust-region methods for nonlinearly constrained optimization, A talk presented at the Second SIAM Conference on Optimization, Houston, TX, 1987.
    [7] A. Burgard, P. Pharkya, C. Maranas, Optknock: a bilevel programming framework for identifying gene knockout strategies formicrobial strain optimization, Biotechnol. Bioeng., 84 (2003), 647–657. https://doi.org/10.1002/bit.10803 doi: 10.1002/bit.10803
    [8] O. Ben-Ayed, O. Blair, Computational difficulty of bilevel linear programming, Oper. Res., 38 (1990), 556–560. https://doi.org/10.1287/opre.38.3.556 doi: 10.1287/opre.38.3.556
    [9] R. Byrd, M. Hribar, J. Nocedal, An interior point algorithm for largescale nonlinear programming, SIAM J. Optim., 9 (1999), 877–900. https://doi.org/10.1137/S1052623497325107 doi: 10.1137/S1052623497325107
    [10] R. Byrd, J. Gilbert, J. Nocedal, A trust region method based on interior point techniques for nonlinear programming, Math. Program., 89 (2000), 149–185. https://doi.org/10.1007/PL00011391 doi: 10.1007/PL00011391
    [11] F. E. Curtis, O. Schenk, A. Wachter, An interior-point algorithm for large-scale nonlinear optimization with inexact step computations, SIAM J. Sci. Comput., 32 (2010), 3447–3475. https://doi.org/10.1137/090747634 doi: 10.1137/090747634
    [12] I. Das, An interior point algorithm for the general nonlinear programming problem with trust region globlization, Technical Report 96-61, Institute for Computer Applications in Science and Engineering, NASA Langley Research Center Hampton, VA, USA, 1996.
    [13] J. Dennis, M. Heinkenschloss, L. Vicente, Trust-region interior-point SQP algorithms for a class of nonlinear programming problems, SIAM J. Control Optim., 36 (1998), 1750–1794. https://doi.org/10.1137/S036012995279031 doi: 10.1137/S036012995279031
    [14] S. Dempe, Foundations of Bilevel Programming, Kluwer Academic Publishers, London, 2002.
    [15] H. Esmaeili, M. Kimiaei, An efficient implementation of a trust-region method for box constrained optimization, J. Appl. Math. Comput., 48 (2015), 495–517. https://doi.org/10.1007/s12190-014-0815-0 doi: 10.1007/s12190-014-0815-0
    [16] B. El-Sobky, A global convergence theory for an active trust region algorithm for solving the general nonlinear programming problem, Appl. Math. Comput., 144 (2003), 127–157. https://doi.org/10.1016/S0096-3003(02)00397-1 doi: 10.1016/S0096-3003(02)00397-1
    [17] B. El-Sobky, A Multiplier active-set trust-region algorithm for solving constrained optimization problem, Appl. Math. Comput., 219 (2012), 928–946. https://doi.org/10.1016/j.amc.2012.06.072 doi: 10.1016/j.amc.2012.06.072
    [18] B. El-Sobky, An interior-point penalty active-set trust-region algorithm, Journal of the Egyptian Mathematical Society, 24 (2016), 672–680. https://doi.org/10.1016/j.joems.2016.04.003 doi: 10.1016/j.joems.2016.04.003
    [19] B. El-Sobky, An active-set interior-point trust-region algorithm, Pac. J. Optim., 14 (2018), 125–159.
    [20] B. El-Sobky, Y. Abouel-Naga, Multi-objective optimal load flow problem with interior-point trust-region strategy, Electr. Pow. Syst. Res., 148 (2017), 127–135. https://doi.org/10.1016/j.epsr.2017.03.014 doi: 10.1016/j.epsr.2017.03.014
    [21] B. El-Sobky, Y. Abouel-Naga, A penalty method with trust-region mechanism for nonlinear bilevel optimization problem, J. Comput. Appl. Math., 340 (2018), 360–374. https://doi.org/10.1016/j.cam.2018.03.004 doi: 10.1016/j.cam.2018.03.004
    [22] B. El-Sobky, A. Abotahoun, An active-set algorithm and a trust-region approach in constrained minimax problem, Comput. Appl. Math., 37 (2018), 2605–2631. https://doi.org/10.1007/s40314-017-0468-3 doi: 10.1007/s40314-017-0468-3
    [23] B. El-Sobky, A. Abotahoun, A Trust-Region Algorithm for Solving Mini-Max Problem, J. Comput. Math., 36 (2018), 881–902. https://doi.org/10.4208/jcm.1705-m2016-0735 doi: 10.4208/jcm.1705-m2016-0735
    [24] T. Edmunds, J. Bard, Algorithms for nonlinear bilevel mathematical programs, IEEE transactions on Systems, Man, and Cybernetics, 21 (1991), 83–89. https://doi.org/10.1109/21.101139 doi: 10.1109/21.101139
    [25] J. Falk, J. Liu, On bilevel programming, Part Ⅰ: general nonlinear cases, Math. Program., 70 (1995), 47–72. https://doi.org/10.1007/BF01585928 doi: 10.1007/BF01585928
    [26] M. Hestenes, Muliplier and gradient methods, J. Optimiz. Theory App., 4 (1969), 303–320. https://doi.org/10.1007/BF00927673 doi: 10.1007/BF00927673
    [27] Z. H. Gumus, C. A. Flouda, Global Optimization of Nonlinear Bilevel Programming Problems, J. Global Optim., 20 (2001), 1–31.
    [28] V. Gonzlez, J. Vallejo, G. Serrano, A scatter search algorithm for solving a bilevel optimization model for determining highway tolls, Comput. Syst., 19 (2015), 3529–3549. https://doi.org/10.13053/cys-19-1-1916 doi: 10.13053/cys-19-1-1916
    [29] G. Hibino, M. Kainuma, Y. Matsuoka, Two-level mathematical programming for analyzing subsidy options to reduce greenhouse-gas emissions, Tech. Report WP-96-129, IIASA, Laxenburg, Austria, 1996.
    [30] D. Kouri, M. Heinkenschloss, D. Ridzal, B. van Bloemen Waanders, A Trust-Region Algorithm with Adaptive Stochastic Collocation for PDE Optimization under Uncertainty, SIAM J. Sci. Comput., 35 (2020), 1847–1879. https://doi.org/10.1137/120892362 doi: 10.1137/120892362
    [31] H. Li, Y. Jiao, L. Zhang, Orthogonal genetic algorithm for solving quadratic bilevel programming problems, J. Syst. Eng. Electron., 21 (2010), 763–770. https://doi.org/10.3969/j.issn.1004-4132.2010.05.008 doi: 10.3969/j.issn.1004-4132.2010.05.008
    [32] N. Li, D. Xue, W. Sun, J. Wang, A stochastic trust-region method for unconstrained optimization problems, Math. Probl. Eng., (2019). https://doi.org/10.1155/2019/8095054 doi: 10.1155/2019/8095054
    [33] Y. Lva, T. Hua, G. Wanga, Z. Wanb, A neural network approach for solving nonlinear bilevel programming problem, Comput. Math. Appl., 55 (2008), 2823–2829. https://doi.org/10.1016/j.camwa.2007.09.010 doi: 10.1016/j.camwa.2007.09.010
    [34] W. Ma, M. Wang, X. Zhu, Improved particle swarm optimization based approach for bilevel programming problem-an application on supply chain model, Int. J. Mach. Learn. Cyber, 5 (2014), 281–290. https://doi.org/10.1007/s13042-013-0167-3 doi: 10.1007/s13042-013-0167-3
    [35] L. Ma, G. Wang, A Solving Algorithm for Nonlinear Bilevel Programing Problems Based on Human Evolutionary Model, Algorithms, 13 (2020), 1–12. https://doi.org/10.3390/a13100260 doi: 10.3390/a13100260
    [36] L. F. Niu, Y. Yuan, A new trust region algorithm for nonlinear constrained optimization, J. Comput. Math., 28 (2010), 72–86. https://doi.org/10.4208/jcm.2009.09-m2924 doi: 10.4208/jcm.2009.09-m2924
    [37] E. Omojokun, Trust-region strategies for optimization with nonlinear equality and inequality constraints, PhD thesis, Department of Computer Science, University of Colorado, Boulder, Colorado, 1989.
    [38] T. Steihaug, The conjugate gradient method and trust-region in large scale optimization, Siam J. Numer. Anal., 20 (1983), 626–637. https://doi.org/10.1137/0720042 doi: 10.1137/0720042
    [39] S. Sadatrasou, M. Gholamian, K. Shahanaghi, An application of data mining classification and bi-level programming for optimal credit allocation, Decis. Sci. Lett., 4 (2015), 35–50. https://doi.org/10.5267/j.dsl.2014.9.005 doi: 10.5267/j.dsl.2014.9.005
    [40] G. Savard, J. Gauvin, The steepest descent direction for the nonlinear bilevel programming problem, Oper. Res. Lett., 15 (1994), 265–272. https://doi.org/10.1016/0167-6377(94)90086-8 doi: 10.1016/0167-6377(94)90086-8
    [41] N. Thoai, Y. Yamamoto, A. Yoshise, Global optimization method for solving mathematical programs with linear complementarity constraints, Discussion Paper No. 987, Institute of Policy and Planning Sciences, University of Tsukuba, Japan, 2002.
    [42] X. Wang, Y. Yuan, A trust region method based on a new affine scaling technique for simple bounded optimization, Optimization Methods and Software, 28 (2013), 871–888. https://doi.org/10.1080/10556788.2011.622378 doi: 10.1080/10556788.2011.622378
    [43] X. Wang, Y. Yuan, An augmented Lagrangian trust region method for equality constrained optimization, Optimization Methods and Software, 30 (2015), 559–582. https://doi.org/10.1080/10556788.2014.940947 doi: 10.1080/10556788.2014.940947
    [44] Y. Wang, Y. Jiao, H. Li, An evolutionary algorithm for solving nonlinear bilevel programming based on a new constraint-Handling scheme, IEEE T. Syst. Man Cy. C, 35 (2005), 221–232. https://doi.org/10.1109/TSMCC.2004.841908 doi: 10.1109/TSMCC.2004.841908
    [45] Y. Yuan, Recent advances in trust region algorithms, Math. Program. Ser. B, 151 (2015), 249–281. https://doi.org/10.1007/s10107-015-0893-2 doi: 10.1007/s10107-015-0893-2
    [46] M. Zeng, Q. Ni, A new trust region method for nonlinear equations involving fractional mode, Pac. J. Optim., 15 (2019), 317–329.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)