Research article

A nonmonotone trust region technique with active-set and interior-point methods to solve nonlinearly constrained optimization problems

  • Received: 16 October 2024 Revised: 23 January 2025 Accepted: 29 January 2025 Published: 12 February 2025
  • MSC : 49M37, 65K05, 90C30, 90C55

  • This study is devoted to incorporating a nonmonotone strategy with an automatically adjusted trust-region radius to propose a more efficient hybrid of trust-region approaches for constrained optimization problems. First, the active-set strategy was used with a penalty and Newton's interior-point method to convert a nonlinearly constrained optimization problem to an equivalent unconstrained optimization problem. Second, a nonmonotone trust region was utilized to guarantee convergence from any starting point to a stationary point. Third, a global convergence theory for the proposed algorithm was presented under some assumptions. Finally, the proposed algorithm was tested on well-known test problems (the CUTE collection); three engineering design problems were solved, and the results were compared with those of other respected optimizers. Based on the results, the suggested approach generally provides better approximate solutions and requires fewer iterations than the other algorithms under consideration. The performance of the proposed algorithm was also investigated, and the computational results clarified that the suggested algorithm is competitive with, and often better than, other optimization algorithms.

    Citation: Bothina El-Sobky, Yousria Abo-Elnaga, Gehan Ashry. A nonmonotone trust region technique with active-set and interior-point methods to solve nonlinearly constrained optimization problems[J]. AIMS Mathematics, 2025, 10(2): 2509-2540. doi: 10.3934/math.2025117




    In this paper, we will consider the following nonlinear constrained optimization problem

    \begin{array}{cl} minimize & f(x), \\ subject \; to & P_i(x) = 0, \;\; i \in \tilde{E}, \\ & P_i(x) \leq 0, \;\; i \in \tilde{I}, \\ & \hat{u} \leq x \leq \hat{v}, \end{array} (1.1)

    where f: \Re^n \rightarrow \Re and P_i(x): \Re^n \rightarrow \Re, i = 1, \ldots, m, such that

    \tilde{E} \cup \tilde{I} = \{1, \ldots, m\}

    and

    \tilde{E} \cap \tilde{I} = \emptyset,

    are twice continuously differentiable and m < n. We denote the feasible set

    E = \{x : \hat{u} \leq x \leq \hat{v}\}

    and the strict interior feasible set

    int(E) = \{x : \hat{u} < x < \hat{v}\},

    where

    \hat{u} \in \{\Re \cup \{-\infty\}\}^n, \;\; \hat{v} \in \{\Re \cup \{+\infty\}\}^n,

    and \hat{u} < \hat{v}.

    This work combines an active-set strategy with the penalty approach to transform problem (1.1) into an unconstrained optimization problem with bounded variables. To solve this bound-constrained problem, Newton's interior-point method, which was suggested in [1] and used by [2,3,4,5], is utilized. However, Newton's method is a local method, and it may not converge if the starting point is far from a stationary point. A nonmonotone trust-region mechanism deals with this problem and guarantees convergence from any starting point to a stationary point.

    A trust-region technique can induce strong global convergence and is a very important method for solving unconstrained and constrained optimization problems [6,7,8,9]. The trust-region technique is more robust when dealing with rounding errors. One advantage of this technique is that it does not require the model's objective function to be convex.

    A critical aspect of trust-region approaches is the strategy for determining the trust-region radius \Delta_k at each iteration. The standard trust-region strategy is predicated on the agreement between the objective function and the model. The radius of the trust region is updated according to the ratio

    r_k = \frac{Ared_k}{Pred_k},

    where Ared_k represents the actual reduction and Pred_k represents the predicted reduction. It is safe to increase \Delta_k in the next iterate when r_k is close to 1, due to a good agreement between the objective function and the model over the current trust region. Otherwise, \Delta_k must be reduced.

    It is well known that the standard trust-region radius \Delta_k is independent of the gradient and Hessian of the objective function, so we cannot know whether the radius \Delta_k is convenient for the whole implementation. This situation increases the number of subproblems to solve in the inner steps of the method, decreasing its efficiency. If we reduce the number of ineffective iterations, we can decrease the number of subproblems solved in each step. The authors in [10] proposed a method for determining the initial radius by monitoring the agreement between the objective function and the model along the steepest descent direction.

    The authors in [11] proposed the first customizable technique to reduce the number of solved subproblems. This technique used the gradient and Hessian information from the current iteration to calculate the trust-region radius \Delta_k without requiring an initial trust-region radius.

    Motivated by the idea proposed in [11], the authors in [12] proposed an automatically adjustable radius for trust-region methods and demonstrated that nonmonotone trust-region methods inherit the conventional trust-region method's strong convergence features. On the other hand, the authors in [13] presented a nonmonotone strategy for line search methods for unconstrained optimization problems. Numerical experiments and theoretical analysis have confirmed the effectiveness of this strategy in improving both the possibility of obtaining the global optimum and the rate of convergence of algorithms [14]. Motivated by these outstanding results, many researchers have been interested in combining the nonmonotone strategy with trust-region methods [15,16].

    Nonmonotone approaches alter the numerator of the ratio r_k when comparing Ared_k to Pred_k. One of the most common nonmonotone ratios is defined as follows:

    \tilde{r}_k = \frac{f_{l(k)} - f(x_k + d_k)}{Pred_k},

    where

    f_{l(k)} = \max\limits_{0 \leq j \leq m(k)} \{f_{k-j}\},

    in which

    m(0) = 0

    and

    m(k) = \min\{m(k-1) + 1, N\}

    with an integer constant N \geq 0. It has been proven that nonmonotone trust-region methods inherit the robust convergence properties of the monotone trust-region method. Numerical experiments with nonmonotone trust-region methods have shown that they are more efficient than the monotone versions, especially in the presence of narrow, curved valleys. Although the nonmonotone strategy in [13] performs well in many cases, it has some drawbacks, two important instances of which are the following:

    ● In some iterations, a good function value is essentially discarded due to the max term in

    f_{l(k)} = \max\limits_{0 \leq j \leq m(k)} \{f_{k-j}\}.

    ● The numerical performance in some cases depends seriously on the choice of the parameter N.

    The authors in [17] proposed a new nonmonotone strategy for line search methods based on a weighted average of previous successive iterations. This strategy is generally efficient and promising for unconstrained optimization and can overcome the mentioned drawbacks. In this strategy, f_{l(k)} is replaced with a weighted average of previous successive iterations, C_k, which is defined as follows:

    C_k = \begin{cases} f(x_k), & \text{if } k = 0, \\ \frac{\eta_{k-1} Q_{k-1} C_{k-1} + f(x_k)}{Q_k}, & \text{if } k \geq 1, \end{cases}

    and

    Q_k = \begin{cases} 1, & \text{if } k = 0, \\ \eta_{k-1} Q_{k-1} + 1, & \text{if } k \geq 1, \end{cases}

    such that

    0 \leq \eta_{min} \leq \eta_{k-1} \leq \eta_{max} \leq 1,

    where \eta_k is updated as follows:

    \eta_k = \begin{cases} 0.5 \eta_0, & \text{if } k = 1, \\ 0.5(\eta_{k-1} + \eta_{k-2}), & \text{if } k \geq 2. \end{cases} (1.2)
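    For concreteness, the following MATLAB sketch (our illustration; the variable names are not from the paper) performs one update of the pair (Q_k, C_k) and of \eta_k according to the recursions above and (1.2):

    % One step of the nonmonotone bookkeeping (illustrative sketch).
    % Inputs: f_new = f(x_{k+1}); C, Q = current weighted average and weight;
    % eta, eta_prev = eta_k and eta_{k-1}. Outputs: the updated quantities.
    function [C, Q, eta, eta_prev] = nonmonotone_update(f_new, C, Q, eta, eta_prev)
        Q_next = eta*Q + 1;                 % Q_{k+1} = eta_k*Q_k + 1
        C      = (eta*Q*C + f_new)/Q_next;  % C_{k+1} = (eta_k*Q_k*C_k + f(x_{k+1}))/Q_{k+1}
        Q      = Q_next;
        eta_next = 0.5*(eta + eta_prev);    % eta update of (1.2) for k >= 2
        eta_prev = eta;  eta = eta_next;
    end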

    Motivated by the advantage of the nonmonotone strategy in the trust-region framework [17], the authors in [18] suggested a new nonmonotone trust-region method in which the ratio \tilde{r}_k is changed as follows:

    \tilde{r}_k = \frac{C_k - f(x_k + d_k)}{Pred_k}.

    The investigation proved that combining the nonmonotone strategy of [17] with the trust region yields a new method that inherits the strong theoretical properties of the trust-region method as well as the computational robustness of the nonmonotone strategy.

    Motivated by the nonmonotone trust-region strategy in [18], we will utilize it in our proposed method. We expect it will significantly decrease the total number of iterations and function evaluations. We will clarify that under some conditions, the proposed nonmonotone trust-region active-set penalty (NTRAI) algorithm has global convergence properties.

    Furthermore, the applicability of the NTRAI approach to solving problem (1.1) was examined using well-known test problems (the CUTE collection), three mechanical engineering problems from the most recent test suite [19], and a nonconvex problem from [20]. Numerical experiments show that the NTRAI method exceeds rival algorithms in terms of efficacy.

    Some notations are used throughout this paper and are clarified at the end of this section. The paper is organized as follows: In Section 2, a detailed description of the main steps of the NTRAI algorithm for constrained optimization problems is introduced. Section 3 is devoted to the global convergence theory of the NTRAI algorithm under suitable conditions. Section 4 contains a Matlab implementation of the NTRAI algorithm and numerical results, including three mechanical engineering problems. Finally, Section 5 contains concluding remarks.

    To express function values at a particular point, we use the symbols

    f_k = f(x_k), \; \nabla f_k = \nabla f(x_k), \; \nabla^2 f_k = \nabla^2 f(x_k), \; P_k = P(x_k), \; \nabla P_k = \nabla P(x_k), \; Y_k = Y(x_k), \; Z_k = Z(x_k),

    and so on. We denote the Hessian of the objective function f_k, or an approximation to it, by P_k. Finally, every norm is the l_2-norm.

    In this section, we first present a complete description of the significant steps of the active-set strategy using the penalty technique and Newton's interior-point approach. Then the nonmonotone trust-region algorithm's essential phases are presented. Finally, the key stages of applying the NTRAI method to problem (1.1) are shown.

    Consider the active-set strategy introduced in [21] and used by [5,7]. We will introduce a diagonal matrix Z(x) \in \Re^{m \times m} whose diagonal entries are

    z_i(x) = \begin{cases} 1, & \text{if } i \in \tilde{E}, \\ 1, & \text{if } i \in \tilde{I} \text{ and } P_i(x) \geq 0, \\ 0, & \text{if } i \in \tilde{I} \text{ and } P_i(x) < 0, \end{cases} (2.1)

    where i = 1, \ldots, m. Utilizing the diagonal matrix (2.1), we can write problem (1.1) as follows

    \begin{array}{cl} minimize & f(x), \\ subject \; to & P(x)^T Z(x) P(x) = 0, \\ & \hat{u} \leq x \leq \hat{v}, \end{array}

    where

    P(x) = (P_1(x), \ldots, P_m(x))^T

    is a twice continuously differentiable function. The above problem is converted to the following equivalent problem by utilizing the penalty method [22]

    \begin{array}{cl} minimize & f(x) + \frac{\rho}{2} \|Z(x) P(x)\|^2, \\ subject \; to & \hat{u} \leq x \leq \hat{v}, \end{array} (2.2)

    where \rho > 0 is a parameter.

    Let

    \tilde{\phi}(x; \rho) = f(x) + \frac{\rho}{2} \|Z(x) P(x)\|^2, (2.3)

    and then we will define a Lagrangian function associated with problem (2.2) as follows

    L(x, \lambda_{\hat{u}}, \lambda_{\hat{v}}; \rho) = \tilde{\phi}(x; \rho) - \lambda^T_{\hat{u}}(x - \hat{u}) - \lambda^T_{\hat{v}}(\hat{v} - x), (2.4)

    where \lambda_{\hat{u}} and \lambda_{\hat{v}} are Lagrange multiplier vectors associated with the inequality constraints x - \hat{u} \geq 0 and \hat{v} - x \geq 0, respectively.
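    As an illustration of (2.1)–(2.3), the following MATLAB sketch evaluates the diagonal of Z(x) and the penalty function \tilde{\phi}(x; \rho); the function handles f and P and the index vectors E_idx and I_idx (for \tilde{E} and \tilde{I}) are our own naming, not the paper's:

    % Evaluate z(x) of (2.1) and phi(x; rho) of (2.3) (illustrative sketch).
    function [phi, z] = penalty_phi(f, P, x, rho, E_idx, I_idx)
        Px = P(x);                          % constraint values, an m-vector
        z  = zeros(numel(Px), 1);
        z(E_idx) = 1;                       % equality constraints: always counted
        z(I_idx) = double(Px(I_idx) >= 0);  % inequalities: counted only where P_i(x) >= 0
        phi = f(x) + 0.5*rho*norm(z .* Px)^2;   % f + (rho/2)*||Z(x)P(x)||^2
    end

    In effect, Z(x)P(x) simply zeroes out the inactive inequality constraints before the quadratic penalty is applied.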

    A point x \in E will be a local minimizer of problem (2.2) if there exist multiplier vectors \lambda_{\hat{u}}(x) \in \Re^n_+ and \lambda_{\hat{v}}(x) \in \Re^n_+ such that the following conditions are satisfied

    \nabla \tilde{\phi}(x; \rho) - \lambda_{\hat{u}}(x) + \lambda_{\hat{v}}(x) = 0, (2.5)
    \lambda^{(j)}_{\hat{u}}(x^{(j)} - \hat{u}^{(j)}) = 0, (2.6)
    \lambda^{(j)}_{\hat{v}}(\hat{v}^{(j)} - x^{(j)}) = 0, (2.7)

    where

    \nabla \tilde{\phi}(x; \rho) = \nabla f(x) + \rho \nabla P(x) Z(x) P(x).

    Motivated by the interior-point method introduced in [1] and used by [2,4,8,9], we define a diagonal scaling matrix Y(x) whose diagonal elements are given by

    y^{(j)}(x) = \begin{cases} \sqrt{x^{(j)} - \hat{u}^{(j)}}, & \text{if } (\nabla \tilde{\phi}(x; \rho))^{(j)} \geq 0 \text{ and } \hat{u}^{(j)} > -\infty, \\ \sqrt{\hat{v}^{(j)} - x^{(j)}}, & \text{if } (\nabla \tilde{\phi}(x; \rho))^{(j)} < 0 \text{ and } \hat{v}^{(j)} < +\infty, \\ 1, & \text{otherwise.} \end{cases} (2.8)

    Utilizing the matrix Y(x), conditions (2.5)–(2.7) are equivalent to the following nonlinear system

    Y^2(x) \nabla \tilde{\phi}(x; \rho) = 0. (2.9)

    The nonlinear system (2.9) is continuous but not differentiable at some points x \in E, for the following reasons:

    ● It may be non-differentiable when y^{(j)} = 0. To overcome this problem, we restrict x \in int(E).

    ● It may be non-differentiable when x^{(j)} has an infinite upper bound and a finite lower bound, and

    (\nabla \tilde{\phi}(x; \rho))^{(j)} = 0.

    To overcome this problem, we define a vector \Psi(x) whose components

    \Psi^{(j)}(x) = \frac{\partial ((y^{(j)})^2)}{\partial x^{(j)}}, \;\;\; j = 1, \ldots, n,

    are defined as follows

    \Psi^{(j)}(x) = \begin{cases} 1, & \text{if } (\nabla \tilde{\phi}(x; \rho))^{(j)} \geq 0 \text{ and } \hat{u}^{(j)} > -\infty, \\ -1, & \text{if } (\nabla \tilde{\phi}(x; \rho))^{(j)} < 0 \text{ and } \hat{v}^{(j)} < +\infty, \\ 0, & \text{otherwise.} \end{cases} (2.10)
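    In code, the diagonals of Y(x) in (2.8) and \Psi(x) in (2.10) can be formed together from the gradient of \tilde{\phi} and the bounds, as in the following MATLAB sketch (our illustration; g, u, and v are assumed to hold \nabla\tilde{\phi}(x; \rho), \hat{u}, and \hat{v}, with x strictly interior):

    % Diagonals of Y(x) in (2.8) and Psi(x) in (2.10) (illustrative sketch).
    function [y, psi] = scaling_Y_Psi(g, x, u, v)
        n = numel(x);
        y = ones(n, 1);  psi = zeros(n, 1);          % the "otherwise" branches
        for j = 1:n
            if g(j) >= 0 && u(j) > -Inf
                y(j) = sqrt(x(j) - u(j));  psi(j) = 1;   % d(y^2)/dx with y^2 = x - u
            elseif g(j) < 0 && v(j) < Inf
                y(j) = sqrt(v(j) - x(j));  psi(j) = -1;  % d(y^2)/dx with y^2 = v - x
            end
        end
    end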

    When we apply Newton's method to the system (2.9), we get

    [Y^2(x) \nabla^2 \tilde{\phi}(x; \rho) + diag(\nabla \tilde{\phi}(x; \rho)) diag(\Psi(x))] \Delta x = -Y^2(x) \nabla \tilde{\phi}(x; \rho), (2.11)

    where

    \nabla^2 \tilde{\phi}(x; \rho) = H + \rho \nabla P(x) Z(x) \nabla P(x)^T, (2.12)

    and H is the Hessian of the objective function f(x) or an approximation to it.

    Assuming that x \in int(E), the matrix Y(x) is nonsingular. Multiplying both sides of Eq (2.11) by Y^{-1}(x) and setting

    \Delta x = Y(x) d,

    we have

    [Y(x) \nabla^2 \tilde{\phi}(x; \rho) Y(x) + diag(\nabla \tilde{\phi}(x; \rho)) diag(\Psi(x))] d = -Y(x) \nabla \tilde{\phi}(x; \rho). (2.13)

    Notice that the step d_k generated by system (2.13) is equivalent to the step generated by solving the following quadratic programming subproblem

    minimize \;\; \tilde{\phi}(x; \rho) + (Y \nabla \tilde{\phi}(x; \rho))^T d + \frac{1}{2} d^T B d, (2.14)

    where

    B = Y(x) \nabla^2 \tilde{\phi}(x; \rho) Y(x) + diag(\nabla \tilde{\phi}(x; \rho)) diag(\Psi(x)). (2.15)

    Newton's approach has the advantage of being quadratically convergent under reasonable assumptions, but it has the drawback of requiring the initial point to be close to the solution. The nonmonotone trust-region globalization approach ensures convergence from any starting point. It is a crucial method for solving smooth, nonlinear, unconstrained or constrained optimization problems and can produce strong global convergence.

    For the purpose of solving problem (2.14), we introduce a detailed description of the nonmonotone trust-region algorithm.

    The trust-region subproblem associated with problem (2.14) is

    \begin{array}{cl} minimize & q_k(Y_k d) = \tilde{\phi}(x_k; \rho_k) + (Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T d + \frac{1}{2} d^T B_k d, \\ subject \; to & \|d\| \leq \Delta_k, \end{array} (2.16)

    where \Delta_k > 0 is the radius of the trust region.

    A dogleg method, which is computationally very cheap, is used to solve subproblem (2.16) and obtain the step d_k; for more details, see [23].

    To compute the step d_k

    In a dogleg method, the solution curve of the subproblem (2.16) is approximated by a piecewise linear function connecting the Newton point to the Cauchy point. The following algorithm explains the key phases of the dogleg approach to solve subproblem (2.16) and obtain d_k; a MATLAB sketch is given after the algorithm.

    Algorithm 2.1. Dogleg algorithm.

    Step 1. Compute the parameter t^{cp}_k as follows:

    t^{cp}_k = \begin{cases} \frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|^2}{(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T B_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k))}, & \text{if } \frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|^3}{(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T B_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k))} \leq \Delta_k \text{ and } (Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T B_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k)) > 0, \\ \frac{\Delta_k}{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|}, & \text{otherwise.} \end{cases} (2.17)

    Step 2. Compute the Cauchy step

    d^{cp}_k = -t^{cp}_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k)).

    Step 3. If

    \|d^{cp}_k\| = \Delta_k,

    then set d_k = d^{cp}_k.

    Else, if Y_k \nabla \tilde{\phi}(x_k; \rho_k) + B_k d^{cp}_k = 0, then set d_k = d^{cp}_k.

    Else, compute Newton's step d^N_k by solving the following subproblem:

    \min\limits_{d^N} \; (Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T d^N + \frac{1}{2} d^{N^T} B_k d^N.

    End if.

    Step 4. If \|d^N_k\| \leq \Delta_k, then set d_k = d^N_k.

    Else, compute the dogleg step between d^{cp}_k and d^N_k, taking d_k as follows

    d_k = d^{cp}_k + t(d^N_k - d^{cp}_k),

    where t \in (0, 1) is chosen such that \|d_k\| = \Delta_k.

    End if.
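    The following MATLAB sketch of Algorithm 2.1 (our illustration) makes the steps concrete; g stands for Y_k \nabla \tilde{\phi}(x_k; \rho_k) and B for B_k, and B is assumed nonsingular when the Newton step is formed:

    % Dogleg solver for subproblem (2.16) (a sketch of Algorithm 2.1).
    function d = dogleg_step(g, B, Delta)
        gBg = g' * B * g;
        if gBg > 0 && norm(g)^3/gBg <= Delta
            t = norm(g)^2 / gBg;            % first case of (2.17)
        else
            t = Delta / norm(g);            % Cauchy point on the boundary
        end
        dcp = -t * g;                       % Cauchy step (Step 2)
        if norm(dcp) >= Delta - 1e-12 || norm(g + B*dcp) == 0
            d = dcp;  return;               % Step 3
        end
        dN = -(B \ g);                      % Newton step (minimizer of the model)
        if norm(dN) <= Delta
            d = dN;  return;                % Step 4
        end
        p = dN - dcp;                       % walk from dcp toward dN to the boundary:
        a = p'*p;  b = 2*(dcp'*p);  c = dcp'*dcp - Delta^2;
        t = (-b + sqrt(b^2 - 4*a*c)) / (2*a);   % root of ||dcp + t*p|| = Delta in (0,1]
        d = dcp + t*p;
    end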

    By using the dogleg Algorithm 2.1, the step d_k satisfies the following condition, which is called a fraction-of-Cauchy-decrease condition:

    q_k(0) - q_k(Y_k d_k) \geq \varphi [q_k(0) - q_k(Y_k d^{cp}_k)] (2.18)

    for some \varphi \in (0, 1].

    When a condition is referred to as a fraction-of-Cauchy decrease, it indicates that the predicted reduction obtained by the step d_k is greater than or equal to a fraction of the predicted reduction obtained by the Cauchy step d^{cp}_k.

    To ensure that the matrix Y_k is nonsingular, we need to guarantee that x_{k+1} \in int(E). To do this, we need a damping parameter \tau_k.

    To obtain the damping parameter \tau_k

    The main steps to obtain the damping parameter \tau_k are clarified in the following algorithm (a MATLAB sketch is given after it):

    Algorithm 2.2. Damping parameter \tau_k.

    Step 1. Compute the parameter \alpha_k as follows:

    \alpha^{(i)}_k = \begin{cases} \frac{\hat{u}^{(i)} - x^{(i)}_k}{Y^{(i)}_k d^{(i)}_k}, & \text{if } \hat{u}^{(i)} > -\infty \text{ and } Y^{(i)}_k d^{(i)}_k < 0, \\ 1, & \text{otherwise.} \end{cases}

    Step 2. Compute the parameter \beta_k as follows:

    \beta^{(i)}_k = \begin{cases} \frac{\hat{v}^{(i)} - x^{(i)}_k}{Y^{(i)}_k d^{(i)}_k}, & \text{if } \hat{v}^{(i)} < +\infty \text{ and } Y^{(i)}_k d^{(i)}_k > 0, \\ 1, & \text{otherwise.} \end{cases}

    Step 3. Compute the damping parameter \tau_k as follows

    \tau_k = \min\{1, \min\limits_i \{\alpha^{(i)}_k, \beta^{(i)}_k\}\}. (2.19)

    Step 4. Set

    x_{k+1} = x_k + \tau_k Y_k d_k.
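    A compact MATLAB sketch of Algorithm 2.2 (our illustration; s holds the unscaled move Y_k d_k componentwise, and u, v hold the bounds):

    % Damping parameter tau_k of (2.19) (a sketch of Algorithm 2.2).
    function tau = damping_tau(x, s, u, v)
        alpha = ones(size(x));  beta = ones(size(x));
        lo = (u > -Inf) & (s < 0);        % components moving toward a finite lower bound
        hi = (v <  Inf) & (s > 0);        % components moving toward a finite upper bound
        alpha(lo) = (u(lo) - x(lo)) ./ s(lo);
        beta(hi)  = (v(hi) - x(hi)) ./ s(hi);
        tau = min(1, min(min(alpha), min(beta)));
    end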

    To test the scaled step \tau_k Y_k d_k and decide whether it will be accepted, a merit function is required. The merit function ties the objective function and the constraints together in such a way that progress in the merit function means progress in solving the problem. The merit function used in our algorithm is the penalty function \tilde{\phi}(x_k; \rho_k).

    Let the actual reduction Ared_k be defined as follows

    Ared_k = \tilde{\phi}(x_k; \rho_k) - \tilde{\phi}(x_k + \tau_k Y_k d_k; \rho_k).

    Additionally, Aredk can be expressed as follows

    Ared_k = f(x_k) - f(x_k + \tau_k Y_k d_k) + \frac{\rho_k}{2} [\|Z_k P_k\|^2 - \|Z_{k+1} P_{k+1}\|^2]. (2.20)

    Let the predicted reduction Pred_k be defined as follows

    Pred_k = q_k(0) - q_k(\tau_k Y_k d_k) = -(Y_k \nabla f_k)^T \tau_k d_k - \frac{1}{2} \tau^2_k d^T_k G_k d_k + \frac{\rho_k}{2} [\|Z_k P_k\|^2 - \|Z_k (P_k + \nabla P^T_k Y_k \tau_k d_k)\|^2], (2.21)

    where

    G_k = Y_k P_k Y_k + diag(\nabla \tilde{\phi}(x_k; \rho_k)) diag(\Psi_k).

    To test \tau_k Y_k d_k and update \Delta_k

    Motivated by the nonmonotone trust-region strategy in [18], we define

    \hat{r}_k = \frac{C_k - \tilde{\phi}(x_k + \tau_k Y_k d_k; \rho_k)}{q_k(0) - q_k(\tau_k Y_k d_k)}, (2.22)

    where

    C_k = \begin{cases} \tilde{\phi}(x_k; \rho_k), & \text{if } k = 0, \\ \frac{\eta_{k-1} Q_{k-1} C_{k-1} + \tilde{\phi}(x_k; \rho_k)}{Q_k}, & \text{if } k \geq 1, \end{cases} (2.23)

    and

    Q_k = \begin{cases} 1, & \text{if } k = 0, \\ \eta_{k-1} Q_{k-1} + 1, & \text{if } k \geq 1, \end{cases} (2.24)

    such that

    0 \leq \eta_{min} \leq \eta_{k-1} \leq \eta_{max} \leq 1.

    The following algorithm clarifies how the trial step is tested and how the trust-region radius \Delta_k is updated (a MATLAB sketch is given after the algorithm):

    Algorithm 2.3. Test \tau_k Y_k d_k and update \Delta_k.

    Step 0. Choose 0 < \theta_1 < \theta_2 \leq 1, \Delta_{max} > \Delta_{min}, and 0 < \tilde{\alpha}_1 < 1 < \tilde{\alpha}_2.

    Step 1. Compute Q_k using (2.24) and C_k using (2.23).

    Evaluate \hat{r}_k using (2.22).

    While \hat{r}_k < \theta_1 or Pred_k \leq 0:

    Set \Delta_k = \tilde{\alpha}_1 \|d_k\|.

    Return to Algorithm 2.1 to evaluate a new step d_k.

    Step 2. If \theta_1 \leq \hat{r}_k < \theta_2, then set x_{k+1} = x_k + \tau_k Y_k d_k.

    Set \Delta_{k+1} = \max(\Delta_{min}, \Delta_k).

    End if.

    Step 3. If \hat{r}_k \geq \theta_2, then set x_{k+1} = x_k + \tau_k Y_k d_k.

    Set \Delta_{k+1} = \min\{\max\{\Delta_{min}, \tilde{\alpha}_2 \Delta_k\}, \Delta_{max}\}.

    End if.
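    In MATLAB-like form, the acceptance test and radius update of Algorithm 2.3 read as follows (a sketch; rhat, Pred, tau, Y, d, and the constants theta1, theta2, alpha1, alpha2, Delta_min, Delta_max mirror \hat{r}_k, Pred_k, \tau_k, Y_k, d_k, \theta_1, \theta_2, \tilde{\alpha}_1, \tilde{\alpha}_2, \Delta_{min}, \Delta_{max} and are assumed to be in the workspace):

    % Test the scaled step and update Delta (a sketch of Algorithm 2.3).
    if rhat < theta1 || Pred <= 0
        Delta = alpha1 * norm(d);          % shrink; return to Algorithm 2.1 for a new d
    elseif rhat < theta2
        x = x + tau * (Y * d);             % successful step: keep the radius
        Delta = max(Delta_min, Delta);
    else
        x = x + tau * (Y * d);             % very successful step: enlarge the radius
        Delta = min(max(Delta_min, alpha2 * Delta), Delta_max);
    end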

    To update the parameter \rho_k

    To update \rho_k, a scheme proposed in [24] is used; we clarify this in the algorithm that follows (a MATLAB sketch is given after it):

    Algorithm 2.4. Updating \rho_k.

    Step 1. Set \rho_0 = 1 and use Eq (2.21) to evaluate Pred_k.

    Step 2. If

    Pred_k \geq \|Y_k \nabla P_k Z_k P_k\| \min\{\|Y_k \nabla P_k Z_k P_k\|, \Delta_k\}, (2.25)

    then set \rho_{k+1} = \rho_k.

    Else, set \rho_{k+1} = 2\rho_k.

    End if.
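    Equivalently, in MATLAB form (a sketch; w plays the role of Y_k \nabla P_k Z_k P_k, with JP = \nabla P_k and z, Px as in the earlier sketches):

    % Penalty parameter update (a sketch of Algorithm 2.4).
    w = Y * (JP * (z .* Px));             % Y_k * grad(P_k) * Z_k * P_k
    if Pred >= norm(w) * min(norm(w), Delta)
        % (2.25) holds: keep rho unchanged
    else
        rho = 2 * rho;                    % otherwise double the penalty parameter
    end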

    Finally, the nonmonotone trust-region algorithm is terminated if

    \|Y_k \nabla f_k\| + \|Y_k \nabla P_k Z_k P_k\| \leq \epsilon_1

    or

    \|d_k\| \leq \epsilon_2

    for some \epsilon_1 > 0 and \epsilon_2 > 0.

    Nonmonotone trust-region algorithm

    We will clarify the main steps of the nonmonotone trust-region algorithm to solve subproblem (2.16) in the algorithm that follows:

    Algorithm 2.5. The nonmonotone trust-region algorithm.

    Step 0. Initial value

          Start with x_0 \in int(E). Compute matrices Z_0, Y_0, and \Psi_0. Set \rho_0 = 1.

          Choose \epsilon_1, \epsilon_2, \tilde{\alpha}_1, \tilde{\alpha}_2, \theta_1, \theta_2 such that \epsilon_1 > 0, \epsilon_2 > 0, \tilde{\alpha}_2 > 1 > \tilde{\alpha}_1 > 0, and 0 < \theta_1 < \theta_2 \leq 1.

          Choose \Delta_0, \Delta_{min}, and \Delta_{max} such that \Delta_{min} \leq \Delta_0 \leq \Delta_{max}.

          Set k = 0.

    Step 1. If \|Y_k \nabla f_k\| + \|Y_k \nabla P_k Z_k P_k\| \leq \epsilon_1, then stop.

    Step 2. Evaluate the trial step d_k using Algorithm 2.1.

    Step 3. Stop and end the algorithm if \|d_k\| \leq \epsilon_2.

    Step 4. Compute both \tau_k and Y_k using Algorithm 2.2 and Eq (2.8), respectively.

    Set x_{k+1} = x_k + \tau_k Y_k d_k.

    Step 5. Utilize (2.1) to evaluate Z_{k+1}.

    Step 6. To test the scaled step and update \Delta_k:

          i) Compute Q_k using (2.24) and C_k using (2.23).

          ii) Compute \hat{r}_k using (2.22).

          iii) Use Algorithm 2.3 to test the scaled step \tau_k Y_k d_k and update the trust-region radius \Delta_k.

    Step 7. Update the parameter \rho_k using Algorithm 2.4.

    Step 8. Utilize (2.10) to evaluate \Psi_{k+1}.

    Step 9. Set k = k + 1 and return to Step 1.

    The main steps for the NTRAI algorithm to solve problem (1.1) will be clarified in the following algorithm:

    Algorithm 2.6. NTRAI algorithm.

    Step 1. Utilize the active-set strategy and the penalty method to convert the nonlinearly constrained optimization problem (1.1) to the unconstrained optimization problem with bounded variables (2.2).

    Step 2. Utilize an interior-point method with the diagonal scaling matrix Y(x) given in (2.8) to replace the first-order necessary conditions (2.5)–(2.7) with the equivalent nonlinear system (2.9).

    Step 3. Utilize Newton's method to solve the nonlinear system (2.9) and obtain the equivalent subproblem (2.14).

    Step 4. Solve subproblem (2.14) using the nonmonotone trust-region Algorithm 2.5.

    The global convergence analysis for NTRAI Algorithm 2.6 is conducted in the following section.

    In this section, a global convergence analysis for the NTRAI Algorithm 2.6 to solve problem (1.1) will be presented. First, we will introduce the assumptions required to prove the global convergence theory for the NTRAI Algorithm 2.6. Second, we will introduce some lemmas that are required to prove the main results. Third, we will study the convergence of the iteration sequence when \rho_k is unbounded and when it is bounded, respectively. Finally, the main global convergence results for the NTRAI Algorithm 2.6 will be proved.

    Let \{x_k\} be the sequence of points generated by the NTRAI Algorithm 2.6 and let \Omega be a convex subset of \Re^n that contains all iterates x_k \in int(E) and x_k + \tau_k Y_k d_k \in int(E), for all trial steps d_k. On the set \Omega, we make the following assumptions, under which the global convergence theory will be proved.

    Assumptions:

    [As_1] For all x \in \Omega, the functions f(x) and P(x) are at least twice continuously differentiable.

    [As_2] All of f(x), \nabla f(x), \nabla^2 f(x), P(x), and \nabla P(x) are uniformly bounded in \Omega.

    [As_3] The sequence of Hessian matrices \{B_k\} is bounded.

    We shall now introduce some lemmas that are necessary to support the main global convergence results.

    Lemma 3.1. Under assumptions As_1–As_3, there exists a constant K_1 > 0 such that

    Pred_k \geq K_1 \tau_k \|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\| \min\left\{\Delta_k, \frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|}{\|B_k\|}\right\}. (3.1)

    Proof. Since the fraction-of-Cauchy-decrease condition (2.18) is satisfied by the trial step d_k, we consider the following two cases:

    ⅰ) If

    d^{cp}_k = -\frac{\Delta_k}{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|}(Y_k \nabla \tilde{\phi}(x_k; \rho_k))

    and

    \|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|^3 \geq \Delta_k [(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T B_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k))],

    then

    \begin{align} q_k(0) - q_k(Y_k d^{cp}_k) & = -(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T d^{cp}_k - \frac{1}{2} d^{cp^T}_k B_k d^{cp}_k \\ & = \Delta_k \|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\| - \frac{\Delta^2_k}{2\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|^2}\,(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T B_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k)) \\ & \geq \frac{1}{2} \Delta_k \|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|. \end{align} (3.2)

    ⅱ) If

    d^{cp}_k = -\frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|^2}{(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T B_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k))}(Y_k \nabla \tilde{\phi}(x_k; \rho_k))

    and

    \|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|^3 \leq \Delta_k [(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T B_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k))],

    then we have

    \begin{align} q_k(0) - q_k(Y_k d^{cp}_k) & = -(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T d^{cp}_k - \frac{1}{2} d^{cp^T}_k B_k d^{cp}_k \\ & = \frac{1}{2} \frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|^4}{(Y_k \nabla \tilde{\phi}(x_k; \rho_k))^T B_k (Y_k \nabla \tilde{\phi}(x_k; \rho_k))} \geq \frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|^2}{2\|B_k\|}. \end{align} (3.3)

    From inequalities (2.18), (3.2), and (3.3), we have

    q_k(0) - q_k(Y_k d_k) \geq K_1 \|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\| \min\left\{\Delta_k, \frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|}{\|B_k\|}\right\}. (3.4)

    From inequality (3.4) and the fact that

    q_k(0) - q_k(Y_k \tau_k d_k) \geq \tau_k [q_k(0) - q_k(Y_k d_k)],

    where 0 \leq \tau_k \leq 1, we have

    q_k(0) - q_k(Y_k \tau_k d_k) \geq K_1 \tau_k \|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\| \min\left\{\Delta_k, \frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|}{\|B_k\|}\right\}.

    From (2.21), we have

    Pred_k = q_k(0) - q_k(Y_k \tau_k d_k).

    Hence

    Pred_k \geq K_1 \tau_k \|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\| \min\left\{\Delta_k, \frac{\|Y_k \nabla \tilde{\phi}(x_k; \rho_k)\|}{\|B_k\|}\right\}.

    This completes the proof.

    Lemma 3.2. Under assumptions As_1 and As_3, Z(x)P(x) is Lipschitz continuous in \Omega.

    Proof. The proof of this lemma is similar to that of [21, Lemma 4.1].

    From Lemma 3.2, we can verify that \nabla P(x) Z(x) P(x) is Lipschitz continuous in \Omega and that \|Z(x)P(x)\|^2 is differentiable.

    Lemma 3.3. At any iteration k, we have

    Z_{k+1} = Z_k + A_k, (3.5)

    where A(x_k) \in \Re^{m \times m} is a diagonal matrix whose diagonal entries are defined as follows

    (a_k)_i = \begin{cases} 1, & \text{if } (P_k)_i < 0 \text{ and } (P_{k+1})_i \geq 0, \\ -1, & \text{if } (P_k)_i \geq 0 \text{ and } (P_{k+1})_i < 0, \\ 0, & \text{otherwise,} \end{cases} (3.6)

    where i = 1, 2, \ldots, m.

    Proof. See [6, Lemma 6.2].

    Lemma 3.4. Under assumptions As_1–As_3, there exists a constant K_2 > 0 such that

    \|A_k P_k\| \leq K_2 \|d_k\|. (3.7)

    Proof. See [6, Lemma 6.3].

    Lemma 3.5. Under assumptions As_1–As_3, there exists a constant K_3 > 0 such that

    \mid Ared_k - Pred_k \mid \leq K_3 \tau_k \rho_k \|d_k\|^2. (3.8)

    Proof. From Eqs (2.20) and (3.5), we have

    Ared_k = f(x_k) - f(x_k + \tau_k Y_k d_k) + \frac{\rho_k}{2}[P^T_k Z_k P_k - P(x_k + \tau_k Y_k d_k)^T (Z_k + A_k) P(x_k + \tau_k Y_k d_k)].

    Subtracting the above equation from (2.21) and using the Cauchy-Schwarz inequality, we have

    \begin{align*} |Ared_k - Pred_k| \leq & \frac{\tau^2_k}{2} \mid d^T_k Y_k (\nabla^2 f(x_k) - \nabla^2 f(x_k + \xi_1 \tau_k Y_k d_k)) Y_k d_k \mid + \frac{\tau^2_k}{2} \mid d^T_k diag(\nabla \tilde{\phi}(x_k; \rho_k)) diag(\Psi_k) d_k \mid \\ & + \rho_k \tau_k \|Y_k (\nabla P_k - \nabla P(x_k + \xi_2 \tau_k Y_k d_k)) Z_k P_k\| \|d_k\| \\ & + \frac{\rho_k \tau^2_k}{2} \mid d^T_k Y_k [\nabla P_k Z_k \nabla P^T_k - \nabla P(x_k + \xi_2 Y_k \tau_k d_k) Z_k \nabla P(x_k + \xi_2 Y_k \tau_k d_k)^T] Y_k d_k \mid \\ & + \frac{\rho_k \tau^2_k}{2} \|A_k P_k\|^2 + \rho_k \tau_k \|Y_k \nabla P(x_k + \xi_2 Y_k \tau_k d_k) A_k P_k\| \|d_k\| \\ & + \frac{\rho_k \tau^2_k}{2} \mid d^T_k Y_k [\nabla P(x_k + \xi_2 Y_k \tau_k d_k) A_k \nabla P(x_k + \xi_2 Y_k \tau_k d_k)^T] Y_k d_k \mid \end{align*}

    for some \xi_1 and \xi_2 \in (0, 1). From assumptions As_1–As_3 and using Lemma 3.4, the proof is completed.

    The following analysis is devoted to the global convergence of NTRAI Algorithm 2.6 when \rho_k is unbounded.

    Observe that we do not assume that \nabla P(x) has full column rank for all x \in \Omega in assumptions As_1–As_3; therefore, we may have alternative types of stationary points. The definitions that follow describe these stationary points.

    Definition 3.1. (Fritz John (FJ) point.) A point x_* \in \Re^n is said to be an FJ point if there exist \omega_* \in \Re and a Lagrange multiplier vector \sigma_* \in \Re^m, not all zero, such that the following conditions are satisfied

    \begin{align} \omega_*Y_*\nabla f(x_*)+ Y_*\nabla P(x_*) \sigma_* & = 0, \end{align} (3.9)
    \begin{align} Z(x_*) P(x_*) & = 0, \end{align} (3.10)
    \begin{align} (\sigma_*)_i P_i(x_*)& = 0, \;\;\; i = 1, 2, \cdots, m \end{align} (3.11)
    \begin{align} \omega_*, \; (\sigma_*)_i & \geq 0, \;\;\; i = 1, 2, \cdots, m. \end{align} (3.12)

    The conditions (3.9)–(3.12) are referred to as FJ conditions. See [25] for further information.

    If \omega_* \neq 0, the FJ conditions are referred to as the Karush-Kuhn-Tucker (KKT) conditions, and the point (x_*, 1, \frac{\sigma_*}{\omega_*}) is referred to as a KKT point.

    Definition 3.2. (Infeasible Fritz John (IFJ) point.) A point x_* \in \Re^n is said to be an IFJ point if there exist \omega_* \in \Re and a Lagrange multiplier vector \sigma_* \in \Re^m, not all zero, such that the following conditions are satisfied

    \begin{align} \omega_*Y_*\nabla f(x_*)+ Y_*\nabla P(x_*) \sigma_* & = 0, \end{align} (3.13)
    \begin{align} Y(x_*)\nabla P(x_*) Z(x_*) P(x_*) & = 0, \;\;\;\; but \;\; \| Z(x_*) P(x_*) \| > 0, \end{align} (3.14)
    \begin{align} (\sigma_*)_i P_i(x_*)& = 0, \ \;\;\; i = 1, 2, \cdots, m, \end{align} (3.15)
    \begin{align} \omega_*, \; (\sigma_*)_i & \geq 0, \ \;\;\; i = 1, 2, \cdots, m. \end{align} (3.16)

    The conditions (3.13)–(3.16) are referred to as IFJ conditions. See [25] for further information.

    If \omega_* \neq 0, the IFJ conditions are referred to as the infeasible KKT conditions, and the point (x_*, 1, \frac{\sigma_*}{\omega_*}) is referred to as an infeasible KKT point.

    The next two lemmas provide conditions that are equivalent to those stated in Definitions 3.1 and 3.2.

    Lemma 3.6. Under assumptions As_1–As_3, there exists a subsequence \{x_{k_i}\} \subseteq \{x_k\}_{k\geq0} that satisfies the IFJ conditions in the limit if:

    1) \lim_{{k_i} \rightarrow \infty}\| Z_{k_i} P_{k_i} \| > 0.

    2) \lim_{{k_i} \rightarrow \infty}\left \{ \min_d \left \{\| Z_{k_i}(P_{k_i}+\nabla P_{k_i}^T Y_{k_i}\tau_{k_i}d)\|^2\right \} \right \} = \lim_{{k_i} \rightarrow \infty}\| Z_{k_i} P_{k_i}\|^2 .

    Proof. The proof of this lemma is similar to the proof of [2, Lemma 3.1].

    Lemma 3.7. Under assumptions As_1–As_3, there exists a subsequence \{x_{k_i}\} \subseteq \{x_k\}_{k\geq0} that satisfies the FJ conditions in the limit if:

    1) For all k_i , \| Z_{k_i} P_{k_i} \| > 0 and \lim_{k_i \rightarrow \infty } Z_{k_i} P_{k_i} = 0.

    2) \lim_{{k_i} \rightarrow \infty}\left \{ \min_{d}\left \{ \frac{\| Z_{k_i}(P_{k_i}+\nabla P_{k_i}^T Y_{k_i}\tau_{k_i} d)\|^2}{\| Z_{k_i} P_{k_i}\|^2} \right \} \right \} = 1 .

    Proof. The proof of this lemma is similar to the proof of [26, Lemma 3.2].

    According to Algorithm 2.4, the sequence of parameters \{\rho_k\} is unbounded only when there is an infinite subsequence of indices k_i, indexing iterates with acceptable steps, that fulfills, for every k \in \{k_i\},

    \begin{equation} Pred_k < \|Y_k \nabla P_k Z_k P_k \| \min \{\| Y_k\nabla P_k Z_k P_k \|, \Delta_k\}. \end{equation} (3.17)

    A subsequence of iterates \{ x_k\} satisfies the FJ conditions or IFJ conditions if \rho_k \rightarrow \infty as k \rightarrow \infty . This is demonstrated by the next two lemmas.

    Lemma 3.8. Under assumptions As_1–As_3, let \rho_k \rightarrow \infty as k \rightarrow \infty. If \|Z_k P_k\| \geq \varepsilon > 0 for all k \in \{k_i\}, then a subsequence of the iteration sequence indexed by k_i exists and fulfills the IFJ conditions in the limit.

    Proof. For simplification, we write k for k_i. The proof of this lemma is by contradiction, so assume that there is no subsequence of iterates that satisfies the IFJ conditions in the limit. From Lemma 3.6 and Definition 3.2, we have

    \mid\| Z_k P_k\|^2 - \| Z_k(P_k+\nabla P_k^T Y_k\tau_k d)\|^2 \mid \geq \varepsilon_1

    and

    \| Y_k\nabla P_k Z_k P_k \| \geq \varepsilon_2 ,

    respectively. Hence, for all sufficiently large k,

    \begin{align*} \|Y_k\nabla \tilde{\phi}(x_k; \rho_k)\|& = \|Y_k(\nabla f_k+ \rho_k \nabla P_k Z_k P_k ) \|\\&\geq \rho_k\|Y_k \nabla P_k Z_k P_k \|-\| Y_k \nabla f_k\|\\ &\geq \rho_k \varepsilon_2-\| Y_k\nabla f_k\|\geq \frac{\rho_k \varepsilon_2}{2}, \end{align*}

    since \|Y_k \nabla f_k\| is uniformly bounded by assumption As_2 and \rho_k \rightarrow \infty.

    From (2.15) and (2.12), we have

    \begin{align} \| B_k\|& = \|Y_k P_k Y_k +\rho_k Y_k\nabla P_k Z_k \nabla P_k^T Y_k+diag(\nabla \tilde{\phi} (x_k; \rho_k))diag(\Psi(x_k))\| \\ &\leq \rho_k \left(\| Y_k\nabla P_k Z_k \nabla P_k^T Y_k \|+ \|diag(\tfrac{1}{\rho_k}\nabla f_k+ \nabla P_k Z_k P_k)diag(\Psi(x_k))\|\right)+\| Y_k P_k Y_k\|. \end{align} (3.18)

    From inequalities (3.1) and (3.18), we have

    \begin{equation} Pred_k \geq K_1 \rho_k \tau_k \varepsilon_2 \min \left\{\Delta_k, \frac{\varepsilon_2}{\| Y_k\nabla P_k Z_k \nabla P_k^T Y_k \|+\|diag( \nabla P_k Z_k P_k)diag(\Psi(x_k))\|} \right\} \end{equation} (3.19)

    for k sufficiently large. There is an infinite number of acceptable iterates at which inequality (3.17) holds since \rho_k \rightarrow \infty . From inequalities (3.17) and (3.19), we have

    \begin{align*} \| Y_k\nabla P_k Z_k P_k \| \min \{\| Y_k\nabla P_k Z_k P_k \|, \Delta_k\}\geq K_1 \rho_k \tau_k \varepsilon_2 \min \left\{\Delta_k, \frac{\varepsilon_2}{\| Y_k\nabla P_k Z_k \nabla P_k^T Y_k \|+\|diag( \nabla P_k Z_k P_k)diag(\Psi(x_k))\|} \right\}. \end{align*}

    According to assumption As_2 and since \lim\limits_{k \rightarrow \infty}\tau_k = 1, the right-hand side of the preceding inequality tends toward infinity as k \rightarrow \infty, while the left-hand side is bounded. This gives a contradiction unless \rho_k\Delta_k is bounded, that is, unless \Delta_k \rightarrow 0.

    Now, if \rho_k \rightarrow \infty as k \rightarrow \infty, we consider two cases:

    First, if

    \| Z_k P_k\|^2 - \| Z_k(P_k+\nabla P_k^T \tau_k Y_k d_k)\|^2 \geq \varepsilon_1 ,

    then

    \rho_k \{\| Z_k P_k\|^2 - \| Z_k(P_k+\nabla P_k^T \tau_k Y_k d_k)\|^2 \}\geq \rho_k \varepsilon_1 \rightarrow \infty.

    Hence, Pred_k\rightarrow \infty using assumptions As_2 and As_3. In other words, the left-hand side of inequality (3.17) goes to infinity as k \rightarrow \infty, but the right-hand side goes to zero since \Delta_k \rightarrow 0. Then we get a contradiction with the assumption in this case.

    Second, if

    \| Z_k P_k\|^2 - \| Z_k(P_k+\nabla P_k^T \tau_k Y_k d_k)\|^2 \leq -\varepsilon_1 ,

    then we have

    \rho_k \{\| Z_k P_k\|^2 - \| Z_k(P_k+\nabla P_k^T \tau_k Y_k d_k)\|^2 \}\leq -\rho_k \varepsilon_1 \rightarrow -\infty.

    Similar to the first case, Pred_k\rightarrow -\infty , but Pred_k > 0 and this gives a contradiction. These two contradictions prove the lemma.

    The following lemma shows the behavior of NTRAI Algorithm 2.6 when

    \liminf\limits_{k \rightarrow \infty}\| Z_k P_k \| = 0

    and \rho_k \rightarrow \infty as k \rightarrow \infty.

    Lemma 3.9. Under assumptions As_1–As_3, if \rho_k \rightarrow \infty as k \rightarrow \infty, then there exists a subsequence \{k_i\} of iterates that satisfies the FJ conditions in the limit if \| Z_k P_k \| > 0 for all k\in \{k_i\} and

    \lim\limits_{{k_i} \rightarrow \infty}\| Z_{k_i} P_{k_i} \| = 0.

    Proof. For simplification, we write k for k_i. The proof of this lemma relies on contradiction, so assume that there is no subsequence of iterations that fulfills the FJ conditions in the limit. From Lemma 3.7, we have

    \begin{equation} \frac{\mid\| Z_k P_k\|^2 - \| Z_k(P_k+\nabla P_k^T \tau_k Y_k d_k)\|^2 \mid}{\| Z_k P_k\|^2} \geq \varepsilon > 0 \end{equation} (3.20)

    for all sufficiently large k.

    Now, assuming \rho_k \rightarrow \infty as k \rightarrow \infty, we consider three cases:

    First, if

    \liminf\limits_{k \rightarrow \infty}\frac{\|d_k\|}{\| Z_k P_k \|} = 0,

    then there is a contradiction with inequality (3.20).

    Second, if

    \limsup\limits_{k \rightarrow \infty}\frac{\|d_k\|}{\| Z_k P_k \|} = \infty.

    From subproblem (2.16), we have

    \begin{equation} Y_k\nabla f_k + \rho_k Y_k\nabla P_k Z_k P_k = -(B_k + \upsilon_k I)d_k, \end{equation} (3.21)

    where \upsilon_k > 0 is the Lagrange multiplier of the trust-region constraint. Hence, we can write inequality (3.1) as follows

    Pred_k \geq K_1\tau_k\parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel \min \{\Delta_k, \frac{\parallel (B_k + \upsilon_k I)d_k \parallel}{ \parallel B_k \parallel} \}.

    From (2.15) and the above inequality, we have

    \begin{eqnarray} Pred_k \geq K_1\tau_k\parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel \min \{\Delta_k, \frac{\parallel ( Y_k\nabla P_k Z_k \nabla P_k^TY_k+ \hat{G}_k+\frac{\upsilon_k}{\rho_k} I)d_k \parallel}{ \parallel Y_k\nabla P_k Z_k \nabla P_k^TY_k+ \hat{G}_k \parallel} \}, \\ \end{eqnarray} (3.22)

    where

    \hat{G}_k = \frac{1}{\rho_k} Y_k P_k Y_k+diag( \frac{1}{\rho_k}\nabla f_k+\nabla P_k Z_k P_k)diag(\Psi(x_k)).

    As a result, there are an infinite number of acceptable steps at which inequality (3.17) holds. From inequality (3.17), we have

    \begin{equation} Pred_k < \|Y_k \nabla P_k\|^2 \|Z_k P_k \|^2, \end{equation} (3.23)

    and using inequalities (3.22) and (3.23), we have

    \begin{align*} K_1\tau_k\parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel &\min \{\Delta_k, \frac{\parallel ( Y_k\nabla P_k Z_k \nabla P_k^TY_k+ \hat{G}_k+ \frac{\upsilon_k}{\rho_k} I)d_k \parallel}{ \parallel Y_k\nabla P_k Z_k \nabla P_k^TY_k+ \hat{G}_k \parallel} \}\\ & < \kappa^2 \|Z_k P_k \|^2, \end{align*}

    where

    \kappa = \sup\limits_{x\in \Omega}\|Y(x) \nabla P(x)\|.

    Dividing the above inequality by \| Z_k P_k\|, we obtain

    \begin{align} K_1\tau_k\parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel &\min \{\frac{\Delta_k}{\| Z_k P_k\|}, \frac{\parallel ( Y_k\nabla P_k Z_k \nabla P_k^TY_k+ \hat{G}_k+ \frac{\upsilon_k}{\rho_k} I)d_k \parallel}{ \parallel Y_k\nabla P_k Z_k \nabla P_k^TY_k+ \hat{G}_k\parallel \| Z_k P_k\|} \} \\ & < \kappa^2 \|Z_k P_k \|, \end{align} (3.24)

    As k \rightarrow \infty , the right-hand side of the previous inequality goes to zero. That is,

    \parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel \frac{\parallel (Y_k\nabla P_k Z_k \nabla P_k^TY_k+\hat{G}_k + \frac{\upsilon_k}{\rho_k} I)d_k \parallel}{ \parallel Y_k\nabla P_k Z_k \nabla P_k^TY_k+ \hat{G}_k \parallel \| Z_k P_k\|}

    is bounded along the subsequence \{ k_i\} where

    \lim\limits_{k_i \rightarrow \infty}\frac{\|d_{k_i}\|}{\| Z_{k_i} P_{k_i} \|} = \infty.

    That is, either \frac{d_{k_i}}{\| Z_{k_i} P_{k_i} \|} lies in the null space of

    Y_{k_i}\nabla P_{k_i} Z_{k_i} \nabla P_{k_i}^T Y_{k_i} + \frac{\upsilon_{k_i}}{\rho_{k_i}} I

    or

    \parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel \rightarrow 0.

    The first possibility only occurs when \frac{\upsilon_{k_i}}{\rho_{k_i}}\rightarrow 0 as k_i \rightarrow \infty and \frac{d_{k_i}}{\| Z_{k_i} P_{k_i} \|} lies in the null space of the matrix Y_{k_i}\nabla P_{k_i} Z_{k_i} \nabla P_{k_i}^T Y_{k_i}; this contradicts assumption (3.20) and implies that a subsequence of \{ k_i\} satisfies the FJ conditions in the limit.

    The second possibility implies

    \parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel \rightarrow 0

    as k_i \rightarrow \infty. Hence, \rho_{k_i} \|Y_{k_i}\nabla P_{k_i} Z_{k_i} P_{k_i}\| must be bounded, and we have Y_{k_i}\nabla f_{k_i} \rightarrow 0. This implies that a subsequence of \{k_i\} satisfies the FJ conditions in the limit.

    Third, if

    \limsup\limits_{k \rightarrow \infty}\frac{\|d_k\|}{\| Z_k P_k \|} < \infty

    and

    \liminf\limits_{k \rightarrow \infty}\frac{\|d_k\|}{\| Z_k P_k \|} > 0,

    then \| d_k\|\rightarrow 0. As in the second case, as k \rightarrow \infty, the right-hand side of (3.24) goes to zero. Hence

    \parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel \frac{\parallel ( Y_k\nabla P_k Z_k \nabla P_k^TY_k+ diag(\nabla P_k Z_k P_k)diag(\Psi(x))+ \frac{\upsilon_k}{\rho_k} I)d_k \parallel}{ \parallel Y_k\nabla P_k Z_k \nabla P_k^T Y_k+ diag(\nabla P_k Z_k P_k)diag(\Psi(x)) \parallel \| Z_k P_k\|}\rightarrow 0.

    This implies that either

    \parallel Y_k(\nabla f_k + \rho_k \nabla P_k Z_k P_k )\parallel \rightarrow 0

    or

    \frac{\parallel ( Y_k\nabla P_k Z_k \nabla P_k^TY_k+ diag(\nabla P_k Z_k P_k)diag(\Psi(x))+ \frac{\upsilon_k}{\rho_k} I)d_k \parallel}{ \parallel Y_k\nabla P_k Z_k \nabla P_k^T Y_k+ diag(\nabla P_k Z_k P_k)diag(\Psi(x)) \parallel \| Z_k P_k\|}\rightarrow 0.

    In a similar way to the above second case, we can prove that a subsequence of \{k_i\} satisfies the FJ conditions in the limit. The proof is now complete.

    We continue our analysis in this section on the assumption that the penalty parameter \rho_k is bounded. In other words, we proceed with our analysis assuming that there is an integer \bar{k} such that \rho_k = \bar{\rho} < \infty for all k\geq \bar{k} .

    Lemma 3.10. Assume that \{x_k\} is the sequence of iterates generated by the NTRAI algorithm; then we have

    \begin{equation} \tilde{\phi}(x_{k+1}; \bar{\rho})\leq C_{k+1}\leq C_k. \end{equation} (3.25)

    Proof. Let k be the index of a successful (accepted) iterate; then from Algorithm 2.3, we have

    \hat{r_k} = \frac{C_k-\tilde{\phi}(x_k + \tau_k Y_k d_k; \bar{\rho})}{Pred_k}\geq \theta_1.

    That is,

    C_k-\tilde{\phi}(x_k + \tau_k Y_k d_k; \bar{\rho})\geq \theta_1 Pred_k

    and by using inequality (3.1), we have

    \begin{equation} \tilde{\phi}(x_{k+1}; \bar{\rho})\leq C_k- K_1 \theta_1\tau_k \parallel Y_k\nabla \tilde{\phi}(x_k; \bar{\rho})\parallel \min \{\Delta_k, \frac{\parallel Y_k\nabla \tilde{\phi}(x_k; \bar{\rho})\parallel}{ \parallel B_k \parallel} \}. \end{equation} (3.26)

    From (2.23), (2.24) and using inequality (3.26), then we have

    \begin{align*} C_{k+1}& = \frac{\eta_k Q_k C_k+\tilde{\phi}(x_{k+1}; \bar{\rho})}{Q_{k+1}}\\ &\leq \frac{\eta_k Q_k C_k+ C_k- K_1 \theta_1\tau_k \parallel Y_k\nabla \tilde{\phi}(x_k; \bar{\rho})\parallel \min \{\Delta_k, \frac{\parallel Y_k\nabla \tilde{\phi}(x_k; \bar{\rho})\parallel}{ \parallel B_k \parallel} \}}{Q_{k+1}}\\ &\leq \frac{C_k(\eta_k Q_k+ 1)- K_1 \theta_1\tau_k \parallel Y_k\nabla \tilde{\phi}(x_k; \bar{\rho})\parallel \min \{\Delta_k, \frac{\parallel Y_k\nabla \tilde{\phi}(x_k; \bar{\rho})\parallel}{ \parallel B_k \parallel} \}}{Q_{k+1}}\\ &\leq C_k - \frac{ K_1 \theta_1\tau_k \parallel Y_k\nabla \tilde{\phi}(x_k; \bar{\rho})\parallel \min \{\Delta_k, \frac{\parallel Y_k\nabla \tilde{\phi}(x_k; \bar{\rho})\parallel}{ \parallel B_k \parallel} \}}{Q_{k+1}}. \end{align*}

    That is,

    \begin{equation} C_{k+1}\leq C_k. \end{equation} (3.27)

    From (2.23), we have

    C_{k+1} = \frac{\eta_k Q_k C_k+\tilde{\phi}(x_{k+1}; \bar{\rho})}{Q_{k+1}}.

    Using (2.24), then we have

    C_{k+1}(\eta_k Q_k+1) = \eta_k Q_k C_k+\tilde{\phi}(x_{k+1}; \bar{\rho}).

    Hence

    \begin{equation} C_{k+1}- C_k = \frac{\tilde{\phi}(x_{k+1}; \bar{\rho})-C_{k+1}}{\eta_k Q_k}. \end{equation} (3.28)

    From (3.27) and (3.28), we have

    \begin{equation} \tilde{\phi}(x_{k+1}; \bar{\rho})\leq C_{k+1}. \end{equation} (3.29)

    From (3.27) and (3.29), we have

    \tilde{\phi}(x_{k+1}; \bar{\rho})\leq C_{k+1}\leq C_k.

    This completes the proof.

    Lemma 3.11. Under assumptions As_1–As_3, consider any iteration k at which

    \| Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k)\|+ \| Y_k \nabla P_k Z_k P_k \| > \epsilon_1.

    Then, there exists a constant K_4 > 0 such that

    \begin{equation} Pred_k \geq K_4 \tau_k \Delta_k. \end{equation} (3.30)

    Proof. From (2.12), (2.15), and assumptions As_1–As_3, there exists a constant b_1 > 0 such that \|B_k\|\leq b_1 for all k. Let

    \| Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k) \| > \frac{\epsilon_1}{2},

    and using inequality (3.1), we have

    \begin{align*} \label{eqn4.63} Pred_k &\geq K_1\tau_k\parallel Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k)\parallel \min \{\Delta_k, \frac{\parallel Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k) \parallel}{ \parallel B_k \parallel} \}\nonumber\\ &\geq \frac{1}{2} K_1\tau_k\epsilon_1\min\{1, \frac{\epsilon_1}{2 b_1\Delta_{max}} \}\Delta_k\nonumber\\ &\geq K_4 \tau_k\Delta_k, \end{align*}

    where

    K_4 = \frac{1}{2} K_1 \epsilon_1\min\{1, \frac{\epsilon_1}{2b_1\Delta_{max}} \}.

    This completes the proof.

    Lemma 3.12. Under assumptions As_1–As_3, if

    \| Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k)\|+ \| Y_k \nabla P_k Z_k P_k \| > \epsilon_1,

    then an acceptable step is found after finitely many trials. That is, the condition

    \frac{C_k-\tilde{\phi}(x_{k+1}; \bar{\rho}_k)}{Pred_k}\geq \theta_1

    will be satisfied.

    Proof. Since

    \| Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k)\|+ \| Y_k \nabla P_k Z_k P_k \| > \epsilon_1,

    then from Lemmas 3.5, 3.10, and 3.11, we have

    \begin{align*} \mid \frac{C_k-\tilde{\phi}(x_{k+1}; \bar{\rho}_k)}{Pred_k}-1\mid &\leq \mid \frac{\tilde{\phi}(x_k; \bar{\rho}_k)-\tilde{\phi}(x_{k+1}; \bar{\rho}_k)}{Pred_k}-1\mid\\& = \mid \frac{Ared_k}{Pred_k}-1\mid \\ & = \frac{\mid Ared_k -Pred_k\mid}{Pred_k}\\&\leq \frac{K_3 \bar{\rho}\tau_k\Delta_k^2}{K_4\tau_k\Delta_k}\leq \frac{K_3 \bar{\rho}\Delta_k}{K_4}. \end{align*}

    Each time the step d_k is rejected, \Delta_k becomes smaller; hence, after a finite number of trials, the acceptance rule will finally be satisfied. That is,

    \frac{C_k-\tilde{\phi}(x_{k+1}; \bar{\rho}_k)}{Pred_k}\geq \theta_1

    and this ends the proof.

    Lemma 3.13. Under assumptions As_1–As_3, if

    \| Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k)\|+ \| Y_k \nabla P_k Z_k P_k \| > \epsilon_1

    at a given iteration k , the j^{th} trial step satisfies

    \begin{equation} \| d_{k^j} \| \leq \frac{(1-\theta_1) K_4}{2\bar{\rho} K_3}, \end{equation} (3.31)

    then it has to be accepted.

    Proof. The proof of this lemma is by contradiction, so we assume that inequality (3.31) holds and the step d_{k^j} is rejected. Since d_{k^j} is rejected, from Algorithm 2.3 we have

    \frac{C_{k^j}-\tilde{\phi}(x_{{k^j}+1}; \bar{\rho})}{Pred_{k^j}} < \theta_1.

    Using inequalities (3.8) and (3.30), we have

    \begin{align*} (1- \theta_1) < \mid\frac{C_{k^j}-\tilde{\phi}(x_{{k^j}+1}; \bar{\rho})}{Pred_{k^j}}-1\mid &\leq \mid\frac{\tilde{\phi}(x_{k^j}; \bar{\rho})-\tilde{\phi}(x_{{k^j}+1}; \bar{\rho})}{Pred_{k^j}}-1\mid \\ & = \frac{\mid Ared_{k^j} - Pred_{k^j}\mid}{Pred_{k^j}} \\& < \frac{K_3 \bar{\rho}\tau_{k^j}\| d_{k^j}\|^2}{K_4\tau_{k^j} \| d_{k^j}\|} \leq \frac{(1- \theta_1)}{2}. \end{align*}

    This demonstrates the lemma and provides a contradiction.

    The fundamental global convergence theorem for Algorithm 2.6 is covered in this section.

    Theorem 3.1. Under assumptions As_1 As_3 , the sequence of iterates generated by the NTRAI algorithm satisfies

    \begin{equation} \liminf\limits_{k \rightarrow \infty} \; [ \|Y_k \nabla f_k \| +\| Y_k\nabla P_k Z_k P_k \|\; ]\; = \; 0. \end{equation} (3.32)

    Proof. First, we will prove the following limit by contradiction

    \begin{equation} \liminf\limits_{k \rightarrow \infty} \|Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k) \| +\| Y_k\nabla P_k Z_k P_k \| = 0. \end{equation} (3.33)

    So, suppose that,

    \| Y_k \nabla \tilde{\phi}(x_k; \bar{\rho}_k) \| + \| Y_k\nabla P_k Z_k P_k \| > \epsilon_1

    for all k . Consider a trial step indexed j of the iteration indexed k such that k^j\geq \bar{k} and k\geq \bar{k} . Using Algorithm 2.3 and Lemma 3.11, we have the following for any acceptable step indexed k^j ,

    \begin{equation} C_{k^j}-\tilde{\phi}_{k^j+1}\geq \theta_1 Pred_{k^j}\geq \theta_1 K_4\tau_{k^j} \Delta_{k^j}. \end{equation} (3.34)

    Since the sequence \{C_{k^j}\} is nonincreasing (Lemma 3.10) and bounded below (assumption As_2), the left-hand side of (3.34) tends to zero as k goes to infinity; hence we have

    \begin{equation} \lim\limits_{k \rightarrow \infty} \Delta_{k^j} = 0. \end{equation} (3.35)

    This implies that \Delta_{k^j} is not bounded below. However:

    If we consider an iteration indexed k^j > \bar{k} for which the preceding step was accepted, that is, if j = 1, then \Delta_{k^1} \geq \Delta_{min}. Thus, in this case, \Delta_{k^j} is bounded below.

    If j > 1, at least one trial step has been rejected, and according to Lemma 3.13, we have

    \| d_{k^i} \| > \frac{(1-\theta_1) K_4}{2\bar{\rho} K_3},

    for all i = 1, 2, \cdots, j-1. Since d_{k^{j-1}} is a rejected trial step, from Algorithm 2.3 we have

    \Delta_{k^j} = \tilde{\alpha}_1\| d_{k^{j-1}}\| > \tilde{\alpha}_1\frac{(1-\theta_1) K_4}{2\bar{\rho} K_3}.

    Hence, \Delta_{k^j} is bounded below, and this contradicts condition (3.35). Therefore, the supposition is wrong and the limit in (3.33) holds. Consequently, the limit in (3.32) holds, and this completes the proof of the theorem.

    This section compares the performance of the NTRAI algorithm and demonstrates its robustness and efficiency using a collection of test problems with varying features that are commonly utilized in the literature. First, the tested problems are the Hock and Schittkowski's subset of the general nonlinear programming testing environment (the CUTE collection) [27]. Second, three engineering design problems are also tested.

    We obtained the numerical results of the NTRAI algorithm on a laptop with an Intel Core(TM) i7-2670QM CPU at 2.2 GHz, 8 GB RAM, and an Nvidia GeForce GT graphics card. NTRAI was run under MATLAB R2013a (8.2.0.701), 64-bit (win64). The values of the required constants in Step 0 of the nonmonotone trust-region Algorithm 2.5 were selected to be

    \theta_1 = 0.25, \theta_2 = 0.75, \tilde{\alpha_1} = 0.5, \tilde{\alpha_2} = 2, \epsilon_1 = 10^{-10}, \; \; \text{and}\; \; \epsilon_2 = 10^{-8}.

    Successful termination with respect to the nonmonotone trust-region Algorithm 2.5 means that the termination condition of the algorithm is met with \epsilon_1 .

    Benchmark problems listed in Hock and Schittkowski [27] are used to show the effectiveness of the NTRAI algorithm. For comparison, we have included the corresponding results of the NTRAI algorithm against the numerical results in [3,28,29]. This is summarized in Table 1, where Niter refers to the number of iterations. The algorithm has the ability to locate the optimal solution from either a feasible or an infeasible initial reference point.

    Table 1.  Comparison between the methods in [3,28,29], and NTRAI algorithm.
    Problem  Name  Niter (Method [29])  Niter (Method [28])  Niter (Method [3])  Niter (NTRAI algorithm)
    Prob1 hs006 7 5 4 4
    Prob2 hs007 9 8 7 6
    Prob3 hs008 14 6 8 6
    Prob4 hs009 10 6 7 5
    Prob5 hs012 5 7 4 4
    Prob6 hs024 14 4 9 6
    Prob7 hs026 19 19 14 12
    Prob8 hs027 14 18 12 12
    Prob9 hs028 6 2 3 2
    Prob10 hs029 8 6 9 7
    Prob11 hs030 7 6 8 4
    Prob12 hs032 24 5 6 5
    Prob13 hs033 29 6 8 5
    Prob14 hs034 30 5 7 9
    Prob15 hs036 10 7 9 6
    Prob16 hs037 7 6 6 4
    Prob17 hs039 19 23 5 7
    Prob18 hs040 4 3 6 4
    Prob19 hs042 6 3 7 5
    Prob20 hs043 9 7 6 6
    Prob21 hs046 25 10 10 8
    Prob22 hs047 25 17 12 10
    Prob23 hs048 6 2 3 3
    Prob24 hs049 69 16 10 12
    Prob25 hs050 11 8 6 5
    Prob26 hs051 8 2 3 3
    Prob27 hs052 4 2 3 2
    Prob28 hs053 5 4 4 3
    Prob29 hs056 12 5 4 3
    Prob30 hs060 7 7 5 4
    Prob31 hs061 44 7 8 6
    Prob32 hs063 5 5 4 3
    Prob33 hs073 16 7 8 6
    Prob34 hs078 4 4 5 4
    Prob35 hs079 7 4 4 4
    Prob36 hs080 6 5 6 4
    Prob37 hs081 9 6 7 5
    Prob38 hs093 12 6 5 5


    For all problems, these algorithms achieved the same optimal solutions as reported in [27]. Figure 1 shows the numerical results, which are summarized in Table 1, by using the performance profile proposed by Dolan and Moré [30].

    Figure 1.  Performance profile based on the Niter of methods in [3,28,29], and NTRAI algorithm.

    The performance profile in terms of Niter is given in Figure 1, which shows a distinctive difference between the NTRAI algorithm and the other algorithms [3,28,29]. Figure 2 represents the number of iterations required for each problem with different methods.

    Figure 2.  Comparison between the Niter of methods in [3,28,29] and the NTRAI algorithm.

    In this section, to evaluate the applicability of the NTRAI algorithm in real-world applications, we will consider three constrained mechanical engineering problems from the latest test suite [19].

    In this experimental estimation, the NTRAI algorithm was compared with algorithms AOA [31], CGA [32], ChOA [33], SA [34], LMFO [35], I-MFO[36], MFO [37], WOA [38], GWO [39], SMFO [40], and WCMFO[41]. All algorithms attempt to solve three distinct problems, including: a gas transmission compressor design problem, a three-bar truss design problem, and a tension/compression spring design problem.

    P_1 . Gas transmission compressor design (GTCD) problem

    Minimizing the objective function utilizing four design variables is the fundamental objective of the GTCD problem. Figure 3 clarifies the GTCD problem. The mathematical formulation for the GTCD problem is

    \begin{array}{cl} minimize & 8.61\times 10^5 \sqrt{\frac{ x_1}{x_4}} x_2 x_3^{\frac{-2}{3}}+3.69\times 10^4 x_3+7.72\times 10^8 \frac{x_2^{0.219}}{x_1}-765.43\times 10^6 x_1^{-1}, \\ subject \; to & \frac{x_4+1}{x_2^2}-1 \leq 0, \\ & 20\leq x_1 \leq 50, \\ & 1\leq x_2 \leq 10, \\ & 20\leq x_3 \leq 45, \\ & 0.1\leq x_4 \leq 60. \end{array}
    Figure 3.  Gas transmission compressor design problem.
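    As a rough, independent cross-check of the numbers in Table 2 (not the NTRAI implementation itself), the GTCD problem can be handed to MATLAB's general-purpose solver fmincon; the starting point below is our arbitrary choice:

    % GTCD problem via a generic solver (illustrative cross-check only).
    obj = @(x) 8.61e5*sqrt(x(1)/x(4))*x(2)*x(3)^(-2/3) + 3.69e4*x(3) ...
          + 7.72e8*x(2)^0.219/x(1) - 765.43e6/x(1);
    nonlcon = @(x) deal((x(4) + 1)/x(2)^2 - 1, []);  % c(x) <= 0, no equalities
    lb = [20 1 20 0.1];   ub = [50 10 45 60];
    x0 = [40 2 30 1];                                % arbitrary interior start
    x  = fmincon(obj, x0, [], [], [], [], lb, ub, nonlcon);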

    The performance of the NTRAI algorithm was evaluated against other methods when solving the GTCD problem. Table 2 presents the numerical results and comparisons for the GTCD problem. As seen in the table, the NTRAI algorithm is the most effective in addressing this problem.

    Table 2.  The numerical results and comparison for GTCD problem.
    Name of algorithm x_{1} x_{2} x_{3} x_{4} Optimal cost
    SA[34] 46.76 1.62 25.79 0.55 4.390311\times 10^{6}
    CGA[32] 49.97 20.01 31.47 49.83 1.735023\times 10^{7}
    GWO[39] 20.00 7.81 20.00 60.00 2.964974\times 10^{6}
    MFO [37] 50.00 1.18 24.57 0.39 2.964902\times 10^{6}
    WOA[38] 50.00 1.18 24.86 0.39 2.965002\times 10^{6}
    LMFO [35] 49.46 1.18 24.64 0.39 2.965456\times 10^{6}
    WCMFO [41] 50.00 1.18 24.61 0.39 2.964897\times 10^{6}
    ChOA [33] 50.00 1.19 24.24 0.41 2.966828\times 10^{6}
    AOA[31] 50.00 1.23 20.00 0.51 3.014615\times 10^{6}
    SMFO [40] 23.66 1.09 23.66 0.19 3.052254\times 10^{6}
    I-MFO [36] 50.00 1.18 24.60 0.39 2.964896\times 10^{6}
    NTRAI 49.6 1.175 24.9 0.382 2.962714204361616\times 10^{6}


    P_2 . Three-bar truss design (TBTD) problem

    In the TBTD problem, three constraints and two variables are used to formulate the weight of the bar structures, which is the objective function. Figure 4 clarifies the schematic for the TBTD problem. The mathematical formula for the TBTD problem is

    \begin{array}{cl} minimize & 100(x_2+2\sqrt{2}x_1), \\ subject \; to & \frac{2x_2}{2x_1x_2+\sqrt{2}x_1^2}-2 \leq 0, \\ & \frac{2x_2+2\sqrt{2}x_1}{2x_1x_2+\sqrt{2}x_1^2}-2 \leq 0, \\ & \frac{2}{x_1+\sqrt{2}x_2}-2 \leq 0, \\ & 0\leq x_1 \leq 1, \\ & 0\leq x_2 \leq 11. \end{array}
    Figure 4.  Three-bar truss design problem.

    The NTRAI algorithm is compared with the other algorithms when solving the TBTD problem. Table 3 shows the numerical results for the TBTD problem. The NTRAI algorithm outperforms other algorithms in approximating the optimal values for variables with minimum weight.

    Table 3.  The numerical results and comparison for TBTD.
    Name of algorithm x_1 x_2 Optimal weight
    SA [34] 0.768630 0.474232 2.6482456\times 10^{2}
    CGA [32] 0.792428 0.397752 2.6390770\times 10^{2}
    GWO [39] 0.787771 0.410862 2.6389619\times 10^{2}
    MFO [37] 0.789186 0.406806 2.6389603\times 10^{2}
    WOA [38] 0.787713 0.410977 2.6389653\times 10^{2}
    LMFO [35] 0.791713 0.399909 2.6392114\times 10^{2}
    WCMFO [41] 0.788472 0.408822 2.6389589\times 10^{2}
    ChOA [33] 0.787802 0.410724 2.6389653\times 10^{2}
    AOA [31] 0.792789 0.396906 2.6392526\times 10^{2}
    SMFO [40] 0.792044 0.398859 2.6390973\times 10^{2}
    I-MFO [36] 0.788792 0.407919 2.6389585\times 10^{2}
    NTRAI 0.7 0.4 2.3798989873\times 10^{2}


    P_3 . Tension/compression spring design (TCSD) problem

    In the TCSD problem, four constraints and three variables are utilized to formulate the weight of the tension/compression spring, which is the objective function. As shown in Figure 5, the variables are the wire diameter d, the mean coil diameter D, and the number of active coils N. These variables are denoted in the mathematical formulation by x_1, x_2, and x_3, respectively.

    Figure 5.  Tension/compression spring design problem.

    The mathematical formulation for the TCSD problem is

    \begin{array}{cl} minimize & x_1^2 x_2(2+x_3), \\ subject \; to &1- \frac{x_2^3 x_3}{71785 x_1^4}\leq 0, \\ & \frac{4x_2^2-x_1 x_2}{12566(x_2x_1^3-x_1^4)}+\frac{1}{5108 x_1^2}-1 \leq 0, \\ & 1-\frac{140.45 x_1}{x_2^2 x_3} \leq 0, \\ &\frac{2}{3}(x_1+x_2)-1\leq 0, \\ & 0.05\leq x_1 \leq 2, \\ & 0.25\leq x_2 \leq 1.3, \\ & 2\leq x_3 \leq 15. \end{array}

    The NTRAI Algorithm 2.6 is compared with the other algorithms on the TCSD problem; the numerical results and the comparison are shown in Table 4. NTRAI attains the smallest weight among the methods considered.

    Table 4.  The numerical results and comparison for the TCSD problem.

    | Algorithm   | d = x_1       | D = x_2    | N = x_3           | Optimal weight |
    | SA [34]     | 0.075935      | 0.993094   | 3.879891          | 0.033670 |
    | CGA [32]    | 0.071031      | 1.019975   | 1.726076          | 0.019749 |
    | GWO [39]    | 0.051231      | 0.345699   | 11.970135         | 0.012676 |
    | MFO [37]    | 0.053064      | 0.390718   | 9.542437          | 0.012699 |
    | WOA [38]    | 0.050451      | 0.327675   | 13.219341         | 0.012694 |
    | LMFO [35]   | 0.050000      | 0.317154   | 14.107156         | 0.012771 |
    | WCMFO [41]  | 0.051509      | 0.352411   | 11.545969         | 0.012666 |
    | ChOA [33]   | 0.051069      | 0.341746   | 12.251078         | 0.012702 |
    | AOA [31]    | 0.050000      | 0.310475   | 15.000000         | 0.013195 |
    | SMFO [40]   | 0.050000      | 0.314692   | 14.696505         | 0.013136 |
    | I-MFO [36]  | 0.051710      | 0.357217   | 11.259785         | 0.012665 |
    | NTRAI       | 0.05179848439 | 0.35946589 | 11.12481959619885 | 0.0126585842553172 |


    Example. Nonconvex optimization problem [20]

    Consider the following nonconvex nonlinear constrained optimization problem

    \begin{array}{cl} minimize & -x_1-x_2, \\ subject \; to & x_1 x_2\leq 4, \\ & 0\leq x_1 \leq 6, \\ & 0\leq x_2 \leq 4. \end{array}

    The above problem possesses two strong local minima, (1, 4) and (6, 0.66667). Applying the NTRAI Algorithm 2.6 to this nonconvex problem, the local minimizer (1.0000001220725, 4) is obtained, with objective function value -5.0000001220725.
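
    As an independent illustration of the two basins of attraction (not the NTRAI code), a general-purpose local solver such as SciPy's SLSQP can be run from different starting points; which local minimum it returns depends on where it starts.

```python
from scipy.optimize import minimize

# minimize -x1 - x2  subject to  x1*x2 <= 4,  0 <= x1 <= 6,  0 <= x2 <= 4
obj = lambda x: -x[0] - x[1]
# SciPy encodes inequality constraints in the form fun(x) >= 0
cons = ({'type': 'ineq', 'fun': lambda x: 4.0 - x[0] * x[1]},)
bounds = [(0.0, 6.0), (0.0, 4.0)]

for x0 in ([0.5, 3.5], [5.5, 0.5]):  # one start near each basin
    res = minimize(obj, x0, method='SLSQP', bounds=bounds, constraints=cons)
    print(x0, '->', res.x, 'f =', res.fun)  # expect ~(1, 4) and ~(6, 0.6667)
```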

    This research combined a nonmonotone technique with an automatically adjusted trust-region radius to provide a more efficient hybrid of trust-region approaches for constrained optimization problems. The active-set strategy was combined with a penalty and Newton's interior-point method to transform a nonlinearly constrained optimization problem into an equivalent unconstrained one. A nonmonotone trust region was used to ensure convergence from any starting point to a stationary point. A global convergence theory for the suggested method was developed under certain assumptions. Well-known test problems (the CUTE collection) were used to evaluate the suggested method; three engineering design problems were solved, and the outcomes were compared with those of other reputable optimizers. The results showed that, compared with the other algorithms under discussion, the proposed method typically yields better approximate solutions and requires fewer iterations. The computational findings also demonstrated the suggested algorithm's competitiveness and superiority over alternative optimization algorithms.

    Several directions should be pursued in future research:

    ● Extending the nonmonotone trust-region algorithm to handle nondifferentiable constrained optimization problems.

    ● Extending the nonmonotone trust-region algorithm to handle large-scale constrained optimization problems.

    ● Employing a secant approximation of the Hessian matrix to obtain a more efficient algorithm.

    Bothina El-Sobky: conceived the study, developed the theoretical framework, and performed the numerical experiments; Yousria Abo-Elnaga: conceived the study and supervised the application; Gehan Ashry: aided in the analysis. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are very grateful to the anonymous reviewers for their valuable and insightful comments, which have aided us in improving the quality of this paper.

    The authors declare no conflicts of interest.



    [1] I. Das, An interior point algorithm for the general nonlinear programming problem with trust region globalization, Institute for Computer Applications in Science and Engineering, 1996. https://doi.org/10.5555/870136
    [2] B. El-Sobky, An active-set interior-point trust-region algorithm, Pacific J. Optim., 14 (2018), 125–159.
    [3] B. El-Sobky, Y. Abo-Elnaga, A. Mousa, A. El-Shorbagy, Trust-region based penalty barrier algorithm for constrained nonlinear programming problems: an application of design of minimum cost canal sections, Mathematics, 9 (2021), 1551. https://doi.org/10.3390/math9131551 doi: 10.3390/math9131551
    [4] B. El-Sobky, G. Ashry, An interior-point trust-region algorithm to solve a nonlinear bilevel programming problem, AIMS Math., 7 (2022), 5534–5562. https://doi.org/10.3934/math.2022307 doi: 10.3934/math.2022307
    [5] B. El-Sobky, G. Ashry, Y. Abo-Elnaga, An active-set with barrier method and trust-region mechanism to solve a nonlinear Bilevel programming problem, AIMS Math., 7 (2022), 16112–16146. https://doi.org/10.3934/math.2022882 doi: 10.3934/math.2022882
    [6] B. El-Sobky, A multiplier active trust-region algorithm for solving general nonlinear programming problem, Appl. Math. Comput., 219 (2012), 928–946.
    [7] B. El-Sobky, G. Ashry, An active-set Fischer-Burmeister trust-region algorithm to solve a nonlinear bilevel optimization problem, Fractal Fract., 6 (2022), 412–441. https://doi.org/10.3390/fractalfract6080412 doi: 10.3390/fractalfract6080412
    [8] B. El-Sobky, M. F. Zidan, A trust-region based an active-set interior-point algorithm for fuzzy continuous static games, AIMS Math., 8 (2023), 13706–13724. https://doi.org/10.3934/math.2023696 doi: 10.3934/math.2023696
    [9] B. El-Sobky, Y. Abo-Elnaga, G. Ashry, M. Zidan, A nonmonotone active interior point trust region algorithm based on CHKS smoothing function for solving nonlinear bilevel programming problems, AIMS Math., 9 (2024), 6528–6554. https://doi.org/10.3934/math.2024318 doi: 10.3934/math.2024318
    [10] A. Sartenaer, Automatic determination of an initial trust region in nonlinear programming, SIAM J. Sci. Comput., 18 (1997), 1788–1803. https://doi.org/10.1137/S1064827595286955 doi: 10.1137/S1064827595286955
    [11] X. S. Zhang, J. L. Zhang, L. Z. Liao, An adaptive trust region method and its convergence, Sci. China Ser. A, 45 (2002), 620–631. https://doi.org/10.1360/02ys9067 doi: 10.1360/02ys9067
    [12] Z. J. Shi, J. H. Guo, A new trust region method for unconstrained optimization, J. Comput. Appl. Math., 213 (2008), 509–520. https://doi.org/10.1016/j.cam.2007.01.027 doi: 10.1016/j.cam.2007.01.027
    [13] L. Grippo, F. Lampariello, S. Lucidi, A truncated Newton method with nonmonotone line search for unconstrained optimization, J. Optim. Theory Appl., 60 (1989), 401–419. https://doi.org/10.1007/BF00940345 doi: 10.1007/BF00940345
    [14] P. L. Toint, An assessment of nonmonotone linesearch techniques for unconstrained optimization, SIAM J. Sci. Comput., 17 (1996), 725–739. https://doi.org/10.1137/S106482759427021X doi: 10.1137/S106482759427021X
    [15] N. Y. Deng, Y. Xiao, F. J. Zhou, Nonmonotonic trust region algorithm, J. Optim. Theory Appl., 76 (1993), 259–285. https://doi.org/10.1007/BF00939608 doi: 10.1007/BF00939608
    [16] J. L. Zhang, X. S. Zhang, A nonmonotone adaptive trust region method and its convergence, Comput. Math. Appl., 45 (2003), 1469–1477. https://doi.org/10.1016/S0898-1221(03)00130-5 doi: 10.1016/S0898-1221(03)00130-5
    [17] H. C. Zhang, W. W. Hager, A nonmonotone line search technique for unconstrained optimization, SIAM J. Optim., 14 (2004), 1043–1056.
    [18] J. Mo, C. Liu, S. Yan, A nonmonotone trust region method based on nonincreasing technique of weighted average of the successive function value, J. Comput. Appl. Math., 209 (2007), 97–108. https://doi.org/10.1016/j.cam.2006.10.070 doi: 10.1016/j.cam.2006.10.070
    [19] A. Kumar, G. Wu, M. Ali, R. Mallipeddi, P. Suganthan, S. Das, A test-suite of non-convex constrained optimization problems from the real-world and some baseline results, Swarm Evol. Comput., 56 (2020), 100693. https://doi.org/10.1016/j.swevo.2020.100693 doi: 10.1016/j.swevo.2020.100693
    [20] N. Sahinidis, I. E. Grossmann, Convergence properties of generalized Benders decomposition, Comput. Chem. Eng., 15 (1991), 481–491. https://doi.org/10.1016/0098-1354(91)85027-R doi: 10.1016/0098-1354(91)85027-R
    [21] J. Dennis, M. El-Alem, K. Williamson, A trust-region approach to nonlinear systems of equalities and inequalities, SIAM J. Optim., 9 (1999), 291–315. https://doi.org/10.1137/S1052623494276208 doi: 10.1137/S1052623494276208
    [22] R. Fletcher, An l_1 penalty method for nonlinear constraints, Numer. Optim., 1984, 26–40.
    [23] J. Dennis, R. Schnabel, Numerical methods for unconstrained optimization and nonlinear equations, Prentice-Hall, 1983. https://doi.org/10.1137/1.9781611971200
    [24] Y. Yuan, On the convergence of a new trust region algorithm, Numer. Math., 70 (1995), 515–539. https://doi.org/10.1007/s002110050133 doi: 10.1007/s002110050133
    [25] O. Mangasarian, Nonlinear programming, McGraw-Hill Book Company, 1969.
    [26] B. El-Sobky, Y. Abouel-Naga, A penalty method with trust-region mechanism for nonlinear bilevel optimization problem, J. Comput. Appl. Math., 340 (2018), 360–374. https://doi.org/10.1016/j.cam.2018.03.004 doi: 10.1016/j.cam.2018.03.004
    [27] W. Hock, K. Schittkowski, Test examples for nonlinear programming codes, Springer-Verlag, 1981. https://doi.org/10.1007/978-3-642-48320-2
    [28] J. Nocedal, F. Oztoprak, R. Waltz, An interior point method for nonlinear programming with infeasibility detection capabilities, Optim. Methods Software, 29 (2014), 837–854. https://doi.org/10.1080/10556788.2013.858156 doi: 10.1080/10556788.2013.858156
    [29] A. L. Tits, A. Wachter, S. Bakhtiari, T. J. Urban, G. T. Lawrence, A primal-dual interior-point method for nonlinear programming with strong global and local convergence properties, SIAM J. Optim., 14 (2003), 173–199. https://doi.org/10.1137/S1052623401392123 doi: 10.1137/S1052623401392123
    [30] E. Dolan, J. More, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), 201–213. https://doi.org/10.1007/s101070100263 doi: 10.1007/s101070100263
    [31] L. Abualigah, A. Diabat, S. Mirjalili, M. A. Elaziz, A. H. Gandomi, The arithmetic optimization algorithm, Comput. Methods Appl. Mech. Eng., 376 (2021), 113609. https://doi.org/10.1016/j.cma.2020.113609 doi: 10.1016/j.cma.2020.113609
    [32] R. Chelouah, P. Siarry, A continuous genetic algorithm designed for the global optimization of multimodal functions, J. Heuristics, 6 (2000), 191–213. https://doi.org/10.1023/A:1009626110229 doi: 10.1023/A:1009626110229
    [33] M. Khishe, M. R. Mosavi, Chimp optimization algorithm, Expert Syst. Appl., 149 (2020), 113338. https://doi.org/10.1016/j.eswa.2020.113338 doi: 10.1016/j.eswa.2020.113338
    [34] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by simulated annealing, Science, 220 (1983), 671–680. https://doi.org/10.1126/science.220.4598.671 doi: 10.1126/science.220.4598.671
    [35] Z. Li, Y. Zhou, S. Zhang, J. Song, Levy-flight moth-flame algorithm for function optimization and engineering design problems, Math. Probl. Eng., 126 (2016), 1423930. https://doi.org/10.1155/2016/1423930 doi: 10.1155/2016/1423930
    [36] M. H. Nadimi-Shahraki, A. Fatahi, H. Zamani, S. Mirjalili, L. Abualigah, An improved moth-flame optimization algorithm with adaptation mechanism to solve numerical and mechanical engineering problems, Entropy, 23 (2021), 1637. https://doi.org/10.3390/e23121637 doi: 10.3390/e23121637
    [37] S. Mirjalili, Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm, Knowl. Based Syst., 89 (2015), 228–249. https://doi.org/10.1016/j.knosys.2015.07.006 doi: 10.1016/j.knosys.2015.07.006
    [38] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software, 95 (2016), 51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008 doi: 10.1016/j.advengsoft.2016.01.008
    [39] S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software, 69 (2014), 46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007 doi: 10.1016/j.advengsoft.2013.12.007
    [40] C. Chen, X. Wang, H. Yu, M. Wang, H. Chen, Dealing with multi-modality using synthesis of Moth-flame optimizer with sine cosine mechanisms, Math. Comput. Simul., 188 (2021), 291–318. https://doi.org/10.1016/j.matcom.2021.04.006 doi: 10.1016/j.matcom.2021.04.006
    [41] S. Khalilpourazari, S. Khalilpourazary, An efficient hybrid algorithm based on water cycle and moth-flame optimization algorithms for solving numerical and constrained engineering optimization problems, Soft Comput., 23 (2019), 1699–1722. https://doi.org/10.1007/s00500-017-2894-y doi: 10.1007/s00500-017-2894-y
© 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).