Research article

Outer space branching search method for solving generalized affine fractional optimization problem

  • Received: 18 July 2022 Revised: 23 September 2022 Accepted: 10 October 2022 Published: 26 October 2022
  • MSC : 90C26, 90C32, 65K05

  • This paper proposes an outer space branching search method for globally solving the generalized affine fractional optimization problem (GAFOP). First, we convert the GAFOP into an equivalent problem (EP). Next, we construct the linear relaxation problem (LRP) of the EP by using a linearization technique. By subsequently partitioning the initial outer space rectangle and successively solving a series of LRPs, the proposed algorithm globally converges to the optimal solution of the GAFOP. Finally, comparisons of numerical results are reported to show the superiority and effectiveness of the presented algorithm.

    Citation: Junqiao Ma, Hongwei Jiao, Jingben Yin, Youlin Shang. Outer space branching search method for solving generalized affine fractional optimization problem[J]. AIMS Mathematics, 2023, 8(1): 1959-1974. doi: 10.3934/math.2023101




    The considered generalized affine fractional optimization problem is as follows:

    $$\text{(GAFOP)}:\quad\begin{cases}\min\ \max\left\{\dfrac{\sum_{j=1}^{n}e_{1j}y_j+f_1}{\sum_{j=1}^{n}c_{1j}y_j+h_1},\ \dfrac{\sum_{j=1}^{n}e_{2j}y_j+f_2}{\sum_{j=1}^{n}c_{2j}y_j+h_2},\ \ldots,\ \dfrac{\sum_{j=1}^{n}e_{pj}y_j+f_p}{\sum_{j=1}^{n}c_{pj}y_j+h_p}\right\}\\[1mm] \text{s.t.}\ y\in Y=\{y\in\mathbb{R}^{n}\mid Ay\le b\},\end{cases}$$

    where $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$, $e_i, c_i\in\mathbb{R}^{n}$, and $f_i, h_i$ are arbitrary real numbers; $p\ge 2$; $Y$ is a nonempty compact set; $\sum_{j=1}^{n}e_{ij}y_j+f_i$ and $\sum_{j=1}^{n}c_{ij}y_j+h_i$ are bounded affine functions defined on $Y$; and for any $y\in Y$, the denominators satisfy $\sum_{j=1}^{n}c_{ij}y_j+h_i\neq 0$, $i=1,2,\ldots,p$.

    As a class of special fractional optimization problems, the GAFOP has attracted the attention of many researchers and practitioners for decades. It has a variety of applications in many fields, including finance and investment [1,2,3], transportation planning [4,5], optimal design [6], estimation of iterative parameters [7], signal processing [8], data envelopment analysis and others [9,10,11,12,13,14,15,16,17]. Furthermore, since the GAFOP may not be (quasi)convex, it may possess many local optimal solutions that fail to be global ones. Hence, it is of great significance to propose an effective global algorithm for solving the GAFOP.

    Some algorithms have been presented to globally solve the GAFOP over the past several decades: for instance, the cutting plane algorithm [18], branch-relaxation-bound methods [19,20], the interior-point algorithm [21], the partial linearization algorithm [22], the monotonic optimization method [23], the method of centers [24] and the prox-regularization method [25]. Recently, based on the Dinkelbach type algorithm, Ghazi and Roubi [26] presented a difference of convex functions (DC) method for globally solving the generalized convex fractional optimization problem. By utilizing the proximal bundle theory, Boualam and Roubi [27] proposed a dual method for the generalized convex fractional optimization problem. Jiao et al. [28] designed an image space branch-and-bound algorithm for solving minimax linear fractional programs. Haffari and Roubi [29] described a prox-dual regularization method for globally solving generalized fractional programs. By utilizing convex hull and concave hull approximation of a bilinear function, Jiao and Li [30] put forward a novel algorithm for globally addressing min-max linear fractional programs. However, the previous reviewed methods can only solve a particular form of the GAFOP, or they are difficult to use to solve large-scale practical problems. Therefore, there remains the necessity to propose a practical algorithm to solve the GAFOP.

    In addition to the methods reviewed above, some theoretical progress on the generalized fractional optimization problem (GFOP) has also been made. For example, Ahmad and Husain [31] gave a duality theory for a non-differentiable GFOP with generalized convexity. Schmitendorf [32] presented some optimality conditions for the GFOP. By utilizing the optimality condition, Tanimoto [33] gave a dual problem for a class of non-differentiable GFOP and derived the duality theorems. Yadav and Mukherjee [34] gave a duality theory for GFOP. When the data in the system are uncertain, Jeyakumar et al. [35] put forward a strong duality theorem for the robust GFOP. Based on unconstrained conditions, Lai et al. [36] gave the duality theorem for the GFOP. For a detailed review of the methods and theories for the GFOP, readers can refer to Stancu-Minasian [37,38].

    In this article, an outer space branching search method is designed for globally solving the GAFOP. We first convert the GAFOP into the EP. Next, by utilizing the structural characteristics of the EP, we construct a new linearizing method for establishing the LRP of the EP. Compared with the known existing algorithms, the branching search of the presented algorithm takes place in the outer space $\mathbb{R}^{p}$ rather than in the decision-variable space $\mathbb{R}^{n}$, which offers the possibility of reducing the computational effort of the algorithm. In addition, the reported numerical results indicate that the proposed algorithm has higher efficiency and notable superiority compared with the known existing algorithms [19,39,40].

    The remainder of this article is organized as follows. We derive the EP of the GAFOP and establish the LRP of the EP in Section 2. In Section 3, we give an outer space branching search method for globally solving the GAFOP, and we also analyze the global convergence of the algorithm. Numerical results for some test examples from recent studies are presented in Section 4. Finally, Section 5 gives some conclusions.

    In the following, to solve the GAFOP, we first transform the GAFOP into the EP. Then, we present a novel linearization technique and construct the LRP of the EP. For this purpose, for each $i=1,\ldots,p$, we introduce the additional variables $z_i=\sum_{j=1}^{n}c_{ij}y_j+h_i$. By computing the minimum value $\underline{z}_i^{0}=\min_{y\in Y}\sum_{j=1}^{n}c_{ij}y_j+h_i$ and the maximum value $\overline{z}_i^{0}=\max_{y\in Y}\sum_{j=1}^{n}c_{ij}y_j+h_i$ of the affine function $\sum_{j=1}^{n}c_{ij}y_j+h_i$ over $Y$, an initial outer space rectangle $Z^{0}=\{z\in\mathbb{R}^{p}\mid \underline{z}_i^{0}\le z_i\le\overline{z}_i^{0},\ i=1,\ldots,p\}$ can be constructed. By introducing the new variable $r=\max\left\{\frac{\sum_{j=1}^{n}e_{1j}y_j+f_1}{z_1},\frac{\sum_{j=1}^{n}e_{2j}y_j+f_2}{z_2},\ldots,\frac{\sum_{j=1}^{n}e_{pj}y_j+f_p}{z_p}\right\}$, the objective function of the original problem GAFOP reduces to $r$, so that we obtain the EP of the GAFOP as follows:

    $$\text{(EP)}:\quad\begin{cases}\min\ r\\ \text{s.t.}\ \dfrac{\sum_{j=1}^{n}e_{ij}y_j+f_i}{z_i}\le r,\ i=1,2,\ldots,p,\\ z_i=\sum_{j=1}^{n}c_{ij}y_j+h_i,\ i=1,2,\ldots,p,\\ Ay\le b,\ z\in Z^{0}.\end{cases}$$
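The endpoints $\underline{z}_i^{0}$ and $\overline{z}_i^{0}$ of the initial outer space rectangle are each the value of a small linear program over $Y$. A minimal sketch with SciPy's `linprog`, on a hypothetical one-ratio instance (the region $Y=\{y\ge 0,\ y_1+y_2\le 1\}$ and the data $c=(1,2)$, $h=0.5$ are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: Y = {y >= 0, y1 + y2 <= 1}, one denominator
# z = y1 + 2*y2 + 0.5, i.e. c = (1, 2), h = 0.5.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
h = 0.5

# Lower endpoint: minimize c^T y over Y; upper endpoint: maximize it,
# obtained as -min of -c^T y.
lo = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).fun + h
hi = -linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).fun + h

print(lo, hi)  # -> 0.5 2.5
```

For a problem with $p$ ratios, $2p$ such LPs give the whole rectangle $Z^{0}$.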

    Theorem 1. $y^{*}$ is a global optimal solution of the GAFOP if and only if $(y^{*},z^{*},r^{*})$ is a global optimal solution of the EP, with

    $$z_i^{*}=\sum_{j=1}^{n}c_{ij}y_j^{*}+h_i,\quad i=1,2,\ldots,p,$$

    and

    $$r^{*}=\max\left\{\frac{\sum_{j=1}^{n}e_{1j}y_j^{*}+f_1}{z_1^{*}},\ \frac{\sum_{j=1}^{n}e_{2j}y_j^{*}+f_2}{z_2^{*}},\ \ldots,\ \frac{\sum_{j=1}^{n}e_{pj}y_j^{*}+f_p}{z_p^{*}}\right\}.$$

    Additionally, the global optimal values of the GAFOP and EP are equal.

    Proof. The conclusion follows directly from the construction of the EP above, and thus we omit the proof.

    By Theorem 1, to globally solve the GAFOP, we can instead solve the EP. In the following, we only consider solving the EP.

    For globally solving the EP, we need to establish its LRP for providing the lower bound in the branch-and-bound process. The detailed derivation process of the LRP is as follows.

    For any $Z=\{z\in\mathbb{R}^{p}\mid \underline{z}_i\le z_i\le\overline{z}_i,\ i=1,\ldots,p\}\subseteq Z^{0}$, we define

    $$\Phi_i(y,z_i)=\frac{\sum_{j=1}^{n}e_{ij}y_j+f_i}{z_i},\qquad \underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i)=\begin{cases}\sum\limits_{j=1,\,e_{ij}>0}^{n}\dfrac{e_{ij}}{\overline{z}_i}y_j+\sum\limits_{j=1,\,e_{ij}<0}^{n}\dfrac{e_{ij}}{\underline{z}_i}y_j+\dfrac{f_i}{\overline{z}_i}, & \text{if } f_i>0,\\[2mm] \sum\limits_{j=1,\,e_{ij}>0}^{n}\dfrac{e_{ij}}{\overline{z}_i}y_j+\sum\limits_{j=1,\,e_{ij}<0}^{n}\dfrac{e_{ij}}{\underline{z}_i}y_j+\dfrac{f_i}{\underline{z}_i}, & \text{if } f_i<0.\end{cases}$$

    Obviously, for each $i=1,\ldots,p$, we can see that

    $$\Phi_i(y,z_i)\ \ge\ \underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i).\qquad(1)$$

    Based on (1), for any $Z\subseteq Z^{0}$, we can construct the LRP of the EP as follows.

    $$\text{(LRP)}:\quad\begin{cases}\min\ r\\ \text{s.t.}\ \underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i)\le r,\ i=1,2,\ldots,p,\\ z_i=\sum_{j=1}^{n}c_{ij}y_j+h_i,\ i=1,2,\ldots,p,\\ Ay\le b,\ y\ge 0,\ z\in Z.\end{cases}$$

    From the above discussion, every feasible point of the EP over the sub-rectangle $Z$ is also feasible for the LRP. Let $v(\mathrm{LRP})$ and $v(\mathrm{EP})$ be the global optimal values of the LRP and the EP over $Z$, respectively; then $v(\mathrm{LRP})\le v(\mathrm{EP})$. Thus, the optimal value of the LRP provides a valid lower bound for that of the EP over $Z$.
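The construction of $\underline{\Phi}_i$ can be checked numerically: on a grid of sample points, $\Phi_i-\underline{\Phi}_i$ should be nonnegative and no larger than the error bound $\frac{\overline{z}_i-\underline{z}_i}{\underline{z}_i^{2}}\left[\sum_j|e_{ij}|y_j+|f_i|\right]$ used later in the proof of Theorem 2. A sketch with hypothetical coefficients $e=(2,-1)$, $f=0.9$, $y\in[0,1]^2$, $z\in[1.5,2.0]$ (illustrative data, not from the paper):

```python
import numpy as np

# One illustrative ratio: e = (2, -1), f = 0.9, z in [z_lo, z_hi].
e = np.array([2.0, -1.0])
f = 0.9
z_lo, z_hi = 1.5, 2.0

def phi(y, z):
    return (e @ y + f) / z

def phi_lower(y):
    # e_ij > 0 -> divide by z_hi; e_ij < 0 -> divide by z_lo; same rule for f.
    coef = np.where(e > 0, e / z_hi, e / z_lo)
    const = f / z_hi if f > 0 else f / z_lo
    return coef @ y + const

# Check Phi >= Phi_lower and the proof's error bound on a deterministic grid.
grid = np.linspace(0.0, 1.0, 11)
zs = np.linspace(z_lo, z_hi, 11)
worst_violation, worst_excess = 0.0, 0.0
for y1 in grid:
    for y2 in grid:
        y = np.array([y1, y2])
        bound = (z_hi - z_lo) / z_lo**2 * (np.abs(e) @ y + abs(f))
        for z in zs:
            gap = phi(y, z) - phi_lower(y)
            worst_violation = min(worst_violation, gap)
            worst_excess = max(worst_excess, gap - bound)

print(worst_violation >= -1e-12, worst_excess <= 1e-12)  # -> True True
```

Both checks pass on this grid, illustrating that $\underline{\Phi}_i$ is a valid underestimator whose error vanishes as the box $[\underline{z}_i,\overline{z}_i]$ shrinks.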

    Next, we will prove that the optimal solution of the LRP approaches the optimal solution of the EP over $Z$ as $\|\overline{z}-\underline{z}\|\to 0$, as detailed in Theorem 2.

    Theorem 2. For each $i=1,2,\ldots,p$, consider the functions $\Phi_i(y,z_i)$ and $\underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i)$. We have the following:

    $$\lim_{\|\overline{z}-\underline{z}\|\to 0}\left(\Phi_i(y,z_i)-\underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i)\right)=0.$$

    Proof. By the definitions of the functions $\Phi_i(y,z_i)$ and $\underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i)$, for any $y\in Y$ and $z_i\in[\underline{z}_i,\overline{z}_i]$, we have

    $$\begin{aligned}
    \Phi_i(y,z_i)-\underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i)
    &=\begin{cases}\dfrac{\sum_{j=1}^{n}e_{ij}y_j+f_i}{z_i}-\left[\sum\limits_{j=1,\,e_{ij}>0}^{n}\dfrac{e_{ij}}{\overline{z}_i}y_j+\sum\limits_{j=1,\,e_{ij}<0}^{n}\dfrac{e_{ij}}{\underline{z}_i}y_j+\dfrac{f_i}{\overline{z}_i}\right], & \text{if } f_i>0,\\[2mm] \dfrac{\sum_{j=1}^{n}e_{ij}y_j+f_i}{z_i}-\left[\sum\limits_{j=1,\,e_{ij}>0}^{n}\dfrac{e_{ij}}{\overline{z}_i}y_j+\sum\limits_{j=1,\,e_{ij}<0}^{n}\dfrac{e_{ij}}{\underline{z}_i}y_j+\dfrac{f_i}{\underline{z}_i}\right], & \text{if } f_i<0,\end{cases}\\[2mm]
    &=\begin{cases}\sum\limits_{j=1,\,e_{ij}>0}^{n}\left[\dfrac{e_{ij}y_j}{z_i}-\dfrac{e_{ij}y_j}{\overline{z}_i}\right]+\sum\limits_{j=1,\,e_{ij}<0}^{n}\left[\dfrac{e_{ij}y_j}{z_i}-\dfrac{e_{ij}y_j}{\underline{z}_i}\right]+\left[\dfrac{f_i}{z_i}-\dfrac{f_i}{\overline{z}_i}\right], & \text{if } f_i>0,\\[2mm] \sum\limits_{j=1,\,e_{ij}>0}^{n}\left[\dfrac{e_{ij}y_j}{z_i}-\dfrac{e_{ij}y_j}{\overline{z}_i}\right]+\sum\limits_{j=1,\,e_{ij}<0}^{n}\left[\dfrac{e_{ij}y_j}{z_i}-\dfrac{e_{ij}y_j}{\underline{z}_i}\right]+\left[\dfrac{f_i}{z_i}-\dfrac{f_i}{\underline{z}_i}\right], & \text{if } f_i<0,\end{cases}\\[2mm]
    &=\begin{cases}\dfrac{\overline{z}_i-z_i}{z_i\overline{z}_i}\sum\limits_{j=1,\,e_{ij}>0}^{n}e_{ij}y_j+\dfrac{\underline{z}_i-z_i}{z_i\underline{z}_i}\sum\limits_{j=1,\,e_{ij}<0}^{n}e_{ij}y_j+\dfrac{f_i(\overline{z}_i-z_i)}{z_i\overline{z}_i}, & \text{if } f_i>0,\\[2mm] \dfrac{\overline{z}_i-z_i}{z_i\overline{z}_i}\sum\limits_{j=1,\,e_{ij}>0}^{n}e_{ij}y_j+\dfrac{\underline{z}_i-z_i}{z_i\underline{z}_i}\sum\limits_{j=1,\,e_{ij}<0}^{n}e_{ij}y_j+\dfrac{f_i(\underline{z}_i-z_i)}{z_i\underline{z}_i}, & \text{if } f_i<0,\end{cases}\\[2mm]
    &\le\frac{\overline{z}_i-\underline{z}_i}{\underline{z}_i^{2}}\left[\sum_{j=1}^{n}|e_{ij}|y_j+|f_i|\right].
    \end{aligned}$$

    Since $\sum_{j=1}^{n}|e_{ij}|y_j+|f_i|$ is a bounded affine function over $Y$, we have

    $$\lim_{\|\overline{z}-\underline{z}\|\to 0}\left(\Phi_i(y,z_i)-\underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i)\right)=0,$$

    which completes the proof of the theorem.

    The above theorem ensures that the function $\underline{\Phi}_i(y,\underline{z}_i,\overline{z}_i)$ approximates $\Phi_i(y,z_i)$ arbitrarily closely as $\|\overline{z}-\underline{z}\|\to 0$, so the global optimal solution of the LRP approaches the global optimal solution of the EP over $Z$ as $\|\overline{z}-\underline{z}\|\to 0$.

    In this section, we first put forward an outer space rectangle bisection method. Next, by combining the previous LRP and the branch-and-bound framework, an outer space branching search method is designed to globally solve the GAFOP. In addition, we derive the global convergence of the outer space branching search method.

    The outer space rectangle bisection method iteratively subdivides the currently investigated rectangle into two sub-rectangles. Consider any selected sub-rectangle $Z=\{z\in\mathbb{R}^{p}\mid \underline{z}_i\le z_i\le\overline{z}_i,\ i=1,2,\ldots,p\}\subseteq Z^{0}$. The outer space rectangle bisection method is given as follows:

    (ⅰ) Let $q=\arg\max\{\overline{z}_i-\underline{z}_i\mid i=1,2,\ldots,p\}$;

    (ⅱ) Let

    $$Z^{1}=\{z\in\mathbb{R}^{p}\mid \underline{z}_i\le z_i\le\overline{z}_i,\ i=1,2,\ldots,p,\ i\ne q;\ \underline{z}_q\le z_q\le(\underline{z}_q+\overline{z}_q)/2\}$$

    and

    $$Z^{2}=\{z\in\mathbb{R}^{p}\mid \underline{z}_i\le z_i\le\overline{z}_i,\ i=1,2,\ldots,p,\ i\ne q;\ (\underline{z}_q+\overline{z}_q)/2\le z_q\le\overline{z}_q\}.$$

    By the proposed outer space rectangle bisection method, the selected sub-rectangle $Z$ is subdivided into the two sub-rectangles $Z^{1}$ and $Z^{2}$.
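The two steps (ⅰ)-(ⅱ) above can be sketched directly in code:

```python
import numpy as np

def bisect(lo, hi):
    """Split the box [lo, hi] along its longest edge q at the midpoint."""
    q = int(np.argmax(hi - lo))        # step (i): index of the longest edge
    mid = (lo[q] + hi[q]) / 2.0        # step (ii): midpoint of that edge
    hi1, lo2 = hi.copy(), lo.copy()
    hi1[q] = mid                       # Z1: z_q in [lo_q, mid]
    lo2[q] = mid                       # Z2: z_q in [mid, hi_q]
    return (lo.copy(), hi1), (lo2, hi.copy())

(lo1, hi1), (lo2, hi2) = bisect(np.array([0.0, 1.0]), np.array([4.0, 2.0]))
print(lo1, hi1, lo2, hi2)  # -> [0. 1.] [2. 2.] [2. 1.] [4. 2.]
```

Here the first edge has width 4 and is split at 2, giving $[0,2]\times[1,2]$ and $[2,4]\times[1,2]$.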

    In this subsection, the basic steps of the proposed outer space branching search method are formulated as follows.

    Step 0. Set the convergence tolerance $\epsilon\ge 0$, and let the initial outer space rectangle be

    $$Z^{0}=\{z\in\mathbb{R}^{p}\mid \underline{z}_i^{0}\le z_i\le\overline{z}_i^{0},\ i=1,2,\ldots,p\}.$$

    Denote by $F=\emptyset$ the initial set of feasible points, let $k=0$, and let the set of all active nodes be $\Omega_0=\{Z^{0}\}$.

    Step 1. Solve the LRP over $Z^{0}$, and let $(y^{0},z^{0},r^{0})$ and $\mathrm{LB}_0$ be its optimal solution and optimal value, respectively. Let

    $$\mathrm{UB}_0=\max\left\{\frac{\sum_{j=1}^{n}e_{1j}y_j^{0}+f_1}{\sum_{j=1}^{n}c_{1j}y_j^{0}+h_1},\ \frac{\sum_{j=1}^{n}e_{2j}y_j^{0}+f_2}{\sum_{j=1}^{n}c_{2j}y_j^{0}+h_2},\ \ldots,\ \frac{\sum_{j=1}^{n}e_{pj}y_j^{0}+f_p}{\sum_{j=1}^{n}c_{pj}y_j^{0}+h_p}\right\}.$$

    If $\mathrm{UB}_0-\mathrm{LB}_0\le\epsilon$, then the proposed algorithm stops: $y^{0}$ and $(y^{0},z^{0},r^{0})$ are $\epsilon$-optimal solutions of the GAFOP and the EP over $Z^{0}$, respectively. Otherwise, proceed with Step 2.

    Step 2. Use the proposed rectangle bisection method to subdivide $Z^{k-1}$ into two sub-rectangles $Z^{k,1}$ and $Z^{k,2}$. Let $Q=\{Z^{k,1},Z^{k,2}\}$.

    Step 3. For each $Z^{k,t}$, $t=1,2$, compute the lower bound $\mathrm{LB}(Z^{k,t})$ and the solution $(y(Z^{k,t}),z(Z^{k,t}),r(Z^{k,t}))$ by solving the LRP over $Z^{k,t}$, and let

    $$\mathrm{UB}(Z^{k,t})=\max\left\{\frac{\sum_{j=1}^{n}e_{1j}y_j(Z^{k,t})+f_1}{\sum_{j=1}^{n}c_{1j}y_j(Z^{k,t})+h_1},\ \ldots,\ \frac{\sum_{j=1}^{n}e_{pj}y_j(Z^{k,t})+f_p}{\sum_{j=1}^{n}c_{pj}y_j(Z^{k,t})+h_p}\right\}.$$

    If $\mathrm{LB}(Z^{k,t})>\mathrm{UB}_k$, then set $Q=Q\setminus\{Z^{k,t}\}$; else, let

    $$F=F\cup\{(y(Z^{k,t}),z(Z^{k,t}))\}\quad\text{and}\quad \mathrm{UB}_k=\min\{\mathrm{UB}_k,\mathrm{UB}(Z^{k,t})\}.$$

    If $\mathrm{UB}_k=\mathrm{UB}(Z^{k,t})$, then let $y^{k}=y(Z^{k,t})$ and $(y^{k},z^{k},r^{k})=(y(Z^{k,t}),z(Z^{k,t}),r(Z^{k,t}))$.

    Step 4. Set $\Omega_k=(\Omega_{k-1}\setminus\{Z^{k-1}\})\cup Q$.

    Step 5. Set $\mathrm{LB}_k=\min\{\mathrm{LB}(Z)\mid Z\in\Omega_k\}$, and let $Z^{k}$ be the sub-rectangle satisfying $\mathrm{LB}_k=\mathrm{LB}(Z^{k})$.

    If $\mathrm{UB}_k-\mathrm{LB}_k\le\epsilon$, then the proposed algorithm stops: $y^{k}$ and $(y^{k},z^{k})$ are $\epsilon$-global optimal solutions of the GAFOP and the EP, respectively.

    Otherwise, set $k=k+1$ and go back to Step 2.
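Steps 0-5 can be sketched end to end. Since each $\underline{\Phi}_i$ is affine in $y$, the LRP is itself a linear program in the variables $(y,z,r)$, so each node costs one `linprog` call. The instance below is hypothetical (chosen so that its optimal value is known to equal 1, attained at $y_1=0$), not one of the paper's test problems:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: min max{ (2y1+y2+1)/(y1+y2+1), (y1+3y2+0.5)/(2y1+y2+2) }
# s.t. y1+y2 <= 1.5, 0 <= y <= 1.  Its optimal value is 1 (at y1 = 0).
E = np.array([[2.0, 1.0], [1.0, 3.0]]); f = np.array([1.0, 0.5])
C = np.array([[1.0, 1.0], [2.0, 1.0]]); h = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.5])
ybounds = [(0.0, 1.0), (0.0, 1.0)]
p, n = E.shape

def objective(y):
    return np.max((E @ y + f) / (C @ y + h))

def solve_lrp(lo, hi):
    """LRP over the box [lo, hi]: an LP in (y, z, r); returns (LB, y)."""
    cost = np.r_[np.zeros(n + p), 1.0]              # minimize r
    G = np.zeros((p, n + p + 1)); g = np.zeros(p)   # Phi_lower_i(y) <= r
    for i in range(p):
        G[i, :n] = np.where(E[i] > 0, E[i] / hi[i], E[i] / lo[i])
        G[i, -1] = -1.0
        if f[i] > 0:   g[i] = -f[i] / hi[i]
        elif f[i] < 0: g[i] = -f[i] / lo[i]
    A_ub = np.vstack([np.hstack([A, np.zeros((len(b), p + 1))]), G])
    A_eq = np.hstack([-C, np.eye(p), np.zeros((p, 1))])  # z_i = C_i y + h_i
    res = linprog(cost, A_ub=A_ub, b_ub=np.r_[b, g], A_eq=A_eq, b_eq=h,
                  bounds=ybounds + list(zip(lo, hi)) + [(None, None)])
    return (res.fun, res.x[:n]) if res.success else (np.inf, None)

# Step 0: initial outer space rectangle Z0 from 2p LPs.
lo0 = np.array([linprog(C[i], A_ub=A, b_ub=b, bounds=ybounds).fun + h[i]
                for i in range(p)])
hi0 = np.array([-linprog(-C[i], A_ub=A, b_ub=b, bounds=ybounds).fun + h[i]
                for i in range(p)])
# Step 1: root bounds.
lb, y = solve_lrp(lo0, hi0)
ub, ybest = objective(y), y
active = [(lb, lo0, hi0)]
eps = 1e-2
for _ in range(200):                                # Steps 2-5
    active.sort(key=lambda t: t[0])
    lb = active[0][0]
    if ub - lb <= eps:
        break
    _, lo, hi = active.pop(0)                       # branch the best box
    q = int(np.argmax(hi - lo)); mid = (lo[q] + hi[q]) / 2.0
    hi_left = hi.copy(); hi_left[q] = mid
    lo_right = lo.copy(); lo_right[q] = mid
    for child_lo, child_hi in ((lo, hi_left), (lo_right, hi)):
        clb, cy = solve_lrp(child_lo, child_hi)
        if cy is not None and objective(cy) < ub:
            ub, ybest = objective(cy), cy           # update the incumbent
        if clb <= ub:                               # else prune the box
            active.append((clb, child_lo, child_hi))
    if not active:
        lb = ub
        break
print(round(ub, 4))  # -> 1.0
```

Note that this sketch omits the paper's range-compression acceleration, so it may need more iterations than the algorithm as reported; the incumbent value, however, reaches the known optimum already at the root node of this instance.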

    In this part, first of all, we define

    $$\Lambda(y)=\max\left\{\frac{\sum_{j=1}^{n}e_{1j}y_j+f_1}{\sum_{j=1}^{n}c_{1j}y_j+h_1},\ \frac{\sum_{j=1}^{n}e_{2j}y_j+f_2}{\sum_{j=1}^{n}c_{2j}y_j+h_2},\ \ldots,\ \frac{\sum_{j=1}^{n}e_{pj}y_j+f_p}{\sum_{j=1}^{n}c_{pj}y_j+h_p}\right\}.$$

    Let $v$ be the global optimal value of the EP over $Z^{0}$, and let $r(y^{k},z^{k})$ denote the objective value of the EP at the feasible solution $(y^{k},z^{k})$. The global convergence of the proposed algorithm is given by the following theorem.

    Theorem 3. Given any $\epsilon\ge 0$, if the proposed algorithm terminates finitely after $k$ iterations, then $y^{k}$ is a global $\epsilon$-optimal solution to the GAFOP in the sense that

    $$r(y^{k},z^{k})\le v+\epsilon.$$

    Otherwise, the proposed algorithm generates an infinite sequence $\{y^{k}\}$, any accumulation point of which is a global optimal solution to the GAFOP.

    Proof. If the presented algorithm terminates finitely after $k$ iterations, then by the termination criterion of the algorithm, it follows that

    $$\mathrm{UB}_k-\mathrm{LB}_k\le\epsilon.$$

    By Step 3 of the presented algorithm, we can find a feasible solution $(y^{k},z^{k})$ of the EP such that

    $$r(y^{k},z^{k})\le\mathrm{LB}_k+\epsilon\quad\text{and}\quad \mathrm{LB}_k\le v.$$

    Since $(y^{k},z^{k})$ is feasible for the EP, we have

    $$r(y^{k},z^{k})\ge v.$$

    By using the above conclusions, we have

    $$v\le r(y^{k},z^{k})\le\mathrm{LB}_k+\epsilon\le v+\epsilon.$$

    So, $(y^{k},z^{k})$ is a global $\epsilon$-optimal solution of the EP, with

    $$v\le r(y^{k},z^{k})\le v+\epsilon.$$

    Thus, $y^{k}$ is a global $\epsilon$-optimal solution to the GAFOP.

    If the presented algorithm does not terminate finitely, then it produces an infinite sequence $\{(y^{k},z^{k})\}$ of feasible solutions, which has a convergent subsequence. Without loss of generality, let

    $$\lim_{k\to\infty}(y^{k},z^{k})=(y^{*},z^{*}).$$

    So, we have

    $$\lim_{k\to\infty}z_i^{k}=z_i^{*}=\sum_{j=1}^{n}c_{ij}y_j^{*}+h_i,\quad i=1,2,\ldots,p.$$

    From the branch-and-bound structure of the algorithm, we also get

    $$\lim_{k\to\infty}\mathrm{LB}_k\le v.$$

    Since $y^{*}$ is a feasible solution of the GAFOP over $Z^{0}$, and due to Theorem 2, we get

    $$v\le\Lambda(y^{*}).$$

    Combining the above inequalities, we have

    $$\Lambda(y^{*})\ge v\ge\lim_{k\to\infty}\mathrm{LB}_k=\lim_{k\to\infty}r(y^{k},z^{k})=r(y^{*},z^{*}).\qquad(2)$$

    Furthermore, by the equivalence of the GAFOP and the EP, and the continuity of the function $\Lambda(y)$, we can conclude the following:

    $$\lim_{k\to\infty}r(y^{k},z^{k})=r(y^{*},z^{*})=\Lambda(y^{*})=\lim_{k\to\infty}\Lambda(y^{k}).\qquad(3)$$

    Based on the above inequalities (2) and (3), we have

    $$v=\Lambda(y^{*})=\lim_{k\to\infty}\Lambda(y^{k})=r(y^{*},z^{*})=\lim_{k\to\infty}\mathrm{LB}_k.$$

    Therefore, any accumulation point $y^{*}$ of the sequence $\{y^{k}\}$ is a global optimal solution to the GAFOP. The proof is complete.

    To verify the computational superiority of the algorithm, the presented algorithm was implemented in MATLAB R2014a, and all algorithms were run on the same microcomputer with an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz and 4 GB RAM.

    We first tested randomly generated instances of Problem 1 with small-size variables, compared them numerically with the known existing algorithms [19,39,40], and list the comparison results in Table 1. Next, we tested randomly generated instances of Problem 1 with large-size variables to verify our algorithm further, and list the numerical results in Table 2. In Table 2, Avg.Iter denotes the average number of iterations and Avg.Time the average execution CPU time in seconds.

    Table 1. Numerical comparisons among some algorithms and our algorithm on Problem 1 (winning average results in bold).

    | (p, m, n) | Algorithm | #Iter (min) | #Iter (avg) | #Iter (max) | Time(s) (min) | Time(s) (avg) | Time(s) (max) |
    |---|---|---|---|---|---|---|---|
    | (2, 10, 2) | Feng et al. [19] | 39 | 361 | 1188 | 0.45 | 4.22 | 13.11 |
    | (2, 10, 2) | Wang et al. [39] | 11 | **19.5** | 35 | 0.13 | **0.24** | 0.42 |
    | (2, 10, 2) | Jiao & Liu [40] | 14 | 24.5 | 38 | 0.16 | 0.31 | 0.46 |
    | (2, 10, 2) | Our algorithm | 28 | 275.5 | 971 | 0.33 | 3.30 | 11.37 |
    | (2, 10, 4) | Feng et al. [19] | 365 | 6579.3 | 18164 | 3.92 | 77.36 | 213.11 |
    | (2, 10, 4) | Wang et al. [39] | 133 | 340.3 | 833 | 1.45 | 3.79 | 9.00 |
    | (2, 10, 4) | Jiao & Liu [40] | 38 | 190.9 | 498 | 0.42 | 2.18 | 5.55 |
    | (2, 10, 4) | Our algorithm | 14 | **51** | 601 | 0.17 | **0.79** | 7.31 |
    | (2, 10, 6) | Feng et al. [19] | − | − | − | − | − | − |
    | (2, 10, 6) | Wang et al. [39] | 79 | 4165.8 | 24017 | 0.92 | 48.07 | 285.29 |
    | (2, 10, 6) | Jiao & Liu [40] | 220 | 661.3 | 1806 | 2.41 | 7.26 | 19.82 |
    | (2, 10, 6) | Our algorithm | 36 | **265.3** | 439 | 0.43 | **3.24** | 5.35 |
    | (2, 10, 8) | Feng et al. [19] | − | − | − | − | − | − |
    | (2, 10, 8) | Wang et al. [39] | 189 | 9030.5 | 44047 | 2.03 | 118.66 | 654.93 |
    | (2, 10, 8) | Jiao & Liu [40] | 1205 | 7875 | 59143 | 13.03 | 115.83 | 940.51 |
    | (2, 10, 8) | Our algorithm | 31 | **84** | 520 | 0.40 | **1.04** | 6.11 |
    | (2, 10, 10) | Feng et al. [19] | − | − | − | − | − | − |
    | (2, 10, 10) | Wang et al. [39] | − | − | − | − | − | − |
    | (2, 10, 10) | Jiao & Liu [40] | 613 | 4679.4 | 10880 | 6.72 | 52.54 | 124.93 |
    | (2, 10, 10) | Our algorithm | 48 | **168.9** | 452 | 0.56 | **2.02** | 5.32 |
    | (3, 10, 10) | Feng et al. [19] | − | − | − | − | − | − |
    | (3, 10, 10) | Wang et al. [39] | − | − | − | − | − | − |
    | (3, 10, 10) | Jiao & Liu [40] | 2599 | 8162.3 | 12849 | 28.56 | 93.13 | 150.47 |
    | (3, 10, 10) | Our algorithm | 183 | **1232.8** | 3860 | 2.17 | **15.2** | 47.8 |
    | (4, 10, 10) | Feng et al. [19] | − | − | − | − | − | − |
    | (4, 10, 10) | Wang et al. [39] | − | − | − | − | − | − |
    | (4, 10, 10) | Jiao & Liu [40] | 1629 | 21785.3 | 83513 | 17.83 | 340.12 | 1510.87 |
    | (4, 10, 10) | Our algorithm | 1071 | **8368.7** | 31234 | 12.53 | **120.12** | 537.53 |
    | (5, 10, 10) | Feng et al. [19] | − | − | − | − | − | − |
    | (5, 10, 10) | Wang et al. [39] | − | − | − | − | − | − |
    | (5, 10, 10) | Jiao & Liu [40] | 2894 | 34659.2 | 179384 | 31.37 | 859.55 | 6021.98 |
    | (5, 10, 10) | Our algorithm | 1943 | **27459** | 59576 | 22.41 | **497.90** | 1259.80 |

    Table 2. Numerical computational results of our algorithm for Problem 1.

    | (p, m, n) | Avg.Iter | Avg.Time(s) |
    |---|---|---|
    | (2, 100, 1000) | 40.2 | 46.0363 |
    | (2, 100, 2000) | 45.4 | 116.6136 |
    | (2, 100, 3000) | 44.4 | 181.6550 |
    | (2, 100, 4000) | 35.7 | 195.7975 |
    | (2, 100, 5000) | 34.1 | 252.4213 |
    | (2, 100, 6000) | 31.2 | 278.1843 |
    | (2, 100, 7000) | 29.1 | 312.0120 |
    | (2, 100, 8000) | 18.6 | 219.5177 |
    | (3, 100, 1000) | 302.1 | 365.2927 |
    | (3, 100, 2000) | 499.2 | 1424.3399 |
    | (3, 100, 3000) | 393.3 | 1792.6109 |
    | (3, 100, 4000) | 200.7 | 1232.3972 |


    The maximum CPU time limit for all algorithms was set at 3600 s, and the approximation tolerance was set as $\epsilon=10^{-2}$. "−" denotes the situation in which the corresponding algorithm failed to terminate within 3600 s. Since the known existing algorithms [19,39,40] failed to solve ten arbitrary randomly generated instances of Problem 1 with large-size variables within 3600 s, only the numerical results obtained by our algorithm are listed in Table 2.

    We solved ten arbitrary randomly generated instances for each test case. First of all, we tested the randomly generated Problem 1 with small-size variables. Table 1 shows the best, worst and average results among these ten runs, and the winning average results in the numerical comparison are highlighted in bold in Table 1. Second, we solved the randomly generated Problem 1 with large-size variables, and the numerical results are reported in Table 2.

    From the computational results of Table 1, it can be seen that, when $p=2$, $m=10$, and $n\ge 6$, the algorithm of Feng et al. [19] failed to solve any one of the ten randomly generated instances of Problem 1 within 3600 s. When $p\ge 2$, $m=10$, and $n\ge 10$, the algorithm of Wang et al. [39] failed to solve any one of the ten randomly generated instances within 3600 s. When $p\ge 3$, $m=10$, and $n\ge 20$, the algorithm of Jiao & Liu [40] failed to solve any one of the ten randomly generated instances within 3600 s. However, in all cases, our algorithm globally solved every one of the ten randomly generated instances of Problem 1. In addition, when $p=2$, $m=10$, and $n\ge 6$, compared with the known existing algorithms [19,39,40], our algorithm requires less running time and fewer iterations. Thus, our algorithm is computationally superior to the algorithms of Feng et al. [19], Wang et al. [39] and Jiao & Liu [40].

    From the computational results of Table 2, it is clear that the proposed algorithm can globally solve Problem 1 with large-size variables, which demonstrates the robustness and stability of our algorithm.

    Problem 1.

    $$\begin{cases}\min\ \max\left\{\dfrac{\sum_{j=1}^{n}d_{1j}y_j+g_1}{\sum_{j=1}^{n}e_{1j}y_j+h_1},\ \dfrac{\sum_{j=1}^{n}d_{2j}y_j+g_2}{\sum_{j=1}^{n}e_{2j}y_j+h_2},\ \ldots,\ \dfrac{\sum_{j=1}^{n}d_{pj}y_j+g_p}{\sum_{j=1}^{n}e_{pj}y_j+h_p}\right\}\\ \text{s.t.}\ \sum_{j=1}^{n}a_{kj}y_j\le b_k,\ k=1,2,\ldots,m,\\ \qquad\ y_j\ge 0,\ j=1,2,\ldots,n,\end{cases}$$

    where $d_{ij}$, $e_{ij}$, $b_k$, $a_{kj}$, $i=1,2,\ldots,p$, $k=1,2,\ldots,m$, $j=1,2,\ldots,n$, are all randomly generated in the interval $[0,10]$, and $g_i$ and $h_i$, $i=1,2,\ldots,p$, are all randomly generated in the unit interval $[0,1]$. Thus, the constant terms $g_i$ of the numerators and $h_i$ of the denominators of the linear ratios in test Problem 1 are small constants.

    Problem 2. [20]

    $$\begin{cases}\min\ \max\left\{\dfrac{2x_1+2x_2-x_3+0.9}{x_1-x_2+x_3},\ \dfrac{3x_1-x_2+x_3}{8x_1+4x_2-x_3}\right\}\\ \text{s.t.}\ x_1+x_2-x_3\le 1,\\ \qquad\ -x_1+x_2-x_3\le -1,\\ \qquad\ 12x_1+5x_2+12x_3\le 34.8,\\ \qquad\ 12x_1+12x_2+7x_3\le 29.1,\\ \qquad\ -6x_1+x_2+x_3\le -4.1,\\ \qquad\ 1.0\le x_1\le 1.2,\ 0.55\le x_2\le 0.65,\ 1.35\le x_3\le 1.45.\end{cases}$$

    Before executing the algorithm, by calculating the upper and lower bounds of $z$, we can obtain the initial rectangle $Z^{1}=Z^{0}=\{z\in\mathbb{R}^{2}\mid 1.7315\le z_1\le 1.9292,\ 8.8500\le z_2\le 9.5500\}$.

    We set the approximation tolerance as $\epsilon=10^{-2}$; a brief summary of the algorithm's solution steps for this problem is as follows.

    Initialization. Solving the LRP over $Z^{1}$ yields $\mathrm{LB}_1=1.3076$ and its optimal solution

    $$(y^{1},z^{1},r^{1})=((1.0167,0.5500,1.4500),(1.9167,8.8833),1.3076).$$

    Let $F_1=\{(y^{1},z^{1},r^{1})\}$ and $\Omega_1=\{Z^{1}\}$.

    Evaluating

    $$\mathrm{UB}_1=\max\left\{\frac{\sum_{j=1}^{n}e_{1j}y_j^{1}+f_1}{\sum_{j=1}^{n}c_{1j}y_j^{1}+h_1},\ \frac{\sum_{j=1}^{n}e_{2j}y_j^{1}+f_2}{\sum_{j=1}^{n}c_{2j}y_j^{1}+h_2}\right\}$$

    gives the upper bound of the currently known optimal value: $\mathrm{UB}_1=1.3478$. Since $\mathrm{UB}_1-\mathrm{LB}_1>\epsilon$, the algorithm continues with the following iterations.

    Iteration 1. Subdivide $Z^{1}$ into two sub-rectangles, compress the range of each sub-rectangle, and denote the remaining two sub-rectangles as follows:

    $$Z^{1,1}=\{z\in\mathbb{R}^{2}\mid 1.7315\le z_1\le 1.9292,\ 8.8500\le z_2\le 9.2000\}$$

    and

    $$Z^{1,2}=\{z\in\mathbb{R}^{2}\mid 1.7315\le z_1\le 1.9292,\ 9.2000\le z_2\le 9.5500\}.$$

    Solving the LRP over $Z^{1,1}$ yields $\mathrm{LB}_{Z^{1,1}}=1.3076$ and its optimal solution

    $$(y_{Z^{1,1}},z_{Z^{1,1}},r_{Z^{1,1}})=((1.0167,0.5500,1.4500),(1.9167,8.8833),1.3076),$$

    and $\mathrm{UB}_{Z^{1,1}}=1.3478$.

    Solving the LRP over $Z^{1,2}$ yields $\mathrm{LB}_{Z^{1,2}}=1.3657$ and its optimal solution

    $$(y_{Z^{1,2}},z_{Z^{1,2}},r_{Z^{1,2}})=((1.0515,0.5500,1.4118),(1.9132,9.2000),1.3657),$$

    and $\mathrm{UB}_{Z^{1,2}}=1.3478$.

    Let $\Omega_2=\{Z^{1,1},Z^{1,2}\}$, $F_2=F_1\cup\{(y_{Z^{1,1}},z_{Z^{1,1}},r_{Z^{1,1}}),(y_{Z^{1,2}},z_{Z^{1,2}},r_{Z^{1,2}})\}$, and the upper bound $\mathrm{UB}_2=\min\{1.3478,1.3478,1.3478\}=1.3478$. The currently best feasible solution is $(y^{2},z^{2},r^{2})=(y_{Z^{1,1}},z_{Z^{1,1}},r_{Z^{1,1}})$, and the lower bound is $\mathrm{LB}_2=\min\{\mathrm{LB}(Z)\mid Z\in\Omega_2\}=\min\{1.3076,1.3657\}=1.3076$.

    Since $\mathrm{LB}_2=\mathrm{LB}_{Z^{1,1}}$, let $Z^{2}=Z^{1,1}$. Since $\mathrm{UB}_2-\mathrm{LB}_2>\epsilon$, continue to iteration 2.

    Iteration 2. Consistent with the above, subdivide $Z^{2}$ into two sub-rectangles, compress the range of each sub-rectangle, and denote the remaining two sub-rectangles as follows:

    $$Z^{2,1}=\{z\in\mathbb{R}^{2}\mid 1.7315\le z_1\le 1.9292,\ 8.8500\le z_2\le 9.0250\}$$

    and

    $$Z^{2,2}=\{z\in\mathbb{R}^{2}\mid 1.7315\le z_1\le 1.9292,\ 9.0250\le z_2\le 9.2000\}.$$

    Solving the LRP over $Z^{2,1}$ yields $\mathrm{LB}_{Z^{2,1}}=1.3076$ and its optimal solution

    $$(y_{Z^{2,1}},z_{Z^{2,1}},r_{Z^{2,1}})=((1.0167,0.5500,1.4500),(1.9167,8.8833),1.3076),$$

    and $\mathrm{UB}_{Z^{2,1}}=1.3478$.

    Solving the LRP over $Z^{2,2}$ yields $\mathrm{LB}_{Z^{2,2}}=1.3293$ and its optimal solution

    $$(y_{Z^{2,2}},z_{Z^{2,2}},r_{Z^{2,2}})=((1.0335,0.5500,1.4426),(1.9261,9.0250),1.3293),$$

    and $\mathrm{UB}_{Z^{2,2}}=1.3625$.

    Let $\Omega_3=\{Z^{1,2},Z^{2,1},Z^{2,2}\}$, $F_3=F_2\cup\{(y_{Z^{2,1}},z_{Z^{2,1}},r_{Z^{2,1}}),(y_{Z^{2,2}},z_{Z^{2,2}},r_{Z^{2,2}})\}$, and the upper bound $\mathrm{UB}_3=\min\{1.3478,1.3478,1.3625\}=1.3478$. The currently best feasible solution is $(y^{3},z^{3},r^{3})=(y_{Z^{2,1}},z_{Z^{2,1}},r_{Z^{2,1}})$, and the lower bound is $\mathrm{LB}_3=\min\{\mathrm{LB}(Z)\mid Z\in\Omega_3\}=\min\{1.3657,1.3076,1.3293\}=1.3076$.

    Since $\mathrm{LB}_3=\mathrm{LB}_{Z^{2,1}}$, let $Z^{3}=Z^{2,1}$. Since $\mathrm{UB}_3-\mathrm{LB}_3>\epsilon$, continue to iteration 3.

    Iteration 3. During this iteration, subdivide $Z^{3}$ into two sub-rectangles, compress the range of each sub-rectangle, and denote the remaining two sub-rectangles as follows:

    $$Z^{3,1}=\{z\in\mathbb{R}^{2}\mid 1.7315\le z_1\le 1.8333,\ 8.8500\le z_2\le 9.0250\}$$

    and

    $$Z^{3,2}=\{z\in\mathbb{R}^{2}\mid 1.8333\le z_1\le 1.9292,\ 8.8500\le z_2\le 9.0250\}.$$

    Solving the LRP over $Z^{3,1}$ yields $\mathrm{LB}_{Z^{3,1}}=1.4207$ and its optimal solution

    $$(y_{Z^{3,1}},z_{Z^{3,1}},r_{Z^{3,1}})=((1.0048,0.5500,1.3786),(1.8333,8.8595),1.4207),$$

    and $\mathrm{UB}_{Z^{3,1}}=1.3478$.

    Solving the LRP over $Z^{3,2}$ yields $\mathrm{LB}_{Z^{3,2}}=1.3242$ and its optimal solution

    $$(y_{Z^{3,2}},z_{Z^{3,2}},r_{Z^{3,2}})=((1.0167,0.5500,1.4500),(1.9167,8.8833),1.3242),$$

    and $\mathrm{UB}_{Z^{3,2}}=1.3478$.

    Let $\Omega_4=\{Z^{1,2},Z^{2,2},Z^{3,1},Z^{3,2}\}$, $F_4=F_3\cup\{(y_{Z^{3,1}},z_{Z^{3,1}},r_{Z^{3,1}}),(y_{Z^{3,2}},z_{Z^{3,2}},r_{Z^{3,2}})\}$, and the upper bound $\mathrm{UB}_4=\min\{1.3625,1.3478,1.3478\}=1.3478$. The currently best feasible solution is $(y^{4},z^{4},r^{4})=(y_{Z^{3,2}},z_{Z^{3,2}},r_{Z^{3,2}})$, and the lower bound is $\mathrm{LB}_4=\min\{\mathrm{LB}(Z)\mid Z\in\Omega_4\}=\min\{1.3657,1.3293,1.4207,1.3242\}=1.3242$.

    Since $\mathrm{LB}_4=\mathrm{LB}_{Z^{3,2}}$, let $Z^{4}=Z^{3,2}$. Since $\mathrm{UB}_4-\mathrm{LB}_4>\epsilon$, continue to iteration 4.

    Repeating the above iterative process, the algorithm stops once $\mathrm{UB}_k-\mathrm{LB}_k\le\epsilon$ is satisfied. The optimal solution $(x_1,x_2,x_3)=(1.0167,0.5500,1.4500)$ and the optimal value $1.34783$ of the problem are obtained after the algorithm executes 14 iterations.
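As a consistency check, the reported optimal value can be reproduced by evaluating the objective of Problem 2 at the reported solution. The minus signs in the two ratios below are inferred (an assumption of this sketch) so that the denominators reproduce the iterate $z=(1.9167,8.8833)$ reported above:

```python
import numpy as np

# Evaluate max{ (2x1+2x2-x3+0.9)/(x1-x2+x3), (3x1-x2+x3)/(8x1+4x2-x3) }
# (signs inferred, not verbatim from the source) at the reported optimum.
x = np.array([1.0167, 0.55, 1.45])
r1 = (2*x[0] + 2*x[1] - x[2] + 0.9) / (x[0] - x[1] + x[2])
r2 = (3*x[0] - x[1] + x[2]) / (8*x[0] + 4*x[1] - x[2])
value = max(r1, r2)
print(round(value, 5))  # -> 1.34784
```

The first ratio is the active one, and the computed value agrees with the reported optimal value 1.34783 up to the rounding of the solution.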

    We have studied the GAFOP. By exploiting an equivalent conversion and a new linearizing technique, the initial GAFOP can be converted into a series of LRPs. By integrating the outer space branching search method with the LRP, we put forward an efficient global algorithm for the GAFOP. In contrast to the known existing algorithms, our algorithm has the following computational advantages: (ⅰ) the branching search occurs in the outer space $\mathbb{R}^{p}$, which offers the possibility of mitigating the required computational effort of the algorithm; (ⅱ) numerical results demonstrate that our algorithm has superior efficiency compared to the known existing algorithms. Future work will further improve our algorithm and extend our method to deal with the general nonlinear fractional optimization problem.

    This paper was supported by the National Natural Science Foundation of China (11871196, 12071133, 12071112), the China Postdoctoral Science Foundation (2017M622340), the Key Scientific and Technological Research Projects in Henan Province (202102210147, 192102210114), the Science and Technology Climbing Program of the Henan Institute of Science and Technology (2018JY01), and the Henan Institute of Science and Technology Postdoctoral Science Foundation.

    The authors declare that they have no conflicts of interest.



    [1] C. D. Maranas, I. P. Androulakis, C. A. Floudas, A. J. Bergerb, J. M. Mulvey, Solving long-term financial planning problems via global optimization, J. Econ. Dyn. Contr., 21 (1997), 1405–1425. https://doi.org/10.1016/S0165-1889(97)00032-8 doi: 10.1016/S0165-1889(97)00032-8
    [2] H. W. Jiao, J. Q. Ma, P. P. Shen, Y. J. Qiu, Effective algorithm and computational complexity for solving sum of linear ratios problem, J. Ind. Manag. Optim., 2022. http://dx.doi.org/10.3934/jimo.2022135 doi: 10.3934/jimo.2022135
    [3] H. W. Jiao, S. Y. Liu, An efficient algorithm for quadratic Sum-of-Ratios fractional programs problem, Numer. Func. Anal. Opt., 38 (2017), 1426–1445. https://doi.org/10.1080/01630563.2017.1327869 doi: 10.1080/01630563.2017.1327869
    [4] J. E. Falk, S. W. Palocsay, Optimizing the sum of linear fractional functions, recent advances in global optimization, New Jersey: Princeton University Press, 1992.
    [5] H. W. Jiao, J. Q. Ma, An efficient algorithm and complexity result for solving the sum of general ratios problem, Chaos Soliton. Fract., 164 (2022), 112701. https://doi.org/10.1016/j.chaos.2022.112701 doi: 10.1016/j.chaos.2022.112701
    [6] C. Bajona-Xandri, J. E. Martinez-Legaz, Lower subdifferentiability in minimax fractional programming, Optimization, 45 (1999), 1–12. https://doi.org/10.1080/02331939908844423 doi: 10.1080/02331939908844423
    [7] F. Ding, Two-stage least squares based iterative estimation algorithm for CARARMA system modeling, Appl. Math.Model., 37 (2013), 4798–4808. https://doi.org/10.1016/j.apm.2012.10.014 doi: 10.1016/j.apm.2012.10.014
    [8] F. Ding, Decomposition based fast least squares algorithm for output error systems, Signal Process., 93 (2013), 1235–1242. https://doi.org/10.1016/j.sigpro.2012.12.013 doi: 10.1016/j.sigpro.2012.12.013
    [9] H. W. Jiao, S. Y. Liu, Range division and compression algorithm for quadratically constrained sum of quadratic ratios, Comp. Appl. Math., 36 (2017), 225–247. https://doi.org/10.1007/s40314-015-0224-5 doi: 10.1007/s40314-015-0224-5
    [10] H. W. Jiao, S. Y. Liu, W. J. Wang, Solving generalized polynomial problem by using new affine relaxed technique, Int. J. Comput. Math., 99 (2022), 309–331. https://doi.org/10.1080/00207160.2021.1909727 doi: 10.1080/00207160.2021.1909727
    [11] H. W. Jiao, W. J. Wang, R. J. Chen, Y. L. Shang, J. B. Yin, An efficient outer space algorithm for generalized linear multiplicative programming problem, IEEE Access, 8 (2020), 80629–80637. https://doi.org/10.1109/ACCESS.2020.2990677 doi: 10.1109/ACCESS.2020.2990677
    [12] H. W. Jiao, Y. L. Shang, R. J. Chen, A potential practical algorithm for minimizing the sum of affine fractional functions, Optimization, 2022. https://doi.org/10.1080/02331934.2022.2032051 doi: 10.1080/02331934.2022.2032051
    [13] Y. Y. Ding, Y. H. Xiao, J. W. Li, A class of conjugate gradient methods for convex constrained monotone equations, Optimization, 66 (2017), 2309–2328. https://doi.org/10.1080/02331934.2017.1372438 doi: 10.1080/02331934.2017.1372438
    [14] Y. H. Xiao, L. Chen, D. H. Li, A generalized alternating direction method of multipliers with semi-proximal terms for convex composite conic programming, Math. Program. Comput., 10 (2018), 533–555. https://doi.org/10.1007/s12532-018-0134-9 doi: 10.1007/s12532-018-0134-9
    [15] H. W. Jiao, W. J. Wang, J. B. Yin, Y. L. Shang, Image space branch-reduction-bound algorithm for globally minimizing a class of multiplicative problems, Rairo-Oper. Res., 56 (2022), 1533–1552. https://doi.org/10.1051/ro/2022061 doi: 10.1051/ro/2022061
    [16] H. W. Jiao, Y. L. Shang, Two-level linear relaxation method for generalized linear fractional programming, J. Oper. Res. Soc., 2022. https://doi.org/10.1007/s40305-021-00375-4 doi: 10.1007/s40305-021-00375-4
    [17] H. W. Jiao, W. J. Wang, Y. L. Shang, Outer space branch-reduction-bound algorithm for solving generalized affine multiplicative problem, J. Comput. Appl. Math., 419 (2023), 114784. https://doi.org/10.1016/j.cam.2022.114784
    [18] A. I. Barros, J. B. G. Frenk, Generalized fractional programming and cutting plane algorithms, J. Optim. Theory Appl., 87 (1995), 103–120. https://doi.org/10.1007/BF02192043
    [19] Q. G. Feng, H. P. Mao, H. W. Jiao, A feasible method for a class of mathematical problems in manufacturing system, Key Eng. Mater., 460 (2011), 806–809. https://doi.org/10.4028/www.scientific.net/KEM.460-461.806
    [20] Q. G. Feng, H. W. Jiao, H. P. Mao, Y. Q. Chen, A deterministic algorithm for min-max and max-min linear fractional programming problems, Int. J. Comput. Int. Sys., 4 (2011), 134–141. http://dx.doi.org/10.1080/18756891.2011.9727770
    [21] R. W. Freund, F. Jarre, An interior-point method for fractional programs with convex constraints, Math. Program., 67 (1994), 407–440. https://doi.org/10.1007/BF01582229
    [22] Y. Benadada, J. A. Ferland, Partial linearization for generalized fractional programming, Zeitschrift für Oper. Res., 32 (1988), 101–106. https://doi.org/10.1007/BF01919185
    [23] N. T. H. Phuong, H. Tuy, A unified monotonic approach to generalized linear fractional programming, J. Global Optim., 26 (2003), 229–259. https://doi.org/10.1023/A:1023274721632
    [24] A. Roubi, Method of centers for generalized fractional programming, J. Optim. Theory Appl., 107 (2000), 123–143. https://doi.org/10.1023/A:1004660917684
    [25] M. Gugat, Prox-regularization methods for generalized fractional programming, J. Optim. Theory Appl., 99 (1998), 691–722. https://doi.org/10.1023/A:1021759318653
    [26] A. Ghazi, A. Roubi, A DC approach for minimax fractional optimization programs with ratios of convex functions, Optim. Methods Softw., 2020. https://doi.org/10.1080/10556788.2020.1818234
    [27] H. Boualam, A. Roubi, Dual algorithms based on the proximal bundle method for solving convex minimax fractional programs, J. Ind. Manag. Optim., 15 (2019), 1897–1920. https://doi.org/10.3934/jimo.2018128
    [28] H. W. Jiao, J. Q. Ma, Y. Shang, Image space branch-and-bound algorithm for globally solving minimax linear fractional programming problem, Pac. J. Optim., 18 (2022), 195–212.
    [29] M. E. Haffari, A. Roubi, Prox-dual regularization algorithm for generalized fractional programs, J. Ind. Manag. Optim., 13 (2017). https://doi.org/10.3934/jimo.2017028
    [30] H. W. Jiao, B. B. Li, Solving min-max linear fractional programs based on image space branch-and-bound scheme, Chaos Soliton. Fract., 164 (2022), 112682. https://doi.org/10.1016/j.chaos.2022.112682
    [31] I. Ahmad, Z. Husain, Duality in nondifferentiable minimax fractional programming with generalized convexity, Appl. Math. Comput., 176 (2006), 545–551. https://doi.org/10.1016/j.amc.2005.10.002
    [32] W. E. Schmitendorf, Necessary conditions and sufficient conditions for static minimax problems, J. Math. Anal. Appl., 57 (1977), 683–693. https://doi.org/10.1016/0022-247X(77)90255-4
    [33] S. Tanimoto, Duality for a class of nondifferentiable mathematical programming problems, J. Math. Anal. Appl., 79 (1981), 286–294. https://doi.org/10.1016/0022-247X(81)90025-1
    [34] S. R. Yadav, R. N. Mukherjee, Duality for fractional minimax programming problems, ANZIAM J., 31 (1990), 484–492. https://doi.org/10.1017/S0334270000006809
    [35] V. Jeyakumar, G. Y. Li, S. Srisatkunarajah, Strong duality for robust minimax fractional programming problems, Eur. J. Oper. Res., 228 (2013), 331–336. https://doi.org/10.1016/j.ejor.2013.02.015
    [36] H. C. Lai, J. C. Liu, K. Tanaka, Duality without a constraint qualification for minimax fractional programming, J. Optim. Theory Appl., 101 (1999), 109–125. https://doi.org/10.1023/A:1021771011210
    [37] I. M. Stancu-Minasian, A ninth bibliography of fractional programming, Optimization, 68 (2019), 2125–2169. https://doi.org/10.1080/02331934.2019.1632250
    [38] I. M. Stancu-Minasian, An eighth bibliography of fractional programming, Optimization, 66 (2017), 439–470. https://doi.org/10.1080/02331934.2016.1276179
    [39] C. F. Wang, Y. Jiang, P. P. Shen, A new branch-and-bound algorithm for solving minimax linear fractional programming, J. Math., 38 (2018), 113–123.
    [40] H. W. Jiao, S. Y. Liu, A new linearization technique for minimax linear fractional programming, Int. J. Comput. Math., 91 (2014), 1730–1743. https://doi.org/10.1080/00207160.2013.860449
  • This article has been cited by:

    1. Hongwu Li, Yuling Feng, Hongwei Jiao, Youlin Shang, A novel algorithm for solving sum of several affine fractional functions, AIMS Mathematics, 8 (2023), 9247. https://doi.org/10.3934/math.2023464
    2. Koushik Das, Savin Treanţă, Muhammad Bilal Khan, Set-valued fractional programming problems with σ-arcwisely connectivity, AIMS Mathematics, 8 (2023), 13181. https://doi.org/10.3934/math.2023666
    3. Hongwei Jiao, Binbin Li, Youlin Shang, A novel affine relaxation-based algorithm for minimax affine fractional program, Asia-Pac. J. Oper. Res., 41 (2024). https://doi.org/10.1142/S0217595923500367
    4. Zhisong Hou, Sanyang Liu, An accelerating outer space algorithm for globally solving generalized linear multiplicative problems, Numer. Algorithms, 94 (2023), 877. https://doi.org/10.1007/s11075-023-01523-y
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)