1. Introduction
The generalized affine fractional optimization problem (GAFOP) considered in this paper is as follows:
where A ∈ R^{m×n}, b ∈ R^m, e_i, d_i ∈ R^n, and g_i, f_i are arbitrary real numbers; p ≥ 2; Y is a nonempty compact set; ∑_{j=1}^{n} e_{ij}y_j + f_i and ∑_{j=1}^{n} c_{ij}y_j + h_i are bounded linear functions defined on Y; and for any y ∈ Y, the denominator ∑_{j=1}^{n} c_{ij}y_j + h_i ≠ 0, i = 1, 2, …, p.
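Since the displayed formulation is given only in words here, the following minimal Python sketch shows how the GAFOP objective, i.e., the maximum of the p affine ratios, is evaluated at a point y ∈ Y. The names E, f, C, h are our own stand-ins for the coefficients e_{ij}, f_i, c_{ij}, h_i; this is an illustration, not code from the paper.

```python
def gafop_objective(y, E, f, C, h):
    """Return max_i (sum_j E[i][j]*y[j] + f[i]) / (sum_j C[i][j]*y[j] + h[i])."""
    ratios = []
    for Ei, fi, Ci, hi in zip(E, f, C, h):
        num = sum(eij * yj for eij, yj in zip(Ei, y)) + fi
        den = sum(cij * yj for cij, yj in zip(Ci, y)) + hi
        # den != 0 on Y by the assumption stated above
        ratios.append(num / den)
    return max(ratios)
```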
As a class of special fractional optimization problems, the GAFOP has attracted the attention of many researchers and practitioners for decades. It has a variety of applications in many fields, including finance and investment [1,2,3], transportation planning [4,5], optimal design [6], estimation of iterative parameters [7], signal processing [8], data envelopment analysis and others [9,10,11,12,13,14,15,16,17]. Furthermore, since the GAFOP may not be (quasi)convex, it may possess many locally optimal solutions that fail to be globally optimal. Hence, it is still of great significance to propose an effective global algorithm for solving the GAFOP.
Some algorithms have been presented to globally solve the GAFOP over the past several decades: for instance, the cutting plane algorithm [18], branch-relaxation-bound methods [19,20], the interior-point algorithm [21], the partial linearization algorithm [22], the monotonic optimization method [23], the method of centers [24] and the prox-regularization method [25]. Recently, based on the Dinkelbach type algorithm, Ghazi and Roubi [26] presented a difference of convex functions (DC) method for globally solving the generalized convex fractional optimization problem. By utilizing the proximal bundle theory, Boualam and Roubi [27] proposed a dual method for the generalized convex fractional optimization problem. Jiao et al. [28] designed an image space branch-and-bound algorithm for solving minimax linear fractional programs. Haffari and Roubi [29] described a prox-dual regularization method for globally solving generalized fractional programs. By utilizing convex hull and concave hull approximations of a bilinear function, Jiao and Li [30] put forward a novel algorithm for globally solving min-max linear fractional programs. However, the previously reviewed methods can only solve particular forms of the GAFOP, or they are difficult to apply to large-scale practical problems. Therefore, it remains necessary to propose a practical algorithm for solving the GAFOP.
In addition to the methods reviewed above, some theoretical progress on the generalized fractional optimization problem (GFOP) has also been made. For example, Ahmad and Husain [31] gave a duality theory for a non-differentiable GFOP with generalized convexity. Schmitendorf [32] presented some optimality conditions for the GFOP. By utilizing the optimality condition, Tanimoto [33] gave a dual problem for a class of non-differentiable GFOP and derived the duality theorems. Yadav and Mukherjee [34] gave a duality theory for GFOP. When the data in the system are uncertain, Jeyakumar et al. [35] put forward a strong duality theorem for the robust GFOP. Based on unconstrained conditions, Lai et al. [36] gave the duality theorem for the GFOP. For a detailed review of the methods and theories for the GFOP, readers can refer to Stancu-Minasian [37,38].
In this article, an outer space branching search method is designed for globally solving the GAFOP. We first convert the GAFOP into an equivalent problem (EP). Next, by utilizing the structural characteristics of the EP, we construct a new linearizing method for establishing the linear relaxation programming problem (LRP) of the EP. Compared with the known existing algorithms, the branching search of the presented algorithm occurs in the outer space R^p rather than the variable space R^n, which can reduce the computational effort required by the algorithm. In addition, numerical computational results are reported, indicating that the proposed algorithm is more efficient than the known existing algorithms [19,39,40].
The remainder of this article is organized as follows. We derive the EP of the GAFOP and establish the LRP of the EP in Section 2. In Section 3, we give an outer space branching search method for globally solving the GAFOP, and we also analyze the global convergence of the algorithm. Numerical results for some test examples from recent studies are presented in Section 4. Finally, Section 5 gives some conclusions.
2. Linear relaxation programming problem
In the following, to solve the GAFOP, we first transform the GAFOP into the EP. Then, we present a novel linearization technique and construct the LRP of the EP. For this purpose, for each i = 1, …, p, we introduce the additional variables z_i = ∑_{j=1}^{n} c_{ij}y_j + h_i. By computing the minimum value z̲_i^0 = min_{y∈Y} ∑_{j=1}^{n} c_{ij}y_j + h_i and the maximum value z̄_i^0 = max_{y∈Y} ∑_{j=1}^{n} c_{ij}y_j + h_i of the linear function ∑_{j=1}^{n} c_{ij}y_j + h_i over Y, an initial outer space rectangle Z^0 = {z ∈ R^p | z̲_i^0 ≤ z_i ≤ z̄_i^0, i = 1, …, p} can be constructed. By introducing the new variable r = max{(∑_{j=1}^{n} e_{1j}y_j + f_1)/z_1, (∑_{j=1}^{n} e_{2j}y_j + f_2)/z_2, …, (∑_{j=1}^{n} e_{pj}y_j + f_p)/z_p}, we can replace the objective function of the original GAFOP by r, so that we obtain the EP of the GAFOP as below:
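In general, the bounds z̲_i^0 and z̄_i^0 require solving 2p linear programs over Y. When Y happens to be a simple box l ≤ y ≤ u, however, they can be computed coordinate-wise in closed form. The following sketch (our own illustration for the box case, not the paper's code) constructs the initial rectangle Z^0:

```python
def linear_range_over_box(c, h, lower, upper):
    """Min and max of sum_j c[j]*y[j] + h over the box lower <= y <= upper.

    The minimum picks the lower endpoint where c[j] >= 0 and the upper
    endpoint where c[j] < 0; the maximum does the opposite.
    """
    lo = h + sum(cj * (lj if cj >= 0 else uj)
                 for cj, lj, uj in zip(c, lower, upper))
    hi = h + sum(cj * (uj if cj >= 0 else lj)
                 for cj, lj, uj in zip(c, lower, upper))
    return lo, hi

def initial_rectangle(C, h, lower, upper):
    """Build Z0 as a list of p intervals [z_i^lo, z_i^hi], one per denominator."""
    return [linear_range_over_box(Ci, hi, lower, upper)
            for Ci, hi in zip(C, h)]
```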
Theorem 1. y∗ is a global optimum solution of the GAFOP if and only if (y∗,z∗,r∗) is a global optimum solution of the EP, with
and
Additionally, the global optimal values of the GAFOP and EP are equal.
Proof. By the above discussion, the conclusions are obvious, and thus we omit the proof.
By Theorem 1, to globally solve the GAFOP, we can instead solve the EP. In the following, we only consider solving the EP.
For globally solving the EP, we need to establish its LRP for providing the lower bound in the branch-and-bound process. The detailed derivation process of the LRP is as follows.
For any Z = {z ∈ R^p | z̲_i ≤ z_i ≤ z̄_i, i = 1, …, p} ⊆ Z^0, we define
Obviously, for each i=1,…,p, we can see that
Based on (1), for any Z⊆Z0, we can construct the LRP of the EP as below.
From the above discussion, it is known that all feasible points of the EP over the sub-rectangle Z are also feasible for the LRP. Let v(LRP) and v(EP) be the global optimal values of the LRP and the EP, respectively; then v(LRP) ≤ v(EP) over Z. Thus, the optimal value of the LRP provides a valid lower bound for that of the EP over Z.
Next, we will prove that the optimal solution of the LRP approaches the optimal solution of the EP over Z as ‖z̄ − z̲‖ → 0, as detailed in Theorem 2.
Theorem 2. For each i = 1, 2, …, p, consider the functions Φ_i(y, z_i) and Φ̲_i(y, z̲_i, z̄_i). We have the following:
Proof. By the definitions of the functions Φ_i(y, z_i) and Φ̲_i(y, z̲_i, z̄_i), for any y ∈ Y and z_i ∈ [z̲_i, z̄_i], we have
Since ∑_{j=1}^{n} |e_{ij}|y_j + |f_i| is a bounded linear function, we have
which completes the proof of the theorem.
The above theorem ensures that the function Φ̲_i(y, z̲_i, z̄_i) approaches the function Φ_i(y, z_i) as ‖z̄ − z̲‖ → 0, so the global optimal solution of the LRP approaches the global optimal solution of the EP over Z as ‖z̄ − z̲‖ → 0.
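The paper's exact underestimator Φ̲_i is not reproduced above, but the convergence mechanism of Theorem 2 can be illustrated generically: for a fixed numerator t ≥ 0 and 0 < z̲ ≤ z ≤ z̄, the constant t/z̄ underestimates t/z with worst-case gap t(1/z̲ − 1/z̄), which vanishes as the interval [z̲, z̄] shrinks. A small Python check of this shrinking gap (our own illustration under these assumptions, not the paper's relaxation):

```python
def ratio_gap(t, z_lo, z_hi):
    """Worst-case gap between t/z and its constant underestimator t/z_hi
    over z in [z_lo, z_hi], assuming t >= 0 and 0 < z_lo <= z_hi."""
    return t / z_lo - t / z_hi

# Halving the interval width around z = 2 shrinks the gap toward 0.
gaps = [ratio_gap(1.0, 2.0 - w, 2.0 + w) for w in (0.5, 0.25, 0.125)]
```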
3. Global algorithm and its convergence
In this section, we first put forward an outer space rectangle bisection method. Next, by combining the previous LRP and the branch-and-bound framework, an outer space branching search method is designed to globally solve the GAFOP. In addition, we derive the global convergence of the outer space branching search method.
3.1. Outer space rectangle bisection method
The outer space rectangle bisection method iteratively subdivides the currently investigated rectangle into two sub-rectangles. Consider any selected sub-rectangle Z = {z ∈ R^p | z̲_i ≤ z_i ≤ z̄_i, i = 1, 2, …, p} ⊆ Z^0. The outer space rectangle bisection method is given as follows:
(ⅰ) Let q = argmax{z̄_i − z̲_i | i = 1, 2, …, p};
(ⅱ) Let
and
Through utilizing the proposed outer space rectangle bisection method, the selected sub-rectangle Z can be subdivided into two sub-rectangles Z1 and Z2.
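The two-step rule above (select the longest edge q, then split it at its midpoint) can be sketched as follows, with a rectangle represented as a list of (lo, hi) pairs; this is a straightforward Python rendering, not code from the paper:

```python
def bisect_rectangle(rect):
    """Split rect along its longest edge q into two sub-rectangles."""
    # Step (i): index of the longest edge
    q = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    lo, hi = rect[q]
    mid = (lo + hi) / 2.0
    # Step (ii): replace edge q by its two halves
    left = rect[:q] + [(lo, mid)] + rect[q + 1:]
    right = rect[:q] + [(mid, hi)] + rect[q + 1:]
    return left, right
```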
3.2. Outer space branching search method
In this subsection, the basic steps of the proposed outer space branching search method are formulated as follows.
Step 0. Choose the convergence tolerance ϵ ≥ 0, and let the initial outer space rectangle
Let F = ∅ denote the initial set of feasible points, set k = 0, and let Ω0 = {Z0} be the set of all active nodes.
Step 1. Solve the LRP over Z0, and denote by (y0,z0,r0) and LB0 its optimal solution and optimal value, respectively. Let
If UB0−LB0≤ϵ, then the proposed algorithm stops: y0 and (y0,z0,r̂0) are ϵ-optimal solutions of the GAFOP and the EP over Z0, respectively. Otherwise, proceed to Step 2.
Step 2. Use the proposed rectangle bisection method to subdivide Zk−1 into two sub-rectangles Zk,1 and Zk,2. Let Q={Zk,1,Zk,2}.
Step 3. For each Zk,t,t=1,2, compute the lower bound LB(Zk,t) and (y(Zk,t),z(Zk,t),r(Zk,t)) by solving the LRP over Zk,t, and let
If LB(Zk,t)>UBk, then set Q=Q∖{Zk,t}; otherwise, let
If UBk=UB(Zk,t), then let yk=y(Zk,t) and (yk,zk,ˆrk)=(y(Zk,t),z(Zk,t),r(Zk,t)).
Step 4. Set Ωk=(Ωk−1∖{Zk−1})∪Q.
Step 5. Set LBk=min{LB(Z)|Z∈Ωk}, and let Zk be the sub-rectangle which satisfies LBk=LB(Zk).
If UBk−LBk≤ϵ, then the proposed algorithm stops. yk and (yk,zk) are the ϵ-global optimal solutions of the GAFOP and EP, respectively.
Otherwise, set k=k+1, and go back to Step 2.
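Steps 0–5 above can be condensed into a generic branch-and-bound skeleton. The following is a sketch under our own abstractions, not the authors' implementation: solving the LRP over a rectangle is replaced by a user-supplied lower_bound(rect) oracle, and evaluate(point) stands in for computing the feasible objective value that updates the upper bound.

```python
def center_of(rect):
    """Midpoint of a rectangle given as a list of (lo, hi) pairs."""
    return [(lo + hi) / 2.0 for lo, hi in rect]

def bisect_rect(rect):
    """Bisect along the longest edge (the rule of Section 3.1)."""
    q = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    lo, hi = rect[q]
    m = (lo + hi) / 2.0
    return (rect[:q] + [(lo, m)] + rect[q + 1:],
            rect[:q] + [(m, hi)] + rect[q + 1:])

def branch_and_bound(rect0, lower_bound, evaluate, eps=1e-2, max_iter=10000):
    """Minimize evaluate over rect0, given a valid lower-bound oracle."""
    best = center_of(rect0)
    ub = evaluate(best)                      # Step 1: initial upper bound
    active = [(lower_bound(rect0), rect0)]   # Step 0: active node set
    for _ in range(max_iter):
        if not active:
            break
        lb, rect = min(active, key=lambda t: t[0])   # Step 5: smallest lower bound
        if ub - lb <= eps:                           # termination test
            break
        active.remove((lb, rect))
        for sub in bisect_rect(rect):                # Step 2: subdivide
            sub_lb = lower_bound(sub)
            if sub_lb > ub:                          # Step 3: prune the node
                continue
            val = evaluate(center_of(sub))
            if val < ub:                             # update the incumbent
                ub, best = val, center_of(sub)
            active.append((sub_lb, sub))             # Step 4: keep the node
    return best, ub
```

In the paper's setting, lower_bound corresponds to solving the LRP over the sub-rectangle and evaluate to the EP objective at the recovered feasible point.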
3.3. Global convergence analysis
In this part, we first define
Let v be the global optimal value of the EP over Θ0, and define r(yk,zk) as the objective functional value of the EP corresponding to the feasible solution (yk,zk). The global convergence analysis of the proposed algorithm can be given by the following theorem.
Theorem 3. Given any ϵ≥0, if the proposed algorithm finitely terminates after k iterations, then yk is a global ϵ-optimal solution to the GAFOP in the sense that
Otherwise, the proposed algorithm will generate an infinite sequence {yk}, whose accumulation point will be a global optimum solution to the GAFOP.
Proof. If the presented algorithm terminates finitely after k iterations, then, according to the termination criterion of the algorithm, it follows that
By Step 3 of the presented algorithm, we can find a feasible solution (yk,zk) to the EP such that
Since (yk,zk) is feasible for the EP, we have
By using the above conclusions, we have
So, (yk,zk) is a global ϵ-optimal solution of the EP, with
Thus, yk is a global ϵ-optimum solution to the GAFOP.
If the presented algorithm does not terminate finitely, then it must produce an infinite sequence {(yk,zk)} of feasible solutions, which has a convergent subsequence. Therefore, we can let
So, we have
From the branch-and-bound structure of the algorithm, we also get
Since y∗ is a feasible solution of the GAFOP over Z0, and due to Theorem 2, we can get
Combining the above inequalities, we have
Furthermore, by the equivalence of the GAFOP and EP, and the continuity of the function Λ(y), we can conclude the following:
Based on the above inequalities (2) and (3), we have
Therefore, any accumulation point y∗ of the sequence {yk} is a globally optimal solution to the GAFOP. The proof is complete.
4. Numerical comparisons
To verify the computational superiority of the algorithm, the presented algorithm is implemented in MATLAB R2014a, and all experiments are run on the same microcomputer with an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz and 4 GB RAM.
We first tested randomly generated instances of Problem 1 with small-scale variables, compared them numerically with the known existing algorithms [19,39,40], and listed the comparison results in Table 1. Next, we tested randomly generated instances of Problem 1 with large-scale variables to further verify our algorithm, and we listed the numerical results in Table 2. In Table 2, Avg.Iter denotes the average number of iterations, and Avg.Time denotes the average CPU execution time in seconds.
The maximum CPU time limit for all algorithms is set at 3600 s, and the approximation error is set as ϵ = 10^{−2}. "−" denotes that the corresponding algorithm failed to terminate within 3600 s. Since the known existing algorithms [19,39,40] failed to solve ten arbitrary randomly generated large-scale instances of Problem 1 within 3600 s, we only list the numerical results obtained by our algorithm in Table 2.
For each test problem, we solved ten arbitrary randomly generated instances. First, we tested the randomly generated small-scale instances of Problem 1. Table 1 reports the best, worst and average results among these ten runs, with the winners of the average results highlighted in bold. Second, we solved the randomly generated large-scale instances of Problem 1, and the numerical results are reported in Table 2.
From the computational results in Table 1, it can be seen that, when p ≥ 2, m ≥ 10 and n ≥ 6, the algorithm of Feng et al. [19] failed to solve any of the ten randomly generated instances of Problem 1 within 3600 s. When p ≥ 2, m ≥ 10 and n ≥ 10, the algorithm of Wang et al. [39] failed to solve any of the ten instances within 3600 s. When p ≥ 3, m ≥ 10 and n ≥ 20, the algorithm of Jiao & Liu [40] failed to solve any of the ten instances within 3600 s. However, in all cases, our algorithm globally solved all ten randomly generated instances of Problem 1. In addition, when p ≥ 2, m ≥ 10 and n ≥ 6, our algorithm requires less running time and fewer iterations than the known existing algorithms [19,39,40]. Thus, our algorithm is computationally superior to the algorithms of Feng et al. [19], Wang et al. [39] and Jiao & Liu [40].
From the computational results in Table 2, it is obvious that the proposed algorithm can globally solve large-scale instances of Problem 1, which demonstrates the robustness and stability of our algorithm.
Problem 1.
where d_{ij}, e_{ij}, b_k, a_{kj}, i = 1, 2, …, p, k = 1, 2, …, m, j = 1, 2, …, n, are all randomly generated in the interval [0,10], and g_i and h_i, i = 1, 2, …, p, are all randomly generated in the unit interval [0,1]. Thus, the constant terms g_i in the numerators and h_i in the denominators of the linear fractional functions in Problem 1 are small constants.
Problem 2. [20]
Before executing the algorithm, by calculating the upper and lower bounds of z, we can obtain the initial rectangle Z1=Z={z∈R2∣1.7315≤z1≤1.9292,8.8500≤z2≤9.5500}.
We set the approximation error as ϵ=10−2, and a brief summary of the algorithm's solution steps for this problem is as follows.
Initialization. Solving the problem LRP over Z1 yields LB1=1.3076 and its optimal solution
Let F1={(y1,z1,r1)} and Ω1={Z1}.
According to
Following this, the upper bound of the currently known optimal value can be found: UB1=1.3478. Since UB1−LB1>ϵ, the algorithm continues with the following iterations.
Iteration 1. Subdivide Z1 into two sub-rectangles, compress the range of each sub-rectangle, and denote the remaining two sub-rectangles as follows:
and
Solving the problem LRP over Z1,1 yields LBZ1,1=1.3076 and its optimal solution
and UBZ1,1=1.3478.
Solving the problem LRP over Z1,2 yields LBZ1,2=1.3657 and its optimal solution
and UBZ1,2=1.3478.
Let Ω2={Z1,1,Z1,2}, F2=F1∪{(yZ1,1,zZ1,1,rZ1,1),(yZ1,2,zZ1,2,rZ1,2)} and the upper bound UB2=min{1.3478,1.3478,1.3478}=1.3478. The currently best feasible solution (y2,z2,r2)=(yZ1,1,zZ1,1,rZ1,1), and the lower bound LB2=min{LB(Z)|Z∈Ω2}=min{LBZ1,1,LBZ1,2}=min{1.3076,1.3657}=1.3076.
Since LB2=LBZ1,1, let Z2=Z1,1. Since UB2−LB2>ϵ, continue to iteration 2.
Iteration 2. Consistent with the above, subdivide Z2 into two sub-rectangles, compress the range of each sub-rectangle, and denote the remaining two sub-rectangles as follows:
and
Solving the problem LRP over Z2,1 yields LBZ2,1=1.3076 and its optimal solution
and UBZ2,1=1.3478.
Solving the problem LRP over Z2,2 yields LBZ2,2=1.3293 and its optimal solution
and UBZ2,2=1.3625.
Let Ω3={Z1,2,Z2,1,Z2,2}, F3=F2∪{(yZ2,1,zZ2,1,rZ2,1),(yZ2,2,zZ2,2,rZ2,2)} and the upper bound UB3=min{1.3478,1.3478,1.3625}=1.3478. The currently best feasible solution (y3,z3,r3)=(yZ2,1,zZ2,1,rZ2,1), and the lower bound LB3=min{LB(Z)|Z∈Ω3}=min{LBZ2,1,LBZ2,2}=min{1.3076,1.3293}=1.3076.
Since LB3=LBZ2,1, let Z3=Z2,1. Since UB3−LB3>ϵ, continue to iteration 3.
Iteration 3. During this iteration, subdivide Z3 into two sub-rectangles, compress the range of each sub-rectangle, and denote the remaining two sub-rectangles as follows:
and
Solving the problem LRP over Z3,1 yields LBZ3,1=1.4207 and its optimal solution
and UBZ3,1=1.3478.
Solving the problem LRP over Z3,2 yields LBZ3,2=1.3242 and its optimal solution
and UBZ3,2=1.3478.
Let Ω4={Z3,1,Z3,2,Z1,2,Z2,2}, F4=F3∪{(yZ3,1,zZ3,1,rZ3,1),(yZ3,2,zZ3,2,rZ3,2)}, and the upper bound UB4=min{1.3625,1.3478,1.3478}=1.3478. The currently best feasible solution (y4,z4,r4)=(yZ3,2,zZ3,2,rZ3,2), and the lower bound LB4=min{LB(Z)|Z∈Ω4}=min{LBZ3,1,LBZ3,2}=min{1.4207,1.3242}=1.3242.
Since LB4=LBZ3,2, let Z4=Z3,2. Since UB4−LB4>ϵ, continue to iteration 4.
Repeat the above iterative process; the algorithm stops as soon as UB−LB≤ϵ is satisfied. After 14 iterations, the algorithm obtains the optimal solution (y1,y2,y3)=(1.0167,0.5500,1.4500) with optimal value 1.34783.
5. Conclusions
We study the GAFOP. By exploiting an equivalent conversion and a new linearizing technique, the initial GAFOP can be converted into a series of LRPs. By integrating the outer space branching search method with the LRP, we put forward an efficient global algorithm for the GAFOP. In contrast to the known existing algorithms, our algorithm has the following computational advantages: (ⅰ) The branching search occurs in the outer space R^p, which can reduce the computational effort required by the algorithm. (ⅱ) Numerical results demonstrate that our algorithm is more efficient than the known existing algorithms. Future work will further improve the algorithm and extend the method to general nonlinear fractional optimization problems.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (11871196, 12071133, 12071112), the China Postdoctoral Science Foundation (2017M622340), the Key Scientific and Technological Research Projects of Henan Province (202102210147, 192102210114), the Science and Technology Climbing Program of the Henan Institute of Science and Technology (2018JY01), and the Henan Institute of Science and Technology Postdoctoral Science Foundation.
Conflict of interest
The authors declare that they have no conflicts of interest.