1. Introduction
We mainly consider the linear multiplicative problem (LMP) of the following form:
$$\text{(LMP)}:\quad \min\ f(y)=\prod_{i=1}^{p}\left(c_i^{T}y+d_i\right)^{\alpha_i}\quad \text{s.t.}\ \ y\in Y=\{y\in\mathbb{R}^{n}\mid Ay\le b\},$$
where $c_i$ is an $n$-dimensional column vector, $d_i$ is a real number and $\alpha_i$ is a nonzero real number; $T$ denotes the transpose of a vector, $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$ and $Y$ is a non-empty bounded set. Throughout this article, we suppose that $c_i^{T}y+d_i>0$ for every $y\in Y$ and $i=1,\ldots,p$.
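For readers who wish to experiment numerically, the objective of LMP can be evaluated as in the following minimal Python sketch; the array names `c`, `d` and `alpha` are illustrative placeholders for the data $c_i$, $d_i$, $\alpha_i$ and are not taken from the paper.

```python
import numpy as np

def lmp_objective(y, c, d, alpha):
    """f(y) = prod_i (c_i^T y + d_i)^{alpha_i}. Here c is a (p, n) array
    whose rows are the vectors c_i, and d, alpha are length-p arrays.
    The model assumes c_i^T y + d_i > 0 for every y in Y."""
    terms = c @ y + d                 # length-p vector of c_i^T y + d_i
    if np.any(terms <= 0):
        raise ValueError("c_i^T y + d_i must be positive on Y")
    return float(np.prod(terms ** alpha))
```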
LMP is a typical non-convex optimization problem with important real-life applications. It and its variants arise in fields such as robust optimization [1], financial optimization [2], VLSI chip design [3], decision tree optimization [4], network flow optimization [5], supply chain problems [6] and portfolio investment [7]. Moreover, LMP is NP-hard [8] and possesses multiple local solutions that are not global, which increases the computational difficulty. Hence, research on this problem is of great significance.
Numerous experts and scholars have proposed solution algorithms for LMP and its special forms. These algorithms can be broadly categorized as follows: branch-and-bound algorithms [9,10,11,12,13,14], finite algorithms [15], heuristic methods [16], approximation algorithms [17], polynomial-time algorithms [18], the parameterized simplex method [19], cutting-plane methods [20], level-set algorithms [21], etc. Despite these advances, solving high-dimensional LMP remains challenging. Over the past 20 years, searching for global solutions of LMP in the outer space using different relaxation methods has become a popular research topic [22,23,24,25,26,27,28,29]. For example, the authors of [22,27] used new convex relaxation techniques to develop different outer space branch-and-bound algorithms for LMP, while references [23,25,26,28,29,30] adopted various linear relaxation programming problems and proposed novel outer space branch-and-bound algorithms.
In this paper, an outer space branch-and-bound algorithm is designed to solve large-scale LMP. The major characteristics of the algorithm are as follows: First, p auxiliary variables are introduced to transform LMP into an equivalent problem, where p is the number of linear functions. Second, based on the properties of exponential and logarithmic functions, the second equivalent problem (ELMP1) is established. Third, a novel linear relaxation approach is employed to derive the linear relaxation programming problem for ELMP1. Moreover, the branching rule in p-dimensional outer space is given, and the corresponding outer space branch-and-bound algorithm is developed by embedding the rectangular compression technique into the branch-and-bound framework. Finally, the computational complexity of the algorithm is analyzed to estimate the number of iterations in the worst case, which also implies that our algorithm is convergent.
Compared with existing methods [10,23,26,28], the proposed algorithm has the following advantages:
(1) The LMP considered is general: the exponents of its objective function may be arbitrary nonzero real numbers, rather than being limited to 1 or to positive values only.
(2) After the first equivalent conversion, the exponents of the objective function are all positive. Therefore, when linearly approximating the objective function of equivalent problems, there is no need to consider the case where the coefficient is negative, which simplifies the implementation of linear relaxation.
(3) The branching process takes place only in the p-dimensional outer space of the linear function. This leads to cost savings compared to the branching operation in the n-dimensional decision variable space, as p is often much smaller than n in practical problems.
(4) To demonstrate its efficiency, we compare the algorithm with the methods in references [10,23,26,28]; the results show that our algorithm is well suited to solving LMP with large-scale decision variables.
The rest of this paper is organized as follows: Section 2 presents the equivalent problems of LMP and establishes its linear relaxation problem. In Section 3, the branching operation and compression technology are discussed in detail. Moreover, the outer space branch-and-bound algorithm and its properties are described. Section 4 provides numerical comparison results for some problems. Finally, a brief summary of this paper is presented in Section 5.
2. Equivalent problem and its linear relaxation problem
In order to search for the global optimal solution of LMP, we transform it into the equivalent problem (ELMP). For convenience, we first define the following index sets:
$$I^{+}=\left\{i\in\{1,\ldots,p\}\mid \alpha_i>0\right\},\qquad I^{-}=\left\{i\in\{1,\ldots,p\}\mid \alpha_i<0\right\}.$$
For any $i\in\{1,\ldots,p\}$, denote
$$\underline{t}_i^{0}=\min_{y\in Y}\left(c_i^{T}y+d_i\right),\ \ \bar{t}_i^{0}=\max_{y\in Y}\left(c_i^{T}y+d_i\right)\ \text{for } i\in I^{+};\qquad \underline{t}_i^{0}=\Bigl(\max_{y\in Y}\left(c_i^{T}y+d_i\right)\Bigr)^{-1},\ \ \bar{t}_i^{0}=\Bigl(\min_{y\in Y}\left(c_i^{T}y+d_i\right)\Bigr)^{-1}\ \text{for } i\in I^{-}.$$
Since $c_i^{T}y+d_i$ is a bounded linear function, by solving the above $2p$ linear programs we can easily obtain $\underline{t}_i^{0}$ and $\bar{t}_i^{0}$. Simultaneously, the initial hyper-rectangle
$$H^{0}=\left\{t\in\mathbb{R}^{p}\mid \underline{t}_i^{0}\le t_i\le\bar{t}_i^{0},\ i=1,\ldots,p\right\}$$
is constructed. Thus, let us consider the following equivalent problem (ELMP):
We denote the feasible region of ELMP as $V=\{(y,t,z)\mid t_i=c_i^{T}y+d_i,\ i\in I^{+};\ t_i=\frac{1}{z_i},\ z_i=c_i^{T}y+d_i,\ i\in I^{-};\ y\in Y,\ t\in H^{0}\}$. It is evident that $V$ is a nonempty compact set if and only if $Y\neq\emptyset$. Theorem 1 below establishes the equivalence between LMP and ELMP.
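As a concrete illustration of the construction above, the $2p$ bound LPs and the box $H^{0}$ can be computed with scipy.optimize.linprog. This is a minimal sketch under our assumption that, for $i\in I^{-}$, the bounds on $t_i=1/z_i$ are taken as reciprocals of the extreme values of $c_i^{T}y+d_i$.

```python
import numpy as np
from scipy.optimize import linprog

def initial_box(c, d, alpha, A, b):
    """Compute [t_lo_i^0, t_hi_i^0] for H^0 by solving the 2p bound LPs
    over Y = {y : A y <= b}. For i in I+ the bounds on c_i^T y + d_i are
    used directly; for i in I- we assume the bounds on t_i = 1/z_i are
    the reciprocals of the extreme values of z_i = c_i^T y + d_i.
    Assumes Y is non-empty and bounded, as in the model."""
    p, n = c.shape
    free = [(None, None)] * n        # y is restricted only by A y <= b
    lo, hi = np.empty(p), np.empty(p)
    for i in range(p):
        zmin = linprog(c[i], A_ub=A, b_ub=b, bounds=free).fun + d[i]
        zmax = -linprog(-c[i], A_ub=A, b_ub=b, bounds=free).fun + d[i]
        if alpha[i] > 0:             # i in I+
            lo[i], hi[i] = zmin, zmax
        else:                        # i in I-: t_i = 1/z_i, z_i > 0 on Y
            lo[i], hi[i] = 1.0 / zmax, 1.0 / zmin
    return lo, hi
```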
Theorem 1. $(y^{*},t^{*},z^{*})$ is a global optimal solution of ELMP if and only if $y^{*}$ is a global optimal solution of LMP, where $t_i^{*}=c_i^{T}y^{*}+d_i$ for $i\in I^{+}$, and $t_i^{*}=\frac{1}{z_i^{*}}$ with $z_i^{*}=c_i^{T}y^{*}+d_i$ for $i\in I^{-}$.
Proof. We develop the proof in two directions. On the one hand, for any $y\in Y$, let $t_i=c_i^{T}y+d_i$ for $i\in I^{+}$, and $t_i=\frac{1}{z_i}$ with $z_i=c_i^{T}y+d_i$ for $i\in I^{-}$; thus $(y,t,z)\in V$. If $y^{*}$ is a global optimal solution of LMP, then setting $t_i^{*}=c_i^{T}y^{*}+d_i$ for $i\in I^{+}$, and $t_i^{*}=\frac{1}{z_i^{*}}$ with $z_i^{*}=c_i^{T}y^{*}+d_i$ for $i\in I^{-}$, gives $(y^{*},t^{*},z^{*})\in V$, i.e., $(y^{*},t^{*},z^{*})$ is a feasible solution of ELMP. By the optimality of $y^{*}$, the following inequalities hold:
Thus, $(y^{*},t^{*},z^{*})$ is a global optimal solution of ELMP.
On the other hand, suppose $(y^{*},t^{*},z^{*})$ is a global optimal solution of ELMP, where $t^{*},z^{*}$ satisfy: if $i\in I^{+}$, then $t_i^{*}=c_i^{T}y^{*}+d_i$; if $i\in I^{-}$, then $t_i^{*}=\frac{1}{z_i^{*}}$ and $z_i^{*}=c_i^{T}y^{*}+d_i$. Suppose, to the contrary, that there exists a feasible $\tilde{y}\in Y$ such that $\prod_{i=1}^{p}\left(c_i^{T}\tilde{y}+d_i\right)^{\alpha_i}<\prod_{i=1}^{p}\left(c_i^{T}y^{*}+d_i\right)^{\alpha_i}$. For $i\in I^{+}$, let $\tilde{t}_i=c_i^{T}\tilde{y}+d_i$; for $i\in I^{-}$, let $\tilde{t}_i=\frac{1}{\tilde{z}_i}$ and $\tilde{z}_i=c_i^{T}\tilde{y}+d_i$. It follows that
This contradicts the optimality of $(y^{*},t^{*},z^{*})$; thus $y^{*}$ is a global optimal solution of LMP. □
For convenience, denote $\beta_i=-\alpha_i>0$ for $i\in I^{-}$, and let $\ln(\cdot)$ and $\exp(\cdot)$ denote the logarithmic and exponential functions, respectively. The objective function of ELMP is further transformed equivalently according to the properties of the exponential and logarithmic functions, namely,
$$\prod_{i=1}^{p}t_i^{\zeta_i}=\exp\Bigl(\sum_{i=1}^{p}\zeta_i\ln t_i\Bigr),$$
where $\zeta\in\mathbb{R}^{p}$ and $\zeta=[\alpha_1,\alpha_2,\cdots,\alpha_{\kappa},\beta_{\kappa+1},\cdots,\beta_p]$ with $\kappa=|I^{+}|$. Since $\exp(\cdot)$ is strictly increasing, minimizing the objective is equivalent to minimizing $\sum_{i=1}^{p}\zeta_i\ln t_i$. Hence, ELMP is reformulated in the following form:
$$\text{(ELMP1)}:\quad \min\ L(y,t,z)=\sum_{i=1}^{p}\zeta_i\ln t_i\quad \text{s.t.}\ \ (y,t,z)\in V,\ t\in H^{k},$$
where $H^{k}$ denotes $H^{0}$ or a sub-rectangle of $H^{0}$. Obviously, the optimal solution of ELMP1 coincides with that of ELMP. We therefore shift our focus to solving ELMP1; however, ELMP1 cannot be solved directly owing to the nonlinearity of the objective function and of the constraints $z_i=\frac{1}{t_i}$ for $i\in I^{-}$. To address this, we propose a linear relaxation technique that yields a lower-bounding problem for ELMP1.
Theorem 2. For $i=1,\ldots,p$ and $t_i\in[\underline{t}_i,\bar{t}_i]$, define
$$g(t_i)=\ln t_i,\qquad \underline{g}(t_i)=\ln\underline{t}_i+k_i\left(t_i-\underline{t}_i\right),\qquad \underline{L}(y,t,z)=\sum_{i=1}^{p}\zeta_i\,\underline{g}(t_i),$$
where $k_i=\dfrac{\ln\bar{t}_i-\ln\underline{t}_i}{\bar{t}_i-\underline{t}_i}$. Then we have the following conclusions:
(i) $\underline{g}(t_i)\le g(t_i)$; (ii) $\underline{L}(y,t,z)\le L(y,t,z)$.
Proof. Since $g(t_i)=\ln t_i$ is a monotonically increasing concave function of $t_i$ on $[\underline{t}_i,\bar{t}_i]$ and $\underline{g}(t_i)$ is its secant approximation, the chord of a concave function lies below its graph, so (i) holds; (ii) then follows from (i) because every $\zeta_i>0$. □
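For illustration, the secant underestimator of Theorem 2 can be written as follows; this is a sketch assuming only $g(t_i)=\ln t_i$ and the slope $k_i$ given above.

```python
import numpy as np

def secant_under_log(t, t_lo, t_hi):
    """Secant underestimator of g(t) = ln t on [t_lo, t_hi] (Theorem 2):
    the chord through the endpoints, with slope k = (ln t_hi - ln t_lo)
    / (t_hi - t_lo). Because ln is concave, the chord lies below the
    graph on the whole interval, so this value never exceeds ln t."""
    k = (np.log(t_hi) - np.log(t_lo)) / (t_hi - t_lo)
    return np.log(t_lo) + k * (t - t_lo)
```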
Theorem 3. For each $i\in I^{-}$, define
Then the functions $\underline{\psi}(t_i)$ and $\bar{\psi}(t_i)$ satisfy the following conclusions:
(i) $\underline{\psi}(t_i)\le \frac{1}{t_i}\le \bar{\psi}(t_i)$;
(ii) Denoting $\Delta t_i=\bar{t}_i-\underline{t}_i$, we have $\lim_{\Delta t_i\to 0}\bigl(\frac{1}{t_i}-\underline{\psi}(t_i)\bigr)=0$ and $\lim_{\Delta t_i\to 0}\bigl(\bar{\psi}(t_i)-\frac{1}{t_i}\bigr)=0$.
Proof. (i) For each $i\in I^{-}$, since $t_i\in[\underline{t}_i,\bar{t}_i]$ and $t_i>0$, it follows from the definitions of $\underline{\psi}(t_i)$ and $\bar{\psi}(t_i)$ that
and
(ii) From (i), when $\Delta t_i\to 0$ for each $i\in I^{-}$, the following inequalities are valid:
and
□
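The concrete expressions of $\underline{\psi}$ and $\bar{\psi}$ are given in the display above. As an illustration, one standard pair of linear bounds for $1/t_i$ on a positive interval that satisfies conclusions (i) and (ii) is the endpoint chord (an overestimator, since $1/t$ is convex) together with a tangent line (an underestimator). We use this pair in the sketches below; it is an assumption, not necessarily the paper's exact choice.

```python
def recip_bounds(t, t_lo, t_hi):
    """Illustrative linear bounds for 1/t on [t_lo, t_hi] with t_lo > 0.
    Overestimator: chord of the convex function 1/t through the endpoints.
    Underestimator: tangent at the midpoint (tangents of a convex function
    lie below its graph). Both gaps vanish as t_hi - t_lo -> 0, matching
    conclusion (ii)."""
    psi_hi = (t_lo + t_hi - t) / (t_lo * t_hi)   # equals 1/t at both endpoints
    m = 0.5 * (t_lo + t_hi)
    psi_lo = 2.0 / m - t / m**2                  # tangent line at t = m
    return psi_lo, psi_hi
```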
Consequently, we obtain the linear relaxation program of ELMP1:
In the constraints of LRP, we substitute $z_i=c_i^{T}y+d_i$ into the inequalities $z_i\le\bar{\psi}(t_i)$ and $\underline{\psi}(t_i)\le z_i$, respectively, that is,
Since $\zeta_i>0$ for each $i=1,\ldots,p$, LRP can be reformulated as
Through the above relaxation steps we have derived the linear relaxation problem of ELMP1. This relaxation simplifies the problem by loosening certain constraints while providing a reliable lower bound on the global optimal value of ELMP1, which guides the subsequent branch-and-bound steps.
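Putting Theorems 2 and 3 together, a version of LRP($H$) can be assembled and solved as a single LP in the variables $(y,t)$, with $z_i=c_i^{T}y+d_i$ already substituted as in the reformulation above. The sketch below relies on the chord/tangent choice of $\underline{\psi},\bar{\psi}$ from the previous snippet (our assumption) and returns a lower bound together with the LRP minimizer.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lrp(c, d, alpha, A, b, t_lo, t_hi):
    """Assemble and solve a version of LRP(H) as an LP in x = (y, t).

    Objective (Theorem 2): sum_i zeta_i * [ln(t_lo_i) + k_i (t_i - t_lo_i)].
    For i in I+: t_i = c_i^T y + d_i (equality rows).
    For i in I-: psi_lo(t_i) <= c_i^T y + d_i <= psi_hi(t_i), with the
    chord/tangent bounds for 1/t sketched earlier (our assumption).
    Requires t_lo < t_hi componentwise. Returns (LB(H), y, t), or
    (inf, None, None) if LRP(H) is infeasible."""
    p, n = c.shape
    zeta = np.abs(alpha)
    k = (np.log(t_hi) - np.log(t_lo)) / (t_hi - t_lo)
    obj = np.concatenate([np.zeros(n), zeta * k])          # linear part in t
    const = float(np.sum(zeta * (np.log(t_lo) - k * t_lo)))  # constant part

    rows, rhs, eq_rows, eq_rhs = [], [], [], []
    for i in range(p):
        e = np.zeros(p); e[i] = 1.0
        if alpha[i] > 0:                 # i in I+: t_i - c_i^T y = d_i
            eq_rows.append(np.concatenate([-c[i], e])); eq_rhs.append(d[i])
        else:                            # i in I-
            tl, th = t_lo[i], t_hi[i]
            # c_i^T y + d_i <= (tl + th - t_i)/(tl*th)   (chord overestimator)
            rows.append(np.concatenate([c[i], e / (tl * th)]))
            rhs.append((tl + th) / (tl * th) - d[i])
            # 2/m - t_i/m^2 <= c_i^T y + d_i            (tangent underestimator)
            m_ = 0.5 * (tl + th)
            rows.append(np.concatenate([-c[i], -e / m_**2]))
            rhs.append(d[i] - 2.0 / m_)
    A_ub = np.vstack([np.hstack([A, np.zeros((A.shape[0], p))])] + rows)
    b_ub = np.concatenate([b, np.array(rhs)])
    A_eq = np.vstack(eq_rows) if eq_rows else None
    b_eq = np.array(eq_rhs) if eq_rows else None
    bounds = [(None, None)] * n + [(float(l), float(u)) for l, u in zip(t_lo, t_hi)]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    if not res.success:
        return np.inf, None, None
    return res.fun + const, res.x[:n], res.x[n:]
```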
3. Branch-and-bound algorithm and its property analysis
In this section, we present an efficient deterministic algorithm for solving ELMP1 that combines the linear relaxation problem of Section 2 with the branching operation of Section 3.1 and the rectangle compression technique of Section 3.2. The algorithm solves a sequence of linear relaxation problems over subdivided sub-rectangles of $H^{0}$. Furthermore, we give rigorous proofs of the convergence and the complexity of the algorithm based on the employed branching operation.
3.1. Branching operation
For any selected sub-rectangle $H^{k}=\{t\in\mathbb{R}^{p}\mid \underline{t}_i\le t_i\le\bar{t}_i,\ i=1,\ldots,p\}\subseteq H^{0}$, the following branching rule is given (a code sketch follows the rule):
(i) Let $\tau=\arg\max\{\bar{t}_i-\underline{t}_i,\ i=1,\ldots,p\}$;
(ii) $H^{k}$ is divided into the two sub-rectangles
$$H^{k1}=\left\{t\in\mathbb{R}^{p}\,\middle|\,\underline{t}_{\tau}\le t_{\tau}\le\tfrac{\underline{t}_{\tau}+\bar{t}_{\tau}}{2},\ \underline{t}_i\le t_i\le\bar{t}_i,\ i\ne\tau\right\},\qquad H^{k2}=\left\{t\in\mathbb{R}^{p}\,\middle|\,\tfrac{\underline{t}_{\tau}+\bar{t}_{\tau}}{2}\le t_{\tau}\le\bar{t}_{\tau},\ \underline{t}_i\le t_i\le\bar{t}_i,\ i\ne\tau\right\}.$$
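In code the rule reads as follows; we assume the standard midpoint bisection of the longest edge, which is consistent with the $\prod_{i=1}^{p}2^{\iota_i}$ subdivision count used in Section 3.5.

```python
import numpy as np

def branch(t_lo, t_hi):
    """Bisect the box along its longest edge (rule (i)-(ii) of Section 3.1,
    assuming the standard midpoint split)."""
    tau = int(np.argmax(t_hi - t_lo))        # longest edge, rule (i)
    mid = 0.5 * (t_lo[tau] + t_hi[tau])
    lo2, hi1 = t_lo.copy(), t_hi.copy()
    hi1[tau] = mid                           # H^{k1}: lower half in dim tau
    lo2[tau] = mid                           # H^{k2}: upper half in dim tau
    return (t_lo.copy(), hi1), (lo2, t_hi.copy())
```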
3.2. Compression technology
We introduce a compression technique to accelerate the convergence of the algorithm. At the $k$th iteration, multiple sub-rectangles have been obtained through rectangular subdivision. For any sub-rectangle $\tilde{H}^{k}\subseteq H^{k}$, we investigate whether ELMP1 can attain a global minimum over $\tilde{H}^{k}$, where $\tilde{H}^{k}=\tilde{H}_1\times\tilde{H}_2\times\cdots\times\tilde{H}_p$ and $\tilde{H}_i=\{t_i\in\mathbb{R}\mid \underline{t}_i\le t_i\le\bar{t}_i\}$, $i=1,\ldots,p$. The compression technology embedded in the algorithm replaces sub-rectangles with smaller ones while ensuring that the global optimal solution of ELMP1 remains intact and unaffected.
Theorem 4. At the $k$th iteration of the algorithm, let $\widehat{UB}$ be the current best upper bound on the global optimum of ELMP1. Denote
For any sub-rectangle $\tilde{H}^{k}\subseteq H^{k}$, it can be inferred that
(i) if $\Xi>\widehat{UB}$, then there is no global optimal solution of ELMP1 over $\tilde{H}^{k}$;
(ii) if $\Xi<\widehat{UB}$, then ELMP1 has no global optimal solution over $\ddot{H}$, where
with $\ddot{H}_{\iota}=\{t_{\iota}\in\mathbb{R}\mid \pi_{\iota}\le t_{\iota}\le\bar{t}_{\iota}\}\cap\tilde{H}_{\iota}$.
Proof. (i) If $\Xi>\widehat{UB}$, then
Therefore, $\tilde{H}^{k}$ does not contain a global optimal solution of ELMP1.
(ii) If $\Xi<\widehat{UB}$, then for any $t\in\ddot{H}$,
Therefore, $\ddot{H}$ does not contain a global optimal solution of ELMP1. □
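Since every $\zeta_i>0$, $L$ is increasing in each coordinate $t_i$, so a natural reading of the quantities above is $\Xi=\sum_{i=1}^{p}\zeta_i\ln\underline{t}_i$ (the minimum of $L$ over the box) and $\pi_{\iota}=\underline{t}_{\iota}\exp\bigl((\widehat{UB}-\Xi)/\zeta_{\iota}\bigr)$. Under this reading, which is our assumption, the reduction can be sketched as:

```python
import numpy as np

def compress(t_lo, t_hi, zeta, UB):
    """Box reduction in the spirit of Theorem 4, under our reading that
    Xi = sum_i zeta_i * ln(t_lo_i) is the minimum of L over the box and
    that the cut point is pi_i = t_lo_i * exp((UB - Xi)/zeta_i).
    Returns None if the box can be discarded, else the reduced box."""
    Xi = float(np.sum(zeta * np.log(t_lo)))
    if Xi > UB:                              # case (i): prune the whole box
        return None
    pi = t_lo * np.exp((UB - Xi) / zeta)     # case (ii): cap each upper edge
    return (t_lo, np.minimum(t_hi, pi))
```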
3.3. Branch-and-bound algorithm
The branching operation of Section 3.1 partitions the initial rectangle $H^{0}$ into smaller sub-rectangles, enabling the algorithm to search over $V$ in a way that is guaranteed to retain the global minimal solution of ELMP1. For the $k$th iteration of the algorithm we introduce the following notation. $Q_k$ denotes the list of active nodes. For each branching node $H\in Q_k$, $(y(H),t(H))$ and $LB(H)$ denote the optimal solution and the optimal value of LRP($H$), respectively. The current best lower bound for ELMP1 is $LB_k=\min\{LB(H)\mid H\in Q_k\}$. $v_k$ denotes the objective value of the best feasible solution of ELMP1 found so far, and the current best upper bound is denoted $UB$. We choose a rectangle $H^{k}$ from $Q_k$ satisfying $LB(H^{k})=LB_k$ and split it into two sub-rectangles $H^{k1}$ and $H^{k2}$ by the branching operation; these are added to the set $T$, which is updated as $T=\{T\setminus H^{k}\}\cup\{H^{k1},H^{k2}\}$. Let $F$ be the set of feasible points and let $\epsilon$ denote the algorithmic tolerance. More precisely, the proposed outer space branch-and-bound algorithm is described as follows:
Step 0. Initialize the best known solution as null and the best known upper bound as infinity. Create a root node and initialize the list of active nodes with this root node. Set the algorithmic tolerance to the desired value.
Step 1. Solve the relaxation problem LRP($H^{0}$) to obtain a lower bound (or to prove infeasibility). If the problem is feasible, update the incumbent if necessary. Let
If $(y^{0},t^{0},z^{0})$ is a feasible solution of ELMP1, then let $UB=L(y^{0},t^{0},z^{0})$ and $F=F\cup\{(y^{0},t^{0},z^{0})\}$. If $UB-LB_0\le\epsilon$, the algorithm terminates with the global $\epsilon$-optimal solution $(y^{0},t^{0},z^{0})$ of ELMP1. Otherwise, set $T=\{H^{0}\}$.
Step 2. Split the current node $H^{k}$ into two new nodes $H^{kj}$ $(j=1,2)$ and reduce them using the compression technique; the reduced rectangles are still denoted $H^{kj}$ $(j=1,2)$. Set $T=\{T\setminus H^{k}\}\cup\{H^{k1},H^{k2}\}$.
Step 3. For each child node $H^{kj}\in T$ $(j=1,2)$, obtain the optimal value $LB(H^{kj})$ and the optimal solution $(y(H^{kj}),t(H^{kj}))$ by solving LRP($H^{kj}$). Set $F=F\cup\{(y(H^{kj}),t(H^{kj}),z(H^{kj}))\}$, let $(\hat{y}^{k},\hat{t}^{k},\hat{z}^{k})=\arg\min_{(y,t,z)\in F}L(y,t,z)$ and set $v_k=L(\hat{y}^{k},\hat{t}^{k},\hat{z}^{k})$. If $v_k<UB$, then update the upper bound $UB=v_k$, update the current best solution of ELMP1 as $(y^{k},t^{k},z^{k})=(\hat{y}^{k},\hat{t}^{k},\hat{z}^{k})$, and set $F=\emptyset$. If $LB(H^{kj})>UB$, remove $H^{kj}$ from $T$, i.e., $T=T\setminus\{H^{kj}\}$. Set $Q_k=(Q_k\setminus H^{k})\cup T$ and update the lower bound $LB_k=\min\{LB(H)\mid H\in Q_k\}$.
Step 4. Let the list of active nodes be $Q_{k+1}=\{H\mid UB-LB(H)>\epsilon,\ H\in Q_k\}$. If $Q_{k+1}$ is empty, return the best known solution as a global $\epsilon$-optimal solution. Otherwise, select an active node $H^{k+1}\in\arg\min\{LB(H)\mid H\in Q_{k+1}\}$, set $k=k+1$ and return to Step 2.
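Steps 0–4 combine into the following skeleton, which reuses the initial_box, solve_lrp, branch and compress sketches given earlier; it is an illustrative rendering of the scheme rather than the authors' MATLAB implementation. Note that at a point induced by a feasible $y$ one has $L(y,t,z)=\sum_{i}\alpha_i\ln(c_i^{T}y+d_i)$, i.e., the logarithm of the LMP objective.

```python
import numpy as np

def objective_L(y, c, d, alpha):
    """L at the feasible point induced by y. For such points
    L(y, t, z) = sum_i alpha_i * ln(c_i^T y + d_i) = ln f(y)."""
    return float(np.sum(alpha * np.log(c @ y + d)))

def outer_space_bnb(c, d, alpha, A, b, eps=1e-6, max_iter=100000):
    """Skeleton of Steps 0-4, reusing the earlier sketches."""
    zeta = np.abs(alpha)
    t_lo, t_hi = initial_box(c, d, alpha, A, b)           # Step 0 data
    lb, y, _ = solve_lrp(c, d, alpha, A, b, t_lo, t_hi)   # Step 1
    if y is None:
        return None                                   # LMP infeasible
    UB, best = objective_L(y, c, d, alpha), y         # incumbent from LRP point
    nodes = [(lb, t_lo, t_hi)]                        # active list Q_k
    for _ in range(max_iter):
        nodes = [nd for nd in nodes if UB - nd[0] > eps]   # Step 4: fathom
        if not nodes:
            break                                     # global eps-optimal
        nodes.sort(key=lambda nd: nd[0])
        _, lo, hi = nodes.pop(0)                      # node with smallest LB
        for lo_j, hi_j in branch(lo, hi):             # Step 2: bisect
            box = compress(lo_j, hi_j, zeta, UB)      # Theorem 4 reduction
            if box is None:
                continue
            lb_j, y_j, _ = solve_lrp(c, d, alpha, A, b, *box)  # Step 3: bound
            if y_j is None or lb_j > UB:
                continue                              # fathom the child
            val = objective_L(y_j, c, d, alpha)
            if val < UB:
                UB, best = val, y_j                   # update incumbent
            nodes.append((lb_j, *box))
    return UB, best
```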
Definition 1 below gives the concept of the global $\epsilon$-optimal solution used in the proposed algorithm.
Definition 1. Given $\epsilon>0$, a feasible solution $(\hat{y},\hat{t},\hat{z})$ is called a global $\epsilon$-optimal solution of ELMP1 if $L(\hat{y},\hat{t},\hat{z})\le v+\epsilon$, where $v$ denotes the global optimal value of ELMP1.
3.4. Convergence analysis
This subsection discusses the convergence of the algorithm. Suppose the algorithm does not terminate after finitely many iterations; then, by the branching operation, there exists an infinite rectangular sequence $\{H^{k}\}_{k=1}^{\infty}$ with $H^{k}\subseteq\mathbb{R}^{p}$ such that $H^{k+1}\subseteq H^{k}\subseteq\cdots\subseteq H^{0}$ for each $k=1,2,\ldots$. Lemma 1 below provides the theoretical basis for Theorem 5.
Lemma 1. For any $t\in H$, we have $L(y,t,z)-\underline{L}(y,t,z)\to 0$ as $\bar{t}_i-\underline{t}_i\to 0$, $i=1,\ldots,p$.
Proof. It follows from Theorem 2 that
Therefore, for any $t\in H$, $L(y,t,z)-\underline{L}(y,t,z)\to 0$ as $\bar{t}_i-\underline{t}_i\to 0$, $i=1,\ldots,p$. □
Theorem 5. Given $\epsilon>0$ and assuming that the feasible region of ELMP1 is non-empty, the algorithm either obtains a global $\epsilon$-optimal solution of ELMP1 within a finite number of iterations, or produces a sequence of feasible solutions $\{(y^{k},t^{k},z^{k})\}$ every accumulation point of which is a global $\epsilon$-optimal solution of ELMP1.
Proof. Suppose first that the algorithm terminates after a finite number of iterations; without loss of generality, assume it terminates at the $k$th iteration, so that $UB-LB_k\le\epsilon$. According to Step 3 of the algorithm, $UB=v_k=L(\hat{y}^{k},\hat{t}^{k},\hat{z}^{k})$, and thus
It follows from (3.1) that
Therefore, $(\hat{y}^{k},\hat{t}^{k},\hat{z}^{k})$ is a global $\epsilon$-optimal solution of ELMP1.
Now suppose the algorithm performs infinitely many iterations and produces an infinite sequence of rectangles $\{H^{k}\}_{k=1}^{\infty}$, where $H^{k}=\prod_{i=1}^{p}[\underline{t}_i^{k},\bar{t}_i^{k}]\subseteq\mathbb{R}^{p}$. Without loss of generality, suppose that $\lim_{k\to\infty}H^{k}=H^{\infty}$; then for a subsequence $K$ of the sequence $\{1,2,\ldots\}$ we have
For any $k\in K$, according to Step 3 of the algorithm, the lower bound is updated to
For any $k\in K$, we have $t^{k}\in H^{k}\subseteq H^{0}$. Therefore, $\{t^{k}\}_{k=1}^{\infty}$ has a convergent subsequence $\{t^{k}\}_{k\in K}$ whose limit, by formula (3.2), lies in $H^{\infty}$; let
where $\hat{t}$ is an accumulation point of $\{t^{k}\}_{k\in K}$. Since $L(y,t,z)$ is continuous, combining with (3.3) we have
For any $k\in K$, $t^{k}\in H^{k}$, and it follows from Lemma 1 that
Hence, we have
Integrating (3.4)–(3.6), we obtain $L(\hat{y},\hat{t},\hat{z})=\lim_{k\in K}L(y^{k},t^{k},z^{k})=\lim_{k\in K}\underline{L}(y^{k},t^{k},z^{k})$. For each $k\in K$ we have $L(\hat{y}^{k},\hat{t}^{k},\hat{z}^{k})=\sum_{i=1}^{p}\zeta_i\ln(t_i^{k})$, and therefore
Because $\{\sum_{i=1}^{p}\zeta_i\ln(t_i^{k})\}_{k\in K}$ is a subsequence of $\{\sum_{i=1}^{p}\zeta_i\ln(t_i^{k})\}_{k=1}^{\infty}$, it follows from (3.7) that $\lim_{k\in K}\sum_{i=1}^{p}\zeta_i\ln(t_i^{k})=v$. From the continuity of $L(y,t,z)$ and formula (3.5), we have
So we get
Combining with Eq (3.8), we conclude that $(\hat{y},\hat{t},\hat{z})$ is a global optimal solution of ELMP1. □
3.5. Complexity analysis
We use $\Omega(H)$ to denote the size of the sub-rectangle $H=\{t\in\mathbb{R}^{p}\mid \underline{t}_i\le t_i\le\bar{t}_i,\ i=1,\ldots,p\}$, i.e.,
In addition,
Lemma 2. Given any $\epsilon>0$ and a sub-rectangle $H\subseteq H^{0}$, if $\Omega(H)<\epsilon$, then for any optimal solution $(y,t)$ of LRP($H$) we have
Proof. Clearly, $(y,t,z)$ is a feasible solution of ELMP1, and it follows from Lemma 1 that
□
Theorem 6 below gives an estimate of the maximum number of iterations of the proposed algorithm in the worst case, which also implies that the algorithm eventually finds a global optimal solution of LMP.
Theorem 6. For any $\epsilon_0\in(0,1)$, the algorithm iterates at most
$$2^{\sum_{i=1}^{p}\left\lceil \log_2 \frac{p\theta\lambda\,\Omega(H^{0})}{\epsilon_0}\right\rceil}-1$$
times to obtain a global $\epsilon_0$-optimal solution in the worst case.
Proof. Let $\epsilon=\epsilon_0$ and suppose that, at the $k$th iteration, when the algorithm reaches Step 3, $H\subseteq H^{0}$ is a rectangle selected by the branching rule for subdivision which satisfies
then the algorithm terminates. In the worst case, when the algorithm terminates at Step 4 of the $k$th iteration, splitting the initial rectangle $H^{0}$ has produced $k+1$ sub-rectangles; assume that $H^{\iota}$ is any one of these sub-rectangles and satisfies
where $\iota_i$ denotes the number of subdivisions of the initial interval $[\underline{t}_i^{0},\bar{t}_i^{0}]$ required to produce $[\underline{t}_i^{\iota},\bar{t}_i^{\iota}]$. From Lemma 2, it follows that
Since the subdivision of $H^{0}$ yields no more than $\prod_{i=1}^{p}2^{\iota_i}$ sub-rectangles, the algorithm must terminate once every sub-rectangle satisfies (3.9). By Eq (3.9) we have
Let $\chi_i=\left\lceil \log_2\frac{p\theta\lambda\,\Omega(H^{0})}{\epsilon_0}\right\rceil$, $i=1,\ldots,p$. The initial rectangle is split into $k+1$ sub-rectangles with $k+1=\prod_{i=1}^{p}2^{\chi_i}$, at which point the algorithm terminates. Thus, the algorithm iterates at most $2^{\sum_{i=1}^{p}\chi_i}-1$ times. Furthermore, it can be derived that
where $(y^{*},t^{*},z^{*})$ is a global optimal solution of ELMP1, with $t_i^{*}=c_i^{T}y^{*}+d_i$ for $i\in I^{+}$, and $t_i^{*}=\frac{1}{z_i^{*}}$ and $z_i^{*}=c_i^{T}y^{*}+d_i$ for $i\in I^{-}$. Based on the bounding process, let $\hat{y}^{k}$ be the best feasible solution obtained so far, so that $\{f(\hat{y}^{k})\}$ is a decreasing sequence with $f(\hat{y}^{k})\le f(y)$. Combining with (3.10), we obtain
Therefore, the algorithm terminates, and $\hat{y}^{k}$ is a global $\epsilon_0$-optimal solution of LMP. □
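For reference, the worst-case bound of Theorem 6 is straightforward to evaluate once the constants $\theta$ and $\lambda$ from the display above are known:

```python
import math

def worst_case_iterations(p, theta, lam, omega0, eps0):
    """Worst-case iteration bound of Theorem 6: 2**(sum_i chi_i) - 1 with
    chi_i = ceil(log2(p*theta*lam*omega0/eps0)); theta and lam are the
    constants appearing in Section 3.5 and omega0 = Omega(H^0)."""
    chi = math.ceil(math.log2(p * theta * lam * omega0 / eps0))
    return 2 ** (p * chi) - 1        # all chi_i coincide, so the sum is p*chi
```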
4. Numerical experiments
In this section, several problems are employed to demonstrate the feasibility and effectiveness of the proposed algorithm. All linear programming subproblems are solved by the dual-simplex method, and all tests are carried out in MATLAB 9.2.0.538062 (R2017a) on an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz with 4 GB of memory running the 64-bit Windows 10 operating system.
Initially, we apply the existing branch-and-bound algorithms [10,23,26,28] and the proposed algorithm to the deterministic Problems 1–8 with a predefined convergence tolerance, in order to demonstrate the feasibility of our algorithm. To validate its efficiency, we then test the random Problems 9–11 with a tolerance of $10^{-6}$.
Problem 1 [10,26,28]
Problem 2 [10,26,28]
Problem 3 [10,26,28]
Problem 4 [26,28]
Problem 5 [10,26,28]
Problem 6 [26,28]
Problem 7 [23,26,28]
Problem 8 [23,26,28]
Problem 9 [10,23]
where $\hat{c}_{ij}$ is generated randomly in the interval [0, 1], $i=1,2$, $j=1,\ldots,n$. All elements $\hat{a}_{ij}$ of the matrix $\hat{A}$ are randomly generated in the interval [-1, 1], i.e., $\hat{a}_{ij}=2\hat{\lambda}-1$, where $\hat{\lambda}$ is randomly generated in [0, 1]. The components of $\hat{b}$ are set to $\hat{b}_i=\sum_{j=1}^{n}\hat{a}_{ij}+2\hat{t}$, where $\hat{t}$ is randomly generated in [0, 1].
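For reproducibility, the generation scheme of Problem 9 can be sketched as follows (Problems 10 and 11 differ only in the sizes and, for Problem 11, in drawing the exponents from [-1, 1]). The exponents equal to 1 follow from the text; the constant terms $d$ are not fixed by the description above, so the value used here is a hypothetical placeholder. Data from this generator can be fed directly to the outer_space_bnb sketch of Section 3.3.

```python
import numpy as np

def random_problem9(m, n, seed=0):
    """Random data in the scheme described above."""
    rng = np.random.default_rng(seed)
    c = rng.random((2, n))                  # c_hat_{ij} ~ U[0, 1]
    A = 2.0 * rng.random((m, n)) - 1.0      # a_hat_{ij} = 2*lambda_hat - 1
    b = A.sum(axis=1) + 2.0 * rng.random(m) # b_hat_i = sum_j a_ij + 2*t_hat
    d = np.ones(2)                          # placeholder: not specified above
    alpha = np.ones(2)                      # two linear functions, exponents 1
    return c, d, A, b, alpha
```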
Problem 10 [23,26]
where $\tilde{c}_{ij}$ is randomly generated in [0, 1] and $\tilde{a}_{mj}$ is randomly generated in [-1, 1], $m=1,\ldots,M$, $j=1,\ldots,n$; $\tilde{b}_m=\sum_{j=1}^{n}\tilde{a}_{mj}+2\tilde{t}$, where $\tilde{t}$ is randomly generated in [0, 1].
Problem 11 [26,28]
where $\bar{c}_{ij}$ and $\bar{d}_i$ are randomly generated in [0, 1], $i=1,\ldots,p$, $j=1,\ldots,n$. Each element $\bar{a}_{ij}$ of the matrix $\bar{A}$ and each exponent $\bar{\alpha}_i$ are randomly generated in [-1, 1]. The components of the vector $\bar{b}$ are generated by $\bar{b}_i=\sum_{j=1}^{n}\bar{a}_{ij}+2\bar{t}$, where $\bar{t}$ is generated randomly in [0, 1].
Table 1 shows the numerical comparison between the algorithms in [10,23,26,28] and our algorithm on Problems 1–8.
For the stochastic Problems 9–11, we solve 10 randomly generated instances for each parameter set $(p,m,n)$ and report their average number of iterations and average CPU time in Tables 2–6. Specifically, Problem 9 is an LMP with only two linear functions, both with exponent 1; Table 2 reports the numerical comparison between our algorithm and the algorithms proposed in [10,23]. Problem 10 is an LMP with multiple linear functions, all with exponent 1; Tables 3 and 4 display the numerical results of our algorithm compared with the algorithms in [23,26], and Figures 1 and 2 plot some of the data from Table 3. Problem 11 is an LMP with multiple linear functions and real exponents; Tables 5 and 6 show the numerical results of our algorithm compared with the algorithms in [26,28], and Figures 3–6 depict some of the data from Tables 5 and 6.
For convenience, the symbols in the headers of Tables 1–6 are specified as follows: Opt.val: the global optimum of the tested problem; Iter: the number of iterations of the algorithm; Time: the CPU time in seconds; Avg.Iter: the average number of iterations over the 10 randomly generated instances; Std.Iter: the standard deviation of the number of iterations; Avg.Time: the average CPU time over the 10 randomly generated instances; Std.Time: the standard deviation of the CPU time; p: the number of linear functions; m: the number of constraints; n: the dimensionality of the decision variables.
As can be seen from the numerical results in Table 1, our algorithm effectively calculates the global optimal solutions and optimal values for low-dimensional Problems 1–8. In comparison to the algorithms proposed in [10,26,28], our algorithm demonstrates shorter computation time and fewer iterations. Most deterministic problems require only one iteration, with a maximum of nine iterations, indicating the feasibility of our algorithm.
Upon observing Table 2, it is evident that for the medium-scale Problem 9 our algorithm requires a lower average number of iterations and a shorter average CPU time than the algorithm proposed in [10]. The primary reason for this disparity is that our algorithm solves the problem in the $p$-dimensional outer space, whereas the algorithm in [10] tackles it in the $n$-dimensional decision space. In comparison with the algorithm presented in [23], the iteration count of [23] is generally lower than ours; however, in terms of average CPU time, our algorithm is the more efficient.
From Table 3, it is easy to see that the proposed algorithm solves Problem 10 more efficiently. First, compared with the algorithm in [26], our algorithm consumes less time and requires fewer iterations. Second, compared with the algorithm in [23], our algorithm requires fewer iterations for the parameter sets with $p$ fixed at 7. Although the algorithm in [23] needs fewer iterations than ours for some parameter groups, our average CPU time is still lower than that of [23]. For instance, when $p=4$, our algorithm performs more iterations than [23] as $m$ and $n$ vary, yet our CPU time remains shorter.
To visualize the effectiveness of the proposed algorithm, Figures 1 and 2 depict the trends of the average number of iterations and the average CPU time for Problem 10 when $(m,n)$ is fixed at (10, 20) or (100, 200) and $p$ varies from 4 to 7. In Figure 1, when $(m,n)=(10,20)$ (solid lines), the green solid line is always above the blue solid line, indicating that the number of iterations of the algorithm in [26] is higher than ours, while the red solid line is above the blue solid line only after $p=6$, meaning that the iterations of the algorithm in [23] exceed ours only for $p>6$. When $(m,n)=(100,200)$ (dashed lines), both the red and green dashed lines lie above the blue dashed line, indicating that the number of iterations of our algorithm is lower than those of the other two algorithms. In the vertical direction, the blue dashed line is always higher than the blue solid line, but the vertical distance between them is shorter than the corresponding distances for the other two algorithms; this implies that as $(m,n)$ increases, the number of iterations of our algorithm grows only slightly. From Figure 2, we observe that when $(m,n)=(10,20)$ and $p=4,5,6$, the three solid lines approximately coincide, while for $p=7$ our algorithm clearly takes less time. When $(m,n)=(100,200)$, the distance between the red dashed line and the blue dashed line widens as $p$ increases. The time taken by our algorithm increases with $(m,n)$, but not significantly.
Table 4 gives the numerical results for the large-scale Problem 10. It can be observed that our algorithm is more time-saving than the algorithm in [26]. In contrast, for the first two parameter sets the algorithm in [23] yields better results than ours. However, for $(p,m,n)=(3,10,1000)$ the number of iterations of our algorithm is comparable to that of [23] while our time consumption is lower, and for $(p,m,n)=(3,10,2000)$ our algorithm requires fewer iterations than [23] although its CPU time is slightly longer. For $(p,m,n)=(4,10,1000)$ and $(4,10,2000)$, our algorithm demonstrates clear advantages: the algorithm in [23] requires twice as many iterations and three times as much CPU time as ours.
Table 5 shows the results of a medium-scale test on Problem 11. Compared with the algorithms in [26,28], our algorithm exhibits a distinct advantage. Figures 3 and 4 show the average number of iterations and the average CPU time of the algorithms in [26,28] and our algorithm when $(m,n)$ is fixed at (10, 20) or (100, 200) on Problem 11. From Figure 3, it can be observed that when $(m,n)=(10,20)$ the average number of iterations of the algorithm in [26] is higher than ours except at $p=5$, where it is lower. When $(m,n)=(100,200)$, the average number of iterations of our algorithm is lower than those of the algorithms in [26,28], and our algorithm significantly outperforms both. From Figure 4, we find that for both $(m,n)=(10,20)$ and $(100,200)$ the average CPU time of the proposed algorithm is less than that of the other two algorithms, and no obvious trend in time appears as $p$ increases.
Table 6 shows the numerical results for the large-scale Problem 11, in which the number of linear functions $p$ is much smaller than $n$. Fixing $(p,m)=(2,10)$, the number of iterations of our algorithm decreases as the number of decision variables $n$ increases, while the iteration counts of the algorithms in [26,28] first decrease and then increase with $n$, as reflected in Figure 5. Moreover, the algorithms in [26,28] consume more CPU time than ours. Furthermore, we conclude that CPU time increases with the number of decision variables or linear functions; Figure 6 provides a clearer picture of this conclusion.
5. Conclusions
We propose a new outer space branch-and-bound algorithm for solving LMP. First, the original problem is reduced to an equivalent problem by introducing auxiliary variables. Then, the equivalent problem is further simplified by leveraging the properties of the exponential and logarithmic functions, and the focus switches to solving this second equivalent problem. Afterwards, the nonlinear constraints are linearly relaxed and the objective function of the equivalent problem is linearly approximated, yielding the linear relaxation programming problem. The proposed rectangular compression technique is then embedded into the branch-and-bound framework, leading to the outer space branch-and-bound algorithm. Furthermore, we analyze the computational complexity to estimate the maximum number of iterations of the algorithm in the worst case. Finally, to verify the feasibility and efficiency of the algorithm, some deterministic and random problems are tested. The experimental results show that the algorithm exhibits good computational performance, particularly for large-scale random LMP in which the number of linear functions $p$ is significantly smaller than the number of decision variables $n$.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (11961001), the Construction Project of First-Class Subjects in Ningxia Higher Education (NXYLXK2017B09) and the Basic Discipline Research Projects supported by Nanjing Securities (NJZQJCXK202201).
Conflict of interest
The authors declare no conflicts of interest.