
Optimized LSTM based on improved whale algorithm for surface subsidence deformation prediction


  • Received: 07 March 2023 Revised: 11 April 2023 Accepted: 12 April 2023 Published: 19 April 2023
  • In order to effectively control and predict the settlement deformation of the surrounding ground surface caused by deep foundation excavation, the deep foundation pit project of Baoding City Automobile Technology Industrial Park is explored as an example. The initial population of the whale optimization algorithm (WOA) is generated using Cubic mapping, and the weights of the shrinkage envelope mechanism are adjusted to prevent the algorithm from falling into local minima; on this basis, the improved whale algorithm (IWOA) is proposed. Meanwhile, 10 benchmark test functions are selected to evaluate the performance of IWOA, and the advantages of IWOA in learning efficiency and convergence speed are verified. The IWOA-LSTM deep foundation excavation deformation prediction model is established by optimizing the input weights and hidden layer thresholds of the deep long short-term memory (LSTM) neural network using the improved whale algorithm. The IWOA-LSTM prediction model is compared with LSTM, WOA-optimized LSTM (WOA-LSTM), and traditional machine learning; the results show that the final prediction score of the IWOA-LSTM model is higher than that of the other models, and its prediction accuracy is better than that of traditional machine learning.

    Citation: Ju Wang, Leifeng Zhang, Sanqiang Yang, Shaoning Lian, Peng Wang, Lei Yu, Zhenyu Yang. Optimized LSTM based on improved whale algorithm for surface subsidence deformation prediction[J]. Electronic Research Archive, 2023, 31(6): 3435-3452. doi: 10.3934/era.2023174




    In this paper, we consider the following sum of affine ratios problem (SARP):

    $$(\text{SARP}):\quad \min\ H(x)=\sum_{i=1}^{p}\frac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i},\quad \text{s.t. } x\in D=\{x\in\mathbb{R}^{n}\mid Ax\le b,\ x\ge 0\},$$

    where $A$ is an $m\times n$ real matrix, $b$ is an $m$-dimensional column vector, $c_i=(c_{i1},\ldots,c_{in})\in\mathbb{R}^{n}$, $d_i=(d_{i1},\ldots,d_{in})\in\mathbb{R}^{n}$, and $f_i,g_i\in\mathbb{R}$. The denominator of each ratio satisfies

    $$\sum_{j=1}^{n}d_{ij}x_j+g_i\neq 0$$

    over $D$. Due to $\sum_{j=1}^{n}d_{ij}x_j+g_i\neq 0$ and the continuity of the ratio

    $$\frac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i}$$

    over the connected set $D$, each denominator keeps a constant sign, i.e., either

    $$\sum_{j=1}^{n}d_{ij}x_j+g_i<0\quad\text{or}\quad \sum_{j=1}^{n}d_{ij}x_j+g_i>0.$$

    If $\sum_{j=1}^{n}d_{ij}x_j+g_i<0$, letting

    $$\frac{\sum_{j=1}^{n}c_{ij}x_j+f_i}{\sum_{j=1}^{n}d_{ij}x_j+g_i}=\frac{-\left(\sum_{j=1}^{n}c_{ij}x_j+f_i\right)}{-\left(\sum_{j=1}^{n}d_{ij}x_j+g_i\right)},$$

    it is obvious that $-\left(\sum_{j=1}^{n}d_{ij}x_j+g_i\right)>0$. Therefore, without loss of generality, we assume that

    $$\sum_{j=1}^{n}d_{ij}x_j+g_i>0$$

    always holds.
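    As a quick illustration of the model, the sketch below evaluates $H(x)$ for a tiny hypothetical instance (all data — $c$, $f$, $d$, $g$, and the point $x$ — are made up for illustration only) and checks the positive-denominator assumption along the way:

```python
import math

# Hypothetical toy SARP data: p = 2 ratios in n = 2 variables.
c = [[1.0, 2.0], [2.0, 1.0]]
f = [1.0, 0.0]
d = [[1.0, 1.0], [1.0, 2.0]]
g = [1.0, 2.0]

def H(x):
    """Objective of (SARP): a sum of affine ratios."""
    total = 0.0
    for ci, fi, di, gi in zip(c, f, d, g):
        num = sum(cj * xj for cj, xj in zip(ci, x)) + fi
        den = sum(dj * xj for dj, xj in zip(di, x)) + gi
        assert den > 0, "WLOG assumption: every denominator is positive on D"
        total += num / den
    return total

print(H([1.0, 1.0]))  # 4/3 + 3/5 = 29/15
```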

    The SARP is a specific class of fractional programming problem. It has a wide range of applications in laminate manufacturing [1,2], portfolio optimization [3,4,5], finance and investment [6], computer vision [7], system engineering [8], information theory [9], material layout [10,11], and so on. In addition, the objective function of the SARP is in general neither quasi-convex nor quasi-concave, which makes the problem extremely challenging both theoretically and computationally. Thus, research on approaches for solving the SARP has both theoretical and practical significance.

    Up to now, a number of methods have been developed to solve the SARP. On the basis of the characteristics of the algorithms in the literature, they can be roughly separated into the following categories: the parametric simplex methods [12,13], the unified monotonic approach [14], the interior-point method [15,16], the image space analysis method [17], the trapezoidal branching searching algorithm [18], the branch-and-bound algorithms [19,20,21], and so on. It should be noted that the algorithms in the above references [19,20,21] can only solve either special forms of the SARP or instances with few variables. Recently, by exploiting an equivalent transformation and the characteristics of general single ratio functions, Jiao and Ma [22] combined an acceleration technique to develop an efficient outer space rectangular branch-and-bound algorithm. Jiao et al. [23] put forward a practical algorithm for minimizing the SARP. In the same year, by using the equivalent conversion and linearization method, Jiao et al. [24] presented an effective branch-and-bound algorithm for the SARP. Meanwhile, Jiao et al. [25] also designed an image space branch-reduction-bound algorithm for globally solving the SARP. It should be emphasized that references [23,24,25] proposed several outer space branch-and-bound algorithms for the problem (SARP), and the partitioning spaces of these branch-and-bound algorithms all occur in the $p$-dimensional outer space. In addition, Pei and Zhu [26] designed a convex relaxation algorithm for maximizing the sum of the difference of convex functions ratios problems based on the branch-and-bound framework. Kuno [27] presented a trapezoidal branch-and-bound algorithm to solve the SARP. Based on the algorithm of Kuno [27], Shen et al. [28] proposed an accelerating trapezoidal branch-reduction-bound algorithm for globally solving the sum of linear ratios problems.
    By constructing a new accelerating technique, Jiao and Liu [29] developed a branch-reduction-bound algorithm for the sum of quadratic ratios problems. In particular, references [30,31] provided, for the first time, outer space branch-relaxation-bound algorithms for generalized linear fractional programming problems and generalized affine multiple product programming problems, respectively. However, the above-reviewed methods only deal with some particular forms of the SARP or have difficulty solving the SARP with large-scale variables. Therefore, it is still necessary to propose a practical, efficient algorithm for addressing the general form of the SARP.

    The main purpose of this paper is to design an effective outcome space branch-and-bound algorithm to globally solve the SARP. Initially, we employ two equivalence conversions to transform the SARP into an equivalent problem (EP3). Subsequently, we design a linearization technique that is used to construct the affine relaxation problem (ARP) of the problem (EP3). Based on the ARP and the branch-and-bound framework, we present an outcome space branch-and-bound algorithm. Ultimately, the numerical experimental results demonstrate that our algorithm is feasible. In addition, the branch search process in this paper occurs in the $(p-1)$-dimensional outer space, which makes the computational cost much lower; compared with other methods that branch in the $n$-dimensional or $p$-dimensional space, our method is more efficient. Furthermore, the algorithm is shown to converge to a global optimal solution of the SARP, and its computational complexity is deduced in detail.

    The overall structure of the study adopts the format of five sections, including this introductory section. In Section 2, we convert the SARP to an equivalent problem (EP3) and construct its affine relaxation programming problem. In Section 3, we design an outcome space branch-and-bound algorithm for globally solving the SARP, prove the convergence of the algorithm, and derive its computational complexity. In Section 4, a comparison of numerical experimental results indicates that the presented algorithm is reliable. Finally, some concluding remarks are given in Section 5.

    To tackle the SARP globally, we transform the original problem (SARP) into an equivalent problem as follows. For each $x\in D$, let

    $$t=\frac{1}{\sum_{j=1}^{n}d_{pj}x_j+g_p}\quad\text{and}\quad z_j=t\,x_j.$$

    By utilizing the well-known Charnes-Cooper transformation [32], the SARP can be rewritten as problem (EP1), as shown below:

    $$(\text{EP1}):\begin{cases}\min\ \hat{H}(t,z)=\sum\limits_{i=1}^{p-1}\dfrac{\sum_{j=1}^{n}c_{ij}z_j+f_i t}{\sum_{j=1}^{n}d_{ij}z_j+g_i t}+\sum\limits_{j=1}^{n}c_{pj}z_j+f_p t,\\ \text{s.t. } \sum_{j=1}^{n}d_{pj}z_j+g_p t=1,\quad Az\le bt,\ t\ge 0,\ z\ge 0.\end{cases}$$

    At this time, the index set becomes $i\in\{1,2,\ldots,p-1\}$. Let

    $$\chi=\Big\{(t,z)\in\mathbb{R}^{n+1}\ \Big|\ \sum_{j=1}^{n}d_{pj}z_j+g_p t=1,\ Az\le bt,\ t\ge 0,\ z\ge 0\Big\},$$

    where the set $\chi$ is non-empty and bounded.
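    The Charnes-Cooper step can be sanity-checked numerically. The sketch below uses toy data again (the helper names `charnes_cooper` and `H_hat` are hypothetical) to verify that $\hat{H}(t,z)=H(x)$ and that the normalization constraint holds:

```python
import math

# Hypothetical toy instance with p = 2 ratios in n = 2 variables.
c = [[1.0, 2.0], [2.0, 1.0]]; f = [1.0, 0.0]
d = [[1.0, 1.0], [1.0, 2.0]]; g = [1.0, 2.0]

def H(x):
    """Original SARP objective: sum of affine ratios."""
    return sum(
        (sum(cj * xj for cj, xj in zip(ci, x)) + fi)
        / (sum(dj * xj for dj, xj in zip(di, x)) + gi)
        for ci, fi, di, gi in zip(c, f, d, g))

def charnes_cooper(x):
    """t = 1 / (d_p . x + g_p), z = t * x."""
    t = 1.0 / (sum(dj * xj for dj, xj in zip(d[-1], x)) + g[-1])
    return t, [t * xj for xj in x]

def H_hat(t, z):
    """Objective of (EP1): p - 1 ratios plus an affine last term."""
    val = sum(
        (sum(cj * zj for cj, zj in zip(c[i], z)) + f[i] * t)
        / (sum(dj * zj for dj, zj in zip(d[i], z)) + g[i] * t)
        for i in range(len(c) - 1))
    return val + sum(cj * zj for cj, zj in zip(c[-1], z)) + f[-1] * t

x = [1.0, 1.0]
t, z = charnes_cooper(x)
print(H(x), H_hat(t, z))  # both equal 29/15
```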

    Then we focus on the following equivalence conversion. For each $i\in\{1,2,\ldots,p-1\}$, by introducing the variable

    $$s_i=\frac{1}{\sum_{j=1}^{n}d_{ij}z_j+g_i t},$$

    the lower bound

    $$\underline{s}_i^0=\min_{(t,z)\in\chi}\frac{1}{\sum_{j=1}^{n}d_{ij}z_j+g_i t}$$

    and upper bound

    $$\overline{s}_i^0=\max_{(t,z)\in\chi}\frac{1}{\sum_{j=1}^{n}d_{ij}z_j+g_i t}$$

    can be calculated, and we can obtain the initial rectangle $S^0=[\underline{s}^0,\overline{s}^0]$, where $\underline{s}^0=(\underline{s}_1^0,\underline{s}_2^0,\ldots,\underline{s}_{p-1}^0)$ and $\overline{s}^0=(\overline{s}_1^0,\overline{s}_2^0,\ldots,\overline{s}_{p-1}^0)$.

    Further, the problem (EP1) can be reduced to the problem (EP2) via the variables $s_i$ as follows:

    $$(\text{EP2}):\begin{cases}\min\ \Phi(s,t,z)=\sum\limits_{i=1}^{p-1}s_i\Big(\sum_{j=1}^{n}c_{ij}z_j+f_i t\Big)+\sum\limits_{j=1}^{n}c_{pj}z_j+f_p t,\\ \text{s.t. } s_i=\dfrac{1}{\sum_{j=1}^{n}d_{ij}z_j+g_i t},\ \ i=1,\ldots,p-1,\\ \qquad (t,z)\in\chi,\ s\in S^0.\end{cases}$$

    Remark 1. If $(t^*,z^*)$ is a global optimal solution of the problem (EP1), then $(s^*,t^*,z^*)$ is a global optimal solution of the problem (EP2) with

    $$s_i^*=\frac{1}{\sum_{j=1}^{n}d_{ij}z_j^*+g_i t^*},\quad i=1,2,\ldots,p-1.$$

    Conversely, if $(s^*,t^*,z^*)$ is a global optimal solution of the problem (EP2), then $(t^*,z^*)$ is a global optimal solution of the problem (EP1). In addition, the global optimal values of the problems (EP1), (EP2), and (SARP) are equal.

    Based on $\sum_{j=1}^{n}d_{ij}z_j+g_i t\neq 0$, the problem (EP2) can be rewritten in the following equivalent form:

    $$(\text{EP3}):\begin{cases}\min\ \Phi(s,t,z)=\sum\limits_{i=1}^{p-1}s_i\Big(\sum_{j=1}^{n}c_{ij}z_j+f_i t\Big)+\sum\limits_{j=1}^{n}c_{pj}z_j+f_p t,\\ \text{s.t. } s_i\Big(\sum_{j=1}^{n}d_{ij}z_j+g_i t\Big)=1,\ \ i=1,\ldots,p-1,\\ \qquad (t,z)\in\chi,\ s\in S^0.\end{cases}$$

    Define $S$ to be $S^0$ or a sub-rectangle of $S^0$, where

    $$S=[\underline{s},\overline{s}]\subseteq[\underline{s}^0,\overline{s}^0],\quad \underline{s}=(\underline{s}_1,\underline{s}_2,\ldots,\underline{s}_{p-1})\ge\underline{s}^0,\quad \overline{s}=(\overline{s}_1,\overline{s}_2,\ldots,\overline{s}_{p-1})\le\overline{s}^0.$$

    For any $S\subseteq S^0$, we construct the affine relaxation programming problem of the problem (EP3) over $S$ as follows.

    First, investigating the objective function, and noting that $z\ge 0$ and $t\ge 0$, we have

    $$\sum_{i=1}^{p-1}s_i\Big(\sum_{j=1}^{n}c_{ij}z_j+f_i t\Big)\ \ge\ \sum_{i=1}^{p-1}\Big(\sum_{j=1,c_{ij}>0}^{n}c_{ij}\underline{s}_i z_j+\sum_{j=1,c_{ij}<0}^{n}c_{ij}\overline{s}_i z_j\Big)+\sum_{i=1,f_i>0}^{p-1}f_i\underline{s}_i t+\sum_{i=1,f_i<0}^{p-1}f_i\overline{s}_i t. \tag{1}$$

    Hence, we reformulate the objective function of the problem (EP3) as

    $$\Phi^R(s,t,z)=\sum_{i=1}^{p-1}\Big(\sum_{j=1,c_{ij}>0}^{n}c_{ij}\underline{s}_i z_j+\sum_{j=1,c_{ij}<0}^{n}c_{ij}\overline{s}_i z_j\Big)+\sum_{i=1,f_i>0}^{p-1}f_i\underline{s}_i t+\sum_{i=1,f_i<0}^{p-1}f_i\overline{s}_i t+\sum_{j=1}^{n}c_{pj}z_j+f_p t.$$

    Next, investigating the constraint functions, for any $i\in\{1,2,\ldots,p-1\}$ and $s\in S\subseteq S^0$, define

    $$\Psi_i(s_i,t,z)=s_i\Big(\sum_{j=1}^{n}d_{ij}z_j+g_i t\Big)=\sum_{j=1}^{n}d_{ij}s_i z_j+g_i s_i t,$$

    $$\underline{G}_i=\begin{cases}g_i\underline{s}_i,&\text{if } g_i>0,\\ g_i\overline{s}_i,&\text{if } g_i<0,\end{cases}\qquad \overline{G}_i=\begin{cases}g_i\underline{s}_i,&\text{if } g_i<0,\\ g_i\overline{s}_i,&\text{if } g_i>0,\end{cases}$$

    and

    $$\underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)=\sum_{j=1,d_{ij}>0}^{n}d_{ij}\underline{s}_i z_j+\sum_{j=1,d_{ij}<0}^{n}d_{ij}\overline{s}_i z_j+\underline{G}_i t, \tag{2}$$

    $$\overline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)=\sum_{j=1,d_{ij}>0}^{n}d_{ij}\overline{s}_i z_j+\sum_{j=1,d_{ij}<0}^{n}d_{ij}\underline{s}_i z_j+\overline{G}_i t. \tag{3}$$

    Clearly, for any $i\in\{1,2,\ldots,p-1\}$,

    $$\underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)\le\Psi_i(s_i,t,z)\le\overline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)$$

    always holds.

    Integrating (1)–(3), for any $i\in\{1,2,\ldots,p-1\}$, the affine relaxation programming problem of the problem (EP3) over $S$ is constructed as below:

    $$(\text{ARP}):\begin{cases}\min\ \Phi^R(s,t,z)\\ \text{s.t. } \underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)\le 1,\ \ i=1,\ldots,p-1,\\ \qquad \overline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)\ge 1,\ \ i=1,\ldots,p-1,\\ \qquad (t,z)\in\chi,\ s\in S.\end{cases}$$

    By the construction process of the ARP, it is obvious that the optimal value of the ARP provides a valid lower bound for the optimal value of the problem (EP3) over $S$.
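    The sandwich $\underline{\Psi}_i\le\Psi_i\le\overline{\Psi}_i$ behind the (ARP) constraints can be illustrated for a single index $i$; the data $d$, $g$ and the sample points below are made-up assumptions:

```python
# Hypothetical data for one index i: d = (d_{i1}, d_{i2}), g = g_i.
d = [1.0, -2.0]
g = 3.0

def psi(s, t, z):
    """Psi_i(s_i, t, z) = s_i * (d . z + g * t)."""
    return s * (sum(dj * zj for dj, zj in zip(d, z)) + g * t)

def psi_bounds(s_lo, s_hi, t, z):
    """Affine under/over-estimators (2) and (3) for s in [s_lo, s_hi]."""
    g_lo = g * s_lo if g > 0 else g * s_hi      # underline{G}_i
    g_hi = g * s_hi if g > 0 else g * s_lo      # overline{G}_i
    lo = sum((dj * s_lo if dj > 0 else dj * s_hi) * zj
             for dj, zj in zip(d, z)) + g_lo * t
    hi = sum((dj * s_hi if dj > 0 else dj * s_lo) * zj
             for dj, zj in zip(d, z)) + g_hi * t
    return lo, hi

t, z = 0.5, [1.0, 1.0]
lo, hi = psi_bounds(0.5, 2.0, t, z)
for s in (0.5, 1.0, 2.0):
    assert lo <= psi(s, t, z) <= hi             # the sandwich holds
# The envelope tightens as the interval shrinks (cf. Theorem 1):
lo2, hi2 = psi_bounds(0.9, 1.1, t, z)
assert hi2 - lo2 < hi - lo
print(lo, hi, lo2, hi2)
```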

    Theorem 1. For each $i\in\{1,2,\ldots,p-1\}$, we have

    $$|\Psi_i(s_i,t,z)-\underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)|\to 0\ \ \text{and}\ \ |\overline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)-\Psi_i(s_i,t,z)|\to 0\ \ \text{as}\ \ |\overline{s}_i-\underline{s}_i|\to 0,$$

    where "$\to$" means "approaching".

    Proof. Based on the previous definitions of the functions $\underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)$, $\Psi_i(s_i,t,z)$, and $\overline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)$, for any $(t,z)\in\chi$ and $s_i\in[\underline{s}_i,\overline{s}_i]$, we have that

    $$\begin{aligned}|\Psi_i(s_i,t,z)-\underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)| &\le |\overline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)-\underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)|\\ &=\Big|\sum_{j=1,d_{ij}>0}^{n}d_{ij}\overline{s}_i z_j+\sum_{j=1,d_{ij}<0}^{n}d_{ij}\underline{s}_i z_j+\overline{G}_i t-\Big(\sum_{j=1,d_{ij}>0}^{n}d_{ij}\underline{s}_i z_j+\sum_{j=1,d_{ij}<0}^{n}d_{ij}\overline{s}_i z_j+\underline{G}_i t\Big)\Big|\\ &=\sum_{j=1}^{n}|d_{ij}|\,|\overline{s}_i-\underline{s}_i|\,z_j+|g_i|\,|\overline{s}_i-\underline{s}_i|\,t\\ &=|\overline{s}_i-\underline{s}_i|\Big(\sum_{j=1}^{n}|d_{ij}|z_j+|g_i|t\Big).\end{aligned}$$

    Since $\sum_{j=1}^{n}|d_{ij}|z_j+|g_i|t$ is bounded over the bounded set $\chi$, we have

    $$|\Psi_i(s_i,t,z)-\underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)|\to 0\ \ \text{as}\ \ |\overline{s}_i-\underline{s}_i|\to 0.$$

    Similarly, we can demonstrate that

    $$|\overline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)-\Psi_i(s_i,t,z)|\to 0\ \ \text{as}\ \ |\overline{s}_i-\underline{s}_i|\to 0,$$

    and the proof of the theorem is accomplished.

    From Theorem 1, it follows that the functions $\underline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)$ and $\overline{\Psi}_i(\underline{s}_i,\overline{s}_i,t,z)$ approximate the function $\Psi_i(s_i,t,z)$ arbitrarily closely as $|\overline{s}_i-\underline{s}_i|\to 0$, which guarantees the global convergence of the outcome space branch-and-bound algorithm.

    An outcome space branch-and-bound method is provided in this part to solve the SARP based on the previous ARP. By solving a series of ARPs over the initial rectangle $S^0$ or partitioned sub-rectangles of $S^0$, the algorithm is able to acquire a global optimal solution. The simplest standard rectangle bisection rule, given as follows, is adopted in the proposed algorithm.

    (1) Select the branching index $\upsilon$ such that

    $$\overline{s}_\upsilon^k-\underline{s}_\upsilon^k=\max\{\overline{s}_i^k-\underline{s}_i^k,\ i=1,2,\ldots,p-1\},$$

    and let

    $$s_\upsilon^k=\frac{\underline{s}_\upsilon^k+\overline{s}_\upsilon^k}{2}.$$

    (2) Let $\hat{s}=(s_1^k,s_2^k,\ldots,s_{\upsilon-1}^k,s_\upsilon^k,s_{\upsilon+1}^k,\ldots,s_{p-1}^k)$. The rectangle $S^k$ is split by the point $\hat{s}$ into the following two rectangles:

    $$S^{k1}=\prod_{i=1}^{\upsilon-1}[\underline{s}_i^k,\overline{s}_i^k]\times[\underline{s}_\upsilon^k,s_\upsilon^k]\times\prod_{i=\upsilon+1}^{p-1}[\underline{s}_i^k,\overline{s}_i^k]$$

    and

    $$S^{k2}=\prod_{i=1}^{\upsilon-1}[\underline{s}_i^k,\overline{s}_i^k]\times[s_\upsilon^k,\overline{s}_\upsilon^k]\times\prod_{i=\upsilon+1}^{p-1}[\underline{s}_i^k,\overline{s}_i^k].$$
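    A minimal sketch of this bisection rule (the `bisect` helper and the sample rectangle below are illustrative assumptions):

```python
def bisect(rect):
    """Split rect (a list of [lo, hi] edges) at the midpoint of its longest edge."""
    v = max(range(len(rect)), key=lambda i: rect[i][1] - rect[i][0])
    mid = (rect[v][0] + rect[v][1]) / 2.0
    left = [e[:] for e in rect]
    right = [e[:] for e in rect]
    left[v][1] = mid       # [s_lo_v, mid] on the chosen edge
    right[v][0] = mid      # [mid, s_hi_v] on the chosen edge
    return left, right

s1, s2 = bisect([[0.0, 4.0], [1.0, 2.0]])
print(s1, s2)  # [[0.0, 2.0], [1.0, 2.0]] and [[2.0, 4.0], [1.0, 2.0]]
```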

    The steps of the outcome space branch-and-bound algorithm for solving the SARP are described below:

    Algorithm 1. Outcome space branch-and-bound algorithm.

    Step 0. Given the convergence tolerance $\epsilon>0$ and the initial outer space rectangle

    $$S^0=\{s\in\mathbb{R}^{p-1}\mid \underline{s}_i^0\le s_i\le\overline{s}_i^0,\ i=1,2,\ldots,p-1\},$$

    solve the problem (ARP) over $S^0$. If the problem (ARP) over $S^0$ is infeasible, then the problem (SARP) is infeasible, and the proposed algorithm stops.

    Otherwise, we obtain an optimal solution $(\hat{s}^0,t^0,z^0)$ and the optimal value $LB(S^0)$ of the problem (ARP) over $S^0$. Let

    $$s_i^0=\frac{1}{\sum_{j=1}^{n}d_{ij}z_j^0+g_i t^0},\ \ i=1,2,\ldots,p-1;$$

    then $(s^0,t^0,z^0)$ is a feasible solution of the problem (EP3). Let $LB_0=LB(S^0)$ and $UB_0=\Phi(s^0,t^0,z^0)$. If

    $$UB_0-LB_0\le\epsilon,$$

    then the algorithm stops, and $(s^0,t^0,z^0)$ and $z^0/t^0$ are the global $\epsilon$-optimal solutions of the problems (EP3) and (SARP), respectively.

    Otherwise, denote the set of all active nodes by $\Theta_0=\{S^0\}$ and the set of feasible points by $F=\{(s^0,t^0,z^0)\}$, let $k=0$, and proceed to Step 1.

    Step 1. Set $UB_k=UB_{k-1}$. Using the former branching rule, subdivide $S^k$ into two sub-rectangles $S^{k1}$ and $S^{k2}$, and let $Q=\{S^{k1},S^{k2}\}$.

    Step 2. For each $S^{k\tau}$, $\tau=1,2$, solve the problem (ARP) over $S^{k\tau}$. If the problem (ARP) over $S^{k\tau}$ is infeasible, then delete the rectangle $S^{k\tau}$. Otherwise, we get the optimal solution $(\hat{s}(S^{k\tau}),t(S^{k\tau}),z(S^{k\tau}))$ and the optimal value $LB(S^{k\tau})$ of the problem (ARP) over $S^{k\tau}$.

    If $UB_k\le LB(S^{k\tau})$, then let $Q=Q\setminus\{S^{k\tau}\}$. Otherwise, let

    $$s_i(S^{k\tau})=\frac{1}{\sum_{j=1}^{n}d_{ij}z_j(S^{k\tau})+g_i t(S^{k\tau})},\ \ i=1,2,\ldots,p-1,$$

    update the feasible point set by $F=F\cup\{(s(S^{k\tau}),t(S^{k\tau}),z(S^{k\tau}))\}$, let

    $$UB_k=\min\{UB_k,\Phi(s(S^{k\tau}),t(S^{k\tau}),z(S^{k\tau}))\},$$

    and denote by $(s^k,t^k,z^k)$ the currently best feasible solution, which corresponds to $UB_k$.

    Step 3. Set $\Theta_k=(\Theta_k\setminus\{S^k\})\cup Q$, update the lower bound

    $$LB_k=\min\{LB(S)\mid S\in\Theta_k\},$$

    and let $S^k$ be the sub-rectangle that satisfies $LB_k=LB(S^k)$.

    Step 4. If $UB_k-LB_k\le\epsilon$, then the proposed algorithm stops, and $z^k/t^k$ and $(s^k,t^k,z^k)$ are the $\epsilon$-global optimal solutions of the problems (SARP) and (EP3), respectively. Otherwise, set $k=k+1$ and go back to Step 1.
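    The control flow of Steps 0–4 can be sketched generically. The code below is not the paper's algorithm: it replaces the LP-based (ARP) lower bound with a user-supplied interval bound `lb_box` and runs on a one-dimensional toy surrogate, purely to illustrate the select/bisect/bound/prune loop:

```python
import heapq

def branch_and_bound(f, lb_box, lo, hi, eps=1e-3):
    """Outcome-space branch-and-bound skeleton mirroring Steps 0-4.

    f(s)         -- objective at a feasible point (yields upper bounds UB_k)
    lb_box(l, u) -- lower bound of f over [l, u]; stands in for solving (ARP)
    """
    # Step 0: initial bounds from a few feasible evaluations.
    cands = (lo, hi, (lo + hi) / 2.0)
    best = min(cands, key=f)
    ub = f(best)
    heap = [(lb_box(lo, hi), lo, hi)]            # active node set Theta
    while heap:
        lb, l, u = heapq.heappop(heap)           # node with smallest lower bound
        if ub - lb <= eps:                       # Step 4: termination test
            break
        m = (l + u) / 2.0                        # Step 1: bisection rule
        for a, b in ((l, m), (m, u)):            # Step 2: bound both children
            cand = f((a + b) / 2.0)
            if cand < ub:                        # update the incumbent
                ub, best = cand, (a + b) / 2.0
            child_lb = lb_box(a, b)
            if child_lb < ub - eps:              # Step 3: keep only useful nodes
                heapq.heappush(heap, (child_lb, a, b))
    return best, ub

# Toy surrogate: minimize f(s) = s^3 - 2s on [-2, 2]. Since s^3 is increasing
# and -2s is decreasing on [l, u], l^3 - 2u under-estimates f, playing ARP's role.
s_star, val = branch_and_bound(lambda s: s**3 - 2*s,
                               lambda l, u: l**3 - 2*u, -2.0, 2.0)
print(s_star, val)  # approximately -2.0 and -4.0
```

    As the boxes shrink, the interval bound tightens (the analogue of Theorem 1), so the gap $UB_k-LB_k$ falls below `eps` after finitely many bisections.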

    In this subsection, the global convergence of the proposed algorithm is discussed as follows:

    Theorem 2. If Algorithm 1 terminates after finitely many iterations, then it produces a globally optimal solution to the SARP. Otherwise, Algorithm 1 generates an infinite sequence $\{x^k\}$, any accumulation point of which is a global optimal solution of the SARP.

    Proof. If Algorithm 1 terminates within a finite number of iterations, suppose that it terminates at the $k$th iteration, and denote by $LB_k$ the current best lower bound and by $UB_k$ the current best upper bound. By the termination criterion, we have

    $$UB_k-LB_k\le\epsilon. \tag{4}$$

    Note that $(s^k,t^k,z^k)$ is a feasible solution of the problem (EP3) with

    $$s_i^k=\frac{1}{\sum_{j=1}^{n}d_{ij}z_j^k+g_i t^k}.$$

    Letting $\upsilon(\text{EP3})$ be the global optimal value of the problem (EP3), we can get

    $$\Phi(s^k,t^k,z^k)-LB_k=UB_k-LB_k\le\epsilon \tag{5}$$

    and

    $$LB_k\le\upsilon(\text{EP3}). \tag{6}$$

    Moreover, since $(s^k,t^k,z^k)$ is a feasible solution of the problem (EP3), it follows that

    $$\Phi(s^k,t^k,z^k)\ge\upsilon(\text{EP3}). \tag{7}$$

    By the above inequalities (4)–(7), we can get

    $$\upsilon(\text{EP3})\le\Phi(s^k,t^k,z^k)\le LB_k+\epsilon\le\upsilon(\text{EP3})+\epsilon.$$

    Therefore, $(s^k,t^k,z^k)$ is an $\epsilon$-global optimal solution of the problem (EP3), and $x^k=z^k/t^k$ is an $\epsilon$-global optimal solution of the SARP.

    If Algorithm 1 fails to terminate within a finite number of iterations, then it generates an infinite feasible solution sequence $\{x^k\}$ of the SARP and an infinite feasible solution sequence $\{(s^k,t^k,z^k)\}$ of the problem (EP3) with

    $$s_i^k=\frac{1}{\sum_{j=1}^{n}d_{ij}z_j^k+g_i t^k}.$$

    Letting $(t^*,z^*)$ be an accumulation point of the sequence $\{(t^k,z^k)\}$, we can get

    $$\lim_{k\to\infty}(t^k,z^k)=(t^*,z^*).$$

    By the continuity of the function

    $$s_i=\frac{1}{\sum_{j=1}^{n}d_{ij}z_j+g_i t},\ \ i\in\{1,2,\ldots,p-1\},$$

    we can get that

    $$\frac{1}{\sum_{j=1}^{n}d_{ij}z_j^*+g_i t^*}=\lim_{k\to\infty}\frac{1}{\sum_{j=1}^{n}d_{ij}z_j^k+g_i t^k}=\lim_{k\to\infty}s_i^k=s_i^*.$$

    Thus, $(s^*,t^*,z^*)$ is a feasible solution of the problem (EP3).

    Since $\{LB_k\}$ is a monotonically non-decreasing bounded sequence with

    $$LB_k=\Phi^R(s^k,t^k,z^k)\quad\text{and}\quad LB_k\le\upsilon(\text{EP3}),$$

    and since $\{UB_k\}$ is a non-increasing bounded sequence with

    $$UB_k=\Phi(s^k,t^k,z^k)\quad\text{and}\quad UB_k\ge\upsilon(\text{EP3}),$$

    we have

    $$LB_k=\Phi^R(s^k,t^k,z^k)\le\upsilon(\text{EP3})\le\Phi(s^k,t^k,z^k)=UB_k.$$

    Hence, from the termination condition of the algorithm, taking the limit on both sides of the above inequalities, we get

    $$\lim_{k\to\infty}LB_k=\upsilon(\text{EP3})=\lim_{k\to\infty}\Phi(s^k,t^k,z^k)=\Phi(s^*,t^*,z^*)=\lim_{k\to\infty}UB_k.$$

    Thus, the accumulation point $(s^*,t^*,z^*)$ of the sequence $\{(s^k,t^k,z^k)\}$ is a global optimal solution of the problem (EP3). At the same time, the accumulation point $x^*$ of the sequence $\{x^k\}$ with $x^k=z^k/t^k$ is a global optimal solution of the SARP, and the proof is complete.

    Remark 2. The algorithm proposed in this paper is a rectangular branch-and-bound global optimization algorithm. As is well-known, based on the convergence theory of branch-and-bound global optimization algorithms [33], the exhaustiveness of the branching process and the approximation of upper and lower bounds indicate that the proposed rectangular branch-and-bound global optimization algorithm must be convergent.

    Further, to examine the maximum number of iterations of Algorithm 1, we derive its computational complexity. First, we define the size $\Upsilon$ of the rectangle $S$ as

    $$\Upsilon(S)=\max\{\overline{s}_i-\underline{s}_i\mid i=1,2,\ldots,p-1\}.$$

    Theorem 3. Given a termination error $\epsilon>0$, let $\omega>0$ be an upper bound of $\sum_{j=1}^{n}|c_{ij}|z_j+|f_i|t$ over $\chi$ for each $i\in\{1,2,\ldots,p-1\}$. If there exists a rectangle $S$ formed by the proposed algorithm at the $k$th iteration satisfying

    $$\Upsilon(S)\le\frac{\epsilon}{(p-1)\omega},$$

    then we have that

    $$UB_k-LB(S)\le\epsilon,$$

    where $LB(S)$ is the optimal value of the ARP over $S$, and $UB_k$ is the currently known best upper bound of the optimal value of the problem (EP3).

    Proof. Without loss of generality, suppose that $(t^k,z^k)$ is the optimal solution of the ARP over $S$, and let

    $$s_i^k=\frac{1}{\sum_{j=1}^{n}d_{ij}z_j^k+g_i t^k},\ \ i=1,2,\ldots,p-1.$$

    Then $(s^k,t^k,z^k)$ is a feasible solution to the problem (EP3). By the method of updating the upper and lower bounds and the construction process of the ARP, we have that

    $$\Phi(s^k,t^k,z^k)\ge UB_k\ge LB(S)=\Phi^R(s^k,t^k,z^k).$$

    By the definition of the size $\Upsilon(S)$ of the rectangle $S$ and $\Upsilon(S)\le\frac{\epsilon}{(p-1)\omega}$, it follows that

    $$\begin{aligned}UB_k-LB(S)\le\ &\Phi(s^k,t^k,z^k)-\Phi^R(s^k,t^k,z^k)\\ =\ &\Big|\sum_{i=1}^{p-1}s_i^k\Big(\sum_{j=1}^{n}c_{ij}z_j^k+f_i t^k\Big)+\sum_{j=1}^{n}c_{pj}z_j^k+f_p t^k\\ &-\Big[\sum_{i=1}^{p-1}\Big(\sum_{j=1,c_{ij}>0}^{n}c_{ij}\underline{s}_i z_j^k+\sum_{j=1,c_{ij}<0}^{n}c_{ij}\overline{s}_i z_j^k\Big)+\sum_{i=1,f_i>0}^{p-1}f_i\underline{s}_i t^k+\sum_{i=1,f_i<0}^{p-1}f_i\overline{s}_i t^k+\sum_{j=1}^{n}c_{pj}z_j^k+f_p t^k\Big]\Big|\\ =\ &\Big|\sum_{i=1}^{p-1}(s_i^k-\underline{s}_i)\sum_{j=1,c_{ij}>0}^{n}c_{ij}z_j^k+\sum_{i=1}^{p-1}(s_i^k-\overline{s}_i)\sum_{j=1,c_{ij}<0}^{n}c_{ij}z_j^k+\sum_{i=1,f_i>0}^{p-1}(s_i^k-\underline{s}_i)f_i t^k+\sum_{i=1,f_i<0}^{p-1}(s_i^k-\overline{s}_i)f_i t^k\Big|\\ \le\ &\sum_{i=1}^{p-1}(\overline{s}_i-\underline{s}_i)\Big(\sum_{j=1}^{n}|c_{ij}|z_j^k+|f_i|t^k\Big)\\ \le\ &(p-1)\,\omega\,\Upsilon(S)\le\epsilon.\end{aligned}$$

    The proof of Theorem 3 is complete.

    Then we can examine the maximum number of iterations of Algorithm 1; see the following Theorem 4.

    Theorem 4. Given the convergence tolerance $\epsilon\in(0,1)$, Algorithm 1 can acquire a global optimal solution of the SARP in at most

    $$(p-1)\Big\lceil\log_2\frac{(p-1)\omega\Upsilon(S^0)}{\epsilon}\Big\rceil$$

    iterations.

    Proof. In general, we assume that the sub-rectangle $S$ is selected for partitioning in Algorithm 1 at each iteration. After $k(p-1)$ iterations, we can follow that

    $$\Upsilon(S)\le\frac{1}{2^k}\Upsilon(S^0).$$

    Based on the former proof of Theorem 3, if

    $$\frac{1}{2^k}\Upsilon(S^0)\le\frac{\epsilon}{(p-1)\omega},\quad\text{i.e.,}\quad k\ge\log_2\frac{(p-1)\omega\Upsilon(S^0)}{\epsilon},$$

    we can follow that

    $$UB_k-LB(S)\le\epsilon.$$

    Therefore, after at most

    $$(p-1)\Big\lceil\log_2\frac{(p-1)\omega\Upsilon(S^0)}{\epsilon}\Big\rceil$$

    iterations, we can get

    $$0\le\Phi(s^k,t^k,z^k)-\Phi(s^*,t^*,z^*)\le\Phi(s^k,t^k,z^k)-LB(S)=UB_k-LB(S)\le\epsilon,$$

    where $(s^*,t^*,z^*)$ is the optimal solution of the problem (EP3), and $(s^k,t^k,z^k)$ is the currently known best feasible solution for the problem (EP3). This implies that $(s^k,t^k,z^k)$ is an $\epsilon$-global optimal solution of the problem (EP3) when Algorithm 1 terminates. At the same time, $z^k/t^k$ is an $\epsilon$-global optimal solution of the SARP, and we complete the proof.

    Remark 3. From the above complexity analysis in Theorem 4, letting

    $$\Gamma=(p-1)\Big\lceil\log_2\frac{(p-1)\omega\Upsilon(S^0)}{\epsilon}\Big\rceil,$$

    the running time of Algorithm 1 is bounded by $2(\Gamma-1)\cdot T(m+2p+1,\,n+1)$ for finding an $\epsilon$-global optimal solution of the SARP, where $T(m+2p+1,\,n+1)$ denotes the time taken to solve a linear programming problem with $n+1$ variables and $m+2p+1$ linear constraints.
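    For concreteness, the iteration bound $\Gamma$ of Theorem 4 can be evaluated for sample parameter values; the numbers $p=3$, $\omega=10$, $\Upsilon(S^0)=4$, and $\epsilon=0.01$ below are arbitrary illustrative choices:

```python
import math

def max_iterations(p, omega, size0, eps):
    """Gamma = (p - 1) * ceil(log2((p - 1) * omega * size0 / eps))."""
    return (p - 1) * math.ceil(math.log2((p - 1) * omega * size0 / eps))

# Sample values: p = 3 ratios, omega = 10, Upsilon(S^0) = 4, eps = 0.01.
print(max_iterations(3, 10.0, 4.0, 0.01))  # 2 * ceil(log2(8000)) = 26
```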

    In this section, we numerically compare Algorithm 1 with some known existing branch-and-bound algorithms. All tests were implemented in MATLAB R2018a and run on a microcomputer with an Intel(R) Core(TM) i5-10400 CPU @ 2.90 GHz and 8 GB RAM. For all test problems, we provide their computational results. The maximum time limit was set to 10,000 s for all algorithms.

    First of all, some small-scale deterministic examples (see Appendix Examples 1–12) were tested with Algorithm 1 for comparison with the known existing algorithms [14,21,34,35,36,37,38,39,40]. A numerical comparison between some existing algorithms and Algorithm 1 on Examples 1–12 is reported in Table 1 with the given convergence tolerance, where the column headers use the following notations: Opt. val.: global optimal value; Iter.: number of iterations of the algorithm; Time: the CPU execution time of the algorithm in seconds.

    Table 1.  Numerical comparison between some existing algorithms and Algorithm 1 on test Examples 1–12.
    No. Algorithms Opt. val. Optimal solution Iter. Time ϵ
    1 Algorithm 1 -4.83976 (0.1117, 2.3603) 16 0.254 $10^{-2}$
    Benson [34] -4.84151 (0.1000, 2.3750) 4 0.190 $10^{-2}$
    Jiao and Liu [39] -4.84151 (0.1000, 2.3750) 200 4.257 $10^{-2}$
    2 Algorithm 1 -2.47143 (1.0000, 0.0000, 0.0000) 11 0.192 $10^{-2}$
    Shen et al. [35] -2.47143 (1.0000, 0.0000, 0.0000) 2 0.015 $10^{-2}$
    Jiao and Liu [39] -2.47124 (1.0001, 0.0000, 0.0001) 54 1.135 $10^{-2}$
    3 Algorithm 1 -1.90000 (0.0000, 3.3333, 0.0000) 424 6.907 $10^{-6}$
    Shen and Wang [36] -1.90000 (0.0000, 3.3333, 0.0000) 8 0.926 $10^{-6}$
    4 Algorithm 1 1.62318 (0.0000, 0.2841) 39 0.566 $10^{-2}$
    Jiao and Liu [39] 1.62319 (0.0000, 0.2861) 93 2.485 $10^{-2}$
    5 Algorithm 1 2.86190 (5.0000, 0.0000, 0.0000) 199 2.776 $10^{-3}$
    Jiao and Liu [39] 2.86241 (4.8302, 0.0000, 0.0666) 4008 128.0 $10^{-3}$
    Shen and Lu [37] 2.86191 (5.0000, 0.0000, 0.0000) 16 0.125 $10^{-3}$
    Gao and Jin [38] 2.86190 (5.0000, 0.0000, 0.0000) 12 28.29 $10^{-3}$
    6 Algorithm 1 -4.09070 (1.1111, 0.0000, 0.0000) 28 0.359 $10^{-2}$
    Jiao and Liu [39] -4.09062 (1.1106, 0.0000, 0.0015) 619 16.62 $10^{-2}$
    Shen and Lu [37] -4.08741 (1.0715, 0.0000, 0.0000) 17 3.251 $10^{-2}$
    7 Algorithm 1 3.71092 (0.0000, 1.6667, 0.0000) 89 1.124 $10^{-4}$
    Jiao and Liu [39] 3.71093 (0.0000, 1.6667, 0.0000) 2747 94.64 $10^{-4}$
    Gao and Jin [38] 3.7087 (0.0000, 1.6667, 0.0000) 5 4.190 $10^{-4}$
    8 Algorithm 1 -3.00225 (0.0000, 2.8455, 0.0000) 36 0.566 $10^{-2}$
    Jiao and Liu [39] -3.00292 (0.0000, 3.3333, 0.0000) 1072 31.746 $10^{-2}$
    9 Algorithm 1 4.91267 (1.5015, 1.5024) 30 0.422 $10^{-3}$
    Shen and Lu [37] 4.91259 (1.5000, 1.5000) 56 1.087 $10^{-3}$
    10 Algorithm 1 -4.09070 (1.1111, 0.0000, 0.0000) 33 0.492 $10^{-2}$
    Jiao et al. [40] -4.09070 (1.1111, 0.0000, 0.0000) 2 0.008 $10^{-6}$
    Jiao and Liu [39] -4.09065 (1.1109, 0.0000, 0.0005) 977 32.41 $10^{-6}$
    11 Algorithm 1 3.29167 (3.0000, 4.0000) 138 1.902 $10^{-6}$
    Shen and Wang [36] 3.29167 (3.0000, 4.0000) 9 0.489 $10^{-6}$
    12 Algorithm 1 4.42857 (5.0000, 0.0000, 0.0000) 67 0.931 $10^{-4}$
    Jiao and Liu [39] 4.42794 (4.9930, 0.0000, 0.0000) 128 4.213 $10^{-4}$
    Shi [21] 4.42857 (5.0000, 0.0000, 0.0000) 58 2.968 $10^{-4}$


    From the numerical results in Table 1, for Examples 2, 5, 6, 10 and 12, we can see that Algorithm 1 obtains better global optimal solutions and optimal values than the existing algorithms of Jiao and Liu [39] and Shen and Lu [37]. Algorithm 1 also outperforms the method of Jiao and Liu [39] by finding the optimal solution in less time and with fewer iterations. Therefore, on test Examples 1–12, the experimental results verify that Algorithm 1 is valid and feasible.

    Next, we chose two randomly generated large-scale test problems to further verify the proposed algorithm; see Problems 1 and 2 for details. With the given approximation error $\epsilon=10^{-2}$, we first tested Problem 1 with large numbers of variables; numerical comparisons among Algorithm 1, the algorithm of Jiao and Liu [39], and the algorithm of Li et al. [41] are reported in Table 2.

    Table 2.  Numerical comparisons among Algorithm 1, the algorithm of Jiao and Liu [39], and the algorithm of Li et al. [41] on Problem 1.
    (p,m,n) Algorithms Iter. Time
    min. ave. max. min. ave. max.
    (2,100, 1000) Jiao and Liu [39] 34 111.5 342 6.78 22.82 71.53
    Li et al. [41] 9 21.8 30 2.21 4.03 5.46
    Algorithm 1 6 14.8 23 1.56 2.73 3.41
    (2,100, 3000) Jiao and Liu [39] 55 99.8 159 82.16 147.45 235.12
    Li et al. [41] 9 23.4 27 11.26 18.12 24.12
    Algorithm 1 6 12.4 19 11.23 17.89 23.45
    (2,100, 5000) Jiao and Liu [39] 41 83.9 152 171.34 339.68 618.89
    Li et al. [41] 12 18.1 30 56.56 75.24 96.18
    Algorithm 1 8 12.3 25 38.51 50.45 68.32
    (2,100, 7000) Jiao and Liu [39] 49 78.7 124 392.12 634.51 987.42
    Li et al. [41] 10 16.9 27 99.84 126.53 185.62
    Algorithm 1 7 11.7 21 68.87 89.45 129.23
    (2,100, 10000) Jiao and Liu [39] 13 73.1 129 230.12 1188.43 2145.25
    Li et al. [41] 12 18.5 25 241.56 289.25 324.56
    Algorithm 1 8 12.8 17 166.56 204.43 245.52
    (3,100, 1000) Jiao and Liu [39] 215 814.4 1756 43.12 175.68 398.53
    Li et al. [41] 75 309.4 628 10.89 40.53 72.47
    Algorithm 1 53 238.2 465 7.45 28.54 48.98
    (3,100, 3000) Jiao and Liu [39] * * * * * *
    Li et al. [41] 92 257.8 378 123.56 254.58 367.92
    Algorithm 1 63 198.8 296 81.47 187.45 284.32
    (3,100, 5000) Jiao and Liu [39] * * * * * *
    Li et al. [41] 75 273.8 512 278.56 625.48 985.26
    Algorithm 1 54 191.7 348 208.45 452.78 712.45
    (3,100, 7000) Jiao and Liu [39] * * * * * *
    Li et al. [41] 89 243.5 325 645.38 1201.48 1572.48
    Algorithm 1 61 163.4 242 439.43 805.39 1168.71
    (3,100, 10000) Jiao and Liu [39] * * * * * *
    Li et al. [41] 130 249.8 426 1580.22 2678.56 4582.58
    Algorithm 1 89 165.2 287 1087.24 1786.13 3090.56
    (4,100, 1000) Jiao and Liu [39] * * * * * *
    Li et al. [41] 485 3436.6 9856 46.53 323.23 1169.14
    Algorithm 1 317 2332.8 7279 43.62 278.85 985.93


    In Tables 2 and 3, "min.", "ave.", and "max." denote the minimum, average, and maximum of the number of iterations (Iter.) and of the CPU execution time in seconds (Time) of the algorithm, and "*" denotes that the corresponding algorithm failed to terminate within 10,000 s for some of the 50 independently generated test examples. For random Problems 1 and 2, we solved 50 independently generated test instances and recorded the results over these 50 tests, and the winners of the average results in the numerical comparisons are highlighted in bold.

    Table 3.  Numerical comparisons between Algorithm 1 and the algorithm of Li et al. [41] on Problem 2.
    (p,m,n) Algorithms Iter. Time
    min. ave. max. min. ave. max.
    (10,100,300) Li et al. [41] 9 13.6 19 5.28 8.87 12.7
    Algorithm 1 7 9.8 15 4.12 6.45 10.2
    (10,100,400) Li et al. [41] 10 16 25 6.90 12.65 20.66
    Algorithm 1 8 12.8 19 5.61 9.68 16.92
    (10,100,500) Li et al. [41] 10 17.4 30 8.07 15.89 26.52
    Algorithm 1 9 14.2 26 6.45 12.75 21.45
    (15,100,400) Li et al. [41] 50 121.6 201 46.78 118.75 201.66
    Algorithm 1 38 95.6 189 39.98 95.68 179.85
    (15,100,500) Li et al. [41] 49 118.1 258 54.57 137.92 303.49
    Algorithm 1 41 98.7 202 41.38 99.56 201.24
    (20,100,300) Li et al. [41] 157 321.2 861 126.19 255.46 694.20
    Algorithm 1 118 278.6 598 89.16 202.46 587.45
    (20,100,400) Li et al. [41] 99 399.9 1134 99.06 425.77 1199.2
    Algorithm 1 87 312.7 985 87.45 364.10 950.26

     | Show Table
    DownLoad: CSV

    Problem 1. (Li et al. [41])

    $$\begin{cases}\min\ \sum\limits_{i=1}^{p}\dfrac{\bar{c}_i^{\top}x+\bar{f}_i}{\bar{d}_i^{\top}x+\bar{g}_i},\\ \text{s.t. } \bar{A}x\le\bar{b},\ x\ge 0,\end{cases}$$

    where $\bar{c}_i\in\mathbb{R}^n$, $\bar{d}_i\in\mathbb{R}^n$, $\bar{A}\in\mathbb{R}^{m\times n}$, $\bar{b}\in\mathbb{R}^m$, $\bar{f}_i\in\mathbb{R}$, $\bar{g}_i\in\mathbb{R}$, $i=1,2,\ldots,p$; each element of $\bar{c}_i$, $\bar{d}_i$, and $\bar{A}$ is randomly generated from the interval $[0,10]$; each element of $\bar{b}$ is equal to 10, and each element of $\bar{f}_i$ and $\bar{g}_i$ is randomly generated from the interval $[0,1]$.

    Problem 2. (Li et al. [41])

    $$\left\{\begin{array}{ll}
    \min & \sum\limits_{i=1}^{p}\dfrac{\sum_{j=1}^{n}\tilde{\gamma}_{ij}x_j+\tilde{\xi}_i}{\sum_{j=1}^{n}\tilde{\delta}_{ij}x_j+\tilde{\eta}_i},\\
    \text{s.t.} & \tilde{A}x\le \tilde{b},\ x\ge 0,
    \end{array}\right.$$

    where $\tilde{\gamma}_{ij},\tilde{\xi}_i,\tilde{\delta}_{ij},\tilde{\eta}_i\in\mathbb{R}$, $i=1,2,\ldots,p$, $j=1,2,\ldots,n$; $\tilde{A}\in\mathbb{R}^{m\times n}$, $\tilde{b}\in\mathbb{R}^{m}$; all $\tilde{\gamma}_{ij}$ and $\tilde{\delta}_{ij}$ are randomly generated from $[-0.1,0.1]$; all elements of $\tilde{A}$ are randomly generated from $[0.01,1]$; all elements of $\tilde{b}$ are equal to 10; and all $\tilde{\xi}_i$ and $\tilde{\eta}_i$ satisfy

    $$\sum_{j=1}^{n}\tilde{\gamma}_{ij}x_j+\tilde{\xi}_i>0$$

    and

    $$\sum_{j=1}^{n}\tilde{\delta}_{ij}x_j+\tilde{\eta}_i>0.$$
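One simple, conservative way to realize the positivity conditions above is to bound $\sum_j x_j$ over the feasible set (each entry of $\tilde{A}$ is at least 0.01 and each entry of $\tilde{b}$ is 10, so $\sum_j x_j\le 1000$) and pick the constants $\tilde{\xi}_i,\tilde{\eta}_i$ above the resulting worst case. The paper does not prescribe this particular construction; it is one sketch that satisfies the stated conditions.

```python
import numpy as np

def random_problem2(p, m, n, rng):
    """Draw one random instance of Problem 2 with the positivity
    conditions enforced by a conservative choice of xi and eta."""
    gamma = rng.uniform(-0.1, 0.1, size=(p, n))  # numerator coefficients
    delta = rng.uniform(-0.1, 0.1, size=(p, n))  # denominator coefficients
    A = rng.uniform(0.01, 1.0, size=(m, n))
    b = np.full(m, 10.0)
    # Every entry of A is >= 0.01 and b = 10, so sum(x) <= 10 / 0.01 = 1000
    # on the feasible set; hence |gamma_i . x| <= 0.1 * 1000 = 100.
    bound = 0.1 * (10.0 / 0.01)
    xi = np.full(p, bound + 1.0)    # gamma_i . x + xi_i > 0 everywhere feasible
    eta = np.full(p, bound + 1.0)   # delta_i . x + eta_i > 0 everywhere feasible
    return gamma, delta, A, b, xi, eta
```

Any other choice of $\tilde{\xi}_i,\tilde{\eta}_i$ exceeding the same bound works equally well.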

    From the results in Table 2, for Problem 1 with a large number of variables, we first observe that the algorithm proposed by Jiao and Liu [39] is more time-consuming than Algorithm 1. In particular, for $(p,m,n)=(3,100,3000)$, $(3,100,5000)$, $(3,100,7000)$, $(3,100,10000)$ and $(4,100,1000)$, the algorithm of Jiao and Liu [39] failed to solve all 50 independently generated instances within 10000 s, whereas Algorithm 1 obtained the global optimal solution of test Problem 1 with higher computational efficiency. Second, the computational efficiency of Algorithm 1 is superior to that of the algorithm of Li et al. [41] in all cases.

    From the numerical comparisons for Problem 2 in Table 3, the computational efficiency of Algorithm 1 is likewise superior to that of the algorithm of Li et al. [41] in all cases.

    From the numerical comparisons in Tables 1–3, we can conclude that Algorithm 1 globally solves the sum of affine ratios problem, obtaining global optimal solutions and optimal values with high computational efficiency.

    This paper studied the sum of affine ratios problem and presented an outcome-space branch-and-bound algorithm, in which a novel linearization technique was proposed for constructing the affine relaxation problem of the equivalent problem. Moreover, the computational complexity of the algorithm was analyzed, and the maximum number of iterations was derived: Algorithm 1 can find an $\epsilon$-global optimal solution in at most

    $$(p-1)\left\lceil \log_{2}\frac{(p-1)\,\omega\,\Upsilon(S^{0})}{\epsilon}\right\rceil$$

    iterations. Numerical comparisons show the effectiveness and superiority of Algorithm 1. Future work will extend Algorithm 1 to solve the sum of nonlinear ratios problem.
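To make the bound concrete, it can be evaluated numerically. The sketch below reads the bracket as a ceiling; the values of $\omega$ and $\Upsilon(S^{0})$ are made-up placeholders, since these two quantities come from the paper's complexity analysis and are not specified numerically here.

```python
import math

def max_iterations(p, omega, upsilon_s0, eps):
    """Worst-case iteration count (p-1) * ceil(log2((p-1)*omega*Upsilon(S0)/eps))."""
    return (p - 1) * math.ceil(math.log2((p - 1) * omega * upsilon_s0 / eps))

# Illustrative (hypothetical) values: p = 3, omega = 1, Upsilon(S0) = 100.
print(max_iterations(p=3, omega=1.0, upsilon_s0=100.0, eps=1e-6))  # 56
```

The bound grows only logarithmically in $1/\epsilon$, which matches the moderate iteration counts reported in Tables 2 and 3.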

    Yan Shi: formal analysis, investigation, resources, methodology, writing-original draft, validation, and data curation; Qunzhen Zheng: formal analysis, investigation, writing-review & editing, software, data curation, conceptualization, supervision, project administration; Jingben Yin: project administration, methodology, validation, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the Key Scientific and Technological Research Projects in Henan Province (202102210147, 192102210114).

    The authors declare no conflicts of interest.

    Test Examples 1–12 are given as follows:

    Example 1. (Benson [34])

    $$\left\{\begin{array}{ll}
    \min & f(x)=\dfrac{-3.333x_1-3x_2-1}{1.666x_1+x_2+1}+\dfrac{-4x_1-3x_2-1}{x_1+x_2+1},\\
    \text{s.t.} & 5x_1+4x_2\le 10,\ x_1\ge 0.1,\ x_2\ge 0.1,\\
    & 2x_1-x_2\le 2,\ x_1,x_2\ge 0.
    \end{array}\right.$$

    Example 2. (Phuong and Tuy [14] and Shen et al. [35])

    $$\left\{\begin{array}{ll}
    \max & \dfrac{3x_1+x_2-2x_3+0.8}{2x_1-x_2+x_3}+\dfrac{4x_1-2x_2+x_3}{7x_1+3x_2-x_3},\\
    \text{s.t.} & x_1+x_2-x_3\le 1,\ -x_1+x_2-x_3\le -1,\\
    & 12x_1+5x_2+12x_3\le 34.8,\\
    & 12x_1+12x_2+7x_3\le 29.1,\\
    & -6x_1+x_2+x_3\le -4.1.
    \end{array}\right.$$

    Example 3. (Shen et al. [35], Shen and Wang [36])

    $$\left\{\begin{array}{ll}
    \max & \dfrac{3x_1+4x_2+50}{3x_1+5x_2+4x_3+50}+\dfrac{3x_1+5x_2+3x_3+50}{5x_1+5x_2+4x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}+\dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50},\\
    \text{s.t.} & 6x_1+3x_2+3x_3\le 10,\ 10x_1+3x_2+8x_3\le 10,\ x_1,x_2,x_3\ge 0.
    \end{array}\right.$$

    Example 4. (Shen et al. [35])

    $$\left\{\begin{array}{ll}
    \min & \dfrac{-x_1+2x_2+2}{3x_1-4x_2+5}+\dfrac{4x_1-3x_2+4}{-2x_1+x_2+3},\\
    \text{s.t.} & x_1+x_2\le 1.5,\ x_1-x_2\le 0,\\
    & 0\le x_1\le 1,\ 0\le x_2\le 1.
    \end{array}\right.$$

    Example 5. (Shen and Lu [36], Gao and Jin [38])

    $$\left\{\begin{array}{ll}
    \min & \dfrac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\dfrac{3x_1+4x_2+50}{4x_1+3x_2+2x_3+50}+\dfrac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50},\\
    \text{s.t.} & 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+2x_3\le 10,\\
    & 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.
    \end{array}\right.$$

    Example 6. (Shen and Lu [37])

    $$\left\{\begin{array}{ll}
    \max & \dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\dfrac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\dfrac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50},\\
    \text{s.t.} & 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+3x_3\le 10,\\
    & 5x_1+9x_2+2x_3\le 10,\ 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.
    \end{array}\right.$$

    Example 7. (Shen and Lu [37], Gao and Jin [38])

    $$\left\{\begin{array}{ll}
    \min & \dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\dfrac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{x_1+5x_2+5x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50},\\
    \text{s.t.} & 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+3x_3\le 10,\\
    & 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.
    \end{array}\right.$$

    Example 8. (Shen and Lu [37])

    $$\left\{\begin{array}{ll}
    \max & \dfrac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\dfrac{3x_1+4x_2+50}{4x_1+3x_2+2x_3+50}+\dfrac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50},\\
    \text{s.t.} & 6x_1+3x_2+3x_3\le 10,\ 10x_1+3x_2+8x_3\le 10,\ x_1,x_2,x_3\ge 0.
    \end{array}\right.$$

    Example 9. (Shen and Lu [37], Gao and Jin [38])

    $$\left\{\begin{array}{ll}
    \min & \dfrac{37x_1+73x_2+13}{13x_1+13x_2+13}+\dfrac{63x_1-18x_2+39}{13x_1+26x_2+13},\\
    \text{s.t.} & 5x_1-3x_2=3,\ 1.5\le x_1\le 3.
    \end{array}\right.$$

    Example 10. (Jiao and Liu [39], Jiao et al. [40], Shen and Wang [36])

    $$\left\{\begin{array}{ll}
    \max & \dfrac{4x_1+3x_2+3x_3+50}{3x_2+2x_3+50}+\dfrac{3x_1+4x_2+50}{4x_1+4x_2+5x_3+50}+\dfrac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50},\\
    \text{s.t.} & 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+3x_3\le 10,\\
    & 5x_1+9x_2+2x_3\le 10,\ 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.
    \end{array}\right.$$

    Example 11. (Shen and Wang [36], Shi [21])

    $$\left\{\begin{array}{ll}
    \max & \dfrac{37x_1+73x_2+13}{13x_1+13x_2+13}+\dfrac{63x_1-18x_2+39}{13x_1-26x_2-13}+\dfrac{13x_1+13x_2+13}{63x_1-18x_2+39}+\dfrac{13x_1+26x_2+13}{-37x_1-73x_2-13},\\
    \text{s.t.} & 5x_1-3x_2=3,\ 1.5\le x_1\le 3.
    \end{array}\right.$$

    Example 12. (Shi [21])

    $$\left\{\begin{array}{ll}
    \max & \dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\dfrac{3x_1+4x_3+50}{4x_1+4x_2+5x_3+50}+\dfrac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50},\\
    \text{s.t.} & 2x_1+x_2+5x_3\le 10,\ x_1+6x_2+2x_3\le 10,\\
    & 9x_1+7x_2+3x_3\le 10,\ x_1,x_2,x_3\ge 0.
    \end{array}\right.$$
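All of the test examples above share the same shape: a sum of linear ratios over linear constraints. A small evaluator for such objectives can be sketched as follows, shown on the coefficient data of Example 5; the helper name `sum_of_ratios` and the row layout (coefficients followed by the constant term) are our own conventions, not from the paper.

```python
import numpy as np

def sum_of_ratios(x, num, den):
    """Evaluate sum_i (num_i . (x, 1)) / (den_i . (x, 1)) at the point x.

    Each row of num/den is (a_1, ..., a_n, constant) for one affine function.
    """
    x = np.asarray(x, dtype=float)
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    vals_num = num[:, :-1] @ x + num[:, -1]
    vals_den = den[:, :-1] @ x + den[:, -1]
    return float(np.sum(vals_num / vals_den))

# Example 5: three ratios in the variables x1, x2, x3.
num = [[3, 5, 3, 50], [3, 4, 0, 50], [4, 2, 4, 50]]
den = [[3, 4, 5, 50], [4, 3, 2, 50], [5, 4, 3, 50]]
print(sum_of_ratios([0, 0, 0], num, den))  # 50/50 + 50/50 + 50/50 = 3.0
```

At the origin every ratio reduces to 50/50, so the objective value is 3.0; the same helper can be pointed at any of Examples 3–12 by swapping in their coefficient rows.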


  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)