Research article

The first hitting time analysis of evolutionary algorithms based on renewal process

  • Running time analysis of evolutionary algorithms for continuous optimization is a research challenge in the field of evolutionary algorithms (EAs). However, theoretical analysis results have rarely been applied to evolutionary algorithms for continuous optimization in practice, let alone to their evolution strategy variants. In this paper, on the basis of the renewal process and in combination with Wald's inequality and stopping time theory, we regarded the first hitting time of evolution strategy as the stopping time of a renewal process. Afterwards, to demonstrate the application of the proposed model to the first hitting time analysis of (1 + 1) ES, we analyzed it with different mutation operators on the sphere function. First, we significantly improved the lower bound on the first hitting time of (1 + 1) ES with a uniform mutation operator, i.e., from $\Omega(n)$ to $\Omega(e^{cn})$. Next, $O(n2^n)$ was shown to be the upper bound on the first hitting time of (1 + 1) ES with a Gaussian mutation operator from the initial distance $R$ to half of the initial distance $R/2$. The numerical experimental results showed that the theoretical calculation was consistent with the actual running time, which provides a novel method for analyzing the first hitting time of EAs.

    Citation: Zhensheng Zhou, Lin Wang, Xue Zou, Fei Wang, Zaijun Zhang, Xiaobo Yan. The first hitting time analysis of evolutionary algorithms based on renewal process[J]. Electronic Research Archive, 2024, 32(5): 2994-3015. doi: 10.3934/era.2024137




    Evolutionary algorithms (EAs) are a class of adaptive, global search algorithms inspired by natural evolutionary processes, mainly including genetic algorithms (GA), evolution strategy (ES), genetic programming (GP), and evolution programming (EP) [1]. EAs have been widely applied in practical domains such as black-box optimization and combinatorial optimization, yet theoretical analysis results remain relatively limited. As a commonly used indicator of computational time, the first hitting time is the number of generations when EAs find the optimal solution or a satisfactory solution for the first time [2]. Moreover, the expected first hitting time is the average number of fitness function evaluations when EAs obtain the optimal or a satisfactory solution for the first time, which evaluates the performance of EAs [3].

    In the past few years, most theoretical approaches to the first hitting time have focused on discrete search spaces. Drift analysis, combined with Markov chains in the analysis of the computation time complexity of EAs, was first introduced by He and Yao [4]. As drift analysis developed, multiplicative drift [5,6] and variable drift [7,8,9] have shown that the lower bound of the first hitting time is stronger than its upper bound. Sarker et al. [10] considered the first hitting time of EAs as a random variable and derived its time on a pseudo-Boolean function. Yu et al. [11] proposed a switch analysis method for estimating the time bound of EAs by combining two Markov chains, one of which served as a reference chain. Qian et al. [12] transformed bit-by-bit noise into one-bit noise and discussed the first hitting time of EAs on the OneMax and LeadingOnes functions. Wegener et al. [13] analyzed the running time of EAs on pseudo-Boolean functions using tail inequalities. In these works, however, there have been few theoretical analyses of EAs for continuous optimization. Huang et al. [14] combined statistical methods with the average gain model to obtain an empirical distribution function for the expected gain and derived a closed-form expression for the upper bound. Feng et al. [15] established an equivalence relationship to analyze the effect of selection operators on the computation time of EAs. To generalize the drift analysis model, Morinaga et al. [16] introduced a stochastic process with a σ-algebra filtration into the model. Jägersküpper [17] combined drift analysis with Markov chains, treating the drift as a Markov chain, and analyzed the first hitting time of EAs on linear functions. Many of these important theoretical results focus primarily on simplified versions rather than on the (1 + 1) ES itself, which stems from the difficulties posed by population-based characteristics and the intricate adaptive selection strategies involved.

    Agapie et al. [18] combined the transition kernel with the success region where each generation of (1 + 1) ES arrives and obtained the cumulative distribution function of the success probability, which was used to estimate the upper and lower bounds of the first hitting time for (1 + 1) ES with a uniform mutation operator on the sphere function. However, when the gap between the population fitness value and the optimal fitness value was substantial, they approximated the intersection of the fitness sphere and the mutation sphere as half of the mutation sphere. These approximations resulted in an inflated expected progress rate, ultimately causing significant looseness between the lower bound of the first hitting time of (1 + 1) ES on the sphere function and its actual running time. Morinaga et al. [19] proved that the expected hitting time of (1 + 1) ES on convex quadratic functions was Θ(exp(1/d)). Zhang et al. [20] analyzed the upper bound of the expected hitting time of (1 + 1) ES on the sphere function by combining martingale theory with the Lebesgue dominated convergence theorem, but they did not provide a corresponding theoretical lower bound analysis. Agapie et al. [21] combined the reduction of the distance from the $(t-1)$-th generation population to the optimal solution with differential equations and obtained the lower bound $\Omega(n\log(\frac{R_0}{\varepsilon}))$ on the hitting time of (1 + 1) ES with a Gaussian mutation operator on the sphere function. Doerr et al. [22] established a dynamic drift model to analyze the relationship between the evolution strategy parameter selection and the first hitting time and obtained the expected first hitting time $\Theta(n\lambda/\log\lambda+n\log n)$ on the OneMax function. Akimoto et al. [23] improved additive drift by treating the progress rate as a monotonically decreasing function, and showed that the convergence rate of the 1/5 success rule (1 + 1) ES solving the sphere function was $\Theta(1/d)$.
    Jägersküpper [24] hypothesized that individuals were generated through anisotropic mutation and further obtained the asymptotic running time of (1 + 1) ES with a Gaussian mutation operator solving the sphere function. Although many achievements have been made in the analysis of the first hitting time of (1 + 1) ES on the sphere function, few theoretical results pay attention to its upper bound with a Gaussian mutation operator. Meanwhile, due to the simplified relationship between the sphere objective function and the mutation sphere, its lower bound with a uniform mutation operator is not tight.

    The main innovations and contributions of this paper can be outlined as follows:

    1) The proposed method is based on a non-negative progress rate stochastic process, which is conducive to separating specific cases and algorithms, making the model more general.

    2) The intersection of the fitness sphere and the mutation sphere is considered the success region where each generation of (1 + 1) ES arrives, and a tighter lower bound on the first hitting time with a uniform mutation operator is obtained.

    3) The effectiveness of the 1/5 success rule of the (1 + 1) ES is verified, and its upper bound with a Gaussian mutation operator is obtained.

    The structure of this paper is as follows. Some preliminary knowledge on the renewal theorem and stopping time is introduced, and then the renewal model is proposed in Section 2. The lower bound of the expected first hitting time of (1 + 1) ES with a uniform mutation operator is analyzed in Section 3. The upper bound of the expected first hitting time of (1 + 1) ES with a Gaussian mutation operator is analyzed in Section 4. Numerical experiments and results are reported in Section 5. Finally, Section 6 draws the conclusion.

    In this section, we introduce the general process framework of (1 + 1) ES, renewal process, stopping time, and mathematical modeling of the algorithm.

    We consider minimizing the sphere function in this paper. Its corresponding mathematical expression is as follows:

    $$f(x)=\sum_{i=1}^{n}x_i^2,\quad \text{s.t. } x\in Q\subseteq\mathbb{R}^n, \tag{1}$$

    where $f(x)$ is the fitness function and $Q$ is the search space. Let $\xi_t=\{\xi_{t1},\xi_{t2},\dots,\xi_{tn}\}\in Q$ be the $t$-th generation population of EAs, and let $n$ represent the problem size. The process of (1 + 1) ES can be described as follows [23]:

    Algorithm 1: (1 + 1) ES with 1/5 success rule
    1. Input: Initialize solution $\xi_0$ and step size $\sigma_0$;
    2. while t = 1, 2, …, until some stopping criterion is fulfilled do
    3. mutation: $\xi_t:=\xi_{t-1}+\sigma_t\Delta_t$;
    4. evaluate solution $\xi_t$;
    5. selection: if $f(\xi_{t-1})<f(\xi_t)$, then
    6. $\xi_t:=\xi_{t-1}$;
    7. $\sigma_t:=\sigma_{t-1}\cdot A$;
    8. else
    9. $\xi_t:=\min\{\xi_t,\xi_{t-1}\}$;
    10. end while
    11. output the satisfactory solution.

    In this paper, Algorithm 1 includes mutation and selection. $\Delta_t$ uses a uniform mutation operator or a Gaussian mutation operator; selection is elitist, that is, the better of the parent and offspring individuals is kept as the population individual of the next generation. Within a window of $G$ iterations, the number of successful iterations $G_s$ is the number of iterations satisfying $f(\xi_t)<f(\xi_{t-1})$. Let $p_s=G_s/G$; if $p_s>1/5$, the mutation strength is reduced, i.e., $A=\alpha$; if $p_s<1/5$, the mutation strength is increased, i.e., $A=\alpha^{-1/4}$. Otherwise, it remains unchanged. Generally speaking, $0.85\le\alpha<1$ [25]. However, the main result presented in this paper, i.e., Theorem 3, is independent of such a specific implementation and can be applied to almost any stochastic search algorithm.
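    As a concrete illustration, the loop of Algorithm 1 can be sketched as follows. This is only a minimal sketch: the window length G, the factor alpha, and the update direction (step size enlarged when the success rate exceeds 1/5, as in the classical Rechenberg rule) are illustrative choices of this sketch, not necessarily the paper's exact settings.

    ```python
    import random

    def one_plus_one_es(n, sigma0=1.0, alpha=0.85, G=10, max_iter=2000, eps=1e-3, seed=1):
        """Minimal (1+1)-ES with a 1/5 success rule on the sphere function."""
        rng = random.Random(seed)
        x = [5.0] + [0.0] * (n - 1)          # initial solution xi_0
        sigma = sigma0
        f = lambda v: sum(c * c for c in v)  # sphere function, Eq (1)
        successes = 0
        for t in range(1, max_iter + 1):
            y = [c + sigma * rng.gauss(0.0, 1.0) for c in x]  # mutation
            if f(y) < f(x):                  # elitist selection
                x = y
                successes += 1
            if t % G == 0:                   # 1/5 rule on a window of G steps
                ps = successes / G
                if ps > 0.2:
                    sigma /= alpha           # enlarge step after many successes
                elif ps < 0.2:
                    sigma *= alpha           # shrink step after few successes
                successes = 0
            if f(x) <= eps:
                return t, f(x)
        return max_iter, f(x)
    ```

    With a fixed seed the run is reproducible, which is convenient when averaging hitting times over repeated runs as in Section 5.
    
    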

    We consider the gradual process of EAs from their initial position to a satisfactory solution as a renewal process and establish a renewal model, which differs from previous theoretical results on intelligent optimization algorithms [26,27]. Before establishing the model, the following definitions need to be given.

    Definition 1 (fitness difference function). Let $d(\xi_t^{opt})$, $t>0$, be a random variable; then $d(\xi_t^{opt})=f(\xi_t^{opt})-f_{opt}$ is called the fitness difference function, where $\xi_t^{opt}$ represents the optimal individual of the $t$-th generation and $f_{opt}$ represents the target fitness value. It measures the distance of the $t$-th generation population to the optimal solution.

    Definition 2 (progress rate) [28]. During the running of EAs, the progress rate represents the decrease in the optimal distance between the $(t-1)$-th and $t$-th generation populations, i.e., $X_t=d(\xi_{t-1}^{opt})-d(\xi_t^{opt})$, $t=1,2,\dots$, and the quality gain represents the decrease in fitness between the $(t-1)$-th and $t$-th generation populations, which reflects the convergence time of EAs. The larger the progress rate, the faster the distance to the optimal solution decreases, and the shorter the convergence time. Obviously, $\{X_t\}_{t=0}^{\infty}$ is a non-negative stochastic process, and the following definition applies.

    Definition 3 (renewal process of progress rate). Let $\{X_t\}_{t=0}^{\infty}$ be a non-negative stochastic process. If $Y_t=\sum_{n=1}^{t}X_n=d(\xi_0)-d(\xi_t^{opt})$ holds for any $t=1,2,\dots$, then $Y_t$ is referred to as the renewal process of the progress rate of EAs in the $t$-th generation.

    In stochastic algorithms, the first hitting time is an effective way to characterize performance, i.e., the number of generations when EAs find a satisfactory solution for the first time. It can be described by the stopping time of the renewal process, which is defined as follows.

    Definition 4 (stopping time of renewal process). Let $\{X_t\}_{t=0}^{\infty}$ be a non-negative stochastic process. If $T_\tau=\{\tau\,|\,\sum_{k=1}^{\tau-1}X_k<d(\xi_0)-\varepsilon,\ \sum_{k=1}^{\tau}X_k\ge d(\xi_0)-\varepsilon\}$ is satisfied, then $T_\tau$ is described as a stopping time of $X_t$, where $\tau$ is a non-negative random variable.

    The stopping time of the renewal process is an important concept in stochastic processes: the event $\{T_\tau=t\}$ is independent of the states $\{X_t\}_{t=T_\tau}^{\infty}$ of the progress rate process after time $T_\tau$ and depends only on the states $\{X_t\}_{t=0}^{T_\tau}$ before that time.

    Theorem 1 (Wald's equation). If $\{X_t\}_{t=0}^{\infty}$ are independent and identically distributed random variables with common distribution $X$ and finite expectation $E(X)<\infty$, and $T_\tau$ is a stopping time for $\{X_t\}_{t=0}^{\infty}$ such that $E(T_\tau)<\infty$, then

    $$E\Big(\sum_{t=1}^{T_\tau}X_t\Big)=E(T_\tau)E(X). \tag{2}$$

    Proof. Let $I_t=\begin{cases}1,&\text{if }T_\tau\ge t;\\0,&\text{if }T_\tau<t.\end{cases}$ Because $\{I_t=1\}=\{T_\tau\ge t\}=\big(\bigcup_{k=1}^{t-1}\{T_\tau=k\}\big)^{c}$ is determined by $X_1,\dots,X_{t-1}$ only, the indicator $I_t$ and $X_t$ are independent of each other. Therefore,

    $$E\Big(\sum_{t=1}^{T_\tau}X_t\Big)=E\Big(\sum_{t=1}^{\infty}X_tI_t\Big)=\sum_{t=1}^{\infty}E(X_t)E(I_t)=E(X)\sum_{t=1}^{\infty}P(T_\tau\ge t)=E(X)E(T_\tau).$$

    In Theorem 1, it holds true that the expectation of the renewal process of the progress rate is equal to the expected first hitting time multiplied by the expected progress rate.
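    Wald's equation can be checked numerically on a toy renewal process. In the sketch below, the Uniform(0,1) increments and the unit threshold are illustrative assumptions of ours, not part of the paper's model; the point is only that the sample mean of $\sum_{t=1}^{T_\tau}X_t$ matches $E(T_\tau)E(X)$.

    ```python
    import random

    def wald_check(threshold=1.0, runs=20000, seed=7):
        """Monte Carlo check of Wald's equation for i.i.d. Uniform(0,1)
        increments X_t and the stopping time
        T = min{t : X_1 + ... + X_t >= threshold}."""
        rng = random.Random(seed)
        total_T, total_sum = 0, 0.0
        for _ in range(runs):
            s, t = 0.0, 0
            while s < threshold:
                s += rng.random()
                t += 1
            total_T += t
            total_sum += s
        ET = total_T / runs      # estimate of E(T)
        ES = total_sum / runs    # estimate of E(sum of X_t up to T)
        EX = 0.5                 # E(X) for Uniform(0,1)
        return ES, ET * EX
    ```

    The two returned estimates agree up to Monte Carlo error, as Theorem 1 predicts.
    
    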

    Inference 1 (Wald's inequality). According to Theorem 1, in the running of EAs, if $T_\tau$ is the stopping time of $\{X_t\}_{t=0}^{\infty}$ and there are two bounds $M_1$ and $M_2$ such that $M_1<E(X_t)<M_2$ holds, then $E\big(\sum_{t=1}^{T_\tau}X_t\big)$ satisfies:

    $$M_1E(T_\tau)<E\Big(\sum_{t=1}^{T_\tau}X_t\Big)<M_2E(T_\tau). \tag{3}$$

    We can derive the renewal theory theorem for the expected first hitting time based on Inference 1, as shown in Theorem 2.

    Theorem 2 (renewal theorem). If $\{X_t\}_{t=0}^{\infty}$ are independent and identically distributed random variables with $M_1<E(X)<M_2$, and $T_\tau$ is a stopping time for $\{X_t\}_{t=0}^{\infty}$ such that $E(T_\tau)<\infty$, then

    $$\frac{1}{M_2}<\frac{E(T_\tau)}{d_{T_\tau}}<\frac{1}{M_1}, \tag{4}$$

    where $d_{T_\tau}$ denotes the distance $d(\xi_0)-\varepsilon$ covered up to the stopping time.

    Proof. For the lower bound, the stopping time $T_\tau=\{\tau\,|\,\sum_{k=1}^{\tau-1}X_k<d(\xi_0)-\varepsilon,\ \sum_{k=1}^{\tau}X_k\ge d(\xi_0)-\varepsilon\}$ in Definition 4 is equivalent to $T_\tau=\{\tau\,|\,\sum_{k=1}^{\tau}X_k>d(\xi_0)-\varepsilon,\ \sum_{k=1}^{\tau-1}X_k\le d(\xi_0)-\varepsilon\}$. It follows from Inference 1 that $d_{T_\tau}<E\big(\sum_{n=1}^{T_\tau}X_n\big)<M_2E(T_\tau)$ holds. Thereby, $\liminf_{d_{T_\tau}\to\infty}\frac{E(T_\tau)}{d_{T_\tau}}>\frac{1}{M_2}$. Now, we prove the upper bound. Let us fix a constant $a$ and define the truncated renewal process $X_t^r=\begin{cases}X_t,&\text{if }X_t\le a;\\a,&\text{if }X_t>a,\end{cases}$ and $\mu_a=\min\{M_1,\inf E(X_t^r)\}$. Since $0<X_t^r\le X_t$ and $0<\mu_a\le E(X_t^r)<M_2$ for all $t$, it follows that $E\big(\sum_{n=1}^{T^r}X_n^r\big)\le E\big(\sum_{n=1}^{T^r}X_n\big)$, i.e., $E(T^r)\ge E(T_\tau)$. Due to Inference 1, this yields $\mu_aE(T_\tau)\le\mu_aE(T^r)\le E\big(\sum_{n=1}^{T^r}X_n^r\big)<d_{T_\tau}+a$, i.e., $\limsup_{d_{T_\tau}\to\infty}\frac{E(T_\tau)}{d_{T_\tau}}<\frac{1}{M_1}$; thus $\frac{1}{M_2}<\frac{E(T_\tau)}{d_{T_\tau}}<\frac{1}{M_1}$.

    In the running of EAs, due to the difficulty of finding the exact optimal solution, the algorithm terminates when the fitness difference function value of the parent individual to the optimal solution equals $\varepsilon$; that is, it reaches the specific stopping distance $Y_t=E\big(\sum_{t=1}^{T_\tau}X_t\big)=d(\xi_0)-\varepsilon$. Theorem 3 then gives the stochastic process model for the expected first hitting time analysis.

    Theorem 3 (renewal model). According to Theorem 2, in the running of EAs, if $T_\tau$ is the stopping time of $\{X_t\}_{t=0}^{\infty}$ and there exist two bounds $M_1$ and $M_2$ such that $M_1<E(X_t)<M_2$ when EAs pass through the interval $[\varepsilon,d(\xi_0)]$, then it holds for $T_\tau$ that

    $$\frac{d(\xi_0)-\varepsilon}{M_2}<E(T_\tau)<\frac{d(\xi_0)-\varepsilon}{M_1}. \tag{5}$$

    Theorem 3 can be obtained from the proof of the additive drift theorem of Kötzing [29], which is a continuous-space adaptation of the original discrete-space drift result of He and Yao [30]. In Theorem 3, the upper and lower bounds on the expected progress rate vary as the bounds $M_1$ and $M_2$, which results in tighter upper and lower bounds on the first hitting time. Sections 3 and 4 analyze the expected first hitting time on the sphere function using the uniform and Gaussian mutation operators, respectively.
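    Theorem 3 can be illustrated on a toy process whose per-step progress is bounded between $M_1$ and $M_2$; the Uniform($M_1$, $M_2$) increment distribution below is an illustrative assumption of this sketch, not the ES itself.

    ```python
    import random

    def hitting_time_bounds(d0=10.0, eps=0.5, m1=0.2, m2=0.8, runs=5000, seed=3):
        """Illustration of Theorem 3: for a process whose per-step progress
        X_t is Uniform(m1, m2) (so M1 < E(X_t) < M2), the mean first hitting
        time over the interval [eps, d0] lies between (d0-eps)/M2 and (d0-eps)/M1."""
        rng = random.Random(seed)
        total = 0
        for _ in range(runs):
            d, t = d0, 0
            while d > eps:
                d -= rng.uniform(m1, m2)   # progress rate X_t in (M1, M2)
                t += 1
            total += t
        mean_T = total / runs
        lower = (d0 - eps) / m2
        upper = (d0 - eps) / m1
        return lower, mean_T, upper
    ```

    The empirical mean hitting time always falls strictly inside the two renewal-model bounds of Eq (5).
    
    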

    In this section, we prove that there is an exponential lower bound on the expected first hitting time when the (1 + 1) ES with a uniform mutation operator is used to solve the sphere function minimization problem, which will be given in Theorem 4.

    Theorem 4. In the running of the (1 + 1) ES with elitist selection on the sphere function in n-dimensional Euclidean space, when the fitness difference function value goes from $R$ to $R/2-\varepsilon$, the first hitting time $T_\tau$ satisfies:

    $$E(T_\tau)=\Omega(e^{cn}). \tag{6}$$

    Due to the symmetry of the sphere, we assume that the current individual lies on the $ox$ axis at point $o$, at distance $R$ from the center of the fitness hypersphere. The $n$-dimensional sphere with center $o$ and radius $\rho$ is the mutation sphere $E_1$, which contains the set of positions that the next-generation individual can reach. At the same time, the fitness hypersphere $E_2$ contains all possible solutions better than the current individual, as shown in Figure 1. However, during the running of (1 + 1) ES, not all next-generation individuals generated by the mutation operator are superior to the previous-generation individual. Therefore, we regard the union of the fitness spherical cap $A_2$ and the mutation spherical cap $A_1$ as the region reached by successful next-generation individuals, $S_0=E_1\cap E_2=A_1\cup A_2$, i.e., the success region; the two spherical caps are cut off by the same hyperplane $P$. The region is irregular and the calculation of its volume is relatively complicated, so we will calculate the volume of each spherical cap by means of the spherical sector and the spherical cone. Next, we give some important conclusions.

    Figure 1.  Success region of elitist ES on sphere.

    Suppose that in the $n$-dimensional space, the uniform mutation operator is uniformly distributed in the sphere of radius $\rho$; then its corresponding volume is denoted as

    $$Vol_n(\rho)=\frac{\pi^{\frac{n}{2}}\rho^n}{\Gamma(\frac{n}{2}+1)},\qquad D_n(\rho)=\{x\in\mathbb{R}^n:\ \|x\|<\rho\}, \tag{7}$$

    and the probability density function of the sphere with radius ρ is

    $$f(x)=\frac{1}{Vol_n(\rho)}=\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}I_{D_n(\rho)}(x), \tag{8}$$

    where $I_{D_n(\rho)}(x)$ is the indicator function: it equals 1 when $x$ lies in the sphere of radius $\rho$, and 0 otherwise. For more details, see [31].
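    Equation (7) is easy to sanity-check against the familiar low-dimensional volumes (the disk area $\pi\rho^2$ for $n=2$ and the ball volume $\frac{4}{3}\pi\rho^3$ for $n=3$):

    ```python
    import math

    def ball_volume(n, rho):
        """Volume of the n-ball of radius rho, Eq (7)."""
        return math.pi ** (n / 2) * rho ** n / math.gamma(n / 2 + 1)
    ```

    For example, `ball_volume(2, 1.0)` recovers the unit-disk area and `ball_volume(3, 2.0)` the volume of the radius-2 ball.
    
    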

    Lemma 1. Let the radius of the hypersphere containing the spherical cap be r and its height be h. Then, the volume of the spherical sector is

    $$Vol_s^n(r)=\frac{1}{n}S_{cap}\,r, \tag{9}$$

    where Scap is the area of the spherical cap and n represents the dimension of the hypersphere.

    Lemma 2. Let the radius of the hypersphere containing the spherical cap be $r$ and its height be $h$, $n>3$. Then, the area of the spherical cap is

    $$S_{cap}=\begin{cases}\dfrac{2\pi^{\frac{n-1}{2}}r^{n-2}}{\Gamma(\frac{n-1}{2})}\dfrac{(n-3)!!}{(n-2)!!}\Big[h-(r-h)\displaystyle\sum_{m=1}^{\frac{n-3}{2}}\dfrac{(2m-1)!!}{(2m)!!}\Big(\dfrac{2hr-h^2}{r^2}\Big)^m\Big],& n\ \text{odd};\\[3mm]\Big(\pi^{\frac{n+1}{2}}r^{n-1}-\pi^{\frac{n-1}{2}}r^{n-1}\arccos\Big(\dfrac{\sqrt{2rh-h^2}}{r}\Big)-\pi^{\frac{n-1}{2}}r^{n-2}(r-h)\displaystyle\sum_{m=1}^{\frac{n-2}{2}}\dfrac{(2m-2)!!}{(2m-1)!!}\Big(\dfrac{\sqrt{2rh-h^2}}{r}\Big)^{2m-1}\Big)\dfrac{2}{\Gamma(\frac{n-1}{2})}\dfrac{(n-3)!!}{(n-2)!!},& n\ \text{even}.\end{cases} \tag{10}$$

    Lemma 3. Let the radius of the bottom surface of the spherical cap be rcap and its height be h. Then, the volume of the spherical cone is

    $$Vol_c^n(r_{cap})=\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}(r_{cap})^{n-1}(r-h). \tag{11}$$

    Theorem 5. Let the radius of the hypersphere containing the spherical cap be r, its radius of the bottom surface be rcap, and its height be h. Then, the volume of the spherical cap is

    $$Vol=Vol_s^n(r)-Vol_c^n(r_{cap})=\frac{1}{n}S_{cap}\,r-\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}(r_{cap})^{n-1}(r-h). \tag{12}$$
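    Theorem 5 can be sanity-checked in three dimensions, where the cap volume has the familiar closed form $\pi h^2(3r-h)/3$; the helper names below are ours, and for $n=3$ the cap area reduces to $S_{cap}=2\pi rh$ with bottom radius $r_{cap}^2=h(2r-h)$.

    ```python
    import math

    def cap_volume_n3(r, h):
        """Theorem 5 for n = 3: cap volume = spherical sector - spherical cone."""
        n = 3
        s_cap = 2 * math.pi * r * h                # 3-D cap surface area
        r_cap = math.sqrt(h * (2 * r - h))         # bottom radius of the cap
        sector = s_cap * r / n                     # Lemma 1
        cone = (math.pi ** ((n - 1) / 2) / math.gamma((n - 1) / 2 + 1)
                * r_cap ** (n - 1) * (r - h) / n)  # Lemma 3
        return sector - cone

    def cap_volume_closed(r, h):
        """Classical closed form pi*h^2*(3r - h)/3 for comparison."""
        return math.pi * h * h * (3 * r - h) / 3
    ```

    Both functions agree to machine precision, confirming the sector-minus-cone decomposition in this special case.
    
    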

    To calculate the volume of the fitness spherical cap, it is necessary to compute not only the area $S_{fcap}$ of the fitness spherical cap, but also the bottom radius $r_{cap}$ and the heights $h_2$ and $h_1$, obtained from the simultaneous equations:

    $$\begin{cases}\sum_{i=1}^{n}x_i^2=R^2;\\(x_1-R)^2+\sum_{i=2}^{n}x_i^2=\rho^2.\end{cases} \tag{13}$$

    Solving yields

    $$x_1=R\Big(1-\frac{1}{2}\Big(\frac{\rho}{R}\Big)^2\Big), \tag{14}$$

    where $x_1$ denotes the abscissa of the common hyperplane of the fitness and mutation spherical caps. Let the heights of the corresponding spherical caps be $h_2$ and $h_1$. Combining with Eq (14) yields

    $$\begin{cases}h_2=R-x_1=\frac{\rho^2}{2R};\\h_1=\rho-(R-x_1)=\rho\Big(1-\frac{\rho}{2R}\Big).\end{cases} \tag{15}$$

    From Eq (15), we obtain $h_2$ and $h_1$. Next, we compute the bottom radius of the common hyperplane of the fitness and mutation spherical caps, and it is natural to assume that the heights satisfy $0<h_1,h_2<R$. Then, the bottom radius $r_{cap}$ is

    $$r_{cap}=\sqrt{\rho^2-(R-x_1)^2}=\rho\sqrt{1-\frac{\rho^2}{4R^2}}. \tag{16}$$
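    Equations (14)–(16) can be verified numerically: the point with abscissa $x_1$ and radial coordinate $r_{cap}$ must lie on both spheres of Eq (13). The function name below is ours.

    ```python
    import math

    def cap_geometry(R, rho):
        """Eqs (14)-(16): hyperplane abscissa and cap dimensions for the
        intersection of the fitness sphere (center 0, radius R) and the
        mutation sphere (center (R,0,...,0), radius rho)."""
        x1 = R * (1 - 0.5 * (rho / R) ** 2)                   # Eq (14)
        h2 = R - x1                                           # Eq (15), fitness-cap height
        h1 = rho - (R - x1)                                   # Eq (15), mutation-cap height
        r_cap = rho * math.sqrt(1 - rho ** 2 / (4 * R ** 2))  # Eq (16)
        return x1, h2, h1, r_cap
    ```

    With the experimental values R = 5 and rho = 1, the returned point satisfies both sphere equations to machine precision.
    
    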

    Before calculating the volume of the fitness spherical cap, its area is calculated. Without loss of generality, assume that the radius of the mutation sphere intersecting the fitness sphere is $\rho$, the radius of the fitness sphere is $R$, the height satisfies $h_2<R$ with $n>3$, and the bottom radius is $r_{cap}$; then the volume of the fitness spherical cap is discussed in two cases:

    ⅰ) When n is odd:

    $$S_{fcap}=\frac{2\pi^{\frac{n-1}{2}}R^{n-2}}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big[\frac{\rho^2}{2R}-\Big(R-\frac{\rho^2}{2R}\Big)\sum_{m=1}^{\frac{n-3}{2}}\frac{(2m-1)!!}{(2m)!!}\Big(\frac{\rho^2}{R^2}-\frac{\rho^4}{4R^4}\Big)^m\Big]. \tag{17}$$

    We substitute Eqs (14)-(16) into $Vol=\frac{1}{n}S_{fcap}R-\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}(r_{cap})^{n-1}(R-h_2)$ to obtain the volume of the fitness spherical cap as:

    $$Vol_{fcap}^n=\frac{1}{n}\frac{2\pi^{\frac{n-1}{2}}R^{n-2}}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big[\frac{\rho^2}{2R}-\Big(R-\frac{\rho^2}{2R}\Big)\sum_{m=1}^{\frac{n-3}{2}}\frac{(2m-1)!!}{(2m)!!}\Big(\frac{\rho^2}{R^2}-\frac{\rho^4}{4R^4}\Big)^m\Big]R-\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}\Big(\rho\sqrt{1-\frac{\rho^2}{4R^2}}\Big)^{n-1}\Big(R-\frac{\rho^2}{2R}\Big)<\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}\rho^2R^{n-2}}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}. \tag{18}$$

    ⅱ) When n is even:

    $$S_{fcap}=\Big[\pi^{\frac{n+1}{2}}R^{n-1}-\pi^{\frac{n-1}{2}}R^{n-1}\arccos\Big(\frac{\sqrt{\rho^2-\frac{\rho^4}{4R^2}}}{R}\Big)-\pi^{\frac{n-1}{2}}R^{n-2}\Big(R-\frac{\rho^2}{2R}\Big)\sum_{m=1}^{\frac{n-2}{2}}\frac{(2m-2)!!}{(2m-1)!!}\Big(\frac{\sqrt{\rho^2-\frac{\rho^4}{4R^2}}}{R}\Big)^{2m-1}\Big]\frac{2}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}.$$

    Similarly, we substitute Eqs (14)-(16) into Eq (12) to obtain the volume of the fitness spherical cap as:

    $$Vol_{fcap}^n=\frac{1}{n}\frac{2}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big[\pi^{\frac{n+1}{2}}R^{n-1}-\pi^{\frac{n-1}{2}}R^{n-1}\arccos\Big(\frac{\sqrt{\rho^2-\frac{\rho^4}{4R^2}}}{R}\Big)-\pi^{\frac{n-1}{2}}R^{n-2}\Big(R-\frac{\rho^2}{2R}\Big)\sum_{m=1}^{\frac{n-2}{2}}\frac{(2m-2)!!}{(2m-1)!!}\Big(\frac{\sqrt{\rho^2-\frac{\rho^4}{4R^2}}}{R}\Big)^{2m-1}\Big]R-\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}\Big(\rho\sqrt{1-\frac{\rho^2}{4R^2}}\Big)^{n-1}\Big(R-\frac{\rho^2}{2R}\Big)<\frac{1}{n}\frac{2\pi^{\frac{n+1}{2}}R^n}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}. \tag{19}$$

    Assume that the radius of the fitness sphere where the mutation spherical cap intersects with the fitness sphere is R, the radius of the mutation sphere is ρ, its heights are h1<ρ and n>3, and the bottom radius is rcap. Then the volume of the mutation spherical cap is discussed in two cases:

    ⅰ) When n is odd:

    $$S_{mcap}=\frac{2\pi^{\frac{n-1}{2}}\rho^{n-2}}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big[\Big(\rho-\frac{\rho^2}{2R}\Big)-\frac{\rho^2}{2R}\sum_{m=1}^{\frac{n-3}{2}}\frac{(2m-1)!!}{(2m)!!}\Big(1-\frac{\rho^2}{4R^2}\Big)^m\Big]. \tag{20}$$

    Inserting Eqs (15), (16), and (20) into $Vol=\frac{1}{n}S_{mcap}\rho-\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}(r_{cap})^{n-1}(\rho-h_1)$, we obtain the volume of the mutation spherical cap as:

    $$Vol_{mcap}^n=\frac{1}{n}\frac{2\pi^{\frac{n-1}{2}}\rho^{n-2}}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big[\Big(\rho-\frac{\rho^2}{2R}\Big)-\frac{\rho^2}{2R}\sum_{m=1}^{\frac{n-3}{2}}\frac{(2m-1)!!}{(2m)!!}\Big(1-\frac{\rho^2}{4R^2}\Big)^m\Big]\rho-\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}\Big(\rho\sqrt{1-\frac{\rho^2}{4R^2}}\Big)^{n-1}\frac{\rho^2}{2R}<\frac{1}{n}\frac{2\pi^{\frac{n-1}{2}}\rho^{n-1}}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big(\rho-\frac{\rho^2}{2R}\Big). \tag{21}$$

    ⅱ) When n is even:

    $$S_{mcap}=\frac{2}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big[\pi^{\frac{n+1}{2}}\rho^{n-1}-\pi^{\frac{n-1}{2}}\rho^{n-1}\arccos\Big(\frac{\sqrt{4R^2-\rho^2}}{2R}\Big)-\pi^{\frac{n-1}{2}}\rho^{n-2}\frac{\rho^2}{2R}\sum_{m=1}^{\frac{n-2}{2}}\frac{(2m-2)!!}{(2m-1)!!}\Big(\frac{\sqrt{4R^2-\rho^2}}{2R}\Big)^{2m-1}\Big]. \tag{22}$$

    Inserting Eqs (15), (16), and (22) into $Vol=\frac{1}{n}S_{mcap}\rho-\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}(r_{cap})^{n-1}(\rho-h_1)$, we obtain the volume of the mutation spherical cap as:

    $$Vol_{mcap}^n=\frac{1}{n}\frac{2}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big[\pi^{\frac{n+1}{2}}\rho^{n-1}-\pi^{\frac{n-1}{2}}\rho^{n-1}\arccos\Big(\frac{\sqrt{4R^2-\rho^2}}{2R}\Big)-\pi^{\frac{n-1}{2}}\rho^{n-2}\frac{\rho^2}{2R}\sum_{m=1}^{\frac{n-2}{2}}\frac{(2m-2)!!}{(2m-1)!!}\Big(\frac{\sqrt{4R^2-\rho^2}}{2R}\Big)^{2m-1}\Big]\rho-\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2}+1)}\Big(\rho\sqrt{1-\frac{\rho^2}{4R^2}}\Big)^{n-1}\frac{\rho^2}{2R}<\frac{1}{n}\frac{2\pi^{\frac{n+1}{2}}\rho^n}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}. \tag{23}$$

    Since the probability density function of the expected progress rate is not easy to calculate accurately [32], it is difficult to calculate the expected first hitting time of the (1 + 1) ES on the sphere function directly. Therefore, we estimate it by calculating the sum of the volume of the mutation spherical cap and the volume of the fitness spherical cap, which provides a theoretical basis for improving the estimate of the lower bound of the first hitting time of (1 + 1) ES on the sphere function.

    The position of the next-generation individual generated by (1 + 1) ES on the sphere function lies in the gray part of Figure 1; i.e., the irregular success region is the union of the regions under the spherical caps $A_1$ and $A_2$:

    $$\begin{cases}D=\{(x,v)\in A_1\cup A_2\};\\A_2=\Big\{(x,v)\,\Big|\,\frac{\rho^2}{2R}\le x\le\rho,\ 0\le v\le\sqrt{\rho^2-x^2}\Big\};\\A_1=\Big\{(x,v)\,\Big|\,0\le x\le\frac{\rho^2}{2R},\ 0\le v\le\sqrt{2Rx-x^2}\Big\},\end{cases}$$

    where $x$ is measured along the axis from the current individual toward the optimum and $v$ is the radial coordinate orthogonal to it.

    By Definition 2, we get the expected progress rate:

    $$E(X_t)=E\big(d(\xi_{t-1}^{opt})-d(\xi_t^{opt})\big)=E\big(R-d(\xi_t^{opt})\big)=\iint_D Rf(x,v)\,dv\,dx-\iint_D f(x,v)\sqrt{(R-x)^2+v^2}\,dv\,dx=\frac{\Gamma(\frac{n}{2}+1)R}{\pi^{\frac{n}{2}}\rho^n}\iint_D dv\,dx-\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}\iint_D\sqrt{(R-x)^2+v^2}\,dv\,dx. \tag{24}$$

    ⅰ) When n is odd:

    $$Prob_{A_1\cup A_2}=Vol_{fcap}^n+Vol_{mcap}^n<\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}\rho^2R^{n-2}}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}+\frac{1}{n}\frac{2\pi^{\frac{n-1}{2}}\rho^{n-1}}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big(\rho-\frac{\rho^2}{2R}\Big)=\frac{1}{n}\frac{\pi^{\frac{n-1}{2}}\rho^2\big(R^{n-1}+2\rho^{n-2}R-\rho^{n-1}\big)}{R\,\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}. \tag{25}$$

    According to Eqs (24) and (25), we can obtain:

    $$E(X_t)=\frac{\Gamma(\frac{n}{2}+1)R}{\pi^{\frac{n}{2}}\rho^n}\iint_D dv\,dx-\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}\iint_D\sqrt{(R-x)^2+v^2}\,dv\,dx<\frac{1}{n}\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{1}{2}}\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big(\frac{R^{n-1}}{\rho^{n-2}}+2R-\rho\Big)-\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}\Big(\int_{\frac{\rho^2}{2R}}^{\rho}\int_{0}^{\sqrt{\rho^2-x^2}}\sqrt{(R-x)^2+v^2}\,dv\,dx+\int_{0}^{\frac{\rho^2}{2R}}\int_{0}^{\sqrt{2Rx-x^2}}\sqrt{(R-x)^2+v^2}\,dv\,dx\Big)=\frac{1}{n}\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{1}{2}}\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}\Big(\frac{R^{n-1}}{\rho^{n-2}}+2R-\rho\Big)-\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}(G_1+G_2). \tag{26}$$

    In order to calculate $G_1$, we reduce the dimension of the integral by setting $w=v^2$. Thus, we obtain:

    $$G_1=\int_{\frac{\rho^2}{2R}}^{\rho}\int_{0}^{\sqrt{\rho^2-x^2}}\sqrt{(R-x)^2+v^2}\,dv\,dx=\int_{\frac{\rho^2}{2R}}^{\rho}\int_{0}^{\rho^2-x^2}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2})}w^{\frac{n-1}{2}-1}\sqrt{(R-x)^2+w}\,dw\,dx. \tag{27}$$

    Because the integrand contains the radical $\sqrt{(R-x)^2+w}$, the integral is difficult to solve directly. Regarding $(R-x)^2$ as the common factor under the radical sign, substitute $w=u(R-x)^2$; then $dw=(R-x)^2du$, $\sqrt{(R-x)^2+w}=(R-x)\sqrt{1+u}$, and $\sqrt{1+u}=1+\frac{1}{2}u+\frac{1}{8}u^2+\dots+\frac{\frac{1}{2}(\frac{1}{2}-1)\cdots(\frac{1}{2}-m+1)}{m!}u^m+R_m(u)$. When $w=0$, $u=0$; when $w=\rho^2-x^2$, $u=\frac{\rho^2-x^2}{(R-x)^2}$. Hence, we obtain:

    $$G_1=\int_{\frac{\rho^2}{2R}}^{\rho}\int_{0}^{\frac{\rho^2-x^2}{(R-x)^2}}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2})}u^{\frac{n-1}{2}-1}(R-x)^{n}\sqrt{1+u}\,du\,dx\approx\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2})}\frac{(\rho^2-x^2)^{\frac{n-1}{2}}}{(R-x)^{n-1}}\Big[\frac{2}{n-1}+\frac{1}{n+1}\frac{\rho^2-x^2}{(R-x)^2}+\frac{1}{4(n+3)}\frac{(\rho^2-x^2)^2}{(R-x)^4}\Big]\times\frac{1}{n+1}\Big[\Big(R-\rho+\frac{\rho^2}{2R}\Big)^{n+1}-(R-\rho)^{n+1}\Big]. \tag{28}$$

    Similarly,

    $$G_2\approx\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2})}\frac{(2Rx-x^2)^{\frac{n-1}{2}}}{(R-x)^{n-1}}\Big[\frac{2}{n-1}+\frac{1}{n+1}\frac{2Rx-x^2}{(R-x)^2}+\frac{1}{4(n+3)}\frac{(2Rx-x^2)^2}{(R-x)^4}\Big]\times\frac{1}{n+1}\Big[R^{n+1}-\Big(\frac{\rho^2}{2R}\Big)^{n+1}\Big]. \tag{29}$$

    According to Eqs (26)-(29), we can draw the following conclusion:

    $$E(X_t)<O\Big(\Big(\frac{\rho}{R}\Big)^{n-2}\Big).$$

    ⅱ) When n is even:

    $$Prob_{A_1\cup A_2}=Vol_{fcap}^n+Vol_{mcap}^n<\frac{1}{n}\frac{2\pi^{\frac{n+1}{2}}R^n}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}+\frac{1}{n}\frac{2\pi^{\frac{n+1}{2}}\rho^n}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}=\frac{1}{n}\frac{2\pi^{\frac{n+1}{2}}(R^n+\rho^n)}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}. \tag{30}$$

    By Eqs (24) and (30), we can obtain:

    $$E(X_t)=\frac{\Gamma(\frac{n}{2}+1)R}{\pi^{\frac{n}{2}}\rho^n}\iint_D dv\,dx-\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}\iint_D\sqrt{(R-x)^2+v^2}\,dv\,dx<\frac{\Gamma(\frac{n}{2}+1)R}{\pi^{\frac{n}{2}}\rho^n}\cdot\frac{1}{n}\frac{2\pi^{\frac{n+1}{2}}(R^n+\rho^n)}{\Gamma(\frac{n-1}{2})}\frac{(n-3)!!}{(n-2)!!}-\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}\int_{0}^{\frac{\rho^2}{2R}}\int_{0}^{\sqrt{2Rx-x^2}}\sqrt{(R-x)^2+v^2}\,dv\,dx=\frac{2\sqrt{\pi}}{n}\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n-1}{2})}R\Big(\Big(\frac{R}{\rho}\Big)^n+1\Big)\frac{(n-3)!!}{(n-2)!!}-\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}\int_{0}^{\frac{\rho^2}{2R}}\int_{0}^{\sqrt{2Rx-x^2}}\sqrt{(R-x)^2+v^2}\,dv\,dx. \tag{31}$$

    Also, from Eqs (27)-(29), the following asymptotic expression can be obtained:

    $$E(X_t)<\frac{2\sqrt{\pi}}{n}\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n-1}{2})}R\Big(\Big(\frac{R}{\rho}\Big)^n+1\Big)\frac{(n-3)!!}{(n-2)!!}-\frac{\Gamma(\frac{n}{2}+1)}{\pi^{\frac{n}{2}}\rho^n}\frac{\pi^{\frac{n-1}{2}}}{\Gamma(\frac{n-1}{2})}\Big[\frac{(\rho^2-x^2)^{\frac{n-1}{2}}}{(R-x)^{n-1}}\Big(\frac{2}{n-1}+\frac{1}{n+1}\frac{\rho^2-x^2}{(R-x)^2}+\frac{1}{4(n+3)}\frac{(\rho^2-x^2)^2}{(R-x)^4}\Big)\times\frac{1}{n+1}\Big(\Big(R-\rho+\frac{\rho^2}{2R}\Big)^{n+1}-(R-\rho)^{n+1}\Big)+\frac{(2Rx-x^2)^{\frac{n-1}{2}}}{(R-x)^{n-1}}\Big(\frac{2}{n-1}+\frac{1}{n+1}\frac{2Rx-x^2}{(R-x)^2}+\frac{1}{4(n+3)}\frac{(2Rx-x^2)^2}{(R-x)^4}\Big)\times\frac{1}{n+1}\Big(R^{n+1}-\Big(\frac{\rho^2}{2R}\Big)^{n+1}\Big)\Big]=O\Big(\Big(\frac{\rho}{R}\Big)^n\Big). \tag{32}$$

    Obviously, whether the dimension of the sphere function is odd or even, the upper bound on the expected progress rate is exponentially small. By Theorem 3, the expected first hitting time $E(T_\tau)$ from the fitness difference function value $R$ to $R/2-\varepsilon$ in the running of the sphere function with a uniform mutation operator satisfies:

    $$E(T_\tau)>\frac{R-(R/2-\varepsilon)}{O\big((\rho/R)^n\big)}=\Omega\Big(\Big(\frac{R}{\rho}\Big)^n\Big).$$

    Let $c=\ln(R/\rho)$, thus $R/\rho=e^c$; then we get that $E(T_\tau)=\Omega(e^{cn})$ is the expected first hitting time of the elitist-selection (1 + 1) ES with a uniform mutation operator solving the sphere function. Therefore, Theorem 4 holds.

    Theorem 4 significantly improves the previous lower bound $\Omega(n)$, which plays a significant role in analyzing the time complexity of the 1/5 success rule (1 + 1) ES.

    The Gaussian mutation operator differs from the uniform mutation operator. Without loss of generality, it is assumed that independent offspring individuals are generated through the mutation vector $m=\sigma\mathcal{N}(0,\mathbf{I})$ and obey a Gaussian distribution (see Figure 1).

    Theorem 6. Let $x=(x_1,x_2,\dots,x_n)$ be an individual sampled from $\sigma\mathcal{N}(0,\mathbf{I})$. Then

    $$k^2=x_1^2+x_2^2+\dots+x_n^2, \tag{33}$$

    and $k^2/\sigma^2$ obeys the chi-square distribution with $n$ degrees of freedom, denoted as $k^2/\sigma^2\sim\chi^2(n)$. The probability density function of $k=\sqrt{x_1^2+x_2^2+\dots+x_n^2}$ is

    $$p(y)=\frac{2y^{n-1}}{\sigma^n2^{\frac{n}{2}}\Gamma(\frac{n}{2})}\exp\Big(-\frac{y^2}{2\sigma^2}\Big). \tag{34}$$

    These results are a simplified form of [32]. For a more detailed proof, see [33]. The mathematical expectation is as follows:

    $$k^*:=E(y)=\int_0^{\infty}yp(y)\,dy=\int_0^{\infty}y\frac{2y^{n-1}}{\sigma^n2^{\frac{n}{2}}\Gamma(\frac{n}{2})}\exp\Big(-\frac{y^2}{2\sigma^2}\Big)dy=\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})}\sqrt{2}\sigma. \tag{35}$$

    When $n\to\infty$, then

    $$k^*=\lim_{n\to\infty}E(y)=\sqrt{n}\,\sigma. \tag{36}$$
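    Equations (35) and (36) are easy to check numerically; the helper below is ours and uses log-gamma to avoid overflow of $\Gamma(\cdot)$ for large $n$.

    ```python
    import math

    def expected_norm(n, sigma):
        """Eq (35): E||sigma * N(0, I_n)|| = sqrt(2)*sigma*Gamma((n+1)/2)/Gamma(n/2).

        Uses lgamma so the ratio of Gamma values stays finite for large n."""
        return math.sqrt(2) * sigma * math.exp(math.lgamma((n + 1) / 2) - math.lgamma(n / 2))
    ```

    For n = 3 this reproduces the known chi-distribution mean $2\sqrt{2/\pi}$, and for large n the exact value approaches the asymptotic $\sqrt{n}\,\sigma$ of Eq (36).
    
    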

    The validity of the relation $l^2=k^2-m^2+(R-m)^2$ is evident from Figure 1, where $m$ is the component of the mutation vector along the axis toward the optimum. Since offspring individuals $\xi_t^{opt}$ are generated by the mutation vector $m=\sigma\mathcal{N}(0,\mathbf{I})$ and $d(\xi_{t-1}^{opt})$ is a constant, $d(\xi_{t-1}^{opt})-d(\xi_t^{opt})$ is a random variable that depends on the mutation vector, and the specific form of $d(\xi_t^{opt})$ is as follows [33]:

    $$d(\xi_t^{opt})=\sqrt{k^2-m^2+(R-m)^2}\approx R-m+\frac{k^2}{2R}. \tag{37}$$

    The probability density function of the axial component $m$ is

    $$p(m)=\frac{1}{\sqrt{2\pi}\sigma}\exp\Big(-\frac{m^2}{2\sigma^2}\Big). \tag{38}$$

    Thus, we can get the expected progress rate $E(X_t)$:

    $$E(X_t)=E\big(d(\xi_{t-1}^{opt})-d(\xi_t^{opt})\big)=E\big(R-d(\xi_t^{opt})\big)=\int_{m_1}^{m_2}\Big(m-\frac{k^2}{2R}\Big)\frac{1}{\sqrt{2\pi}\sigma}\exp\Big(-\frac{m^2}{2\sigma^2}\Big)dm, \tag{39}$$

    where $m_1=\frac{k^2}{2R}$ and $m_2=k=\sqrt{n}\sigma$.

    By normalizing the expected progress rate E(Xt) and the mutation strength σ [32]:

    E(Xt)=nRE(Xt),σ=nRσ. (40)

    And we can get the normalized expected progress rate E(Xt):

    $E^{*}(X_t)=E^{*}\left(d(\xi_{t-1}^{opt})-d(\xi_t^{opt})\right)=\dfrac{\sigma^{*}}{\sqrt{2\pi}}\displaystyle\int_{\sigma^{*}/2}^{\sqrt{n}}\left(t-\dfrac{\sigma^{*}}{2}\right)\exp\left(-\dfrac{t^{2}}{2}\right)dt \geq \dfrac{\sigma^{*}}{\sqrt{2\pi}}\displaystyle\int_{\sigma^{*}/2}^{\sqrt{n}}\left(t-\dfrac{\sigma^{*}}{2}\right)\left(1-\dfrac{t^{2}}{2}+\dfrac{t^{4}}{8}\right)dt > \dfrac{\sigma^{*}}{\sqrt{2\pi}}\left(\left(\dfrac{\sigma^{*}}{80n^{2}\sqrt{n}}+\dfrac{1}{8n^{2}}+\dfrac{\sigma^{*}}{2\sqrt{n}}\right)-\left(\dfrac{1}{40}\left(\dfrac{\sigma^{*}}{2}\right)^{6}+\dfrac{1}{8}\left(\dfrac{\sigma^{*}}{2}\right)^{4}+\left(\dfrac{\sigma^{*}}{2}\right)^{2}\right)\right)=\Omega\left(\dfrac{1}{n^{2}\sqrt{n}}\right)$. (41)

    Since (1 + 1) ES adopts the elitist selection strategy, $\frac{R}{2}-\varepsilon<E(X_t)<E(X_0)<R$ holds for $t\in(0,\tau)$. By Theorem 3, when running (1 + 1) ES on the sphere function with a Gaussian mutation operator, the first hitting time $E(T_\tau)$ from the initial distance $R$ to $\frac{R}{2}-\varepsilon$ satisfies:

    $E(T_\tau)<\dfrac{R-\left(\frac{R}{2}-\varepsilon\right)}{\Omega\left(\frac{1}{n^{2}\sqrt{n}}\right)}=O\left(n^{2}\sqrt{n}\right)$, (42)

    which gives the upper bound of the first hitting time. The above analysis shows that each individual can be considered to follow an independent isotropic distribution. Jägersküpper [24] adopted a similar approach to analyze the first hitting time of (1 + 1) ES on the sphere function and obtained a lower bound on the first hitting time, but no corresponding theoretical analysis was given for the upper bound. This shows that the proposed method is valuable for the first hitting time analysis of population-based algorithms.

    The purpose of the experiments in this section is to verify the theoretical accuracy of the upper and lower bounds on the expected first hitting time of (1 + 1) ES with uniform and Gaussian mutation operators solving the sphere function.

    In this section, we set the following experimental parameters: $R=d(\xi_0)=\|(x_1,x_2,\ldots,x_n)\|=\|(5,0,\ldots,0)\|=5$, $\rho=1$, $\varepsilon=0.001$, and $n\in[1,100]$. For each given problem size $n$, we conduct 100 runs of Algorithm 1 on the sphere function. We let $T_i$ denote the first hitting time of the $i$-th run of Algorithm 1 and $\bar{E}(T_\tau)=\frac{1}{100}\sum_{i=1}^{100}T_i$ denote the average first hitting time over the actual runs, i.e., $\bar{E}(T_\tau)=\bar{t}$.
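    For reference, here is a minimal sketch of an elitist (1 + 1) ES of the kind measured in these experiments (Gaussian mutation on the sphere function). The distance-proportional step size $\sigma_t=d_t/n$ is an illustrative assumption, not the paper's exact schedule:

    ```python
    import math
    import random

    def sphere_dist(x):
        # Distance to the optimum at the origin (the sphere function's fitness is d^2).
        return math.sqrt(sum(xi * xi for xi in x))

    def first_hitting_time(n, eps=0.001, max_iter=10**6, seed=0):
        # Elitist (1 + 1) ES with Gaussian mutation; returns the first iteration t
        # at which the distance to the optimum drops to eps or below.
        rng = random.Random(seed)
        x = [5.0] + [0.0] * (n - 1)              # initial distance R = 5
        for t in range(1, max_iter + 1):
            sigma = sphere_dist(x) / n           # illustrative distance-proportional step size
            y = [xi + rng.gauss(0.0, sigma) for xi in x]
            if sphere_dist(y) < sphere_dist(x):  # elitist selection: keep the better individual
                x = y
            if sphere_dist(x) <= eps:
                return t
        return max_iter

    print(first_hitting_time(n=5))
    ```

    Averaging such runs over 100 seeds for each $n$ yields the empirical estimate $\bar{t}$ used in the plots below.
    
    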

    Theorem 4 improves the lower bound on the expected first hitting time of (1 + 1) ES with a uniform mutation operator from the previous $\Omega(n)$ to $\Omega(e^{cn})$. We take $n\in[20,100]$ to verify that the lower bound is exponential in the problem size $n$. To verify that the first hitting time is exponential, we only need to verify the linear relationship between $\log\bar{t}$ and the problem size $n$, thus verifying our theoretical results. MATLAB simulation results are shown in Figure 2.
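    The linearity check can be done with an ordinary least-squares fit of $\log\bar{t}$ against $n$; a minimal sketch (the data points here are synthetic placeholders with exact slope $0.1$, not the experiment's measurements):

    ```python
    def linear_fit(xs, ys):
        # Ordinary least squares for y = a*x + b; returns (slope a, intercept b).
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx

    # Synthetic log-runtimes that grow linearly in n (slope c = 0.1, intercept 2.0):
    ns = list(range(20, 101, 10))
    log_t = [0.1 * n + 2.0 for n in ns]
    slope, intercept = linear_fit(ns, log_t)
    print(slope, intercept)   # recovers slope 0.1 and intercept 2.0
    ```

    A near-linear fit of the measured $\log\bar{t}$ against $n$ supports the exponential lower bound of Theorem 4.
    
    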

    Figure 2.  Problem size n and log t of (1 + 1) ES with a uniform mutation operator.

    We can see from Figure 2 that $\log\bar{t}$ increases with the problem size $n$, showing a linear relationship. Consequently, the tighter lower bound we derived is consistent with the observed running time.

    In Section 4, $O(n^{2}\sqrt{n})$ was given as the upper bound of the expected first hitting time of (1 + 1) ES with a Gaussian mutation operator. A polynomial first hitting time is acceptable in real-world optimization problems. MATLAB simulation results are shown in Figure 3.

    Figure 3.  Problem size n and the first hitting time of (1 + 1) ES with a Gaussian mutation operator.

    We can see from Figure 3 that for roughly $n<20$, the first hitting time increases with the problem size $n$ faster than it does for $n>20$. We conjecture that the first hitting time is polynomial in $n$ when $n$ is large enough.

    In this paper, since it is difficult to estimate the upper and lower bounds on the first hitting time of (1 + 1) ES solving the sphere function, a renewal model is proposed to address this. The proposed method does not aim to determine the first hitting time directly; instead, it views the population as a special renewal process, introducing the stopping time of this renewal process and combining Wald's inequality with the renewal theorem. The model is independent of the specific implementation of the algorithm and captures the gradual convergence of (1 + 1) ES on a continuous search space. To verify the validity of the proposed method, we analyze the expected first hitting time of (1 + 1) ES on the sphere function. We obtain the exponential lower bound $\Omega(e^{cn})$ for (1 + 1) ES with a uniform mutation operator and the polynomial upper bound $O(n^{2}\sqrt{n})$ for (1 + 1) ES with a Gaussian mutation operator. Furthermore, the proposed method is applicable to other intelligent optimization algorithms, such as the differential evolution algorithm and particle swarm optimization.

    These directions will be further investigated in our future work. Based on the progress rate, statistical methods will be used to estimate the first hitting time of EAs, especially in the running time analysis of algorithms with recombination, which will be very challenging. In addition, given the upper bound discussed here on the running time of (1 + 1) ES with a Gaussian mutation operator, another direction for further research is to analyze the corresponding lower bound. A closed interval constructed from the upper and lower bounds can then be used to estimate the first hitting time of EAs, extending the theoretical results toward the practical application of EAs.

    The authors declare they have not used artificial intelligence (AI) tools in the creation of this article.

    The authors extend their sincere gratitude to the anonymous reviewers for their helpful comments and insightful suggestions. This work was supported by the Science and Technology Plan Project of Guizhou Province (QKH-Foundation-ZK[2022] General 550, QKH-Platform Talent-ZCKJ[2021]007), the Youth Science and Technology Talent Growth Project of the Guizhou Provincial Education Department (QJH-KY[2021]115, QJH-KY[2022]177), the Open Project of the Guizhou Key Laboratory of Pattern Recognition and Intelligent System (GZMUKL[2022]KF05), the Ministry of Education's Industry-University Collaborative Education Project (221001766110209), and the Research Fund Project of the Guiyang Institute of Humanities and Technology (2022RWJS034).

    The authors declare that there are no conflicts of interest.



    [1] T. Bäck, H. P. Schwefel, An overview of evolutionary algorithms for parameter optimization, Evol. Comput., 1 (1993), 1–23. https://doi.org/10.1162/evco.1993.1.1.1 doi: 10.1162/evco.1993.1.1.1
    [2] J. He, X. Yao, Average drift analysis and population scalability, IEEE Trans. Evol. Comput., 21 (2016), 426–439. https://doi.org/10.1109/tevc.2016.2608420 doi: 10.1109/tevc.2016.2608420
    [3] X. Yao, Unpacking and understanding evolutionary algorithm, in IEEE World Congress on Computational Intelligence, Berlin, Springer Berlin Heidelberg, (2012), 60–76. https://doi.org/10.1007/978-3-642-30687-7_4
    [4] J. He, X. Yao, Towards an analytic framework for analysing the computation time of evolutionary algorithms, Artif. Intell., 145 (2003), 59–97. https://doi.org/10.1016/s0004-3702(02)00381-8 doi: 10.1016/s0004-3702(02)00381-8
    [5] C. Witt, Tight bounds on the optimization time of a randomized search heuristic on linear functions, Comb. Probab. Comput., 22 (2013), 294–318. https://doi.org/10.1017/s0963548312000600 doi: 10.1017/s0963548312000600
    [6] B. Doerr, T. Kötzing, J. A. G. Lagodzinski, J. Lengler, The impact of lexicographic parsimony pressure for ORDER/MAJORITY on the run time, Theor. Comput. Sci., 816 (2020), 144–168. https://doi.org/10.1016/j.tcs.2020.01.011 doi: 10.1016/j.tcs.2020.01.011
    [7] B. Doerr, C. Doerr, J. Yang, Optimal parameter choices via precise black-box analysis, in Proceedings of the Genetic and Evolutionary Computation Conference 2016, New York: ACM, (2016), 1123–1130. https://doi.org/10.1145/2908812.2908950
    [8] P. K. Lehre, C. Witt, Tail bounds on hitting times of randomized search heuristics using variable drift analysis, Comb. Probab. Comput., 30 (2020), 550–569. https://doi.org/10.1017/s0963548320000565 doi: 10.1017/s0963548320000565
    [9] C. Gießen, C. Witt, Optimal mutation rates for the (1 + λ) EA on OneMax through asymptotically tight drift analysis, Algorithmica, 80 (2017), 1710–1731. https://doi.org/10.1007/s00453-017-0360-y doi: 10.1007/s00453-017-0360-y
    [10] R. Sarker, M. Mohammadian, X. Yao, I. Wegener, Methods for the Analysis of Evolutionary Algorithms on Pseudo-Boolean Functions, Springer US, New York, USA, (2002), 349–369. https://doi.org/10.17877/DE290R-15272
    [11] Y. Yu, C. Qian, Z. H. Zhou, Switch analysis for running time analysis of evolutionary algorithms, IEEE Trans. Evol. Comput., 19 (2015), 777–792. https://doi.org/10.1109/tevc.2014.2378891 doi: 10.1109/tevc.2014.2378891
    [12] C. Qian, C. Bian, W. Jiang, K. Tang, Running time analysis of the (1 + 1)-EA for OneMax and LeadingOnes under bit-wise noise, Algorithmica, 81 (2018), 749–795. https://doi.org/10.1007/s00453-018-0488-4 doi: 10.1007/s00453-018-0488-4
    [13] I. Wegener, Methods for the Analysis of Evolutionary Algorithms on Pseudo-Boolean Functions Evolutionary Optimization, Springer US, New York, USA, 48 (2003), 349–369. https://doi.org/10.1007/0-306-48041-7_14
    [14] H. Huang, J. P. Su, Y. S. Zhang, Z. F. Hao, An experimental method to estimate running time of evolutionary algorithms for continuous optimization, IEEE Trans. Evol. Comput., 24 (2020), 275–289. https://doi.org/10.1109/tevc.2019.2921547 doi: 10.1109/tevc.2019.2921547
    [15] F. J. Feng, H. Huang, Y. S. Zhang, Z. F. Hao, Comparative analysis for first hitting time of evolutionary algorithms based on equal-in-time model, Chin. J. Comput., 42 (2019), 2297–2308. https://doi.org/10.11897/SP.J.1016.2019.02297 doi: 10.11897/SP.J.1016.2019.02297
    [16] D. Morinaga, K. Fukuchi, J. Sakuma, Y. Akimoto, Convergence rate of the (1 + 1) ES on locally strongly convex and Lipschitz smooth functions, IEEE Trans. Evol. Comput., 2023. https://doi.org/10.1109/tevc.2023.3266955
    [17] J. Jägersküpper, Combining Markov-chain analysis and drift analysis: The (1 + 1) evolutionary algorithm on linear functions reloaded, Algorithmica, 59 (2010), 409–424. https://doi.org/10.1007/s00453-010-9396-y doi: 10.1007/s00453-010-9396-y
    [18] A. Agapie, M. Agapie, G. Rudolph, G. Zbaganu, Convergence of evolutionary algorithms on the n-dimensional continuous space, IEEE Trans. Cybern., 43 (2013), 1462–1472. https://doi.org/10.1109/TCYB.2013.2257748 doi: 10.1109/TCYB.2013.2257748
    [19] D. Morinaga, K. Fukuchi, J. Sakuma, Y. Akimoto, Convergence rate of the (1 + 1) evolution strategy with success-based step-size adaptation on convex quadratic functions, in Proceedings of the Genetic and Evolutionary Computation Conference, (2021), 1169–1177. https://doi.org/10.1145/3449639.3459289
    [20] Y. S. Zhang, H. Huang, Z. F. Hao, G. W. Hu, First hitting time analysis of continuous evolutionary algorithms based on average gain, Chin. J. Comput., 42 (2019), 624–635. https://doi.org/10.11897/SP.J.1016.2019.00624 doi: 10.11897/SP.J.1016.2019.00624
    [21] A. Agapie, Spherical distributions used in evolutionary algorithms, Mathematics, 9 (2021), 2227–7390. https://doi.org/10.3390/math9233098 doi: 10.3390/math9233098
    [22] B. Doerr, C. Witt, J. Yang, Running time analysis for self-adaptive mutation rates, in Proceedings of the Genetic and Evolutionary Computation Conference, (2018), 1475–1482. https://doi.org/10.1007/S00453-020-00726-2
    [23] Y. Akimoto, A. Anne, G. Tobias, Drift theory in continuous search spaces: expected hitting time of the (1 + 1) ES with 1/5 success rule, in Proceedings of the Genetic and Evolutionary Computation Conference, (2018), 801–808. https://doi.org/10.1145/3205455.3205606
    [24] J. Jägersküpper, Lower bounds for randomized direct search with isotropic sampling, Oper. Res. Lett., 36 (2008), 327–332. https://doi.org/10.1016/j.orl.2007.10.003 doi: 10.1016/j.orl.2007.10.003
    [25] H. G. Beyer, H. P. Schwefel, Evolution strategies–a comprehensive introduction, Nat. Comput., 1 (2002), 3–52. https://doi.org/10.1023/a:1015059928466 doi: 10.1023/a:1015059928466
    [26] F. Neumann, C. Witt, Runtime Analysis of the (1 + 1) EA on weighted Sums of transformed linear functions, in International Conference on Parallel Problem Solving from Nature, Cham: Springer International Publishing, (2022), 542–554. https://doi.org/10.1007/978-3-642-30687-7_4
    [27] B. Doerr, T. Kötzing, Lower bounds from fitness levels made easy, in Proceedings of the Genetic and Evolutionary Computation Conference, New York: ACM, (2021), 1142–1150. https://doi.org/10.1145/3449639.3459352
    [28] H. G. Beyer, The Theory of Evolution Strategies, Springer Science & Business Media, Berlin, Germany, 2001. https://doi.org/10.1007/978-3-662-04448-3_6
    [29] T. Kötzing, Concentration of first hitting times under additive drift, Algorithmica, 75 (2016), 490–506.
    [30] J. He, X. Yao, Drift analysis and average time complexity of evolutionary algorithms, Artif. Intell., 127 (2001), 57–85. https://doi.org/10.1016/s0004-3702(01)00058-3 doi: 10.1016/s0004-3702(01)00058-3
    [31] K. W. Fang, Symmetric Multivariate and Related Distributions, CRC Press, London, UK, 2018. https://doi.org/10.1201/9781351077040
    [32] A. Agapie, O. Solomon, L. Bădin, Theory of (1 + 1) ES on sphere revisited, IEEE Trans. Evol. Comput., 27 (2023), 938–948. https://doi.org/10.1109/tevc.2022.3217524 doi: 10.1109/tevc.2022.3217524
    [33] H. G. Beyer, Toward a theory of evolution strategies: Some asymptotical results from the (1 + λ)-theory, Evol. Comput., 1 (1993), 165–188. https://doi.org/10.1162/evco.1993.1.2.165 doi: 10.1162/evco.1993.1.2.165
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
