
The indirect effect of predation due to fear has proven to have adverse effects on the reproductive rate of the prey population. Here, we present a deterministic two-species predator-prey model with prey herd behavior, mutual interference, and the effect of fear. We give conditions for the existence of some local and global bifurcations at the coexistence equilibrium. We also show that fear can induce extinction of the prey population from a coexistence zone in finite time. Our numerical simulations reveal that varying the strength of fear of predators with suitable choice of parameters can stabilize and destabilize the coexistence equilibrium solutions of the model. Further, we discuss the outcome of introducing a constant harvesting effort to the predator population in terms of changing the dynamics of the system, in particular, from finite time extinction to stable coexistence.
Citation: Kwadwo Antwi-Fordjour, Rana D. Parshad, Hannah E. Thompson, Stephanie B. Westaway. Fear-driven extinction and (de)stabilization in a predator-prey model incorporating prey herd behavior and mutual interference[J]. AIMS Mathematics, 2023, 8(2): 3353-3377. doi: 10.3934/math.2023173
Statistics from the World Energy Agency show that the energy consumption of the industrial sector accounts for nearly one-third of the global primary energy supply and produces approximately 36% of energy-related CO2 emissions, and that motor systems account for more than 50% of total electricity consumption. Therefore, improving the energy efficiency of motor systems is of great significance for reducing energy consumption, improving cost efficiency, cutting pollutant emissions and meeting national energy conservation and emission reduction targets. Cui et al. proposed that correctly rating motor energy efficiency is the best way to improve the energy efficiency of motor systems [1].
In recent years, artificial intelligence technology has advanced in many fields [2,3,4,5,6], but there has been little research on the energy efficiency classification of electric motors. In 2012, the Chinese national standard (guobiao, GB) specified minimum energy efficiency values for small and medium-sized motors at rated efficiency and provided initial motor energy efficiency standards [7]. However, in actual production, a motor always operates under variable working conditions because of power quality, load characteristics and other influential factors, so the rated-condition energy efficiency standard of [7] cannot be applied effectively to the actual situation. Luo et al. comprehensively analyzed the factors affecting motor energy efficiency and developed a motor energy efficiency evaluation index system [8]. Li performed an in-depth study of the main factors affecting motor efficiency and used the analytic hierarchy process (AHP) to obtain the weight of each index [9]. These studies provide ideas for research on motor energy efficiency rating.
In this paper, power quality, load characteristics and the characteristics of the motor itself are used as the three first-level evaluation indicators; seven second-level indicators with a large impact on motor energy efficiency were selected and divided into 10-grade intervals. For electrical energy efficiency classes, Ma and Lv used a support vector machine (SVM) model for classification and achieved an accuracy of 94% with a small number of test samples [10]; the accuracy decreased as the sample size increased.
Currently, SVM theory has unique advantages for solving classification and target recognition problems [11], but the key parameters of the SVM, i.e., the penalty factor $c$ and the kernel parameter $\sigma$, are usually determined by empirical assignment or cross-validation, which is not only inefficient but also susceptible to local optima. Metaheuristic algorithms are widely used to optimize complex problems because of their derivative-free mechanism, flexibility, simplicity and ease of implementation. Classical examples include particle swarm optimization, ant colony optimization, evolutionary algorithms and simulated annealing. Inspired by natural biological behavior, the field has recently experienced something like a gold rush of new or improved algorithms of all kinds. More recent metaheuristics include monarch butterfly optimization [12], the whale optimization algorithm [13], satin bowerbird optimization [14], the moth search algorithm [15], hunger games search [16], Harris hawks optimization [17], the slime mould algorithm [18], the Runge Kutta method [19], the colony predation algorithm [20] and the weighted mean of vectors algorithm [21].
The optimization search process of these algorithms shares a common feature: the use of both exploration and exploitation. The exploration phase should exploit and enhance stochastic operators as much as possible to diversify the search in the feature space; the exploitation phase usually follows the exploration phase and should strengthen the local search in the vicinity of the better solutions. A good optimization algorithm should strike a reasonable and fine balance between exploration and exploitation; otherwise, it is prone to falling into local optima and premature convergence. Most metaheuristic algorithms also have the disadvantage of being sensitive to the tuning of user-defined parameters and of not always converging to the global optimum.
In 2015, Wang et al. proposed elephant herding optimization (EHO), inspired by the nomadic behavior of elephant herds [22]. Elephant herds exhibit two special behaviors: 1) the herd is composed of multiple clans, each led by a female, and 2) adult males leave the herd to live alone. Accordingly, two mechanisms were developed to mimic this nomadic behavior: a clan updating mechanism and a separating mechanism. EHO has been shown to be effective at finding optimal solutions [23]. EHO differs from other metaheuristic algorithms in that it generates multiple clans, each of which is a group, yielding a multi-group collaboration mechanism. However, the original EHO only addresses the position update within one clan and does not consider collaboration across the whole elephant herd. The performance of EHO could be significantly improved if this multi-group collaboration were fully utilized.
The accurate evaluation of motor energy efficiency under off-rated operating conditions can provide an important basis for energy-saving upgrades and the elimination of outdated motors. Past research on motor energy efficiency mainly focused on the rated operating condition, and there have been few studies under off-rated conditions. The main contributions of this paper are as follows:
1) We present seven evaluation indexes that greatly influence motor energy efficiency, drawn from the three aspects of power quality, load characteristics and motor characteristics, and establish a quantitative classification standard to achieve accurate grading of motor energy efficiency.
2) We developed an improved EHO (IEHO) algorithm to optimize the SVM-based motor energy efficiency rating model.
3) To improve the optimum-seeking ability of the EHO algorithm, we propose five improvements and compare the resulting algorithm with other algorithms.
The organization of this paper is as follows: Section 2 introduces the principle and optimization search process of the standard EHO. Section 3 introduces the improved EHO: we first analyze the shortcomings of the standard EHO and then propose five improvement strategies. Section 4 compares the IEHO with other optimization algorithms through testing and experimental analysis. Section 5 introduces the motor energy efficiency evaluation model based on the IEHO combined with the SVM (IEHO-SVM) and discusses the corresponding experiments and analyses. Conclusions and future work are discussed in Section 6.
The EHO algorithm was inspired by the nomadic behavior of elephant herds [22]. Elephants live in groups consisting of multiple clans under the leadership of the best female elephant. Each clan selects an excellent female elephant to serve as its matriarch, and the matriarch leads the female elephants and baby elephants who have a direct or indirect blood relationship with her. Male elephants leave the group to live alone once they become adults. Although male elephants are far from their family, they keep in touch with it through low-frequency vibrations.
To apply elephant herding behavior to global optimization problems, the following idealized rules are established: 1) the elephant tribe consists of multiple clans led by one best female elephant, and each clan has a fixed number of elephants; 2) in each generation, each clan elects a superior matriarch, a fixed number of male elephants leave, and an equal number of new ones are born.
The position of each elephant in the clan is affected by the matriarch. For elephant $j$ in clan $ci$, the position is updated as follows:
$x_{new,ci,j} = x_{ci,j} + \alpha \times (x_{best,ci} - x_{ci,j}) \times r$  (2.1)
where $\alpha \in [0,1]$ represents the scaling factor, $x_{new,ci,j}$ represents the new position of elephant $j$ in clan $ci$, $x_{ci,j}$ represents its old position, $x_{best,ci}$ represents the matriarch of clan $ci$ and $r \in [0,1]$ is a random number.
The position of the matriarch has a specific update formula, and the updated position is as follows:
$x_{new,best,ci} = \beta \times x_{center,ci}$  (2.2)
where $\beta \in [0,1]$ represents a control parameter that determines the influence of $x_{center,ci}$ on $x_{best,ci}$, and $x_{center,ci}$ represents the center position of clan $ci$, determined by the following equation:
$x_{center,ci,d} = \frac{1}{n_{ci}} \sum_{j=1}^{n_{ci}} x_{ci,j,d}$  (2.3)
where $d \in [1, D]$ represents the $d$-th dimension, $D$ is the total number of dimensions and $n_{ci}$ represents the number of elephants in clan $ci$.
In an elephant group, adult males prefer to leave the group and live alone. To simulate this process, the adult male is treated as the least fit elephant in the algorithm. At the same time, to give the algorithm greater search ability, the elephant with the worst fitness in each generation is assumed to apply the separating operator shown in Eq (2.4):
$x_{worst,ci} = x_{min} + (x_{max} - x_{min} + 1) \times r$  (2.4)
where $x_{max}$ and $x_{min}$ respectively represent the maximum and minimum values of the search space and $x_{worst,ci}$ represents the position of the worst elephant in clan $ci$. The EHO algorithm flowchart is shown in Figure 1.
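One generation of standard EHO, as described by Eqs (2.1)–(2.4), can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the clan layout, parameter values, bounds and the final clipping step are assumptions of the sketch:

```python
import numpy as np

def eho_step(clans, fitness, alpha=0.5, beta=0.1, x_min=-10.0, x_max=10.0, rng=None):
    """One generation of standard EHO over a list of (n_ci, D) clan arrays.

    `fitness` is a function of one position vector, to be minimized."""
    rng = rng if rng is not None else np.random.default_rng()
    new_clans = []
    for clan in clans:
        f = np.array([fitness(x) for x in clan])
        i_best, i_worst = np.argmin(f), np.argmax(f)
        matriarch = clan[i_best].copy()
        # Eq (2.1): each elephant moves toward its clan matriarch.
        new = clan + alpha * (matriarch - clan) * rng.random(clan.shape)
        # Eq (2.2): the matriarch jumps to the scaled clan center (Eq (2.3)).
        new[i_best] = beta * clan.mean(axis=0)
        # Eq (2.4): the worst elephant is replaced by a random position.
        new[i_worst] = x_min + (x_max - x_min + 1.0) * rng.random(clan.shape[1])
        # Clipping to the search range is an added safeguard, since Eq (2.4)
        # can produce values slightly above x_max.
        new_clans.append(np.clip(new, x_min, x_max))
    return new_clans
```

Running this step repeatedly on a simple test function (e.g., the sphere function) shows the clans contracting toward the optimum while the separating operator keeps injecting random diversity.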
The standard EHO algorithm updates the position of each elephant by using only the clan updating and separating operations. This position updating method is simplistic and has the following disadvantages:
1) The initial position of the elephant is randomly generated, which affects the efficiency of elephant swarm optimization.
2) It only utilizes the position variable and ignores the fact that the moving speed will affect the convergence speed of the elephant group.
3) In the clan updating process, the position update of the best female elephant in the entire herd is not considered. An ordinary elephant updates its position according to its own position and that of the matriarch, while the matriarch's position is updated from the clan's center position, making it easy to fall into a locally optimal position.
4) During iteration, the clan's optimal solution is continuously updated but not retained. Equation (2.4) is used in the separating process to replace the worst elephant in each generation, simulating an adult male leaving and a new baby elephant being born. Although this preserves population diversity, a newborn baby elephant would be protected by the other elephants and should therefore be given a good position.
Based on these shortcomings, we propose the following improvement strategies.
The distribution and initial values of the population's initial positions affect its optimization efficiency. An initial population that is evenly distributed in the solution space improves population diversity and global search ability.
This paper introduces a population initialization strategy based on chaotic mapping [24] and opposition-based learning [25]. The strategy initializes the population by using the good randomness, spatial ergodicity and non-repetition of chaotic mapping; it also uses the opposition-based learning method to generate the corresponding opposite individuals. It then compares the candidates and selects the individuals with good fitness values as the initial individuals, improving the optimization efficiency of the algorithm.
Chaotic mapping uses simple deterministic systems to generate chaotic, random-like sequences. Table 1 shows several commonly used chaotic mapping functions.
Chaotic mapping function | Expression
Logistic | $Z_{k+1} = \lambda Z_k (1 - Z_k),\ 0 < \lambda \le 4$
ICMIC | $Z_{k+1} = \sin(\alpha / Z_k),\ \alpha \in (0, \infty)$
Chebyshev | $Z_{k+1} = \cos(k \cos^{-1} Z_k),\ k = 4,\ Z_k \in (0, 1)$
Tent | $Z_{k+1} = \begin{cases} Z_k / 0.7, & Z_k \in (0, 0.7] \\ \frac{10}{3} Z_k (1 - Z_k), & Z_k \in (0.7, 1) \end{cases}$
According to the analysis in [26], when $\lambda = 4$ the logistic chaotic sequence is unevenly distributed, and its optimization speed and efficiency are low. When the initial value is 0 or a fixed point, the ICMIC mapping cannot continue to iterate. Chebyshev mapping places higher requirements on the function. The chaotic sequence of the tent mapping is evenly distributed, with faster optimization and higher search efficiency; tent mapping is therefore applied in this study to generate the chaotic sequence for the initial population.
The initialization steps are as follows:
Step 1: Generate random numbers:
$Z^k_{ci,j,d} \in (0,1)$  (3.1)
where $d = 1, 2, \ldots, D$ indexes the dimension and $k$ represents the current iteration number.
Step 2: Calculate the tent function chaotic map:
$Z^{k+1}_{ci,j,d} = \begin{cases} Z^k_{ci,j,d} / 0.7, & Z^k_{ci,j,d} \in (0, 0.7] \\ \frac{10}{3} Z^k_{ci,j,d} (1 - Z^k_{ci,j,d}), & Z^k_{ci,j,d} \in (0.7, 1) \end{cases}$  (3.2)
Step 3: Generate $N$ positions, giving the original population $X = (x_{ci,j,1}, x_{ci,j,2}, \ldots, x_{ci,j,D})$, where $N$ is the total number of elephants (number of clans × elephants per clan):
$x_{ci,j,d} = x_{min,d} \times (1 - Z^{k+1}_{ci,j,d}) + x_{max,d} \times Z^{k+1}_{ci,j,d}$  (3.3)
where $x_{max,d}$ and $x_{min,d}$ represent the upper and lower limits of the elephants in dimension $d$.
Step 4: Generate the opposing population $OX = (ox_{ci,j,1}, ox_{ci,j,2}, \ldots, ox_{ci,j,D})$ based on opposition-based learning:
$ox_{ci,j,d} = x_{min,d} \times Z^{k+1}_{ci,j,d} + x_{max,d} \times (1 - Z^{k+1}_{ci,j,d})$  (3.4)
where $ox_{ci,j,d}$ represents the individual opposite to $x_{ci,j,d}$, and $X$ and $OX$ are opposing populations.
Step 5: Merge the original and opposing populations, calculate each individual's fitness, and take the $N$ positions with the best fitness as the initial population.
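Steps 1–5 can be sketched as follows. This is an illustrative NumPy sketch rather than the authors' code; the function names and the range used to seed the chaotic sequence are assumptions:

```python
import numpy as np

def tent_map(z):
    """One iteration of the tent chaotic map (Eq (3.2))."""
    return np.where(z <= 0.7, z / 0.7, (10.0 / 3.0) * z * (1.0 - z))

def chaotic_opposition_init(n, dim, x_min, x_max, fitness, seed=None):
    """Steps 1-5: tent-map chaotic initialization plus opposition-based
    learning; the n fittest of the 2n candidates form the initial population."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: random seeds in (0,1), then one tent-map iteration.
    z = tent_map(rng.uniform(0.01, 0.99, (n, dim)))
    # Step 3, Eq (3.3): map the chaotic values into the search range.
    x = x_min * (1.0 - z) + x_max * z
    # Step 4, Eq (3.4): opposition-based candidates.
    ox = x_min * z + x_max * (1.0 - z)
    # Step 5: merge both populations and keep the n best (minimization).
    pool = np.vstack([x, ox])
    f = np.array([fitness(p) for p in pool])
    return pool[np.argsort(f)[:n]]
```

Note that every candidate and its opposite sum to $x_{min} + x_{max}$ componentwise, which is what spreads the merged pool symmetrically across the search range.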
Inspired by the PSO algorithm [27], each elephant is given a speed to simulate elephant motion, and a global speed strategy is proposed; it accelerates the convergence of the elephant herd while balancing the convergence speed through inertia weights, avoiding premature convergence of the population.
Speed is also initialized by chaotic mapping and opposition-based learning, with $v_{max,d} = 0.2(x_{max,d} - x_{min,d})$ and $v_{min,d} = -v_{max,d}$. The elephant's position is updated as follows:
$v_{new,ci,j} = \omega_k v_{ci,j} + \alpha \times (x_{best,ci} - x_{ci,j}) \times r$
$x_{new,ci,j} = x_{ci,j} + v_{new,ci,j}$  (3.5)
where $x_{new,ci,j}$ and $v_{new,ci,j}$ respectively represent the updated position and speed of elephant $j$ in clan $ci$, and $\omega_k$ represents the linearly decreasing inertia weight over the iterations:
$\omega_k = (\omega_{begin} - \omega_{end}) \frac{K_{max} - k}{K_{max}} + \omega_{end}$  (3.6)
where $\omega_{begin}$ and $\omega_{end}$ represent the weights at the beginning and end of the iteration, taken as $\omega_{begin} = 0.9$ and $\omega_{end} = 0.4$, and $K_{max}$ represents the maximum number of iterations.
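Eqs (3.5) and (3.6) can be sketched as a pair of hypothetical helpers (applying the speed limit $v_{max,d}$ by clipping is an assumption of the sketch, not stated in the paper):

```python
import numpy as np

def inertia(k, k_max, w_begin=0.9, w_end=0.4):
    """Eq (3.6): linearly decreasing inertia weight from w_begin to w_end."""
    return (w_begin - w_end) * (k_max - k) / k_max + w_end

def velocity_update(x, v, x_best, alpha, k, k_max, v_max, rng=None):
    """Eq (3.5): PSO-style speed/position update toward the clan matriarch."""
    rng = rng if rng is not None else np.random.default_rng()
    v_new = inertia(k, k_max) * v + alpha * (x_best - x) * rng.random(x.shape)
    # Speed limit v_max,d = 0.2 (x_max,d - x_min,d), enforced here by clipping.
    v_new = np.clip(v_new, -v_max, v_max)
    return x + v_new, v_new
```

Early in the run ($\omega_k \approx 0.9$) momentum dominates and the search explores; late in the run ($\omega_k \approx 0.4$) the attraction term dominates and the search exploits.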
In the above equation, $\alpha$ represents the learning step size from the elephant's current position toward the optimal position. When $\alpha$ is large, the global optimization ability is strong but the local optimization ability is poor; when $\alpha$ is small, the global optimization ability is poor but the local optimization ability is strong. In the standard EHO algorithm, $\alpha$ is fixed, so superior and inferior individuals in a clan have the same search range, which does not match the actual situation: a fitter individual is close to the optimal solution and should use a smaller learning step to enhance local optimization, whereas a less fit individual is far from the optimal solution and should use a larger learning step to enhance global optimization. Therefore, an adaptive learning step size strategy is proposed.
1) The optimal solution $x_{mbest}$ of the whole tribe requires less learning; its learning step takes a fixed value, $a_1 = 0.2$.
2) The optimal solution $x_{cbest,ci}$ of each clan learns from the optimal solution $x_{mbest}$ of the tribe. Its learning step $a_{cbest,ci}$ is expressed as follows:
$a_{cbest,ci} = a_2 \left( \frac{k}{K_{max}} \right) \times \frac{f(x_{cbest,ci})}{f(x_{mbest})}$  (3.7)
where $f(x_{cbest,ci})$ and $f(x_{mbest})$ represent the fitness values of $x_{cbest,ci}$ and $x_{mbest}$, respectively.
3) The other elephants $x_{other,ci,j}$ of the clan learn from the clan matriarch $x_{cbest,ci}$. The learning step $a_{other,ci,j}$ is expressed as follows:
$a_{other,ci,j} = a_3 \left( \frac{k}{K_{max}} \right) \times \frac{f(x_{other,ci,j})}{f(x_{cbest,ci})}$  (3.8)
where $f(x_{other,ci,j})$ and $f(x_{cbest,ci})$ represent the fitness values of $x_{other,ci,j}$ and $x_{cbest,ci}$, respectively, and $a_1 < a_2 < a_3$ are constants.
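Assuming a minimization problem with positive fitness values, Eqs (3.7) and (3.8) amount to the following sketch; the values of $a_2$ and $a_3$ below are illustrative placeholders chosen only to satisfy $a_1 < a_2 < a_3$:

```python
def adaptive_steps(k, k_max, f_cbest, f_mbest, f_other, a2=0.5, a3=0.8):
    """Eqs (3.7)-(3.8): fitness-ratio learning steps.

    For minimization with positive fitness, a worse (larger-f) individual
    yields a larger ratio and hence a larger step, so poor solutions search
    globally while good solutions refine locally."""
    a_cbest = a2 * (k / k_max) * (f_cbest / f_mbest)   # Eq (3.7)
    a_other = a3 * (k / k_max) * (f_other / f_cbest)   # Eq (3.8)
    return a_cbest, a_other
```

Note the $k / K_{max}$ factor also grows the steps over the run, which counteracts the shrinking fitness ratios as the herd converges.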
In a large elephant tribe, each individual has a unique fitness level. There are the excellent female leader, the matriarchs, the males and the other ordinary elephants, so the elephants are divided into four groups, each with its own learning style. The first group is the best female elephant of the tribe, represented by $x_{mbest}$. The second group comprises the best matriarchs of the clans, represented by $x_{cbest,ci}$. The third group comprises the other female elephants and young elephants, represented by $x_{other,ci,j}$. The fourth group comprises the worst individuals of the clans, represented by $x_{worst,ci}$.
The learning mode of the first group, $x_{mbest}$, is as follows:
As the best female elephant in the herd, her position and speed should be updated according to the matriarch information of all clans. The update mode is as follows:
$v_{new,mbest} = \omega_k \times v_{mbest} + a_1 (x_{center} - x_{mbest}) \times r$
$x_{new,mbest} = x_{mbest} + v_{new,mbest}$
$x_{center} = \frac{1}{n_{ci}} \sum_{i=1}^{n_{ci}} x_{cbest,ci}$  (3.9)
where $x_{new,mbest}$ and $v_{new,mbest}$ respectively represent the updated position and speed of $x_{mbest}$, $x_{center}$ represents the central location of all matriarchs and $n_{ci}$ here represents the total number of clans.
The learning mode of the second group, $x_{cbest,ci}$, is as follows:
The best matriarch of the clan should learn from the best female elephant of the tribe to achieve a larger search area. The update mode is as follows:
$v_{new,cbest,ci} = \omega_k v_{cbest,ci} + a_{cbest,ci} (x_{mbest} - x_{cbest,ci}) \times r$
$x_{new,cbest,ci} = x_{cbest,ci} + v_{new,cbest,ci}$  (3.10)
where $x_{new,cbest,ci}$ and $v_{new,cbest,ci}$ respectively represent the updated position and speed of $x_{cbest,ci}$.
The learning mode of the third group, $x_{other,ci,j}$, is as follows:
The other female elephants and young elephants should learn from the best matriarch of the clan to achieve a faster learning speed. The update mode is as follows:
$v_{new,other,ci,j} = \omega_k v_{other,ci,j} + a_{other,ci,j} (x_{cbest,ci} - x_{other,ci,j}) \times r$
$x_{new,other,ci,j} = x_{other,ci,j} + v_{new,other,ci,j}$  (3.11)
where $x_{new,other,ci,j}$ and $v_{new,other,ci,j}$ respectively represent the updated position and speed of $x_{other,ci,j}$.
In order to optimize the learning of the elephants in the clan, a mutual learning strategy is proposed. The ordinary elephants in the clan learn from the matriarch first, and then from others who are better than themselves.
Elephant $p$ is randomly selected from clan $ci$, and its new fitness value is calculated. If the fitness value of elephant $j$ is better than that of elephant $p$, elephant $p$ needs to learn from elephant $j$; otherwise, elephant $j$ needs to learn from elephant $p$. If the fitness values of elephants $j$ and $p$ are equal, no mutual learning occurs. The update mode is as follows:
$\hat{v}_{new,other,ci,j} = v_{new,other,ci,j} + r (x_{new,other,ci,j} - x_{new,other,ci,p}), \quad f(x_{new,other,ci,j}) < f(x_{new,other,ci,p})$
$\hat{v}_{new,other,ci,j} = v_{new,other,ci,j} + r (x_{new,other,ci,p} - x_{new,other,ci,j}), \quad f(x_{new,other,ci,j}) > f(x_{new,other,ci,p})$
$\hat{x}_{new,other,ci,j} = x_{new,other,ci,j} + \hat{v}_{new,other,ci,j}$  (3.12)
where $\hat{x}_{new,other,ci,j}$ and $\hat{v}_{new,other,ci,j}$ respectively represent the position and speed of the elephant after mutual learning.
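The mutual learning rule of Eq (3.12) can be sketched as a hypothetical helper; minimization is assumed, so "better" means a smaller fitness value:

```python
import numpy as np

def mutual_learning(x_j, v_j, f_j, x_p, f_p, rng=None):
    """Eq (3.12): elephant j adjusts its speed relative to a random peer p.

    If j is better it reinforces its own direction away from p; if p is
    better, j gains a component toward p; equal fitness means no learning."""
    rng = rng if rng is not None else np.random.default_rng()
    r = rng.random(x_j.shape)
    if f_j < f_p:                       # j is better than p
        v_hat = v_j + r * (x_j - x_p)
    elif f_j > f_p:                     # p is better than j
        v_hat = v_j + r * (x_p - x_j)
    else:                               # equal fitness: no mutual learning
        v_hat = v_j.copy()
    return x_j + v_hat, v_hat
```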
In the standard EHO algorithm, the separating operator is used to replace the worst solution in the clan. Meanwhile, the clan's optimal solution is continuously updated during iteration but not preserved. Therefore, an elite retention strategy was developed: the worst elephants in the fourth group learn directly from the matriarch, and the worst solution is replaced by the clan's optimal solution. At the same time, each elephant's fitness values before and after updating are compared, and the position with the better fitness is retained. This method preserves each generation's clan optimal solution and each individual elephant's best position, which can effectively accelerate the optimization speed of the herd.
The IEHO algorithm flowchart is shown in Figure 2.
The learning mode of the fourth group, $x_{worst,ci}$, is as follows: the worst male elephant learns directly from the matriarch. The update mode is as follows:
$x_{new,worst,ci} = x_{cbest,ci}$  (3.13)
Standard EHO mainly includes initialization, fitness evaluation, sorting and optimal selection, a clan updating operator and a separating operator. In the following, $M$ denotes the number of clans, $N$ the number of elephants in each clan, $D$ the dimension of the problem and $T$ the maximum number of iterations. The complexity of the initialization stage is $O(1)$. In the main loop, the sorting step costs $O(NM \log(NM))$, the clan updating operator costs $O(MN)$ and the separating operator costs $O(M)$. The overall complexity of the algorithm is therefore $O(1 + T(NM \log(NM) + MN + M))$.
The proposed IEHO algorithm mainly includes initialization, fitness evaluation, sorting and optimal selection, position and speed updating and a fitness comparison. In the initialization stage, the complexity of initializing the individuals in each dimension with opposition-based learning is $O(MND)$. In the main loop, the sorting step costs $O(NM \log(NM))$, updating the speeds and positions of the individuals costs $O(MN)$ and the fitness comparison costs $O(MN)$. The complexity of the whole algorithm is therefore $O(MN(D + T(\log(MN) + 2)))$.
To verify the effectiveness of the IEHO algorithm, 12 typical benchmark functions proposed in [28] were selected for testing, as shown in Table 2. $f_1$–$f_6$ are unimodal functions with a single global optimum, used to test the convergence accuracy of the algorithm; $f_7$–$f_{12}$ are multimodal functions with multiple local optima, used to test global search capability, local exploitation and the ability to escape local optima.
Function | Expression | Search range | Extremum
Schwefel 2.22 | $f_1(x) = \sum_{i=1}^n |x_i| + \prod_{i=1}^n |x_i|$ | [−10, 10] | 0
Schwefel 2.21 | $f_2(x) = \max\{|x_i|,\ 1 \le i \le n\}$ | [−100, 100] | 0
Sum of different powers | $f_3(x) = \sum_{i=1}^n |x_i|^{i+1}$ | [−1, 1] | 0
Zakharov | $f_4(x) = \sum_{i=1}^n x_i^2 + \left(\sum_{i=1}^n 0.5 i x_i\right)^2 + \left(\sum_{i=1}^n 0.5 i x_i\right)^4$ | [−5, 10] | 0
Sphere | $f_5(x) = \sum_{i=1}^n x_i^2$ | [−100, 100] | 0
Qing | $f_6(x) = \sum_{i=1}^n (x_i^2 - i)^2$ | [−500, 500] | 0
Griewank | $f_7(x) = \frac{1}{4000} \sum_{i=1}^n x_i^2 - \prod_{i=1}^n \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | [−300, 300] | 0
Rastrigin | $f_8(x) = \sum_{i=1}^n [x_i^2 - 10 \cos(2\pi x_i) + 10]$ | [−5.12, 5.12] | 0
Levy | $f_9(x) = \sin^2(\pi w_1) + \sum_{i=1}^{n-1} (w_i - 1)^2 [1 + 10 \sin^2(\pi w_i + 1)] + (w_n - 1)^2 [1 + \sin^2(2\pi w_n)]$, $w_i = 1 + \frac{x_i - 1}{4}$ | [−10, 10] | 0
Schaffer F7 | $f_{10}(x) = \left[\frac{1}{n-1} \sum_{i=1}^{n-1} \left(\sqrt{s_i} (\sin(50 s_i^{0.2}) + 1)\right)\right]^2$, $s_i = \sqrt{x_i^2 + x_{i+1}^2}$ | [−10, 10] | 0
Ackley | $f_{11}(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^n x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^n \cos(2\pi x_i)\right) + 20 + e$ | [−32, 32] | 0
Salomon | $f_{12}(x) = 1 - \cos\left(2\pi \sqrt{\sum_{i=1}^n x_i^2}\right) + 0.1 \sqrt{\sum_{i=1}^n x_i^2}$ | [−100, 100] | 0
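A few of the benchmark functions in Table 2 translate directly into code (a sketch; all four shown here have global minimum 0 at the origin):

```python
import numpy as np

def sphere(x):        # f5
    return float(np.sum(x ** 2))

def rastrigin(x):     # f8
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):        # f11
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def salomon(x):       # f12
    r = np.sqrt(np.sum(x ** 2))
    return float(1 - np.cos(2 * np.pi * r) + 0.1 * r)
```

These definitions make it easy to reproduce a comparison like the one below: run each optimizer repeatedly on each function and record the mean and standard deviation of the best values found.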
Since five improvement strategies are proposed, a strategy-combination comparison experiment was conducted to demonstrate the effectiveness of combining all five strategies in IEHO, rather than relying on partial combinations of strategies applied to the EHO algorithm. Schwefel 2.22 and Qing were selected as the benchmark test functions, and the experimental results are shown in Figure 3. EHO + S1 denotes EHO combined with Strategy 1; EHO + S12 adds Strategy 2; EHO + S123 adds Strategy 3; EHO + S1234 adds Strategy 4; and EHO + S12345 combines all five strategies. Furthermore, to assess the convergence speed, convergence accuracy and stability of IEHO, it was compared with the multi-mechanism hybrid EHO (MCEHO) algorithm proposed in [21], the original EHO algorithm [22], the PSO algorithm [29] and the artificial bee colony (ABC) algorithm [30,31].
To ensure a fair and accurate comparison, the common parameters of the five algorithms were set identically: the population size was 50 and the maximum number of iterations was 500. For EHO, MCEHO and IEHO, we set $n_{clan} = 5$ and $n_{ci} = 10$. Each algorithm was run independently 30 times on each test function in 30 and 100 dimensions, and the mean and standard deviation of the optimal values were calculated to evaluate the convergence accuracy and stability of each algorithm. The simulation environment was a Windows 10 operating system with an i5-6500 CPU (3.2 GHz), 8 GB of memory and MATLAB 2016a.
As Figure 3 shows, EHO performance gradually improved as strategies were added: the speed and adaptive learning step strategies effectively improve the convergence speed, the group learning and elite retention strategies effectively improve the convergence accuracy, and the combination of all five strategies (IEHO) performed best. IEHO still follows the nomadic behavior of the elephant herd; it only improves the search method of the standard EHO during iteration, making the herd more intelligent, so it is essentially still EHO. These five strategies may also be useful for improving the performance of other optimization algorithms.
Table 3 shows the means and standard deviations of the fitness values of the five algorithms on the 12 test functions for a fixed number of iterations in the low-dimensional (30) and high-dimensional (100) cases, respectively. The performance of the five algorithms on each test function is shown in Figure 6, together with the three-dimensional surface of each of the 12 test functions.
f | D | PSO | ABC | EHO | MCEHO | IEHO |
Ave (Std) | Ave (Std) | Ave (Std) | Ave (Std) | Ave (Std) | ||
30 | 0.981 (1.13) | 9.04 × 10−19 (3.98 × 10−19) | 6.38 × 10−14 (5.18 × 10−14) | 1.48 × 10−17 (1.42 × 10−17) | 1.61 × 10−26 (2.23 times 10−26) | |
f1 | 100 | 2.43 (3.45) | 7.85 × 10−18 (6.56 × 10−18) | 3.89 × 10−13 (2.18 × 10−13) | 2.76 × 10−16 (1.43 × 10−16) | 4.91 × 10−25 (3.29 × 10−25) |
30 | 6.59 (2.37) | 8.54 × 10−19 (4.74 × 10−19) | 5.29 × 10−18 (4.21 × 10−19) | 2.31 × 10−23 (1.78 × 10−23) | 1.76 × 10−26 (2.48 × 10−26) | |
f2 | 100 | 30.1 (28.9) | 7.31 × 10−18 (5.98 × 10−18) | 3.88 × 10−17 (3.29 × 10−17) | 3.78 × 10−22 (2.91 × 10−22) | 4.79 × 10−25 (2.77 × 10−25) |
30 | 9.13 × 10−12 (3.02 × 10−11) | 0.00 (0.00) | 0.0 (0.00) | 0.00 (0.00) | 0.00 (0.00) | |
f3 | 100 | 8.67 × 10−10 (7.54 × 10−10) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) |
30 | 57.8 (24.7) | 2.09 × 10−27 (1.15 × 10−27) | 4.11 × 10−30 (3.78 × 10−30) | 8.85 × 10−36 (2.86 × 10−36) | 3.21 × 10−47 (1.76 × 10−47) | |
f4 | 100 | 3.78 (1.88) | 4.93 × 10−26 (2.78 × 10−26) | 2.78 × 10−29 (1.66 × 10−29) | 4.66 × 10−35 (4.01 × 10−35) | 3.76 × 10−47 (2.45 × 10−47) |
30 | 7.75 (1.95 × 10−10) | 2.41 × 10−14 (2.51 × 10−14) | 3.39 × 10−16 (2.99 × 10−16) | 5.54 × 10−18 (1.68 × 10−18) | 2.18 × 10−28 (4.67 × 10−27) | |
f5 | 100 | 9.36 (6.5) | 6.89 × 10−13 (6.01 × 10−13) | 4.18 × 10−15 (3.32 × 10−15) | 8.96 × 10−17 (9.03 × 10−17) | 6.69 × 10−27 (7.43 × 10−27) |
30 | 2.12 × 107 (1.42 × 107) | 4.46 × 108 (3.74 × 108) | 3.89 × 107 (4.18 × 107) | 4.01 × 105 (9.73 × 104) | 6.58 (2.14 × 102) | |
f6 | 100 | 3.78 × 108 (2.38 × 108) | 5.67 × 109 (5.13) | 5.88 × 108 (4.99 × 108) | 3.09 × 106 (2.98 × 106) | 8.37 × 102 (4.76 × 102) |
30 | 2.43 × 10−4 (1.03 × 10−4) | 8.63 × 10−6 (2.54 × 10−6) | 5.17 × 10−8 (4.11 × 10−8) | 1.96 × 10−10 (1.03 × 10−10) | 4.58 × 10−16 (2.17 × 10−20) | |
f7 | 100 | 8.49 × 10−3 (6.29 × 10−3) | 3.71 × 10−5 (4.78 × 10−5) | 2.97 × 10−7 (8.92 × 10−8) | 3.88 × 10−9 (1.19 × 10−9) | 8.73 × 10−15 (2.19 × 10−17)
30 | 42.1 (15.4) | 3.75 × 10−5 (9.06 × 10−6) | 3.88 × 10−5 (2.17 × 10−5) | 3.19 × 10−6 (7.42 × 10−7) | 6.77 × 10−15 (5.23 × 10−16) | |
f8 | 100 | 49.4 (40.1) | 5.13 × 10−4 (4.99 × 10−4) | 5.22 × 10−4 (4.89 × 10−4) | 7.81 × 10−5 (5.95 × 10−5) | 8.98 × 10−14 (5.22 × 10−16) |
30 | 1.72 (0.137) | 2.31 (0.79) | 0.218 (0.133) | 1.25 (3.1 × 10−2) | 1.62 × 10−2 (0.79 × 10−2) | |
f9 | 100 | 4.09 (0.393) | 4.38 (8.21) | 8.22 (7.17) | 4.81 (3.97) | 1.19 (0.482) |
30 | 7.65 (0.798) | 2.21 × 10−9 (1.23 × 10−10) | 2.19 × 10−10 (2.18 × 10−10) | 2.23 × 10−12 (1.13 × 10−12) | 1.84 × 10−20 (1.07 × 10−20) | |
f10 | 100 | 24.3 (3.25) | 3.34 × 10−8 (5.88 × 10−9) | 5.39 × 10−9 (4.18 × 10−9) | 4.78 × 10−11 (3.89 × 10−11) | 1.09 × 10−10 (0.48 × 10−19) |
30 | 1.93 (0.678) | 8.45 × 10−16 (8.12 × 10−16) | 2.31 × 10−13 (5.19 × 10−12) | 3.94 × 10−14 (3.86 × 10−14) | 8.13 × 10−18 (8.01 × 10−18) | |
f11 | 100 | 2.19 (49.1) | 3.99 × 10−18 (4.13 × 10−17) | 3.11 × 10−12 (2.91 × 10−11) | 6.83 × 10−14 (5.39 × 10−15) | 3.88 × 10−17 (0.19 × 10−17) |
30 | 0.313 (4.38) | 4.53 × 10−9 (3.98 × 10−9) | 3.87 × 10−10 (3.09 × 10−10) | 5.76 × 10−21 (4.91 × 10−21) | 8.15 × 10−27 (7.16 × 10−27) | |
f12 | 100 | 1.19 (0.159) | 2.17 × 10−8 (1.88 × 10−8) | 6.23 × 10−15 (6.61 × 10−15) | 7.19 × 10−20 (5.72 × 10−20) | 6.35 × 10−26 (5.39 × 10−26) |
The left plots in Figures 4–6 show the three-dimensional spatial distributions of the search landscapes for the 12 test functions; X1 and X2 span the function's search interval, and the vertical axis is the function value. The graphs on the right are the convergence curves of the intelligent algorithms on the test functions; the horizontal axis is the number of iterations, and the vertical axis is the logarithm of the optimal value.
As can be seen in Figures 4–6, after 500 iterations, all five algorithms were able to converge to the extreme value of each test function, with different convergence speeds and accuracies; PSO performed worst. For the unimodal functions f1−f6, the initial convergence rates of EHO and IEHO were faster than that of MCEHO. This shows that the initial population strategy based on chaotic mapping and opposition-based learning yields better positions and fitness values for the initial population and improves the global search ability. The convergence speed of the IEHO algorithm improved after 100 iterations, while that of the MCEHO algorithm improved after 150 iterations. The convergence speeds of EHO and ABC were similar, with EHO slightly faster. The IEHO algorithm was able to converge to the minimum after about 100 iterations, with higher convergence accuracy than the other algorithms, indicating that the speed strategy can effectively accelerate population optimization. For the unimodal function f3, the IEHO algorithm converged to 0 with a standard deviation of 0.
For the multimodal functions f7−f12, the convergence speed and precision of EHO were better than those of PSO, indicating that EHO has clear advantages over PSO for solving multi-extremum problems. The EHO and ABC algorithms had similar convergence accuracies, except on the test functions f6 and f10; on the other functions, EHO was superior to ABC, with a faster convergence speed and higher convergence accuracy. However, EHO is susceptible to the local optima problem; MCEHO can escape some local optima but still struggles to find the global optimum. The optimization speed and precision of the IEHO algorithm were superior to those of the MCEHO algorithm, indicating that adaptive-step grouping learning and the elite retention strategy effectively improve the algorithm's ability to locate the global optimum and escape local optima.
As can be seen in Table 3, for both the unimodal and multimodal test functions, the IEHO algorithm had the lowest means and standard deviations among all of the algorithms; its mean value was closest to the theoretical value, and it was able to find the minimum point. As the number of dimensions of a test function increases, the model complexity grows rapidly and optimization becomes more difficult. To further validate IEHO performance on complex high-dimensional problems, we added a 100-dimensional test function experiment; the parameter settings were the same as those for the lower dimension. According to the data in Table 3, the IEHO algorithm performed poorly for f6, with low convergence accuracy. On the other test functions, it maintained good optimization stability and was superior to the other four algorithms, which verifies the advantages of IEHO as a tool for solving high-dimensional problems. From the above analysis, it can be seen that the IEHO algorithm presented in this paper achieves better convergence, in terms of both mean and standard deviation, on the low- and high-dimensional test functions, and has a stronger ability to find the optima than PSO, ABC, EHO and MCEHO.
In a motor system, energy is lost as current is transmitted from the power supply through the motor to the load, and the energy efficiency of the motor is affected by many factors [32]. In this study, the power quality, motor characteristics and load characteristics of the motor system were taken as the first-level indexes. From these three aspects, seven secondary indexes with a large impact on motor energy efficiency were selected to obtain the motor energy efficiency rating system shown in Figure 7.
The motor energy efficiency has been divided into 10 grades according to the national standards, the upper and lower limits of each indicator's deviation and the impact on the motor energy efficiency. The quantified motor energy efficiency grading standards are based on the parameter values of various indicators at different levels, as shown in Table 4.
Grade | Load factor | Transmission efficiency | Harmonic distortion rate % | Three-phase voltage unbalance degree % | Voltage offset % | Frequency error % | Rated efficiency % |
1 | (0.70, 0.80] | >0.98 | ≤1.00 | ≤1.00 | ≤2.00 | ≤0.10 | >89.50
2 | (0.80, 0.90] | (0.96, 0.98] | (1.00, 2.00] | (1.00, 2.00] | (2.00, 3.00] | (0.10, 0.20] | (88.30, 89.5] |
3 | (0.60, 0.70] | (0.94, 0.96] | (2.00, 3.00] | (2.00, 3.00] | (3.00, 4.00] | (0.20, 0.30] | (87.1, 88.3] |
4 | (0.90, 1.00] | (0.92, 0.94] | (3.00, 4.00] | (3.00, 4.00] | (4.00, 5.00] | (0.30, 0.40] | (85.9, 87.1] |
5 | (0.50, 0.60] | (0.90, 0.92] | (4.00, 5.00] | (4.00, 5.00] | (5.00, 5.50] | (0.40, 0.50] | (84.7, 85.9] |
6 | (0.45, 0.50] | (0.89, 0.90] | (5.00, 5.50] | (5.00, 5.50] | (5.50, 6.00] | (0.50, 0.55] | (83.5, 84.7] |
7 | (0.40, 0.45] | (0.88, 0.89] | (5.50, 6.00] | (5.50, 6.00] | (6.00, 6.50] | (0.55, 0.60] | (82.30, 83.50] |
8 | (0.35, 0.40] | (0.87, 0.88] | (6.00, 6.50] | (6.00, 6.50] | (6.50, 7.00] | (0.60, 0.65] | (81.10, 82.30] |
9 | (0.30, 0.35] | (0.86, 0.87] | (6.50, 7.00] | (6.50, 7.00] | (7.00, 7.50] | (0.65, 0.70] | (80.00, 81.10] |
10 | ≤0.30 | ≤0.86 | >7.00 | >7.00 | >7.50 | >0.70 | ≤80.00
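The interval grading in Table 4 amounts to a lookup over half-open intervals (lo, hi]. A minimal sketch for a single index (the load-factor column only; the intervals are copied from the table, while the function and constant names are hypothetical):

```python
# Hypothetical sketch: map a load-factor reading to its energy-efficiency
# grade using the half-open intervals (lo, hi] from the load-factor
# column of Table 4.
LOAD_FACTOR_GRADES = [
    (0.70, 0.80, 1), (0.80, 0.90, 2), (0.60, 0.70, 3), (0.90, 1.00, 4),
    (0.50, 0.60, 5), (0.45, 0.50, 6), (0.40, 0.45, 7), (0.35, 0.40, 8),
    (0.30, 0.35, 9),
]

def load_factor_grade(x):
    """Return the grade whose interval (lo, hi] contains x; grade 10 otherwise."""
    for lo, hi, grade in LOAD_FACTOR_GRADES:
        if lo < x <= hi:
            return grade
    return 10  # readings at or below 0.30 fall in the worst grade

print(load_factor_grade(0.75))  # grade 1
```

The full rating system would carry one such interval list per secondary index and combine the seven per-index grades.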
The data of the training model were randomly generated. A total of 1000 samples were randomly generated in each indicator interval and marked as {(xi1, xi2, …, xi7), yi}, where xij represents the random number, j = 1, 2, …, 7 represents the seven characteristic index components of the corresponding sample, i = 1, 2, …, 1000 represents the corresponding 1000 sample quantity and yi = 1, 2, …, 10 represents the corresponding sample label. First, the training set and test set data were divided and preprocessed; then, the IEHO-SVM model was trained with the training data; finally, the model was verified with the test data.
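The sample-generation step described above can be sketched as follows. The interval table here is illustrative (only two grades and their seven index intervals are shown), and all names are hypothetical:

```python
import random

# Hypothetical sketch of the sample-generation step: for each grade, draw
# the seven index values uniformly from that grade's intervals and attach
# the grade as the label.
INTERVALS = {  # grade -> seven (lo, hi) intervals, one per index
    1: [(0.70, 0.80), (0.98, 1.00), (0.0, 1.0), (0.0, 1.0),
        (0.0, 2.0), (0.0, 0.10), (89.5, 100.0)],
    2: [(0.80, 0.90), (0.96, 0.98), (1.0, 2.0), (1.0, 2.0),
        (2.0, 3.0), (0.10, 0.20), (88.3, 89.5)],
}

def make_samples(n_per_grade=1000, seed=0):
    rng = random.Random(seed)
    samples = []
    for grade, bounds in INTERVALS.items():
        for _ in range(n_per_grade):
            x = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
            samples.append((x, grade))
    return samples

data = make_samples(n_per_grade=3)
print(len(data), data[0][1])  # 6 samples, the first labeled grade 1
```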
The SVM is a two-class classification model [33], defined as a linear classifier with the maximum margin in the feature space. Unlike the perceptron, the SVM learning strategy is to find the separating hyperplane with the maximum margin, which is essentially the problem of solving a convex quadratic program. For one-class classification models, support vector data description is usually used [34,35].
In practice, data are rarely completely linearly separable. By introducing a "soft margin" that allows some points to be misclassified, approximately linearly separable data can be handled. Some linearly inseparable problems become separable after a nonlinear kernel function maps the original feature space to a higher-dimensional Hilbert space [36]; a nonlinear SVM can then be obtained from a linear SVM, so the nonlinearly separable problem is transformed into a linearly separable one.
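The kernel idea can be illustrated with a toy one-dimensional example: points that no single threshold separates become separable after lifting them with the map φ(x) = x². This is only an illustration of the general principle, not the paper's method, and all names are hypothetical:

```python
# Points labeled +1 lie inside [-1, 1] and points labeled -1 lie outside,
# so no single threshold on x separates the classes; lifting each point
# to x^2 makes them separable at x^2 = 1.
pts = [(-3.0, -1), (-0.5, 1), (0.2, 1), (2.5, -1)]

def separable_by_threshold(values_labels):
    """True if some threshold puts all +1 points on one side and all -1 on the other."""
    pos = [v for v, y in values_labels if y == 1]
    neg = [v for v, y in values_labels if y == -1]
    return max(pos) < min(neg) or max(neg) < min(pos)

raw = [(x, y) for x, y in pts]
lifted = [(x * x, y) for x, y in pts]
print(separable_by_threshold(raw))     # False: labels interleave on x
print(separable_by_threshold(lifted))  # True: x^2 separates the classes
```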
Taking a nonlinear SVM as an example, the learning algorithm is as follows:
Input: Training set T={(x1,y1),(x2,y2),…,(xn,yn)}, where xi∈R,yi∈{−1,1}. Output: Separation hyperplane and classification decision function.
Step 1: Select a proper Gaussian kernel width σ and penalty factor c and construct and solve convex quadratic programming problems.
maxa ∑ni=1ai − (1/2)∑ni=1∑nj=1 aiajyiyjκ(xi,xj) s.t. ∑ni=1aiyi = 0, 0 ≤ ai ≤ c | (5.1)
Get the optimal solution a∗=(a∗1,a∗2,…,a∗n)T.
Step 2: Select a component a∗j of a∗ satisfying 0 < a∗j < c and calculate w∗ and b∗:
w∗ = ∑ni=1a∗iyiϕ(xi), b∗ = yj − ∑ni=1a∗iyiκ(xi,xj), κ(xi,xj) = exp(−‖xi−xj‖2/(2σ2)) | (5.2)
Step 3: Compute the separation hyperplane:
w∗T⋅ϕ(x)+b∗=0 | (5.3) |
Step 4: Compute the classification decision function:
f(x)=sign(w∗Tϕ(x)+b∗) | (5.4) |
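Steps 1–4 can be sketched end to end once the optimal a∗ and b∗ are known. A minimal, hypothetical implementation of the Gaussian kernel of Eq (5.2) and the decision function of Eq (5.4), assuming a toy set of already-trained support vectors:

```python
import math

def gaussian_kernel(u, v, sigma=1.0):
    """kappa(u, v) = exp(-||u - v||^2 / (2 sigma^2)), as in Eq (5.2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def decision(x, support, sigma=1.0, b=0.0):
    """f(x) = sign(sum_i a_i y_i kappa(x_i, x) + b), as in Eq (5.4).
    `support` holds (a_i, y_i, x_i) triples from a trained model."""
    s = sum(a * y * gaussian_kernel(xi, x, sigma) for a, y, xi in support) + b
    return 1 if s >= 0 else -1

# Toy model with two support vectors of opposite labels:
support = [(1.0, 1, (0.0, 0.0)), (1.0, -1, (4.0, 4.0))]
print(decision((0.5, 0.5), support))   # closer to the +1 vector -> 1
print(decision((3.5, 3.5), support))   # closer to the -1 vector -> -1
```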
According to the above analysis, the main parameters affecting SVM learning ability are the penalty factor c and the Gaussian kernel width σ. The selection of these parameters has an important influence on the SVM model's results, determining its learning ability and generalization ability. The penalty factor c represents the tolerance of error: a larger c means a higher likelihood of overfitting, and a smaller c means a higher likelihood of underfitting. The Gaussian kernel width σ, which is inversely related to the number of support vectors, affects the model training and prediction speed. Traditional parameter selection methods include manual selection and grid search, which are more suitable for small data sets. As the amount of data increases, the computation and run time also increase, and these methods fall into local optima more easily.
In this study, the IEHO algorithm was used to optimize SVM parameters c and σ, and the algorithm was applied to the motor energy efficiency rating. The specific algorithm process is shown in Figure 7. The position vector for each elephant corresponds to a group of specific parameter combinations. The evaluation function for its location fitness is defined as follows:
F = (1/N)∑Ni=1‖Yi − Y∗i‖2 | (5.5)
where N represents the number of samples; Yi and Y∗i represent the actual value and the predicted value of SVM, respectively.
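Eq (5.5) is simply the mean squared error between the actual and predicted labels; a minimal sketch (the function name is hypothetical):

```python
def fitness(actual, predicted):
    """F = (1/N) * sum_i ||Y_i - Y_i*||^2, Eq (5.5): mean squared error
    between the true grades and the SVM's predicted grades."""
    n = len(actual)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n

print(fitness([1, 2, 3, 4], [1, 2, 4, 4]))  # one miss by one grade -> 0.25
```

Each elephant's position (a candidate (c, σ) pair) would be scored by training an SVM with those parameters and evaluating this fitness.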
In order to evaluate the performance of the algorithm and its robustness on the training data, the following two strategies were performed.
1) A stratified sampling method was used to divide the data set into four different proportions: (80%, 20%), (70%, 30%), (60%, 40%) and (50%, 50%).
2) Using the same data, the IEHO-SVM algorithm proposed in this paper was compared with the original SVM and k-nearest neighbor [37] algorithms.
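Strategy 1) can be sketched as a stratified splitter that preserves each grade's proportion in both subsets; this helper is hypothetical, not the authors' code:

```python
import random
from collections import defaultdict

def stratified_split(samples, train_frac, seed=0):
    """Split (x, label) pairs so every label keeps roughly the same
    train/test proportion, mirroring the stratified sampling above."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for s in samples:
        by_label[s[1]].append(s)
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        k = int(round(train_frac * len(group)))
        train.extend(group[:k])
        test.extend(group[k:])
    return train, test

data = [((i,), i % 2) for i in range(10)]  # 5 samples per label
train, test = stratified_split(data, 0.8)
print(len(train), len(test))  # 8 2
```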
The classification accuracies of the three algorithms on different training sets and test sets are shown in Table 5. The classification results for the three algorithms on the 50% training sets and test sets are shown in Figure 8.
Set of instances | KNN | SVM | IEHO-SVM |
80% Train, 20% Test | 90% | 87% | 98% |
70% Train, 30% Test | 91% | 86% | 99% |
60% Train, 40% Test | 90% | 88% | 100% |
50% Train, 50% Test | 92% | 89% | 100% |
As we can see in Table 5 and Figure 8, the accuracy of the IEHO-SVM algorithm on the different training and test sets was higher than 98%, whereas that of the KNN algorithm was around 91% and that of the original SVM algorithm was around 87%. This experiment demonstrates that the robustness of the IEHO-SVM algorithm on training data is not limited to a particular training-test split; better results were achieved in all of these cases. Thus, the IEHO-SVM algorithm can grade the motor energy efficiency with high accuracy and reliability.
In the past, research on motor energy efficiency mainly focused on the rated operating condition, but there were few studies on motor energy efficiency performed under off-conditions. The correct grade of the motor energy efficiency can provide scientific guidance for motor energy-saving upgrades, fault damage and the elimination of backward motors. To determine an accurate grade of the motor energy efficiency, we selected seven evaluation indexes that have a great influence on motor energy efficiency from the three aspects of power quality, load characteristics, and motor characteristics, and established a quantitative classification standard.
To eliminate the disadvantages of the original EHO algorithm, such as a slow convergence speed, a low convergence accuracy and susceptibility to the local optima problem, we developed five improvement measures. A new EHO algorithm, i.e., the IEHO algorithm, was designed and compared with other algorithms. The results show that the IEHO algorithm has a higher convergence precision and faster convergence speed than the other algorithms, can jump out of local optima and exhibits better optimization performance.
The IEHO algorithm was used to optimize the SVM parameters, and the IEHO-SVM model was established; it was subsequently applied to the motor energy efficiency rating system. Compared with the KNN and original SVM algorithms, the experimental results show that the IEHO-SVM algorithm can accurately grade the motor energy efficiency with an accuracy higher than 98%, which was better than the KNN and original SVM algorithms.
The IEHO algorithm and IEHO-SVM model proposed in this paper have strong portability, and they can be used not only for motor energy efficiency rating but also for other optimization problems. The motor energy efficiency grading model established in this paper is simple and easy to realize, has high practical value and is of great significance for realizing energy conservation and emission reduction and improving energy utilization.
Possible opportunities and future proposals are as follows. Orthogonal learning could be tried as an initialization strategy, and the location update could be combined with quantum computing, parallel computing and co-evolutionary approaches. The IEHO algorithm presented in this paper has been demonstrated to effectively solve static optimization problems; developing dynamic optimization and multi-objective variants that can handle more kinds of optimization problems can be attempted next.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Chaotic mapping function | Expression |
Logistic | Zk+1=λZk(1−Zk),0<λ<4 |
ICMIC | Zk+1=sin(α/Zk),α∈(0,∞) |
Chebyshev | Zk+1=cos(kcos−1Zk),k=4,Zk∈(0,1) |
Tent | Zk+1 = Zk/0.7, Zk∈(0, 0.7]; Zk+1 = (10/3)Zk(1−Zk), Zk∈(0.7, 1)
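Seeding a population from a chaotic map, as in the initialization strategy, can be sketched with the logistic map from the table above; the starting value and λ here are illustrative:

```python
def logistic_map(z0, n, lam=3.99):
    """Iterate Z_{k+1} = lam * Z_k * (1 - Z_k), 0 < lam < 4 (logistic map)."""
    seq, z = [], z0
    for _ in range(n):
        z = lam * z * (1.0 - z)
        seq.append(z)
    return seq

def scale(seq, lo, hi):
    """Map chaotic values from (0, 1) onto a search interval [lo, hi],
    as done when seeding an initial population."""
    return [lo + z * (hi - lo) for z in seq]

pop = scale(logistic_map(0.37, 5), -10.0, 10.0)
print(all(-10.0 <= x <= 10.0 for x in pop))  # True
```

Opposition-based learning would then evaluate each seeded position against its mirror point lo + hi − x and keep the fitter of the two.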
Function | Expression | Search range | Extremum |
Schwefel2.22 | f1(x)=∑ni=1|xi|+∏ni=1|xi| | [−10.00, 10.00] | 0 |
Schwefel2.21 | f2(x)=max{|xi|,1≤i≤n} | [−100.00,100.00] | 0 |
Sum of different powers | f3(x)=∑ni=1|xi|i+1 | [−1.00, 1.00] | 0 |
Zakharov | f4(x)=∑ni=1x2i+(∑ni=10.5ixi)2+(∑ni=10.5ixi)4 | [−5.00, 10.00] | 0 |
Sphere | f5(x)=∑ni=1x2i | [−100.00,100.00] | 0 |
Qing | f6(x)=∑ni=1(x2i−i)2 | [−500.00,500.00] | 0 |
Griewank | f7(x)=(1/4000)∑ni=1x2i−∏ni=1cos(xi/√i)+1 | [−300.00, 300.00] | 0
Rastrigin | f8(x)=∑ni=1[x2i−10cos(2πxi)+10] | [−5.12, 5.12] | 0 |
Levy | f9(x)=sin2(πw1)+∑n−1i=1(w1−1)2[1+10sin2(πwi+1)]+(wn−1)2[1+sin2(2πwn)]] | [−10.00., 10.00] | 0 |
Schaffers F7 | f10(x)=[1n−1∑n−1i=1(√si(sin(50s0.2i)+1))]2,si=√x2i+x2i+1 | [−10.00, 10.00] | 0 |
Ackley | f11(x)=−20exp(−0.2√1n∑nix2i−exp[1n∑nicos(2πxi)]+20+e | [−32.00, 32.00] | 0 |
Salomon | f12(x)=1−cos(2π√∑ni=1x2i)+0.1√∑ni=1x2i | [−100.00,100.00] | 0 |
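Several of the benchmark functions above translate directly into code, and the listed extremum of 0 at the origin gives an easy sanity check. A sketch of $f_5$ (Sphere), $f_7$ (Griewank), $f_8$ (Rastrigin), and $f_{11}$ (Ackley):

```python
import math

def sphere(x):
    # f5: sum of squares; global minimum 0 at the origin
    return sum(xi * xi for xi in x)

def rastrigin(x):
    # f8: highly multimodal; global minimum 0 at the origin
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def griewank(x):
    # f7: the cosine product couples the dimensions; minimum 0 at the origin
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def ackley(x):
    # f11: nearly flat outer region with a deep hole at the origin
    n = len(x)
    rms = math.sqrt(sum(xi * xi for xi in x) / n)
    mean_cos = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * rms) - math.exp(mean_cos) + 20.0 + math.e
```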
| f | D | PSO Ave (Std) | ABC Ave (Std) | EHO Ave (Std) | MCEHO Ave (Std) | IEHO Ave (Std) |
| --- | --- | --- | --- | --- | --- | --- |
| f1 | 30 | 0.981 (1.13) | 9.04 × 10^−19 (3.98 × 10^−19) | 6.38 × 10^−14 (5.18 × 10^−14) | 1.48 × 10^−17 (1.42 × 10^−17) | 1.61 × 10^−26 (2.23 × 10^−26) |
| f1 | 100 | 2.43 (3.45) | 7.85 × 10^−18 (6.56 × 10^−18) | 3.89 × 10^−13 (2.18 × 10^−13) | 2.76 × 10^−16 (1.43 × 10^−16) | 4.91 × 10^−25 (3.29 × 10^−25) |
| f2 | 30 | 6.59 (2.37) | 8.54 × 10^−19 (4.74 × 10^−19) | 5.29 × 10^−18 (4.21 × 10^−19) | 2.31 × 10^−23 (1.78 × 10^−23) | 1.76 × 10^−26 (2.48 × 10^−26) |
| f2 | 100 | 30.1 (28.9) | 7.31 × 10^−18 (5.98 × 10^−18) | 3.88 × 10^−17 (3.29 × 10^−17) | 3.78 × 10^−22 (2.91 × 10^−22) | 4.79 × 10^−25 (2.77 × 10^−25) |
| f3 | 30 | 9.13 × 10^−12 (3.02 × 10^−11) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) |
| f3 | 100 | 8.67 × 10^−10 (7.54 × 10^−10) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) |
| f4 | 30 | 57.8 (24.7) | 2.09 × 10^−27 (1.15 × 10^−27) | 4.11 × 10^−30 (3.78 × 10^−30) | 8.85 × 10^−36 (2.86 × 10^−36) | 3.21 × 10^−47 (1.76 × 10^−47) |
| f4 | 100 | 3.78 (1.88) | 4.93 × 10^−26 (2.78 × 10^−26) | 2.78 × 10^−29 (1.66 × 10^−29) | 4.66 × 10^−35 (4.01 × 10^−35) | 3.76 × 10^−47 (2.45 × 10^−47) |
| f5 | 30 | 7.75 (1.95 × 10^−10) | 2.41 × 10^−14 (2.51 × 10^−14) | 3.39 × 10^−16 (2.99 × 10^−16) | 5.54 × 10^−18 (1.68 × 10^−18) | 2.18 × 10^−28 (4.67 × 10^−27) |
| f5 | 100 | 9.36 (6.5) | 6.89 × 10^−13 (6.01 × 10^−13) | 4.18 × 10^−15 (3.32 × 10^−15) | 8.96 × 10^−17 (9.03 × 10^−17) | 6.69 × 10^−27 (7.43 × 10^−27) |
| f6 | 30 | 2.12 × 10^7 (1.42 × 10^7) | 4.46 × 10^8 (3.74 × 10^8) | 3.89 × 10^7 (4.18 × 10^7) | 4.01 × 10^5 (9.73 × 10^4) | 6.58 (2.14 × 10^2) |
| f6 | 100 | 3.78 × 10^8 (2.38 × 10^8) | 5.67 × 10^9 (5.13) | 5.88 × 10^8 (4.99 × 10^8) | 3.09 × 10^6 (2.98 × 10^6) | 8.37 × 10^2 (4.76 × 10^2) |
| f7 | 30 | 2.43 × 10^−4 (1.03 × 10^−4) | 8.63 × 10^−6 (2.54 × 10^−6) | 5.17 × 10^−8 (4.11 × 10^−8) | 1.96 × 10^−10 (1.03 × 10^−10) | 4.58 × 10^−16 (2.17 × 10^−20) |
| f7 | 100 | 8.49 × 10^−3 (6.29 × 10^−3) | 3.71 × 10^−5 (4.78 × 10^−5) | 2.97 × 10^−7 (8.92 × 10^−8) | 3.88 × 10^−9 (1.19 × 10^−9) | 8.73 × 10^−15 (2.19 × 10^−17) |
| f8 | 30 | 42.1 (15.4) | 3.75 × 10^−5 (9.06 × 10^−6) | 3.88 × 10^−5 (2.17 × 10^−5) | 3.19 × 10^−6 (7.42 × 10^−7) | 6.77 × 10^−15 (5.23 × 10^−16) |
| f8 | 100 | 49.4 (40.1) | 5.13 × 10^−4 (4.99 × 10^−4) | 5.22 × 10^−4 (4.89 × 10^−4) | 7.81 × 10^−5 (5.95 × 10^−5) | 8.98 × 10^−14 (5.22 × 10^−16) |
| f9 | 30 | 1.72 (0.137) | 2.31 (0.79) | 0.218 (0.133) | 1.25 (3.1 × 10^−2) | 1.62 × 10^−2 (0.79 × 10^−2) |
| f9 | 100 | 4.09 (0.393) | 4.38 (8.21) | 8.22 (7.17) | 4.81 (3.97) | 1.19 (0.482) |
| f10 | 30 | 7.65 (0.798) | 2.21 × 10^−9 (1.23 × 10^−10) | 2.19 × 10^−10 (2.18 × 10^−10) | 2.23 × 10^−12 (1.13 × 10^−12) | 1.84 × 10^−20 (1.07 × 10^−20) |
| f10 | 100 | 24.3 (3.25) | 3.34 × 10^−8 (5.88 × 10^−9) | 5.39 × 10^−9 (4.18 × 10^−9) | 4.78 × 10^−11 (3.89 × 10^−11) | 1.09 × 10^−10 (0.48 × 10^−19) |
| f11 | 30 | 1.93 (0.678) | 8.45 × 10^−16 (8.12 × 10^−16) | 2.31 × 10^−13 (5.19 × 10^−12) | 3.94 × 10^−14 (3.86 × 10^−14) | 8.13 × 10^−18 (8.01 × 10^−18) |
| f11 | 100 | 2.19 (49.1) | 3.99 × 10^−18 (4.13 × 10^−17) | 3.11 × 10^−12 (2.91 × 10^−11) | 6.83 × 10^−14 (5.39 × 10^−15) | 3.88 × 10^−17 (0.19 × 10^−17) |
| f12 | 30 | 0.313 (4.38) | 4.53 × 10^−9 (3.98 × 10^−9) | 3.87 × 10^−10 (3.09 × 10^−10) | 5.76 × 10^−21 (4.91 × 10^−21) | 8.15 × 10^−27 (7.16 × 10^−27) |
| f12 | 100 | 1.19 (0.159) | 2.17 × 10^−8 (1.88 × 10^−8) | 6.23 × 10^−15 (6.61 × 10^−15) | 7.19 × 10^−20 (5.72 × 10^−20) | 6.35 × 10^−26 (5.39 × 10^−26) |
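Each cell above reports the average of the best objective values found over repeated independent runs, with the standard deviation in parentheses. A minimal sketch of producing one such cell; the run count of 30 and the toy random-search stand-in are illustrative assumptions, not the algorithms compared in the table:

```python
import random
import statistics

def summarize_runs(optimizer, runs=30, seed=0):
    """Run an optimizer `runs` times and return (mean, sample std-dev)
    of the best objective values, as in one Ave (Std) cell."""
    rng = random.Random(seed)  # fixed seed so the summary is reproducible
    best_values = [optimizer(rng) for _ in range(runs)]
    return statistics.mean(best_values), statistics.stdev(best_values)

def random_search(rng, dim=2, iters=200):
    """Toy stand-in optimizer: random search on the sphere function."""
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(-100.0, 100.0) for _ in range(dim)]
        best = min(best, sum(xi * xi for xi in x))
    return best
```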
| Grade | Load factor | Transmission efficiency | Harmonic distortion rate (%) | Three-phase voltage unbalance (%) | Voltage offset (%) | Frequency error (%) | Rated efficiency (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | (0.70, 0.80] | 0.98 | 1.00 | 1.00 | 2.00 | 0.10 | 89.50 |
| 2 | (0.80, 0.90] | (0.96, 0.98] | (1.00, 2.00] | (1.00, 2.00] | (2.00, 3.00] | (0.10, 0.20] | (88.30, 89.50] |
| 3 | (0.60, 0.70] | (0.94, 0.96] | (2.00, 3.00] | (2.00, 3.00] | (3.00, 4.00] | (0.20, 0.30] | (87.10, 88.30] |
| 4 | (0.90, 1.00] | (0.92, 0.94] | (3.00, 4.00] | (3.00, 4.00] | (4.00, 5.00] | (0.30, 0.40] | (85.90, 87.10] |
| 5 | (0.50, 0.60] | (0.90, 0.92] | (4.00, 5.00] | (4.00, 5.00] | (5.00, 5.50] | (0.40, 0.50] | (84.70, 85.90] |
| 6 | (0.45, 0.50] | (0.89, 0.90] | (5.00, 5.50] | (5.00, 5.50] | (5.50, 6.00] | (0.50, 0.55] | (83.50, 84.70] |
| 7 | (0.40, 0.45] | (0.88, 0.89] | (5.50, 6.00] | (5.50, 6.00] | (6.00, 6.50] | (0.55, 0.60] | (82.30, 83.50] |
| 8 | (0.35, 0.40] | (0.87, 0.88] | (6.00, 6.50] | (6.00, 6.50] | (6.50, 7.00] | (0.60, 0.65] | (81.10, 82.30] |
| 9 | (0.30, 0.35] | (0.86, 0.87] | (6.50, 7.00] | (6.50, 7.00] | (7.00, 7.50] | (0.65, 0.70] | (80.00, 81.10] |
| 10 | 0.30 | 0.86 | 7.00 | 7.00 | 7.50 | 0.70 | 80.00 |
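In code, a grading table like this is an interval lookup. A sketch for the harmonic-distortion column, whose intervals are monotone in the grade; the boundary rows list single values, so it is an assumption here that grade 1 means "≤ 1.00" and grade 10 means "> 7.00" (the load-factor column is not monotone and would need a different lookup):

```python
import bisect

# Upper bounds of the harmonic-distortion intervals for grades 1..9,
# taken from the table above; anything beyond the last bound is grade 10.
_THD_UPPER = [1.00, 2.00, 3.00, 4.00, 5.00, 5.50, 6.00, 6.50, 7.00]

def thd_grade(thd_percent):
    """Grade (1..10) for a harmonic distortion rate, using (lo, hi] intervals."""
    # bisect_left finds the first upper bound >= thd_percent, which matches
    # half-open (lo, hi] intervals; clamp everything above 7.00 to grade 10.
    return min(bisect.bisect_left(_THD_UPPER, thd_percent) + 1, 10)
```

The same pattern, with its own bound list, would apply to the other monotone columns (voltage offset, frequency error, and so on).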
| Set of instances | KNN | SVM | IEHO-SVM |
| --- | --- | --- | --- |
| 80% train, 20% test | 90% | 87% | 98% |
| 70% train, 30% test | 91% | 86% | 99% |
| 60% train, 40% test | 90% | 88% | 100% |
| 50% train, 50% test | 92% | 89% | 100% |
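The accuracies above come from holdout evaluation at different split ratios. A self-contained sketch of that protocol, using a trivial 1-nearest-neighbour classifier as a stand-in (the IEHO-SVM pipeline itself is not reproduced here):

```python
import random

def train_test_split(X, y, test_frac, seed=0):
    """Shuffle the dataset and split it into train and test portions."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(X) * (1.0 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return [X[i] for i in tr], [y[i] for i in tr], [X[i] for i in te], [y[i] for i in te]

def nn1_predict(Xtr, ytr, x):
    """Label of the nearest training point by squared Euclidean distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in Xtr]
    return ytr[dists.index(min(dists))]

def accuracy(Xtr, ytr, Xte, yte):
    """Fraction of test points classified correctly."""
    hits = sum(nn1_predict(Xtr, ytr, x) == t for x, t in zip(Xte, yte))
    return hits / len(yte)
```

Running this for test fractions 0.2, 0.3, 0.4, and 0.5 reproduces the row structure of the table, with the stand-in classifier in place of each column's model.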