
Support vector machine optimization via an improved elephant herding algorithm for motor energy efficiency rating

  • Academic editor: Hamid Reza Karimi
  • Abstract: Accurate evaluation of motor energy efficiency under off-condition operation can provide an important basis for energy-saving upgrades of motors and the elimination of backward motors. By considering the power quality, motor characteristics and load characteristics, a motor energy efficiency evaluation system with seven indexes and 10 grades was constructed. An improved elephant herding optimization method combined with a support vector machine rating model is proposed; it achieved an accuracy higher than 98%. The standard elephant herding optimization (EHO) method suffers from a slow convergence speed and low convergence precision, and it easily falls into local optima. To improve population initialization, chaotic mapping and opposition-based learning were used to give EHO better population diversity and global search capability. Group learning and elite retention were added to improve the local exploitation ability of the algorithm. The improved EHO was compared with other intelligent optimization algorithms on 12 benchmark functions, and the results show that the improved algorithm has better optimization performance.

    Citation: Xinrui Ren, Jianbo Yu, Zhaomin Lv. Support vector machine optimization via an improved elephant herding algorithm for motor energy efficiency rating[J]. Mathematical Biosciences and Engineering, 2022, 19(12): 11957-11982. doi: 10.3934/mbe.2022557




    Statistics from the World Energy Agency show that the energy consumption of the industrial sector accounts for nearly one-third of the global primary energy supply and produces approximately 36% of energy-related CO2 emissions, and that motor systems account for more than 50% of total electricity consumption. Therefore, improving the energy efficiency of motor systems is of great significance for reducing energy consumption, improving cost efficiency, reducing pollution emissions and responding to national demands for energy conservation and emission reduction. Cui et al. proposed that correctly rating motor energy efficiency is the best way to improve the energy efficiency of motor systems [1].

    In recent years, artificial intelligence technology has advanced in many fields [2,3,4,5,6], whereas research on the energy efficiency classification of electric motors remains scarce. In 2012, the Chinese national standard (GB, guobiao) specified the minimum energy efficiency values for small and medium-sized motors at rated efficiency and provided initial motor energy efficiency standards [7]. However, in the actual production process, the motor always operates under variable working conditions due to power quality, load characteristics and other influential factors, so the motor energy efficiency standard reported in [7] cannot be effectively applied to actual situations. Luo et al. conducted a comprehensive analysis of the factors affecting motor energy efficiency and developed a motor energy efficiency evaluation index system [8]. Li performed an in-depth study of the main factors affecting motor efficiency and used the analytic hierarchy process (AHP) to obtain the weight of each index [9]. These studies provide ideas for research on motor energy efficiency rating.

    In this paper, three aspects, namely power quality, load characteristics and the characteristics of the motor itself, are used as the first-level evaluation indicators; seven second-level indicators with a large impact on motor energy efficiency were then selected and divided into 10 grade intervals. For electrical energy efficiency classes, Ma and Lv used a support vector machine (SVM) model for classification and achieved an accuracy of 94% with a small number of test samples [10]; the accuracy decreased as the amount of sample data increased.

    Currently, SVM theory has unique advantages in solving classification and target recognition problems [11], but the key SVM parameters, i.e., the penalty factor c and kernel parameter σ, are usually determined by empirical assignment or cross-validation, which is not only inefficient but also susceptible to the local optima problem. Metaheuristic algorithms are widely used for the optimization of various complex problems due to their derivative-free mechanism, flexibility, simplicity and ease of implementation. Classical examples of metaheuristics include particle swarm optimization (PSO), ant colony optimization, evolutionary algorithms and simulated annealing. Inspired by further natural biological behaviors, the field has recently experienced something like a gold rush, with a proliferation of new or improved algorithms of all kinds. More recent metaheuristic algorithms include monarch butterfly optimization [12], the whale optimization algorithm [13], satin bowerbird optimization [14], the moth search algorithm [15], hunger games search [16], Harris hawks optimization [17], the slime mould algorithm [18], the Runge-Kutta method [19], the colony predation algorithm [20] and the weighted mean of vectors algorithm [21].

    The optimization search process of these algorithms shares a common feature: the use of both exploration and exploitation. The exploration phase should use stochastic operators as much as possible to diversify the search across the feature space; the exploitation phase usually follows the exploration phase and should intensify the local search in the vicinity of the better solutions. A good optimization algorithm should strike a reasonable, fine balance between exploration and exploitation; otherwise, it is prone to falling into local optima and premature convergence. Most metaheuristic algorithms also have the disadvantages of being sensitive to the tuning of user-defined parameters and of not always converging to the global optimum.

    In 2015, Wang et al. proposed elephant herding optimization (EHO), inspired by the nomadic behavior of elephant herds [22]. Elephant herds have two special behaviors: 1) the herd is composed of multiple clans and led by a matriarch, and 2) adult males leave the herd to live alone. Therefore, two mechanisms were developed to mimic this nomadic behavior: a clan-updating mechanism and a separation mechanism. EHO has been shown to be effective at finding optimal solutions [23]. EHO differs from other metaheuristic algorithms in that it generates multiple clans, each of which is a group, thus constituting a multi-group collaboration mechanism. However, the original EHO only addresses the position update within one clan and does not consider the collaboration of the whole elephant herd. The performance of EHO may be significantly improved if this multi-group collaboration can be fully exploited.

    The accurate evaluation of motor energy efficiency under off-condition operation can provide an important basis for the energy-saving upgrade of motors and the elimination of backward motors. In the past, research on motor energy efficiency mainly focused on the rated operating condition, and there were few studies on motor energy efficiency under off-conditions. The main contributions of this paper are as follows:

    1) We present seven evaluation indexes that have a great influence on motor energy efficiency from the three aspects of power quality, load characteristics and motor characteristics, and we establish a quantitative classification standard to achieve accurate grading of motor energy efficiency.

    2) Moreover, we developed an improved EHO (IEHO) algorithm to optimize the SVM-based motor energy efficiency rating model.

    3) To improve the optimum-seeking ability of the EHO algorithm, we propose five improvements and compare the resulting algorithm with other algorithms.

    The organization of this paper is as follows: Section 2 introduces the principle and optimization search process of the standard EHO. Section 3 introduces the improved EHO: it first analyzes the shortcomings of the standard EHO and then proposes five improvement strategies. Section 4 presents a comparison of the IEHO with other optimization algorithms via benchmark tests and experimental analysis. Section 5 introduces the motor energy efficiency evaluation model based on the IEHO combined with the SVM (IEHO-SVM) and discusses the experiments and analyses. The conclusion and future work are discussed in Section 6.

    The EHO algorithm was inspired by the nomadic behavior of elephant herds [22]. Elephants live in groups consisting of multiple clans under the leadership of the best female elephant in the herd. Each clan selects an excellent female elephant to serve as matriarch, and the matriarch leads the female elephants and baby elephants that are directly or indirectly related to her by blood. Male elephants leave the group to live alone once they become adults. Although male elephants are far away from their family, they keep in touch with it through low-frequency vibrations.

    To allow elephant herding behavior to solve global optimization problems, the following idealized rules are established: 1) An elephant tribe consists of multiple clans led by an overall best female elephant, and each clan has a fixed number of elephants. 2) In each generation, each clan elects a superior matriarch; then a fixed number of male elephants leave, followed by an equal number of newborns.

    The position of each elephant in the clan is affected by the matriarch. For elephant j in clan ci, the position is updated as follows:

    $x_{\text{new},ci,j} = x_{ci,j} + \alpha \times (x_{\text{best},ci} - x_{ci,j}) \times r$ (2.1)

    where $\alpha \in [0, 1]$ represents the scaling factor, $x_{\text{new},ci,j}$ represents the new position of elephant j in clan ci, $x_{ci,j}$ represents the old position of elephant j in clan ci, $x_{\text{best},ci}$ represents the matriarch of clan ci and $r \in [0, 1]$ is a random number.

    The position of the matriarch has a specific update formula, and the updated position is as follows:

    $x_{\text{new\_best},ci} = \beta \times x_{\text{center},ci}$ (2.2)

    where $\beta \in [0, 1]$ represents a control parameter that determines the influence of $x_{\text{center},ci}$ on $x_{\text{best},ci}$. $x_{\text{center},ci}$ represents the center position of clan ci and can be determined by the following equation:

    $x_{\text{center},ci,d} = \frac{1}{n_{ci}} \sum_{j=1}^{n_{ci}} x_{ci,j,d}$ (2.3)

    where $d \in [1, D]$ represents the dth dimension, D is the total number of dimensions and $n_{ci}$ represents the number of elephants in clan ci.

    In the elephant herd, male elephants prefer to leave the group and live alone when they reach adulthood. To simulate this process, the adult male is regarded as the least fit elephant in the algorithm. At the same time, to give the algorithm greater search ability, the elephant with the worst fitness in each generation is assumed to undergo the separation operator shown in Eq (2.4):

    $x_{\text{worst},ci} = x_{\min} + (x_{\max} - x_{\min} + 1) \times r$ (2.4)

    where $x_{\max}$ and $x_{\min}$ respectively represent the maximum and minimum values of the search space and $x_{\text{worst},ci}$ represents the position of the worst elephant in clan ci. The EHO algorithm flowchart is shown in Figure 1.

    Figure 1.  EHO algorithm flowchart.
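    To make these update rules concrete, the following Python sketch (not the authors' MATLAB implementation) runs one generation of the standard EHO clan-updating and separation operators from Eqs (2.1)–(2.4) on a minimization problem; the parameter values α = 0.5 and β = 0.1 are common choices in the EHO literature, assumed here for illustration.

```python
import numpy as np

def eho_generation(clans, fitness, x_min, x_max, alpha=0.5, beta=0.1, rng=np.random):
    """One generation of standard EHO (minimization).
    clans: list of (n_ci, D) arrays, one array per clan.
    fitness: maps a (D,) position to a scalar to be minimized."""
    new_clans = []
    for clan in clans:
        f = np.array([fitness(x) for x in clan])
        best = clan[np.argmin(f)]                  # matriarch x_best,ci
        center = clan.mean(axis=0)                 # clan center (Eq 2.3)
        r = rng.random(clan.shape)
        new = clan + alpha * (best - clan) * r     # clan-updating operator (Eq 2.1)
        new[np.argmin(f)] = beta * center          # matriarch update (Eq 2.2)
        # separation operator: replace the worst elephant (Eq 2.4)
        new[np.argmax(f)] = x_min + (x_max - x_min + 1) * rng.random(clan.shape[1])
        new_clans.append(np.clip(new, x_min, x_max))
    return new_clans
```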

    The standard EHO algorithm updates the position of each elephant by using the clan update and separation operations. Its position update scheme is limited and has the following disadvantages:

    1) The initial position of the elephant is randomly generated, which affects the efficiency of elephant swarm optimization.

    2) It only utilizes the position variable and ignores the fact that the moving speed will affect the convergence speed of the elephant group.

    3) In the clan update process, the position update of the best female elephant in the entire herd is not considered. Ordinary elephants update their positions according to their own positions and that of the matriarch, while the matriarch's position is updated via the center position of the clan, making it easy to converge to a locally optimal position.

    4) In the iteration process, the optimal solution of the clan is continuously updated but not retained. Equation (2.4) is used in the separation process to replace the worst elephant of each generation, simulating the adult male leaving and a new baby elephant being born. Although this preserves population diversity, a newborn baby elephant would be protected by other elephants and should therefore be given a good position.

    Based on these shortcomings, we propose the following improvement strategies.

    The distribution and values of the initial positions of the population affect its optimization efficiency. An initial population that is evenly distributed in the solution space helps to improve the population diversity and global search ability.

    This paper introduces a population initialization strategy based on chaotic mapping [24] and opposition-based learning [25]. The strategy initializes the population by using better randomness, spatial ergodicity and non-repetition of chaotic mapping; it also uses the opposition-based learning method to generate its corresponding opposite individuals. Then, it compares and selects individuals with good fitness values as the initial individuals to improve the overall performance of the algorithm optimization efficiency.

    Chaotic maps are simple deterministic systems that generate random-like chaotic sequences. Table 1 shows several chaotic mapping functions in common use.

    Table 1.  Common chaotic mapping functions.
    Chaotic mapping function | Expression
    Logistic | $Z_{k+1} = \lambda Z_k (1 - Z_k)$, $0 < \lambda \le 4$
    ICMIC | $Z_{k+1} = \sin(\alpha / Z_k)$, $\alpha \in (0, \infty)$
    Chebyshev | $Z_{k+1} = \cos(k \cos^{-1} Z_k)$, $k = 4$, $Z_k \in (-1, 1)$
    Tent | $Z_{k+1} = \begin{cases} Z_k / 0.7, & Z_k \in (0, 0.7] \\ (10/3) Z_k (1 - Z_k), & Z_k \in (0.7, 1) \end{cases}$


    According to the analysis presented in [26], when λ = 4, the logistic chaotic sequence is unevenly distributed and its optimization speed and efficiency are low. When the initial value is 0 or a fixed point, the ICMIC mapping function cannot continue to iterate. Chebyshev mapping places higher requirements on the function. The chaotic sequence of the tent map is evenly distributed, with faster optimization and higher search efficiency; therefore, tent mapping was applied in this study to generate the initial chaotic-sequence population.

    The initialization steps are as follows:

    Step 1: Generate random numbers:

    $Z^{k}_{ci,j,d} \in (0, 1)$ (3.1)

    where $d = 1, 2, \ldots, D$ represents the dimension and k represents the current iteration number.

    Step 2: Calculate the tent function chaotic map:

    $Z^{k+1}_{ci,j,d} = \begin{cases} Z^{k}_{ci,j,d} / 0.7, & Z^{k}_{ci,j,d} \in (0, 0.7] \\ (10/3) \, Z^{k}_{ci,j,d} (1 - Z^{k}_{ci,j,d}), & Z^{k}_{ci,j,d} \in (0.7, 1) \end{cases}$ (3.2)

    Step 3: Generate N positions, which gives the original population $X = (x_{ci,j,1}, x_{ci,j,2}, \ldots, x_{ci,j,D})$, where N equals the number of clans multiplied by the number of elephants per clan:

    $x_{ci,j,d} = x_{\min,d} \times (1 - Z^{k+1}_{ci,j,d}) + x_{\max,d} \times Z^{k+1}_{ci,j,d}$ (3.3)

    where $x_{\max,d}$ and $x_{\min,d}$ represent the upper and lower limits of the elephants in the dth dimension.

    Step 4: Generate the opposing population $OX = (ox_{ci,j,1}, ox_{ci,j,2}, \ldots, ox_{ci,j,D})$ via opposition-based learning:

    $ox_{ci,j,d} = x_{\min,d} \times Z^{k+1}_{ci,j,d} + x_{\max,d} \times (1 - Z^{k+1}_{ci,j,d})$ (3.4)

    where $ox_{ci,j,d}$ represents the individual opposing $x_{ci,j,d}$; X and OX are opposing populations.

    Step 5: Merge the original population and the opposing population, calculate the individual fitness values, and take the N positions with the best fitness as the initial population.
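    A minimal Python sketch of Steps 1–5 is given below, assuming scalar lower and upper bounds per dimension; the helper name tent_opposition_init is ours, not from the paper.

```python
import numpy as np

def tent_opposition_init(n_pop, dim, x_min, x_max, fitness, rng=None):
    """Population initialization via the tent chaotic map plus opposition-based learning."""
    rng = rng or np.random.default_rng()
    z = rng.uniform(0.01, 0.99, (n_pop, dim))      # Step 1: random seeds (Eq 3.1)
    z = np.where(z <= 0.7, z / 0.7,                # Step 2: tent map (Eq 3.2)
                 (10.0 / 3.0) * z * (1.0 - z))
    x = x_min * (1.0 - z) + x_max * z              # Step 3: original population (Eq 3.3)
    ox = x_min * z + x_max * (1.0 - z)             # Step 4: opposing population (Eq 3.4)
    pool = np.vstack([x, ox])                      # Step 5: keep the n_pop fittest
    f = np.array([fitness(p) for p in pool])
    return pool[np.argsort(f)[:n_pop]]
```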

    Inspired by the PSO algorithm [27], each elephant is given a speed to simulate elephant motion; additionally, a global speed strategy is proposed, which accelerates the convergence of the elephant herd and balances the convergence speed by introducing inertia weights to avoid premature convergence.

    The speed is also initialized by chaotic mapping and opposition-based learning, with $v_{\max,d} = 0.2 (x_{\max,d} - x_{\min,d})$ and $v_{\min,d} = -v_{\max,d}$. The elephant's position is updated as follows:

    $v_{\text{new},ci,j} = \omega_k v_{ci,j} + \alpha \times (x_{\text{best},ci} - x_{ci,j}) \times r, \quad x_{\text{new},ci,j} = x_{ci,j} + v_{\text{new},ci,j}$ (3.5)

    where $x_{\text{new},ci,j}$ and $v_{\text{new},ci,j}$ represent the updated position and speed of elephant j in clan ci, respectively, and $\omega_k$ represents the linearly decreasing inertia weight in the iterative process:

    $\omega_k = (\omega_{\text{begin}} - \omega_{\text{end}}) \frac{K_{\max} - k}{K_{\max}} + \omega_{\text{end}}$ (3.6)

    where $\omega_{\text{begin}}$ and $\omega_{\text{end}}$ represent the weights at the beginning and end of the iteration respectively, taken as $\omega_{\text{begin}} = 0.9$ and $\omega_{\text{end}} = 0.4$; $K_{\max}$ represents the maximum number of iterations.
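    The inertia-weighted speed update of Eqs (3.5) and (3.6) can be sketched as follows; clipping the speed to $[-v_{\max}, v_{\max}]$ follows the initialization bound given above and is our assumption about how the bound is enforced during iterations.

```python
import numpy as np

def inertia_weight(k, k_max, w_begin=0.9, w_end=0.4):
    """Linearly decreasing inertia weight (Eq 3.6)."""
    return (w_begin - w_end) * (k_max - k) / k_max + w_end

def speed_position_update(x, v, x_best, alpha, k, k_max, v_max, rng=np.random):
    """Inertia-weighted speed and position update (Eq 3.5)."""
    v_new = inertia_weight(k, k_max) * v + alpha * (x_best - x) * rng.random(x.shape)
    v_new = np.clip(v_new, -v_max, v_max)   # our assumption: enforce the speed bound
    return x + v_new, v_new
```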

    In Eq (3.5), α represents the step size from the elephant's current position toward the optimal position. When α is large, the global optimization ability is strong but the local optimization ability is poor; when α is small, the opposite holds. In the standard EHO algorithm, α is fixed, so superior and inferior individuals in a clan have the same search range, which does not match the actual situation: a better individual is close to the optimal solution, so a smaller learning step should be used to enhance local optimization, while a less fit individual is far from the optimal solution, so a larger learning step should be used to enhance global optimization. Therefore, an adaptive learning step size strategy is proposed.

    1) The optimal solution $x_{\text{mbest}}$ of the tribe requires little learning; its learning step takes a fixed value, $a_1 = 0.2$.

    2) The optimal solution $x_{\text{cbest},ci}$ of each clan learns from the optimal solution $x_{\text{mbest}}$ of the tribe. The learning step $a_{\text{cbest},ci}$ is expressed as follows:

    $a_{\text{cbest},ci} = a_2 \left( \frac{k}{K_{\max}} \right) \times \frac{f(x_{\text{cbest},ci})}{f(x_{\text{mbest}})}$ (3.7)

    where $f(x_{\text{cbest},ci})$ and $f(x_{\text{mbest}})$ represent the fitness values of $x_{\text{cbest},ci}$ and $x_{\text{mbest}}$, respectively.

    3) The other elephants $x_{\text{other},ci,j}$ of the clan learn from the matriarch $x_{\text{cbest},ci}$ of the clan. The learning step $a_{\text{other},ci,j}$ is expressed as follows:

    $a_{\text{other},ci,j} = a_3 \left( \frac{k}{K_{\max}} \right) \times \frac{f(x_{\text{other},ci,j})}{f(x_{\text{cbest},ci})}$ (3.8)

    where $f(x_{\text{other},ci,j})$ and $f(x_{\text{cbest},ci})$ represent the fitness values of $x_{\text{other},ci,j}$ and $x_{\text{cbest},ci}$, respectively, and $a_1 < a_2 < a_3$ are constants.
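    A sketch of the adaptive step of Eqs (3.7) and (3.8) is given below, assuming positive fitness values on a minimization problem; the eps guard is our addition to avoid division by zero once the teacher's fitness reaches the optimum.

```python
def adaptive_step(a_base, k, k_max, f_learner, f_teacher, eps=1e-12):
    """Adaptive learning step (Eqs 3.7/3.8): the step grows with the iteration
    ratio k/k_max and with how much worse the learner's fitness is than its
    teacher's, so poorer individuals search more widely. a_base plays the role
    of a2 or a3, with a1 < a2 < a3."""
    return a_base * (k / k_max) * (f_learner / (f_teacher + eps))
```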

    In a large elephant tribe, each individual has its own level of fitness: there are the tribe's best female elephant, the clan matriarchs, the males and the other ordinary elephants. The elephants are therefore divided into four groups, and different groups learn in different ways. The first group is the best female elephant of the tribe, represented by $x_{\text{mbest}}$. The second group is the best matriarch of each clan, represented by $x_{\text{cbest},ci}$. The third group is the other female elephants and young elephants, represented by $x_{\text{other},ci,j}$. The fourth group is the worst individual of each clan, represented by $x_{\text{worst},ci}$.

    The learning mode of the first group, $x_{\text{mbest}}$, is as follows:

    As the best female elephant in the herd, she updates her position and speed according to the matriarch information of all of the clans. The update mode is as follows:

    $v_{\text{new\_mbest}} = \omega_k \times v_{\text{mbest}} + a_1 (x_{\text{center}} - x_{\text{mbest}}) r, \quad x_{\text{new\_mbest}} = x_{\text{mbest}} + v_{\text{new\_mbest}}, \quad x_{\text{center}} = \frac{1}{n_c} \sum_{i=1}^{n_c} x_{\text{cbest},ci}$ (3.9)

    where $x_{\text{new\_mbest}}$ and $v_{\text{new\_mbest}}$ respectively represent the updated position and speed of $x_{\text{mbest}}$, $x_{\text{center}}$ represents the central location of all of the matriarchs and $n_c$ represents the total number of clans.

    The learning mode of the second group, $x_{\text{cbest},ci}$, is as follows:

    The best matriarch of the clan should learn from the best female elephant of the tribe to achieve a larger search area. The update mode is as follows:

    $v_{\text{new\_cbest},ci} = \omega_k v_{\text{cbest},ci} + a_{\text{cbest},ci} (x_{\text{mbest}} - x_{\text{cbest},ci}) r, \quad x_{\text{new\_cbest},ci} = x_{\text{cbest},ci} + v_{\text{new\_cbest},ci}$ (3.10)

    where $x_{\text{new\_cbest},ci}$ and $v_{\text{new\_cbest},ci}$ respectively represent the updated position and speed of $x_{\text{cbest},ci}$.

    The learning mode of the third group, $x_{\text{other},ci,j}$, is as follows:

    The other female elephants and young elephants should learn from the best matriarch of the clan to achieve a faster learning speed. The update mode is as follows:

    $v_{\text{new\_other},ci,j} = \omega_k v_{\text{other},ci,j} + a_{\text{other},ci,j} (x_{\text{cbest},ci} - x_{\text{other},ci,j}) r, \quad x_{\text{new\_other},ci,j} = x_{\text{other},ci,j} + v_{\text{new\_other},ci,j}$ (3.11)

    where $x_{\text{new\_other},ci,j}$ and $v_{\text{new\_other},ci,j}$ respectively represent the updated position and speed of $x_{\text{other},ci,j}$.

    In order to optimize the learning of the elephants in the clan, a mutual learning strategy is proposed. The ordinary elephants in the clan learn from the matriarch first, and then from others who are better than themselves.

    Elephant p is randomly selected from clan ci, and its new fitness value is calculated. If the fitness value of elephant j is better than that of elephant p, elephant p needs to learn from elephant j; if the opposite is true, elephant j needs to learn from elephant p. If the fitness values of elephants j and p are equal, there is no need to learn from each other. The update mode is as follows:

    $\hat{v}_{\text{new\_other},ci,j} = \begin{cases} v_{\text{new\_other},ci,j} + r (x_{\text{new\_other},ci,j} - x_{\text{new\_other},ci,p}), & f(x_{\text{new\_other},ci,j}) < f(x_{\text{new\_other},ci,p}) \\ v_{\text{new\_other},ci,j} + r (x_{\text{new\_other},ci,p} - x_{\text{new\_other},ci,j}), & f(x_{\text{new\_other},ci,j}) > f(x_{\text{new\_other},ci,p}) \end{cases}, \quad \hat{x}_{\text{new\_other},ci,j} = x_{\text{new\_other},ci,j} + \hat{v}_{\text{new\_other},ci,j}$ (3.12)

    where $\hat{x}_{\text{new\_other},ci,j}$ and $\hat{v}_{\text{new\_other},ci,j}$ respectively represent the position and speed of elephant j after mutual learning.
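    A sketch of the mutual learning rule of Eq (3.12) for a minimization problem is given below; the sign convention (keep moving away from a worse peer, be pulled toward a better one) follows the reconstruction above.

```python
import numpy as np

def mutual_learning(x_j, v_j, x_p, fitness, rng=np.random):
    """Mutual learning between elephant j and a random peer p (Eq 3.12),
    assuming minimization; equal fitness means no learning."""
    f_j, f_p = fitness(x_j), fitness(x_p)
    if f_j == f_p:
        return x_j, v_j
    r = rng.random(x_j.shape)
    direction = (x_j - x_p) if f_j < f_p else (x_p - x_j)
    v_hat = v_j + r * direction
    return x_j + v_hat, v_hat
```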

    In the standard EHO algorithm, the separation operator is used to replace the worst solution in the clan, while the clan's optimal solution is continuously updated during the iteration but not preserved. Therefore, an elite retention strategy was developed. The worst elephants in the fourth group learn directly from the matriarch, and the worst solution is replaced with the clan's optimal solution. At the same time, the fitness values of each elephant before and after updating are compared, and the position with the better fitness value is retained. This method preserves the optimal solution of each generation's clan and the optimal solution of each individual elephant, which can effectively accelerate the optimization speed of the elephant herd.
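    The retention step reduces to a greedy comparison between each elephant's old and new positions, as in the following minimal sketch (minimization assumed):

```python
import numpy as np

def elite_retention(x_old, x_new, fitness):
    """Greedy elite retention: keep whichever of each elephant's old and new
    positions has the better (smaller) fitness, so generation bests are never lost."""
    f_old = np.array([fitness(x) for x in x_old])
    f_new = np.array([fitness(x) for x in x_new])
    keep_new = f_new < f_old
    return np.where(keep_new[:, None], x_new, x_old)
```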

    The IEHO algorithm flowchart is shown in Figure 2.

    Figure 2.  IEHO algorithm flowchart.

    The learning mode of the fourth group, $x_{\text{worst},ci}$, is as follows: the worst male elephant learns directly from the matriarch. The update mode is as follows:

    $x_{\text{new\_worst},ci} = x_{\text{cbest},ci}$ (3.13)

    Standard EHO mainly includes initialization, fitness evaluation, sorting and optimal selection, a clan-updating operator and a separation operator. In the associated formulas, M indicates the number of clans, N indicates the number of elephants in each clan, D is the number of dimensions of the problem and T represents the maximum number of iterations. The complexity of the initialization stage is O(1). In the main part of the algorithm, the complexity of the sorting algorithm is O(NM log(NM)), the clan-updating operator is O(MN) and the separation operator is O(N). The overall complexity of the algorithm is O(1 + T(NM log(NM) + MN + M)).

    The proposed IEHO algorithm mainly includes the following parts: initialization, fitness evaluation, sorting and optimal selection, position and speed updating and a fitness comparison. During the initialization stage, the complexity of initializing the individuals in each dimension based on opposition-based learning is O(MND). In the main part of the algorithm, the complexity of the sorting algorithm is O(NM log(NM)), the complexity of updating the speed and position of the individuals is O(MN) and the complexity of the fitness comparison is O(MN). From the above analysis, the complexity of the whole algorithm is O(MN(D + T(log(MN) + 2))).

    To verify the effectiveness of the IEHO algorithm, 12 typical benchmark functions proposed in [28] were selected for testing, as shown in Table 2. f1–f6 are unimodal functions with only one global optimum, used to test the convergence accuracy of the algorithm. f7–f12 are multimodal functions with multiple local optima, used to test the algorithm's global search capability, local exploitation and ability to jump out of local optima.

    Table 2.  Benchmark test functions.
    Function | Expression | Search range | Extremum
    Schwefel 2.22 | $f_1(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | [−10.00, 10.00] | 0
    Schwefel 2.21 | $f_2(x) = \max_i \{ |x_i|, 1 \le i \le n \}$ | [−100.00, 100.00] | 0
    Sum of different powers | $f_3(x) = \sum_{i=1}^{n} |x_i|^{i+1}$ | [−1.00, 1.00] | 0
    Zakharov | $f_4(x) = \sum_{i=1}^{n} x_i^2 + \left( \sum_{i=1}^{n} 0.5 i x_i \right)^2 + \left( \sum_{i=1}^{n} 0.5 i x_i \right)^4$ | [−5.00, 10.00] | 0
    Sphere | $f_5(x) = \sum_{i=1}^{n} x_i^2$ | [−100.00, 100.00] | 0
    Qing | $f_6(x) = \sum_{i=1}^{n} (x_i^2 - i)^2$ | [−500.00, 500.00] | 0
    Griewank | $f_7(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | [−300.00, 300.00] | 0
    Rastrigin | $f_8(x) = \sum_{i=1}^{n} [x_i^2 - 10 \cos(2\pi x_i) + 10]$ | [−5.12, 5.12] | 0
    Levy | $f_9(x) = \sin^2(\pi w_1) + \sum_{i=1}^{n-1} (w_i - 1)^2 [1 + 10 \sin^2(\pi w_i + 1)] + (w_n - 1)^2 [1 + \sin^2(2\pi w_n)]$, $w_i = 1 + \frac{x_i - 1}{4}$ | [−10.00, 10.00] | 0
    Schaffer F7 | $f_{10}(x) = \left[ \frac{1}{n-1} \sum_{i=1}^{n-1} \left( \sqrt{s_i} \left( \sin(50 s_i^{0.2}) + 1 \right) \right) \right]^2$, $s_i = \sqrt{x_i^2 + x_{i+1}^2}$ | [−10.00, 10.00] | 0
    Ackley | $f_{11}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$ | [−32.00, 32.00] | 0
    Salomon | $f_{12}(x) = 1 - \cos\left( 2\pi \sqrt{\sum_{i=1}^{n} x_i^2} \right) + 0.1 \sqrt{\sum_{i=1}^{n} x_i^2}$ | [−100.00, 100.00] | 0


    Since five improvement strategies are proposed, a strategy-combination comparison experiment was first conducted to verify the effectiveness of the final five-strategy combination, i.e., to check that no partial combination of strategies on top of the EHO algorithm is stronger than the IEHO algorithm. Schwefel 2.22 and Qing were selected as the benchmark test functions, and the experimental results are shown in Figure 3. EHO + S1 denotes EHO combined with Strategy 1; EHO + S12, EHO combined with Strategies 1 and 2; EHO + S123, EHO combined with Strategies 1, 2 and 3; EHO + S1234, EHO combined with Strategies 1, 2, 3 and 4; and EHO + S12345, EHO combined with all five strategies. Furthermore, to assess the convergence speed, convergence accuracy and stability of IEHO, the IEHO algorithm was compared with the multi-mechanism hybrid EHO (MCEHO) algorithm proposed in [23], the original EHO algorithm [22], the PSO algorithm [29] and the artificial bee colony (ABC) algorithm [30,31].

    Figure 3.  Strategy combination comparison test results.

    To ensure the fairness and accuracy of the algorithm comparison, the parameters of the five algorithms were set to be the same: the population size was 50 and the maximum number of iterations was 500. For EHO, MCEHO and IEHO, we set $n_{\text{clan}} = 5$ and $n_{ci} = 10$. Each algorithm was run independently 30 times on each test function in 30 and 100 dimensions, and the mean and standard deviation of the optimal values were calculated to evaluate the convergence accuracy and stability of each algorithm. The simulation environment was a Windows 10 operating system with an i5-6500 CPU (3.2 GHz), 8 GB of memory and MATLAB 2016a.

    It can be seen in Figure 3 that EHO performance gradually improved as strategies were added: the speed and adaptive learning step size effectively improve the convergence speed, group learning and elite retention effectively improve the convergence accuracy and the combination of all five strategies (IEHO) performed best. IEHO still follows the nomadic behavior of the elephant herd; it improves only the search method of the standard EHO during the iterative process so that the herd becomes more intelligent, and it is essentially still EHO. These five strategies may also be useful for improving the performance of other optimization algorithms.

    Table 3 shows the means and standard deviations of the fitness values of the 12 test functions calculated by the five algorithms under fixed iteration counts for the low-dimensional (30) and high-dimensional (100) cases, respectively. The performance of the five algorithms on each test function is shown in Figures 4–6, together with the three-dimensional mathematical model of each of the 12 test functions.

    Table 3.  Comparison results for the five algorithms for the test functions process.
    f | D | PSO Ave (Std) | ABC Ave (Std) | EHO Ave (Std) | MCEHO Ave (Std) | IEHO Ave (Std)
    f1 | 30 | 0.981 (1.13) | 9.04 × 10^−19 (3.98 × 10^−19) | 6.38 × 10^−14 (5.18 × 10^−14) | 1.48 × 10^−17 (1.42 × 10^−17) | 1.61 × 10^−26 (2.23 × 10^−26)
    f1 | 100 | 2.43 (3.45) | 7.85 × 10^−18 (6.56 × 10^−18) | 3.89 × 10^−13 (2.18 × 10^−13) | 2.76 × 10^−16 (1.43 × 10^−16) | 4.91 × 10^−25 (3.29 × 10^−25)
    f2 | 30 | 6.59 (2.37) | 8.54 × 10^−19 (4.74 × 10^−19) | 5.29 × 10^−18 (4.21 × 10^−19) | 2.31 × 10^−23 (1.78 × 10^−23) | 1.76 × 10^−26 (2.48 × 10^−26)
    f2 | 100 | 30.1 (28.9) | 7.31 × 10^−18 (5.98 × 10^−18) | 3.88 × 10^−17 (3.29 × 10^−17) | 3.78 × 10^−22 (2.91 × 10^−22) | 4.79 × 10^−25 (2.77 × 10^−25)
    f3 | 30 | 9.13 × 10^−12 (3.02 × 10^−11) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00)
    f3 | 100 | 8.67 × 10^−10 (7.54 × 10^−10) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00)
    f4 | 30 | 57.8 (24.7) | 2.09 × 10^−27 (1.15 × 10^−27) | 4.11 × 10^−30 (3.78 × 10^−30) | 8.85 × 10^−36 (2.86 × 10^−36) | 3.21 × 10^−47 (1.76 × 10^−47)
    f4 | 100 | 3.78 (1.88) | 4.93 × 10^−26 (2.78 × 10^−26) | 2.78 × 10^−29 (1.66 × 10^−29) | 4.66 × 10^−35 (4.01 × 10^−35) | 3.76 × 10^−47 (2.45 × 10^−47)
    f5 | 30 | 7.75 (1.95 × 10^−10) | 2.41 × 10^−14 (2.51 × 10^−14) | 3.39 × 10^−16 (2.99 × 10^−16) | 5.54 × 10^−18 (1.68 × 10^−18) | 2.18 × 10^−28 (4.67 × 10^−27)
    f5 | 100 | 9.36 (6.5) | 6.89 × 10^−13 (6.01 × 10^−13) | 4.18 × 10^−15 (3.32 × 10^−15) | 8.96 × 10^−17 (9.03 × 10^−17) | 6.69 × 10^−27 (7.43 × 10^−27)
    f6 | 30 | 2.12 × 10^7 (1.42 × 10^7) | 4.46 × 10^8 (3.74 × 10^8) | 3.89 × 10^7 (4.18 × 10^7) | 4.01 × 10^5 (9.73 × 10^4) | 6.58 (2.14 × 10^2)
    f6 | 100 | 3.78 × 10^8 (2.38 × 10^8) | 5.67 × 10^9 (5.13) | 5.88 × 10^8 (4.99 × 10^8) | 3.09 × 10^6 (2.98 × 10^6) | 8.37 × 10^2 (4.76 × 10^2)
    f7 | 30 | 2.43 × 10^−4 (1.03 × 10^−4) | 8.63 × 10^−6 (2.54 × 10^−6) | 5.17 × 10^−8 (4.11 × 10^−8) | 1.96 × 10^−10 (1.03 × 10^−10) | 4.58 × 10^−16 (2.17 × 10^−20)
    f7 | 100 | 8.49 × 10^−3 (6.29 × 10^−3) | 3.71 × 10^−5 (4.78 × 10^−5) | 2.97 × 10^−7 (8.92 × 10^−8) | 3.88 × 10^−9 (1.19 × 10^−9) | 8.73 × 10^−15 (2.19 × 10^−17)
    f8 | 30 | 42.1 (15.4) | 3.75 × 10^−5 (9.06 × 10^−6) | 3.88 × 10^−5 (2.17 × 10^−5) | 3.19 × 10^−6 (7.42 × 10^−7) | 6.77 × 10^−15 (5.23 × 10^−16)
    f8 | 100 | 49.4 (40.1) | 5.13 × 10^−4 (4.99 × 10^−4) | 5.22 × 10^−4 (4.89 × 10^−4) | 7.81 × 10^−5 (5.95 × 10^−5) | 8.98 × 10^−14 (5.22 × 10^−16)
    f9 | 30 | 1.72 (0.137) | 2.31 (0.79) | 0.218 (0.133) | 1.25 (3.1 × 10^−2) | 1.62 × 10^−2 (0.79 × 10^−2)
    f9 | 100 | 4.09 (0.393) | 4.38 (8.21) | 8.22 (7.17) | 4.81 (3.97) | 1.19 (0.482)
    f10 | 30 | 7.65 (0.798) | 2.21 × 10^−9 (1.23 × 10^−10) | 2.19 × 10^−10 (2.18 × 10^−10) | 2.23 × 10^−12 (1.13 × 10^−12) | 1.84 × 10^−20 (1.07 × 10^−20)
    f10 | 100 | 24.3 (3.25) | 3.34 × 10^−8 (5.88 × 10^−9) | 5.39 × 10^−9 (4.18 × 10^−9) | 4.78 × 10^−11 (3.89 × 10^−11) | 1.09 × 10^−19 (0.48 × 10^−19)
    f11 | 30 | 1.93 (0.678) | 8.45 × 10^−16 (8.12 × 10^−16) | 2.31 × 10^−13 (5.19 × 10^−12) | 3.94 × 10^−14 (3.86 × 10^−14) | 8.13 × 10^−18 (8.01 × 10^−18)
    f11 | 100 | 2.19 (49.1) | 3.99 × 10^−18 (4.13 × 10^−17) | 3.11 × 10^−12 (2.91 × 10^−11) | 6.83 × 10^−14 (5.39 × 10^−15) | 3.88 × 10^−17 (0.19 × 10^−17)
    f12 | 30 | 0.313 (4.38) | 4.53 × 10^−9 (3.98 × 10^−9) | 3.87 × 10^−10 (3.09 × 10^−10) | 5.76 × 10^−21 (4.91 × 10^−21) | 8.15 × 10^−27 (7.16 × 10^−27)
    f12 | 100 | 1.19 (0.159) | 2.17 × 10^−8 (1.88 × 10^−8) | 6.23 × 10^−15 (6.61 × 10^−15) | 7.19 × 10^−20 (5.72 × 10^−20) | 6.35 × 10^−26 (5.39 × 10^−26)


    The left plots in Figures 4–6 show the three-dimensional search-space distributions of the 12 test functions; X1 and X2 are the function search intervals, and the vertical axis is the function value. The plots on the right are the convergence curves of the intelligent algorithms on the test functions; the horizontal axis is the number of iterations, and the vertical axis is the logarithm of the optimal value.

    Figure 4.  Optimization process for the five algorithms when applied to f1–f4 with D = 30.
    Figure 5.  Optimization process for the five algorithms when applied to f5–f8 with D = 30.
    Figure 6.  Optimization process for the five algorithms when applied to f9–f12 with D = 30.

    As can be seen in Figures 4–6, after 500 iterations, all five algorithms were able to converge to the extreme values of the test functions with different convergence speeds and accuracies; however, PSO showed the worst performance. For the unimodal functions f1–f6, the convergence rates of EHO and IEHO at the beginning were faster than that of MCEHO. This shows that the population initialization strategy based on chaotic mapping and opposition-based learning can give the initial population better positions and fitness values and improve the global search ability. The convergence speed of the IEHO algorithm improved after 100 iterations, while that of the MCEHO algorithm improved after 150 iterations. The convergence speed of EHO was similar to, but slightly higher than, that of ABC. The IEHO algorithm was able to converge to the minimum value after about 100 iterations, and its convergence accuracy was higher than those of the other algorithms, indicating that the addition of the speed strategy can effectively accelerate population optimization. For the unimodal function f3, the IEHO algorithm converged to 0 with a standard deviation of 0.

    For the multimodal functions f7–f12, the convergence speed and precision of EHO were better than those of PSO, indicating that EHO has obvious advantages over PSO in solving multi-extremum problems. The EHO and ABC algorithms had similar convergence accuracies, except for the test functions f6 and f10; for the other functions, EHO was superior to ABC, with a faster convergence speed and higher convergence accuracy. However, EHO is susceptible to the local optima problem; MCEHO can jump out of some local optima, but it is still hard for it to find the global optimum. The IEHO algorithm's optimization speed and precision were superior to those of the MCEHO algorithm, indicating that adaptive-step group learning and the elite retention strategy can effectively improve the algorithm's ability to exploit the global optimum and jump out of local optima.

    As can be seen in Table 3, for both the unimodal and the multimodal test functions, the IEHO algorithm had the lowest means and standard deviations among all of the algorithms; its mean value was closest to the theoretical value, and it was able to find the minimum point. As the number of dimensions of the test function increases, the model complexity grows exponentially, increasing the difficulty of optimization. To further validate IEHO's performance on complex high-dimensional problems, we added a 100-dimensional test function experiment with the same parameter settings as in the lower-dimensional case. According to the data in Table 3, the IEHO algorithm performed poorly for f6, with low convergence accuracy; for the other test functions, it maintained good optimization stability and was superior to the other four algorithms, which fully verifies the advantages of IEHO as a tool for solving high-dimensional problems. From the above analysis, it can be seen that the IEHO algorithm presented in this paper achieves a better convergence effect, in terms of both mean and standard deviation, on low- and high-dimensional test functions, as well as a stronger ability to find the optima, compared with the PSO, ABC, EHO and MCEHO algorithms.

    In a motor system, energy losses are generated as current is transmitted from the power supply through the motor to the load, and the energy efficiency of the motor is affected by many factors [32]. In this study, the power quality, motor characteristics and load characteristics of the motor system were taken as the first-level indexes. From these three aspects, seven secondary indexes with a large impact on motor energy efficiency were selected to obtain the motor energy efficiency rating system shown in Figure 7.

    Figure 7.  Motor energy efficiency evaluation system.

    The motor energy efficiency was divided into 10 grades according to the national standards, the upper and lower limits of each indicator's deviation and its impact on the motor energy efficiency. The quantified motor energy efficiency grading standards, based on the parameter values of each indicator at different levels, are shown in Table 4.

    Table 4.  Motor energy efficiency classification standard.
    Grade | Load factor | Transmission efficiency | Harmonic distortion rate (%) | Three-phase voltage unbalance degree (%) | Voltage offset (%) | Frequency error (%) | Rated efficiency (%)
    1 | (0.70, 0.80] | > 0.98 | ≤ 1.00 | ≤ 1.00 | ≤ 2.00 | ≤ 0.10 | > 89.50
    2 | (0.80, 0.90] | (0.96, 0.98] | (1.00, 2.00] | (1.00, 2.00] | (2.00, 3.00] | (0.10, 0.20] | (88.30, 89.50]
    3 | (0.60, 0.70] | (0.94, 0.96] | (2.00, 3.00] | (2.00, 3.00] | (3.00, 4.00] | (0.20, 0.30] | (87.10, 88.30]
    4 | (0.90, 1.00] | (0.92, 0.94] | (3.00, 4.00] | (3.00, 4.00] | (4.00, 5.00] | (0.30, 0.40] | (85.90, 87.10]
    5 | (0.50, 0.60] | (0.90, 0.92] | (4.00, 5.00] | (4.00, 5.00] | (5.00, 5.50] | (0.40, 0.50] | (84.70, 85.90]
    6 | (0.45, 0.50] | (0.89, 0.90] | (5.00, 5.50] | (5.00, 5.50] | (5.50, 6.00] | (0.50, 0.55] | (83.50, 84.70]
    7 | (0.40, 0.45] | (0.88, 0.89] | (5.50, 6.00] | (5.50, 6.00] | (6.00, 6.50] | (0.55, 0.60] | (82.30, 83.50]
    8 | (0.35, 0.40] | (0.87, 0.88] | (6.00, 6.50] | (6.00, 6.50] | (6.50, 7.00] | (0.60, 0.65] | (81.10, 82.30]
    9 | (0.30, 0.35] | (0.86, 0.87] | (6.50, 7.00] | (6.50, 7.00] | (7.00, 7.50] | (0.65, 0.70] | (80.00, 81.10]
    10 | ≤ 0.30 | ≤ 0.86 | > 7.00 | > 7.00 | > 7.50 | > 0.70 | ≤ 80.00


    The training data were randomly generated: 1000 samples were drawn at random from the indicator intervals and labeled as $\{(x_{i1}, x_{i2}, \ldots, x_{i7}), y_i\}$, where $x_{ij}$ is a random number, $j = 1, 2, \ldots, 7$ indexes the seven characteristic indicators of a sample, $i = 1, 2, \ldots, 1000$ indexes the samples and $y_i \in \{1, 2, \ldots, 10\}$ is the corresponding sample label. First, the data were divided into training and test sets and preprocessed; then, the IEHO-SVM model was trained with the training data; finally, the model was verified with the test data.
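    For illustration, the sketch below draws uniform samples inside per-grade indicator intervals in the spirit of Table 4; only two hypothetical index columns and three grades are shown, whereas the paper's data set uses all seven indexes, ten grades and 1000 samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-grade (low, high] intervals for two of the seven indexes;
# in practice the full Table 4 intervals for all ten grades would be used.
intervals = {
    1: {"load_factor": (0.70, 0.80), "voltage_offset": (0.00, 2.00)},
    2: {"load_factor": (0.80, 0.90), "voltage_offset": (2.00, 3.00)},
    3: {"load_factor": (0.60, 0.70), "voltage_offset": (3.00, 4.00)},
}

def make_samples(n_per_grade=100):
    """Draw uniform random samples inside each grade's indicator intervals."""
    X, y = [], []
    for grade, idx in intervals.items():
        lows = np.array([lo for lo, hi in idx.values()])
        highs = np.array([hi for lo, hi in idx.values()])
        X.append(rng.uniform(lows, highs, (n_per_grade, len(idx))))
        y.append(np.full(n_per_grade, grade))
    return np.vstack(X), np.concatenate(y)

X, y = make_samples()   # X: indicator features, y: energy efficiency grade labels
```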

    The SVM is a binary classification model [33], defined as a linear classifier with the maximum margin in the feature space. Different from the perceptron, the SVM learning strategy is to find the separating hyperplane with the maximum margin, which in essence requires solving a convex quadratic programming problem. For one-class classification models, support vector data description is usually used instead [34,35].

    Real data are rarely completely linearly separable. By introducing a "soft margin" that allows some misclassified points, approximately linearly separable data can be handled. Some linearly inseparable problems become separable after a nonlinear kernel function maps the original feature space to a higher-dimensional Hilbert space [36]; a nonlinear SVM can thus be obtained from a linear SVM, transforming the nonlinearly separable problem into a linearly separable one.

    Taking a nonlinear SVM as an example, the learning algorithm is as follows:

    Input: training set $T = \{ (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n) \}$, where $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$. Output: the separating hyperplane and the classification decision function.

    Step 1: Select a proper Gaussian kernel width σ and penalty factor c, then construct and solve the following convex quadratic programming problem:

    $\max_{a} \sum_{i=1}^{n} a_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j y_i y_j \kappa(x_i, x_j) \quad \text{s.t.} \quad \sum_{i=1}^{n} a_i y_i = 0, \quad 0 \le a_i \le c$ (5.1)

    Obtain the optimal solution $a^* = (a_1^*, a_2^*, \ldots, a_n^*)^T$.

    Step 2: Select a component $a_j^*$ of $a^*$ satisfying $0 < a_j^* < c$ and calculate $w^*$ and $b^*$:

    $w^* = \sum_{i=1}^{n} a_i^* y_i \phi(x_i), \quad b^* = y_j - \sum_{i=1}^{n} a_i^* y_i \kappa(x_i, x_j), \quad \kappa(x_i, x_j) = \exp\left( -\frac{\| x_i - x_j \|^2}{2 \sigma^2} \right)$ (5.2)

    Step 3: Compute the separation hyperplane:

    $w^{*T} \phi(x) + b^* = 0$ (5.3)

    Step 4: Compute the classification decision function:

    $f(x) = \operatorname{sign}\left( w^{*T} \phi(x) + b^* \right)$ (5.4)

    According to the above analysis, the main parameters affecting the SVM's learning ability are the penalty factor c and the Gaussian kernel width σ. The selection of these parameters has an important influence on the running result of the SVM model, as it determines the model's learning and generalization ability. The penalty factor c represents the tolerance of error: a larger c means a higher likelihood of overfitting, and a smaller c means a higher likelihood of underfitting. The Gaussian kernel width σ, which is inversely related to the number of support vectors, affects the model's training and prediction speed. Traditional parameter selection methods include manual selection and grid search, which are more suitable for small data sets; as the amount of data increases, the computation and computation time also increase, and it becomes easier to fall into local optima.
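    As a concrete illustration of how c and σ enter a standard SVM implementation, the following sketch uses scikit-learn (our choice, not the paper's toolchain); note that scikit-learn parameterizes the RBF kernel by gamma, which corresponds to the Gaussian kernel width of Eq (5.2) via gamma = 1/(2σ²).

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def rbf_svm(c, sigma):
    """RBF-kernel SVM; scikit-learn's gamma relates to the Gaussian kernel
    width sigma of Eq (5.2) via gamma = 1 / (2 * sigma**2)."""
    return make_pipeline(StandardScaler(),
                         SVC(C=c, kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2)))

# A large c (e.g., rbf_svm(1e3, 0.5)) tolerates few training errors and risks
# overfitting; a small c (e.g., rbf_svm(0.01, 0.5)) risks underfitting.
```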

    In this study, the IEHO algorithm was used to optimize the SVM parameters c and σ, and the resulting algorithm was applied to the motor energy efficiency rating; the specific algorithm process is shown in Figure 8. The position vector of each elephant corresponds to a specific combination of parameters, and the evaluation function of its location fitness is defined as follows:

    $F = \frac{1}{N} \sum_{i=1}^{N} \left( Y_i - \hat{Y}_i \right)^2$ (5.5)

    where N represents the number of samples and $Y_i$ and $\hat{Y}_i$ represent the actual value and the value predicted by the SVM, respectively.
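    A minimal sketch of how an elephant's position vector (c, σ) could be scored in the spirit of Eq (5.5) is given below; the 5-fold cross-validation is our assumption, since the paper does not specify how F was estimated during optimization.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def svm_fitness(position, X, y):
    """Fitness of one elephant (Eq 5.5 style): mean squared error between
    predicted and actual grades, estimated here by 5-fold cross-validation
    (our assumption). position[0] is the penalty factor c and position[1]
    is the Gaussian kernel width sigma."""
    c, sigma = position
    clf = SVC(C=c, kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))
    scores = cross_val_score(clf, X, y, cv=5, scoring="neg_mean_squared_error")
    return -scores.mean()   # negate: smaller fitness is better
```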

    In order to evaluate the performance of the algorithm and its robustness with respect to the training data, the following two strategies were adopted.

    1) A stratified sampling method was used to divide the data set into four different proportions: (80%, 20%), (70%, 30%), (60%, 40%) and (50%, 50%).

    2) Using the same data, the IEHO-SVM algorithm proposed in this paper was compared with the original SVM and k-nearest neighbor (KNN) [37] algorithms.

    The classification accuracies of the three algorithms on the different training and test sets are shown in Table 5. The classification results of the three algorithms on the 50% training and test sets are shown in Figure 9.

    Table 5.  Accuracies of three algorithms.
    Set of instances KNN SVM IEHO-SVM
    80% Train, 20% Test 90% 87% 98%
    70% Train, 30% Test 91% 86% 99%
    60% Train, 40% Test 90% 88% 100%
    50% Train, 50% Test 92% 89% 100%

    Figure 8.  Flowchart of the IEHO-SVM model.

    As we can see in Table 5 and Figure 9, the accuracy of the IEHO-SVM algorithm on the different training and test sets was higher than 98%, whereas that of the KNN algorithm was around 91% and that of the original SVM algorithm was around 87%. This experiment demonstrates that the robustness of the IEHO-SVM algorithm is not limited to a particular training-test split; good results were achieved in all of these cases. Thus, the IEHO-SVM algorithm can grade motor energy efficiency with high accuracy and reliability.

    Figure 9.  Grading results for three algorithms on the 50% training set and test set.

    In the past, research on motor energy efficiency mainly focused on the rated operating condition; there were few studies on motor energy efficiency under off-conditions. Correct grading of motor energy efficiency can provide scientific guidance for motor energy-saving upgrades, fault detection and the elimination of backward motors. To grade motor energy efficiency accurately, we selected seven evaluation indexes that have a great influence on motor energy efficiency from the three aspects of power quality, load characteristics and motor characteristics, and we established a quantitative classification standard.

    Aiming to eliminate the disadvantages of the original EHO algorithm, such as its slow convergence speed, low convergence accuracy and susceptibility to the local optima problem, we developed five improvement measures. The resulting algorithm, IEHO, was designed and compared with other algorithms. The results show that the IEHO algorithm has higher convergence precision and a faster convergence speed than the other algorithms, can jump out of local optima and exhibits better optimization performance.

    The IEHO algorithm was used to optimize the SVM parameters, and the resulting IEHO-SVM model was established and applied to the motor energy efficiency rating system. The experimental results show that the IEHO-SVM algorithm can accurately grade the motor energy efficiency with an accuracy higher than 98%, which is better than the KNN and original SVM algorithms.

    The IEHO algorithm and IEHO-SVM model proposed in this paper have strong portability; they can be used not only for motor energy efficiency rating but also for other optimization problems. The motor energy efficiency grading model established in this paper is simple and easy to implement, has high practical value and is of great significance for realizing energy conservation and emission reduction and for improving energy utilization.

    Possible opportunities and future work are as follows. Initialization strategies based on orthogonal learning can be attempted next, and position updates based on quantum computing, parallel computing and co-evolutionary approaches can be explored. The IEHO algorithm presented in this paper has been demonstrated to effectively solve static optimization problems; the development of dynamic optimization and multi-objective variants that can handle more kinds of optimization problems can be attempted next.

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.



    [1] X. Cui, Y. Luo, Y. Yang, Y. Guo, H. Wang, X. Liu, Energy saving mechanism and energy saving approach of asynchronous motor under periodic variable working conditions, Proc. Chin. J. Electr. Eng., 28 (2008), 90–97.
    [2] D. Yang, H. R. Karimi, L. Gelman, A fuzzy fusion rotating machinery fault diagnosis framework based on the enhancement deep convolutional neural networks, Sensors, 22 (2022), 671. https://doi.org/10.3390/s22020671 doi: 10.3390/s22020671
    [3] Z. Lv, Online monitoring of batch processes combining subspace design of latent variables with support vector data description, Complex Eng. Syst., 1 (2021). https://doi.org/10.20517/ces.2021.02 doi: 10.20517/ces.2021.02
    [4] J. Yu, X. Yan, Data-feature-driven nonlinear process monitoring based on joint deep learning models with dual-scale, Inf. Sci., 591 (2022), 381–399. https://doi.org/10.1016/j.ins.2021.12.106 doi: 10.1016/j.ins.2021.12.106
    [5] J. Yu, X. Yan, Active features extracted by deep belief network for process monitoring, ISA Trans., 84 (2018), 247–261. https://doi.org/10.1016/j.isatra.2018.10.011 doi: 10.1016/j.isatra.2018.10.011
    [6] J. Yu, X. Yan, Multiscale intelligent fault detection system based on agglomerative hierarchical clustering using stacked denoising autoencoder with temporal information, Appl. Soft Comput., 95 (2020), 106525. https://doi.org/10.1016/j.asoc.2020.106525 doi: 10.1016/j.asoc.2020.106525
    [7] Y. Zhao, X. Li, S. Yang, Minimum allowable values of energy efficiency and energy efficiency grades for small and medium three-phase asynchronous motors, GB 18613, 2012.
    [8] C. Luo, W. B. Ma, J. Zhao, Evaluation and analysis of the influence factors on the energy consumption of the motor system, Motor Control Appl., 43 (2016), 98–103.
    [9] C. Li, Research on energy saving evaluation index system of motor system, Motor Control Appl., 43 (2016), 74–77.
    [10] L. X. Ma, M. Y. Lv, Research on intelligent quantification and classification method of power energy efficiency, Power Sci. Eng., 33 (2017), 46–49.
    [11] J. Cervantes, F. Garcia-Lamont, L. Rodríguez-Mazahua, A. Lopez, A comprehensive survey on support vector machine classification: Applications, challenges and trends, Neurocomputing, 408 (2020), 189–215. https://doi.org/10.1016/j.neucom.2019.10.118 doi: 10.1016/j.neucom.2019.10.118
    [12] G. G. Wang, S. Deb, Z. Cui, Monarch butterfly optimization, Neural Comput. Appl., 31 (2019), 1995–2014. https://doi.org/10.1007/s00521-015-1923-y doi: 10.1007/s00521-015-1923-y
    [13] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software, 95 (2016), 51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008 doi: 10.1016/j.advengsoft.2016.01.008
    [14] S. H. S. Moosavi, V. K. Bardsiri, Satin bowerbird optimizer: A new optimization algorithm to optimize ANFIS for software development effort estimation, Eng. Appl. Artif. Intell., 60 (2017), 1–15. https://doi.org/10.1016/j.engappai.2017.01.006 doi: 10.1016/j.engappai.2017.01.006
    [15] G. G. Wang, Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization problems, Memetic Comput., 10 (2018), 151–164. https://doi.org/10.1007/s12293-016-0212-3 doi: 10.1007/s12293-016-0212-3
    [16] F. A. Hashim, E. H. Houssein, M. S. Mabrouk, W. Al‐Atabany, S. Mirjalili, Henry gas solubility optimization: A novel physics-based algorithm, Future Gener. Comput. Syst., 101 (2019), 646–667. https://doi.org/10.1016/j.future.2019.07.015 doi: 10.1016/j.future.2019.07.015
    [17] A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, H. Chen, Harris hawks optimization: Algorithm and applications, Future Gener. Comput. Syst., 97 (2019), 849–872. https://doi.org/10.1016/j.future.2019.02.028 doi: 10.1016/j.future.2019.02.028
    [18] S. Li, H. Chen, M. Wang, A. A. Heidari, S. Mirjalili, Slime mould algorithm: A new method for stochastic optimization, Future Gener. Comput. Syst., 111 (2020), 300–323. https://doi.org/10.1016/j.future.2020.03.055 doi: 10.1016/j.future.2020.03.055
    [19] I. Ahmadianfar, A. A. Heidari, A. H. Gandomi, X. Chu, H. Chen, RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method, Expert Syst. Appl., 181 (2021), 115079. https://doi.org/10.1016/j.eswa.2021.115079 doi: 10.1016/j.eswa.2021.115079
    [20] J. Tu, H. Chen, M. Wang, A. H. Gandomi, The colony predation algorithm, J. Bionic Eng., 18 (2021), 674–710. https://doi.org/10.1007/s42235-021-0050-y doi: 10.1007/s42235-021-0050-y
    [21] I. Ahmadianfar, A. A. Heidari, S. Noshadian, H. Chen, A. H. Gandomi, INFO: An efficient optimization algorithm based on weighted mean of vectors, Expert Syst. Appl., 195 (2022), 116516. https://doi.org/10.1016/j.eswa.2022.116516 doi: 10.1016/j.eswa.2022.116516
    [22] G. G. Wang, S. Deb, X. Z. Gao, L. Coelho, A new metaheuristic optimization algorithm motivated by elephant herding behavior, Bio-Inspir. Comput., 8 (2016), 394–409. https://doi.org/10.1504/IJBIC.2016.081335 doi: 10.1504/IJBIC.2016.081335
    [23] Z. Zhang, H. Wang, H. Zhou, S. You, Parameter estimation of chaotic systems based on a multi-mechanism hybrid elephant herd algorithm, Microelectron. Comput., 6 (2020), 40–45.
    [24] M. S. Tavazoei, M. Haeri, Comparison of different one-dimensional maps as chaotic search pattern in chaos optimization algorithms, Appl. Math. Comput., 187 (2007), 1076–1085. https://doi.org/10.1016/j.amc.2006.09.087 doi: 10.1016/j.amc.2006.09.087
    [25] F. Chakraborty, P. K. Roy, D. Nandi, Oppositional elephant herding optimization with dynamic Cauchy mutation for multilevel image thresholding, Evol. Intell., 12 (2019), 445–467. https://doi.org/10.1007/s12065-019-00238-1 doi: 10.1007/s12065-019-00238-1
    [26] W. Luo, H. Jin, H. Li, R. Zhou, Blind source separation of radar signals based on chaotic adaptive firework algorithm, Syst. Eng. Electr., 42 (2020), 2497–2505.
    [27] F. Marini, B. Walczak, Particle swarm optimization (PSO). A tutorial, Chemom. Intell. Lab. Syst., 149 (2015), 153–165. https://doi.org/10.1016/j.chemolab.2015.08.020 doi: 10.1016/j.chemolab.2015.08.020
    [28] J. Liang, B. Qu, P. Suganthan, Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Technical Report, Nanyang Technological University, 2014.
    [29] N. Lynn, P. N. Suganthan, Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation, Swarm Evol. Comput., 24 (2015), 11–24. https://doi.org/10.1016/j.swevo.2015.05.002 doi: 10.1016/j.swevo.2015.05.002
    [30] Y. Xue, J. Jiang, B. Zhao, T. Ma, A self-adaptive artificial bee colony algorithm based on global best for global optimization, Soft Comput., 22 (2018), 2935–2952. https://doi.org/10.1007/s00500-017-2547-1 doi: 10.1007/s00500-017-2547-1
    [31] W. F. Gao, L. L. Huang, S. Y. Liu, C. Dai, Artificial bee colony algorithm based on information learning, IEEE Trans. Cybern, 45 (2015), 2827–2839. https://doi.org/10.1109/TCYB.2014.2387067 doi: 10.1109/TCYB.2014.2387067
    [32] Y. Luo, H. Jin, H. Li, H. Rong, Blind source separation of radar signals based on chaotic adaptive fireworks algorithm, Syst. Eng. Electron. Technol., 42 (2020), 95–103.
    [33] J. Peng, Y. Zhou, C. L. P. Chen, Region-kernel-based support vector machines for hyperspectral image classification, IEEE Trans. Geosci. Remote Sensing, 53 (2015), 4810–4824. https://doi.org/10.1109/TGRS.2015.2410991 doi: 10.1109/TGRS.2015.2410991
    [34] Z. Lv, X. Yan, Q. Jiang, Batch process monitoring based on self-adaptive subspace support vector data description, Chemom. Intell. Lab. Syst., 170 (2017), 25–31. https://doi.org/10.1016/j.chemolab.2017.09.009 doi: 10.1016/j.chemolab.2017.09.009
    [35] J. Yu, X. Yan, Whole process monitoring based on unstable neuron output information in hidden layers of deep belief network, IEEE Trans. Cybern., 50 (2020), 3998–4007. https://doi.org/10.1109/TCYB.2019.2948202 doi: 10.1109/TCYB.2019.2948202
    [36] Y. Shao, C. Zhang, X. Wang, N. Deng, Improvements on twin support vector machines, IEEE Trans. Neural Networks, 22 (2020), 962–968. https://doi.org/10.1109/TNN.2011.2130540 doi: 10.1109/TNN.2011.2130540
    [37] S. Zhang, X. Li, M. Zong, X. Zhu, R. Wang, Efficient kNN classification with different numbers of nearest neighbors, IEEE Trans. Neural Networks Learn. Syst., 29 (2018), 1774–1785. https://doi.org/10.1109/TNNLS.2017.2673241 doi: 10.1109/TNNLS.2017.2673241
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)