
With the advancement of computational intelligence technology, many different types of metaheuristic algorithms have emerged. Metaheuristic algorithms can be classified into individual-based and population-based methods. The particle swarm optimization (PSO) algorithm is a representative population-based intelligence algorithm. PSO is widely used because of its fast convergence, simplicity and small number of parameters [1], and it has been applied in various fields [2,3,4,5]. However, traditional PSO algorithms have difficulty escaping from local optima, resulting in low convergence accuracy. Therefore, many improved PSO variants have been proposed. Specifically, these PSO variants can be roughly divided into five kinds: 1) strategy selection [6,7,8], 2) hybrid algorithms [9,10,11], 3) parameter design [12,13], 4) topology structure optimization [14,15] and 5) other approaches [16,17,18,19,20], as shown in Figure 1. Drawing upon these five strands of research into PSO variants, this study focuses on the first aspect, i.e., the influence of multi-strategy selection on the performance of PSO.
As briefly reviewed above, a considerable amount of literature has focused on single strategies, such as learning strategies, crossover strategies and mutation strategies. In terms of learning strategies, Liang et al. [6] proposed a comprehensive learning PSO algorithm (CLPSO), in which particles learn from better particles on the same dimension to increase diversity; CLPSO shows good performance on multi-modal optimization problems. Similarly, HCLPSO [7] was proposed based on the comprehensive learning (CL) strategy with two cooperating subpopulations, integrating learning from the personal best particle (pbest) and the global best particle (gbest) to balance the exploration and exploitation processes. To improve the global search ability and prevent premature convergence, Liang et al. [21] proposed a CL strategy based on crossover. Moreover, Xu et al. [22] proposed a PSO algorithm based on dimension learning. With respect to crossover strategies, Chen et al. [23] introduced a PSO algorithm that relies on two crossover methods (arithmetic crossover and differential evolution crossover) to construct guidance vectors against premature convergence of the population. Zhang et al. [24] developed a crossover operation applied to the current poor pbest at the later stage of iterations in order to enhance the quality of offspring individuals and accelerate convergence. From the perspective of mutation strategies, a PSO variant was designed based on the cooperation of large-scale and small-scale mutations, aiming to balance the global and local search capabilities [25]. Focusing on enhancing population diversity, Li et al. [26] proposed a differential mutation method to increase the global exploration potential of the PSO algorithm. Similarly, drawing inspiration from the phenomenon of fireworks explosions, Huang et al. [27] designed a mutation operator to enhance the exploration capability of multi-objective optimization algorithms.
Up to now, a proliferation of studies has also grown up around the issue of the multi-strategy PSO algorithm. Wang et al. [28] combined Cauchy mutation and opposition learning to prevent premature convergence. In order to accelerate the convergence speed of the population, Ouyang et al. [29] used the strategies of stochastic learning and opposition learning. Zhang et al. [30] proposed the strategy of learning from excellent individuals and dynamic differential evolution. Wang et al. [31] utilized the CL strategy and non-uniform mutation operator to promote the exploration capability of the algorithm. The method of Gaussian mutation was also used to increase the ability to break away from the local optimum. Likewise, the CL strategy with an archive set and the elite learning strategy were employed to enhance the global search and local exploitation abilities of the algorithm, respectively [32]. In another PSO variant, a multi-strategy learning PSO algorithm was proposed and applied to different stages of evolution for large-scale optimization problems [33]. In addition, the self-adaptive method, which has been extensively studied in relation to the ensemble PSO algorithm, is particularly applicable to multi-strategy selection for PSO variants. For instance, an ensemble learning PSO algorithm (EPSO) was proposed to select the best strategy for generating fine offspring by tracking the success rate of strategy improvement over a given time period [34]. Based on game theory, another multi-strategy selection mechanism, SDPSO [8], was designed. Strategies with high payoffs were selected to generate offspring with a higher probability. Moreover, Li et al. [35] designed different particle velocity formulas from exploitation, the local optimum, exploration and convergence, thus forming a hybrid learning strategy method.
Furthermore, reinforcement learning (RL) has penetrated deeply into evolutionary algorithms (EAs), opening new areas for RL-guided EAs and forming new hybrid methods [36]. RL, as a sub-field of machine learning, is inspired by behaviorism in psychology. In RL, agents learn by trial and error, acquiring a mapping from environmental states to actions with the goal of maximizing the reward [37]. RL has been widely adopted in intelligent methods. Liu et al. [38] used the RL technique to guide the parameter optimization of the PSO algorithm. Wang et al. [39] embedded RL into a hierarchical learning PSO algorithm for selecting the number of layers. For function optimization, Li et al. [40] resorted to the RL technique for multi-modal multi-objective optimization problems. Hu et al. [41] employed a similar method to select differential evolution mutation strategies and then solve constrained optimization problems. The RL technique was also adopted to solve dynamic multi-objective optimization problems [42]. Moreover, deep RL was embedded into a decomposition-based multi-objective evolutionary optimization algorithm for the selection of adaptive operators [43]. RL-guided EAs have also been used to solve application problems in diverse domains [44,45,46,47,48].
Based on the above description, it can be seen that these studies provide important insights into the relationship between RL techniques and EAs. Although some PSO variants employ the idea of Q-learning, its purpose is only to select appropriate parameters [38] or level numbers [39]. To the best of our knowledge, however, self-adaptive multi-strategy selection combined with the RL technique has not been provided in existing multi-swarm cooperation PSO variants, and the effect of using RL as an auxiliary tool in a multi-swarm PSO algorithm is still unknown. It follows that some effective schemes, such as Q-learning and mixed techniques, are still underutilized for improving the performance of EAs. Therefore, under the guidance of the RL technique, a new multi-strategy selection mechanism is designed. In the evolutionary process, the same particle can be assigned different strategies, while different particles can be given the same strategy. Each strategy is evaluated by the Q-table, which is refreshed automatically during iterations. Hence, a more flexible structure for multi-strategy self-learning is established. The main contributions of this paper are highlighted as follows:
∙ Q-learning, as a classical RL technique, is thoroughly investigated in the multi-strategy self-learning PSO algorithm. A multi-strategy self-learning method is constructed to select appropriate strategies for producing high-quality offspring.
∙ The traditional state division in Q-learning is the uniform division method [38,40,41]. In the proposed algorithm, a non-uniform division approach is proposed, which is more suitable for optimization problems. Meanwhile, experimental results also show that this partition is related to the dimension of the particles and is more favorable for solving high-dimensional problems.
∙ A novel multi-strategy self-learning particle swarm optimizer (MPSORL) is designed to obtain a better balance between exploration and exploitation. MPSORL has a faster convergence speed and better global optimization ability than the compared approaches in most cases.
∙ Systematic experiments on two popular test suites and a real-world case demonstrate that the multi-strategy self-learning mechanism is effective and that the proposed MPSORL outperforms other state-of-the-art algorithms.
The remainder of this paper is arranged as follows. The related theory is introduced in Section 2. The MPSORL algorithm is designed in detail in Section 3. Section 4 presents the simulation experiments and analysis. Further discussion is presented in Section 5. Section 6 concludes this paper.
PSO simulates the foraging behavior of birds, and a particle is often compared to a bird [49]. PSO focuses on two properties of particles: position and velocity. Each particle searches the space independently, remembers its own past optimal solution and also knows the current optimal solution found by the whole swarm of particles. Generally, Equations (2.1) and (2.2) are used to describe the particles' velocity and position, respectively [50].
$v_{ij}^{k+1} = w \cdot v_{ij}^{k} + c_1 \cdot r_1 \cdot (pbest_{ij}^{k} - x_{ij}^{k}) + c_2 \cdot r_2 \cdot (gbest_{j}^{k} - x_{ij}^{k})$  (2.1)
$x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1}$  (2.2)
where $w$ denotes the inertia weight, and $c_1$, $c_2$ denote the learning factors. $r_1$ and $r_2$ are uniformly distributed random numbers in the range [0, 1]. Let $i = 1, 2, \cdots, N$, where $N$ is the number of particles. Likewise, let $j = 1, 2, \cdots, D$, where $D$ is the dimension of a particle. $k$ denotes the iteration number. $x_{ij}^{k}$ and $v_{ij}^{k}$ are the position and velocity components of the $i$th particle at the $k$th iteration. For the minimization problem, the optimal position $pbest_{i}^{k}$ found by the particle can be expressed as follows:
$pbest_{i}^{k} = \begin{cases} x_{i}^{k}, & \text{if } f(x_{i}^{k}) < f(pbest_{i}^{k-1}) \\ pbest_{i}^{k-1}, & \text{if } f(x_{i}^{k}) \ge f(pbest_{i}^{k-1}) \end{cases}$  (2.3)
where $f(\cdot)$ denotes the fitness value. Furthermore, Equation (2.4) describes the current optimal position found by the entire swarm.
$gbest^{k} = pbest_{g}^{k}$  (2.4)
where $g = \arg\min_{1 \le i \le N} f(pbest_{i}^{k})$.
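As an illustration of Eqs (2.1)–(2.4), the following is a minimal NumPy sketch of one PSO iteration for a minimization problem. The fixed values of w, c1 and c2 and the function names are placeholder assumptions (the algorithms compared later use, e.g., linearly decreasing inertia weights), not the exact settings of this paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One velocity and position update for all particles, following Eqs (2.1)-(2.2).

    x, v, pbest have shape (N, D); gbest has shape (D,).
    """
    N, D = x.shape
    r1 = np.random.rand(N, D)
    r2 = np.random.rand(N, D)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq (2.1)
    x_new = x + v_new                                              # Eq (2.2)
    return x_new, v_new

def update_memory(x, fitness, pbest, pbest_fit):
    """Update personal bests (Eq (2.3)) and the global best (Eq (2.4)) for minimization."""
    improved = fitness < pbest_fit
    pbest = np.where(improved[:, None], x, pbest)
    pbest_fit = np.where(improved, fitness, pbest_fit)
    g = int(np.argmin(pbest_fit))            # g = argmin_i f(pbest_i)
    return pbest, pbest_fit, pbest[g].copy()
```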
In RL, there are five essential elements: agent, environment, states, actions and reward. The Q-learning algorithm is a method that uses temporal difference to solve RL control problems [51]. Concretely, at a certain time $t$, the agent decides on the action $a_t$; the environment offers a reward $r_{t+1}$, generates the next state $s_{t+1}$ according to the agent's action $a_t$ and then gives feedback to the agent. Here, the state set is denoted by $S = \{s_1, s_2, \cdots, s_p\}$, where $p$ denotes the number of states. The action set is denoted by $A = \{a_1, a_2, \cdots, a_q\}$, where $q$ denotes the number of actions. The characteristic of the Q-learning algorithm is that it constructs a two-dimensional table, named the Q-table. According to the possible states and actions, the agent looks up the table to obtain the Q-value and then gives the optimal action. The details are shown in Figure 2.
In the Q-table, the Q-value is computed as follows:
$Q(s_t, a_t) = Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a \in A} Q(s_{t+1}, a) - Q(s_t, a_t) \right]$.  (2.5)
Notably, Equation (2.5) has the following attributes:
1) The right-hand side of the formula consists of three parts: the old value $Q(s_t, a_t)$, the immediate reward $r_{t+1}$ and the estimate of the optimal future value $\max_{a \in A} Q(s_{t+1}, a)$.
2) The parameter α represents the learning rate, which denotes a trade-off between the previous learning and the current learning.
3) The parameter $\gamma$ refers to the factor that weighs the impact of future rewards on the present, and it is a number in the range [0, 1]. If $\gamma \to 1$, future rewards have a greater impact on the Q-value; on the contrary, if $\gamma \to 0$, the Q-value is affected more by the current reward.
The pseudocode for Q-learning is expressed in Algorithm 1.
Algorithm 1 Q-learning algorithm
1: Initialize Q(s, a), α, γ, r, ε
2: Repeat:
3:   Choose the best action a_t for the current state s_t using a policy (e.g., the ε-greedy strategy);
4:   Execute action a_t;
5:   Get the immediate reward r_{t+1};
6:   Get the maximum Q-value for the next state s_{t+1};
7:   Update the Q-table by Eq (2.5);
8:   Update the current state s_t ← s_{t+1};
9: Until the termination condition is met
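The following is a minimal, self-contained sketch of the tabular Q-learning loop in Algorithm 1 and the update rule of Eq (2.5). The function names, the 5 × 4 table size (matching the later Q-table of Table 2) and the parameter values are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def epsilon_greedy(Q, state, eps, rng):
    """Pick an action for `state`: explore with probability eps, otherwise exploit."""
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[state]))

def q_update(Q, s, a, r, s_next, alpha=0.6, gamma=0.8):
    """One temporal-difference update of the Q-table, Eq (2.5)."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q

# toy usage: 5 states x 4 actions
rng = np.random.default_rng(0)
Q = np.zeros((5, 4))
s = 2
a = epsilon_greedy(Q, s, eps=0.1, rng=rng)
Q = q_update(Q, s, a, r=1.0, s_next=1)   # a transition to a better state earned reward 1
```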
When solving global optimization problems with EAs, an important issue is how to balance exploration and exploitation. However, it is hard to maintain such a balance owing to the varied characteristics of the problems. In order to better match the properties of different functions, it is desirable to find a method that selects the appropriate strategy adaptively. In a multi-strategy PSO algorithm, different strategies can greatly affect the performance of the algorithm; in other words, strategy selection plays a vital role in improving performance. In fact, as one of the most well-known RL algorithms, Q-learning can directly learn the optimal strategy [51]. New states and rewards are generated by executing actions, and then the Q-table is updated. The quality of a strategy depends on the cumulative reward obtained after playing the strategy over a long horizon. Based on this idea, a multi-strategy self-learning method based on the RL technique is proposed, and a new EA based on the framework of PSO, named MPSORL, is designed. The specific steps are described in the next section. Table 1 shows a comparison of terms between PSO and Q-learning.
Q-learning | PSO |
Agents | Particles |
Environment | Optimization problems |
State | Fitness value |
Action | Strategy |
This section will describe the detailed design of the Q-learning algorithm, the related theory of which has been presented briefly in Section 2.2. The purpose is to embed the newly designed Q-learning elements into the multi-strategy PSO algorithm and then form the multi-strategy self-learning method.
For any subject, the distribution of student scores can be divided into five levels: excellent, good, medium, pass and fail, and the number of individuals at each level is not necessarily equal. By analogy with this score distribution, it is reasonable for the state to adopt a non-uniform division according to the fitness value. As a result, the fitness value is divided into five grades from smallest to largest, namely s1 to s5, as shown in Figure 3. As Figure 3 illustrates, the particles are not equally distributed, so the number of particles within each level differs. To the best of our knowledge, the traditional state division in Q-learning has only focused on the uniform division method [38,40,41]. The two methods are compared, and the advantages of the non-uniform state division are illustrated by the experiment in Section 5.1. A sketch of such a fitness-based division is given below.
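The sketch below shows one possible way to map particles to the five grades by sorting fitness and cutting the ranked population at cumulative percentages. The default cut points follow the non-uniform partition S2 = [10, 25, 45, 70] discussed in Section 5.1; the function name and the encoding (0 for s1 through 4 for s5) are illustrative assumptions.

```python
import numpy as np

def divide_states(fitness, cut_points=(10, 25, 45, 70)):
    """Assign each particle a state level 0..4 (s1..s5) from its fitness rank.

    cut_points are cumulative percentages of the ascending-sorted population;
    (10, 25, 45, 70) is the non-uniform partition S2 of Section 5.1, while
    (20, 40, 60, 80) would give the uniform partition S1.
    """
    n = fitness.size
    order = np.argsort(fitness)                      # best (smallest) fitness first
    bounds = [int(round(p / 100 * n)) for p in cut_points] + [n]
    states = np.empty(n, dtype=int)
    start = 0
    for level, end in enumerate(bounds):
        states[order[start:end]] = level             # 0 = s1 (excellent) ... 4 = s5 (fail)
        start = end
    return states

# example: grade 20 particles with random fitness values
print(divide_states(np.random.rand(20)))
```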
If the opportunity to try is distributed equally among actions, the average reward of each action is eventually taken as an approximation of its expected reward; if the opportunity to try is always given to the action with the highest current average reward, the optimal action may never be found. These two methods are clearly in conflict, so a compromise must be made between exploration and exploitation. Specifically, the ε-greedy strategy is adopted to execute this compromise based on probability: at each trial, it explores with probability ε, selecting an action at random with uniform probability, and it exploits with probability 1−ε, choosing the currently best action. Here, ε is a small number within [0, 1], and its value is analyzed experimentally in Section 4.2. The ε-greedy strategy is displayed in Eq (3.1).
$a = \begin{cases} \arg\max_{a \in A} Q(s, a), & \text{with probability } 1 - \varepsilon \\ \text{random}, & \text{otherwise} \end{cases}$  (3.1)
For a sequence $(s, a, s', r)$, the payoff cannot be obtained until the transition from state $s$ to state $s'$ arises by executing action $a$. Each particle has an associated reward $r$. If the objective function value decreases enough to move the particle to a better state level, the reward is set to 1; otherwise, the reward is set to 0. The rule is presented by Eq (3.2) as follows:
$r = \begin{cases} 1, & \text{if } s_{i-1} \leftarrow s_i, \; i = 2, \cdots, p \\ 0, & \text{otherwise} \end{cases}$  (3.2)
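A small sketch of the reward rule of Eq (3.2), under the assumption that states are encoded 0 (s1, best grade) through 4 (s5, worst grade) as in the division sketch above: a particle that moves to a better grade receives reward 1, and any other outcome receives 0.

```python
import numpy as np

def state_reward(prev_states, new_states):
    """Eq (3.2): reward 1 if a particle's state level improved (e.g., s3 -> s2), else 0."""
    return (new_states < prev_states).astype(int)

# particle 0 improved (s3 -> s1), particle 1 stayed at s5, particle 2 worsened (s2 -> s4)
prev = np.array([2, 4, 1])
new = np.array([0, 4, 3])
print(state_reward(prev, new))   # [1 0 0]
```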
As described in Section 3.2, under the framework of SDPSO [8], this paper uses a cooperation framework of small (pop1) and large (pop2) subpopulations [8,34]. Unlike the previous work, a new multi-strategy self-learning method is developed in the large subpopulation, i.e., the RL technique is adopted to train the behavior of particles. The Q-table is shown in Table 2. Here, actions refer to the strategies that particles use to generate offspring individuals. Multi-strategy and self-adaptive selection techniques can provide complementary advantages. Specifically, LIPS [52], UPSO [53], LDWPSO [50] and CLPSO [6] are denoted as actions a1, a2, a3 and a4, respectively. LIPS works well in searching for multiple local optima in multimodal optimization problems, UPSO shows advantages in dealing with unimodal problems, the convergence speed of LDWPSO is faster than that of the other compared approaches, and CLPSO preserves population diversity throughout the iteration process. As a result, a multi-strategy self-adaptive selection scheme is developed by adopting these strategies. According to the fitness values of the individuals, the state of the population is first divided non-uniformly. Then, the ε-greedy strategy is used to select an action for each particle. Next, the offspring individuals are generated to update pbest and gbest. The fitness of the offspring individuals acts as an input to the environment, and the next state is calculated for each particle. The process of state transition produces the corresponding reward value, and the Q-table is then updated. This process is repeated until the termination condition is met. Based on the above statement, the framework of multi-strategy self-learning is displayed in Figure 4.
State | Action | |||
a1 | a2 | a3 | a4 | |
s1 | Q(s1, a1) | Q(s1, a2) | Q(s1, a3) | Q(s1, a4) |
s2 | Q(s2, a1) | Q(s2, a2) | Q(s2, a3) | Q(s2, a4) |
s3 | Q(s3, a1) | Q(s3, a2) | Q(s3, a3) | Q(s3, a4) |
s4 | Q(s4, a1) | Q(s4, a2) | Q(s4, a3) | Q(s4, a4) |
s5 | Q(s5, a1) | Q(s5, a2) | Q(s5, a3) | Q(s5, a4) |
Based on the above description, the pseudocode of MPSORL is described in Algorithm 2. First, some input parameters are defined. Then, once the program enters the loop, pop1 is updated by CLPSO in lines 4–8. For pop2, lines 10–33 adaptively assign the optimal strategy to each particle. Specifically, lines 10–11 divide the state and select actions based on the Q-table. Lines 12–17 generate offspring individuals by means of the various strategies. Every learning period (LP) of iterations, the Q-table is updated in lines 19–33. The action is reselected in line 22 to enhance the exploration ability. It should be noted that no additional parameters are introduced. The procedures in lines 1–34 are executed iteratively until the termination condition is met.
Algorithm 2 The proposed algorithm
Input:
  Initialize N, D, the maximum number of fitness evaluations (Max_Fes);
  Initialize the maximum number of iterations (Max_Iter), k = 0, kk = 0, fit = 0;
  Initialize the inertia weight w, learning factors c1, c2, position, velocity;
  Evaluate the fitness value, fit = fit + N, record pbest and gbest;
  Initialize Q-table, state, reward, learning period LP, ε, α, γ.
Output: Best solution.
1: while termination condition is not met do
2:   k = k + 1;
3:   /* pop1 */
4:   for i = 1 : length(pop1) do
5:     Update the particle's velocity and position by CLPSO;
6:     Evaluate the fitness of the particle, fit = fit + 1;
7:     Update pbest_i and gbest;
8:   end for
9:   /* pop2 */
10:  Divide the state for pop2 according to Section 3.2.1;
11:  Select actions based on the Q-table using the ε-greedy strategy according to Eq (3.1);
12:  for i = length(pop1) + 1 : N do
13:    Update the particle's velocity by action a_i;
14:    Update the particle's position;
15:    Evaluate the fitness of the particle, fit = fit + 1;
16:    Update pbest_i and gbest;
17:  end for
18:  /* update Q-table */
19:  if k < Max_Iter then
20:    if mod(k, LP) = 0 then
21:      Determine the next state;
22:      Select actions using the ε-greedy strategy;
23:      Calculate the reward according to Eq (3.2);
24:      Update the Q-table using Eq (2.5);
25:    end if
26:  else
27:    if mod(kk, LP) = 0 then
28:      Determine the next state;
29:      Calculate the reward according to Eq (3.2);
30:      Update the Q-table using Eq (2.5);
31:    end if
32:    kk = kk + 1;
33:  end if
34: end while
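To make the control flow of Algorithm 2 concrete, the following is a heavily condensed Python sketch. It is not the authors' implementation: the pop1/pop2 split, velocity clamping and the late-stage kk branch are omitted, the four real actions (LIPS, UPSO, LDWPSO, CLPSO) are stubbed by a single inertia-weight velocity rule with different weights, and the sphere objective, bounds and parameter defaults are placeholder assumptions.

```python
import numpy as np
from functools import partial

def sphere(x):                                      # placeholder minimization objective
    return float(np.sum(x ** 2))

def velocity_update(x, v, pbest, gbest, i, w, c1, c2):
    """Generic PSO velocity rule standing in for all four strategies."""
    D = x.shape[1]
    r1, r2 = np.random.rand(D), np.random.rand(D)
    return w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])

# four placeholder actions (the paper uses LIPS, UPSO, LDWPSO and CLPSO)
ACTIONS = [partial(velocity_update, w=w, c1=2.0, c2=2.0) for w in (0.4, 0.6, 0.7, 0.9)]

def divide_states(fit, cuts=(10, 25, 45, 70)):
    """Non-uniform five-grade state division by fitness rank (see Section 3.2.1)."""
    n, order = fit.size, np.argsort(fit)
    bounds = [int(round(p / 100 * n)) for p in cuts] + [n]
    states, start = np.empty(n, dtype=int), 0
    for level, end in enumerate(bounds):
        states[order[start:end]] = level
        start = end
    return states

def select_actions(Q, states, eps, rng):
    """ε-greedy action per particle, Eq (3.1)."""
    return np.array([rng.integers(Q.shape[1]) if rng.random() < eps
                     else int(np.argmax(Q[s])) for s in states])

def mpsorl_like(f=sphere, N=40, D=10, max_fes=20000,
                eps=0.2, alpha=0.6, gamma=0.8, lp=50, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = -100.0, 100.0
    x = rng.uniform(lb, ub, (N, D))
    v = np.zeros((N, D))
    pbest, pbest_fit = x.copy(), np.array([f(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    fes = N
    Q = np.zeros((5, len(ACTIONS)))                 # 5 states x 4 actions (Table 2)
    states = divide_states(pbest_fit)
    actions = select_actions(Q, states, eps, rng)
    k = 0
    while fes < max_fes:
        k += 1
        for i in range(N):
            v[i] = ACTIONS[actions[i]](x, v, pbest, gbest, i)
            x[i] = np.clip(x[i] + v[i], lb, ub)
            fi = f(x[i]); fes += 1
            if fi < pbest_fit[i]:                   # Eq (2.3)
                pbest[i], pbest_fit[i] = x[i].copy(), fi
        gbest = pbest[np.argmin(pbest_fit)].copy()  # Eq (2.4)
        if k % lp == 0:                             # periodic Q-table update
            new_states = divide_states(pbest_fit)
            for i in range(N):
                r = 1.0 if new_states[i] < states[i] else 0.0          # Eq (3.2)
                Q[states[i], actions[i]] += alpha * (r + gamma * Q[new_states[i]].max()
                                                     - Q[states[i], actions[i]])  # Eq (2.5)
            states = new_states
            actions = select_actions(Q, states, eps, rng)
    return gbest, float(pbest_fit.min())

best_x, best_f = mpsorl_like()
print("best fitness:", best_f)
```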
In MPSORL, the worst-case time complexity of updating pop1 is O(N1D), where N1 is the population size of pop1, and the worst-case time complexity of updating pop2 is O(N2D), where N2 is the population size of pop2. Compared with the standard PSO, MPSORL additionally requires sorting the particles in pop2 and then assigning each particle to a specific state; the mean time complexities of these two procedures can be denoted by O(N2log(N2)) and O(N2D), respectively. In addition, MPSORL needs to update the Q-table through Q-learning, whose worst-case time complexity is O(N2D). According to the above component complexity analysis, MPSORL has an overall computational time complexity of O(N1D + 3N2D + N2log(N2)).
This paper introduces the RL technique to classify particle states and guide particle movements, and a multi-strategy self-learning mechanism based on the RL technique is proposed to select the optimal strategy to update particles' information. In the same state, particles may adopt different strategies, and particles in different states may also adopt the same strategy. The details are analyzed below:
1) Self-learning method based on the RL technique. The action set is constructed by the strategy pool. The whole process is controlled by the Q-table, which facilitates independent learning and completes the strategy selection. This makes the proposed algorithm significantly different from other multi-strategy selection PSO algorithms. For instance, the limitation of EPSO [34] lies in the fact that the information about fitness improvement is not fully utilized to update the adoption probability of the strategies. In addition, in SDPSO [8], a restart operator might reduce the convergence speed.
2) Unequal state division of particles in the population. Particles are divided into five grades according to fitness: excellent, good, medium, pass and fail. The better a particle's fitness value, the higher its state level. This division distinguishes MPSORL from the RL-guided PSO variants [38,40,41], which have only focused on uniform division.
3) Feedback mechanism. Strategy selection in existing multi-strategy PSO variants does not provide feedback information, relying on their performance evaluations [8,34,35]. In Eq (2.5), the Q-value is evaluated by the immediate and future rewards. The future reward has varying degrees of influence on the Q-value. According to the theory of Q-learning and the individual's multi-step evolutionary performance, if one regulates future payoffs and then feeds them back to the agent, the offspring may also be influenced.
In this section, orthogonal experiments are first adopted to determine five vital parameters. Then, the superiority of the multi-strategy PSO algorithm is verified on 30-dimensional problems of the CEC2017 test suite [54]. Next, the performance of the algorithm is further verified on the 50- and 100-dimensional problems of CEC2017, respectively. Subsequently, MPSORL is tested on the latest suite, CEC2019 [55]. The detailed information for CEC2017 and CEC2019 is outlined in Tables 3 and 4. Finally, the algorithm is also applied to a real-world problem.
Function type | No. | Test Functions | F(x∗) | D
Unimodal Functions | F1 | Shifted and Rotated Bent Cigar Function | 100 | 30/50/100
| F2 | Shifted and Rotated Sum of Different Power Function* | 200 | 30/50/100
| F3 | Shifted and Rotated Zakharov Function | 300 | 30/50/100
Simple Multimodal Functions | F4 | Shifted and Rotated Rosenbrock Function | 400 | 30/50/100
| F5 | Shifted and Rotated Rastrigin Function | 500 | 30/50/100
| F6 | Shifted and Rotated Expanded Schaffer's F6 Function | 600 | 30/50/100
| F7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 | 30/50/100
| F8 | Shifted and Rotated Non-Continuous Rastrigin Function | 800 | 30/50/100
| F9 | Shifted and Rotated Levy Function | 900 | 30/50/100
| F10 | Shifted and Rotated Schwefel Function | 1000 | 30/50/100
Hybrid Functions | F11 | Hybrid Function 1 (N = 3) | 1100 | 30/50/100
| F12 | Hybrid Function 2 (N = 3) | 1200 | 30/50/100
| F13 | Hybrid Function 3 (N = 3) | 1300 | 30/50/100
| F14 | Hybrid Function 4 (N = 4) | 1400 | 30/50/100
| F15 | Hybrid Function 5 (N = 4) | 1500 | 30/50/100
| F16 | Hybrid Function 6 (N = 4) | 1600 | 30/50/100
| F17 | Hybrid Function 6 (N = 5) | 1700 | 30/50/100
| F18 | Hybrid Function 6 (N = 5) | 1800 | 30/50/100
| F19 | Hybrid Function 6 (N = 5) | 1900 | 30/50/100
| F20 | Hybrid Function 6 (N = 6) | 2000 | 30/50/100
Composition Functions | F21 | Composition Function 1 (N = 3) | 2100 | 30/50/100
| F22 | Composition Function 2 (N = 3) | 2200 | 30/50/100
| F23 | Composition Function 3 (N = 4) | 2300 | 30/50/100
| F24 | Composition Function 4 (N = 4) | 2400 | 30/50/100
| F25 | Composition Function 5 (N = 5) | 2500 | 30/50/100
| F26 | Composition Function 6 (N = 5) | 2600 | 30/50/100
| F27 | Composition Function 7 (N = 6) | 2700 | 30/50/100
| F28 | Composition Function 8 (N = 6) | 2800 | 30/50/100
| F29 | Composition Function 9 (N = 3) | 2900 | 30/50/100
| F30 | Composition Function 10 (N = 3) | 3000 | 30/50/100
Note: *F2 has been excluded because it shows unstable behavior especially for higher dimensions. |
No. | Functions | F(x∗) | D | Search range |
F31 | Storn's Chebyshev Polynomial Fitting Problem | 1 | 9 | [-8192, 8192] |
F32 | Inverse Hilbert Matrix Problem | 1 | 16 | [-16384, 16384] |
F33 | Lennard-Jones Minimum Energy Cluster | 1 | 18 | [-4, 4] |
F34 | Rastrigin Function | 1 | 10 | [-100,100] |
F35 | Griewank's Function | 1 | 10 | [-100,100] |
F36 | Weierstrass Function | 1 | 10 | [-100,100] |
F37 | Modified Schwefel Function | 1 | 10 | [-100,100] |
F38 | Expanded Schaffer's F6 Function | 1 | 10 | [-100,100] |
F39 | Happy Cat Function | 1 | 10 | [-100,100] |
F40 | Ackley Function | 1 | 10 | [-100,100] |
The experimental environment is Windows 7 with an Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.30 GHz and 16 GB of memory. All code was tested on the Matlab R2020b platform.
The population size is set to 40, 40 and 80 for 30, 50 and 100 dimensions, respectively. Max_Fes is set to 10,000 × dimension. The reported results are the errors between the obtained values and the known optimal values. Each algorithm is evaluated on each test function over 30 independent runs. Moreover, for a fair comparison, the same number of evaluations is used for all algorithms. All test functions are minimization problems.
The performance of the algorithms is measured by two non-parametric tests, namely, the Wilcoxon rank sum test with a significance level of 0.05 and the Friedman test [56,57]. The symbols "+/≈/−" indicate that the proposed algorithm is significantly better than, not significantly different from, or significantly worse than the comparison algorithm, respectively. The parameter settings are presented in Table 5.
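For reference, the sketch below shows how such comparisons can be computed with SciPy on toy data; the error arrays, run counts and the rule used here to assign "+/≈/−" (comparing means when the rank sum test is significant) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# toy final errors of two algorithms over 30 independent runs on one function
mpsorl_err = rng.lognormal(mean=1.0, sigma=0.5, size=30)
other_err = rng.lognormal(mean=1.5, sigma=0.5, size=30)

# Wilcoxon rank sum test at the 0.05 level, summarized as +/≈/−
stat, p = stats.ranksums(mpsorl_err, other_err)
if p >= 0.05:
    symbol = "≈"
else:
    symbol = "+" if mpsorl_err.mean() < other_err.mean() else "−"
print(symbol, round(p, 4))

# Friedman test over three algorithms, each with per-function mean errors (toy data)
errs = [rng.lognormal(mean=m, sigma=0.3, size=29) for m in (1.0, 1.2, 1.4)]
chi2, p_friedman = stats.friedmanchisquare(*errs)
print(round(chi2, 2), round(p_friedman, 4))
```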
Algorithm | Key parameters | Year | Ref. |
LDWPSO | w:0.9−0.2, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.5*range | 1998 | [50] |
CLPSO | w:0.9−0.2, c:3.0−1.5, Vmax = 0.5*range | 2006 | [6] |
UPSO | w:0.9−0.2, c1:2.5−0.5, c2:0.5−2.5 | 2004 | [53] |
LIPS | χ:0.7298, Vmax = 0.5*range, nsize = 3 | 2013 | [52] |
HCLPSO | w:0.99−0.2, c:3.0−1.5, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.2*range | 2015 | [7] |
EPSO | w:0.9−0.4, c:3.0−1.5, w1:0.9−0.2, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.5*range | 2017 | [34] |
SDPSO | w:0.9−0.2, c:3.0−1.5, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.5*range | 2022 | [8] |
MRFO | S = 2 | 2020 | [58] |
I-GWO | a = 2−0 | 2021 | [59] |
BeSD | K = 5 | 2021 | [60] |
MPSORL | w:0.9−0.2, c:3.0−1.5, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.5*range | − | − |
In MPSORL, there are five important parameters: the size of pop1 (N1), the learning period (LP), the exploration rate (ε), the learning rate (α) and the discount factor (γ). The parameter levels of MPSORL are shown in Table 6, and the orthogonal table L16(4^5) is shown in Table 7. The algorithm corresponding to each combination of parameters is tested on the 30-dimensional problems from CEC2017. From Figure 5, the optimal parameter combination is found to be 0.4, 50, 0.8, 0.6 and 0.8.
Level | Factors | ||||
pop1 (N1) | LP | epsilon (ε) | alpha (α) | gamma (γ) | |
1 | 0.1 | 20 | 0.80 | 0.3 | 0.6 |
2 | 0.2 | 50 | 0.85 | 0.4 | 0.7 |
3 | 0.3 | 100 | 0.90 | 0.5 | 0.8 |
4 | 0.4 | 200 | 0.95 | 0.6 | 0.9 |
pop1 (N1) | LP | epsilon (ε) | alpha (α) | gamma (γ) | average rank | |
1 | 0.1 | 20 | 0.8 | 0.3 | 0.6 | 8.5 |
2 | 0.1 | 50 | 0.85 | 0.4 | 0.7 | 9.345 |
3 | 0.1 | 100 | 0.9 | 0.5 | 0.8 | 10.672 |
4 | 0.1 | 200 | 0.95 | 0.6 | 0.9 | 11.483 |
5 | 0.2 | 20 | 0.85 | 0.5 | 0.9 | 7.81 |
6 | 0.2 | 50 | 0.8 | 0.6 | 0.8 | 6.207 |
7 | 0.2 | 100 | 0.95 | 0.3 | 0.7 | 10.293 |
8 | 0.2 | 200 | 0.9 | 0.4 | 0.6 | 9.207 |
9 | 0.3 | 20 | 0.9 | 0.6 | 0.7 | 7.879 |
10 | 0.3 | 50 | 0.95 | 0.5 | 0.6 | 9.828 |
11 | 0.3 | 100 | 0.8 | 0.4 | 0.9 | 6.379 |
12 | 0.3 | 200 | 0.85 | 0.3 | 0.8 | 7.293 |
13 | 0.4 | 20 | 0.95 | 0.4 | 0.8 | 9.362 |
14 | 0.4 | 50 | 0.9 | 0.3 | 0.9 | 7.931 |
15 | 0.4 | 100 | 0.85 | 0.6 | 0.6 | 6.793 |
16 | 0.4 | 200 | 0.8 | 0.5 | 0.7 | 7.017 |
In this section, in order to verify the superiority of the multi-strategy PSO algorithm, several single-strategy PSO algorithms, namely, LDWPSO, UPSO, CLPSO and LIPS, are selected to be compared with the multi-strategy MPSORL algorithm.
Table 8 summarizes the statistical results by comparing these four single-strategy PSO algorithms with MPSORL via Wilcoxon rank sum test. As shown in Table 8, for unimodal functions (F1, F3), it can be found that the MPSORL algorithm performs best on F3. Except for F4, MPSORL ranks first for all simple multimodal functions (F4−F10). For the hybrid functions (F11−F20), the MPSORL algorithm obtains excellent results on F11, F12, F16−F18 and F20, and there is no significant difference with the first-ranked algorithms on F13 and F14 functions. For the composition functions (F21−F30), the computational results on F21−F24, F26, F29, and F30 are significantly better than those of the other algorithms, and there is no significant difference on the F25 and F28 functions compared with the comparison algorithms that achieved the best results.
IEEE CEC2017 with 30D | LDWPSO | UPSO | CLPSO | LIPS | MPSORL |
mean±std | mean±std | mean±std | mean±std | mean±std | |
F1 | 4.64E+03±5.02E+03+ | 2.03E+03±2.03E+03+ | 8.17E+00±1.07E+01− | 9.24E+02±1.60E+03+ | 1.05E+02±2.10E+02 |
F3 | 3.07E+02±4.39E+02+ | 1.46E+02±2.08E+02+ | 2.84E+04±6.40E+03+ | 2.49E+03±1.59E+03+ | 2.65E−02±7.68E−02 |
F4 | 9.26E+01±2.63E+01+ | 3.58E+01±3.85E+01≈ | 5.90E+01±2.66E+01+ | 3.59E+01±2.95E+01≈ | 4.13E+01±2.72E+01 |
F5 | 6.95E+01±1.83E+01+ | 6.99E+01±1.13E+01+ | 4.76E+01±8.81E+00+ | 1.05E+02±2.29E+01+ | 3.25E+01±1.00E+01 |
F6 | 4.27E−01±5.54E−01+ | 1.20E+00±9.13E−01+ | 4.92E−07±2.49E−07+ | 2.29E+01±8.13E+00+ | 2.35E−13±6.63E−14 |
F7 | 1.18E+02±2.17E+01+ | 9.67E+01±1.34E+01+ | 9.22E+01±1.01E+01+ | 1.13E+02±1.93E+01+ | 8.33E+01±1.15E+01 |
F8 | 6.93E+01±1.74E+01+ | 6.44E+01±1.13E+01+ | 5.33E+01±7.36E+00+ | 9.36E+01±2.08E+01+ | 3.67E+01±1.33E+01 |
F9 | 1.40E+02±1.22E+02+ | 9.18E+01±6.82E+01+ | 1.18E+02±5.19E+01+ | 1.52E+03±9.83E+02+ | 1.27E+00±1.05E+00 |
F10 | 3.21E+03±6.26E+02+ | 2.90E+03±4.36E+02+ | 2.18E+03±2.71E+02+ | 3.11E+03±4.73E+02+ | 1.86E+03±3.64E+02 |
F11 | 8.61E+01±4.17E+01+ | 5.86E+01±2.68E+01+ | 6.69E+01±2.04E+01+ | 1.04E+02±3.48E+01+ | 4.65E+01±3.30E+01 |
F12 | 3.79E+04±1.95E+04+ | 8.47E+04±8.91E+04+ | 4.02E+05±2.22E+05+ | 1.30E+05±1.33E+05+ | 2.34E+04±1.03E+04 |
F13 | 9.64E+03±1.20E+04+ | 7.86E+03±6.06E+03+ | 3.41E+02±1.26E+02≈ | 4.42E+03±5.13E+03+ | 4.90E+02±5.07E+02 |
F14 | 6.14E+03±4.66E+03+ | 3.66E+03±3.79E+03≈ | 2.89E+04±2.86E+04+ | 8.02E+03±5.99E+03+ | 3.94E+03±5.06E+03 |
F15 | 5.33E+03±5.78E+03+ | 2.47E+03±2.83E+03+ | 1.44E+02±1.03E+02− | 1.97E+03±3.02E+03+ | 2.88E+02±3.70E+02 |
F16 | 7.60E+02±2.01E+02+ | 6.57E+02±1.25E+02+ | 5.16E+02±1.21E+02+ | 7.24E+02±2.56E+02+ | 3.97E+02±1.79E+02 |
F17 | 2.26E+02±1.59E+02+ | 1.69E+02±7.73E+01+ | 1.22E+02±5.72E+01+ | 2.88E+02±1.55E+02+ | 9.27E+01±4.66E+01 |
F18 | 9.36E+04±5.84E+04≈ | 1.04E+05±4.07E+04≈ | 1.37E+05±6.93E+04+ | 8.81E+04±7.43E+04≈ | 8.72E+04±5.27E+04 |
F19 | 7.41E+03±1.02E+04+ | 2.04E+03±2.27E+03+ | 5.83E+01±4.54E+01− | 2.50E+03±3.42E+03+ | 1.55E+02±1.42E+02 |
F20 | 2.78E+02±1.26E+02+ | 2.77E+02±7.74E+01+ | 1.56E+02±7.37E+01≈ | 4.76E+02±1.29E+02+ | 1.34E+02±6.92E+01 |
F21 | 2.70E+02±1.43E+01+ | 2.62E+02±3.02E+01+ | 2.40E+02±4.08E+01+ | 2.95E+02±2.04E+01+ | 2.33E+02±9.38E+00 |
F22 | 3.90E+02±1.10E+03+ | 1.01E+02±1.73E+00+ | 1.13E+02±6.95E+00+ | 3.19E+02±8.32E+02≈ | 1.00E+02±1.08E+00 |
F23 | 4.29E+02±2.15E+01+ | 4.32E+02±2.02E+01+ | 4.00E+02±1.20E+01+ | 5.35E+02±3.83E+01+ | 3.86E+02±1.07E+01 |
F24 | 5.15E+02±3.12E+01+ | 4.87E+02±1.20E+01+ | 4.76E+02±9.15E+01+ | 5.70E+02±4.66E+01+ | 4.53E+02±7.86E+00 |
F25 | 3.96E+02±1.37E+01+ | 3.89E+02±5.82E+00+ | 3.87E+02±1.13E+00≈ | 3.87E+02±4.05E+00≈ | 3.87E+02±1.05E+00 |
F26 | 1.43E+03±7.28E+02+ | 6.40E+02±7.57E+02≈ | 4.02E+02±1.94E+02+ | 9.96E+02±1.18E+03≈ | 3.75E+02±3.19E+02 |
F27 | 5.37E+02±1.22E+01+ | 5.48E+02±1.57E+01+ | 5.10E+02±5.27E+00− | 5.84E+02±2.16E+01+ | 5.15E+02±6.39E+00 |
F28 | 4.13E+02±2.81E+01+ | 3.15E+02±3.50E+01≈ | 4.20E+02±6.12E+00+ | 3.10E+02±2.98E+01≈ | 3.36E+02±5.26E+01 |
F29 | 6.76E+02±1.76E+02+ | 7.95E+02±1.28E+02+ | 5.44E+02±6.75E+01+ | 1.03E+03±1.70E+02+ | 4.94E+02±4.09E+01 |
F30 | 5.83E+03±3.07E+03+ | 8.18E+03±6.88E+03+ | 4.82E+03±8.11E+02+ | 6.87E+03±2.12E+03+ | 4.06E+03±1.14E+03 |
+/≈/− | 28/1/0 | 24/5/0 | 22/3/4 | 23/6/0 | / |
Overall, better or comparable results are achieved on 25 of 29 functions, accounting for 86.2%. On F1, F15, F19 and F27, MPSORL is a little worse than CLPSO. Finally, in the last row of Table 8, the statistical results show that the MPSORL algorithm performs appreciably better than the other four comparison algorithms. In addition, Figure 6 illustrates the Friedman test results. The average rank of MPSORL is 1.38, ranking first, which is better than other algorithms. It implies that the proposed multiple-strategy MPSORL algorithm has remarkable advantages compared with the single-strategy PSO algorithms.
For some randomly chosen functions, the convergence process of each algorithm is plotted. From Figure 7, it can be seen that MPSORL has the highest convergence accuracy among all the PSO variants on the unimodal function F3 and the simple multimodal functions F5, F8–F10, and it also shows a clearly faster convergence rate on F9. Among the compared algorithms, LDWPSO converges quickly but easily falls into local optima. On the hybrid functions F12, F16 and F20, the convergence accuracy and rate of CLPSO are significantly worse than those of MPSORL on F12; on the other problems, its convergence trend is similar, while its accuracy is not as good as that of MPSORL. On the composition functions F21–F24 and F26, MPSORL is significantly better than CLPSO in terms of both convergence precision and convergence speed. Furthermore, the other PSO variants converge faster but perform worse in fine-tuning solutions. Therefore, the advantage of the multi-strategy algorithm over the single-strategy PSO algorithms is also verified in terms of convergence.
In this section, three PSO variants and three other metaheuristic algorithms are selected, namely, HCLPSO, EPSO, SDPSO, MRFO, I-GWO and BeSD. Concretely, HCLPSO, EPSO and SDPSO are multi-population cooperative algorithms, among which adaptive multi-strategy selection methods are adopted by EPSO and SDPSO. Next, we compare the performances of these algorithms and MPSORL on 50D and 100D, respectively.
Table 9 presents the statistical results of MPSORL and the other six comparison algorithms on 29 50D CEC2017 test functions. Among them, the MPSORL algorithm performs best on F1 for unimodal functions. For simple multimodal functions (F4−F10), MPSORL performs remarkably on F5, F6, F9 and F10. Also, there is no significant difference between MPSORL and the top-ranked algorithms on F7 and F8. In addition, for the hybrid functions (F11−F20), MPSORL performs significantly better than the other six algorithms on F13, F15, F17, and F20. On F11 and F19, there is not much difference between MPSORL and the algorithms that achieved the best results. For the composition functions (F21−F30), MPSORL obtains the top performance on F21, and there is no remarkable difference with the algorithms having the best results on F22, F23, F25, F26 and F28−F30.
IEEE CEC2017 with 50D | HCLPSO | EPSO | SDPSO | MRFO | I-GWO | BeSD | MPSORL |
mean±std | mean±std | mean±std | mean±std | mean±std | mean±std | mean±std | |
F1 | 1.65E+02±2.03E+02≈ | 2.15E+03±2.12E+03+ | 3.46E+02±3.11E+02+ | 4.94E+03±5.51E+03+ | 5.00E+03±6.84E+03+ | 8.66E+02±1.13E+03+ | 1.35E+02±1.34E+02 |
F3 | 1.55E+01±1.61E+01− | 6.38E−04±1.14E−03− | 5.11E+00±2.31E+01− | 6.98E+03±2.44E+03+ | 1.91E+04±4.05E+03+ | 1.22E+03±7.08E+02+ | 5.22E+02±5.03E+02 |
F4 | 7.85E+01±4.42E+01≈ | 4.05E+01±3.72E+01− | 5.97E+01±4.46E+01≈ | 8.48E+01±5.02E+01≈ | 1.01E+02±6.05E+01+ | 9.56E+01±4.32E+01+ | 7.11E+01±3.82E+01 |
F5 | 1.05E+02±1.88E+01+ | 1.12E+02±1.81E+01+ | 8.12E+01±2.15E+01≈ | 3.27E+02±4.73E+01+ | 3.53E+02±2.02E+01+ | 1.78E+02±1.27E+01+ | 7.83E+01±1.77E+01 |
F6 | 6.63E−13±1.43E−13+ | 2.90E−08±9.81E−08+ | 1.15E−05±1.96E−05≈ | 4.38E+01±1.12E+01+ | 3.60E−02±1.10E−01+ | 8.17E−03±4.72E−03+ | 4.55E−13±1.12E−13 |
F7 | 1.77E+02±1.47E+01≈ | 1.62E+02±2.47E+01≈ | 1.89E+02±2.52E+01+ | 7.11E+02±1.49E+02+ | 4.28E+02±2.94E+01+ | 2.72E+02±2.09E+01+ | 1.69E+02±2.47E+01 |
F8 | 1.02E+02±2.03E+01+ | 1.12E+02±1.87E+01+ | 7.77E+01±1.78E+01≈ | 3.44E+02±4.50E+01+ | 3.60E+02±1.62E+01+ | 1.81E+02±1.51E+01+ | 7.85E+01±1.58E+01 |
F9 | 2.32E+02±2.11E+02+ | 2.12E+02±1.18E+02+ | 1.08E+02±1.10E+02+ | 9.67E+03±2.08E+03+ | 5.35E+01±2.02E+02+ | 4.20E+02±9.59E+02+ | 2.62E+01±2.74E+01 |
F10 | 3.94E+03±5.03E+02≈ | 3.88E+03±4.66E+02≈ | 3.84E+03±3.84E+02≈ | 6.71E+03±1.07E+03+ | 1.22E+04±6.62E+02+ | 6.29E+03±4.18E+02+ | 3.78E+03±4.13E+02 |
F11 | 1.49E+02±3.44E+01+ | 1.65E+02±5.18E+01+ | 1.86E+02±4.90E+01+ | 1.42E+02±3.76E+01+ | 2.41E+02±2.99E+01+ | 1.07E+02±2.31E+01≈ | 1.18E+02±4.29E+01 |
F12 | 5.18E+05±2.49E+05+ | 1.42E+05±1.16E+05− | 8.07E+04±4.03E+04− | 5.70E+05±2.96E+05+ | 5.93E+06±3.15E+06+ | 1.84E+05±1.06E+05− | 3.81E+05±2.02E+05 |
F13 | 6.33E+02±2.47E+02+ | 1.50E+03±1.26E+03+ | 6.97E+02±4.59E+02≈ | 2.63E+03±3.33E+03+ | 5.36E+04±2.13E+05+ | 1.30E+03±1.42E+03+ | 5.10E+02±4.38E+02 |
F14 | 2.70E+04±1.69E+04≈ | 1.54E+04±1.17E+04− | 1.03E+04±7.06E+03− | 2.77E+04±2.05E+04≈ | 2.29E+05±9.37E+04+ | 1.11E+02±1.75E+01− | 3.64E+04±2.73E+04 |
F15 | 3.00E+02±2.31E+02+ | 5.40E+02±4.62E+02+ | 3.78E+02±4.07E+02≈ | 6.29E+03±5.46E+03+ | 3.97E+05±8.32E+05+ | 4.77E+03±2.46E+03+ | 1.99E+02±1.25E+02 |
F16 | 1.01E+03±2.61E+02≈ | 1.10E+03±2.75E+02+ | 8.91E+02±2.85E+02≈ | 1.89E+03±3.66E+02+ | 1.38E+03±6.26E+02+ | 7.69E+02±1.58E+02− | 9.49E+02±1.85E+02 |
F17 | 8.33E+02±1.48E+02+ | 8.14E+02±2.37E+02+ | 7.20E+02±1.69E+02≈ | 1.50E+03±3.63E+02+ | 1.25E+03±2.86E+02+ | 7.29E+02±1.12E+02≈ | 6.65E+02±1.77E+02 |
F18 | 1.40E+05±9.18E+04≈ | 1.03E+05±1.93E+05− | 6.48E+04±5.77E+04− | 1.39E+05±6.87E+04≈ | 2.71E+06±1.18E+06+ | 5.24E+03±2.46E+03− | 1.86E+05±1.80E+05 |
F19 | 6.16E+02±8.92E+02≈ | 3.76E+02±3.38E+02≈ | 6.33E+02±1.08E+03≈ | 1.39E+04±1.09E+04+ | 2.06E+04±1.24E+04+ | 7.03E+03±7.90E+03≈ | 7.31E+02±1.36E+03 |
F20 | 5.05E+02±2.01E+02+ | 6.17E+02±1.59E+02+ | 5.00E+02±2.19E+02+ | 1.25E+03±3.68E+02+ | 1.01E+03±3.19E+02+ | 4.80E+02±1.28E+02+ | 3.99E+02±1.64E+02 |
F21 | 3.04E+02±2.13E+01+ | 3.08E+02±2.28E+01+ | 2.72E+02±2.36E+01≈ | 4.77E+02±4.99E+01+ | 5.39E+02±1.84E+01+ | 3.48E+02±1.32E+01+ | 2.70E+02±1.12E+01 |
F22 | 3.17E+03±2.10E+03≈ | 3.96E+03±1.82E+03+ | 2.89E+03±2.18E+03≈ | 7.34E+03±1.02E+03+ | 1.18E+04±3.25E+03+ | 3.27E+03±3.46E+03≈ | 3.19E+03±2.08E+03 |
F23 | 5.46E+02±2.71E+01+ | 5.56E+02±2.92E+01+ | 5.22E+02±3.04E+01≈ | 8.30E+02±1.12E+02+ | 7.92E+02±1.84E+01+ | 5.95E+02±2.53E+01+ | 5.29E+02±2.52E+01 |
F24 | 6.35E+02±3.23E+01+ | 6.17E+02±2.89E+01+ | 5.77E+02±2.50E+01− | 8.99E+02±1.16E+02+ | 8.52E+02±1.54E+01+ | 6.45E+02±2.96E+01+ | 5.92E+02±2.36E+01 |
F25 | 5.08E+02±3.32E+01≈ | 5.15E+02±3.05E+01≈ | 5.21E+02±3.32E+01≈ | 5.59E+02±3.68E+01+ | 5.44E+02±2.80E+01+ | 5.76E+02±1.76E+01+ | 5.20E+02±3.22E+01 |
F26 | 1.42E+03±9.15E+02≈ | 1.59E+03±1.07E+03≈ | 1.70E+03±5.29E+02≈ | 3.89E+03±3.87E+03≈ | 4.48E+03±8.29E+02+ | 2.27E+03±1.62E+03+ | 1.51E+03±8.29E+02 |
F27 | 6.31E+02±2.76E+01≈ | 6.46E+02±3.54E+01≈ | 6.67E+02±3.19E+01≈ | 9.66E+02±1.74E+02+ | 6.46E+02±5.44E+01≈ | 6.21E+02±4.76E+01− | 6.48E+02±4.34E+01 |
F28 | 4.80E+02±2.21E+01≈ | 4.88E+02±2.04E+01≈ | 4.92E+02±2.25E+01≈ | 4.95E+02±2.72E+01≈ | 4.80E+02±2.30E+01≈ | 5.17E+02±2.04E+01+ | 4.93E+02±1.81E+01 |
F29 | 6.94E+02±1.96E+02≈ | 7.62E+02±1.50E+02≈ | 7.21E+02±1.59E+02≈ | 1.53E+03±3.60E+02+ | 1.39E+03±2.72E+02+ | 7.59E+02±8.20E+01≈ | 7.10E+02±1.80E+02 |
F30 | 7.02E+05±5.32E+04≈ | 7.03E+05±6.25E+04≈ | 7.43E+05±7.14E+04≈ | 1.05E+06±3.28E+05+ | 1.52E+07±1.16E+07+ | 8.25E+05±4.54E+04+ | 7.05E+05±5.83E+04 |
+/≈/− | 13/15/1 | 15/9/5 | 5/19/5 | 24/5/0 | 27/2/0 | 19/5/5 | / |
On the whole, MPSORL achieves better or comparable results on 21 of the 29 functions, accounting for about 72.4%. Furthermore, the statistical results in the last row of Table 9 also demonstrate that MPSORL achieves better results than the compared state-of-the-art algorithms. Moreover, Figure 8 shows the Friedman test results, in which the MPSORL algorithm ranks first.
Additionally, the convergence properties on some functions are depicted in Figure 9. On F1, it can be observed that the other algorithms are easily trapped in local optima, whereas MPSORL and HCLPSO show good convergence. MPSORL has high convergence accuracy, but its convergence speed is not as fast as that of SDPSO on F5. For F9 and F10, HCLPSO, EPSO, SDPSO and MPSORL perform similarly with respect to convergence; however, MPSORL keeps improving its fitness on F10. In addition, MRFO suffers from premature convergence during the optimization process. On F13 and F15, MPSORL has not only a fast convergence speed but also high convergence accuracy. Moreover, MPSORL is remarkably better than the other compared algorithms on F17 and F21 in terms of convergence precision. After 250,000 evaluations, MPSORL is still improving on F20, whereas the other algorithms show no further improvement. By comparing these results, it can be concluded that the proposed MPSORL algorithm outperforms the compared algorithms on most of these test cases.
In order to further verify the effectiveness of the proposed MPSORL in solving high-dimensional problems, Table 10 presents the comparison results between MPSORL and the other five algorithms on the 100-dimensional problems. Specifically, for the simple multimodal functions (F4−F10), MPSORL ranks first on F5 and F7–F10, accounting for about 71.4%. For the hybrid functions (F11−F20), MPSORL is significantly better than the other five comparison algorithms on F13, F16, F19 and F20, and it shows no significant difference from the best algorithms on F15 and F17. For the composition functions (F21−F30), MPSORL achieves the best results on F22, F24, F26 and F28; among the remaining six composition functions, MPSORL is not significantly different from the best-performing algorithm except on F25. This implies that the proposed algorithm achieves better or comparable results on 21 of 29 functions. Furthermore, the statistical results in the last row of Table 10 show that MPSORL outperforms the latest compared algorithms.
IEEE CEC2017 with 100D | HCLPSO | SDPSO | MRFO | I-GWO | BeSD | MPSORL |
mean±std | mean±std | mean±std | mean±std | mean±std | mean±std | |
F1 | 6.74E+03±7.22E+03≈ | 7.59E+03±8.88E+03≈ | 5.59E+03±7.18E+03≈ | 1.30E+05±5.51E+04+ | 3.34E+03±2.87E+03≈ | 7.31E+03±7.60E+03 |
F3 | 7.51E+03±3.89E+03− | 2.07E+04±1.75E+04− | 8.58E+04±1.31E+04≈ | 9.01E+04±1.11E+04≈ | 3.42E+04±5.67E+03− | 8.75E+04±1.84E+04 |
F4 | 2.16E+02±3.25E+01≈ | 1.95E+02±4.18E+01− | 2.37E+02±4.09E+01≈ | 3.06E+02±3.91E+01+ | 2.81E+02±4.11E+01+ | 2.28E+02±5.13E+01 |
F5 | 2.91E+02±5.87E+01≈ | 3.05E+02±8.14E+01≈ | 7.98E+02±6.75E+01+ | 9.15E+02±3.20E+01+ | 6.46E+02±3.17E+01+ | 2.81E+02±5.72E+01 |
F6 | 2.56E−05±8.27E−05− | 4.90E−02±3.90E−02+ | 5.13E+01±6.93E+00+ | 1.93E+00±9.54E−01+ | 4.36E−01±1.80E−01+ | 8.82E−04±1.13E−03 |
F7 | 5.79E+02±7.60E+01+ | 5.43E+02±1.46E+02+ | 2.17E+03±2.71E+02+ | 1.10E+03±6.67E+01+ | 7.33E+02±4.82E+01+ | 4.22E+02±8.11E+01 |
F8 | 2.98E+02±5.97E+01≈ | 3.21E+02±6.77E+01+ | 8.83E+02±8.99E+01+ | 9.15E+02±3.05E+01+ | 6.52E+02±3.42E+01+ | 2.88E+02±6.62E+01 |
F9 | 1.55E+03±7.37E+02+ | 4.79E+03±2.78E+03+ | 2.06E+04±1.69E+03+ | 4.15E+03±3.17E+03+ | 1.91E+04±5.45E+03+ | 5.35E+02±3.20E+02 |
F10 | 1.17E+04±1.02E+03+ | 1.20E+04±7.35E+02+ | 1.41E+04±1.21E+03+ | 2.89E+04±7.52E+02+ | 1.84E+04±6.20E+02+ | 1.10E+04±8.17E+02 |
F11 | 7.57E+02±1.78E+02≈ | 9.67E+02±1.92E+02+ | 5.66E+02±1.18E+02− | 5.74E+03±1.66E+03+ | 6.27E+02±1.35E+02− | 7.10E+02±1.19E+02 |
F12 | 7.72E+05±3.83E+05− | 3.04E+05±1.00E+05− | 1.40E+06±5.88E+05+ | 4.01E+07±1.53E+07+ | 2.26E+06±7.87E+05+ | 1.03E+06±4.16E+05 |
F13 | 1.40E+03±8.97E+02+ | 1.64E+03±9.03E+02+ | 6.05E+03±8.30E+03+ | 7.10E+03±7.79E+03+ | 3.27E+03±1.64E+03+ | 9.36E+02±4.44E+02 |
F14 | 1.11E+05±3.95E+04≈ | 5.95E+04±2.98E+04− | 1.33E+05±8.46E+04≈ | 3.85E+06±1.83E+06+ | 4.78E+03±3.89E+03− | 1.43E+05±6.90E+04 |
F15 | 6.80E+02±2.92E+02+ | 5.05E+02±1.33E+02≈ | 2.34E+03±2.69E+03+ | 3.05E+03±3.30E+03+ | 5.44E+02±3.77E+02≈ | 5.34E+02±1.39E+02 |
F16 | 3.20E+03±3.53E+02+ | 2.90E+03±4.68E+02≈ | 3.95E+03±6.79E+02+ | 5.73E+03±1.58E+03+ | 3.33E+03±2.88E+02+ | 2.76E+03±4.80E+02 |
F17 | 2.41E+03±3.39E+02≈ | 2.22E+03±2.87E+02≈ | 2.96E+03±5.02E+02+ | 4.80E+03±4.04E+02+ | 2.15E+03±2.11E+02≈ | 2.23E+03±2.91E+02 |
F18 | 2.09E+05±8.99E+04− | 1.17E+05±3.31E+04− | 3.29E+05±1.21E+05≈ | 1.13E+07±7.10E+06+ | 5.71E+04±1.82E+04− | 4.13E+05±1.83E+05 |
F19 | 4.43E+02±2.87E+02+ | 3.32E+02±2.18E+02≈ | 2.93E+03±3.10E+03+ | 3.16E+03±3.87E+03+ | 9.38E+02±5.61E+02+ | 2.85E+02±1.11E+02 |
F20 | 2.25E+03±2.52E+02+ | 2.08E+03±3.21E+02+ | 3.19E+03±5.90E+02+ | 4.25E+03±3.40E+02+ | 2.43E+03±2.55E+02+ | 1.87E+03±2.71E+02 |
F21 | 5.47E+02±5.29E+01≈ | 5.32E+02±6.67E+01≈ | 9.60E+02±1.28E+02+ | 1.12E+03±3.20E+01+ | 7.73E+02±2.92E+01+ | 5.41E+02±6.73E+01 |
F22 | 1.32E+04±1.15E+03+ | 1.21E+04±4.15E+03+ | 1.68E+04±1.49E+03+ | 2.98E+04±1.01E+03+ | 2.02E+04±4.18E+02+ | 1.17E+04±3.25E+03 |
F23 | 8.03E+02±2.63E+01+ | 7.83E+02±3.24E+01≈ | 1.33E+03±1.38E+02+ | 1.38E+03±3.55E+01+ | 1.06E+03±2.63E+01+ | 7.86E+02±2.07E+01 |
F24 | 1.31E+03±5.20E+01≈ | 1.30E+03±5.71E+01≈ | 2.19E+03±2.78E+02+ | 1.76E+03±3.11E+01+ | 1.32E+03±9.25E+01≈ | 1.29E+03±6.33E+01 |
F25 | 7.38E+02±6.47E+01− | 7.78E+02±6.94E+01≈ | 7.84E+02±5.27E+01≈ | 8.61E+02±4.59E+01+ | 8.62E+02±4.13E+01+ | 7.88E+02±6.23E+01 |
F26 | 7.29E+03±6.01E+02≈ | 7.72E+03±6.71E+02+ | 1.69E+04±7.42E+03+ | 1.22E+04±5.14E+02+ | 1.23E+04±2.30E+03+ | 6.28E+03±2.76E+03 |
F27 | 7.91E+02±3.21E+01≈ | 7.73E+02±3.15E+01≈ | 1.22E+03±1.85E+02+ | 9.53E+02±1.30E+02+ | 8.53E+02±5.71E+01+ | 7.77E+02±2.78E+01 |
F28 | 5.71E+02±3.64E+01≈ | 5.65E+02±2.80E+01≈ | 5.70E+02±3.28E+01≈ | 6.69E+02±3.07E+01+ | 6.69E+02±2.81E+01+ | 5.65E+02±2.94E+01 |
F29 | 2.93E+03±2.89E+02≈ | 3.01E+03±4.44E+02≈ | 3.97E+03±5.99E+02+ | 4.90E+03±4.35E+02+ | 3.49E+03±2.48E+02+ | 3.08E+03±4.84E+02 |
F30 | 5.83E+03±2.82E+03≈ | 5.73E+03±2.78E+03≈ | 7.14E+03±3.85E+03≈ | 2.43E+05±1.07E+05+ | 1.06E+04±2.71E+03+ | 5.80E+03±3.15E+03 |
+/≈/− | 10/14/5 | 10/14/5 | 20/8/1 | 28/1/0 | 21/4/4 | / |
Next, Figure 10 shows the results of the Friedman test, where the MPSORL algorithm again ranks first. Moreover, the convergence trends on some functions are plotted in Figure 11. The detailed analysis is as follows. First, MPSORL has higher accuracy than the other algorithms on F7 and F9. For F10, the solution quality is still improving in the later stages of evolution, indicating good search ability. Second, the results also show that MPSORL provides a superior convergence rate with fewer evaluations on F13, F16, F24 and F26. In addition, SDPSO and HCLPSO have fast convergence speeds but poorer exploitation on F20 and F22 compared with MPSORL. Overall, MPSORL can obtain better solutions with high accuracy and stronger local search ability. These conclusions also imply that strategy selection based on RL can improve the performance of MPSORL. Moreover, it can be seen that the performance of the MPSORL algorithm does not degrade as the dimension increases, so it is also competitive on higher-dimensional problems.
CEC2019, which has 10 functions to optimize, is a more complicated test suite than CEC2017. In this section, seven PSO algorithms (LDWPSO, UPSO, CLPSO, LIPS, HCLPSO, EPSO and SDPSO) and three latest metaheuristic algorithms (MRFO, I-GWO and BeSD) are compared with MPSORL. The results of non-parametric test are shown in Table 11. MPSORL is obviously better than LDWPSO, UPSO, CLPSO, LIPS, HCLPSO, EPSO, MRFO, I-GWO and BeSD and is tied with SDPSO. The only weakness is that the performance of MPSORL is relatively worse than that of MRFO and BeSD on F31 and F32. The "No Free Lunch" Theorem [61], which states that there is no general-purpose optimization algorithm that can solve all types of optimization problems, may explain this phenomenon. At the same time, the Friedman test reveals that the MPSORL algorithm outperforms all of the compared algorithms. The results are plotted in Figure 12, which also validates that the framework of the reinforcement learning technique to guide the strategy selection of the PSO algorithm is effective on a different test suite.
IEEE CEC2019 | LDWPSO | UPSO | CLPSO | LIPS | HCLPSO | MPSORL |
mean±std | mean±std | mean±std | mean±std | mean±std | mean±std | |
F31 | 5.22E+04±8.95E+04≈ | 5.71E+04±3.49E+04− | 8.62E+05±5.36E+05+ | 4.70E+04±4.30E+04≈ | 6.08E+04±1.20E+05≈ | 5.93E+04±6.88E+04 |
F32 | 2.61E+02±1.04E+02≈ | 3.23E+02±1.35E+02≈ | 6.25E+02±1.54E+02+ | 2.73E+02±9.38E+01≈ | 2.43E+02±8.17E+01≈ | 2.44E+02±1.03E+02 |
F33 | 3.95E−01±7.47E−02≈ | 4.80E−01±4.01E−01+ | 9.06E−01±3.06E−01+ | 6.22E−01±8.09E−01≈ | 3.95E−01±7.47E−02+ | 3.66E−01±1.33E−01 |
F34 | 8.52E+00±4.08E+00+ | 9.20E+00±2.93E+00+ | 4.27E+00±1.21E+00≈ | 1.05E+01±4.19E+00+ | 4.41E+00±1.47E+00≈ | 4.55E+00±1.77E+00 |
F35 | 1.05E−01±6.19E−02+ | 3.86E−02±1.90E−02≈ | 6.52E−02±2.62E−02+ | 1.00E−02±8.04E−03− | 5.13E−02±2.18E−02≈ | 3.33E−02±2.24E−02 |
F36 | 3.15E−01±5.71E−01+ | 4.36E−01±5.00E−01+ | 1.59E−01±7.46E−02+ | 4.76E−01±5.95E−01+ | 1.99E−03±2.78E−03+ | 3.89E−04±1.69E−03 |
F37 | 4.41E+02±2.38E+02+ | 4.66E+02±1.71E+02+ | 1.47E+02±7.73E+01≈ | 7.55E+02±2.49E+02+ | 2.49E+02±1.11E+02+ | 1.61E+02±1.30E+02 |
F38 | 1.99E+00±4.61E−01+ | 2.21E+00±4.28E−01+ | 2.03E+00±2.62E−01+ | 3.04E+00±4.87E−01+ | 1.65E+00±3.53E−01+ | 1.31E+00±6.48E−01 |
F39 | 1.32E−01±6.08E−02≈ | 6.99E−02±3.05E−02− | 1.75E−01±3.92E−02+ | 1.23E−01±4.73E−02≈ | 1.01E−01±3.46E−02− | 1.30E−01±3.08E−02 |
F40 | 1.42E+01±9.09E+00≈ | 1.46E+01±8.32E+00≈ | 1.88E+01±3.91E+00+ | 1.93E+01±3.65E+00≈ | 1.54E+01±8.57E+00≈ | 1.21E+01±9.91E+00 |
+/≈/− | 5/5/0 | 5/3/2 | 8/2/0 | 4/5/1 | 4/5/1 | / |
IEEE CEC2019 | EPSO | SDPSO | MRFO | I-GWO | BeSD | MPSORL |
mean±std | mean±std | mean±std | mean±std | mean±std | mean±std | |
F31 | 3.20E+04±2.87E+04≈ | 4.42E+04±5.14E+04≈ | 0.00E+00±0.00E+00− | 7.24E+04±1.35E+05≈ | 0.00E+00±0.00E+00− | 5.93E+04±6.88E+04 |
F32 | 2.30E+02±8.66E+01≈ | 2.58E+02±1.05E+02≈ | 3.70E+00±3.69E−01− | 1.06E+03±3.38E+02+ | 4.84E+00±1.97E+00− | 2.44E+02±1.03E+02 |
F33 | 3.41E−01±1.55E−01≈ | 3.55E−01±1.40E−01≈ | 3.95E−01±7.47E−02≈ | 1.95E+00±1.18E+00+ | 9.35E−01±2.89E−01+ | 3.66E−01±1.33E−01 |
F34 | 5.82E+00±1.98E+00+ | 4.48E+00±1.50E+00≈ | 2.50E+01±1.55E+01+ | 1.98E+01±4.20E+00+ | 8.50E+00±1.96E+00+ | 4.55E+00±1.77E+00 |
F35 | 8.57E−02±4.08E−02+ | 4.81E−02±3.37E−02≈ | 1.11E−01±1.07E−01+ | 5.26E−01±9.12E−02+ | 5.04E−02±1.27E−02≈ | 3.33E−02±2.24E−02 |
F36 | 2.46E−01±2.42E−01+ | 2.72E−02±1.21E−01≈ | 2.13E+00±1.70E+00+ | 1.93E−01±3.60E−01+ | 1.34E−01±1.17E−01+ | 3.89E−04±1.69E−03 |
F37 | 2.40E+02±1.47E+02≈ | 1.80E+02±1.04E+02≈ | 7.29E+02±3.16E+02+ | 7.33E+02±2.42E+02+ | 4.46E+02±1.29E+02+ | 1.61E+02±1.30E+02 |
F38 | 1.65E+00±4.69E−01+ | 1.19E+00±5.05E−01≈ | 2.45E+00±3.70E−01+ | 2.06E+00±2.53E−01+ | 2.50E+00±1.64E−01+ | 1.31E+00±6.48E−01 |
F39 | 1.27E−01±3.64E−02≈ | 1.32E−01±2.78E−02≈ | 2.05E−01±7.63E−02+ | 1.84E−01±4.15E−02+ | 1.11E−01±1.76E−02− | 1.30E−01±3.08E−02 |
F40 | 1.24E+01±9.65E+00+ | 1.22E+01±9.85E+00≈ | 1.50E+01±9.02E+00+ | 1.50E+01±9.03E+00+ | 1.76E+01±4.33E+00+ | 1.21E+01±9.91E+00 |
+/≈/− | 5/5/0 | 0/10/0 | 7/1/2 | 9/1/0 | 6/1/3 | / |
The application considered here is spread spectrum radar polyphase code design, a nonlinear, non-convex min-max problem over continuous variables with numerous local optima [62]. The mathematical model is expressed as follows:
\min_{x\in X} f(x)=\max\{\phi_1(x),\phi_2(x),\cdots,\phi_{2m}(x)\} | (4.1)
where
X=\{(x_1,\cdots,x_n)\in\mathbb{R}^n \mid 0\le x_j\le 2\pi,\ j=1,2,\cdots,n\},\quad m=2n-1 | (4.2)
\phi_{2i-1}(x)=\sum_{j=i}^{n}\cos\left(\sum_{k=|2i-j-1|+1}^{j}x_k\right),\quad i=1,2,\cdots,n | (4.3)
\phi_{2i}(x)=0.5+\sum_{j=i+1}^{n}\cos\left(\sum_{k=|2i-j|+1}^{j}x_k\right),\quad i=1,2,\cdots,n-1 | (4.4)
\phi_{m+i}(x)=-\phi_i(x),\quad i=1,2,\cdots,m. | (4.5)
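To make the model concrete, the sketch below evaluates f(x) of Eqs (4.1)–(4.5) for one candidate phase vector. It is a direct transcription of the formulas, not the authors' code; the random test point and function name are assumptions for illustration.

```python
import numpy as np

def polyphase_objective(x):
    """f(x) of Eqs (4.1)-(4.5): the largest phi value for a phase vector x (radians)."""
    n = len(x)
    m = 2 * n - 1
    phi = np.empty(2 * m)
    for i in range(1, n + 1):            # Eq (4.3): phi_{2i-1}, i = 1..n
        phi[2 * i - 2] = sum(np.cos(np.sum(x[abs(2 * i - j - 1): j]))
                             for j in range(i, n + 1))
    for i in range(1, n):                # Eq (4.4): phi_{2i}, i = 1..n-1
        phi[2 * i - 1] = 0.5 + sum(np.cos(np.sum(x[abs(2 * i - j): j]))
                                   for j in range(i + 1, n + 1))
    phi[m:] = -phi[:m]                   # Eq (4.5): phi_{m+i} = -phi_i
    return phi.max()                     # Eq (4.1)

# Feasible set X of Eq (4.2): each phase lies in [0, 2*pi]; D = 20 as in the experiment.
x = np.random.default_rng(1).uniform(0.0, 2 * np.pi, 20)
print(polyphase_objective(x))
```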
The parameters are set as follows: the dimension is 20, Max_Fes is set to 100,000, and the population size is 30. The boxplot of the results of 30 independent runs is shown in Figure 13. The average optimal value obtained by MPSORL is the smallest, indicating that the proposed algorithm also has an advantage on this practical application problem.
A multi-strategy self-learning method is constructed using Q-learning, and a novel multi-strategy selection particle swarm optimizer is designed on top of it. To show the behavior of MPSORL more intuitively, the design choices and convergence performance of the proposed algorithm are further discussed from three aspects: 1) state division, 2) convergence comparison and 3) diversity analysis.
The state division is derived from the particles' fitness values. This section examines the effect of state partitioning by comparing uniform and non-uniform partitions: the uniform partition is set to S1 = [20, 40, 60, 80], and the non-uniform partition is set to S2 = [10, 25, 45, 70].
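As an illustration only: assuming, in line with the student-achievement analogy behind the state design, that these cut points are percentages of the population's fitness ranking, the sketch below assigns each particle one of five states under either division. This reading of S1/S2 and all names here are assumptions, not the paper's implementation.

```python
import numpy as np

def assign_states(fitness, cuts):
    """Map each particle to a state 1..len(cuts)+1 by its fitness-rank percentile.

    `cuts` are cumulative percentages; better-ranked (smaller-fitness) particles
    fall into lower-numbered states. This interpretation of S1/S2 is an assumption.
    """
    ranks = np.argsort(np.argsort(fitness))              # rank 0 = best under minimization
    percentile = 100.0 * (ranks + 1) / len(fitness)
    return np.digitize(percentile, cuts, right=True) + 1

fitness = np.random.default_rng(2).random(40)            # dummy fitness of 40 particles
S1 = [20, 40, 60, 80]                                    # uniform division
S2 = [10, 25, 45, 70]                                    # non-uniform division
print("S1 state sizes:", np.bincount(assign_states(fitness, S1))[1:])   # 8 per state
print("S2 state sizes:", np.bincount(assign_states(fitness, S2))[1:])   # 4, 6, 8, 10, 12
```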
The other parameters are the same as in Section 4.1. MPSORL is run 30 times independently on CEC2017 with 30, 50 and 100 dimensions, respectively. As can be seen from Table 12, state partitioning has little effect on the 30-dimensional problems; nevertheless, MPSORL with non-uniform state division yields higher R+ than R− values in all cases. In 50 and 100 dimensions, non-uniform state division is clearly better than uniform division, and its advantage grows with the dimension. We can therefore conclude that the state division has a direct effect on the algorithm's performance as the number of decision variables increases, and the proposed algorithm accordingly adopts the non-uniform state division S2.
MPSORL | Non-uniform vs. uniform state division | |||||
Wins (+) | Losses (−) | Ties | R+ | R− | p-value | |
30D | 14 | 15 | 0 | 220 | 215 | 0.957 |
50D | 19 | 10 | 0 | 278 | 157 | 0.191 |
100D | 20 | 9 | 0 | 297 | 138 | 0.086 |
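The R+ and R− columns of Table 12 follow the usual Wilcoxon signed-rank convention (sums of the ranks of positive and negative paired differences). The sketch below, on dummy per-function errors, shows how such a row could be reproduced; it ignores tie handling and is not the authors' analysis script.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
uniform_err = rng.lognormal(size=29)                       # dummy per-function errors (S1)
nonuniform_err = uniform_err * rng.normal(0.9, 0.1, 29)    # S2, slightly better on average

diff = uniform_err - nonuniform_err                        # > 0 means S2 wins the function
ranks = stats.rankdata(np.abs(diff))                       # ranks of |differences|
r_plus, r_minus = ranks[diff > 0].sum(), ranks[diff < 0].sum()
_, p = stats.wilcoxon(uniform_err, nonuniform_err)
print(f"wins={np.sum(diff > 0)}, losses={np.sum(diff < 0)}, "
      f"R+={r_plus:.0f}, R-={r_minus:.0f}, p={p:.3f}")
```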
To compare the convergence behavior of multi-strategy PSO algorithms, Table 13 lists several common benchmark functions. These functions are used to compare the convergence speed of three multi-strategy PSO algorithms: EPSO, SDPSO and MPSORL. EPSO selects a strategy based on its recent success rate, SDPSO selects strategies by computing their payoffs, whereas MPSORL chooses strategies from the Q-table. The parameters are set as follows: the number of runs is 20, the population size is 40, and Max_Iter is 500. On these different types of functions, MPSORL converges faster than the other two multi-strategy PSO algorithms, as the convergence trajectories in Figure 14(b), (d), (f), (h) show. Together with the experiments in Section 4, these results indicate that MPSORL outperforms other state-of-the-art algorithms in accuracy, convergence speed and non-parametric statistical significance, and that the RL-guided MPSORL is robust and competitive.
Function type | No. | Function | F(x∗) | D | search range |
Bowl-Shaped | F41 | Sphere Function | 0 | 2 | [-100 100] |
Many Local Minima | F42 | Rastrigin Function | 0 | 2 | [-5.12 5.12] |
Many Local Minima | F43 | Ackley Function | 0 | 2 | [-32.768, 32.768] |
Plate-Shaped | F44 | Matyas Function | 0 | 2 | [-10 10] |
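To make the distinction between these selection mechanisms concrete, the sketch below shows a Q-table-based, ε-greedy strategy choice and a one-step Q-learning update of the generic form used in Algorithm 1. The strategy names are placeholders, ε is treated as the exploitation probability in line with the large tuned values (0.8–0.95), and the update shown is the standard rule that Eq (2.5) is assumed to denote.

```python
import numpy as np

rng = np.random.default_rng(3)
strategies = ["CLPSO-like", "gbest-guided", "lbest-guided", "Levy-flight"]  # placeholder pool
Q = np.zeros((5, len(strategies)))       # 5 states x 4 actions, matching the Q-table layout

def select_action(state, epsilon=0.9):
    """epsilon-greedy choice: exploit the best-known strategy with probability epsilon."""
    if rng.random() < epsilon and Q[state].any():
        return int(np.argmax(Q[state]))
    return int(rng.integers(len(strategies)))

def update_q(state, action, reward, next_state, alpha=0.4, gamma=0.8):
    """One-step Q-learning update (the standard rule assumed to be Eq (2.5))."""
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])

# One learning period: a particle in state 2 tries a strategy and is rewarded for improving.
a = select_action(state=2)
update_q(state=2, action=a, reward=1.0, next_state=1)
print(strategies[a], Q[2])
```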
To further examine the exploration and exploitation capabilities of MPSORL, its population diversity is compared with that of several multi-swarm PSO variants (HCLPSO, EPSO and SDPSO). The population diversity metric is defined as follows [63]:
\mathrm{Diversity}=\frac{1}{N}\sum_{i=1}^{N}\sqrt{\sum_{j=1}^{D}\left(x_{ij}-\bar{x}_j\right)^2} | (5.1)
\bar{x}_j=\frac{1}{N}\sum_{i=1}^{N}x_{ij} | (5.2)
where x_{ij} denotes the jth dimension of the ith particle, and \bar{x}_j is the jth dimension of the mean position of the population. The population size is 40, and Max_Iter is 500. Figure 15(a)–(d) illustrates the diversity curves on the four functions listed in Section 5.2. On all of them, MPSORL has the greatest diversity in the early stage while still converging appreciably in the later stage, when it focuses on fine search. Conversely, HCLPSO has the lowest diversity among the compared PSO variants. EPSO shows inferior diversity on the Sphere, Rastrigin and Matyas functions and weaker exploitation on all four functions after 200 iterations. SDPSO maintains stronger diversity in most cases but suffers from poor local search on the Rastrigin and Matyas functions. To summarize, owing to the RL technique and the effective multi-strategy self-learning mechanism, MPSORL achieves a good trade-off between exploration and exploitation compared with the other multi-swarm PSO variants.
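For reference, Eqs (5.1)–(5.2) can be computed directly from the swarm's position matrix; the minimal sketch below assumes positions are stored as an N x D array and uses arbitrary values for N, D and the search range.

```python
import numpy as np

def swarm_diversity(positions):
    """Mean Euclidean distance of the particles to the swarm's mean position, Eqs (5.1)-(5.2)."""
    mean_pos = positions.mean(axis=0)                              # \bar{x}_j, Eq (5.2)
    return np.linalg.norm(positions - mean_pos, axis=1).mean()     # Eq (5.1)

positions = np.random.default_rng(4).uniform(-100, 100, size=(40, 30))   # N = 40, D = 30
print(swarm_diversity(positions))
```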
This paper combines the RL technique with the PSO algorithm and proposes an adaptive multi-strategy self-learning framework based on Q-learning. During the evolution, each particle is matched with a strategy selected from the Q-table, which is organized by states and actions and updated with rewards. Drawing an analogy with the distribution of student achievement, a non-uniform state division method is designed to better accommodate changes in the number of decision variables. Extensive experimental results show that the multi-strategy self-learning mechanism guided by RL is flexible and effectively improves the performance of the proposed algorithm.
The multi-strategy learning framework can also be applied to other intelligent algorithms for global optimization, such as differential evolution and genetic algorithms. MPSORL achieves better performance in searching for optimal solutions than several state-of-the-art algorithms. However, as with most swarm intelligence approaches, the proposed algorithm is time-consuming on large-scale complex optimization problems, and this work offers only initial insights into RL-guided PSO. In future research, developing more efficient strategies for the strategy pool and refining the self-adaptive selection mechanism can be expected to further improve the efficiency of such algorithms.
This work was supported by the National Natural Science Foundation of China (No. 61966030) and the Key Laboratory of Industrial Internet of Things and Network Control, Ministry of Education (2022-ZJ-Y21).
The authors declare no conflicts of interest.
Algorithm 1 Q-learning algorithm |
1: Initialize Q(s, a), α, γ, r, ε 2: Repeat : 3: Choose the best action at for the current state st using policy (e.g., ε-greedy strategy); 4: Execute action at; 5: Get the immediate reward rt+1; 6: Get the maximum Q-value for the next state st+1; 7: Update the Q-table by Eq (2.5); 8: Update the current state st←st+1; 9: Until the condition is terminal |
Q-learning | PSO |
Agents | Particles |
Environment | Optimization problems |
State | Fitness value |
Action | Strategy |
State | Action | |||
a1 | a2 | a3 | a4 | |
s1 | Q(s1, a1) | Q(s1, a2) | Q(s1, a3) | Q(s1, a4) |
s2 | Q(s2, a1) | Q(s2, a2) | Q(s2, a3) | Q(s2, a4) |
s3 | Q(s3, a1) | Q(s3, a2) | Q(s3, a3) | Q(s3, a4) |
s4 | Q(s4, a1) | Q(s4, a2) | Q(s4, a3) | Q(s4, a4) |
s5 | Q(s5, a1) | Q(s5, a2) | Q(s5, a3) | Q(s5, a4) |
Algorithm 2 The proposed algorithm |
Input:
Initialize N, D, the maximum number of fitness evaluations (Max_Fes); Initialize the maximum number of iterations (Max_Iter), k=0, kk=0, fit=0; Initialize the inertia weight w, learning factors c1, c2, position, velocity; Evaluate the fitness value, fit=fit+N, record pbest and gbest; Initialize Q-table, state, reward, learning period LP, ε, α, γ. Output: Best solution. 1: while termination condition is not met do 2: k=k+1; 3: /* pop1 */ 4: for i=1:length(pop1) do 5: Update the particle's velocity and position by CLPSO; 6: Evaluate the fitness of the particle, fit=fit+1; 7: Update pbesti and gbest; 8: end for 9: /* pop2 */ 10: Divide state for pop2 according to Section 3.2.1; 11: Select action based on Q-table using ε-greedy strategy according to Eq (3.1); 12: for i=length(pop1)+1:N do 13: Update the particle's velocity by action ai; 14: Update the particle's position; 15: Evaluate the fitness of the particle, fit=fit+1; 16: Update pbesti and gbest; 17: end for 18: /* update Q-table */ 19: if k<Max_Iter then 20: if mod(k, LP) = 0 then 21: Determine next state; 22: Select action using ε-greedy strategy; 23: Calculate reward according to Eq (3.2); 24: Update Q-table using Eq (2.5); 25: end if 26: else 27: if mod(kk, LP) = 0 then 28: Determine next state; 29: Calculate reward according to Eq (3.2); 30: Update Q-table using Eq (2.5); 31: end if 32: kk=kk+1; 33: end if 34: end while |
Function type | No. | Test Functions | F(x∗) | D |
Unimodal Functions | F1 | Shifted and Rotated Bent Cigar Function | 100 | 30/50/100 |
F2 | Shifted and Rotated Sum of Different Power Function* | 200 | 30/50/100 | |
Simple Multimodal Functions | F3 | Shifted and Rotated Zakharov Function | 300 | 30/50/100 |
F4 | Shifted and Rotated Rosenbrock Function | 400 | 30/50/100 | |
F5 | Shifted and Rotated Rastrigin Function | 500 | 30/50/100 | |
F6 | Shifted and Rotated Expanded Schaffer's F6 Function | 600 | 30/50/100 | |
F7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700 | 30/50/100 | |
F8 | Shifted and Rotated Non-Continuous Rastrigin Function | 800 | 30/50/100 | |
F9 | Shifted and Rotated Levy Function | 900 | 30/50/100 | |
Hybrid Functions | F10 | Shifted and Rotated Schwefel Function | 1000 | 30/50/100 |
F11 | Hybrid Function 1 (N = 3) | 1100 | 30/50/100 | |
F12 | Hybrid Function 2 (N = 3) | 1200 | 30/50/100 | |
F13 | Hybrid Function 3 (N = 3) | 1300 | 30/50/100 | |
F14 | Hybrid Function 4 (N = 4) | 1400 | 30/50/100 | |
F15 | Hybrid Function 5 (N = 4) | 1500 | 30/50/100 | |
F16 | Hybrid Function 6 (N = 4) | 1600 | 30/50/100 | |
F17 | Hybrid Function 6 (N = 5) | 1700 | 30/50/100 | |
F18 | Hybrid Function 6 (N = 5) | 1800 | 30/50/100 | |
F19 | Hybrid Function 6 (N = 5) | 1900 | 30/50/100 | |
Composition Functions | F20 | Hybrid Function 6 (N = 6) | 2000 | 30/50/100 |
F21 | Composition Function 1 (N = 3) | 2100 | 30/50/100 | |
F22 | Composition Function 2 (N=3) | 2200 | 30/50/100 | |
F23 | Composition Function 3 (N = 4) | 2300 | 30/50/100 | |
F24 | Composition Function 4 (N = 4) | 2400 | 30/50/100 | |
F25 | Composition Function 5 (N = 5) | 2500 | 30/50/100 | |
F26 | Composition Function 6 (N = 5) | 2600 | 30/50/100 | |
F27 | Composition Function 7 (N = 6) | 2700 | 30/50/100 | |
F28 | Composition Function 8 (N = 6) | 2800 | 30/50/100 | |
F29 | Composition Function 9 (N = 3) | 2900 | 30/50/100 | |
F30 | Composition Function 10 (N = 3) | 3000 | 30/50/100 | |
Note: *F2 has been excluded because it shows unstable behavior especially for higher dimensions. |
No. | Functions | F(x∗) | D | Search range |
F31 | Storn's Chebyshev Polynomial Fitting Problem | 1 | 9 | [-8192, 8192] |
F32 | Inverse Hilbert Matrix Problem | 1 | 16 | [-16384, 16384] |
F33 | Lennard-Jones Minimum Energy Cluster | 1 | 18 | [-4, 4] |
F34 | Rastrigin Function | 1 | 10 | [-100,100] |
F35 | Griewank's Function | 1 | 10 | [-100,100] |
F36 | Weierstrass Function | 1 | 10 | [-100,100] |
F37 | Modified Schwefel Function | 1 | 10 | [-100,100] |
F38 | Expanded Schaffer's F6 Function | 1 | 10 | [-100,100] |
F39 | Happy Cat Function | 1 | 10 | [-100,100] |
F40 | Ackley Function | 1 | 10 | [-100,100] |
Algorithm | Key parameters | Year | Ref. |
LDWPSO | w:0.9−0.2, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.5*range | 1998 | [50] |
CLPSO | w:0.9−0.2, c:3.0−1.5, Vmax = 0.5*range | 2006 | [6] |
UPSO | w:0.9−0.2, c1:2.5−0.5, c2:0.5−2.5 | 2004 | [53] |
LIPS | χ:0.7298, Vmax = 0.5*range, nsize = 3 | 2013 | [52] |
HCLPSO | w:0.99−0.2, c:3.0−1.5, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.2*range | 2015 | [7] |
EPSO | w:0.9−0.4, c:3.0−1.5, w1:0.9−0.2, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.5*range | 2017 | [34] |
SDPSO | w:0.9−0.2, c:3.0−1.5, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.5*range | 2022 | [8] |
MRFO | S = 2 | 2020 | [58] |
I-GWO | a = 2−0 | 2021 | [59] |
BeSD | K = 5 | 2021 | [60] |
MPSORL | w:0.9−0.2, c:3.0−1.5, c1:2.5−0.5, c2:0.5−2.5, Vmax = 0.5*range | − | − |
Level | Factors | ||||
pop1 (N1) | LP | epsilon (ε) | alpha (α) | gamma (γ) | |
1 | 0.1 | 20 | 0.80 | 0.3 | 0.6 |
2 | 0.2 | 50 | 0.85 | 0.4 | 0.7 |
3 | 0.3 | 100 | 0.90 | 0.5 | 0.8 |
4 | 0.4 | 200 | 0.95 | 0.6 | 0.9 |
pop1 (N1) | LP | epsilon (ε) | alpha (α) | gamma (γ) | average rank | |
1 | 0.1 | 20 | 0.8 | 0.3 | 0.6 | 8.5 |
2 | 0.1 | 50 | 0.85 | 0.4 | 0.7 | 9.345 |
3 | 0.1 | 100 | 0.9 | 0.5 | 0.8 | 10.672 |
4 | 0.1 | 200 | 0.95 | 0.6 | 0.9 | 11.483 |
5 | 0.2 | 20 | 0.85 | 0.5 | 0.9 | 7.81 |
6 | 0.2 | 50 | 0.8 | 0.6 | 0.8 | 6.207 |
7 | 0.2 | 100 | 0.95 | 0.3 | 0.7 | 10.293 |
8 | 0.2 | 200 | 0.9 | 0.4 | 0.6 | 9.207 |
9 | 0.3 | 20 | 0.9 | 0.6 | 0.7 | 7.879 |
10 | 0.3 | 50 | 0.95 | 0.5 | 0.6 | 9.828 |
11 | 0.3 | 100 | 0.8 | 0.4 | 0.9 | 6.379 |
12 | 0.3 | 200 | 0.85 | 0.3 | 0.8 | 7.293 |
13 | 0.4 | 20 | 0.95 | 0.4 | 0.8 | 9.362 |
14 | 0.4 | 50 | 0.9 | 0.3 | 0.9 | 7.931 |
15 | 0.4 | 100 | 0.85 | 0.6 | 0.6 | 6.793 |
16 | 0.4 | 200 | 0.8 | 0.5 | 0.7 | 7.017 |
IEEE CEC2017 with 30D | LDWPSO | UPSO | CLPSO | LIPS | MPSORL |
mean±std | mean±std | mean±std | mean±std | mean±std | |
F1 | 4.64E+03±5.02E+03+ | 2.03E+03±2.03E+03+ | 8.17E+00±1.07E+01− | 9.24E+02±1.60E+03+ | 1.05E+02±2.10E+02 |
F3 | 3.07E+02±4.39E+02+ | 1.46E+02±2.08E+02+ | 2.84E+04±6.40E+03+ | 2.49E+03±1.59E+03+ | 2.65E−02±7.68E−02 |
F4 | 9.26E+01±2.63E+01+ | 3.58E+01±3.85E+01≈ | 5.90E+01±2.66E+01+ | 3.59E+01±2.95E+01≈ | 4.13E+01±2.72E+01 |
F5 | 6.95E+01±1.83E+01+ | 6.99E+01±1.13E+01+ | 4.76E+01±8.81E+00+ | 1.05E+02±2.29E+01+ | 3.25E+01±1.00E+01 |
F6 | 4.27E−01±5.54E−01+ | 1.20E+00±9.13E−01+ | 4.92E−07±2.49E−07+ | 2.29E+01±8.13E+00+ | 2.35E−13±6.63E−14 |
F7 | 1.18E+02±2.17E+01+ | 9.67E+01±1.34E+01+ | 9.22E+01±1.01E+01+ | 1.13E+02±1.93E+01+ | 8.33E+01±1.15E+01 |
F8 | 6.93E+01±1.74E+01+ | 6.44E+01±1.13E+01+ | 5.33E+01±7.36E+00+ | 9.36E+01±2.08E+01+ | 3.67E+01±1.33E+01 |
F9 | 1.40E+02±1.22E+02+ | 9.18E+01±6.82E+01+ | 1.18E+02±5.19E+01+ | 1.52E+03±9.83E+02+ | 1.27E+00±1.05E+00 |
F10 | 3.21E+03±6.26E+02+ | 2.90E+03±4.36E+02+ | 2.18E+03±2.71E+02+ | 3.11E+03±4.73E+02+ | 1.86E+03±3.64E+02 |
F11 | 8.61E+01±4.17E+01+ | 5.86E+01±2.68E+01+ | 6.69E+01±2.04E+01+ | 1.04E+02±3.48E+01+ | 4.65E+01±3.30E+01 |
F12 | 3.79E+04±1.95E+04+ | 8.47E+04±8.91E+04+ | 4.02E+05±2.22E+05+ | 1.30E+05±1.33E+05+ | 2.34E+04±1.03E+04 |
F13 | 9.64E+03±1.20E+04+ | 7.86E+03±6.06E+03+ | 3.41E+02±1.26E+02≈ | 4.42E+03±5.13E+03+ | 4.90E+02±5.07E+02 |
F14 | 6.14E+03±4.66E+03+ | 3.66E+03±3.79E+03≈ | 2.89E+04±2.86E+04+ | 8.02E+03±5.99E+03+ | 3.94E+03±5.06E+03 |
F15 | 5.33E+03±5.78E+03+ | 2.47E+03±2.83E+03+ | 1.44E+02±1.03E+02− | 1.97E+03±3.02E+03+ | 2.88E+02±3.70E+02 |
F16 | 7.60E+02±2.01E+02+ | 6.57E+02±1.25E+02+ | 5.16E+02±1.21E+02+ | 7.24E+02±2.56E+02+ | 3.97E+02±1.79E+02 |
F17 | 2.26E+02±1.59E+02+ | 1.69E+02±7.73E+01+ | 1.22E+02±5.72E+01+ | 2.88E+02±1.55E+02+ | 9.27E+01±4.66E+01 |
F18 | 9.36E+04±5.84E+04≈ | 1.04E+05±4.07E+04≈ | 1.37E+05±6.93E+04+ | 8.81E+04±7.43E+04≈ | 8.72E+04±5.27E+04 |
F19 | 7.41E+03±1.02E+04+ | 2.04E+03±2.27E+03+ | 5.83E+01±4.54E+01− | 2.50E+03±3.42E+03+ | 1.55E+02±1.42E+02 |
F20 | 2.78E+02±1.26E+02+ | 2.77E+02±7.74E+01+ | 1.56E+02±7.37E+01≈ | 4.76E+02±1.29E+02+ | 1.34E+02±6.92E+01 |
F21 | 2.70E+02±1.43E+01+ | 2.62E+02±3.02E+01+ | 2.40E+02±4.08E+01+ | 2.95E+02±2.04E+01+ | 2.33E+02±9.38E+00 |
F22 | 3.90E+02±1.10E+03+ | 1.01E+02±1.73E+00+ | 1.13E+02±6.95E+00+ | 3.19E+02±8.32E+02≈ | 1.00E+02±1.08E+00 |
F23 | 4.29E+02±2.15E+01+ | 4.32E+02±2.02E+01+ | 4.00E+02±1.20E+01+ | 5.35E+02±3.83E+01+ | 3.86E+02±1.07E+01 |
F24 | 5.15E+02±3.12E+01+ | 4.87E+02±1.20E+01+ | 4.76E+02±9.15E+01+ | 5.70E+02±4.66E+01+ | 4.53E+02±7.86E+00 |
F25 | 3.96E+02±1.37E+01+ | 3.89E+02±5.82E+00+ | 3.87E+02±1.13E+00≈ | 3.87E+02±4.05E+00≈ | 3.87E+02±1.05E+00 |
F26 | 1.43E+03±7.28E+02+ | 6.40E+02±7.57E+02≈ | 4.02E+02±1.94E+02+ | 9.96E+02±1.18E+03≈ | 3.75E+02±3.19E+02 |
F27 | 5.37E+02±1.22E+01+ | 5.48E+02±1.57E+01+ | 5.10E+02±5.27E+00− | 5.84E+02±2.16E+01+ | 5.15E+02±6.39E+00 |
F28 | 4.13E+02±2.81E+01+ | 3.15E+02±3.50E+01≈ | 4.20E+02±6.12E+00+ | 3.10E+02±2.98E+01≈ | 3.36E+02±5.26E+01 |
F29 | 6.76E+02±1.76E+02+ | 7.95E+02±1.28E+02+ | 5.44E+02±6.75E+01+ | 1.03E+03±1.70E+02+ | 4.94E+02±4.09E+01 |
F30 | 5.83E+03±3.07E+03+ | 8.18E+03±6.88E+03+ | 4.82E+03±8.11E+02+ | 6.87E+03±2.12E+03+ | 4.06E+03±1.14E+03 |
+/≈/− | 28/1/0 | 24/5/0 | 22/3/4 | 23/6/0 | / |
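The +/≈/− row counts, per algorithm, the functions on which MPSORL is significantly better, statistically indistinguishable, or significantly worse. A per-function two-sample test such as the Wilcoxon rank-sum test at the 0.05 level is a common way to produce these marks; the sketch below follows that convention, which is an assumption since this excerpt does not name the test used.

```python
import numpy as np
from scipy.stats import ranksums

def mark(runs_other, runs_mpsorl, alpha=0.05):
    """Return '+', '-' or '≈' for one function: '+' means MPSORL achieves a
    significantly lower error than the compared algorithm."""
    _, p = ranksums(runs_other, runs_mpsorl)
    if p >= alpha:
        return '≈'
    return '+' if np.mean(runs_other) > np.mean(runs_mpsorl) else '-'

# Placeholder run errors (e.g., 30 independent runs of one function) for illustration.
rng = np.random.default_rng(2)
print(mark(rng.normal(5.0, 1.0, 30), rng.normal(4.0, 1.0, 30)))
```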
Function (IEEE CEC2017, 50D) | HCLPSO (mean±std) | EPSO (mean±std) | SDPSO (mean±std) | MRFO (mean±std) | I-GWO (mean±std) | BeSD (mean±std) | MPSORL (mean±std)
F1 | 1.65E+02±2.03E+02≈ | 2.15E+03±2.12E+03+ | 3.46E+02±3.11E+02+ | 4.94E+03±5.51E+03+ | 5.00E+03±6.84E+03+ | 8.66E+02±1.13E+03+ | 1.35E+02±1.34E+02 |
F3 | 1.55E+01±1.61E+01− | 6.38E−04±1.14E−03− | 5.11E+00±2.31E+01− | 6.98E+03±2.44E+03+ | 1.91E+04±4.05E+03+ | 1.22E+03±7.08E+02+ | 5.22E+02±5.03E+02 |
F4 | 7.85E+01±4.42E+01≈ | 4.05E+01±3.72E+01− | 5.97E+01±4.46E+01≈ | 8.48E+01±5.02E+01≈ | 1.01E+02±6.05E+01+ | 9.56E+01±4.32E+01+ | 7.11E+01±3.82E+01 |
F5 | 1.05E+02±1.88E+01+ | 1.12E+02±1.81E+01+ | 8.12E+01±2.15E+01≈ | 3.27E+02±4.73E+01+ | 3.53E+02±2.02E+01+ | 1.78E+02±1.27E+01+ | 7.83E+01±1.77E+01 |
F6 | 6.63E−13±1.43E−13+ | 2.90E−08±9.81E−08+ | 1.15E−05±1.96E−05≈ | 4.38E+01±1.12E+01+ | 3.60E−02±1.10E−01+ | 8.17E−03±4.72E−03+ | 4.55E−13±1.12E−13 |
F7 | 1.77E+02±1.47E+01≈ | 1.62E+02±2.47E+01≈ | 1.89E+02±2.52E+01+ | 7.11E+02±1.49E+02+ | 4.28E+02±2.94E+01+ | 2.72E+02±2.09E+01+ | 1.69E+02±2.47E+01 |
F8 | 1.02E+02±2.03E+01+ | 1.12E+02±1.87E+01+ | 7.77E+01±1.78E+01≈ | 3.44E+02±4.50E+01+ | 3.60E+02±1.62E+01+ | 1.81E+02±1.51E+01+ | 7.85E+01±1.58E+01 |
F9 | 2.32E+02±2.11E+02+ | 2.12E+02±1.18E+02+ | 1.08E+02±1.10E+02+ | 9.67E+03±2.08E+03+ | 5.35E+01±2.02E+02+ | 4.20E+02±9.59E+02+ | 2.62E+01±2.74E+01 |
F10 | 3.94E+03±5.03E+02≈ | 3.88E+03±4.66E+02≈ | 3.84E+03±3.84E+02≈ | 6.71E+03±1.07E+03+ | 1.22E+04±6.62E+02+ | 6.29E+03±4.18E+02+ | 3.78E+03±4.13E+02 |
F11 | 1.49E+02±3.44E+01+ | 1.65E+02±5.18E+01+ | 1.86E+02±4.90E+01+ | 1.42E+02±3.76E+01+ | 2.41E+02±2.99E+01+ | 1.07E+02±2.31E+01≈ | 1.18E+02±4.29E+01 |
F12 | 5.18E+05±2.49E+05+ | 1.42E+05±1.16E+05− | 8.07E+04±4.03E+04− | 5.70E+05±2.96E+05+ | 5.93E+06±3.15E+06+ | 1.84E+05±1.06E+05− | 3.81E+05±2.02E+05 |
F13 | 6.33E+02±2.47E+02+ | 1.50E+03±1.26E+03+ | 6.97E+02±4.59E+02≈ | 2.63E+03±3.33E+03+ | 5.36E+04±2.13E+05+ | 1.30E+03±1.42E+03+ | 5.10E+02±4.38E+02 |
F14 | 2.70E+04±1.69E+04≈ | 1.54E+04±1.17E+04− | 1.03E+04±7.06E+03− | 2.77E+04±2.05E+04≈ | 2.29E+05±9.37E+04+ | 1.11E+02±1.75E+01− | 3.64E+04±2.73E+04 |
F15 | 3.00E+02±2.31E+02+ | 5.40E+02±4.62E+02+ | 3.78E+02±4.07E+02≈ | 6.29E+03±5.46E+03+ | 3.97E+05±8.32E+05+ | 4.77E+03±2.46E+03+ | 1.99E+02±1.25E+02 |
F16 | 1.01E+03±2.61E+02≈ | 1.10E+03±2.75E+02+ | 8.91E+02±2.85E+02≈ | 1.89E+03±3.66E+02+ | 1.38E+03±6.26E+02+ | 7.69E+02±1.58E+02− | 9.49E+02±1.85E+02 |
F17 | 8.33E+02±1.48E+02+ | 8.14E+02±2.37E+02+ | 7.20E+02±1.69E+02≈ | 1.50E+03±3.63E+02+ | 1.25E+03±2.86E+02+ | 7.29E+02±1.12E+02≈ | 6.65E+02±1.77E+02 |
F18 | 1.40E+05±9.18E+04≈ | 1.03E+05±1.93E+05− | 6.48E+04±5.77E+04− | 1.39E+05±6.87E+04≈ | 2.71E+06±1.18E+06+ | 5.24E+03±2.46E+03− | 1.86E+05±1.80E+05 |
F19 | 6.16E+02±8.92E+02≈ | 3.76E+02±3.38E+02≈ | 6.33E+02±1.08E+03≈ | 1.39E+04±1.09E+04+ | 2.06E+04±1.24E+04+ | 7.03E+03±7.90E+03≈ | 7.31E+02±1.36E+03 |
F20 | 5.05E+02±2.01E+02+ | 6.17E+02±1.59E+02+ | 5.00E+02±2.19E+02+ | 1.25E+03±3.68E+02+ | 1.01E+03±3.19E+02+ | 4.80E+02±1.28E+02+ | 3.99E+02±1.64E+02 |
F21 | 3.04E+02±2.13E+01+ | 3.08E+02±2.28E+01+ | 2.72E+02±2.36E+01≈ | 4.77E+02±4.99E+01+ | 5.39E+02±1.84E+01+ | 3.48E+02±1.32E+01+ | 2.70E+02±1.12E+01 |
F22 | 3.17E+03±2.10E+03≈ | 3.96E+03±1.82E+03+ | 2.89E+03±2.18E+03≈ | 7.34E+03±1.02E+03+ | 1.18E+04±3.25E+03+ | 3.27E+03±3.46E+03≈ | 3.19E+03±2.08E+03 |
F23 | 5.46E+02±2.71E+01+ | 5.56E+02±2.92E+01+ | 5.22E+02±3.04E+01≈ | 8.30E+02±1.12E+02+ | 7.92E+02±1.84E+01+ | 5.95E+02±2.53E+01+ | 5.29E+02±2.52E+01 |
F24 | 6.35E+02±3.23E+01+ | 6.17E+02±2.89E+01+ | 5.77E+02±2.50E+01− | 8.99E+02±1.16E+02+ | 8.52E+02±1.54E+01+ | 6.45E+02±2.96E+01+ | 5.92E+02±2.36E+01 |
F25 | 5.08E+02±3.32E+01≈ | 5.15E+02±3.05E+01≈ | 5.21E+02±3.32E+01≈ | 5.59E+02±3.68E+01+ | 5.44E+02±2.80E+01+ | 5.76E+02±1.76E+01+ | 5.20E+02±3.22E+01 |
F26 | 1.42E+03±9.15E+02≈ | 1.59E+03±1.07E+03≈ | 1.70E+03±5.29E+02≈ | 3.89E+03±3.87E+03≈ | 4.48E+03±8.29E+02+ | 2.27E+03±1.62E+03+ | 1.51E+03±8.29E+02 |
F27 | 6.31E+02±2.76E+01≈ | 6.46E+02±3.54E+01≈ | 6.67E+02±3.19E+01≈ | 9.66E+02±1.74E+02+ | 6.46E+02±5.44E+01≈ | 6.21E+02±4.76E+01− | 6.48E+02±4.34E+01 |
F28 | 4.80E+02±2.21E+01≈ | 4.88E+02±2.04E+01≈ | 4.92E+02±2.25E+01≈ | 4.95E+02±2.72E+01≈ | 4.80E+02±2.30E+01≈ | 5.17E+02±2.04E+01+ | 4.93E+02±1.81E+01 |
F29 | 6.94E+02±1.96E+02≈ | 7.62E+02±1.50E+02≈ | 7.21E+02±1.59E+02≈ | 1.53E+03±3.60E+02+ | 1.39E+03±2.72E+02+ | 7.59E+02±8.20E+01≈ | 7.10E+02±1.80E+02 |
F30 | 7.02E+05±5.32E+04≈ | 7.03E+05±6.25E+04≈ | 7.43E+05±7.14E+04≈ | 1.05E+06±3.28E+05+ | 1.52E+07±1.16E+07+ | 8.25E+05±4.54E+04+ | 7.05E+05±5.83E+04 |
+/≈/− | 13/15/1 | 15/9/5 | 5/19/5 | 24/5/0 | 27/2/0 | 19/5/5 | / |
Function (IEEE CEC2017, 100D) | HCLPSO (mean±std) | SDPSO (mean±std) | MRFO (mean±std) | I-GWO (mean±std) | BeSD (mean±std) | MPSORL (mean±std)
F1 | 6.74E+03±7.22E+03≈ | 7.59E+03±8.88E+03≈ | 5.59E+03±7.18E+03≈ | 1.30E+05±5.51E+04+ | 3.34E+03±2.87E+03≈ | 7.31E+03±7.60E+03 |
F3 | 7.51E+03±3.89E+03− | 2.07E+04±1.75E+04− | 8.58E+04±1.31E+04≈ | 9.01E+04±1.11E+04≈ | 3.42E+04±5.67E+03− | 8.75E+04±1.84E+04 |
F4 | 2.16E+02±3.25E+01≈ | 1.95E+02±4.18E+01− | 2.37E+02±4.09E+01≈ | 3.06E+02±3.91E+01+ | 2.81E+02±4.11E+01+ | 2.28E+02±5.13E+01 |
F5 | 2.91E+02±5.87E+01≈ | 3.05E+02±8.14E+01≈ | 7.98E+02±6.75E+01+ | 9.15E+02±3.20E+01+ | 6.46E+02±3.17E+01+ | 2.81E+02±5.72E+01 |
F6 | 2.56E−05±8.27E−05− | 4.90E−02±3.90E−02+ | 5.13E+01±6.93E+00+ | 1.93E+00±9.54E−01+ | 4.36E−01±1.80E−01+ | 8.82E−04±1.13E−03 |
F7 | 5.79E+02±7.60E+01+ | 5.43E+02±1.46E+02+ | 2.17E+03±2.71E+02+ | 1.10E+03±6.67E+01+ | 7.33E+02±4.82E+01+ | 4.22E+02±8.11E+01 |
F8 | 2.98E+02±5.97E+01≈ | 3.21E+02±6.77E+01+ | 8.83E+02±8.99E+01+ | 9.15E+02±3.05E+01+ | 6.52E+02±3.42E+01+ | 2.88E+02±6.62E+01 |
F9 | 1.55E+03±7.37E+02+ | 4.79E+03±2.78E+03+ | 2.06E+04±1.69E+03+ | 4.15E+03±3.17E+03+ | 1.91E+04±5.45E+03+ | 5.35E+02±3.20E+02 |
F10 | 1.17E+04±1.02E+03+ | 1.20E+04±7.35E+02+ | 1.41E+04±1.21E+03+ | 2.89E+04±7.52E+02+ | 1.84E+04±6.20E+02+ | 1.10E+04±8.17E+02 |
F11 | 7.57E+02±1.78E+02≈ | 9.67E+02±1.92E+02+ | 5.66E+02±1.18E+02− | 5.74E+03±1.66E+03+ | 6.27E+02±1.35E+02− | 7.10E+02±1.19E+02 |
F12 | 7.72E+05±3.83E+05− | 3.04E+05±1.00E+05− | 1.40E+06±5.88E+05+ | 4.01E+07±1.53E+07+ | 2.26E+06±7.87E+05+ | 1.03E+06±4.16E+05 |
F13 | 1.40E+03±8.97E+02+ | 1.64E+03±9.03E+02+ | 6.05E+03±8.30E+03+ | 7.10E+03±7.79E+03+ | 3.27E+03±1.64E+03+ | 9.36E+02±4.44E+02 |
F14 | 1.11E+05±3.95E+04≈ | 5.95E+04±2.98E+04− | 1.33E+05±8.46E+04≈ | 3.85E+06±1.83E+06+ | 4.78E+03±3.89E+03− | 1.43E+05±6.90E+04 |
F15 | 6.80E+02±2.92E+02+ | 5.05E+02±1.33E+02≈ | 2.34E+03±2.69E+03+ | 3.05E+03±3.30E+03+ | 5.44E+02±3.77E+02≈ | 5.34E+02±1.39E+02 |
F16 | 3.20E+03±3.53E+02+ | 2.90E+03±4.68E+02≈ | 3.95E+03±6.79E+02+ | 5.73E+03±1.58E+03+ | 3.33E+03±2.88E+02+ | 2.76E+03±4.80E+02 |
F17 | 2.41E+03±3.39E+02≈ | 2.22E+03±2.87E+02≈ | 2.96E+03±5.02E+02+ | 4.80E+03±4.04E+02+ | 2.15E+03±2.11E+02≈ | 2.23E+03±2.91E+02 |
F18 | 2.09E+05±8.99E+04− | 1.17E+05±3.31E+04− | 3.29E+05±1.21E+05≈ | 1.13E+07±7.10E+06+ | 5.71E+04±1.82E+04− | 4.13E+05±1.83E+05 |
F19 | 4.43E+02±2.87E+02+ | 3.32E+02±2.18E+02≈ | 2.93E+03±3.10E+03+ | 3.16E+03±3.87E+03+ | 9.38E+02±5.61E+02+ | 2.85E+02±1.11E+02 |
F20 | 2.25E+03±2.52E+02+ | 2.08E+03±3.21E+02+ | 3.19E+03±5.90E+02+ | 4.25E+03±3.40E+02+ | 2.43E+03±2.55E+02+ | 1.87E+03±2.71E+02 |
F21 | 5.47E+02±5.29E+01≈ | 5.32E+02±6.67E+01≈ | 9.60E+02±1.28E+02+ | 1.12E+03±3.20E+01+ | 7.73E+02±2.92E+01+ | 5.41E+02±6.73E+01 |
F22 | 1.32E+04±1.15E+03+ | 1.21E+04±4.15E+03+ | 1.68E+04±1.49E+03+ | 2.98E+04±1.01E+03+ | 2.02E+04±4.18E+02+ | 1.17E+04±3.25E+03 |
F23 | 8.03E+02±2.63E+01+ | 7.83E+02±3.24E+01≈ | 1.33E+03±1.38E+02+ | 1.38E+03±3.55E+01+ | 1.06E+03±2.63E+01+ | 7.86E+02±2.07E+01 |
F24 | 1.31E+03±5.20E+01≈ | 1.30E+03±5.71E+01≈ | 2.19E+03±2.78E+02+ | 1.76E+03±3.11E+01+ | 1.32E+03±9.25E+01≈ | 1.29E+03±6.33E+01 |
F25 | 7.38E+02±6.47E+01− | 7.78E+02±6.94E+01≈ | 7.84E+02±5.27E+01≈ | 8.61E+02±4.59E+01+ | 8.62E+02±4.13E+01+ | 7.88E+02±6.23E+01 |
F26 | 7.29E+03±6.01E+02≈ | 7.72E+03±6.71E+02+ | 1.69E+04±7.42E+03+ | 1.22E+04±5.14E+02+ | 1.23E+04±2.30E+03+ | 6.28E+03±2.76E+03 |
F27 | 7.91E+02±3.21E+01≈ | 7.73E+02±3.15E+01≈ | 1.22E+03±1.85E+02+ | 9.53E+02±1.30E+02+ | 8.53E+02±5.71E+01+ | 7.77E+02±2.78E+01 |
F28 | 5.71E+02±3.64E+01≈ | 5.65E+02±2.80E+01≈ | 5.70E+02±3.28E+01≈ | 6.69E+02±3.07E+01+ | 6.69E+02±2.81E+01+ | 5.65E+02±2.94E+01 |
F29 | 2.93E+03±2.89E+02≈ | 3.01E+03±4.44E+02≈ | 3.97E+03±5.99E+02+ | 4.90E+03±4.35E+02+ | 3.49E+03±2.48E+02+ | 3.08E+03±4.84E+02 |
F30 | 5.83E+03±2.82E+03≈ | 5.73E+03±2.78E+03≈ | 7.14E+03±3.85E+03≈ | 2.43E+05±1.07E+05+ | 1.06E+04±2.71E+03+ | 5.80E+03±3.15E+03 |
+/≈/− | 10/14/5 | 10/14/5 | 20/8/1 | 28/1/0 | 21/4/4 | / |
Function (IEEE CEC2019) | LDWPSO (mean±std) | UPSO (mean±std) | CLPSO (mean±std) | LIPS (mean±std) | HCLPSO (mean±std) | MPSORL (mean±std)
F31 | 5.22E+04±8.95E+04≈ | 5.71E+04±3.49E+04− | 8.62E+05±5.36E+05+ | 4.70E+04±4.30E+04≈ | 6.08E+04±1.20E+05≈ | 5.93E+04±6.88E+04 |
F32 | 2.61E+02±1.04E+02≈ | 3.23E+02±1.35E+02≈ | 6.25E+02±1.54E+02+ | 2.73E+02±9.38E+01≈ | 2.43E+02±8.17E+01≈ | 2.44E+02±1.03E+02 |
F33 | 3.95E−01±7.47E−02≈ | 4.80E−01±4.01E−01+ | 9.06E−01±3.06E−01+ | 6.22E−01±8.09E−01≈ | 3.95E−01±7.47E−02+ | 3.66E−01±1.33E−01 |
F34 | 8.52E+00±4.08E+00+ | 9.20E+00±2.93E+00+ | 4.27E+00±1.21E+00≈ | 1.05E+01±4.19E+00+ | 4.41E+00±1.47E+00≈ | 4.55E+00±1.77E+00 |
F35 | 1.05E−01±6.19E−02+ | 3.86E−02±1.90E−02≈ | 6.52E−02±2.62E−02+ | 1.00E−02±8.04E−03− | 5.13E−02±2.18E−02≈ | 3.33E−02±2.24E−02 |
F36 | 3.15E−01±5.71E−01+ | 4.36E−01±5.00E−01+ | 1.59E−01±7.46E−02+ | 4.76E−01±5.95E−01+ | 1.99E−03±2.78E−03+ | 3.89E−04±1.69E−03 |
F37 | 4.41E+02±2.38E+02+ | 4.66E+02±1.71E+02+ | 1.47E+02±7.73E+01≈ | 7.55E+02±2.49E+02+ | 2.49E+02±1.11E+02+ | 1.61E+02±1.30E+02 |
F38 | 1.99E+00±4.61E−01+ | 2.21E+00±4.28E−01+ | 2.03E+00±2.62E−01+ | 3.04E+00±4.87E−01+ | 1.65E+00±3.53E−01+ | 1.31E+00±6.48E−01 |
F39 | 1.32E−01±6.08E−02≈ | 6.99E−02±3.05E−02− | 1.75E−01±3.92E−02+ | 1.23E−01±4.73E−02≈ | 1.01E−01±3.46E−02− | 1.30E−01±3.08E−02 |
F40 | 1.42E+01±9.09E+00≈ | 1.46E+01±8.32E+00≈ | 1.88E+01±3.91E+00+ | 1.93E+01±3.65E+00≈ | 1.54E+01±8.57E+00≈ | 1.21E+01±9.91E+00 |
+/≈/− | 5/5/0 | 5/3/2 | 8/2/0 | 4/5/1 | 4/5/1 | / |
Function (IEEE CEC2019) | EPSO (mean±std) | SDPSO (mean±std) | MRFO (mean±std) | I-GWO (mean±std) | BeSD (mean±std) | MPSORL (mean±std)
F31 | 3.20E+04±2.87E+04≈ | 4.42E+04±5.14E+04≈ | 0.00E+00±0.00E+00− | 7.24E+04±1.35E+05≈ | 0.00E+00±0.00E+00− | 5.93E+04±6.88E+04 |
F32 | 2.30E+02±8.66E+01≈ | 2.58E+02±1.05E+02≈ | 3.70E+00±3.69E−01− | 1.06E+03±3.38E+02+ | 4.84E+00±1.97E+00− | 2.44E+02±1.03E+02 |
F33 | 3.41E−01±1.55E−01≈ | 3.55E−01±1.40E−01≈ | 3.95E−01±7.47E−02≈ | 1.95E+00±1.18E+00+ | 9.35E−01±2.89E−01+ | 3.66E−01±1.33E−01 |
F34 | 5.82E+00±1.98E+00+ | 4.48E+00±1.50E+00≈ | 2.50E+01±1.55E+01+ | 1.98E+01±4.20E+00+ | 8.50E+00±1.96E+00+ | 4.55E+00±1.77E+00 |
F35 | 8.57E−02±4.08E−02+ | 4.81E−02±3.37E−02≈ | 1.11E−01±1.07E−01+ | 5.26E−01±9.12E−02+ | 5.04E−02±1.27E−02≈ | 3.33E−02±2.24E−02 |
F36 | 2.46E−01±2.42E−01+ | 2.72E−02±1.21E−01≈ | 2.13E+00±1.70E+00+ | 1.93E−01±3.60E−01+ | 1.34E−01±1.17E−01+ | 3.89E−04±1.69E−03 |
F37 | 2.40E+02±1.47E+02≈ | 1.80E+02±1.04E+02≈ | 7.29E+02±3.16E+02+ | 7.33E+02±2.42E+02+ | 4.46E+02±1.29E+02+ | 1.61E+02±1.30E+02 |
F38 | 1.65E+00±4.69E−01+ | 1.19E+00±5.05E−01≈ | 2.45E+00±3.70E−01+ | 2.06E+00±2.53E−01+ | 2.50E+00±1.64E−01+ | 1.31E+00±6.48E−01 |
F39 | 1.27E−01±3.64E−02≈ | 1.32E−01±2.78E−02≈ | 2.05E−01±7.63E−02+ | 1.84E−01±4.15E−02+ | 1.11E−01±1.76E−02− | 1.30E−01±3.08E−02 |
F40 | 1.24E+01±9.65E+00+ | 1.22E+01±9.85E+00≈ | 1.50E+01±9.02E+00+ | 1.50E+01±9.03E+00+ | 1.76E+01±4.33E+00+ | 1.21E+01±9.91E+00 |
+/≈/− | 5/5/0 | 0/10/0 | 7/1/2 | 9/1/0 | 6/1/3 | / |
Dimension (MPSORL: non-uniform vs. uniform state division) | Wins (+) | Losses (−) | Ties | R+ | R− | p-value
30D | 14 | 15 | 0 | 220 | 215 | 0.957 |
50D | 19 | 10 | 0 | 278 | 157 | 0.191 |
100D | 20 | 9 | 0 | 297 | 138 | 0.086 |
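This table reports a Wilcoxon signed-rank test on the per-function results of the non-uniform versus uniform state division; as a consistency check, R+ + R− = 435 = 29·30/2 in every row, matching the 29 CEC2017 functions. The sketch below shows how R+, R− and the p-value could be computed; the placeholder data and the two-sided alternative are assumptions.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

# Per-function mean errors of the two variants (placeholder data for 29 functions).
rng = np.random.default_rng(3)
non_uniform = rng.random(29)
uniform = rng.random(29)

diff = uniform - non_uniform                 # positive where the non-uniform division wins
r = rankdata(np.abs(diff))
r_plus = r[diff > 0].sum()                   # R+ column
r_minus = r[diff < 0].sum()                  # R- column
stat, p = wilcoxon(uniform, non_uniform)     # two-sided signed-rank p-value
```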
Function type | No. | Function | F(x∗) | D | Search range
Bowl-Shaped | F41 | Sphere Function | 0 | 2 | [-100, 100]
Many Local Minima | F42 | Rastrigin Function | 0 | 2 | [-5.12, 5.12]
Many Local Minima | F43 | Ackley Function | 0 | 2 | [-32.768, 32.768]
Plate-Shaped | F44 | Matyas Function | 0 | 2 | [-10, 10]
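For reference, these four classical 2-D functions can be written down directly. The sketch below uses their standard textbook forms, which is an assumption about the exact variants evaluated; all four have a global minimum of 0 at the origin over the listed ranges.

```python
import numpy as np

def sphere(x):      # F41, bowl-shaped
    return float(np.sum(x ** 2))

def rastrigin(x):   # F42, many local minima
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def ackley(x):      # F43, many local minima
    d = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def matyas(x):      # F44, plate-shaped, defined for 2-D inputs
    return float(0.26 * (x[0] ** 2 + x[1] ** 2) - 0.48 * x[0] * x[1])

print(sphere(np.zeros(2)), rastrigin(np.zeros(2)), ackley(np.zeros(2)), matyas(np.zeros(2)))
```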