Research article

A multi-sample particle swarm optimization algorithm based on electric field force

  • Received: 10 June 2021 Accepted: 23 August 2021 Published: 31 August 2021
  • Aiming at the premature convergence problem of the particle swarm optimization algorithm, a multi-sample particle swarm optimization (MSPSO) algorithm based on electric field force is proposed. Firstly, we introduce the concept of the electric field into the particle swarm optimization algorithm. The particles are affected by the electric field force, which makes them exhibit diverse behaviors. Secondly, MSPSO constructs multiple samples through two new strategies to guide particle learning. An electric field force-based comprehensive learning strategy (EFCLS) is proposed to build attractive samples and repulsive samples, thus improving search efficiency. To further enhance the convergence accuracy of the algorithm, a segment-based weighted learning strategy (SWLS) is employed to construct a global learning sample so that the particles learn more comprehensive information. In addition, the parameters of the model are adjusted adaptively to suit the population status in different periods. We have verified the effectiveness of these newly proposed strategies through experiments. Sixteen benchmark functions and eight well-known particle swarm optimization variants are employed to demonstrate the superiority of MSPSO. The comparison results show that MSPSO achieves better accuracy, especially in high-dimensional spaces, while maintaining a faster convergence rate. In addition, a real-world problem also verifies that MSPSO has practical application value.

    Citation: Shangbo Zhou, Yuxiao Han, Long Sha, Shufang Zhu. A multi-sample particle swarm optimization algorithm based on electric field force[J]. Mathematical Biosciences and Engineering, 2021, 18(6): 7464-7489. doi: 10.3934/mbe.2021369




    In recent years, traditional optimization algorithms have been unable to solve many complex real-life problems optimally. In contrast, metaheuristic algorithms can give a feasible solution to a problem within a limited range and approach the optimal solution as closely as possible. Therefore, they have gradually become the first choice for solving these complex problems and are used to deal with multi-dimensional, single-objective or multi-objective, continuous, or combinatorial optimization problems. Metaheuristic algorithms deal with issues by abstracting phenomena observed in nature and animal behavior into algorithms. Some representative algorithms have appeared in this field, such as the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), symbiotic organisms search (SOS), simulated annealing (SA), the salp swarm algorithm (SSA), and so on. To further improve performance, hybrid metaheuristic algorithms that combine the advantages of several algorithms have also received extensive attention. We cannot expect a single algorithm to solve all optimization problems, so researchers continually upgrade metaheuristic algorithms for specific issues. Kaplan et al.[1] introduced GA into the field search of SA, which improves the search performance and effectively solves the excitation current estimation of synchronous motors. To solve complex multimodal functions and high-dimensional problems, Mohamed et al.[2] introduced the concepts of attraction and repulsion into the imperialist competitive algorithm (ICA) and verified the effectiveness of the algorithm on a multi-objective engineering problem. Çelik et al.[3] proposed an improved salp swarm algorithm (mSSA) to avoid the premature convergence of SSA. They also offered an improved stochastic fractal search (ISFS) algorithm, which replaced the first update process of SFS with a chaos-based strategy to effectively deal with the automatic generation control problem of power systems[4]. Lin et al.[5] proposed a comprehensive algorithm combining PSO and the estimation of distribution algorithm (EDA) to solve the maximum segmentation problem. In addition, considering the premature convergence of SOS, ISOS[6] combined the advantages of quasi-oppositional based learning (QOBL) and chaotic local search (CLS), which balances exploration and exploitation. hSOS-SA[7] integrated SA into the SOS algorithm to improve its convergence accuracy. Similarly, Singh et al.[8] proposed a hybrid metaheuristic algorithm (HSSAPSO), using the velocity phase of PSO in SSA, which reduced the risk of falling into a local optimum. The above research shows that metaheuristic algorithms have developed significantly.

    Among the developing methods in the field of metaheuristic algorithms, the PSO algorithm has attracted wide attention from researchers and practitioners for its advantages of easy implementation, strong adaptability, and low complexity. PSO[9] is a metaheuristic algorithm proposed by Kennedy and Eberhart that simulates the predation behavior of bird flocks. The algorithm compares the problem's search space to the flight space of birds, and each bird is abstracted into a particle that represents a candidate solution to the problem. Guided by fitness values, all particles randomly search the domain to find better solutions, so the algorithm relies on a random process similar to evolution. Randomness introduces uncertainty into the PSO algorithm; hence, researchers usually adopt new strategies to make the particles move purposefully, thereby upgrading the PSO algorithm. Uncertainty modeling has also made progress over the years. Given the uncertainty of gene expression data, Taylan et al.[10] introduced stochastic differential equations into uncertainty modeling for the first time. Kropat et al.[11] used an interval algorithm to overcome the problem that given data are susceptible to noise. Since the data of practical problems are usually discrete, CMARS[12] applied the framework of conic quadratic programming to improve multivariate adaptive regression splines (MARS). Özmen et al.[13] further improved the CMARS algorithm through a robust optimization technique to deal with data uncertainty. A robust and flexible regulatory network regression model was introduced in the literature[14] to determine unknown system parameters from uncertain data. Semialgebraic sets were used to express the uncertain state of genes and environmental factors in Reference[15], addressing the data uncertainty of the gene-environment network and improving the model prediction accuracy. The above researchers have adopted effective methods to address data uncertainty and reduce errors. As a simple but powerful method, PSO is superior to deterministic algorithms in solving some specific problems. PSO has shown good performance in many practical optimization tasks and has been used in many research fields, including function optimization[16,17,18], classification prediction[19,20], neural network training[21,22,23], feature selection[24,25,26], and image encryption[27,28].

    Although PSO is simple and easy to implement, it still suffers from easily falling into local optima and slow convergence. To overcome these problems, many researchers have proposed PSO variants. To enhance the search ability of the algorithm, Liang et al.[29] offered a dynamic multi-swarm particle swarm optimization (DMSPSO) algorithm, which divides the population into multiple subgroups. The members of a group exchange information for better local exploration and, at the same time, are frequently regrouped so that the learning samples change and information is exchanged among the subpopulations. HCLDMS-PSO[30] introduced a non-uniform mutation operator into PSO to enhance the global search ability and adjusts the global best position through a Gaussian mutation operator, reducing the risk of falling into a local optimum. Zhang et al.[31] added two constraint factors to PSO through transfer learning, namely a source domain factor and a target domain factor, to balance the search ability and search efficiency of the PSO algorithm.

    Furthermore, to avoid premature convergence, CMPSOWV[32] discarded the velocity component in the particle update formula and introduced a constraint-handling method to guide the particle search with the personal best and global best positions. Chen et al.[33] also introduced chaos mapping into the PSO algorithm and adjusted the inertia weight through a sine map, effectively balancing local exploitation and global exploration. To better deal with multimodal functions, Zhang et al.[34] introduced a local optimal topology (LOT) based on the comprehensive learning particle swarm optimization (CLPSO) algorithm, which expanded the search space of the particles and improved the convergence speed of the algorithm. To increase the convergence speed of the algorithm, Zhu et al.[35] proposed a multi-ion particle swarm optimization algorithm (MIONPSO) based on repulsive and attractive forces. They introduced the concept of charge into the PSO algorithm and divided the population into multiple sub-populations; the optimal solution of each group then guides the update of the individuals. In addition, papers[36,37] both adopted various strategies in the PSO algorithm to make the particles search the feasible domain more effectively, which improves the accuracy of the solution.

    From the above research, we can find that most articles on PSO algorithms put forward novel concepts to increase the diversity of particle behavior, thereby reducing the risk of falling into a local optimum. Nevertheless, the matching update strategies of many new PSO algorithms are not perfect, and they cannot take convergence speed and accuracy into account simultaneously. Additionally, when solving complex optimization problems, a single improvement strategy offers little gain in convergence accuracy, whereas adopting a variety of improvement strategies can enhance the diversity of the algorithm and achieve higher convergence accuracy.

    To further improve the performance of the PSO algorithm, inspired by MIONPSO, this paper proposes a multi-sample particle swarm optimization algorithm based on electric field force. MSPSO introduces the concept of the electric field into the PSO algorithm and matches it with more complete strategies. We construct multiple learning samples to improve the performance of the particle swarm algorithm in solving unimodal and multimodal problems. Extensive experiments verify the effectiveness of these additional strategies. We further evaluate the performance of MSPSO through a practical case, the design of multiphase codes for spread-spectrum pulse radar. The main contributions of this paper can be summarized as follows.

    ● To show the diverse behavior of particles, we regard the feasible region of the population as an electric field, and each particle is subject to the electric field forces of the other charged particles in the field.

    ● We propose an electric field force-based comprehensive learning strategy to construct the attractive sample and the repulsive sample to guide the particle movement purposefully. The interaction of the two samples directs the particles to areas that are more conducive to exploration.

    ● We use the historical experience information of elite particles and general particles obtained by the tournament mechanism to update the corresponding weight coefficients adaptively, which enhances the diversity of the population.

    ● To reduce the risk of falling into a local optimum, we construct a global learning sample by a segment-based weighted learning strategy so that particles can learn more helpful information from the elite particles.

    The rest of the paper is organized as follows. Section 2 introduces related works. Section 3 describes the details of MSPSO. The performance of newly introduced strategies is experimentally verified in Section 4. In Section 5, sixteen benchmark functions and eight well-known PSO variants are employed to verify the effectiveness of MSPSO. Finally, Section 6 summarizes the relevant conclusions.

    PSO is a population-based stochastic optimization algorithm in which each particle represents a potential solution to the problem. In the $D$-dimensional space, each particle has two attributes, the velocity vector $V_i=(v_{i,1},v_{i,2},\ldots,v_{i,D})$ and the position vector $X_i=(x_{i,1},x_{i,2},\ldots,x_{i,D})$. $X_i$ is a candidate solution of the problem, and $V_i$ represents the search direction and step size of the $i$th particle. Each individual adjusts its trajectory in the feasible domain according to its own best historical experience $pbest_i=(pb_{i,1},pb_{i,2},\ldots,pb_{i,D})$ and the best experience of the whole swarm $gbest=(g_1,g_2,\ldots,g_D)$. The velocity and position update rules are defined as Equations (2.1) and (2.2), respectively.

    $v_{i,j}^{t+1}=\omega v_{i,j}^{t}+c_1 r_1\left(pbest_{i,j}^{t}-x_{i,j}^{t}\right)+c_2 r_2\left(gbest_{j}^{t}-x_{i,j}^{t}\right)$, (2.1)
    $x_{i,j}^{t+1}=x_{i,j}^{t}+v_{i,j}^{t+1}$, (2.2)

    where $j=1,2,\ldots,D$; $\omega$ represents the inertia weight, which determines the proportion of the previous velocity; $c_1$ and $c_2$ are acceleration learning factors, respectively representing the weights learned from $pbest_i$ and $gbest$; $r_1$ and $r_2$ are random numbers in the interval $[0,1)$.
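    For illustration, the minimal Python sketch below applies Equations (2.1) and (2.2) to the whole swarm at once. The vectorized array layout and the default parameter values are assumptions of this example rather than part of the original formulation.

        import numpy as np

        def pso_update(X, V, pbest, gbest, w=0.7298, c1=1.49445, c2=1.49445):
            # X, V, pbest: arrays of shape (n, D); gbest: array of shape (D,).
            n, D = X.shape
            r1 = np.random.rand(n, D)  # random numbers in [0, 1)
            r2 = np.random.rand(n, D)
            V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq (2.1)
            X_new = X + V_new                                              # Eq (2.2)
            return X_new, V_new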

    The PSO algorithm has mainly been improved in four directions: parameter adjustment, adjustment of learning modes, topology change, and hybridization with other algorithms. The proposed MSPSO mainly covers the two improvement directions of adjusting parameters and changing particle learning modes, so the other approaches will not be discussed.

    For parameter adjustment, Shi and Eberhart[38] proposed a strategy of linearly reducing the inertia weight from 0.9 to 0.4, changing the step length of the particle's velocity component in different evolutionary periods. XPSO[37] also adopted this method, which effectively improves the performance of the algorithm. However, the strategy of linear weight reduction does not perform well on objective functions with multiple optimal solutions. To solve this problem, Farooq et al.[39] proposed a phased update strategy: in the first half of the iterations, the inertia weight is linearly reduced from 0.9 to 0.4, and the same schedule is repeated in the second half. Chatterjee et al.[40] proposed a new nonlinear function to adaptively change the inertia weight, which effectively balances local exploitation and global exploration. Similarly, the sigmoid activation function used in neural networks is adopted in HCLDMS-PSO[30] to change the inertia weight nonlinearly. These studies reveal that the adaptive adjustment of parameters benefits the evolution of the population.

    For changing particle learning modes, Liang et al.[41] proposed CLPSO, which encourages each particle to learn from different particles in different dimensions to obtain more comprehensive information. The opposition-based learning competitive particle swarm optimizer (OBL-CPSO)[42] introduced two mechanisms, opposition-based learning and competitive learning: competitive learning allows particles with poor fitness to learn from particles with good fitness, while particles with moderate fitness are updated through opposition-based learning. To address the curse of dimensionality, Shi et al.[43] proposed a segment-based learning strategy, which randomly divides the dimensions into several segments, together with a predominant learning strategy that lets elite particles guide each dimension segment. Additionally, MPCPSO[36] introduced two new strategies, a dynamic segment-based mean learning strategy (DSMLS) and a multidimensional comprehensive learning strategy (MDCLS), which effectively improved the performance of the algorithm: DSMLS realizes the information exchange between the elite population and the ordinary population, and MDCLS accelerates convergence. XPSO[37] extends the social learning part of each particle from one sample to two samples so that the particles learn from both the global best particle and the local best particle. These studies show that the dynamic selection of learning samples effectively maintains the diversity of the population, which is conducive to solving complex multimodal problems.

    In this part, we introduce the proposed MSPSO in detail. Section 3.1 gives the particle model, and Section 3.2 explains the electric field force-based comprehensive learning strategy. The segment-based weighted learning strategy is introduced in Section 3.3, and Section 3.4 lists the framework of the MSPSO algorithm.

    In MSPSO, the feasible region of the particles is regarded as an electric field, and every particle is regarded as an electric charge. Hence, the electric field exerts forces on the particles: when a particle's velocity is updated, the electric field force around the particle affects it. We hope that, by calculating the electric field forces between particles, the swarm can exhibit diverse behaviors that are beneficial to the evolution of the population. At the same time, we propose an electric field force-based comprehensive learning strategy to construct attractive and repulsive samples and utilize the historical knowledge of particles to adjust the weight coefficients adaptively. Additionally, to increase the convergence accuracy of the algorithm, we adopt a segment-based weighted learning strategy to construct a global learning sample. The proposed MSPSO is introduced in detail as follows. The velocity update rule of positively charged particles is:

    $V_i^{t+1}=\omega^t V_i^{t}+\alpha_1^t\left(PE_1^t-X_i^t\right)-\beta_1^t\left(PG_1^t-X_i^t\right)+c\,r\left(GM^t f_{ig}-X_i^t\right)$, (3.1)

    The velocity update rule of negatively charged particles:

    $V_i^{t+1}=\omega^t V_i^{t}+\alpha_2^t\left(PE_2^t-X_i^t\right)-\beta_2^t\left(PG_2^t-X_i^t\right)+c\,r\left(GM^t f_{ig}-X_i^t\right)$, (3.2)

    Update the position according to Equation (3.3).

    $X_i^{t+1}=X_i^t+V_i^{t+1}$, (3.3)

    where $V_i^t=(v_{i,1},v_{i,2},\ldots,v_{i,D})$ is the velocity of the $i$th particle at iteration $t$, and $X_i^t=(x_{i,1},x_{i,2},\ldots,x_{i,D})$ is the position of the $i$th particle at iteration $t$. $\omega^t$ is the inertia weight at iteration $t$. $PE_1^t$ and $PG_1^t$ represent the attractive sample and repulsive sample of positively charged particles at iteration $t$, respectively. $PE_2^t$ and $PG_2^t$ denote the attractive sample and repulsive sample of negatively charged particles at iteration $t$, respectively. $\alpha_1^t$, $\beta_1^t$, $\alpha_2^t$, and $\beta_2^t$ represent the weight coefficients of the corresponding learning samples at iteration $t$. $GM^t$ denotes the global learning sample at iteration $t$, and $f_{ig}$ is the electric field force of $GM^t$ on the currently updated particle $X_i^t$.

    The particle's velocity update equation consists of four parts. The first part is the particle's own velocity. Attraction is reflected in the second part: to reduce the risk of falling into a local optimum, we select elite particles to construct an attractive sample by EFCLS. The third part is repulsion: for particles with poor fitness, the current trajectory may not lead to the optimal value, so the particles should be urged to explore in other directions; accordingly, we choose general particles to construct the repulsive sample by EFCLS. The last part is the global learning sample built by SWLS, which guides the particles' movement and increases the convergence accuracy of the algorithm.
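    As a minimal sketch, Eqs (3.1) and (3.3) can be written as follows for one positively charged particle, assuming the samples $PE_1$, $PG_1$, $GM$ and the force $f_{ig}$ have already been computed; negatively charged particles use $PE_2$, $PG_2$, $\alpha_2$, $\beta_2$ in the same way (Eq (3.2)). The grouping of $GM^t$ and $f_{ig}$ in the last term follows the reconstruction of Eq (3.1) above, and the default value of c is only a placeholder.

        import numpy as np

        def mspso_update_positive(X_i, V_i, PE1, PG1, GM, f_ig, w, alpha1, beta1, c=1.49445):
            # X_i, V_i, PE1, PG1, GM: arrays of shape (D,); the other arguments are scalars.
            r = np.random.rand(X_i.shape[0])
            V_new = (w * V_i
                     + alpha1 * (PE1 - X_i)         # attraction towards the attractive sample
                     - beta1 * (PG1 - X_i)          # repulsion from the repulsive sample
                     + c * r * (GM * f_ig - X_i))   # guidance by the global learning sample
            X_new = X_i + V_new                     # Eq (3.3)
            return X_new, V_new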

    The inertia weight ω plays a vital role in the PSO algorithm. A larger ω in the early stage can enhance the global exploration ability, while a smaller ω in the later stage is beneficial to local exploitation. Existing studies have shown that during the evolution of the population, the dynamic change of the inertia weight can obtain a better optimization result than the fixed value. In MSPSO, a nonlinear decreasing inertia weight based on the sigmoid function proposed by reference[30] is adopted. The calculation method of ω is defined as Equation (3.4).

    $\omega^t=\omega_{max}-\dfrac{\omega_{max}-\omega_{min}}{1+e^{-5t/T}}$, (3.4)

    where $\omega_{max}$ is the maximum inertia weight, $\omega_{min}$ is the minimum inertia weight, $t$ is the current iteration number, and $T$ represents the maximum number of iterations.
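    For reference, Eq (3.4) as reconstructed above can be coded as the small Python sketch below; the bounds 0.9 and 0.4 are only typical values from the PSO literature cited earlier, not values fixed by this section.

        import math

        def inertia_weight(t, T, w_max=0.9, w_min=0.4):
            # Sigmoid-based nonlinearly decreasing inertia weight, Eq (3.4).
            return w_max - (w_max - w_min) / (1.0 + math.exp(-5.0 * t / T))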

    In MSPSO, all particles are in an electric field. Particles with opposite charges attract each other, and particles with the same charge repel each other. To purposefully guide particles to a better area, we enable the attraction of particles with opposite charges to lead the particles to move in the direction of the elite particles, thereby accelerating the convergence speed. At the same time, the repulsion of the particles with the same charge drives the poorer particles to explore other better directions, reducing the risk of falling into the local optimum. Therefore, we select elite particles and general particles through the tournament mechanism and construct attractive sample PE and repulsive sample PG based on the electric field force.

    Firstly, we screen out the elite particles and general particles of the positively and negatively charged subpopulations through the tournament mechanism. The tournament mechanism classifies the particles of the population according to their fitness ranking. The specific method is as follows. We arrange all particles of the positively charged subpopulation in ascending order of fitness. After sorting, the set of positively charged particles $POS=\{X_1,X_2,\ldots,X_{n_1}\}$ satisfies $f(X_1)\leq f(X_2)\leq\cdots\leq f(X_{n_1})$, where $f$ denotes the fitness function and $n_1$ is the number of positively charged particles. Then we select the first $n_e$ particles as the elite particles to create set $S_1$, and the last $n_g$ particles are used as general particles of the positively charged subpopulation to form set $S_2$. Similarly, we arrange all the particles of the negatively charged subpopulation in ascending order of fitness. After sorting, the set of negatively charged particles $NEG=\{X_{n_1+1},X_{n_1+2},\ldots,X_{n_1+n_2}\}$ satisfies $f(X_{n_1+1})\leq f(X_{n_1+2})\leq\cdots\leq f(X_{n_1+n_2})$, where $n_2$ represents the number of negatively charged particles. Then we select the first $n_e$ particles as the elite particles to create the set $S_3$, and the last $n_g$ particles are used as the general particles of the negatively charged subpopulation to form the set $S_4$. Finally, we obtain the following four sets (a short code sketch follows the list).

    elite particle set of the positively charged subpopulation: $S_1=\{X_1,X_2,\ldots,X_{n_e}\}$,

    general particle set of the positively charged subpopulation: $S_2=\{X_{n_1-n_g+1},X_{n_1-n_g+2},\ldots,X_{n_1}\}$,

    elite particle set of the negatively charged subpopulation: $S_3=\{X_{n_1+1},X_{n_1+2},\ldots,X_{n_1+n_e}\}$,

    general particle set of the negatively charged subpopulation: $S_4=\{X_{n_1+n_2-n_g+1},X_{n_1+n_2-n_g+2},\ldots,X_{n_1+n_2}\}$.
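    The tournament selection can be expressed as the short Python sketch below. It assumes, as in the initialization of Algorithm 3, that the positively charged particles occupy the first n1 positions of the population arrays, and it returns index sets; the function and argument names are illustrative only.

        import numpy as np

        def tournament_sets(fitness, n1, n_e, n_g):
            # fitness: array of length n1 + n2 (minimization); the first n1 entries belong
            # to the positively charged subpopulation, the rest to the negatively charged one.
            pos = np.argsort(fitness[:n1])        # ascending fitness within POS
            neg = np.argsort(fitness[n1:]) + n1   # ascending fitness within NEG
            S1 = pos[:n_e]    # elite particles of the positively charged subpopulation
            S2 = pos[-n_g:]   # general particles of the positively charged subpopulation
            S3 = neg[:n_e]    # elite particles of the negatively charged subpopulation
            S4 = neg[-n_g:]   # general particles of the negatively charged subpopulation
            return S1, S2, S3, S4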

    In physics, the magnitude of the force between two charges is calculated through Coulomb's law, and it is inversely proportional to the square of the distance between the charges. In the early stage of population evolution, the distance between particles is large, so the force is small. On the contrary, the particles are more concentrated in the later stage, and the distance between them is small. Therefore, we calculate the electric field force based on the distance between the particles. The electric field force between any two charged particles $X_i$ and $X_j$ is defined as Equation (3.5).

    $f_{ij}=\varepsilon e^{-d_{ij}}$, (3.5)

    where $f_{ij}$ denotes the electric field force of $X_j$ on $X_i$, and $d_{ij}$ represents the distance between $X_i$ and $X_j$. $\varepsilon$ is a real number in $(0,1)$, which prevents the search space of the particles from becoming too large.

    The distance between any two particles $X_i=(x_{i,1},x_{i,2},\ldots,x_{i,D})$ and $X_j=(x_{j,1},x_{j,2},\ldots,x_{j,D})$ is calculated as the Euclidean distance:

    $d_{ij}=\sqrt{\sum_{k=1}^{D}\left(x_{i,k}-x_{j,k}\right)^2}$, (3.6)

    where $D$ is the dimension of a particle, $x_{i,k}$ represents the value of $X_i$ in the $k$th dimension, and $x_{j,k}$ denotes the value of $X_j$ in the $k$th dimension.
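    A minimal sketch of Eqs (3.5) and (3.6), assuming the exponential form reconstructed above; the default value of $\varepsilon$ is only a placeholder within the required interval $(0,1)$.

        import numpy as np

        def electric_field_force(X_i, X_j, eps=0.5):
            # Euclidean distance between the two particles, Eq (3.6).
            d_ij = np.linalg.norm(X_i - X_j)
            # The force decays with the distance, Eq (3.5); eps lies in (0, 1).
            return eps * np.exp(-d_ij)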

    This paper adopts the same method to build $PE_1$ and $PE_2$, but the selected population is different. $PG_1$ and $PG_2$ are also constructed in the same way. Therefore, we take positively charged particles as an example to illustrate how to build $PE_1=(PE_{1,1},PE_{1,2},\ldots,PE_{1,D})$ and $PG_1=(PG_{1,1},PG_{1,2},\ldots,PG_{1,D})$.

    $n_e$ elite particles are selected from the negatively charged subpopulation to construct the attractive sample $PE_1$ through the electric field force-based comprehensive learning strategy, directing the particles to a better area. These $n_e$ elite particles constitute the set $S_3$. For the currently updated positively charged particle $X_i$, we calculate the electric field force $f_{ij}$ of each elite particle $X_j$ in $S_3$ on $X_i$, where $j$ represents the serial number of the elite particle in $S_3$.

    $PE_{1,d}=\frac{1}{n_e}\sum_{X_j\in S_3}pbest_{j,d}\cdot f_{ij}$, (3.7)

    where $d=1,2,\ldots,D$, $PE_{1,d}$ is the value of $PE_1$ in the $d$th dimension, $pbest_{j,d}$ represents the $d$th dimension of the best historical position of the elite particle $X_j$ in $S_3$, and $f_{ij}$ is the electric field force of $X_j$ on $X_i$.

    Similarly, $n_g$ general particles are selected from the positively charged subpopulation to construct the repulsive sample $PG_1$ through the electric field force-based comprehensive learning strategy. These $n_g$ general particles constitute the set $S_2$. For the currently updated positively charged particle $X_i$, we calculate the electric field force $f_{ik}$ of each general particle $X_k$ in $S_2$ on $X_i$, where $k$ represents the serial number of the general particle in $S_2$.

    $PG_{1,d}=\frac{1}{n_g}\sum_{X_k\in S_2}pbest_{k,d}\cdot f_{ik}$, (3.8)

    where $d=1,2,\ldots,D$, $PG_{1,d}$ is the value of $PG_1$ in the $d$th dimension, $pbest_{k,d}$ represents the $d$th dimension of the best historical position of the general particle $X_k$ in $S_2$, and $f_{ik}$ is the electric field force of $X_k$ on $X_i$.

    Algorithm 1 lists the pseudo-code of EFCLS, taking positively charged particles as an example; a Python sketch is given after the algorithm. It is worth noting that when we construct $PE_2$, $X_j$ comes from $S_1$, and when we build $PG_2$, $X_k$ comes from $S_4$.

    Algorithm 1 electric field force-based comprehensive learning strategy
    1: Sort positively charged particles and negatively charged particles respectively by fitness value
    2: for i = 1 to n1 do
    3:  for Xj in S3 do
    4:    Calculate the distance dij between Xi and Xj by Eq (3.6)
    5:    Calculate the electric field force fij of Xj to Xi by Eq (3.5)
    6:   end for
    7:  Construct the attractive sample PE1 by Eq (3.7)
    8:  for Xk in S2 do
    9:    Calculate the distance dik between Xi and Xk by Eq (3.6)
    10:    Calculate the electric field force fik of Xk to Xi by Eq (3.5)
    11:  end for
    12:  Construct the repulsive sample PG1 by Eq (3.8)
    13: end for
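    Read as Python, Algorithm 1 amounts to the sketch below for one positively charged particle $X_i$; for a negatively charged particle, the elite set becomes $S_1$ and the general set $S_4$. The function and argument names are illustrative, not the authors' reference implementation.

        import numpy as np

        def efcls_samples(X_i, X, pbest, S3, S2, eps=0.5):
            # X, pbest: (n, D) arrays of current positions and best historical positions;
            # S3, S2: index sets of the elite (opposite-charge) and general (same-charge)
            # particles selected by the tournament mechanism.
            D = X.shape[1]
            PE1 = np.zeros(D)
            for j in S3:
                f_ij = eps * np.exp(-np.linalg.norm(X_i - X[j]))  # Eqs (3.5)-(3.6)
                PE1 += pbest[j] * f_ij                            # Eq (3.7)
            PE1 /= len(S3)
            PG1 = np.zeros(D)
            for k in S2:
                f_ik = eps * np.exp(-np.linalg.norm(X_i - X[k]))
                PG1 += pbest[k] * f_ik                            # Eq (3.8)
            PG1 /= len(S2)
            return PE1, PG1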


    In MSPSO, once the particles are updated, the selection of elite particles and general particles changes according to the fitness values. To enhance the diversity of the population, we adaptively adjust the weight coefficients $\alpha_1$, $\beta_1$, $\alpha_2$, and $\beta_2$ using the historical information of the particles.

    We assign a weight coefficient $w_i$ to each particle and initialize $w_i$ with a random real number generated from $N(0.1,\sigma^2)$. Besides, real numbers randomly generated from the Gaussian distributions $N(\mu_1,\sigma^2)$, $N(\mu_2,\sigma^2)$, $N(\mu_3,\sigma^2)$, and $N(\mu_4,\sigma^2)$ are assigned to $\alpha_2$, $\beta_1$, $\alpha_1$, and $\beta_2$, respectively. $\mu_1$, $\mu_2$, $\mu_3$, and $\mu_4$ are updated according to the historical knowledge of the elite particles and general particles of the subpopulations as in Equation (3.9).

    $\mu_k=\lambda\cdot\frac{1}{|S_k|}\sum_{X_i\in S_k}w_i,\quad k=1,2,3,4$, (3.9)

    where $\lambda$ represents the degree to which $\mu_k\ (k=1,2,3,4)$ learns from the historical knowledge of the corresponding subpopulation, $S_k\ (k=1,2,3,4)$ represents the four sets filtered through the tournament mechanism, and $|S_k|$ denotes the number of particles in $S_k$. We update $w_i$ with a random real number generated from its corresponding $N(\mu_k,\sigma^2)$. Based on experience, we set $\sigma$ to 0.1.
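    The adaptive update can be sketched in Python as follows, assuming w is a NumPy array holding the per-particle coefficients $w_i$ and S1..S4 are the index sets from the tournament mechanism; the default $\lambda$ and $\sigma$ follow the values reported in the paper ($\lambda=0.4$ in Section 4, $\sigma=0.1$ above), and the function name is illustrative.

        import numpy as np

        def update_coefficients(w, S1, S2, S3, S4, lam=0.4, sigma=0.1):
            # mu_k = lambda * mean of w_i over S_k, Eq (3.9).
            mu = [lam * np.mean(w[S_k]) for S_k in (S1, S2, S3, S4)]
            # N(mu_1, sigma^2) -> alpha_2, N(mu_2, sigma^2) -> beta_1,
            # N(mu_3, sigma^2) -> alpha_1, N(mu_4, sigma^2) -> beta_2.
            alpha2, beta1, alpha1, beta2 = (np.random.normal(m, sigma) for m in mu)
            return alpha1, beta1, alpha2, beta2, mu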

    Although the attraction and repulsion in the velocity update equation can disperse the particles and facilitate exploration, the particles may fall into the locally optimal values of the two subpopulations. Therefore, to reduce the risk of falling into the local optimum, we construct a global learning sample GM by SWLS to guide particle motion. Algorithm 2 lists the pseudo-code of SWLS.

    Algorithm 2 segment-based weighted learning strategy
    1: Initialize the number of segments of particle dimensions m=D/10
    2: Sort particles in ascending order by fitness value, and select some top particles to form an elite population
    3: for j = 1 to m do
    4:  E ← randomly select k particles from the elite population
    5:  for d = 10(j-1)+1 to 10j do
    6:    Construct the global elite sample GMd by Eq (3.10)
    7:  end for
    8: end for


    Firstly, the dimensions of the global learning sample are divided evenly into m segments, and we let each segment learn from the elite particles of the entire population to construct the learning sample. Specifically, we sort all particles in ascending order through the tournament mechanism and select a number of top particles to form an elite population. For each segment, we select k particles randomly from the elite population. Then we use a weighting strategy to construct the global learning sample $GM=(GM_1,GM_2,\ldots,GM_D)$.

    $GM_d=\sum_{i=1}^{k}\frac{f(E_i)}{\sum_{j=1}^{k}f(E_j)+\gamma}E_{i,d}$, (3.10)

    where $d=1,2,\ldots,D$ and $GM_d$ represents the value of $GM$ in the $d$th dimension. $E$ is the set of the $k$ selected elite particles, $E_j$ denotes the $j$th elite particle in $E$, and $E_{i,d}$ is the value of $E_i$ in the $d$th dimension. The fitness function is denoted by $f$, and $\gamma$ is a small positive number that prevents the denominator from being zero.
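    Algorithm 2 and Eq (3.10) can be sketched as follows; here the elite values $E_i$ are taken to be the particles' current positions, the segment length of 10 follows Algorithm 2, and $\gamma$, k, and the variable names are illustrative assumptions.

        import numpy as np

        def swls_global_sample(X, fitness, elite_idx, k, seg_len=10, gamma=1e-12):
            # X: (n, D) particle positions; fitness: length-n array (minimization);
            # elite_idx: indices of the elite population; k particles are drawn per segment.
            n, D = X.shape
            GM = np.zeros(D)
            for start in range(0, D, seg_len):                    # m = D / seg_len segments
                chosen = np.random.choice(elite_idx, size=k, replace=False)
                f = fitness[chosen]
                weights = f / (f.sum() + gamma)                   # fitness-proportional weights
                seg = slice(start, min(start + seg_len, D))
                GM[seg] = weights @ X[chosen][:, seg]             # Eq (3.10)
            return GM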

    In this way, the particles can obtain more comprehensive information, which is conducive to global exploration and increases the probability of finding the global best value. Based on SWLS, we calculate the electric field force of the newly constructed sample on the currently updated particle, which represents the influence of the global learning sample GM on the particles in the electric field.

    By integrating the above strategies, Algorithm 3 shows the implementation process of MSPSO.

    Algorithm 3 MSPSO algorithm
    1: /*Initialization*/
    2: Determine the size of two subpopulations: n1=n2=n/2, the first n1 particles are positively charged particles, and the remaining n2 particles are negatively charged particles
    3: for i = 1 to n do
    4:  Randomly initialize Vi and Xi
    5:  Evaluate f(Xi); pbesti=Xi
    6: end for
    7: Initialize all particles' weight coefficient wi, set t=1
    8: /*Main Loop*/
    9: while t<Maxiter do
    10:  Compute ωt by Eq (3.4)
    11:  Compute μ1, μ2, μ3 and μ4 by Eq (3.9)
    12:  Update α1, β1, α2, β2
    13:  Construct the global elite sample GMt by SWLS
    14:  for i = 1 to n1 do
    15:    Construct PEt1 and PGt1 by EFCLS
    16:    Compute the electric field force fig of GMt to Xi by Eq (3.5)
    17:    Update Vi by (3.1)
    18:  end for
    19:  for i = n1+1 to n1+n2 do
    20:    Construct PEt2 and PGt2 by EFCLS
    21:    Compute the electric field force fig of GMt to Xi by Eq (3.5)
    22:    Update Vi by (3.2)
    23:  end for
    24:  Update all particles' Xi by (3.3)
    25:  Update all particles' weight coefficient wi and pbesti
    26:  t=t+1
    27: end while


    In this section, we conduct experiments to evaluate the performance of MSPSO. Firstly, the parameters of MSPSO are tuned to optimize its performance. Then the diversity of the algorithm's search space is analyzed through experiments, and the effectiveness of the newly proposed strategies is verified.

    We carry out experiments to analyze the influence of the parameters $n_e$, $n_g$ and $\lambda$ involved in MSPSO on the algorithm's performance. We set the number of particles in the positively charged subpopulation to be the same as the number of particles in the negatively charged subpopulation, $n_1=n_2=n/2$. For simplicity, we set $n_e=\eta_e\cdot n/2$ and $n_g=\eta_g\cdot n/2$. The experimental settings are a population size n = 60 and a maximum number of function evaluations MaxFes = 1000D. Each parameter is varied from 0 to 1 in steps of 0.1. One unimodal function (Schwefel's P2.21) and three multimodal functions (Quartic, Alpine, Penalized 1) are employed to evaluate the results. The evaluation standard is the mean value over 30 independent runs, and the tests are performed on 50D problems. Note that, to analyze the influence of each parameter on the performance of the algorithm separately, we set a default value for each parameter, namely $\eta_e=0.5$, $\eta_g=0.5$, $\lambda=0.5$.

    (1) Sensitivity of ηe

    ηe represents the proportion of elite particles that are attractive to the current update particle. And its function is to guide particles to a better area. A smaller ηe will increase the convergence speed of the algorithm, but it may also cause the particles to ignore the information of other better particles. Although a larger ηe can obtain more information from the oppositely charged particles, it may cause the particles to evolve in a worse direction. From the first row of Figure 1, we can see that a smaller ηe is more conducive to improving the accuracy, so we set ηe to 0.1.

    Figure 1.  Sensitivity of ηe, ηg and λ.

    (2) Sensitivity of ηg

    ηg represents the proportion of general particles that repel the current update particle. And its function is to disperse the particle swarm and drive the poorer particles to explore other directions, increasing the ability of global exploration. A smaller ηg may increase the convergence speed of the algorithm but may cause the particles to gather too much and weaken the global exploring ability. A larger ηg would make the particles more dispersed but may change the direction of movement of better particles. According to the experimental results in the second row of Figure 1, ηg is better in the range of [0.1,0.2], and we set ηg to 0.1.

    (3) Sensitivity of λ

    λ represents the degree to which the weight coefficients of attraction and repulsion learn from the historical information of elite particles and general particles. If λ is small, the knowledge of elite particles or general particles has less influence on the adjustment of μk, which may cause some useful historical information of elite particles or general particles to be ignored. The greater the λ, the greater the dependence of μk on the elite or general experience. Based on the experimental results in the third row of Figure 1, λ=0.4 is adopted in this work.

    According to the above experiment and analysis, we will set ηe, ηg and λ to 0.1, 0.1 and 0.4 in the subsequent experiments.

    MSPSO introduces the electric field into PSO and proposes two learning strategies to improve the performance of the algorithm, namely an electric field force-based comprehensive learning strategy and a segment-based weighted learning strategy. Additionally, we also adopt a parameter adaptation strategy to update the attractive and repulsive weight coefficients. To verify the effectiveness of the proposed strategies, we selected four representative test functions for the population diversity experiment: a unimodal function (Different power) and three multimodal functions (Quartic, Ackley, Penalized 1). The diversity experiment settings are as follows: the population size n = 60, the problem dimension D = 50, and the maximum number of function evaluations MaxFes = 1000D. Population diversity is defined according to reference [30].

    $diversity(n)=\frac{1}{n}\sum_{i=1}^{n}\sqrt{\sum_{j=1}^{D}\left(x_{i,j}-\bar{x}_j\right)^2}$, (4.1)

    where $x_{i,j}$ represents the position of the $i$th particle in the $j$th dimension, and $\bar{x}_j$ represents the average position of all particles in the $j$th dimension.
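    Eq (4.1) is simply the mean Euclidean distance of the particles to the swarm centroid and can be computed, for example, as:

        import numpy as np

        def swarm_diversity(X):
            # X: (n, D) array of particle positions; Eq (4.1).
            centroid = X.mean(axis=0)  # the average position in each dimension
            return np.mean(np.linalg.norm(X - centroid, axis=1))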

    MSPSO-EFF, MSPSO-PA, and MSPSO-SEW denote the variants of MSPSO in which the electric field force, the parameter adaptation, and the segment-based weighted learning strategy are removed, respectively. The experimental comparison results for diversity are shown in the first row of Figure 2. In addition, to further analyze the different effects of these strategies on the algorithm, we also experimented on the accuracy, as shown in the second row of Figure 2.

    Figure 2.  Comparison results on the new strategies.

    As shown in Figure 2, MSPSO-SEW delivers the best performance in terms of diversity on the four test functions. However, MSPSO-SEW sacrifices the convergence accuracy of the algorithm and cannot provide the best solution accuracy. On the contrary, MSPSO-PA loses diversity faster than the other algorithms. Although MSPSO does not obtain the best performance in terms of diversity, it is always better than MSPSO-EFF except for the Quartic function, which shows that the electric field force is beneficial to enhancing the diversity of the population. Additionally, compared with MSPSO-SEW, MSPSO-EFF, and MSPSO-PA, MSPSO has a smaller advantage in convergence speed but always attains the highest accuracy.

    We can draw some preliminary conclusions about the newly introduced strategies from the above experimental results. Firstly, in terms of diversity, the electric field force and the adaptive update of the weight coefficients both play an essential role. Secondly, all three strategies help improve the accuracy of the algorithm.

    To evaluate the performance of the MSPSO algorithm from multiple aspects, we selected sixteen widely used benchmark functions; their detailed information is shown in Table 1. The third and fourth columns of Table 1 give the search range and the global best value of each benchmark function, respectively. The benchmark functions include two categories: unimodal functions (f1-f8) and multimodal functions (f9-f16). Among the multimodal functions, f11-f16 are relatively more complicated than f9-f10.

    Table 1.  Benchmark functions.
    Name | Benchmark Functions | Search Range | $f_m$
    Sphere | $f_1(x)=\sum_{i=1}^{D}x_i^2$ | $[-100,100]^D$ | 0
    Schwefel's P2.21 | $f_2(x)=\max_i(|x_i|),\ i=1,2,\ldots,D$ | $[-100,100]^D$ | 0
    Schwefel's P2.22 | $f_3(x)=\sum_{i=1}^{D}|x_i|+\prod_{i=1}^{D}|x_i|$ | $[-10,10]^D$ | 0
    Different power | $f_4(x)=\sum_{i=1}^{D}|x_i|^{i+1}$ | $[-100,100]^D$ | 0
    Bent cigar | $f_5(x)=x_1^2+10^6\sum_{i=2}^{D}x_i^2$ | $[-100,100]^D$ | 0
    Discus | $f_6(x)=10^6x_1^2+\sum_{i=2}^{D}x_i^2$ | $[-100,100]^D$ | 0
    Zakharov | $f_7(x)=\sum_{i=1}^{D}x_i^2+\left(\sum_{i=1}^{D}0.5x_i^2\right)^2+\left(\sum_{i=1}^{D}0.5x_i\right)^4$ | $[-15,10]^D$ | 0
    Rosenbrock | $f_8(x)=\sum_{i=1}^{D-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$ | $[-30,30]^D$ | 0
    Quartic | $f_9(x)=\sum_{i=1}^{D}ix_i^4+random[0,1]$ | $[-1.28,1.28]^D$ | 0
    Alpine | $f_{10}(x)=\sum_{i=1}^{D}\left|x_i\sin x_i+0.1x_i\right|$ | $[-10,10]^D$ | 0
    Schwefel's P2.26 | $f_{11}(x)=-\sum_{i=1}^{D}x_i\sin\left(\sqrt{|x_i|}\right)$ | $[-500,500]^D$ | -12569.5
    Rastrigin | $f_{12}(x)=\sum_{i=1}^{D}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | $[-5.12,5.12]^D$ | 0
    Ackley | $f_{13}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos 2\pi x_i\right)+20+e$ | $[-32,32]^D$ | 0
    Griewank | $f_{14}(x)=\frac{1}{4000}\sum_{i=1}^{D}x_i^2-\prod_{i=1}^{D}\cos\left(x_i/\sqrt{i}\right)+1$ | $[-600,600]^D$ | 0
    Penalized 1 | $f_{15}(x)=\frac{\pi}{D}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{D-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_D-1)^2\right\}+\sum_{i=1}^{D}u(x_i,10,100,4)$, where $y_i=1+0.25(x_i+1)$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a\le x_i\le a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | $[-50,50]^D$ | 0
    Penalized 2 | $f_{16}(x)=0.1\left\{\sin^2(3\pi y_1)+\sum_{i=1}^{D}(y_i-1)^2\left[1+\sin^2(3\pi y_{i+1})\right]+(x_D-1)^2\left[1+\sin^2(2\pi y_D)\right]\right\}+\sum_{i=1}^{D}u(x_i,10,100,4)$, where $y_i$ and $u$ are as in Penalized 1 | $[-50,50]^D$ | 0
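    As a concrete illustration, two of the functions in Table 1 can be written in vectorized Python as follows (a sketch; x is assumed to be a NumPy array of length D):

        import numpy as np

        def sphere(x):
            # f1: Sphere function.
            return np.sum(x ** 2)

        def rastrigin(x):
            # f12: Rastrigin function.
            return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)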


    We selected eight well-known PSO-based evolutionary algorithms to verify the superiority of the MSPSO algorithm, including PSO, DMSPSO, CLPSO, OBL-CPSO, CLPSO-LOT, MPCPSO, XPSO, and MIONPSO. The specific parameter settings of each algorithm are shown in Table 2. Note that, for fairness of comparison, each algorithm's parameter settings and experimental settings are the same as those in the corresponding original papers.

    Table 2.  The details of baseline algorithms.
    Algorithm Year Parameter settings
    PSO[44] 1998 ω=0.7298,c1=c2=1.49445
    DMSPSO[29] 2005 ω=0.7298,c1=c2=1.49445,R=10,L=100
    CLPSO[41] 2006 ω=[0.4,0.9],c=1.49445,m=7
    OBL-CPSO[42] 2016 ω ∈ [0.2, 0.7]
    CLPSO-LOT[34] 2019 ω=[0.4,0.9],c=1.49445,m=7,g=30
    MPCPSO[36] 2020 ω=0.7298,c1=c2=1.49445,F=0.5,t=0.5
    XPSO[37] 2020 n=50, ω=[0.4,0.9], η=0.2, stagmax=5, p=0.5
    MIONPSO[35] 2021 n=200,c=1.49445,SN=50,R=10,α=0.2,β=0.04


    The experimental settings are as follows: the population size n = 60, the maximum number of function evaluation MaxFes = 1000D, each algorithm runs 30 times independently. We adopt the evaluation indicators commonly used in the PSO algorithm: average and standard deviation. Additionally, to show the performance characteristics of each algorithm on different dimensions, we conducted experiments in 30D, 50D, and 100D.

    Table 3, Table 4, and Table 5 show the results of different algorithms on different dimensions and benchmark functions, including the running time of the CPU.

    Table 3.  Comparison results of all algorithms on benchmark functions (30D).
    MSPSO MIONPSO XPSO MPCPSO CLPSO-LOT OBL-CPSO CLPSO DMSPSO PSO
    f1 Mean 0.00E+00 0.00E+00 1.88E-14 1.30E-62 1.24E+00 9.93E-32 3.17E+01 9.10E+00 3.09E-20
    Std 0.00E+00 0.00E+00 7.08E-14 4.36E-62 6.63E+00 3.65E-31 1.50E+01 3.64E+00 4.01E-20
    Time(s) 11.03 18.94 10.89 42.03 53.77 1.48 52.64 2.46 0.38
    f2 Mean 4.10E-198 0.00E+00 3.20E-04 1.63E-32 3.41E+01 2.89E-16 4.29E+01 2.86E+00 1.32E-01
    Std 0.00E+00 0.00E+00 4.96E-04 2.61E-32 7.07E+00 1.48E-15 4.96E+00 8.55E-01 1.00E-01
    Time(s) 10.99 19.52 10.85 40.85 55.26 1.44 54.17 2.50 0.36
    f3 Mean 1.32E-197 0.00E+00 2.91E-07 3.42E-33 9.17E-03 9.06E-17 6.58E-01 1.25E+00 3.52E-08
    Std 0.00E+00 0.00E+00 6.53E-07 6.12E-33 6.11E-03 2.84E-16 3.03E-01 4.06E-01 6.29E-08
    Time(s) 11.18 19.94 10.91 40.46 53.41 1.79 52.69 2.70 0.54
    f4 Mean 0.00E+00 1.28E+21 2.79E-07 2.78E+24 9.99E+28 3.36E-47 5.87E+27 1.57E+07 3.33E+12
    Std 0.00E+00 6.75E+21 1.06E-06 1.23E+25 5.33E+29 1.46E-46 1.73E+28 5.04E+07 1.80E+13
    Time(s) 14.40 41.72 14.60 45.81 56.02 8.19 58.02 6.74 3.48
    f5 Mean 0.00E+00 0.00E+00 3.54E-02 1.10E+09 3.46E+04 2.69E-26 2.62E+07 7.56E+06 3.33E+02
    Std 0.00E+00 0.00E+00 1.91E-01 2.02E+08 7.82E+04 1.13E-25 1.56E+07 2.89E+06 1.80E+03
    Time(s) 11.03 18.19 10.89 41.98 54.36 1.66 54.42 2.64 0.46
    f6 Mean 0.00E+00 1.52E+03 1.08E-03 4.18E-32 5.26E-02 9.92E-31 2.47E+02 2.66E+01 9.54E-18
    Std 0.00E+00 9.33E+02 5.12E-03 2.23E-31 1.30E-01 5.09E-30 2.36E+02 9.28E+00 3.84E-17
    Time(s) 11.00 17.77 11.08 41.16 55.72 1.76 52.92 2.70 0.47
    f7 Mean 0.00E+00 1.17E+02 6.25E-17 1.25E-64 4.55E+00 3.52E+02 6.20E+00 3.70E-01 3.44E+00
    Std 0.00E+00 3.18E+01 1.78E-16 5.03E-64 3.85E+00 7.18E+01 2.23E+00 1.21E-01 1.85E+01
    Time(s) 11.80 21.99 11.57 41.90 59.73 4.67 53.98 3.40 1.12
    f8 Mean 2.88E+01 2.90E+01 2.51E+01 2.71E+01 9.60E+02 2.81E+01 1.94E+04 2.50E+02 3.71E+01
    Std 2.98E-02 1.16E-03 4.27E-01 2.05E-01 1.54E+03 3.70E-01 9.59E+03 1.18E+02 3.23E+01
    Time(s) 16.45 53.71 16.44 47.26 60.28 12.17 58.41 9.29 5.67
    f9 Mean 1.47E-05 1.63E-02 3.12E-03 1.45E-03 9.37E-02 1.92E-05 1.96E-01 9.06E-03 7.93E-03
    Std 1.46E-05 9.37E-03 1.33E-03 1.49E-03 5.40E-02 2.11E-05 5.96E-02 2.91E-03 2.75E-03
    Time(s) 11.77 23.35 12.00 42.99 52.05 3.38 55.42 3.62 1.33
    f10 Mean 4.98E-198 0.00E+00 2.11E-04 1.09E-33 2.95E+00 1.12E-17 1.27E+00 1.40E-01 5.18E-08
    Std 0.00E+00 0.00E+00 3.19E-04 3.39E-33 9.23E-01 2.80E-17 4.15E-01 8.01E-02 1.76E-07
    Time(s) 11.04 18.12 10.96 41.13 51.79 1.73 55.74 2.60 0.53
    f11 Mean -2.17E+03 -5.98E+03 -7.61E+03 -1.25E+04 -8.09E+03 -3.18E+03 -8.15E+03 -7.18E+03 -6.75E+03
    Std 3.31E+02 5.36E+02 2.25E+03 6.57E+01 3.08E+02 9.77E+02 3.31E+02 8.10E+02 8.68E+02
    Time(s) 11.07 17.81 11.44 41.89 51.04 1.81 57.05 2.58 0.50
    f12 Mean 0.00E+00 0.00E+00 1.37E-01 4.74E-16 1.69E+01 0.00E+00 4.33E+01 1.90E+01 4.63E+01
    Std 0.00E+00 0.00E+00 7.39E-01 1.65E-15 4.33E+00 0.00E+00 9.11E+00 3.72E+00 1.24E+01
    Time(s) 10.97 18.66 11.01 47.17 49.79 2.04 54.65 2.70 0.63
    f13 Mean 0.00E+00 7.11E-15 3.41E-10 7.58E-15 2.66E+00 2.37E-16 3.46E+00 2.18E+00 4.98E-01
    Std 0.00E+00 0.00E+00 5.85E-10 6.27E-15 1.51E+00 8.86E-16 6.09E-01 3.61E-01 6.95E-01
    Time(s) 11.50 21.34 11.39 51.93 52.98 2.83 52.25 3.25 1.01
    f14 Mean 0.00E+00 0.00E+00 5.36E-02 0.00E+00 1.58E-01 0.00E+00 1.60E+00 1.09E+00 1.04E-02
    Std 0.00E+00 0.00E+00 1.61E-01 0.00E+00 1.91E-01 0.00E+00 3.27E-01 4.40E-02 2.16E-02
    Time(s) 14.88 44.28 15.52 49.33 56.73 10.08 55.81 7.33 4.30
    f15 Mean 3.24E-01 1.55E+00 1.84E-07 3.38E-04 3.33E+00 2.02E-01 4.23E+01 2.25E-01 5.53E-02
    Std 1.06E-01 6.45E-02 5.85E-07 1.02E-04 5.09E+00 4.91E-02 7.02E+01 1.95E-01 1.44E-01
    Time(s) 16.98 60.12 17.03 49.10 60.03 14.09 58.79 10.14 6.24
    f16 Mean 2.13E+00 5.51E+00 7.35E-04 1.17E-01 5.95E-01 2.59E+00 4.22E+00 9.67E-01 3.66E-03
    Std 4.96E-01 6.43E-01 2.74E-03 4.68E-01 1.14E+00 3.19E-01 2.58E+00 3.81E-01 1.03E-02
    Time(s) 26.06 123.38 27.23 59.60 70.05 33.51 71.04 21.66 15.10

    Table 4.  Comparison results of all algorithms on benchmark functions (50D).
    MSPSO MIONPSO XPSO MPCPSO CLPSO-LOT OBL-CPSO CLPSO DMSPSO PSO
    f1 Mean 0.00E+00 0.00E+00 2.13E-10 3.09E-64 1.94E-01 2.80E-32 9.27E-01 1.02E+02 9.59E-08
    Std 0.00E+00 0.00E+00 9.02E-10 8.52E-64 1.76E-01 9.54E-32 4.91E-01 3.53E+01 2.58E-07
    Times 15.22 18.86 17.40 67.17 107.23 1.53 85.96 3.50 0.44
    f2 Mean 1.99E-198 0.00E+00 1.21E-03 4.27E-33 3.42E+01 1.30E-17 4.64E+01 7.86E+00 5.30E+00
    Std 0.00E+00 0.00E+00 2.25E-03 6.15E-33 5.94E+00 5.65E-17 2.70E+00 1.39E+00 9.98E-01
    Times 14.98 19.04 17.70 67.49 107.36 1.48 85.18 2.44 0.41
    f3 Mean 2.20E-198 0.00E+00 1.59E-04 3.16E-34 1.11E-01 1.86E-15 2.14E-01 6.27E+00 6.91E-03
    Std 0.00E+00 0.00E+00 3.50E-04 4.54E-34 3.61E-02 7.38E-15 3.75E-02 1.14E+00 9.49E-03
    Times 15.07 20.36 17.48 67.52 106.16 1.89 87.51 2.74 0.60
    f4 Mean 0.00E+00 4.95E+36 3.16E+05 2.54E+27 3.44E+28 7.23E-43 1.65E+33 6.25E-01 3.33E+28
    Std 0.00E+00 2.63E+37 1.69E+06 1.32E+28 1.85E+29 3.90E-42 8.40E+33 1.95E+00 1.80E+29
    Times 21.42 56.22 24.02 73.94 124.31 13.57 90.84 9.67 5.84
    f5 Mean 0.00E+00 0.00E+00 1.68E+01 1.14E+09 1.71E+05 1.74E-26 8.11E+05 9.50E+07 2.02E-01
    Std 0.00E+00 0.00E+00 8.95E+01 2.68E+08 1.32E+05 8.16E-26 3.15E+05 2.98E+07 4.42E-01
    Times 15.69 19.74 17.44 67.82 105.71 1.70 84.22 2.69 0.52
    f6 Mean 0.00E+00 3.87E+03 7.66E-01 2.43E-29 1.38E-01 3.07E-27 2.03E+00 2.86E+02 3.40E-06
    Std 0.00E+00 2.02E+03 1.37E+00 1.31E-28 5.11E-02 1.46E-26 1.00E+00 8.78E+01 1.64E-05
    Times 15.64 19.50 17.84 66.37 105.17 1.71 84.58 2.62 0.53
    f7 Mean 0.00E+00 3.06E+02 4.13E-10 1.35E-65 1.42E+00 1.04E+03 2.23E+00 6.76E+00 1.51E+02
    Std 0.00E+00 7.55E+01 1.09E-09 5.17E-65 5.57E-01 2.43E+02 5.91E-01 1.74E+00 1.90E+02
    Times 15.64 23.62 18.13 67.09 107.24 3.04 87.26 3.61 1.17
    f8 Mean 4.87E+01 4.90E+01 4.55E+01 2.72E+01 3.50E+02 4.82E+01 1.02E+03 2.45E+03 1.00E+02
    Std 4.61E-02 3.76E-04 4.02E-01 2.35E-01 1.33E+02 3.37E-01 4.92E+02 1.05E+03 4.32E+01
    Times 24.22 80.51 27.19 76.74 118.10 22.21 100.35 14.25 9.66
    f9 Mean 1.60E-05 2.09E-02 5.70E-03 1.22E-03 1.32E-01 1.71E-05 2.22E-01 3.88E-02 2.46E-02
    Std 1.45E-05 1.35E-02 2.97E-03 1.08E-03 2.77E-02 2.03E-05 4.53E-02 1.25E-02 1.07E-02
    Times 16.58 30.26 19.62 69.16 104.76 5.01 90.74 4.67 2.20
    f10 Mean 3.88E-199 0.00E+00 1.71E-03 2.00E-34 6.15E+00 3.80E-17 2.99E+00 1.37E+00 4.36E-03
    Std 0.00E+00 0.00E+00 1.99E-03 4.12E-34 2.99E+00 7.95E-17 8.70E-01 4.74E-01 8.39E-03
    Times 15.05 19.66 17.52 65.92 104.91 1.79 87.64 2.64 0.59
    f11 Mean -3.24E+03 -7.98E+03 -8.84E+03 -1.25E+04 -1.20E+04 -7.40E+05 -1.22E+04 -1.11E+04 -1.10E+04
    Std 6.01E+02 1.07E+03 3.54E+03 2.12E+02 4.60E+02 3.55E+06 3.58E+02 1.20E+03 9.82E+02
    Times 15.19 19.53 18.27 67.04 102.54 1.78 84.37 2.63 0.57
    f12 Mean 0.00E+00 0.00E+00 7.32E-09 4.14E-16 2.89E+01 0.00E+00 9.66E+01 5.73E+01 8.01E+01
    Std 0.00E+00 0.00E+00 1.47E-08 1.35E-15 8.48E+00 0.00E+00 1.07E+01 1.03E+01 1.52E+01
    Times 15.12 20.34 17.57 71.85 99.92 2.02 85.52 2.80 0.70
    f13 Mean 0.00E+00 7.11E-15 3.79E-06 7.46E-15 1.83E+00 1.18E-16 1.67E+00 3.74E+00 2.19E+00
    Std 0.00E+00 0.00E+00 7.64E-06 4.53E-15 1.55E+00 6.38E-16 3.20E-01 3.88E-01 8.50E-01
    Times 15.52 23.00 18.05 67.31 105.04 2.90 85.03 3.56 1.12
    f14 Mean 0.00E+00 0.00E+00 1.90E-01 2.59E-17 2.69E-01 0.00E+00 9.61E-01 1.97E+00 3.06E-02
    Std 0.00E+00 0.00E+00 2.03E-01 1.40E-16 1.36E-01 0.00E+00 9.62E-02 3.01E-01 4.43E-02
    Times 21.50 62.78 25.19 79.22 111.73 15.99 91.82 12.13 6.89
    f15 Mean 3.19E-01 1.40E+00 7.33E-05 3.22E-04 8.63E-02 3.42E-01 7.31E-01 1.73E+00 4.34E-01
    Std 8.57E-02 6.81E-02 1.01E-04 8.90E-05 1.68E-01 4.85E-02 5.25E-01 7.54E-01 7.55E-01
    Times 25.42 92.85 28.01 78.90 118.17 23.04 97.69 15.90 10.65
    f16 Mean 3.98E+00 9.34E+00 1.26E-02 2.85E-02 3.02E-01 4.96E+00 8.10E-01 1.07E+01 9.52E-03
    Std 8.36E-01 1.08E+00 1.37E-02 1.67E-02 3.60E-01 6.85E-01 2.71E-01 3.09E+00 1.19E-02
    Times 41.15 206.59 45.55 96.25 139.44 54.44 117.62 35.18 26.35

    Table 5.  Comparison results of all algorithms on benchmark functions (100D).
    MSPSO MIONPSO XPSO MPCPSO CLPSO-LOT OBL-CPSO CLPSO DMSPSO PSO
    f1 Mean 0.00E+00 0.00E+00 5.8647E-07 5.46E-60 2.46E+02 1.40E-27 2.01E+03 1.31E+03 1.05E+01
    Std 0.00E+00 0.00E+00 1.6439E-06 2.94E-59 1.75E+02 7.53E-27 4.46E+02 2.02E+02 1.98E+01
    Times 24.91 23.63 33.54 128.20 205.09 1.59 174.23 2.63 0.56
    f2 Mean 7.10E-199 0.00E+00 1.62E-01 7.19E-32 5.26E+01 2.67E-16 6.95E+01 1.37E+01 2.04E+01
    Std 0.00E+00 0.00E+00 8.63E-01 1.35E-31 1.04E+01 1.35E-15 2.64E+00 1.35E+00 1.99E+00
    Times 25.55 22.96 33.52 127.58 207.50 1.67 174.20 2.69 0.53
    f3 Mean 1.93E-197 0.00E+00 4.24E-03 1.26E-32 9.39E+00 5.19E-15 2.66E+01 3.19E+01 4.61E+00
    Std 0.00E+00 0.00E+00 9.01E-03 4.07E-32 4.46E+00 1.32E-14 2.79E+00 3.12E+00 2.35E+00
    Times 25.87 24.14 33.66 127.61 204.80 1.98 174.48 2.92 0.73
    f4 Mean 0.00E+00 1.45E+83 9.47E+28 1.09E+103 2.54E+108 7.67E-51 3.78E+112 5.29E-05 3.33E+106
    Std 0.00E+00 7.79E+83 5.10E+29 5.86E+103 1.34E+109 2.64E-50 2.02E+113 1.72E-04 1.80E+107
    Times 37.89 96.83 47.00 141.20 231.71 26.96 193.48 17.58 11.51
    f5 Mean 0.00E+00 1.33E+01 4.06E+01 5.53E+09 2.08E+08 1.26E-24 2.03E+09 1.20E+09 1.45E+07
    Std 0.00E+00 7.18E+01 9.91E+01 3.96E+08 9.54E+07 5.88E-24 4.57E+08 2.24E+08 4.33E+07
    Times 25.73 23.58 33.59 129.36 205.84 1.72 173.17 2.73 0.63
    f6 Mean 0.00E+00 8.80E+03 7.93E+01 1.05E-46 2.08E+02 7.16E-27 3.10E+03 3.07E+03 3.02E+01
    Std 0.00E+00 3.82E+03 4.10E+01 5.65E-46 1.08E+02 3.85E-26 5.46E+02 5.33E+02 8.20E+01
    Times 25.70 23.67 34.05 128.51 206.93 1.76 172.96 2.79 0.65
    f7 Mean 0.00E+00 9.46E+02 4.56E-07 7.64E-65 2.39E+02 4.03E+03 3.40E+02 1.29E+02 9.21E+02
    Std 0.00E+00 2.63E+02 1.38E-06 1.97E-64 5.93E+01 8.93E+02 4.31E+01 3.05E+01 3.61E+02
    Times 26.05 28.02 33.92 130.75 209.66 3.09 174.00 3.73 1.33
    f8 Mean 9.85E+01 9.90E+01 9.58E+01 9.72E+01 4.01E+04 9.84E+01 8.15E+05 8.52E+04 6.66E+02
    Std 5.70E-02 1.76E-03 6.41E-01 2.34E-01 4.01E+04 2.71E-01 2.09E+05 2.63E+04 4.61E+02
    Times 43.94 149.08 53.27 149.15 227.81 40.76 196.59 27.57 19.68
    f9 Mean 1.40E-05 3.66E-02 1.64E-02 1.61E-03 8.55E-01 1.55E-05 2.50E+00 3.28E-01 1.56E-01
    Std 1.30E-05 3.14E-02 7.67E-03 1.75E-03 2.48E-01 1.86E-05 3.53E-01 7.57E-02 3.32E-02
    Times 28.56 48.02 37.46 134.62 204.35 9.25 177.02 7.24 4.30
    f10 Mean 8.29E-198 1.62E-03 1.14E-02 1.59E-33 2.73E+01 2.79E-15 3.51E+01 1.24E+01 1.30E+00
    Std 0.00E+00 8.42E-03 1.08E-02 2.93E-33 1.01E+01 1.45E-14 3.60E+00 2.05E+00 1.36E+00
    Times 25.19 25.23 33.77 128.85 203.92 1.86 169.35 2.83 0.74
    f11 Mean -4.56E+03 -1.13E+04 -1.41E+04 -4.18E+04 -2.05E+04 -2.44E+04 -2.10E+04 -1.79E+04 -2.01E+04
    Std 8.04E+02 1.63E+03 7.81E+03 1.77E+02 7.73E+02 9.82E+04 5.67E+02 2.32E+03 1.91E+03
    Times 25.27 25.01 34.60 129.76 191.74 1.88 168.23 2.73 0.72
    f12 Mean 0.00E+00 0.00E+00 1.52E-04 6.51E-16 2.00E+02 0.00E+00 4.50E+02 2.54E+02 1.74E+02
    Std 0.00E+00 0.00E+00 6.59E-04 2.88E-15 3.86E+01 0.00E+00 2.28E+01 3.02E+01 2.19E+01
    Times 24.69 25.88 33.19 139.98 186.87 2.12 169.19 2.88 0.86
    f13 Mean 0.00E+00 7.11E-15 6.71E-05 9.12E-15 6.02E+00 0.00E+00 8.83E+00 6.29E+00 4.16E+00
    Std 0.00E+00 0.00E+00 2.03E-04 8.44E-15 2.53E+00 0.00E+00 5.39E-01 4.25E-01 1.07E+00
    Times 25.32 28.79 33.96 126.64 194.72 3.05 168.16 3.41 1.24
    f14 Mean 0.00E+00 0.00E+00 9.99E-01 4.07E-17 3.62E+00 0.00E+00 4.06E+01 1.27E+01 9.42E-01
    Std 0.00E+00 0.00E+00 2.11E-01 1.27E-16 1.74E+00 0.00E+00 9.52E+00 1.67E+00 4.95E-01
    Times 38.60 113.30 48.74 149.56 210.19 28.63 181.40 19.10 15.16
    f15 Mean 3.42E-01 1.30E+00 7.31E-03 3.67E-03 3.64E+02 5.46E-01 3.35E+03 8.09E+00 4.06E+00
    Std 9.73E-02 2.85E-02 2.44E-03 7.74E-04 1.81E+03 4.45E-02 3.79E+03 2.37E+00 1.46E+00
    Times 46.11 174.66 54.43 149.66 226.42 44.88 193.01 29.62 22.45
    f16 Mean 8.59E+00 1.98E+01 8.47E-01 1.53E+00 1.18E+02 1.01E+01 2.03E+02 9.71E+01 2.46E+01
    Std 1.53E+00 3.16E-02 2.35E-01 2.82E+00 8.32E+01 6.89E-01 3.89E+01 1.76E+01 1.84E+01
    Times 77.33 393.10 89.59 183.75 280.93 114.62 255.71 69.21 56.40


    (1) Unimodal functions (f1-f8)

    It can be seen from Table 3 that, when MSPSO solves the 30D problems, it shows better mean values and standard deviations on most unimodal functions. Except for functions f2, f3 and f8, MSPSO finds the optimal value of the other unimodal functions, and it also has strong stability. On functions f2 and f3, the accuracy of MSPSO is slightly lower than that of MIONPSO but better than the other seven algorithms. All algorithms perform similarly on function f8; XPSO performs best, but none finds the optimal value. Besides, MSPSO shows similar behavior on the higher-dimensional problems (Table 4 and Table 5), almost always finding the optimal value, and it won first place five times on both 50D and 100D.

    (2) Multimodal functions (f9-f16)

The results in Table 3 indicate that, among the eight multimodal functions, MSPSO easily escapes local optima on f12, f13 and f14. None of the algorithms finds the optimal value on f9, but MSPSO achieves the highest accuracy. Although MSPSO does not reach the global optimum on f10, its result is close to that of MIONPSO and better than those of the other seven algorithms. However, MSPSO performs poorly on f15 and f16: the global learning sample accelerates the particles' convergence, which can drive them into a local optimum. In addition, Tables 4 and 5 show that MSPSO takes first place 4 times and 5 times on the 50D and 100D multimodal functions, respectively, indicating that its relative performance improves as the dimension increases.

To further illustrate the overall performance of the comparison algorithms in terms of solution accuracy, Table 6 reports a Friedman test (significance level α = 0.05) on the mean values of all algorithms over the sixteen benchmark functions and all dimensions. MSPSO takes first place on the unimodal functions by a clear margin and ranks third on the multimodal functions. Although MSPSO is slightly worse than MPCPSO and OBL-CPSO on multimodal problems, it still ranks first overall, followed by MPCPSO and OBL-CPSO.
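As a hedged illustration (not the authors' evaluation code), the snippet below shows how a Friedman test and average ranks over per-function mean errors can be computed with scipy; the algorithm subset and the numbers are placeholders, not the paper's results.

```python
# Minimal sketch: Friedman test and average ranks over mean errors,
# given a (functions x algorithms) array of results (placeholder values).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

algorithms = ["MSPSO", "MPCPSO", "OBL-CPSO"]   # hypothetical subset
mean_errors = np.array([
    [1e-30, 1e-20, 1e-15],
    [2e-05, 1e-03, 8e-05],
    [3e-01, 9e-01, 4e-01],
    [0.0,   2e-16, 0.0],
])

# Friedman test: do the algorithms differ across the benchmark functions?
stat, p_value = friedmanchisquare(*mean_errors.T)

# Average rank per algorithm (rank 1 = smallest mean error on a function).
avg_ranks = rankdata(mean_errors, axis=1).mean(axis=0)

for name, rank in zip(algorithms, avg_ranks):
    print(f"{name}: average rank {rank:.2f}")
print(f"Friedman statistic = {stat:.3f}, p = {p_value:.4f} (alpha = 0.05)")
```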

    Table 6.  Friedman test of mean values on benchmark functions of different dimensions.
Rank    Algorithm  Ranking (all functions)    Algorithm  Ranking (unimodal)    Algorithm  Ranking (multimodal)
    1 MSPSO 2.51 MSPSO 1.73 MPCPSO 2.90
    2 MPCPSO 3.30 MPCPSO 3.71 OBL-CPSO 3.21
    3 OBL-CPSO 3.58 OBL-CPSO 3.96 MSPSO 3.29
    4 XPSO 4.17 XPSO 4.13 XPSO 4.21
5 MIONPSO 4.40 MIONPSO 4.27 MIONPSO 4.52
    6 PSO 5.60 PSO 5.63 PSO 5.58
    7 CLPSO-LOT 6.67 CLPSO-LOT 6.75 CLPSO-LOT 6.58
    8 DMSPSO 6.96 DMSPSO 6.83 DMSPSO 7.08
    9 CLPSO 7.81 CLPSO 8.00 CLPSO 7.63


The above experimental results show that MSPSO outperforms the other eight comparison algorithms and can effectively handle high-dimensional unimodal and multimodal optimization problems with high robustness and convergence accuracy. This better performance stems from the following advantages. Firstly, the interplay of attraction and repulsion disperses the particles instead of gathering them at one point, which enhances the capability for global exploration; the attraction between particles also exchanges information between the two subpopulations, making the particles co-evolve in a better direction. Secondly, the attractive and repulsive samples constructed by EFCLS make the particles move purposefully and lead them toward better regions, while the global learning sample created by SWLS lets particles obtain more comprehensive information from the elite particles, which is conducive to global exploration. Thirdly, the adaptive adjustment of the weight coefficients enhances population diversity, which benefits the evolution of the population. Finally, the non-linear decrease of the inertia weight makes particles focus on global exploration in the early stage and on local exploitation later, which improves convergence accuracy.
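As an illustration of the last point, the sketch below shows one common way to decrease the inertia weight non-linearly over a run; the schedule and the constants are assumptions for illustration, not necessarily the exact formula used by MSPSO.

```python
# Illustrative sketch only: a non-linear inertia-weight schedule that is large
# early (global exploration) and small late (local exploitation).
def nonlinear_inertia_weight(t, t_max, w_start=0.9, w_end=0.4, power=2.0):
    """Decrease w from w_start to w_end following a power curve."""
    progress = t / t_max
    return w_end + (w_start - w_end) * (1.0 - progress) ** power

# Example: weight at the start, middle and end of a 3000-iteration run.
for t in (0, 1500, 3000):
    print(t, round(nonlinear_inertia_weight(t, 3000), 3))
```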

Tables 3, 4, and 5 record the CPU time consumed by each algorithm on the sixteen test functions, and Table 7 gives the overall ranking of CPU time. The top three are PSO, DMSPSO, and OBL-CPSO, with MSPSO close behind.

    Table 7.  Ranking of CPU time consumption on different dimensions.
    MSPSO MIONPSO XPSO MPCPSO CLPSO-LOT OBL-CPSO CLPSO DMSPSO PSO
    30D 4.44 6.44 4.44 6.81 8.44 2.50 8.31 2.63 1.00
    50D 3.94 6.31 4.94 6.81 8.94 2.50 7.94 2.63 1.00
    100D 4.31 5.25 5.56 6.88 8.94 2.50 7.94 2.63 1.00
    Ave rank 4.23 6.00 4.98 6.83 8.77 2.50 8.06 2.63 1.00
    Final rank 4 6 5 7 9 3 8 2 1


Figure 3 shows the comprehensive performance of MSPSO and the comparison algorithms as a ranking diagram based on both accuracy and CPU time on the benchmark functions; the closer an algorithm lies to the lower-left corner, the better its performance. Figure 3 shows that MSPSO and OBL-CPSO are the best of all algorithms and perform similarly: MSPSO attains higher accuracy than OBL-CPSO but consumes more CPU time. Overall, MSPSO achieves high solution accuracy while remaining competitive in computational cost (see the sketch after the figure caption).

    Figure 3.  Comprehensive performance of all algorithms.
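As a hedged sketch, a diagram like Figure 3 can be assembled from the average ranks reported in Tables 6 and 7; only three algorithms are shown here for brevity, and this is not the authors' plotting code.

```python
# Sketch: scatter each algorithm's average accuracy rank (Table 6) against its
# average CPU-time rank (Table 7); lower-left is better on both axes.
import matplotlib.pyplot as plt

accuracy_rank = {"MSPSO": 2.51, "OBL-CPSO": 3.58, "PSO": 5.60}   # from Table 6
cpu_time_rank = {"MSPSO": 4.23, "OBL-CPSO": 2.50, "PSO": 1.00}   # from Table 7

fig, ax = plt.subplots()
for name in accuracy_rank:
    x, y = accuracy_rank[name], cpu_time_rank[name]
    ax.scatter(x, y)
    ax.annotate(name, (x, y))
ax.set_xlabel("Average accuracy rank (lower is better)")
ax.set_ylabel("Average CPU-time rank (lower is better)")
plt.show()
```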

To further demonstrate the superior performance of the proposed MSPSO, a Nemenyi post-hoc test based on the Friedman ranks was performed, as shown in Figure 4, where CD denotes the critical difference (a short sketch of how CD is computed follows the figure). Figure 4 shows that MSPSO performs significantly better than PSO, CLPSO-LOT, DMSPSO, and CLPSO. Compared with MPCPSO, OBL-CPSO, XPSO, and MIONPSO, MSPSO has the best Friedman rank, but the differences are not statistically significant.

    Figure 4.  Friedman rankings-based Nemenyi test result.
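For reference, the critical difference in a Nemenyi test is usually computed as CD = q_α·sqrt(k(k+1)/(6N)); the sketch below is illustrative only, and the q_α constant and the problem count N are assumptions rather than values taken from the paper.

```python
# Illustrative sketch (not the paper's code): Nemenyi critical difference for
# comparing k algorithms over N problems; two algorithms differ significantly
# when their average Friedman ranks differ by more than CD.
import math

def nemenyi_cd(k, n_problems, q_alpha):
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n_problems))

# Example with 9 algorithms; q_alpha = 3.102 is the commonly tabulated
# alpha = 0.05 constant for k = 9, and n_problems = 16 is an assumed count.
print(round(nemenyi_cd(k=9, n_problems=16, q_alpha=3.102), 3))
```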

In addition, the Wilcoxon signed-rank test with a significance level of α = 0.05 is used to quantify the difference between MSPSO and the baseline algorithms on each dimension, as shown in Table 8. Table 8 shows that MSPSO is a significant improvement over CLPSO-LOT, CLPSO, DMSPSO, and PSO. It is worth noting that there is no significant difference between MSPSO and PSO on the 30D problems; however, the level at which MSPSO differs significantly from PSO tightens from 0.1 to 0.01 as the dimension increases. There is no statistically significant difference between MSPSO and MIONPSO, XPSO, MPCPSO, or OBL-CPSO, although MSPSO obtains better results than these baselines on most of the test functions. Overall, the Friedman test, the Nemenyi test, and the Wilcoxon signed-rank test consistently confirm the effectiveness of MSPSO on the benchmark problems, with MSPSO performing better in high-dimensional spaces.
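As a hedged illustration of how each entry of Table 8 could be produced, the snippet below runs a pairwise Wilcoxon signed-rank test between MSPSO and one baseline; the per-function error values are placeholders, not the paper's data.

```python
# Minimal sketch: pairwise Wilcoxon signed-rank test (alpha = 0.05) between
# MSPSO and one baseline over a set of benchmark functions (placeholder data).
from scipy.stats import wilcoxon

mspso_errors    = [1e-30, 2e-05, 3e-01, 0.0,  1.2e+0, 4e-03]
baseline_errors = [1e-15, 8e-04, 9e-01, 5e-02, 6.3e+0, 2e-01]

stat, p_value = wilcoxon(mspso_errors, baseline_errors)
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"W = {stat}, p = {p_value:.3f} ({verdict} at alpha = 0.05)")
```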

Table 8.  Wilcoxon signed-rank test on the benchmark functions (p-values of MSPSO versus each baseline).
D MIONPSO XPSO MPCPSO CLPSO-LOT OBL-CPSO CLPSO DMSPSO PSO
    30 P-value 0.084 0.605 0.427 0.013 0.300 0.004 0.017 0.070
    50 P-value 0.084 0.352 0.352 0.039 0.084 0.023 0.006 0.026
    100 P-value 0.050 0.379 0.570 0.008 0.101 0.008 0.005 0.009


Dynamic convergence curves over the complete iteration cycle are used to illustrate the convergence behavior on each benchmark function. To highlight the differences in convergence speed, we plot the first 1000 iterations, as shown in Figures 5 and 6.

    Figure 5.  Convergence curves of all algorithms on unimodal functions.
    Figure 6.  Convergence curves of all algorithms on multimodal functions.

For the eight unimodal functions shown in Figure 5, MSPSO maintains both a high convergence speed and high convergence accuracy, whereas most of the other PSO algorithms converge quickly in the early stage and then slow down. On function f8, MSPSO converges quickly in the initial stage, but its final accuracy is relatively poor. As shown in Figure 6, MSPSO has clear advantages in convergence speed and accuracy on most multimodal functions; on f11, f15 and f16, however, it converges quickly early on, then flattens out without reaching the global optimum.
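For completeness, a rough sketch of how such convergence curves are typically generated is given below; the optimizer is a dummy stand-in, not the MSPSO implementation.

```python
# Sketch: record the best-so-far fitness at every iteration and plot the
# first 1000 rounds on a log scale (stand-in optimizer, placeholder fitness).
import matplotlib.pyplot as plt

def run_optimizer(max_iter=1000):
    best_so_far, history = float("inf"), []
    for t in range(max_iter):
        candidate = 1.0 / (t + 1)              # dummy fitness for iteration t
        best_so_far = min(best_so_far, candidate)
        history.append(best_so_far)
    return history

plt.semilogy(run_optimizer(), label="stand-in optimizer")
plt.xlabel("Iteration")
plt.ylabel("Best fitness (log scale)")
plt.legend()
plt.show()
```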

Based on the above analysis, MSPSO achieves the best convergence speed and accuracy among the compared PSO algorithms on both unimodal and multimodal functions, mainly for the following reasons. Firstly, the learning samples constructed by the two proposed strategies make the particle movement no longer blind, keeping particles away from poor search directions and accelerating convergence; at the same time, the samples guide particles toward promising exploration and exploitation regions, reducing the risk of falling into local optima. Additionally, the introduction of the electric field force and the adaptive updating of parameters increase the diversity of the population and further speed up convergence.

In this section, the performance of MSPSO is verified on a widely used practical optimization problem. Dukic et al. [45] proposed a design method for spread spectrum radar polyphase (SSRP) codes based on the properties of the aperiodic autocorrelation function, which is used in the design of polyphase pulse compression codes. Gil-López et al. [46] formulated the SSRP code design as a nonlinear min-max optimization problem, defined as follows.

$$\min_{x\in X} f(x)=\max\{\varphi_1(x),\ldots,\varphi_{2m}(x)\}, \qquad X=\{(x_1,\ldots,x_n)\in\mathbb{R}^n \mid 0\le x_j\le 2\pi,\ j=1,\ldots,n\},$$

where $m=2n-1$ and

$$\begin{aligned} \varphi_{2i-1}(x)&=\sum_{j=i}^{n}\cos\!\left(\sum_{k=|2i-j-1|+1}^{j}x_k\right), && i=1,\ldots,n,\\ \varphi_{2i}(x)&=0.5+\sum_{j=i+1}^{n}\cos\!\left(\sum_{k=|2i-j|+1}^{j}x_k\right), && i=1,\ldots,n-1,\\ \varphi_{m+i}(x)&=-\varphi_i(x), && i=1,\ldots,m. \end{aligned}\tag{5.1}$$
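For concreteness, a direct, unoptimized transcription of Eq (5.1) into a fitness function is sketched below; it assumes the standard SSRP formulation above, including the $\varphi_{m+i}(x)=-\varphi_i(x)$ terms, and is meant only as an illustration, not the authors' implementation.

```python
# Sketch of Eq (5.1): f(x) = max{phi_1(x), ..., phi_{2m}(x)} with m = 2n - 1,
# written for readability rather than speed.
import math
import random

def ssrp_objective(x):
    n = len(x)
    phi = []
    for i in range(1, n + 1):
        # phi_{2i-1}, i = 1, ..., n
        phi.append(sum(
            math.cos(sum(x[k - 1] for k in range(abs(2 * i - j - 1) + 1, j + 1)))
            for j in range(i, n + 1)))
        if i <= n - 1:
            # phi_{2i}, i = 1, ..., n-1
            phi.append(0.5 + sum(
                math.cos(sum(x[k - 1] for k in range(abs(2 * i - j) + 1, j + 1)))
                for j in range(i + 1, n + 1)))
    phi = phi + [-p for p in phi]      # phi_{m+i} = -phi_i, i = 1, ..., m
    return max(phi)

# Example: evaluate a random feasible point with 0 <= x_j <= 2*pi.
x = [random.uniform(0.0, 2.0 * math.pi) for _ in range(5)]
print(round(ssrp_objective(x), 4))
```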

Table 9 compares MSPSO with the other algorithms on the SSRP code design problem; the last row of Table 9 gives the final ranking over the different dimensions. Although MSPSO performs slightly worse than MPCPSO on SSRP, it is better than the other seven algorithms. These results show that the proposed MSPSO also performs well on practical problems and has real application value.

Table 9.  Comparison results on the SSRP code design problem.
    MSPSO MIONPSO XPSO MPCPSO CLPSO-LOT OBL-CPSO CLPSO DMSPSO PSO
    30D 4.98E-06 1.80E-03 1.40E-03 8.55E-16 4.32E-04 1.84E-04 7.45E-02 4.95E-02 3.21E-03
    50D 1.32E-05 3.08E-03 1.31E-03 2.80E-15 3.28E-04 2.86E-04 9.52E-02 6.76E-02 6.75E-03
    100D 2.69E-07 5.12E-03 2.10E-03 1.45E-14 4.73E-04 4.30E-04 1.56E-01 1.43E-01 7.19E-03
    Ave rank 2.00 6.00 5.00 1.00 4.00 3.00 9.00 8.00 7.00
    Final Rank 2 6 5 1 4 3 9 8 7


This section further compares MSPSO with two recently published metaheuristic algorithms, MOMICA [2] and mSSA [3]. MOMICA is an improved ICA that is well suited to multimodal problems; mSSA is an improved SSA applicable to a wider range of optimization problems. Since unimodal functions cannot adequately assess an algorithm's ability to escape local optima, we chose three unimodal functions (f1, f3, f8) and five more complex multimodal functions (f12, f13, f14, f15, f16) for these experiments. The parameter configurations of MOMICA and mSSA are the same as in their original papers, from which their experimental results are taken. Table 10 lists the comparison results over 30 independent runs on the 30D problems. In addition, we performed the Friedman test and the Wilcoxon signed-rank test on the results, reported in the penultimate and last rows of Table 10; the third-to-last row gives the average rank of each algorithm over the eight test functions.

    Table 10.  Comparisons between MSPSO and other two metaheuristic algorithms.
    MSPSO MOMICA mSSA
    f1 Mean 0.00E+00 6.72E-28 0.00E+00
    Std 0.00E+00 6.34E-15 0.00E+00
    Rank 1 3 1
    f3 Mean 1.32E-197 7.16E-17 1.07E-210
    Std 0.00E+00 3.39E-08 0.00E+00
    Rank 2 3 1
    f8 Mean 2.88E+01 2.67E+01 2.84E+01
    Std 2.98E-02 2.65E-02 2.97E-01
    Rank 3 1 2
    f12 Mean 0.00E+00 2.98E-01 0.00E+00
    Std 0.00E+00 7.45E+00 0.00E+00
    Rank 1 3 1
    f13 Mean 0.00E+00 1.06E-13 8.88E-16
    Std 0.00E+00 1.02E-06 2.04E-31
    Rank 1 3 1
    f14 Mean 0.00E+00 4.00E-03 0.00E+00
    Std 0.00E+00 1.32E-04 0.00E+00
    Rank 1 3 1
    f15 Mean 3.24E-01 5.30E-02 1.35E-11
    Std 1.06E-01 2.00E-02 4.04E-12
    Rank 3 2 1
    f16 Mean 2.13E+00 6.54E-01 1.60E-10
    Std 4.96E-01 9.38E-06 5.61E-11
    Rank 3 2 1
    Avg rank 1.88 2.50 1.25
    Friedman 2.06 2.50 1.44
    p-value - 0.779 0.138


Table 10 shows that MSPSO and mSSA are effective on the unimodal problems and also perform well on the multimodal functions. The Wilcoxon signed-rank test indicates no significant difference between MSPSO and the other two algorithms. In terms of solution accuracy and Friedman ranks, mSSA performs best on these test functions, with the proposed MSPSO close behind. It is worth noting that MSPSO is inferior to the other two algorithms on the multimodal functions f15 and f16, possibly because the global learning sample drives the particles too quickly, increasing the probability of premature convergence to a local optimum. These results suggest that MSPSO's ability to escape local optima should be further strengthened so that it can better handle complex optimization problems.

In this study, we propose a multi-sample particle swarm optimization algorithm based on electric field force. In the proposed MSPSO, the electric field force-based comprehensive learning strategy and the segment-based weighted learning strategy are employed to construct learning samples, enhancing the capability for global exploration. Additionally, the adaptive changes of the inertia weight and the weight coefficients strengthen the diversity of the population and help the particles escape local optima. Experiments comparing MSPSO with eight PSO variants demonstrate its superior performance: it achieves faster convergence and higher accuracy on both unimodal and multimodal problems, and it is also effective at solving practical problems.

Nevertheless, the proposed MSPSO algorithm still has some limitations, and further research and improvement are needed. First, although MSPSO converges quickly, its accuracy on some multimodal functions is not ideal; for example, it cannot find the optimal value of f11. Moreover, adjusting the parameters through a normal distribution introduces a degree of randomness that affects the algorithm's accuracy. Our future work will therefore further explore the local exploitation behavior of MSPSO, for instance by matching appropriate strategies to specific particle dimensions to improve search efficiency. Second, only a single application is considered in this article; we will extend the proposed MSPSO to a broader range of applications.

This work was supported by NSFC Grant Nos. 61701060 and 61801067, the Guangxi Colleges and Universities Key Laboratory of Intelligent Processing of Computer Images and Graphics (Project No. GIIP1806), the Science and Technology Research Project of Higher Education of Hebei Province (Grant No. QN2019069), and the Chongqing Key Lab of Computer Network and Communication Technology (CY-CNCL-2017-02).

    All authors declare no conflicts of interest in this paper.



[1] O. Kaplan, E. Çelik, Simplified model and genetic algorithm based simulated annealing approach for excitation current estimation of synchronous motor, Adv. Electr. Comput. Eng., 18 (2018), 75–84. doi: 10.4316/AECE.2018.04009
    [2] N. Mohamed, N. Bilel, A. S. Alsagri, A multi-objective methodology for multi-criteria engineering design, Appl. Soft Comput., 91 (2020), 106204. doi: 10.1016/j.asoc.2020.106204
    [3] E. Çelik, N. Öztürk, Y. Arya, Advancement of the search process of salp swarm algorithm for global optimization problems, Expert Syst. Appl., 182 (2021), 115292. doi: 10.1016/j.eswa.2021.115292
    [4] E. Çelik, Improved stochastic fractal search algorithm and modified cost function for automatic generation control of interconnected electric power systems, Eng. Appl. Artif. Intell., 88 (2020), 103407. doi: 10.1016/j.engappai.2019.103407
    [5] G. Lin, J. Guan, An integrated method based on PSO and EDA for the max-cut problem, Comput. Intell. Neurosci., 2016 (2016), 3420671.
    [6] E. Çelik, A powerful variant of symbiotic organisms search algorithm for global optimization, Eng. Appl. Artif. Intell., 87 (2020), 103294. doi: 10.1016/j.engappai.2019.103294
    [7] E. Çelik, N. Öztürk, A hybrid symbiotic organisms search and simulated annealing technique applied to efficient design of PID controller for automatic voltage regulator, Soft Comput., 22 (2018), 8011–8024. doi: 10.1007/s00500-018-3432-2
    [8] N. Singh, S. B. Singh, E. H. Houssein, Hybridizing salp swarm algorithm with particle swarm optimization algorithm for recent optimization functions, Evol. Intell., (2020), 1–34.
    [9] J. Kennedy, R. Eberhart, Particle swarm optimization, in Proceedings of ICNN'95-international conference on neural networks, IEEE, (1995), 1942–1948.
    [10] P. Taylan, B. Akteke-Ozturk, Mathematical and data mining contributions to dynamics and optimization of gene-environment networks, Int. J. Theor. Phys., 4 (2007), 115–146.
    [11] E. Kropat, G. W. Weber, B. Akteke-Öztürk, Eco-finance networks under uncertainty, in Proceedings of the international conference on engineering optimization, (2008).
    [12] G. W. Weber, İ. Batmaz, G. Köksal, P. Taylan, CMARS: A new contribution to nonparametric regression with multivariate adaptive regression splines supported by continuous optimization, Inverse Probl. Sci. Eng., 20 (2012), 371–400. doi: 10.1080/17415977.2011.624770
    [13] A. Özmen, G. W. Weber, İ. Batmaz, E. Kropat, RCMARS: Robustification of CMARS with different scenarios under polyhedral uncertainty set, Commun. Nonlinear Sci. Numer. Simul., 16 (2011), 4780–4787. doi: 10.1016/j.cnsns.2011.04.001
    [14] A. Özmen, E. Kropat, G. W. Weber, Robust optimization in spline regression models for multi-model regulatory networks under polyhedral uncertainty, Optimization, 66 (2017), 2135–2155. doi: 10.1080/02331934.2016.1209672
    [15] E. Kropat, G. W. Weber, E. B. Tirkolaee, Foundations of semialgebraic gene-environment networks, J. Dynam. Games, 7 (2020), 253. doi: 10.3934/jdg.2020018
    [16] R. K. Agrawal, B. Kaur, P. Agarwal, Quantum inspired Particle Swarm Optimization with guided exploration for function optimization, Appl. Soft Comput., 102 (2021), 107122. doi: 10.1016/j.asoc.2021.107122
[17] Y. Du, F. Xu, A hybrid multi-step probability selection particle swarm optimization with dynamic chaotic inertial weight and acceleration coefficients for numerical function optimization, Symmetry, 12 (2020), 922. doi: 10.3390/sym12060922
    [18] D. Tian, X. Zhao, Z. Shi, Chaotic particle swarm optimization with sigmoid-based acceleration coefficients for numerical function optimization, Swarm Evol. Comput., 51 (2019), 100573. doi: 10.1016/j.swevo.2019.100573
[19] C. Wu, F. Yang, Y. Wu, R. Han, Prediction of crime tendency of high-risk personnel using C5.0 decision tree empowered by particle swarm optimization, Math. Biosci. Eng., 16 (2019), 4135–4150. doi: 10.3934/mbe.2019206
    [20] M. Zhu, K. Wu, Y. Zhou, Z. Wang, J. Qiao, et al., Prediction of cooling moisture content after cut tobacco drying process based on a particle swarm optimization-extreme learning machine algorithm, Math. Biosci. Eng., 18 (2021), 2496–2507. doi: 10.3934/mbe.2021127
    [21] P. Singh, S. Chaudhury, B. K. Panigrahi, Hybrid MPSO-CNN: Multi-level Particle Swarm optimized Hyperparameters of Convolutional Neural Network, Swarm Evol. Comput., 63 (2021), 100863. doi: 10.1016/j.swevo.2021.100863
    [22] J. Rojas-Delgado, R. Trujillo-Rasúa, Training Neural Networks by Continuation Particle Swarm Optimization, in International Workshop on Artificial Intelligence and Pattern Recognition, Springer, (2018), 59–67.
    [23] T. L. Dang, Y. Hoshino, Hardware/software co-design for a neural network trained by particle swarm optimization algorithm, Neural Process Lett., 49 (2019), 481–505. doi: 10.1007/s11063-018-9826-4
    [24] L. M. Abualigah, A. T. Khader, E. S. Hanandeh, A new feature selection method to improve the document clustering using particle swarm optimization algorithm, J. Comput. Sci., 25 (2018), 456–466. doi: 10.1016/j.jocs.2017.07.018
    [25] M. A. Tawhid, K. B. Dsouza, Hybrid binary bat enhanced particle swarm optimization algorithm for solving feature selection problems, Appl. Comput. Inform., 16 (2018), 117–136. doi: 10.1016/j.aci.2018.04.001
    [26] F. Kılıç, Y. Kaya, S. Yildirim, A novel multi population based particle swarm optimization for feature selection, Knowl. Based Syst., 219 (2021), 106894. doi: 10.1016/j.knosys.2021.106894
    [27] X. Wang, Y. Li, Chaotic image encryption algorithm based on hybrid multi-objective particle swarm optimization and DNA sequence, Opt. Lasers Eng., 137 (2021), 106393. doi: 10.1016/j.optlaseng.2020.106393
    [28] H. T. Yau, T. H. Hung, C. C. Hsieh, Bluetooth based chaos synchronization using particle swarm optimization and its applications to image encryption, Sensors, 12 (2012), 7468–7484. doi: 10.3390/s120607468
    [29] J. J. Liang, P. N. Suganthan, Dynamic multi-swarm particle swarm optimizer, in Proceedings 2005 IEEE Swarm Intelligence Symposium, IEEE, (2005), 124–129.
    [30] S. Wang, G. Liu, M. Gao, S. Cao, A. Guo, J. Wang, Heterogeneous comprehensive learning and dynamic multi-swarm particle swarm optimizer with two mutation operators, Inf. Sci., 540 (2020), 175–201. doi: 10.1016/j.ins.2020.06.027
    [31] Q. Zhang, H. G. Li, An improved least squares SVM with adaptive PSO for the prediction of coal spontaneous combustion, Math. Biosci. Eng., 16 (2019), 3169–3182. doi: 10.3934/mbe.2019157
    [32] K. M. Ang, W. H. Lim, N. A. M. Isa, S. S. Tiang, C. H. Wong, A constrained multi-swarm particle swarm optimization without velocity for constrained optimization problems, Expert Syst. Appl., 140 (2020), 112882. doi: 10.1016/j.eswa.2019.112882
    [33] K. Chen, F. Zhou, A. Liu, Chaotic dynamic weight particle swarm optimization for numerical function optimization, Knowl. Based Syst., 139 (2018), 23–40. doi: 10.1016/j.knosys.2017.10.011
    [34] K. Zhang, Q. Huang, Y. Zhang, Enhancing comprehensive learning particle swarm optimization with local optima topology, Inf. Sci., 471 (2019), 1–18. doi: 10.1016/j.ins.2018.08.049
    [35] S. Zhu, S. Zhou, J. Shang, L. Wang, B. Qiang, A multiion particle swarm optimization algorithm based on repellent and attraction forces, Concurr. Comput., 33 (2021), e5979.
    [36] W. Li, X. Meng, Y. Huang, Z. H. Fu, Multipopulation cooperative particle swarm optimization with a mixed mutation strategy, Inf. Sci., 529 (2020), 179–196. doi: 10.1016/j.ins.2020.02.034
    [37] X. Xia, L. Gui, G. He, B. Wei, Y. Zhang, F. Yu, H. Wu, Z. H. Zhan, An expanded particle swarm optimization based on multi-exemplar and forgetting ability, Inf. Sci., 508 (2020), 105–120. doi: 10.1016/j.ins.2019.08.065
    [38] Y. Shi, R. C. Eberhart, Parameter selection in particle swarm optimization, in International conference on evolutionary programming, Springer, (1998), 591–600.
    [39] M. U. Farooq, A. Ahmad, A. Hameed, Opposition-based initialization and a modified pattern for Inertia Weight (IW) in PSO, in 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), IEEE, (2017), 96–101.
    [40] A. Chatterjee, P. Siarry, Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization, Comput. Oper. Res., 33 (2006), 859–871. doi: 10.1016/j.cor.2004.08.012
    [41] J. J. Liang, A. K. Qin, P. N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput., 10 (2006), 281–295. doi: 10.1109/TEVC.2005.857610
    [42] J. Zhou, W. Fang, X. Wu, J. Sun, S. Cheng, An opposition-based learning competitive particle swarm optimizer, in 2016 IEEE Congress on Evolutionary Computation (CEC), IEEE, (2016), 515–521.
    [43] Q. Yang, W. N. Chen, T. Gu, H. Zhang, J. D. Deng, Y. Li, J. Zhang, Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization, IEEE Trans. Cybern., 47 (2017), 2896–2910. doi: 10.1109/TCYB.2016.2616170
    [44] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in IEEE world congress on computational intelligence, IEEE, (1998), 69–73.
    [45] M. L. Dukic, Z. S. Dobrosavljevic, A method of a spread-spectrum radar polyphase code design, IEEE J. Sel. Areas Commun., 8 (1990), 743–749. doi: 10.1109/49.56381
    [46] S. Gil-López, J. Del Ser, S. Salcedo-Sanz, Á. M. Pérez-Bellido, J. Marı, J. A. Portilla-Figueras, et al., A hybrid harmony search algorithm for the spread spectrum radar polyphase codes design problem, Expert Syst. Appl., 39 (2012), 11089–11093. doi: 10.1016/j.eswa.2012.03.063
  • This article has been cited by:

    1. Xin Zhou, Shangbo Zhou, Yuxiao Han, Shufang Zhu, Lévy flight-based inverse adaptive comprehensive learning particle swarm optimization, 2022, 19, 1551-0018, 5241, 10.3934/mbe.2022246
    2. Man-Wen Tian, Yassine Bouteraa, Khalid A. Alattas, Shu-Rong Yan, Abdullah K. Alanazi, Ardashir Mohammadzadeh, Saleh Mobayen, Hiroki Sayama, A Type-3 Fuzzy Approach for Stabilization and Synchronization of Chaotic Systems: Applicable for Financial and Physical Chaotic Systems, 2022, 2022, 1099-0526, 1, 10.1155/2022/8437910
    3. Xiang Ma, Xuan Fang, Leiyan Lv, Kaiyuan Ling, Yang Shi, 2024, Optimal Allocation of Wind and Solar Storage Capacity in Smart Microgrid Based on Particle Swarm Optimization Algorithm, 9798400710155, 93, 10.1145/3685088.3685107
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
