Although multi-objective particle swarm optimization (MOPSO) is easy to implement and converges quickly, it still struggles to balance convergence and diversity. To address this issue, a multi-objective particle swarm optimization with reverse multi-leaders (RMMOPSO) is proposed. First, a global-ranking convergence strategy and a mean-angular-distance diversity strategy are proposed to update the convergence archive and the diversity archive, respectively, improving the convergence and diversity of the solutions stored in the archives. Second, a reverse selection method is proposed to select two global leaders for the particles in the population, which helps each particle obtain appropriate learning exemplars and guides the swarm quickly toward the true Pareto front. Third, an information fusion strategy is proposed to update the personal best and thereby improve the convergence of the algorithm. At the same time, to better balance convergence and diversity, a new particle velocity update method is proposed in which the two global leaders cooperate to guide the flight of the particles, promoting the exchange of social information. Finally, RMMOPSO is compared with several state-of-the-art MOPSOs and multi-objective evolutionary algorithms (MOEAs) on 22 benchmark problems. The experimental results show that RMMOPSO achieves better overall performance.
Citation: Fei Chen, Yanmin Liu, Jie Yang, Meilan Yang, Qian Zhang, Jun Liu. Multi-objective particle swarm optimization with reverse multi-leaders[J]. Mathematical Biosciences and Engineering, 2023, 20(7): 11732-11762. doi: 10.3934/mbe.2023522
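To make the two-leader idea concrete, the following is a minimal sketch of a velocity update in which one leader drawn from a convergence archive and one drawn from a diversity archive jointly guide a particle alongside its personal best. It is an illustration only: the exact RMMOPSO update rule, archive structures and coefficient settings are given in the paper, and all names and parameter values below (velocity_update, w, c1, c2) are assumptions.

```python
import numpy as np

def velocity_update(v, x, pbest, leader_c, leader_d, w=0.4, c1=2.0, c2=2.0, rng=None):
    """Illustrative two-leader velocity update (not the exact RMMOPSO formula).

    v, x, pbest        : current velocity, position and personal best of one particle
    leader_c, leader_d : global leaders taken from the convergence and diversity archives
    w, c1, c2          : inertia and acceleration coefficients (assumed values)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2, r3 = rng.random(3)
    # personal (cognitive) term plus two cooperating social terms, one per archive
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (leader_c - x)
            + c2 * r3 * (leader_d - x))

# toy usage on a two-variable decision vector
x, v = np.array([0.3, 0.7]), np.zeros(2)
v = velocity_update(v, x, pbest=np.array([0.2, 0.6]),
                    leader_c=np.array([0.1, 0.5]), leader_d=np.array([0.4, 0.9]))
x = x + v
```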
[44] | J. Lu, J. Zhang, J. Sheng, Enhanced multi-swarm cooperative particle swarm optimizer, Swarm Evol. Comput., 69 (2022), 100989. https://doi.org/10.1016/j.swevo.2021.100989 doi: 10.1016/j.swevo.2021.100989 |