Research article

Lévy flight-based inverse adaptive comprehensive learning particle swarm optimization


  • Received: 06 December 2021 Revised: 18 February 2022 Accepted: 09 March 2022 Published: 23 March 2022
  • In the traditional particle swarm optimization algorithm, particles always learn from the well-behaved particles in the population during the iterative process. Nevertheless, according to the principles of particle swarm optimization, the motion of each particle affects the other individuals, and even poorly behaved particles can provide valuable information. Based on this consideration, we propose a Lévy flight-based inverse adaptive comprehensive learning particle swarm optimization, called LFIACL-PSO. In the LFIACL-PSO algorithm, first, when a particle is trapped in a local optimum and cannot escape, inverse learning is applied, with the learning step size drawn from a Lévy flight. Second, to increase the diversity of the algorithm and prevent premature convergence, a comprehensive learning strategy and a ring-type topology are incorporated into the learning paradigm. In addition, the acceleration coefficients of each learning paradigm are updated adaptively. Finally, the comprehensive performance of LFIACL-PSO is evaluated on 16 benchmark functions and a real engineering application problem, and compared with that of seven other classical particle swarm optimization algorithms. Experimental results show that LFIACL-PSO outperforms the comparative PSO variants in overall performance.
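
    A minimal Python sketch of the two core moves described above is given here for illustration: a Lévy-flight step drawn with Mantegna's algorithm, and a hypothetical inverse-learning jump in which a stagnated particle steps away from its personal best with a Lévy-distributed step size. The function names and the exact update rule are assumptions made for this sketch and are not the paper's formulation; the full LFIACL-PSO additionally uses comprehensive learning with a ring-type topology and adaptive acceleration coefficients.

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(dim, beta=1.5, rng=None):
            # Mantegna's algorithm: the ratio of two Gaussian draws yields a
            # heavy-tailed (Lévy-stable) step with stability index beta.
            rng = np.random.default_rng() if rng is None else rng
            sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                       (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma_u, dim)
            v = rng.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)

        def inverse_learning_jump(position, pbest, lower, upper, beta=1.5, rng=None):
            # Hypothetical inverse-learning move: step *away* from the personal best
            # with a Lévy-flight step size so a stagnated particle can escape.
            step = levy_step(position.size, beta, rng)
            new_position = position + step * (position - pbest)
            return np.clip(new_position, lower, upper)

    Under these assumptions, a particle x that has stopped improving could be perturbed as inverse_learning_jump(x, pbest, lb, ub) before its next velocity update, while non-stagnated particles continue to follow the comprehensive-learning update.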

    Citation: Xin Zhou, Shangbo Zhou, Yuxiao Han, Shufang Zhu. Lévy flight-based inverse adaptive comprehensive learning particle swarm optimization[J]. Mathematical Biosciences and Engineering, 2022, 19(5): 5241-5268. doi: 10.3934/mbe.2022246




  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

