Citation: Juan ZHAO, Zheng-Ming GAO. The heterogeneous Aquila optimization algorithm[J]. Mathematical Biosciences and Engineering, 2022, 19(6): 5867-5904. doi: 10.3934/mbe.2022275
Abstract
A new swarm-based optimization algorithm called the Aquila optimizer (AO) was recently proposed with promising performance. However, as reported by its proposers, the convergence curves remain almost unchanged over roughly the latter half of the iterations. Considering both the good overall performance and the sluggish late-stage convergence of the AO algorithm, this paper introduces the multiple updating principle and proposes a heterogeneous AO, called HAO. Simulation experiments were carried out on both unimodal and multimodal benchmark functions, and comparisons with other capable algorithms were made. Most of the results confirmed better performance: stronger intensification and diversification capabilities, faster convergence rates, lower residual errors, good scalability, and convincing verification results. The algorithm was further applied to three benchmark real-world engineering problems, and its overall better performance in optimization was confirmed without introducing any additional equations.
1. Introduction
With the development of modern science and technology, problems in nature are modeled in ever greater detail, to the point that every influential factor is taken into account. These factors affect both the observable behavior of a system and one another. As a result, analytical solutions can no longer be obtained once a problem becomes even slightly complicated. To overcome such difficulties, nature-inspired algorithms were proposed several decades ago.
The genetic algorithm (GA) [1] might be the first such candidate in the literature, and it soon became a popular tool for many of the problems people faced. Encouraged by its good performance, other optimization algorithms were soon proposed, such as the ant colony optimization (ACO) algorithm [2], the particle swarm optimization (PSO) algorithm [3] and so on.
As for the PSO algorithm, scientists and engineers around the world were quickly drawn to its elegant and simple structure. Its details were studied thoroughly: the reason why it converges so fast, its stability, and the constraints on the coefficients $c_1, c_2$ were all confirmed [4]. Researchers applied the PSO algorithm to every optimization problem they found, yet they were not satisfied with its performance. Finding ways to improve it, in either convergence rate or residual error, soon became a hot topic in those years. Inertia weights [5], local unimodal sampling [6], regrouping [7], guaranteed convergence [8], and other improvements were soon proposed and shown to perform better. Broadly speaking, the improvements can be classified as: 1) improvements of variables, such as chaotic mapping [9] and Levy flight [10]; 2) re-construction of the updating equations, such as the guaranteed convergence and regrouping methods; 3) improvements of the controlling parameters, for instance, the inertia weights; and 4) hybridizations, such as PSO with GA [11] or PSO with ACO [12].
Among all of the improved PSO algorithms, the heterogeneous PSO (HPSO) [13] was one of the most influential; it arguably inspired many of the modern optimization algorithms proposed recently.
In the early years of computational optimization, all of the individuals in a swarm followed the same scheme to update their positions. For example, in the GA they crossover and mutate, while in the PSO algorithm all individuals update their positions according to their current velocities, the global best candidate, and their own best trajectories. Individuals in the bat algorithm (BA) [14] likewise follow a single equation to update their positions. In the HPSO algorithm, by contrast, each individual randomly selects among the cognitive-only, social-only, barebones, and modified barebones operations. The HPSO was also confirmed to perform better in optimization.
Heterogeneous improvements give the individuals in a swarm multiple ways [15], or tunnels [16], to update their positions. Under such conditions, individuals can draw on a richer set of operations: some can take larger steps for better exploration capability, while others can take smaller steps to exploit a local optimum and check whether it is the global one. Since they do not all follow the same rule, both the exploration and the exploitation capability of the swarm increase. Individuals in the sine cosine algorithm (SCA) [17] have two choices for updating their positions, while the arithmetic optimization algorithm (AOA) [18] and the Harris hawks optimization (HHO) algorithm offer four. More ways to update positions do not always yield better performance, but individuals with such operations do gain varied choices and multiple characteristics. Hence, the heterogeneous improvements can be generalized into a multiple updating principle, illustrated by the sketch below.
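As a toy illustration of this principle (the two update rules below are placeholders of our own and belong to no particular published algorithm), each individual simply draws its own update style at random:

```python
import numpy as np

def heterogeneous_update(X, X_best, rng):
    """Toy multiple-updating step: each row of X is one individual.

    The two rules are illustrative placeholders: a large random step
    (exploration) or a small pull toward the best candidate (exploitation).
    """
    for i in range(X.shape[0]):
        if rng.random() < 0.5:                            # pick a style at random
            X[i] += rng.normal(0.0, 1.0, X.shape[1])      # large exploratory step
        else:
            X[i] += 0.1 * rng.random() * (X_best - X[i])  # small exploitative step
    return X

# Usage sketch: 40 individuals in a 10-dimensional search space.
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, (40, 10))
X = heterogeneous_update(X, X_best=X[0].copy(), rng=rng)
```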
The Aquila optimizer (AO) [19] was proposed recently; it was claimed and verified to have a fast convergence rate and better performance than most other algorithms, and it has already been applied in intrusion detection [20] and production forecasting [21].
Individuals in the AO swarm also have four ways to update their positions; however, they can only choose between the first two during the initial 2/3 of the procedure, called exploration, and between the other two during the remaining exploitation procedure. Although the overall results reported by the proposers were quite promising, individuals could not always take further steps to obtain better results during the exploitation procedure, as can easily be seen in Figure 9 of the referenced paper [19]. This paper reports a heterogeneous improvement of the Aquila optimization algorithm, abbreviated HAO. The constant value 2/3, which constrains individuals to two strategies each during the exploration and exploitation procedures, is changed to a probability value; consequently, individuals in the HAO swarm have all four ways to update their positions.
The main contributions of this paper are:
1) A heterogeneous improvement of the AO algorithm is proposed that introduces no additional equations and does not increase the computational complexity; the proposed HAO algorithm only reconstructs the strategies defined by the original proposers.
2) The four strategies are divided into two groups by two proportional values $p_1$ and $p_2$; their influence on the convergence rate is simulated and a best set is chosen.
3) Qualitative analysis and intensification, diversification, and scalability experiments on the proposed HAO algorithm are carried out, and the acceleration rate is tested on both unimodal and multimodal benchmark functions.
4) Simulation experiments on unimodal and multimodal benchmark functions, together with three classical real-world engineering problems, are carried out and comparisons are made. The results confirm that the proposed HAO algorithm performs better than the original AO algorithm and several other well-known optimization algorithms.
The rest of this paper is arranged as follows: In Section 2, the original AO algorithm is briefly reviewed and the improved HAO is proposed. Simulation experiments are carried out on benchmark functions in Section 3 and on real-world engineering problems in Section 4. Discussions are made and conclusions are drawn in Section 5.
2. The AO and proposed HAO algorithms
2.1. Strategies in the AO algorithm
There are four strategies for individuals in the AO algorithm:
Strategy 1: Expanded exploration.
$X_i(t+1)=X_{best}(t)\times\left(1-\frac{t}{T}\right)+\left(X_M(t)-X_{best}(t)\times r_1\right)$ (1)
where $X_i(t+1)$, $X_{best}(t)$ and $X_M(t)$ represent the position of the i-th individual at iteration t + 1, the best location at the current iteration and the mean position of all individuals at the current iteration, respectively. $X_M(t)$ is calculated as follows:
$X_M(t)=\frac{1}{N}\sum_{i=1}^{N}X_i(t)$ (2)
where $X_i(t)$ represents the position of the i-th individual at iteration t, and N represents the number of individuals in the swarm. $r_1$ is a random number drawn between 0 and 1.
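A minimal numpy sketch of Eqs (1) and (2), under the assumption that $r_1$ is a fresh uniform draw in [0, 1] (the function and variable names are ours, not the released AO code):

```python
import numpy as np

def strategy1_expanded_exploration(X, X_best, t, T, rng):
    """Strategy 1, Eq (1): move relative to the best and mean positions."""
    X_M = X.mean(axis=0)          # mean position of the swarm, Eq (2)
    r1 = rng.random()             # random number in [0, 1], assumed uniform
    return X_best * (1 - t / T) + (X_M - X_best * r1)
```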
Strategy 2: Narrowed exploration.
$X_i(t+1)=X_{best}(t)\times Levy(D)+X_R(t)+(y-x)\times r_2$ (3)
where $Levy(D)$ represents the Levy flight, computed by the following equations:
$Levy(D)=s\times\frac{\mu\times\sigma}{|\nu|^{1/\beta}}$ (4)
where $s=0.01$ is a constant parameter and $r_2$ is another random number. $\mu$ and $\nu$ are random numbers between 0 and 1, and $\sigma$ is calculated as follows:
$\sigma=\frac{\Gamma(1+\beta)\times\sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right)\times\beta\times2^{\frac{\beta-1}{2}}}$ (5)
where $\beta=1.5$ is a constant value. $X_R(t)$ is a randomly selected candidate at the current iteration. y and x trace the spiral shape:
$y=r\times\cos(\theta)$ (6)
$x=r\times\sin(\theta)$ (7)
$r=r_1+U\times D_1$ (8)
$\theta=-\omega\times D_1+\theta_1$ (9)
$\theta_1=\frac{3\pi}{2}$ (10)
where $r_1$ is a fixed number between 1 and 20, $U$ is a small fixed constant ($U=0.00565$ in the original AO paper [19]), $D_1$ consists of integer numbers from 1 to the dimensionality of the problem, and $\omega=0.005$ is a fixed constant.
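A sketch of the Levy step in Eqs (4) and (5), with s = 0.01 and β = 1.5 as stated above. Following the text, μ and ν are drawn uniformly from (0, 1); the released implementation may differ:

```python
import numpy as np
from math import gamma, sin, pi

def levy(D, s=0.01, beta=1.5, rng=None):
    """Levy flight step of length D, Eqs (4)-(5)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)))  # Eq (5)
    mu, nu = rng.random(D), rng.random(D)   # random numbers in (0, 1)
    return s * mu * sigma / np.abs(nu) ** (1 / beta)                    # Eq (4)
```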
Strategy 3: Expanded exploitation.
$X_i(t+1)=\alpha\times\left[X_{best}(t)-X_M(t)\right]+\delta\times\left[(UB-LB)\times r_3+LB\right]$ (11)
where [LB, UB] is the definitional domain of the given problem, $\alpha$ and $\delta$ are two small fixed numbers, and $r_3$ is the third random number involved in the algorithm.
Strategy 4: Narrowed exploitation.
$X_i(t+1)=QF\times X_{best}(t)-G_1\times X_i(t)\times r_4-G_2\times Levy(D)+r_5\times G_1$ (12)
where QF denotes a quality function used to balance the search strategies, calculated with the following equation:
$QF(t)=t^{\frac{2\times r_6-1}{(1-T)^2}}$ (13)
$G_1=2\times r_7-1$ (14)
$G_2=2\times\left(1-\frac{t}{T}\right)$ (15)
where $r_4$, $r_5$, $r_6$ and $r_7$ are the fourth to seventh random numbers involved in this algorithm.
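As a concrete reading of Eqs (12)-(15), the sketch below assumes $r_4$ to $r_7$ are fresh uniform draws and reuses the levy() step sketched above; it is an interpretation, not the authors' released code:

```python
import numpy as np

def strategy4_narrowed_exploitation(X_i, X_best, t, T, levy_step, rng):
    """Strategy 4, Eqs (12)-(15); levy_step is a vector from the levy() sketch.

    t is assumed to start from 1 so that QF is well defined.
    """
    r4, r5, r6, r7 = rng.random(4)
    QF = t ** ((2 * r6 - 1) / (1 - T) ** 2)   # quality function, Eq (13)
    G1 = 2 * r7 - 1                           # Eq (14)
    G2 = 2 * (1 - t / T)                      # Eq (15)
    return QF * X_best - G1 * X_i * r4 - G2 * levy_step + r5 * G1   # Eq (12)
```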
2.2. Proposed HAO improved algorithm
Although individuals in the original AO swarm have four strategies, i.e., four ways to update their positions, according to the status of the prey and the Aquila they can only choose between the first two strategies during the early 2/3 of the iterations, called the exploration procedure, and between the latter two strategies during the remaining exploitation procedure. Apparently, the first two strategies are more aggressive and yield fast convergence, while the latter two involve re-initializing operations, which give the individuals a chance to jump out of local optima. Both pairs of strategies have their own merits and contribute to the good performance of the original AO algorithm.
However, one can easily find that the latter two strategies lack the capability to approach the global optima, especially on multimodal benchmark functions. Figure 9 of the referenced paper [19] shows that individuals fail to obtain better positions in the later iterations for F5–F10, F12–F15 and F17–F23, which include some unimodal benchmark functions and most of the multimodal ones. To eliminate such defects, the choice of strategy can be made randomly with different probabilities, discarding the strict separation of exploration and exploitation. Individuals are thus allowed to choose, with different probabilities, whether to explore or exploit the definitional domain at their own will. Therefore, we introduce the heterogeneous improvement to the original AO algorithm and propose the heterogeneous Aquila optimizer (HAO). For simplicity, the pseudo code of the proposed HAO algorithm is shown in Table 1.
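Table 1 contains the authors' pseudo code; the Python sketch below is our reading of the selection logic only. Here $p_1$ replaces the fixed 2/3 split, $p_2$ and $p_3$ choose within the exploration and exploitation pairs, strategies stands in for Eqs (1), (3), (11) and (12) adapted to one signature, and the greedy replacement is an assumption on our part:

```python
import numpy as np

def hao_iteration(X, f, X_best, t, T, p1, p2, p3, strategies, rng):
    """One HAO iteration: every individual draws a strategy by probability
    instead of switching from exploration to exploitation at t = (2/3)T.

    strategies: four callables for Eqs (1), (3), (11), (12), each taking
    (X, i, X_best, t, T, rng) and returning a trial position; f is fitness.
    """
    for i in range(X.shape[0]):
        if rng.random() < p1:                  # exploration pair
            k = 0 if rng.random() < p2 else 1  # Strategy 1 vs Strategy 2
        else:                                  # exploitation pair
            k = 2 if rng.random() < p3 else 3  # Strategy 3 vs Strategy 4
        trial = strategies[k](X, i, X_best, t, T, rng)
        if f(trial) < f(X[i]):                 # assumed greedy replacement
            X[i] = trial
    return X
```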
Since only the proportional values change, the computational complexity remains $O(M\times N\times D)$.
3. Simulation experiments on benchmark functions
In this section, simulation experiments on benchmark functions are carried out, and the results are discussed to determine whether the proposed HAO algorithm performs better than the original AO algorithm. Furthermore, considering the development of swarm intelligence, several other capable algorithms are included in the comparison: the original AO, the ant lion optimization (ALO) [22], African vultures optimization algorithm (AVOA) [23], equilibrium optimizer (EO) [24], grasshopper optimization algorithm (GOA) [25], grey wolf optimizer (GWO) [26], Harris hawks optimization (HHO) [27], moth-flame optimization (MFO) [28], mayfly optimization algorithm (MOA) [29], PSO, SCA and the whale optimization algorithm (WOA) [30]. The parameters of all compared algorithms are shown in Table 2.
Verifying the capability of an algorithm on benchmark functions is a classical approach in optimization, and the improved HAO is evaluated this way here. Five unimodal, five unimodal with three-dimensional basin-like landscapes, and eleven multimodal benchmark functions are involved [31], as shown in Tables 3–5.
The source code was written and run with Matlab 2021b. For simplicity, the maximum iteration number was fixed at 200 and the population size at 40. To reduce the influence of the random numbers involved in the algorithms, the Monte Carlo simulation method was used: all reported results, where needed, are averages over 30 separate runs.
For a fair comparison, all the algorithms involved in this paper were configured with the same setup.
3.3. Probability parameters
The proposed HAO algorithm has three probability parameters $p_1, p_2, p_3$. According to the source code of the AO algorithm provided by its proposers, $p_1$ should be a constant near 2/3, with $p_2=p_3=0.5$. However, it is not clear whether these values suit the HAO algorithm. As probabilities, they all fall into the domain [0, 1]; we therefore scan both the $p_1$–$p_2$ and $p_1$–$p_3$ planes using the mean least iteration number (MLIN) at which the best fitness value first drops below 5e–4 (that is, 0.5‰, a constant frequently used in engineering). To reduce the influence of the randomness involved in the algorithm, the Monte Carlo method is used and the final results are averages over 30 separate independent runs. Results are shown in Figures 1–6.
Figure 1. $p_1$–$p_2$ relationship for different benchmark functions.
From Figures 1–3 we can conclude that $p_1=0.7$ and $p_2=0.5$ might be better values for most benchmark functions, while some functions do not show a clear trend, such as the Csendes, Chung Reynolds, Holzman 2, Schumer Steiglitz 2 and Alpine 1 functions. In particular, the Venter Sobiezcczanski-Sobieski function could not be optimized within the cap of 1000 maximum iterations, imposed to save running time.
Concerning the relationship between $p_1$ and $p_3$, no definite pattern can be confirmed at first glance in Figures 4–6, but the mean least iteration numbers for the Hyper-ellipsoid, Ackley 1, Sargan, Cosine Mixture and Stretched V Sine Wave functions require non-smaller $p_3$ values. For simplicity, $p_3$ is also set to 0.5, the same value as $p_2$. A sketch of the scan behind these figures is given below.
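In this sketch, the hypothetical callable run_hao stands in for one run of the optimizer and is assumed to return the best-fitness-per-iteration curve; the MLIN is then averaged over 30 Monte Carlo repetitions for each grid point:

```python
import numpy as np

def mlin_scan(run_hao, grid=np.linspace(0.0, 1.0, 11), runs=30,
              tol=5e-4, max_iter=200):
    """Mean least iteration number (MLIN) over a p1-p2 grid."""
    mlin = np.empty((grid.size, grid.size))
    for a, p1 in enumerate(grid):
        for b, p2 in enumerate(grid):
            firsts = []
            for _ in range(runs):
                curve = np.asarray(run_hao(p1=p1, p2=p2, p3=0.5,
                                           max_iter=max_iter))
                hit = np.nonzero(curve < tol)[0]     # first iteration below tol
                firsts.append(hit[0] + 1 if hit.size else max_iter)
            mlin[a, b] = np.mean(firsts)             # Monte Carlo average
    return mlin
```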
3.4. Qualitative experiments
Qualitative analysis is quite an important type of experiment, as it gives a quick view of an algorithm's capability in optimization. Simulations were carried out once for each of the benchmark functions involved in this paper, and the results are shown in Figures 7–11.
Figure 7. Qualitative results for the representatives.
For convenience, the dimensionality was fixed at 10 for all of these experiments.
We can see from Figures 7–11 that almost all of the benchmark functions, either unimodal or multimodal, are quickly optimized by the improved HAO algorithm, except Venter and Sobiezcczanski-Sobieski's, which appears so complicated that many algorithms fail on it. Individuals in the swarm quickly find the global optima, with fast convergence and low residual errors. The search history over the first two dimensions shows good capability, and most of the trajectories of the first dimension confirm the fast convergence.
3.5. Intensification capability experiments
Unimodal benchmark functions are usually easy to optimize: since only one global optimum exists in the whole domain, individuals in the swarm approach it without interference or disturbance. Nevertheless, how strongly a swarm can home in on that optimum is a capability in its own right, and we call the experiments measuring it intensification experiments.
To reduce the influence of the random numbers involved in the algorithms, the Monte Carlo method was again used: the best, worst, median, mean and standard deviation of the best results after 200 iterations were calculated over 30 separate independent runs. Results are shown in Table 7.
We can see from Table 7 that the proposed HAO is clearly better: it takes the best position in 8 of the 10 functions, 7 of them alone and one tied with the original AO algorithm, and only loses the best position to the GWO algorithm. Note that only the HAO, AO, GWO and HHO algorithms obtained best positions in these experiments. Their good performance is mainly attributable to their inherent mechanisms; the GWO algorithm performed better on the F3 function, which suggests that in some cases multiple top candidates can lead to better performance, although the EO algorithm, which also employs multiple top candidates, performed worse throughout.
3.6. Diversification capability experiments
Multimodal benchmark functions have more than one optimum, so individuals in the swarm can be trapped in local optima. They need the capability to escape local optima and approach the global one; this capability is called diversification. To find out whether the proposed HAO algorithm has better diversification capability, experiments similar to the intensification experiments were carried out, this time on multimodal benchmark functions. The results are reported in the same format as the intensification experiments and shown in Table 8.
Table 8 shows that the proposed HAO, the AO and the HHO algorithms all perform well on multimodal benchmark functions. Comparatively speaking, the proposed HAO succeeded in 9 of the 11 experiments, the HHO also in 9, and the AO in 8. The proposed heterogeneous improvement therefore increases the diversification capability of the original AO algorithm and puts the HAO on par with the HHO, confirming that the HAO is definitely better than before.
3.7. Acceleration convergence analysis
The qualitative, intensification and diversification experiments all verified the better performance of the proposed HAO algorithm. In this section, an acceleration convergence analysis is carried out: the best fitness values versus iterations give a more direct picture, as shown in Figures 12–16.
Figure 12. Acceleration convergence experiments on unimodal scalable benchmark functions.
Based on the averaged results in Figures 12–16, the proposed improvement increases the convergence rate considerably: on the ten unimodal benchmark functions the HHO algorithm takes 3 bests while the HAO algorithm takes 7, topping most of the lists. On the multimodal functions, the proposed HAO algorithm performs best on 8 of the 11, while 3 multimodal benchmark functions, Griewank, Venter and Sobiezcczanski-Sobieski's, and Xin-She Yang 6, remain difficult: none of the algorithms involved could optimize them at all.
Regarding the execution time of the runs: under a fixed iteration budget, less time exhausted means a faster effective convergence rate. Under the same conditions, with all results averaged over 30 independent runs, the execution times of the algorithms involved were evaluated and compared, as shown in Figure 17.
Figure 17. Execution time with parallel computing (workers = 14) for representative benchmark functions.
We can see that for the unimodal Exponential function the heterogeneous improvement took almost the same time as the original version, while for the multimodal Cosine Mixture function it took more time to finish the job. Meanwhile, contradictory conclusions might be drawn from the unimodal and multimodal simulation experiments: the execution time an algorithm takes appears to depend on the function rather than on the algorithm itself.
3.8. Scalability experiments
Although almost all of the above experiments verified the best performance of the proposed HAO algorithm, they were carried out with a fixed dimensionality. Scalability experiments should therefore also be carried out to confirm whether it still performs best when the dimensionality changes.
In this section, the dimensionality was varied from 2 through 10 up to 100 with an interval of 10. The population size remained the same, and the overall results were again averaged over 30 separate independent runs with the Monte Carlo method. Results are shown in Figures 18–23.
Figure 18. Scalability experiments on unimodal benchmark functions.
Figures 18–20 show that the proposed HAO curve disappears from the results for the Ackley 1 and Exponential benchmark functions; detailed study confirmed exactly zero values for the AO and HAO there. Overall, the results on the unimodal benchmark functions verify the better performance, specifically 8 out of 10 bests.
The results on the multimodal functions, shown in Figures 21–23, do not lead to the same conclusion. The results for the Alpine 1 and Cosine Mixture benchmark functions follow the same pattern, but the remaining functions do not support the former conclusion, although in most cases the proposed HAO algorithm, together with the HHO and AO algorithms, still performs better than the others. That is to say, for multimodal benchmark functions a fixed population size might not remain suitable as the dimensionality increases.
3.9. Wilcoxon rank sum test
Most of the results so far indicate that the proposed HAO algorithm performs better in optimization, but this should be verified statistically. In this section, the Wilcoxon rank sum test is carried out to check whether the better results come from the same distribution as the results obtained by the compared algorithms. The usual significance level $p=0.05$ is adopted: if $p\le0.05$, the null hypothesis that the two result sets come from the same distribution is rejected, and the proposed HAO algorithm can be confirmed to perform better; on the contrary, if $p>0.05$, the two sets of data cannot be distinguished, and the proposed HAO algorithm cannot be confirmed to perform better even though its mean, median, worst or standard deviation values are smaller. Results are shown in Table 9.
We can see from Table 9 that the superiority of the proposed HAO is verified in most cases, with only a few functions and algorithms yielding p-values above the threshold. A minimal sketch of such a test is shown below.
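The sketch uses scipy and two hypothetical 30-run result vectors in place of the real experiment data:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
hao_best = rng.normal(1e-6, 1e-7, 30)     # hypothetical 30-run results of HAO
rival_best = rng.normal(1e-3, 1e-4, 30)   # hypothetical results of a competitor

stat, p = ranksums(hao_best, rival_best)
# p <= 0.05 rejects the hypothesis that both samples share one distribution,
# i.e., the difference between the two algorithms is statistically significant.
print(f"p = {p:.3g}, significantly different: {p <= 0.05}")
```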
4. Simulation experiments on real-world engineering problems
The simulation results on benchmark functions show that the proposed HAO algorithm is quite promising in optimization. What about its capability on real-world engineering problems? In this section, we apply the proposed HAO algorithm to find the best solutions for several benchmark engineering problems.
Broadly speaking, the difference between real-world engineering problems and benchmark functions lies in the constraints. That is to say, individuals in the swarm face no constraints when optimizing benchmark functions, whereas some regions of the definitional domain may not be searched or exploited when optimizing real-world engineering problems. Writing such a problem in the general constrained form

$\min f(x), \quad \text{s.t. } h_k(x)=0,\; k=1,\dots,m, \quad g_j(x)\le 0,\; j=1,\dots,n$

where $x_i\in[LB_i,UB_i]$ is the definitional domain for the i-th parameter, we introduce, for simplicity, penalty parameters to construct a new fitness function of the common penalty form

$F(x)=f(x)+\sum_{k=1}^{m}P_{e}\times|h_k(x)|+\sum_{j=1}^{n}P_{ie}\times\max\left(0,g_j(x)\right)$

where m and n are the numbers of equality and inequality constraint equations, and $P_{e}$ and $P_{ie}$ represent the penalty factors, which are fixed numbers for simplicity. A minimal sketch of this construction is given below.
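This is one common way to realize the construction, sketched under the assumption of fixed penalty factors; the paper's exact equation is not reproduced here:

```python
def penalized_fitness(f, ineq, eq, P_ie=1e6, P_e=1e6):
    """Wrap an objective f with fixed-factor penalties.

    ineq: list of g_j with feasibility g_j(x) <= 0 (penalize violations only).
    eq:   list of h_k with feasibility h_k(x) == 0 (penalize any deviation).
    """
    def F(x):
        penalty = sum(P_ie * max(0.0, g(x)) for g in ineq)
        penalty += sum(P_e * abs(h(x)) for h in eq)
        return f(x) + penalty
    return F
```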
To find the best solutions for the real-world engineering problems, 10,000 separate independent runs were performed for each problem, and the final result is the best among them. The population size was fixed at 40, and the maximum allowed iteration number was set to 1000.
4.1. Pressure vessel design
The pressure vessel design problem is a four-dimensional structural design problem with four inequality constraints. Applying the proposed HAO algorithm, we obtained the results and compared them with other results in the literature, as shown in Table 10.
Table 10. Results applied in the pressure vessel design problem.
The proposed HAO algorithm found the best design option among the compared algorithms reported in the literature. A hedged sketch of the problem itself follows.
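For concreteness, the widely used textbook statement of this problem is sketched below with x = [Ts, Th, R, L]; combined with the penalty wrapper above it yields an unconstrained fitness. This is the standard formulation from the literature, not necessarily the exact variant behind Table 10:

```python
import math

def pv_cost(x):
    """Material, forming and welding cost of the vessel."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

pv_ineq = [                                       # all must satisfy g(x) <= 0
    lambda x: -x[0] + 0.0193 * x[2],              # shell thickness vs radius
    lambda x: -x[1] + 0.00954 * x[2],             # head thickness vs radius
    lambda x: (-math.pi * x[2] ** 2 * x[3]
               - 4.0 / 3.0 * math.pi * x[2] ** 3
               + 1296000.0),                      # minimum working volume
    lambda x: x[3] - 240.0,                       # length limit
]

# fitness = penalized_fitness(pv_cost, pv_ineq, eq=[])  # using the wrapper above
```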
4.2. Three-bar truss design
The three-bar truss design is a two-dimensional constrained problem with only two inequality constraints, yet it is somewhat difficult to find the best candidate. The proposed HAO algorithm finds a good but not the best option, as shown in Table 11: it failed to do better than the result reported for the original version. However, in our simultaneous comparison experiments the proposed HAO did find a better option than the original AO algorithm.
Table 11. Best solutions for the three-bar truss design problem.
4.3. Welded beam design
The welded beam design is a four-dimensional constrained problem with seven inequality constraints. Results are shown in Table 12: the proposed HAO again finds a good result, yet not the best one. The same situation occurred as before: in our experiments the proposed HAO found a better option than the original AO, but still worse than the best reported result.
Table 12. Best solutions obtained for the welded beam design problem.
5. Conclusions
This paper reports a new improvement of the Aquila optimization (AO) algorithm, which was recently proposed in the literature with good performance. Considering the flat tails of the convergence curves in the original paper, the original AO appears to have some defects in the exploitation procedure. Inspired by the good performance of heterogeneous improvements and by the multiple updating principle, the heterogeneous AO algorithm, called HAO, is proposed in this paper. The proposed HAO algorithm introduces no additional equations; it only reconstructs the four existing strategies, with promising performance.
Simulation experiments were carried out on both unimodal and multimodal benchmark functions. The results, including most of the Wilcoxon rank sum tests, confirmed the better performance.
Three real-world engineering problems were also included to test the capability of the proposed HAO algorithm. Only one result, for the pressure vessel design problem, beat the other results reported in the literature, including that of the original AO algorithm; for the other two problems the HAO failed to be the best, even falling short of the results reported for the original version. In our own experiments, however, the proposed HAO always obtained better options than the original AO algorithm. This shows that randomness can affect the results considerably and that better results may be found on occasion.
Overall, the proposed heterogeneous AO algorithm outperforms in most cases: it shows intensification capability on unimodal benchmark functions and diversification capability on multimodal ones, converges faster, and approaches the global optima more closely. Its scalability is also good, and the rank sum tests corroborate these simulations.
Swarms with heterogeneous improvements should be accompanied by a correspondingly larger population size. In the future, the proposed HAO algorithm could be applied to other problems such as dimensionality reduction and image segmentation in real applications.
Acknowledgements
The authors would like to thank the support of the following projects: 1) the scientific research team project of Jingchu University of Technology, grant number TD202001; 2) the key research and development project of Jingmen, grant number 2019YFZD009; 3) the Outstanding Youth Science and Technology Innovation Team Project of Colleges and Universities in Hubei Province, grant T201923.
Conflict of interest
All authors declare no conflicts of interest in this paper.
References
[1]
D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989.
[2]
M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Trans. Syst. Man Cybern. B Cybern., 26 (1996), 29–41. https://doi.org/10.1109/3477.484436 doi: 10.1109/3477.484436
[3]
R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in MHS'95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, (1995), 39–43. https://doi.org/10.1109/MHS.1995.494215
[4]
M. Clerc, J. Kennedy, The particle swarm - explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evolut. Comput.,6 (2002), 58–73. https://doi.org/10.1109/4235.985692 doi: 10.1109/4235.985692
[5]
R. C. Eberhart, Y. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, in Proceedings of the 2000 Congress on Evolutionary Computation. CEC00 (Cat. No.00TH8512), 1 (2000), 16–19. https://doi.org/10.1109/CEC.2000.870279
[6]
M. E. H. Pedersen, A. J. Chipperfield, Simplifying Particle Swarm Optimization, Appl. Soft Comput., 10 (2010), 618–628. https://doi.org/10.1016/j.asoc.2009.08.029 doi: 10.1016/j.asoc.2009.08.029
[7]
G. I. Evers, M. B. Ghalia, Regrouping particle swarm optimization: A new global optimization algorithm with improved performance consistency across benchmarks, in 2009 IEEE International Conference on Systems, Man and Cybernetics, (2009), 3901–3908. https://doi.org/10.1109/ICSMC.2009.5346625
[8]
F. van den Bergh, A. P. Engelbrecht, A new locally convergent particle swarm optimiser, in IEEE International Conference on Systems, Man and Cybernetics, 3 (2002). https://doi.org/10.1109/ICSMC.2002.1176018
[9]
T. Xiang, X. Liao, K. W. Wong, An improved particle swarm optimization algorithm combined with piecewise linear chaotic map, Appl. Math. Comput., 190 (2007), 1637–1645. https://doi.org/10.1016/j.amc.2007.02.103 doi: 10.1016/j.amc.2007.02.103
[10]
H. Haklı, H. Uğuz, A novel particle swarm optimization algorithm with Levy flight, Appl. Soft Comput., 23 (2014), 333–345. http://dx.doi.org/10.1016/j.asoc.2014.06.034 doi: 10.1016/j.asoc.2014.06.034
[11]
H. Garg, A hybrid PSO-GA algorithm for constrained optimization problems, Appl. Math. Comput.,274 (2016), 292–305. https://doi.org/10.1016/j.amc.2015.11.001 doi: 10.1016/j.amc.2015.11.001
[12]
N. Holden, A. A. Freitas, A hybrid PSO/ACO algorithm for discovering classification rules in data mining, J.Artif. Evolut. Appl., (2008), 316145. https://doi.org/10.1155/2008/316145 doi: 10.1155/2008/316145
[13]
A. P. Engelbrecht, Heterogeneous particle swarm optimization, in Swarm Intelligence (eds. M. Dorigo et al.), Springer Berlin Heidelberg, (2010), 191–202.
[14]
X. S. Yang, A New Metaheuristic Bat-Inspired Algorithm, in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Springer Berlin Heidelberg, (2010), 65–74.
[15]
Z. M. Gao, J. Zhao, X. R. Li, Y. R. Hu, An improved sine cosine algorithm with multiple updating ways for individuals, J. Phys. Conf. Ser., 1678 (2020), 012079. https://doi.org/10.1088/1742-6596/1678/1/012079 doi: 10.1088/1742-6596/1678/1/012079
[16]
J. Zhao, Z. M. Gao, An improved grey wolf optimization algorithm with multiple tunnels for updating, J. Phys. Conf. Ser., 1678 (2020), 012096. https://doi.org/10.1088/1742-6596/1678/1/012096 doi: 10.1088/1742-6596/1678/1/012096
[17]
S. Mirjalili, SCA: A Sine Cosine algorithm for solving optimization problems, Knowl. Based Syst., 96 (2016), 120–133. https://doi.org/10.1016/j.knosys.2015.12.022 doi: 10.1016/j.knosys.2015.12.022
[18]
L. Abualigah, A. Diabat, S. Mirjalili, M. Abd Elaziz, A. H. Gandomi, The arithmetic optimization algorithm, Comput. Methods Appl. Mech. Eng., 376 (2021), 113609. https://doi.org/10.1016/j.cma.2020.113609 doi: 10.1016/j.cma.2020.113609
[19]
L. Abualigah, D. Yousri, M. Abd Elaziz, A. A. Ewees, M. A. A. Al-qaness, A. H. Gandomi, Aquila optimizer: a novel meta-heuristic optimization algorithm, Comput. Indust. Eng., 157 (2021), 107250. https://doi.org/10.1016/j.cie.2021.107250 doi: 10.1016/j.cie.2021.107250
[20]
A. Fatani, A. Dahou, M. A. A. Al-qaness, S. Lu, M. Abd Elaziz, Advanced feature extraction and selection approach using deep learning and Aquila optimizer for IoT intrusion detection system, Sensors,22 (2022), 140. https://doi.org/10.3390/s22010140 doi: 10.3390/s22010140
[21]
A. M. AlRassas, M. A. A. Al-qaness, A. A. Ewees, S. Ren, M. Abd Elaziz, Optimized ANFIS model using Aquila optimizer for oil production forecasting, Processes,9 (2021). https://doi.org/10.3390/pr9071194 doi: 10.3390/pr9071194
[22]
S. Mirjalili, The ant lion optimizer, Adv. Eng. Soft.,83 (2015), 80–98. http://dx.doi.org/10.1016/j.advengsoft.2015.01.010 doi: 10.1016/j.advengsoft.2015.01.010
[23]
B. Abdollahzadeh, F. S. Gharehchopogh, S. Mirjalili, African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems, Comput. Indust. Eng., 158 (2021), 107408. https://doi.org/10.1016/j.cie.2021.107408 doi: 10.1016/j.cie.2021.107408
[24]
A. Faramarzi, M. Heidarinejad, B. Stephens, S. Mirjalili, Equilibrium optimizer: A novel optimization algorithm, Knowl. Based Syst., (2019), 105190. https://doi.org/10.1016/j.knosys.2019.105190 doi: 10.1016/j.knosys.2019.105190
[25]
S. Saremi, S. Mirjalili, A. Lewis, Grasshopper optimisation algorithm: theory and application, Adv. Eng. Soft., 105 (2017), 30–47. https://doi.org/10.1016/j.advengsoft.2017.01.004 doi: 10.1016/j.advengsoft.2017.01.004
[26]
S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Soft., 69 (2014), 46–61. http://dx.doi.org/10.1016/j.advengsoft.2013.12.007 doi: 10.1016/j.advengsoft.2013.12.007
[27]
A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, H. Chen, Harris hawks optimization: Algorithm and applications, Future Gener. Comp. Syst., 2019. https://doi.org/10.1016/j.future.2019.02.028 doi: 10.1016/j.future.2019.02.028
[28]
S. Mirjalili, Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm, Knowl. based syst., 89 (2015), 228. https://doi.org/10.1016/j.knosys.2015.07.006 doi: 10.1016/j.knosys.2015.07.006
[29]
K. Zervoudakis, S. Tsafarakis, A mayfly optimization algorithm, Comput. Indust. Eng., 145 (2020), 106559. https://doi.org/10.1016/j.cie.2020.106559 doi: 10.1016/j.cie.2020.106559
[30]
S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Softw., 95 (2016), 51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008 doi: 10.1016/j.advengsoft.2016.01.008
[31]
Z. M. Gao, J. Zhao, Benchmark functions with Python, Golden Light Academic Publishing, (2020), 3–5.
[32]
S. Mirjalili, S. M. Mirjalili, A. Hatamlou, Multi-Verse Optimizer: a nature-inspired algorithm for global optimization, Neural Comput. Appl.,27 (2016), 495–513. https://doi.org/10.1007/s00521-015-1870-7 doi: 10.1007/s00521-015-1870-7
[33]
L. Abualigah, A. Diabat, S. Mirjalili, M. Abd Elaziz, A. H. Gandomi, The arithmetic optimization algorithm, Comput. Methods Appl. Mech. Eng., 376 (2021), 113609. https://doi.org/10.1016/j.cma.2020.113609 doi: 10.1016/j.cma.2020.113609
[34]
A. Sadollah, A. Bahreininejad, H. Eskandar, M. Hamdi, Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems, Appl. Soft Comput., 13 (2013), 2592–2612. https://doi.org/10.1016/j.asoc.2012.11.026 doi: 10.1016/j.asoc.2012.11.026
[35]
S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, S. M. Mirjalili, Salp swarm algorithm: A bio-inspired optimizer for engineering design problems, Adv. Eng. Softw., 114 (2017), 163–191. https://doi.org/10.1016/j.advengsoft.2017.07.002 doi: 10.1016/j.advengsoft.2017.07.002
[36]
M. Zhang, W. Luo, X. Wang, Differential evolution with dynamic stochastic selection for constrained optimization, Inform. Sci., 178 (2008), 3043–3074. https://doi.org/10.1016/j.ins.2008.02.014 doi: 10.1016/j.ins.2008.02.014
[37]
V. Bhargava, S. E. K. Fateen, A. Bonilla-Petriciolet, Cuckoo Search: A new nature-inspired optimization method for phase equilibrium calculations, Fluid Phase Equilibr., 337 (2013), 191–200. http://dx.doi.org/10.1016/j.fluid.2012.09.018 doi: 10.1016/j.fluid.2012.09.018
[38]
J. F. Tsai, Global optimization of nonlinear fractional programming problems in engineering design, Eng. Optimiz., 37 (2005), 399–409. https://doi.org/10.1080/03052150500066737 doi: 10.1080/03052150500066737
[39]
A. Faramarzi, M. Heidarinejad, S. Mirjalili, A. H.Gandomic, Marine predators algorithm: A nature-inspired metaheuristic, Expert Syst. Appl., 152 (2020), 113377. https://doi.org/10.1016/j.eswa.2020.113377 doi: 10.1016/j.eswa.2020.113377
[40]
J. M. Czerniak, H. Zarzycki, D. Ewald, AAO as a new strategy in modeling and simulation of constructional problems optimization, Simul. Model. Pract. Theory, 76 (2017), 22–33. https://doi.org/10.1016/j.simpat.2017.04.001 doi: 10.1016/j.simpat.2017.04.001
[41]
S. Li, H. Chen, M. Wang, A. A. Heidari, S. Mirjalili, Slime mould algorithm: A new method for stochastic optimization, Future Gener. Comput. Syst., 111 (2020), 300–323. https://doi.org/10.1016/j.future.2020.03.055 doi: 10.1016/j.future.2020.03.055