1. Introduction
Metaheuristic algorithms (MAs) are formulated to address the optimization demands of the increasingly intricate problems encountered in real-world production, and they play a crucial role in optimization across diverse problem settings. Consequently, MAs have undergone continual evolution, yielding numerous outstanding algorithms, such as the differential evolution algorithm (DE) [1], particle swarm optimization (PSO) [2], the grey wolf optimizer (GWO) [3], the aquila optimizer (AO) [4], and the human memory optimization algorithm (HMOA) [5].
Since most MAs adapt well to complex problems and require no gradient or other characteristic information about the initial solution, they have replaced traditional mathematical methods such as Newton's method and gradient descent in many nonlinear, high-dimensional optimization problems [6]. Consequently, scholars have applied MAs widely in fields such as the job shop scheduling problem [7], parameter identification of photovoltaic cells and modules [8], feature selection [9], drone technology [10], and fake news detection [11], achieving good optimization results.
Among these MAs, the slime mould algorithm (SMA) is a novel intelligent algorithm proposed by Li et al. in 2020 [12]. It simulates the foraging dispersal behavior of slime moulds. Like other MAs, SMA relies little on feature information about the problem being optimized, owing to its black-box principle, making it suitable for global optimization in complex settings that are nonconvex, discontinuous, nonlinear, or multimodal. Numerous studies have confirmed that SMA efficiently solves complex real-world optimization problems [13,14,15]. Yang [16] proposed the flower pollination algorithm (FPA), which mimics flower pollination, in 2012. Its mathematical model consists of self-pollination within a local search range and cross-pollination within a given global search range. Its simple structure and small number of parameters make it competitive with many MAs such as PSO.
SMA and FPA, two MAs with excellent search performance, each have their own advantages, disadvantages, and scope of application. Each has demonstrated its own strengths in solving real-world problems and offered new optimization ideas. However, owing to the limitations of their single mechanisms, both algorithms struggle to balance global and local search and easily fall into local optima on high-dimensional problems [17,18].
Talbi [19] notes that metaheuristic algorithms typically balance global exploration and local exploitation during the search, and how to balance the two within limited computational resources has long interested researchers. Hybrid metaheuristic algorithms often yield better results than single algorithms on many practical and academic optimization problems, because different algorithms complement each other in specific respects. Selecting appropriate component algorithms is therefore particularly crucial.
SMA and FPA are both swarm-based MAs. Research indicates that the two algorithms follow different design principles, so their optimization performance varies across search stages. In SMA, the global exploration phase combines two random search agents, which makes SMA adept at exploring the search space and locating the region of the global minimum [20]. The local search in SMA involves only the current population individuals, exploring around the current solution with a shrinking coefficient in (0, 1), which makes it prone to local optima. In other words, SMA excels at global exploration but has weaker local search capability. In FPA, global search is guided by the Levy distribution: step sizes formed from the current individual and the best individual drive exploration around the current individual. Because the Levy distribution mixes short-range moves with occasional long jumps [21], the step lengths are short overall, making effective optimization over a wide global range difficult. Additionally, Draa [22] experimentally demonstrated that FPA searches best with a switch probability of p = 0.2, i.e., with 80% of the iterations devoted to local exploitation. This explains why FPA has weaker global capability but stronger local capability. We therefore propose hybridizing the two algorithms to leverage their respective strengths and mitigate their weaknesses.
To hybridize SMA and FPA so as to maximize their joint utility, we drew inspiration from Talbi's high-level teamwork hybrid (HTH) model [19], which asserts that performance under the HTH model is at least as good as that of any single participating algorithm, since the cooperating algorithms assist one another. Following this model, GFPSMA was developed: SMA and FPA share the same random initial population and then operate independently during the iterative process, without internal integration. After each algorithm completes its own computations, a full GFPSMA iteration is finalized through an information exchange inspired by the idea of a dual game.
The hybrid algorithm represents the amalgamation of two or more algorithms with commendable characteristics in distinct stages, following a rule. This integration enhances the algorithm's search capability to a heightened level [19,23]. Building upon the preceding discourse, this paper introduces a hybrid algorithm (GFPSMA) rooted in FPA and SMA. It integrates SMA and FPA algorithms through game-inspired principles to address the deficiencies of each, reinforcing their individual strengths. Subsequently, it facilitates the exchange of information among effective particles from both sides, adhering to formulated rules, culminating in an enhanced optimization approach.
Game theory has achieved significant results in various areas. Reference [24] combined multi-objective random fractal search with cooperative game theory to minimize synergistic effects and save golden rescue time. Reference [25] applied the Lagrangian relaxation method within an evolutionary game that considers environmental feedback during the COVID-19 pandemic. Reference [26] applied Stackelberg game theory to establish a two-level blood supply chain network under the uncertain conditions of an epidemic outbreak. Reference [27] explored the leader-follower form of game theory, particularly the Stackelberg competitive game, and compared it with the genetic algorithm and the grey wolf optimizer. These works use game theory to address practical issues; in this paper, game theory is applied to improve how the two algorithms are integrated.
The primary contributions of this paper are as follows:
1) We propose a golden-ratio mechanism that enhances the exchange of information between a random individual and the optimal individual in standard FPA, addressing FPA's weakness in global exploration.
2) We propose an adaptive step strategy that improves the random search phase of SMA and accelerates convergence.
3) We design a fusion mechanism based on the idea of a dual game, enabling the two populations to optimize independently while complementing each other's search strengths, thereby enhancing the search performance of the hybrid algorithm.
4) We introduce an elite search method that improves the search path of the best search agent and increases the probability of GFPSMA escaping local optima.
5) The effectiveness and robustness of GFPSMA are validated theoretically and practically on 39 test functions, comprising standard benchmark functions and the CEC 2017 test suite, and on 4 engineering optimization problems.
The remainder of this paper is organized as follows: Section 2 gives a brief overview of related work on SMA and FPA. Section 3 describes the SMA and FPA algorithms in detail. Section 4 introduces the improved hybrid of SMA and FPA in detail. Section 5 presents the simulation experiments and data analysis on the test functions. Section 6 tests four engineering problems. Finally, Section 7 gives conclusions and prospects for future research.
2. Related work
In order to address the limitations of SMA and FPA, researchers have analyzed the two algorithms from various perspectives and offered corresponding methods. The subsequent section furnishes a concise overview of enhancements across different facets of FPA and SMA, alongside related research on their hybridization with other MA algorithms, divided into three parts.
2.1. The improvement of FPA
In research on improving FPA's position-update formula, Kopciewicz and Lukasik [28] proposed a simplified flower pollination algorithm (BFPA). Noting the adverse effects of the Levy distribution, they discarded global cross-pollination and performed only local self-pollination throughout the search; the method's performance was validated on the CEC 2017 test functions and in physical and engineering probability modeling. Singh et al. [29] replaced the Levy flight in global search with a sine-cosine operator to improve exploration accuracy and proposed a new inertial parameter to enhance the local search phase. Chen et al. [30] adopted a neighborhood-based search approach for global exploration, restructured the population after stagnation, and proved the algorithm's convergence using differential equations and stochastic function theory. Ozsoydan and Baykasoglu [31] enhanced the search positions by incorporating a step function into level-based roulette selection and proposing a multi-subgroup solution; the method's validity was confirmed on multimodal problems. Chen and Pi [17] improved global exploration by leveraging dimension information and refined the population's best solution through a mutation strategy. The transformation probability p influences the balance between global and local search in FPA. Draa [22] compared candidate values of this probability and demonstrated experimentally that setting p to 0.2 yields FPA's best optimization performance. Salgotra and Singh [32] proposed a transformation probability that varies dynamically with the iteration count, strengthening late-stage local search.
2.2. The improvement of SMA
To enhance the search approach of SMA, scholars have proposed several improvements. Deng and Liu [33] designed a search strategy based on the DE mutation vector, incorporated a population-best guidance mechanism, and devised a new mutation probability to balance global and local search; experimental data indicate a significant improvement over standard SMA. Hu et al. [13] proposed a hierarchical strategy, employing a mutual-assistance strategy in the elite layer and a learning strategy in the regular population layer; numerical tests and smooth path planning demonstrate the approach's effectiveness. Wang et al. [14] proposed an improved PSMA with a parallel strategy consisting of three sub-methods: improved inertia weights, single communication between a random group and the best group, and multiple communications between the best group and the other groups; PSMA is applied to distribution network reconfiguration. Hu et al. [34] proposed DFSMA, built around a dispersed foraging search strategy: they introduced a dispersal rate parameter, DR, to balance local search frequencies and incorporated the dispersed-foraging concept from the ABC algorithm into SMA; DFSMA improved classification accuracy in feature selection. Ren et al. [35] introduced MGSMA, which incorporates the motion theory of MVO to mitigate entrapment in local optima and adds Gaussian kernel probability to drive the slime moulds during foraging; it is applied mainly to multi-threshold image segmentation. In improving search parameters, researchers have also made significant contributions. Miao et al. [15] replaced an SMA parameter with Tent chaos as a nonlinear factor, enhancing the distribution of the random search phase, and further used Tent chaos to strengthen elite individuals; this method is employed to maximize the annual power generation of cascade hydropower systems. Altay [18] improved the global exploration oscillation parameter using 10 different chaotic sequences.
2.3. Hybrid algorithm
According to the renowned "No Free Lunch" theorem [36], a single metaheuristic algorithm often exhibits deficiencies in some respects because its design focuses on specific performance criteria. Consequently, the development of hybrid algorithms combining foundational MAs has been an active research area in recent years. As Talbi describes [19], low-level and high-level hybridization have different operational meanings: in the former, one metaheuristic replaces an internal component of another, while in the latter each algorithm is self-contained and addresses the optimization function internally. Regarding teamwork modes, hybridization can be sequential or parallel: the former chains the algorithms one after another, while the latter runs multiple agents simultaneously in parallel. Two ideas are commonly used in hybrid MAs: one introduces a high-quality feature or component of one algorithm into another, as in reference [37]; the other selects elite solutions, after parallel computation, according to fitness on the problem being solved, as in reference [38].
One purpose of improving MAs is to apply them to real-world production and practice. Building on the description above, hybrid metaheuristic algorithms have been extended to many optimization problems. For example, a combination of PSO and the K-means clustering method was used for object recognition and tracking with radio frequency identification (RFID) technology [39]. A combination of the ABC algorithm and SVM solved an aspect-level sentiment classification problem [40]. Numerous studies focus on hybrid enhancements of SMA and FPA. SMA hybrids include GBOSMA proposed by Ewees et al. [41], EOSMA proposed by Yin et al. [42], and HSMA-WOA proposed by Mohamed et al. [43]. FPA hybrids include FP-GSA proposed by Chakraborty et al. [44], FA/FPA proposed by Kalra and Arora [45], and WOFPA proposed by Tawhid and Ibrahim [46]. These hybrid algorithms have been applied in different settings and have achieved notable optimization results. The literature reviewed above is summarized in Table 1.
3. Background
In the following section, we introduce the mathematical models of FPA and SMA respectively.
3.1. Brief description of standard FPA
FPA simulates the behavior of biotic cross-pollination and abiotic self-pollination in flowers. In addition, the following four rules are assumed:
1) Cross-pollination, which uses organisms as pollen carriers and follows the Levy distribution, corresponds to the global exploration stage of the algorithm.
2) Abiotic self-pollination, which spreads pollen through wind, corresponds to the local exploitation stage of the algorithm.
3) Flower constancy corresponds to the reproduction probability, which is proportional to the similarity of the two flowers (individuals) involved.
4) The probability switch parameter, denoted p and constrained to the interval [0, 1], governs the alternation between exploration (global pollination) and exploitation (local pollination) in FPA.
From the above description, it can be seen that cross-pollination (global pollination) and self-pollination (local pollination) are the core of FPA.
3.1.1. Global pollination
The global pollination of FPA is given by formula (1):

Pi(t+1) = Pi(t) + L(λ)·(Pb − Pi(t)).    (1)
In FPA, Pi denotes the current solution, Pb the global optimal solution, and L the Levy-flight step length, calculated by formula (2):

L ~ (λ·Γ(λ)·sin(πλ/2) / π) · 1/s^(1+λ),  (s ≫ s0 > 0).    (2)
The parameter λ is set to 1.5, and Γ(λ) denotes the standard gamma function. Because Eq (1) employs the global best solution Pb, each iteration steers the global search towards the global optimum. Levy flight randomly adapts the step size to maintain population diversity. The best solution obtained in each generation carries over to the subsequent iteration, ensuring that the best solution is inherited.
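As a concrete illustration of Eq (2), the Levy step is commonly sampled with Mantegna's algorithm; the sketch below is a minimal Python version (the function name and structure are ours, not the paper's):

```python
import math
import random

def levy_step(lam=1.5):
    """Draw one Levy-distributed step via Mantegna's algorithm.

    sigma follows from the gamma-function expression of Eq (2);
    lam corresponds to the lambda = 1.5 used in the text.
    """
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = random.gauss(0, sigma)   # numerator: Gaussian with Mantegna's sigma
    v = random.gauss(0, 1)       # denominator: standard Gaussian
    return u / abs(v) ** (1 / lam)
```

Most draws are small, but the heavy tail occasionally produces long jumps, which is exactly the short-move/long-jump mix of the Levy distribution discussed above.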
3.1.2. Local pollination
The local pollination of FPA is given by formula (3):

Pi(t+1) = Pi(t) + ϵ·(PA(t) − PB(t)),    (3)
where ϵ is a random number following a uniform distribution, and PA and PB represent two random individuals within the population, respectively. As illustrated above, global search emphasizes the combination of the optimal individual under the influence of the Levy distribution for optimization, while local search focuses on random individuals. Both of these approaches embody the concept of escaping local optima. Algorithm 1 describes the pseudo-code of FPA.
From a global search perspective, during global pollination FPA influences all individuals in the population simultaneously by combining Levy flights with the optimal individual Pb. Owing to the attraction of Pb, FPA converges quickly on simple problems. However, on complex optimization problems, if Pb becomes trapped in a local minimum of the exploration space, the other individuals are swiftly drawn towards Pb, causing the difference (Pi − Pb) to become very small and exploration to stagnate.
In the local search component of the basic flower pollination algorithm, a new individual is generated using the mutation operation of Eq (3). Per Eq (3), the new individual is derived by adding a perturbation term, the product of a random number and the difference vector between two individuals, to the parent individual. Consequently, the generation of new individuals exhibits significant randomness, which helps maintain population diversity and enables the algorithm to sustain good continuous optimization capability.
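Putting Eqs (1) and (3) together, one FPA generation can be sketched as follows. This is a minimal illustration with our own names, a Gaussian stand-in for the Levy draw, and a greedy replacement rule, not the paper's implementation:

```python
import random

def fpa_step(pop, fitness, p_switch=0.2, levy=lambda: random.gauss(0, 1)):
    """One FPA generation sketch (minimisation): global pollination of
    Eq (1) with probability p_switch (Draa's p = 0.2), otherwise local
    pollination of Eq (3); a greedy rule keeps the better of parent and
    candidate."""
    best = min(pop, key=fitness)
    new_pop = []
    for p in pop:
        if random.random() < p_switch:       # global (cross) pollination, Eq (1)
            cand = [x + levy() * (b - x) for x, b in zip(p, best)]
        else:                                # local (self) pollination, Eq (3)
            pa, pb = random.sample(pop, 2)
            eps = random.random()            # uniform epsilon of Eq (3)
            cand = [x + eps * (a - b) for x, a, b in zip(p, pa, pb)]
        new_pop.append(cand if fitness(cand) < fitness(p) else p)
    return new_pop
```

Because of the greedy keep, the best fitness in the population never worsens between generations, matching the "inheritance of the best solution" noted above.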
3.2. Brief description of standard SMA
The mathematical model of SMA consists of three stages: approaching the target, surrounding the target, and obtaining the target. A fixed parameter z and a transformation parameter p control the random search stage, the global exploration stage, and the local exploitation stage of the whole algorithm.
3.2.1. Approach food
The concentration of odor in the air serves as a crucial factor for slime moulds in determining the proximity of food. As the concentration of food encountered by slime moulds increases, so does the amplitude of the oscillator, and vice versa. The update method is depicted in formula (4):

X(t+1) = Xb(t) + vb·(W·XA(t) − XB(t)),  r < p,
X(t+1) = vc·X(t),                       r ≥ p.    (4)
In SMA, X is used as the population individual to distinguish it from the population individual P of FPA. Here, r belongs to the interval (0, 1), Xb represents the current best position of the population, XA and XB are two random individuals of the population, vb belongs to the interval [−a,a] and controls global exploration. vc is a parameter that linearly decreases from 1 to 0, controlling local development. W represents the oscillation coefficient of the slime mould, which in turn controls the step size in global exploration.
The transformation parameter p is calculated using formula (5):

p = tanh|Smell(i) − DF|,    (5)
where Smell(i) represents the fitness value of the i-th slime mould individual in the current iteration, and DF represents the fitness of the best slime mould individual in the current iteration.
The values of vb and a are calculated using formulas (6) and (7), respectively:

vb = [−a, a],    (6)
a = arctanh(−(t/Tmax) + 1),    (7)

where t is the current iteration number and Tmax is the maximum number of iterations.
The weighting coefficient W simulates the positive and negative feedback of the biological oscillator in the slime mould when encountering different food concentrations, which alters the oscillation frequency, as shown in formula (8):

W(SmellIndex(i)) = 1 + r·log((BF − Smell(i))/(BF − wF) + 1),  condition,
W(SmellIndex(i)) = 1 − r·log((BF − Smell(i))/(BF − wF) + 1),  others,    (8)
where BF and wF are the best and worst fitness values in the current iteration; condition indicates that Smell(i) ranks in the front half of the population; and SmellIndex is the index of the population sorted by fitness values:

SmellIndex = sort(Smell).    (9)
3.2.2. Wrap food and oscillation
During the food search, slime moulds allocate a portion of individuals to random exploration within the given range; this random variation forms the SMA's random search stage. Combining these principles, the overall position update of the slime mould is given by (10.1)–(10.3):

X*(t+1) = rand·(UB − LB) + LB,            rand < z,    (10.1)
X*(t+1) = Xb(t) + vb·(W·XA(t) − XB(t)),   r < p,       (10.2)
X*(t+1) = vc·X(t),                        r ≥ p,       (10.3)
where, UB and LB represent the upper and lower bounds of the search range, while z is a crucial parameter controlling the random search stage, and holds a constant value of 0.03.
Finally, under the control of vb, the step size of global exploration gradually approaches 0, causing the search individuals to converge towards the optimal individual. Moreover, under the control of vc, local development causes the current individual's position to approach 0. Algorithm 2 outlines the pseudocode for the SMA.
The updating mechanism of slime mould positions in the standard SMA depends on the relative sizes of r, p, and z. When r < p, the position update is determined by the current best individual and two random individuals, producing random exploration near the current best position. This enhances SMA's global search capability in the early stages but can slow convergence through purposeless random exploration. As iterations increase, the slime mould population converges towards the current best position, making SMA prone to getting stuck when solving functions with multiple local optima. When r ≥ p, the position update is determined by the convergence factor vc and the individual positions of the slime moulds; as vc decreases linearly from 1 to 0 over the iterations, the slime mould population converges towards the origin.
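The three-branch update described above can be sketched in Python as follows. This is our own illustrative version: it assumes minimisation, uses the current best fitness in place of DF, and the helper names are ours, not the paper's:

```python
import math
import random

def sma_update(X, fit, t, Tmax, LB, UB, z=0.03):
    """One full-population SMA position update sketch (Eqs (4)-(10.3)).

    X: list of position vectors; fit: their fitness values (minimisation).
    Assumes t >= 1 so that arctanh stays finite."""
    n, dim = len(X), len(X[0])
    order = sorted(range(n), key=lambda k: fit[k])       # SmellIndex ranking
    rank = {k: i for i, k in enumerate(order)}
    bF, wF = fit[order[0]], fit[order[-1]]
    Xb = X[order[0]]
    a = math.atanh(1 - t / Tmax)                         # Eq (7)
    vc_max = 1 - t / Tmax                                # vc decays linearly 1 -> 0
    newX = []
    for k in range(n):
        r = random.random()
        ratio = (fit[k] - bF) / (wF - bF + 1e-12)
        # Eq (8): +log term for the fitter half of the population, -log otherwise
        if rank[k] < n / 2:
            W = 1 + r * math.log10(ratio + 1)
        else:
            W = 1 - r * math.log10(ratio + 1)
        p = math.tanh(abs(fit[k] - bF))                  # Eq (5), current best as DF
        if random.random() < z:                          # Eq (10.1): random search
            cand = [random.uniform(LB, UB) for _ in range(dim)]
        elif random.random() < p:                        # Eq (10.2): approach food
            A, B = random.sample(X, 2)
            vb = random.uniform(-a, a)
            cand = [xb + vb * (W * xa - xc) for xb, xa, xc in zip(Xb, A, B)]
        else:                                            # Eq (10.3): wrap food
            vc = random.uniform(-vc_max, vc_max)
            cand = [vc * x for x in X[k]]
        newX.append(cand)
    return newX
```

The three branches make the trade-off discussed above concrete: the z-branch restarts uniformly inside the bounds, the p-branch explores around the best individual, and the vc-branch contracts the current position towards the origin.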
4. The proposed algorithm (GFPSMA)
In this section, we cover the improved GFPSMA algorithm, including: 1) the rationale for hybridization and for selecting SMA and FPA as the two component metaheuristics; 2) the individual enhancements addressing the weaknesses of SMA and FPA; 3) the integration of the two; and 4) an analysis of the population distribution of GFPSMA, along with a summary of the algorithm's time complexity and computational framework.
Figure 1 depicts the hybrid model of the proposed GFPSMA. In Figure 1, after the initial computation on the same population: 1) population P is the new population obtained using FPA, while population X is the new population obtained using SMA, and the new population S is obtained through a competitive mechanism; 2) the new best search agent Gb is obtained using the Householder reflection matrix; 3) the new population S and the new best search agent Gb then serve as the shared population for the next iterative computations of FPA and SMA, respectively. Additionally, for clarity, the symbols used in this paper are listed in Table 2.
In the following section, a comprehensive analysis of the proposed improvement strategies for GFPSMA is presented to better demonstrate how these strategies enhance the algorithm's performance. Specifically, the focus is on four effective strategies: improvement of global search based on FPA, enhancement of the random search phase in SMA, the integration of both using a competitive mechanism, and the enhancement of the optimal search individual.
4.1. Improvement of FPA
Generally, metaheuristic algorithms can be divided into two stages: global exploration and local exploitation. Within the given bounds of a problem, a strong exploration technique in the early and middle stages serves as the foundation for precise local optimization in the later stage, thus enhancing the overall capability of the hybrid algorithm. Various methods have been researched by scholars for such problems. For instance, in reference [47], three methods including tournament, proportional, and linear rank-based methods are used to accelerate the convergence exploration speed.
The drawbacks of FPA's global search approach are described in detail in Section 1. This approach makes it difficult for FPA to fully explore the unknown area near the current flower position.
It is worth noting that the exchange of information among individuals within the population to enhance the search for unknown areas and the absorption of useful information from different individual neighborhoods play a crucial role in optimization [48].
The golden ratio is a well-recognized harmonious proportion between geometry and numbers. Reference [49] introduced the excellent metaheuristic algorithm, the Golden Sine Algorithm, based on the golden section ratio. The form of combining two positions using random numbers makes it difficult to highlight the role of the leading individual's position in the population. The golden ratio, to some extent, serves as a position weight that can reflect this role. To address the deficiencies in the global exploration of FPA, the golden section mechanism is employed to combine an arbitrary random individual with the optimal individual position, forming the dominant searching individual for global search, as depicted in Eq (11):
where Pr represents a randomly selected individual within the population, Pb denotes the best individual, gold is the golden ratio, ω2 is a parameter with a fixed value of 1, and ω1 is a linearly decreasing inertia coefficient between 0 and 1, while r is a random number within the range of (0, 1).
Figure 2 shows the running curves of x1 and x2 over 1000 iterations. In the early and middle search stages of the FPA, owing to the coefficients x1 and x2, the sum of the coefficients of Pr and Pb decreases from 2 to 1. The disturbance amplitude is relatively large during early global exploration; as the algorithm iterates further, the disturbance gradually decreases in the middle and later stages, balancing the algorithm between global and local exploration. Moreover, the influence of Pb grows increasingly significant with the iteration count. Thus, the issues of search range and of thoroughly exploring the unknown regions near the optimal individual are resolved.
4.2. Improvement of SMA
The standard SMA, like other MAs, can be broadly divided into two stages: approaching food (global exploration) and wrapping food (local exploitation). However, when r < z, even if the slime mould has found a better food source, some individuals separate to explore other areas in an attempt to find higher-quality food, which constitutes the random search stage indicated by Eq (10.1). This method establishes upper and lower bounds for the problem and creates a difference vector to generate random positions, aiming to help the algorithm escape local optima. However, as iterations increase, the algorithm's search range shrinks; computing each random position from the fixed upper and lower bounds then slows convergence and diminishes optimization ability. In other words, the solutions obtained by SMA during the random search stage are not always reliable.
The calculation of the step size in metaheuristic algorithms is a crucial way to stabilize the random search process. In the differential evolution algorithm, the step is formed from two different positions scaled by the mutation factor; in particle swarm optimization, the steps formed from the global best and each particle's personal best demonstrate that an appropriate step size contributes to optimization.
To address this issue, this paper proposes an adaptive update method that combines exponential functions with best or average positions instead of directly using fixed upper and lower bounds, as depicted in Eq (14):
where, Xb is the optimal individual position of the SMA algorithm, XM is the average position of the SMA population, and the calculation method is shown in Eq (15). Both perform update calculations based on a probability of 0.5z,
It is easy to observe from Eq (14) that in the early iterations the distances |Xi − Xb| and |Xi − XM| between Xi and the best position Xb or the average position XM are long, which suits a large-scale global search. As the iterative computation deepens, the slime mould population slowly converges, so these distances shorten and the candidate fluctuates around the current position, which suits small-scale local exploitation.
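To make the idea concrete, one hypothetical form of the adaptive restart of Eqs (14) and (15) is sketched below. The exponential decay factor and the uniform perturbation are our assumptions for illustration; the paper's exact formula is the one given in Eq (14):

```python
import math
import random

def adaptive_restart(Xi, Xb, XM, t, Tmax):
    """Hypothetical sketch of the adaptive random-search step: the restart
    is anchored on the best position Xb or the mean position XM (chosen
    with equal probability, per the 0.5z split), with a perturbation
    proportional to |Xi - anchor| that decays exponentially over the
    iterations. The decay form here is assumed, not the paper's."""
    anchor = Xb if random.random() < 0.5 else XM
    scale = math.exp(-t / Tmax)                  # assumed exponential decay
    return [a + scale * random.uniform(-1, 1) * abs(x - a)
            for x, a in zip(Xi, anchor)]
```

Early on, |Xi − anchor| is large and the candidate ranges widely; late in the run both the distance and the decay factor shrink, so the restart stays near the anchor, mirroring the behaviour described above.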
4.3. The fusion of two algorithms
Game theory is a mathematical theory that rigorously examines optimal decision-making in real-world conflict and confrontation situations. It focuses on how decision-makers optimize their benefits within a given information structure and explores decision equilibriums among different decision-makers. Thanks to the contributions of numerous scholars, game theory has evolved into a comprehensive discipline and has been extensively studied and applied across fields.
Based on game-theoretic research, metaheuristic algorithms have also made great progress. Liu and Nishi [50] introduced a selection mechanism based on game strategy, established a connection between particle swarm optimization and game theory, and enhanced the algorithm's stability under evolutionary pressure. This paper draws on game-theoretic ideas for multi-agent strategic optimization, proposing a dual competition mechanism that combines the zero-sum game from non-cooperative game theory with the Shapley value from cooperative game theory to hybridize the two improved algorithms. This approach lets the two algorithms optimize independently while complementing each other's strengths during the search, enhancing the search performance of the hybrid algorithm.
Leboucher et al. [51] improved particle swarm optimization using evolutionary game theory: different populations act as decision-makers that play games and evolve their decisions over time, preserving the particle swarm's ability to explore diverse particles in the solution space. Similarly, in this study, the two algorithms are treated as distinct populations, and at each iteration the game determines which population performs better and proceeds to the next cycle.
Because FPA and SMA conduct their iterative searches in parallel, effectively forming a dual-population computation that must be merged into a single high-quality population for the next iteration, the method of merging is crucial. MAs typically decide whether to replace old individuals with newly generated search individuals based on the fitness values of the problem being solved. This paper therefore follows the same concept, relying primarily on a non-cooperative game mechanism, supplemented by a cooperative game mechanism.
Case 1. Non-cooperative: zero-sum game
The concept of the zero-sum game belongs to non-cooperative game theory: under strict competition, one party's gain is the other party's loss. The sum of the gains and losses of all parties in the game always equals zero, and there is no possibility of cooperation between the parties (a perfectly competitive relationship). Based on this concept, Eq (16) is used to calculate the population for the next iteration:
where f is the fitness function. Eq (16) indicates that the fitness values of corresponding individuals from FPA and SMA are compared with each other, and the superior individual of each pair is selected to enter the next iteration's high-quality population.
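As a concrete illustration, the Case 1 merge can be sketched as an element-wise fitness comparison between the two populations. This is a minimal sketch assuming minimization; the function names and the tie-breaking rule (FPA wins ties) are illustrative, not taken from the paper:

```python
import numpy as np

def zero_sum_select(pop_fpa, pop_sma, f):
    """Sketch of the Case 1 zero-sum merge (Eq (16)): for each index i,
    the fitter of the FPA and SMA individuals enters the next-generation
    population (minimization assumed)."""
    fit_fpa = np.apply_along_axis(f, 1, pop_fpa)
    fit_sma = np.apply_along_axis(f, 1, pop_sma)
    winners = fit_fpa <= fit_sma                     # FPA wins ties (assumption)
    return np.where(winners[:, None], pop_fpa, pop_sma)

# toy usage: sphere function, 4 individuals, 3 dimensions
rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))
a, b = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
merged = zero_sum_select(a, b, sphere)
```

Each merged individual is, by construction, at least as fit as both of its parents at the same index.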
Case 2. Cooperative: Shapley value
In contrast to non-cooperative games, cooperative games allow participants to coordinate and form alliances to promote their own interests. The difference between cooperative and non-cooperative games lies in the emphasis on individual rationality in non-cooperative games and collective rationality in cooperative games. One important method for computing cooperative games is to calculate the contribution of both parties using the Shapley value. The specific calculation method is shown in Eq (17):
where sv is the calculated Shapley value, R is a permutation of the n participants (in GFPSMA, n=2), and v(s∪{i}) is the payoff of the coalition consisting of participant i and the set of participants preceding him. In this paper, the payoff refers to the fitness value of the search individual, and v(s) is the payoff of the coalition consisting of the set of participants preceding i (excluding i).
Finally, based on sv in Eq (18), the population of both algorithms will enter the next iteration calculation.
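The Shapley computation described above can be sketched generically by averaging marginal contributions over all permutations. The coalition payoff numbers below are purely illustrative stand-ins for the fitness-based values used in GFPSMA:

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Shapley value computed over all permutations R of the players;
    v maps a frozenset coalition to its payoff. In GFPSMA, n = 2 and
    the payoff would be a fitness-based value."""
    sv = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = frozenset(coalition)          # participants preceding p
            coalition.add(p)
            sv[p] += v(frozenset(coalition)) - v(before)  # marginal contribution
    n_fact = math.factorial(len(players))
    return {p: s / n_fact for p, s in sv.items()}

# illustrative coalition payoffs for the two "players" of GFPSMA
payoff = {frozenset(): 0.0, frozenset({"FPA"}): 3.0,
          frozenset({"SMA"}): 1.0, frozenset({"FPA", "SMA"}): 5.0}
sv = shapley_values(["FPA", "SMA"], payoff.__getitem__)
print(sv)   # {'FPA': 3.5, 'SMA': 1.5}
```

Note the efficiency property: the two shares always sum to the grand-coalition payoff.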
The article proposes an approach to effectively integrate two forms of game theory, drawing inspiration from the simulated annealing algorithm. It introduces an annealing probability model β to determine the feasibility of using cooperative games for computation in the iterative search process, as expressed in Eq (19):
where T0 is set to 1000 based on the experiments in this paper, and the cooling rate is set to 0.9 according to reference [37]. Δt′ represents the difference in fitness values between FPA and SMA at iteration t. When Δt′ is less than 0, Case 1 is used to calculate the entire population solution; otherwise, the population solution is calculated using Case 2 with probability β. Figure 3 illustrates the process of the competitive mechanism.
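The switching rule can be sketched as follows. Since Eq (19) is not reproduced in this excerpt, the exact form of β here, exp(−Δt′ / (T0·0.9^t)), is an assumption consistent with standard simulated annealing; only T0 = 1000 and the 0.9 cooling rate come from the text:

```python
import math
import random

T0, ALPHA = 1000.0, 0.9   # initial temperature and cooling rate given in the text

def choose_mechanism(delta, t, rng=random.random):
    """Annealing-style switch between the two game mechanisms.
    delta is the FPA-SMA fitness gap at iteration t; the beta formula
    is an assumed simulated-annealing form, not quoted from the paper."""
    if delta < 0:                       # negative gap -> zero-sum Case 1
        return "case1"
    temperature = T0 * ALPHA ** t       # exponential cooling schedule
    beta = math.exp(-delta / temperature)
    return "case2" if rng() < beta else "case1"

assert choose_mechanism(-1.0, t=5) == "case1"   # negative gap always uses Case 1
```

As the temperature decays, β shrinks for a fixed positive gap, so the cooperative Case 2 is chosen less often late in the run.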
4.4. Improvement of the best individual
In the revised Eq (10.2) for SMA global exploration and the new exploration strategy Eq (13) for the improved FPA proposed in this paper, the best individual guides the entire population toward the global optimal direction of the problem. Any deviation from this guidance may lead to the failure of the entire optimization process, particularly in multimodal functions with multiple local optima. If low-quality solutions are utilized during the early and middle stages of global exploration, it is likely that the overall optimization accuracy of the algorithm will not reach a high level. Hence, we introduce a conditional elite learning method to enhance the positioning of the best individual.
The Householder reflection transform is a special orthogonal transformation in linear algebra used to map a vector or matrix to another location through plane reflection. It is widely applied in numerical computation, eigenvalue computation, and solving systems of linear equations. The principle of the Householder reflection transform is elaborated in the literature [52] and illustrated in Figure 4. Overall, this transform is a highly valuable tool in linear algebra, with significant relevance in scientific computation and engineering applications.
The key to constructing the Householder reflection matrix lies in the selection of the mirror normal vector. In this method, the normalized current population individual is chosen as the vector, and then the Householder matrix for the current iteration is obtained. Subsequently, the current population undergoes the reflection transformation, while retaining the excellent individuals. The calculation process is illustrated in Eqs (21) and (22):
where u is the normalized vector of the current individual, I is the identity matrix, and Sreflect is the position matrix of the reflected population. The fittest individuals are then selected from Si∪Sreflect to form the new optimal individuals Sbtemp. The reflection process is illustrated in Figure 5.
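A compact sketch of this reflection step follows. The choice of the best individual as the source of the mirror normal vector, and the per-index greedy retention, are assumptions filling in details the excerpt leaves open:

```python
import numpy as np

def householder_reflect(pop, f):
    """Sketch of the elite-learning reflection (in the spirit of
    Eqs (21)-(22) and the greedy selection of Eq (24)):
    H = I - 2 u u^T with u a normalized individual; the population is
    reflected through the mirror plane and the fitter of each
    original/reflected pair is kept (minimization assumed)."""
    fit = np.array([f(x) for x in pop])
    best = pop[np.argmin(fit)]
    u = best / (np.linalg.norm(best) + 1e-12)          # mirror normal vector
    H = np.eye(pop.shape[1]) - 2.0 * np.outer(u, u)    # Householder matrix, orthogonal
    reflected = pop @ H.T                              # S_reflect
    fit_r = np.array([f(x) for x in reflected])
    keep = fit <= fit_r                                # greedy selection per agent
    return np.where(keep[:, None], pop, reflected)

rng = np.random.default_rng(1)
shifted_sphere = lambda x: float(np.sum((x - 1.0) ** 2))  # toy objective
pop = rng.normal(size=(6, 4))
new_pop = householder_reflect(pop, shifted_sphere)
```

Because H is orthogonal, the reflection preserves distances from the mirror plane, so it perturbs positions without distorting the search-space geometry.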
In the optimization process, generating a reflection point and perturbing the best individual each time would increase the computational complexity of the algorithm. To detect whether the algorithm has fallen into a local optimum, it is possible to make a judgment by comparing the observation factor with 0, where the observation factor can be represented by Eq (23):
where f is the fitness value of the problem function.
It is necessary to decide whether to disturb the optimal individual based on the calculated value of the observation factor fdis. Before the given number of function evaluations or iterations is exhausted, two scenarios arise for Eq (23): Case 1) when the absolute difference |f(Sb)−f(Sbnew)| equals 0, the current optimal solution has stagnated, and if the overall fdis value is less than 0, the algorithm has fallen into a local optimum; Case 2) as the number of iterations increases, if the rate of decrease of this difference is less than e^((t/Tmax)^ξ), then fdis will also be less than 0, again signifying entrapment in a local optimum. When the value of fdis indicates that a disturbance is required, the reflection operation is computed; otherwise, no reflection operation is performed on the optimal individual.
After the reflection operation is completed, the optimal reflection agent Sbtemp is obtained. In order to maintain the position of the best individual at its optimal position, greedy selection is performed between the reflective optimal individual Sbtemp and the best individual Sb in the high-quality population, i.e., Eq (24):
4.5. Summary of GFPSMA
In summary, we propose a new hybrid algorithm, GFPSMA. First, GFPSMA introduces a method that utilizes the golden section mechanism to enhance the exploration between random individuals and the optimal individual, to address the global exploration issue of FPA. Second, a self-adaptive step strategy is incorporated into SMA to improve the random search stage mechanism. Third, a dual-competition mechanism based on game inspiration is introduced to better integrate the two algorithms. Finally, an elite learning approach with adjustable conditions is used to improve the position of the optimal individual.
Next, this section will present 1) the algorithm flowchart and pseudocode of GFPSMA, 2) population distribution analysis, and 3) analysis of the algorithm's time complexity.
4.5.1. The flowchart of GFPSMA
The GFPSMA flowchart is shown in Figure 5. The pseudocode is shown in Algorithm 3.
4.5.2. Analysis of population distribution
To visually illustrate the distinct population distributions at different stages of the optimization process for standard SMA, FPA, and GFPSMA, the Shifted and Rotated Levy Function from CEC2017 is utilized as the benchmark. The population size is set to 30, the dimension to 2, and the maximum number of iterations to t=200. The scatter plots of the population distributions, shown in Figure 6, display the decision-variable dimensions on the horizontal axes and the computed function values on the vertical axis.
From Figure 6 (a1), (b1), (c1), it can be observed that when t=1, the populations of all three methods are relatively dispersed within [-100,100], with little difference in function values. As the computation reaches the middle stage at t=100, shown in Figure 6 (a2), (b2), (c2), FPA exhibits the highest function values, and its search range remains within [-100,100], indicating that its global search approach is relatively weak and has difficulty locating small local ranges for complex problems. The function values of SMA are intermediate among the three, and its search range gradually narrows, reducing population diversity and deepening the trend of aggregation. Clearly, GFPSMA exhibits a relatively dispersed population within a small range, maintaining population diversity while also achieving the best function values. Finally, when t=200, shown in Figure 6 (a3), (b3), (c3), the state of FPA remains unchanged, indicating its lack of ability to optimize complex multimodal problems. Except for a few individuals, SMA is completely clustered and trapped in local optima, leaving its final function values intermediate among the three. In contrast, the hybrid GFPSMA, using the competitive mechanism, leverages the advantages of both algorithms to achieve a better population distribution and to search near the optimum. In summary, the strategy proposed in this paper is efficient, and GFPSMA has a strong adaptive search mechanism that can quickly find the best solution to the problem at hand.
4.5.3. Analysis of time complexity
In this paper, the time complexity of GFPSMA is calculated using big O notation. Assuming the population size is N, the problem dimension is Dim, and the maximum number of iterations is Tmax. GFPSMA mainly calculates the following components:
1) The computational complexity of initialization is O(N).
2) In the FPA part of GFPSMA, the time complexity of population position updates is O(N×Dim).
3) In the SMA part of GFPSMA, the computational cost of sorting is O(N log N), the computational complexity of weight updates is O(N×Dim), and the time complexity of population position updates is O(N×Dim).
4) The computational complexity of competitive selection is O(N), and the time complexity of the simulated annealing probability is O(N).
5) The computational complexity of elite learning operator is O(Dim), and the time complexity of observation factor is O(N).
In summary, the maximum total complexity of GFPSMA can be estimated as Eq (25):
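Since Eq (25) is not reproduced in this excerpt, collecting the dominant per-iteration terms above (and assuming the usual O(N log N) sorting cost) suggests a form such as:

```latex
O(N) + T_{max}\big( O(N \log N) + O(N \times Dim) + O(N) + O(Dim) \big)
\approx O\big( T_{max} \, N \, (\log N + Dim) \big)
```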
5.
Algorithm testing and result analysis
In this section, we evaluate the optimization ability of GFPSMA through numerical experiments by comparing the results of 39 different types of test functions. The experimental steps are as follows: 1) experiment and algorithm settings, 2) comparison of GFPSMA with related variant algorithms, 3) comparison of GFPSMA with variants of other MAs, 4) comparison with competitive algorithms, 5) comparison of results from various strategies, 6) data statistics, and 7) GFPSMA diversity measurement and exploration/exploitation analysis.
5.1. Experimental setup and test functions
The test environment for this experiment is an AMD 7735H CPU @ 3.20 GHz running the Windows 11 operating system, with Matlab R2021a as the programming and computational software. The population size is set to 50, the maximum number of iterations to 5000, and the number of independent runs to 30 to ensure reliable test results. To clearly compare differences in algorithm performance, this paper uses the mean (Mean), standard deviation (Std), and best value (Min) as evaluation metrics. Among these, the Mean is selected as the basis for ranking, as a smaller mean indicates better average performance and thus a superior ranking. Table 3 presents the comparison algorithms used in this study, with the algorithm parameters set according to the relevant literature.
The selection of a test dataset is a crucial determinant in evaluating the performance of an algorithm. References [53,54,55] emphasize the significance of classic benchmark functions. The test set includes two parts, CEC2017 and standard functions, as shown in Tables 4 and 5, covering a total of 39 functions spanning unimodal, multimodal, hybrid, and composition functions. When evaluating the performance of metaheuristic algorithms, various aspects should be considered. On one hand, unimodal functions have only one global optimum, making them suitable for evaluating an algorithm's local search capability. On the other hand, multimodal functions have multiple local optima, making them suitable for evaluating an algorithm's global search capability. Hybrid and composition functions combine the characteristics of both, making them meaningful for evaluating an algorithm's ability to handle complex functions.
5.2. Comparisons with variants of FPA and SMA
To validate the effectiveness of GFPSMA, we performed numerical tests and compared it with variant algorithms of FPA and SMA proposed in recent years. The testing encompassed 10 standard functions and 29 CEC2017 test functions. The standard functions comprise 5 unimodal functions (SF1–SF5), each with a single global optimum, and 5 multimodal functions (SF6–SF10) with multiple local optima. The CEC2017 test functions consist of 2 unimodal functions (CF1–CF3) with only one global optimum, 7 simple multimodal functions (CF4–CF10), and 20 complex multimodal functions, including hybrid functions (CF11–CF20) and composition functions (CF21–CF30), all of which have multiple local optima. The algorithms involved in the comparison include 3 SMA variants and 3 FPA variants: the SMA variants are MSMA [33], TLSMA [56], and ESMA [57], while the FPA variants are WOFPA [46], MBFPA [38], and MFPA [58]. The test results for dimension 30 are listed in Table 6; if the Mean metric is the same, the ranking is the same.
In the evaluation of the standard test functions, GFPSMA demonstrated its superiority by achieving the theoretical best solutions, especially on the SF1–SF4, SF7, and SF8 functions, establishing itself as the best optimizer for these functions. Furthermore, GFPSMA exhibited outstanding optimization performance on other functions, such as SF5, earning it the top ranking in the evaluation. Among the other algorithms, MBFPA is only marginally inferior to GFPSMA, while ESMA trails behind. In the more complex CEC2017 function tests, GFPSMA not only leads in the number of best Mean values but also shows a particularly noticeable advantage: on functions such as CF7, CF9, CF12, and CF15, its optimization accuracy is significantly improved. By comparison, the variants of SMA and FPA achieve the best results on only a few functions and perform relatively poorly on most CEC functions. According to the Average rank, GFPSMA has the highest ranking at 1.2821, indicating the best adaptability to complex problems. In the statistical results, GFPSMA ranked first on 32 functions, MSMA on 8, TLSMA on 7, and ESMA and MBFPA on 11 and 9, respectively, while WOFPA and MFPA were not the optimal algorithm on any function. In summary, across these test functions, GFPSMA finds reasonable, high-quality results more reliably than the other algorithms and generates feasible solutions with higher accuracy, verifying its effectiveness.
Table 7 displays the test results of GFPSMA and its variant algorithms in a high-dimensional environment with a dimension of 50. The table reveals that the improvement strategy proposed in this paper remains highly effective in GFPSMA within this high-dimensional setting. The introduction of the improvement strategy significantly accelerates the convergence speed of the algorithm and swiftly narrows the search space range, establishing a robust foundation for local development to attain accurate solutions. Furthermore, the table demonstrates that GFPSMA displays superior stability and robustness compared to other comparative algorithms. The test results indicate that out of 39 test functions, GFPSMA achieved the best optimization values in 28 functions, with an average ranking of 1.4615. In contrast, the MSMA variant was optimal in 7 functions, while the optimal performance quantities for MBFPA, WOFPA, TLSMA, ESMA, and MFPA decreased successively to 6, 5, 4, 3, and 1, respectively. Furthermore, the results also indicate that GFPSMA exhibited excellent performance across types of functions, ensuring a good balance between exploration and exploitation stages, and demonstrating outstanding performance in handling complex functions with multiple local optima. This outcome illustrates that through the competitive mechanism of game theory, combined with improved global and random search, as well as the overlay of elite learning strategies, the optimization capability of GFPSMA has been significantly enhanced, enabling it to possess stronger abilities to prevent falling into local optima when solving complex problems.
Figure 7 illustrates the convergence characteristics of the GFPSMA algorithm and its comparison with other algorithms in terms of convergence speed and convergence accuracy across 18 different types of functions. The compared algorithms include other variants of FPA and SMA. By showcasing the convergence curves of 18 typical functions in a 30-dimensional space, a direct comparison of these algorithms' performance differences can be made. The convergence curves reflect the changes in the objective function values during the optimization process, thereby revealing the algorithm's convergence speed and accuracy. Among these curves, for functions such as SF5, CF9, and CF12, GFPSMA shows a significantly faster decrease in the global optimum value from the early to mid-iterations compared to other algorithms, demonstrating outstanding global search capability. Additionally, it also exhibits strong local exploitation advantages in the later iterations, showing the best convergence speed relative to other algorithms. For unimodal functions, GFPSMA shows fast convergence speed and high solution quality, indicating its ability to quickly find a single optimal solution. Importantly, even when dealing with complex multimodal, hybrid, and composite functions, GFPSMA's performance does not degrade, suggesting its good robustness for complex problems.
Figure 8 compares the performance of the GFPSMA algorithm with other benchmarked algorithms in terms of the optimal solutions for 9 CEC2017 functions across 30 independent runs. In the boxplot, the height of the boxes reflects the fluctuation of the algorithm's optimal solutions, while the bottom of the boxes represents the algorithm's optimal solution. By observing the boxplots for the listed functions, it can be seen that the box for GFPSMA is relatively narrow, indicating that its optimal solutions have smaller fluctuations, i.e., the algorithm exhibits strong stability, and thus, the differences between the optimal solutions at each generation are small. In contrast, the boxes for the other benchmarked algorithms are wider, implying larger variations in the solutions obtained from the beginning to the end of the iterations, indicating lower stability compared to GFPSMA.
5.3. Comparisons with variants of other MAs
This section presents the test results of GFPSMA and variants of other metaheuristic algorithms developed by scholars in recent years across 39 diverse functions. The algorithms compared include AGWO [59], HSCA [60], PSOsono [61], VPPSO [62], ESOA [63], and MsRwGWO [64], all of which have demonstrated their optimization capabilities in various domains.
Table 8 shows the test results of GFPSMA and the other MA variants in 30 dimensions. From the data in the table, it can be observed that GFPSMA achieved the best results on 29 of the 39 functions, covering both unimodal and multimodal types, and obtained first place in the Final rank. On the standard test functions, GFPSMA was the optimal algorithm for 9 functions, followed by VPPSO. PSOsono showed no optimal performance on this function set, while MsRwGWO was optimal on only 1, indicating the weak optimization capabilities of these two algorithms on the standard test set, with GFPSMA being the best. On the CEC2017 test set, GFPSMA was the best-performing algorithm on 20 functions, the most among all evaluated algorithms, demonstrating the overall improvement in optimization performance obtained by combining the game-theory-based dual-competition mechanism, the improved FPA and SMA, and the perturbation test on the optimal individual. Its Ave rank is 1.3077, while the Ave ranks of the other algorithms AGWO, HSCA, PSOsono, VPPSO, ESOA, and MsRwGWO are 5.7179, 4.5385, 2.9744, 3.4103, 4.5128, and 4.3846, with Final ranks of 7, 6, 2, 3, 5, and 4, respectively. This indicates that GFPSMA has significant theoretical and practical advantages over the aforementioned 6 algorithms in low dimensions.
In Table 9, it can be observed that GFPSMA also achieved the first place in the ranking in 29 functions, indicating that GFPSMA outperforms the other tested algorithms in terms of optimization performance. In the entire function test experiment, GFPSMA's Ave rank is 1.2821, with an overall ranking of first. This indicates that with the adoption of improvement measures, GFPSMA is able to improve the tendency of its FPA and SMA algorithms to fall into local optima. This suggests that as the optimization function dimension increases, the optimization capability of GFPSMA does not decrease and can achieve good optimization results.
Figure 9 presents the convergence curves of each algorithm in the 50-dimensional scenario. The purpose is to observe the convergence of GFPSMA in high-dimensional scenarios. From the figure, it can be observed that GFPSMA has the fastest convergence speed and higher convergence accuracy in various functions such as SF2, SF4, CF3, CF9, CF18, and CF30. In the other test functions, GFPSMA demonstrates convergence capabilities greater than or equal to those of other improved algorithms, consistently showing the effectiveness of GFPSMA's improvement strategies and maintaining algorithm diversity, while other algorithms are prone to falling into local optima.
The box plots presented in Figure 10 show a consistent trend in the thirty optimal solutions found by GFPSMA in the CF4, CF9, and CF28 functions, forming approximately a straight line and highlighting its outstanding and stable performance during the search process. Concurrently, the test results for other functions also demonstrate relatively lower bottom positions. Overall, GFPSMA not only can find theoretically optimal results in specific test functions but also performs well in other functions, demonstrating its universality and efficiency.
5.4. Comparisons with CEC algorithm
To highlight the superior optimization capability of GFPSMA, we compared it with competition algorithms proposed at CEC in recent years, including SHADE [65], LSHADE_SPACMA [66], and RPBSO [67]. SHADE and LSHADE_SPACMA were the winners of the CEC2013 and CEC2017 competitions, respectively, while RPBSO was a CEC2019 competition algorithm. Table 10 clearly shows that, in the 10-dimensional case, GFPSMA not only competes well with these CEC competition algorithms but also achieves a favorable overall ranking. On 9 functions, such as CF1, CF2, and CF4, GFPSMA approaches the theoretical optimal solution; on the other functions, the differences in performance metrics compared with the other algorithms are not significant. Additionally, the table shows that, in terms of comprehensive search capability in low dimensions, GFPSMA exhibits excellent search performance compared with the CEC competition algorithms, further confirming its superiority in optimization problems.
5.5. Comparisons with strategy algorithms
In order to assess the impact of each strategy on GFPSMA, this paper tests three variants against the standard FPA and SMA: GFPSMA1, the improved global strategy that uses the golden-section idea for information exchange in FPA; GFPSMA2, the improved random search strategy that incorporates the exponential-function differential mechanism into SMA; and GFPSMA3, the conditional elite learning strategy. The benchmark functions are taken from CEC2017 with a dimension of 30, and the test results are presented in Table 11.
Based on the data in Table 11, GFPSMA demonstrates the most outstanding optimization performance on 20 test functions, including CF3 and CF4. Specifically, in terms of mean optimization accuracy, GFPSMA1, GFPSMA2, and GFPSMA3 outperform the standard SMA and FPA on all functions except CF25. It is worth noting that on 5 test functions, such as CF12 and CF13, each strategy improves the optimization accuracy by one or more orders of magnitude compared with the standard FPA and SMA, while showing varying degrees of improvement on the other functions. This indicates that the strategies applied in GFPSMA enhance its stability. Comparing the improvement strategies, the average ranks of the three strategies are 4.0690, 2.2414, and 2.3793, giving overall ranks of 3rd, 1st, and 2nd, respectively. This indicates that GFPSMA2 contributes most to enhancing the optimization performance of GFPSMA, followed by GFPSMA3, with GFPSMA1 third. Considering the combined effect of these strategies, it can be concluded that GFPSMA achieves the best optimization results through their comprehensive application.
A comparative analysis between GFPSMA and widely recognized state-of-the-art random search algorithms was also performed. This serves two purposes: first, to demonstrate the degree of performance improvement of GFPSMA over SMA; second, to showcase the optimization capability of GFPSMA against a wider array of heuristic algorithms. Table 12 summarizes the comparison of GFPSMA with seven state-of-the-art algorithms, AO [4], PSO [2], HHO [68], GWO [3], DE [1], AOA [69], and SCA [70], for a problem dimension of 30. Based on the primary Mean metric, GFPSMA achieved the best overall performance with an Ave rank of 1.3793, positioning it as the top-performing algorithm. Among the other algorithms, AOA performed best, being the optimal algorithm on seven functions and ranking second overall with an Ave rank of 2.9655; the Ave ranks of the remaining algorithms were 3.6206, 6.2068, 4.6207, 3.3793, 7.8965, and 6.2758. In terms of the Best metric, GFPSMA also outperformed on most functions, showcasing its exceptional optimization performance. Furthermore, the Std metric reveals that GFPSMA is robust compared with the other seven algorithms. In conclusion, the strategy proposed in this study yields promising results.
5.6. Data statistics
The Friedman test is a statistical method used to assess whether there are significant differences among multiple samples from different populations, as shown in Eq (26). This method is used in this paper to determine whether the performance of GFPSMA on the CEC2017 function test set differs from that of the other algorithms in a statistically significant way.
where Ff is the Friedman test statistic and k is the number of algorithms being compared (the statistic has k−1 degrees of freedom).
In Table 13, the p-value is an important indicator for this test method. Typically, a p-value less than 0.05 indicates a significant difference in the optimization results of the algorithms being evaluated. From the table, it can be observed that the p-value for GFPSMA is consistently less than 0.05 across different dimensions and algorithm groups, indicating that its performance differs significantly from other algorithms in a statistical sense. This significant difference may be attributed to the mixed strategy proposed in this paper.
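The Friedman statistic can be computed directly from the per-function rankings. The sketch below uses the standard form of the statistic, which is assumed (not quoted) to match Eq (26); ties receive averaged ranks:

```python
import numpy as np

def rank_row(row):
    """Rank a list ascending (1 = best), averaging tied values."""
    order = sorted(range(len(row)), key=lambda j: row[j])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1                                  # extend the tied run
        avg = (i + j) / 2.0 + 1.0                   # average rank of the run
        for m in range(i, j + 1):
            ranks[order[m]] = avg
        i = j + 1
    return ranks

def friedman_statistic(results):
    """Standard Friedman chi-square statistic.
    results: (n problems) x (k algorithms) array of mean errors, lower is better."""
    n, k = results.shape
    avg_rank = sum(np.array(rank_row(list(row))) for row in results) / n
    return 12.0 * n / (k * (k + 1)) * (np.sum(avg_rank ** 2) - k * (k + 1) ** 2 / 4.0)

# 4 problems, 3 algorithms; the first algorithm always ranks best
res = np.array([[1.0, 2.0, 3.0]] * 4)
print(friedman_statistic(res))   # -> 8.0
```

The resulting statistic is then compared against the chi-square (or F) distribution to obtain the p-values reported in Table 13.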
5.7. Analysis of diversity, exploration, and exploitation about GFPSMA
5.7.1. Analysis of diversity
Hussain et al. [71] suggest that the dispersion or aggregation of a population in a given space can be inferred from the differences in individual dimensions. A large difference between individuals indicates a phase of global exploration or diversification, whereas a small difference leads to the population converging to a small range, indicating a phase of local exploitation or intensification. Mean, variance, and other indicators of solutions cannot help in understanding the algorithm's search behavior. Therefore, Hussain proposes a method for measuring the diversity of an algorithm, as shown in Eq (28):
where median(Xj) represents the median of dimension j, Xji is the value of the ith individual in dimension j, and N is the population size.
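This diversity measure can be sketched as the per-dimension mean absolute deviation from the dimension median, averaged over dimensions (the averaging over dimensions is a common convention in Hussain's formulation and is assumed here):

```python
import numpy as np

def population_diversity(pop):
    """Diversity in the spirit of Eq (28): mean absolute deviation from
    the per-dimension median, averaged over all dimensions."""
    med = np.median(pop, axis=0)                    # median(X_j) per dimension
    div_j = np.mean(np.abs(pop - med), axis=0)      # Div_j for each dimension j
    return float(np.mean(div_j))                    # average over Dim

clustered = np.ones((30, 5))                        # fully converged population
spread = np.linspace(-100, 100, 30)[:, None] * np.ones((1, 5))
assert population_diversity(clustered) == 0.0       # no diversity once converged
```

A large value signals a dispersed, exploring population; a value near zero signals convergence into a small region.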
Figure 11 depicts the diversity curves of the GFPSMA algorithm for 6 complex CEC2017 functions of dimension 30, achieved through the introduction of multiple improvement strategies. From the figure, it can be observed that the multimodal functions CF14, hybrid function CF20, and composition function CF29 maintain good population diversity throughout the entire iteration process until the diversity weakens during local search. This is attributed to the combination of the strengths of the FPA and SMA algorithms, along with the improvements proposed in this paper. For other complex functions, the decrease in diversity in the middle stage may be due to the rapid localization of the precise solution range in the early stage, thus accelerating the local exploitation.
5.7.2. Analysis of exploration and exploitation
The previous section discussed the issue of algorithm diversity. This section is a continuation of the previous one, focusing on the balance between global exploration and local exploitation of algorithms. To address this, scholar Hussain also devised corresponding mathematical methods as shown in Eqs (29) and (30) in reference [71]:
where Xpl% and Xpt% represent the exploration and exploitation percentages, respectively, with Divmax being the maximum diversity of the population individuals over the run.
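Given a recorded diversity history, the two percentages follow directly; the sketch below assumes the usual forms Xpl% = Div/Divmax × 100 and Xpt% = |Div − Divmax|/Divmax × 100 for Eqs (29) and (30):

```python
def xpl_xpt(div_history):
    """Exploration/exploitation percentages from a per-iteration
    diversity history; Divmax is the maximum diversity over the run."""
    div_max = max(div_history)
    xpl = [100.0 * d / div_max for d in div_history]
    xpt = [100.0 * abs(d - div_max) / div_max for d in div_history]
    return xpl, xpt

# a diversity history shrinking over iterations, as in a converging run
xpl, xpt = xpl_xpt([4.0, 2.0, 1.0, 0.5])
assert all(abs(a + b - 100.0) < 1e-9 for a, b in zip(xpl, xpt))
```

Since diversity never exceeds Divmax, the two percentages are complementary and sum to 100 at every iteration.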
Figure 12 illustrates the calculation results of the exploration and exploitation percentages of GFPSMA on 6 functions in CEC2017 with a dimension of 30. From the figure, it can be observed that the algorithm exhibits different exploration and exploitation ratios in different complex functions. In CF9, CF11, and CF27, the exploitation percentage is higher, indicating that the algorithm can quickly complete the global exploration process in similar problems, focusing on local search. In contrast, CF14, CF20, and CF29 show a relatively higher ratio of global exploration, indicating that these problems possess a strong complexity and require more computational effort to locate precise solutions. In summary, GFPSMA is capable of obtaining appropriate exploration and exploitation ratios when dealing with complex problems, demonstrating its strong adaptability and ability to achieve a good balance between global and local exploration.
6.
Practical engineering optimization problems
Researchers generally consider solving real engineering problems, rather than test functions alone, to be an effective means of validating the practicality and rationality of metaheuristic algorithms. In this section, we selected four real-world engineering application problems: the pressure vessel design problem (PVD) [72], the three-bar truss design problem (TBTD) [4], the welded beam design problem (WBD) [4], and the robot gripper problem [63]. The evaluated algorithms include HHO [68], MSMA [33], GWO [3], AO [4], HSCA [60], SMA [12], and AOA [69], used to verify the practicality of the GFPSMA algorithm. The algorithm proposed in this paper follows an approach similar to the studies cited in references [53,54,55], utilizing engineering problems to validate performance on real-world optimization.
6.1. Pressure vessel design problem
The objective of the pressure vessel design problem (PVD) is to minimize the total cost of the vessel while meeting the specified requirements. The problem involves four variables: the length of the cylindrical section (L), the inner radius (R), the thickness of the cylindrical shell (T), and the thickness of the head (H). The design is illustrated in Figure 13. The objective function and four optimization constraints are presented in Appendix A1.
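For readers who wish to reproduce the experiment, a minimal sketch of the PVD objective and constraints follows, using the standard formulation from the literature (the constants and bounds are the commonly used ones and may differ slightly from Appendix A1). The `penalized` helper is a hypothetical static-penalty wrapper of the kind typically paired with metaheuristics, not part of the original paper:

```python
import math

def pvd_cost(x):
    """Pressure vessel cost (standard formulation): x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def pvd_constraints(x):
    """Constraint values g_i(x); g_i <= 0 is feasible, positive entries mark violations."""
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,                                          # shell thickness vs. radius
        -Th + 0.00954 * R,                                         # head thickness vs. radius
        -math.pi * R**2 * L - (4/3) * math.pi * R**3 + 1_296_000,  # minimum volume
        L - 240,                                                   # length limit
    ]

def penalized(x, mu=1e6):
    """Static-penalty objective: feed this scalar to any metaheuristic."""
    return pvd_cost(x) + mu * sum(max(0.0, g) ** 2 for g in pvd_constraints(x))
```

A well-known near-optimal design is x = [0.8125, 0.4375, 42.0984, 176.6366], whose cost is roughly 6059.7; comparing against Table 14 then only requires replacing the search routine.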
From Table 14, it can be observed that GFPSMA exhibits the lowest Mean value among the evaluated algorithms, with a value of 6363.310, ranking first and achieving the minimum cost.
6.2. Three-bar truss design problem
The three-bar truss design problem is a classic structural optimization problem in civil engineering, involving two design variables; its objective function and four optimization constraints are indicated in Appendix A2. The design is illustrated in Figure 14.
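A minimal sketch of the commonly used TBTD formulation (member length l = 100 cm, load P = 2 kN/cm², allowable stress sigma = 2 kN/cm², as in the standard literature version; Appendix A2 may differ in details):

```python
import math

def tbt_weight(x, l=100.0):
    """Three-bar truss weight: x = [A1, A2] are the cross-sectional areas."""
    A1, A2 = x
    return (2 * math.sqrt(2) * A1 + A2) * l

def tbt_constraints(x, P=2.0, sigma=2.0):
    """Stress constraints g_i(x) <= 0 on the three members."""
    A1, A2 = x
    d = math.sqrt(2) * A1**2 + 2 * A1 * A2
    return [
        (math.sqrt(2) * A1 + A2) / d * P - sigma,   # member 1
        A2 / d * P - sigma,                         # member 2
        1.0 / (A1 + math.sqrt(2) * A2) * P - sigma, # member 3
    ]
```

The literature optimum for this version sits near A1 ≈ 0.78867, A2 ≈ 0.40825 with weight ≈ 263.8958, on the boundary of the first stress constraint.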
Table 15 presents the optimization results of each algorithm. GFPSMA demonstrates the lowest Mean value among the evaluated algorithms, with a value of 250.690, ranking first and achieving the most effective optimization.
6.3. Welded beam design problem
The objective of the welded beam design problem is to minimize fabrication cost by optimizing four variables: weld thickness (h), length of the welded joint (l), bar height (t), and bar thickness (b). The objective function and four optimization constraints are presented in Appendix A3. The design is illustrated in Figure 15.
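A sketch of the standard WBD formulation from the literature (the classical version carries five inequality constraints on shear stress, bending stress, geometry, buckling, and deflection; the constants below are the usual ones and may differ from Appendix A3):

```python
import math

def wbd_cost(x):
    """Welded beam fabrication cost: x = [h, l, t, b]."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def wbd_constraints(x, P=6000.0, L=14.0, E=30e6, G=12e6,
                    tau_max=13600.0, sigma_max=30000.0, delta_max=0.25):
    """Standard constraint set, each g_i(x) <= 0."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2) * h * l)                        # primary shear
    M = P * (L + l / 2)                                       # bending moment at weld
    R = math.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * (math.sqrt(2) * h * l * (l**2 / 12 + ((h + t) / 2) ** 2))
    tau_pp = M * R / J                                        # torsional shear
    tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * l / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (b * t**2)                            # bending stress
    delta = 4 * P * L**3 / (E * t**3 * b)                     # tip deflection
    Pc = (4.013 * E * math.sqrt(t**2 * b**6 / 36) / L**2
          * (1 - t / (2 * L) * math.sqrt(E / (4 * G))))       # buckling load
    return [tau - tau_max, sigma - sigma_max, h - b, P - Pc, delta - delta_max]
```

The widely reported optimum lies near x = (0.20573, 3.47049, 9.03662, 0.20573) with cost ≈ 1.7249, which gives a sense of the scale of the values in Table 16.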
From Table 16, it can be observed that GFPSMA exhibits the lowest Mean value among the evaluated algorithms, ranking first and achieving the best optimization result.
6.4. Robot gripper problem
The optimization problem for the robot gripper involves minimizing the force difference applied to the gripper. It consists of seven main parameters: a, b, and c represent the lengths of the linkages, while e denotes the displacement, f the vertical length, l the horizontal length, and δ the angle, as shown in Figure 16. The objective function and four optimization constraints are presented in Appendix A4.
From Table 17, it can be observed that GFPSMA achieves the same lowest force difference as MSMA, HSCA, and SMA, namely 3.046, outperforming HHO and GWO.
7.
Conclusions and future work
To address the limitations that a single algorithm faces in solving large-scale global optimization problems due to its own principles and mechanisms, this paper proposes a hybrid algorithm based on FPA and SMA, referred to as GFPSMA. The algorithm incorporates three improvement methods and a fusion strategy: the golden-section information-exchange concept addresses the global exploration issue of FPA; an adaptive step size with exponential properties resolves the random exploration problem of SMA; a conditional elite learning strategy enhances the optimal individuals in the population; and a game-inspired competitive mechanism governs the integration of the two algorithms. We evaluated the numerical optimization performance of GFPSMA on 39 functions drawn from the CEC2017 test suite and standard benchmark functions. The experimental results demonstrate that GFPSMA exhibits strong search performance, and its feasibility is further illustrated on four real-world engineering problems.
As living standards rise, public demand for health care continues to grow, making the medical field an important research direction. No single algorithm can satisfy all application scenarios. In future research, different hybrid strategies will be explored to further enhance the algorithm's optimization capability, and the application of GFPSMA to specific medical tasks, such as medical image segmentation, will be investigated to broaden its scope of applicability.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Nos. 62272418 and 62102058), the Basic Public Welfare Research Program of Zhejiang Province (No. LGG18E050011), the Major Open Project of the Key Laboratory for Advanced Design and Intelligent Computing of the Ministry of Education (No. ADIC2023ZD001), and the Science and Technology Research Project of the Jiangxi Provincial Education Department (No. GJJ2203314).
Conflict of interest
The authors declare there is no conflict of interest.
Appendix
A1. Pressure vessel design problem
Consider: x = [x1, x2, x3, x4] = [T, H, R, L]
Minimize: f(x) = 0.6224x1x3x4 + 1.7781x2x3^2 + 3.1661x1^2x4 + 19.84x1^2x3
Subject to: g1(x) = −x1 + 0.0193x3 ≤ 0; g2(x) = −x2 + 0.00954x3 ≤ 0; g3(x) = −πx3^2x4 − (4/3)πx3^3 + 1,296,000 ≤ 0; g4(x) = x4 − 240 ≤ 0
Parameters range: 0 ≤ x1, x2 ≤ 99; 10 ≤ x3, x4 ≤ 200
A2. Three-bar truss design problem
Consider:
Minimize:
Subject to:
Parameters range:
A3. Welded beam design problem
Consider:
Minimize:
Subject to:
Parameters range:
where,
A4. Robot gripper problem
Consider:
Minimize:
Subject to:
Parameters range: