1. Introduction
With the development of technology and society, more and more highly complex and challenging practical optimization problems need to be solved. Most traditional optimization methods are based on gradient or derivative information, such as Newton's method [1,2] and the conjugate gradient method [3]; they have the advantages of theoretical soundness and fast convergence and can be used to solve optimization problems in some engineering fields. However, these methods tend to rely on problem-specific characteristics, are difficult to apply to a large number of practical problems, and easily become trapped in local optima when used to solve complex, highly nonlinear, multi-peak problems [4]. To overcome these problems, metaheuristic optimization algorithms were introduced, which can find optimal or near-optimal solutions of complex functional and real-world problems through an iterative process. Unlike traditional methods, metaheuristic algorithms have a stochastic, gradient-free mechanism, require minimal mathematical analysis, and use only inputs and outputs to solve the optimization problem [5]. This is one of the fundamental advantages of metaheuristic algorithms, giving them a high degree of flexibility in solving various problems.
Metaheuristic optimization methods can be divided into three categories according to their underlying principles: evolution-based, physics-based, and swarm intelligence-based [6]. Evolution-based algorithms simulate Darwinian biological evolution and mainly include the Genetic Algorithm (GA) [7] and Differential Evolution (DE) [8]. Physics-based algorithms are inspired by the laws of physics and mainly include Simulated Annealing (SA) [9], the Quantum Search Algorithm (QSA) [10], Big Bang-Big Crunch (BBBC) [11], the Artificial Chemical Reaction Optimization Algorithm (ACROA) [12], the Lightning Search Algorithm (LSA) [13], the Multi-Verse Optimizer (MVO) [14], Heat Transfer Search (HTS) [15], Atom Search Optimization (ASO) [16], and the Equilibrium Optimizer (EO) [17]. Swarm intelligence-based algorithms simulate the collaborative behavior of natural biological swarms. Representative algorithms include Particle Swarm Optimization (PSO) [18], the Artificial Bee Colony Algorithm (ABC) [19], Teaching-Learning-Based Optimization (TLBO) [20], the Grey Wolf Optimizer (GWO) [21], the Whale Optimization Algorithm (WOA) [22], the Salp Swarm Algorithm (SSA) [23], Social Spider Optimization (SSO) [24], the Seagull Optimization Algorithm (SOA) [25], the Marine Predators Algorithm (MPA) [26], Harris Hawks Optimization (HHO) [27], Bald Eagle Search (BES) [28], the Slime Mould Algorithm (SMA) [29], the Chameleon Swarm Algorithm (CSA) [30], and so on.
It is worth noting that, according to the No Free Lunch (NFL) theorem [31], no algorithm performs well on all problems; each algorithm has its own strengths and weaknesses, so different algorithms are suited to different real-world problems. As a result, applying improved algorithms to specific problems has become a hot topic of current research. For example, Zhang et al. [32] proposed a state transition simulated annealing algorithm (STASA) that introduces a new elementary breakpoint operator and neighborhood search structure into SA to solve the multiple traveling salesman problem, and experimental results show that the improved algorithm outperforms other state-of-the-art algorithms. Yu et al. [33] proposed a performance-guided JAYA (PGJAYA) algorithm for extracting parameters of different PV models; the performance of PGJAYA was evaluated on a standard dataset of three PV models, and the results showed that PGJAYA has excellent performance. Fan et al. [34] proposed an improved Harris Hawks Optimization algorithm based on neighborhood centroid opposition-based learning (NCOHHO), which was applied to feedforward neural network training and achieved good results in classification applications.
The Slime Mould Algorithm (SMA) is a metaheuristic algorithm inspired by slime mould oscillation, proposed by Li et al. in 2020 [29]. It has been applied in many fields in less than two years because it simulates the unique oscillatory foraging behavior of the slime mould and has superior performance. For example, Ewees et al. [35] applied a hybrid of the firefly algorithm (FA) and SMA (SMAFA) to feature selection (FS). Abdel-Basset et al. [36] applied a binary SMA (BSMA) to the FS problem. Abdel-Basset et al. [37] proposed a hybrid threshold-based method (HSMA_WOA) to overcome the image segmentation problem (ISP) for chest X-ray images of COVID-19. Zhao et al. [38] proposed a Renyi's-entropy multi-threshold image segmentation method based on an improved slime mould algorithm (DASMA). Naik et al. [39] applied an improved SMA (LSMA) to the ISP. Yousri et al. [40] proposed a novel hybrid of the marine predator algorithm (MPA) and SMA (HMPA) to solve the ISP. Mostafa et al. [41] applied SMA to the single-diode and dual-diode models of photovoltaic cells. El-Fergany [42] studied the performance of SMA and its improved version (ImSMA) in photovoltaic parameter extraction. Liu et al. [43] proposed an SMA that integrates the Nelder-Mead simplex strategy and chaotic mapping to identify photovoltaic solar cell parameters. Kumar et al. [44] applied SMA to the parameter extraction of photovoltaic cells and demonstrated its superiority. Agarwal et al. [45] applied SMA to the path planning and obstacle avoidance problem of mobile robots. Rizk-Allah et al. [46] proposed a chaos-opposition-enhanced SMA (CO-SMA) to minimize the energy costs of wind turbines at high-altitude sites. Hassan et al. [47] proposed an improved version of SMA (ISMA) and applied it to efficiently solve the economic and emission dispatch (EED) problem with single and dual objectives, comparing it with five algorithms on five test systems. Wei et al. [48] proposed an improved SMA (ISMA) for the optimal reactive power dispatch (ORPD) problem in power systems and achieved better results than well-known algorithms on the IEEE 57-bus, IEEE 118-bus, and IEEE 300-bus test systems. Abdollahzadeh et al. [49] proposed a binary version of SMA to solve the 0-1 knapsack problem. Zubaidi et al. [50] combined SMA with an artificial neural network for urban water demand prediction. Chen et al. [51] combined K-means clustering and a chaotic SMA with support vector regression to obtain higher prediction accuracy. Ekinci et al. [52] applied SMA to power system stabilizer design (PSSD). Wazery et al. [53] combined SMA and K-nearest neighbor in a disease classification and diagnosis system. Premkumar et al. [54] proposed a multi-objective version of SMA (MOSMA) for solving complex real-world multi-objective engineering optimization problems, with better performance than other well-known multi-objective algorithms. Yu et al. [55] proposed an improved SMA (WQSMA), which uses a quantum rotation gate (QRG) and a water cycle operator to improve the robustness of the original SMA and thus balance exploration and exploitation; its effectiveness was verified on CEC2014 and three engineering problems. Houssein et al. [56] proposed a hybrid of SMA and the adaptive guided differential evolution (AGDE) algorithm, namely SMA-AGDE, which combines SMA's exploitation ability with AGDE's exploration ability; its effectiveness was verified on CEC2017 and three engineering design problems.
As mentioned above, many scholars have improved SMA only for specific problems, and the generalization ability of the proposed algorithms has yet to be tested. Yu et al. [55] and Houssein et al. [56] used QRG and AGDE, respectively, to enhance the exploration ability of SMA and achieved good results. In this paper, a novel improved slime mould algorithm, DTSMA, based on a dominant swarm and nonlinear adaptive t-distribution mutation, is proposed, drawing on the improvement experience of WQSMA and SMA-AGDE. The dominant swarm enhances the exploitation ability of SMA, and the t-distribution mutation enhances its exploration ability. To further improve the exploitation ability of SMA, a new exploitation formula is added to DTSMA. The main contributions of this paper are as follows.
(1) It is verified that the dominant swarm strategy can improve the convergence rate of SMA.
(2) The proposed nonlinear adaptive t-distribution mutation mechanism expands the search range of SMA during the iterative process, increases the diversity of search agents, improves the global search ability of SMA, and helps avoid falling into local optima.
(3) The proposed new exploitation mechanism is effectively combined with the original exploitation mechanism of SMA.
(4) The DTSMA is compared with other advanced metaheuristic algorithms on CEC2019, and the advantages of DTSMA in convergence speed and solution accuracy are verified.
(5) The performance of DTSMA is tested on eight classical engineering application problems and the inverse kinematics problems of a 7-DOF robot manipulator.
In this paper, the CEC2019 functions and eight constrained engineering design problems are selected as test cases and compared with twenty-two well-known algorithms on CEC2019 and with SMA and improved algorithms in the literature on engineering instances. Experimental results show that DTSMA has strong search ability and can obtain better solutions than most algorithms under the condition that constraints are satisfied.
The rest of this paper is organized as follows: Section 2 briefly describes the principle and characteristics of SMA. Section 3 describes the principle of DTSMA and its differences from SMA in detail. Section 4 presents the experimental configuration, the comparative experimental results on the CEC2019 functions, and their statistical analysis. In Section 5, DTSMA is used to optimize eight engineering problems, i.e., the three-bar truss, cantilever beam, pressure vessel, tension/compression spring, welded beam, speed reducer, multi-disc clutch brake, and car side crash problems. In Section 6, DTSMA is used to solve the inverse kinematics problems of a 7-DOF robot manipulator. Section 7 presents the discussion, conclusions, and future work.
2. Slime mould algorithm (SMA)
2.1. Inspiration
SMA is an interesting swarm-based metaheuristic algorithm proposed by Li et al. in 2020 [29]. It simulates the foraging behavior and morphological changes of slime mould to find the best solution. The slime mould relies mainly on propagating waves generated by a biological oscillator to modify the cytoplasmic flow in its veins, approaching food of higher concentration, then surrounding it and secreting enzymes to digest it.
2.2. Mathematical model
During the foraging process of slime mould, individuals can approach the food based on the odor in the air. The greater the concentration of food odor, the stronger the bio-oscillator wave, the faster the cytoplasmic flow, and the thicker the vein-like tubes formed by the slime mould. The mathematical model for updating the location of slime mould is as Eq. (1).
where LB and UB denote the lower and upper bounds of the search range, rand and r denote random numbers in [0, 1], z is a parameter that the original authors, after extensive experiments, suggested setting to 0.03, →Xb indicates the location with the highest food-odor concentration found so far, →vb and →vc are control parameters, →vb takes values in [−a, a], →vc decreases linearly from 1 to 0 with the number of iterations t, →W indicates the thickness of the vein-like tubes formed by the slime mould, →XA and →XB are the positions of two randomly selected agents in the population, and →X indicates the current position of the slime mould.
The value of p is calculated as Eq. (2).
where i ∈ 1, 2, ..., n, S(i) denotes the fitness of →X, and DF denotes the best fitness obtained so far.
The value of a in the range of →vb is calculated as Eq. (3).
where max_t indicates the maximum number of iterations.
The formula of →W is calculated as Eq. (4).
where condition means that S(i) ranks in the first half of the population, r denotes a random number in [0, 1], bF denotes the best fitness obtained so far in the iterative process, wF denotes the worst fitness obtained so far in the iterative process, and SmellIndex denotes the sequence of fitness values sorted in ascending order (for a minimization problem).
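The update rules of Eqs. (1)-(3) can be sketched in Python (an illustrative re-implementation following [29]; the function and variable names are ours, and the authors' own code is authoritative):

```python
import math
import random

def sma_update(Xi, Si, Xb, XA, XB, W, DF, t, max_t, LB, UB, z=0.03):
    """One SMA position update for agent Xi (sketch of Eqs. (1)-(3))."""
    dim = len(Xi)
    a = math.atanh(1 - (t + 1) / max_t)   # Eq. (3); (t+1) keeps the argument < 1
    p = math.tanh(abs(Si - DF))           # Eq. (2)
    if random.random() < z:               # first branch: random re-initialization
        return [LB + random.random() * (UB - LB) for _ in range(dim)]
    new = []
    for j in range(dim):
        r = random.random()
        if r < p:                         # second branch: explore around the best
            vb = random.uniform(-a, a)
            new.append(Xb[j] + vb * (W[j] * XA[j] - XB[j]))
        else:                             # third branch: exploit, vc shrinks to 0
            b = 1.0 - t / max_t
            vc = random.uniform(-b, b)
            new.append(vc * Xi[j])
    return new
```

The adaptive weight →W of Eq. (4) is passed in precomputed; it would be built from the sorted fitness values as described above.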
2.3. Characteristics of SMA
In the food-approach behavior of the slime mould shown in Eq. (1), the individual position →X is updated according to the best position →Xb obtained so far; fine-tuning of the parameters →vb, →vc, and →W changes the individual position, and rand allows the search agents to form a search vector at any angle.
At the beginning of SMA, the individual positions are scattered, the value of p tends to 1, and the slime mould mainly explores through the second branch of Eq. (1). As the number of iterations increases, the individual positions gradually move closer together, the vein-like tubes of the slime mould population gradually form, the individual fitness value S(i) gradually approaches the current best fitness value DF, the value of p tends to 0, and the slime mould mainly exploits through the third branch of Eq. (1). In addition, a stochastic strategy is introduced into the search process of SMA so that the algorithm maintains some exploration ability even during the exploitation phase. In SMA, the agents have no velocity settings and the population is not divided into hierarchies or subpopulations; all search agents are simply and equally moved toward or away from the current best location →Xb. Furthermore, the position is updated using only the best position obtained so far, without using the historical best position information of individuals. The pseudo-code of SMA is shown in Algorithm 1 [29], and the flow chart is presented in Figure 1.
3. Improved slime mould algorithm (DTSMA)
3.1. Dominant swarm
When solving an optimization problem, SMA does not use the individual historical best positions of the slime mould to update the solution and may therefore miss good opportunities to find the global optimum. In DTSMA, to record the individual historical best position information, the dominant swarm →Xgood and its fitness values →Sgood are defined to store the historical best information. After the position is updated, the updated position →X is compared with the corresponding position in the dominant swarm →Xgood, and a greedy selection strategy retains the better position in the dominant swarm. In the exploration phase, DTSMA uses the individual historical bests →XgoodA and →XgoodB and the population historical best →Xgoodb found so far to jointly update the search individual position →X. The formula for updating the position of the slime mould is Eq. (6).
where →Xgoodb is the best solution for the fitness value in the dominant swarm, →XgoodA and →XgoodB are two randomly selected position vectors from the dominant swarm, →vb is the random number vector with the value in [−a,a], a is calculated by Eq. (3), →W represents the adaptive weight of the slime mould individual.
SMA sorts the individual fitness values in each iteration to find the best and worst fitness. The sorting process is time-consuming; to make better use of the sorted positions and fitness values, DTSMA divides the sorted population into two subpopulations: →XgoodA is drawn from the subpopulation ranked in the top half of fitness values and →XgoodB from the other subpopulation. The indices A and B are computed as Eq. (7) and Eq. (8).
where N denotes the population size, rand denotes a random number in [0, 1], round indicates the rounding function.
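The split-half index selection described above can be sketched as follows (a minimal sketch; the exact rounding expressions of Eqs. (7)-(8) are an assumption here, only the top-half/bottom-half split is taken from the text):

```python
import random

def pick_dominant_indices(N):
    """Pick index A from the better-ranked half of the sorted dominant swarm
    and index B from the other half (illustrative form of Eqs. (7)-(8))."""
    half = N // 2
    A = round(random.random() * (half - 1))              # 0 .. half-1 (top half)
    B = half + round(random.random() * (N - half - 1))   # half .. N-1 (bottom half)
    return A, B
```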
After adding the dominant swarm, the convergence speed and solution accuracy of SMA are greatly improved, but the tendency to fall into local optima remains severe.
3.2. Mutation mechanism
SMA has strong exploitation ability but weak exploration ability; it easily falls into local optima and exhibits premature convergence. To balance exploration and exploitation, a mutation mechanism is added after the regeneration of the dominant swarm. There are many probabilistic mutation mechanisms, such as Levy flight [57,58], Gaussian mutation [49,59,60], and Cauchy mutation [61], all of which can enhance the search ability of an algorithm. Levy flight can enhance exploration and exploitation at the same time, but mainly enhances exploitation; since SMA needs stronger exploration, the Levy flight mechanism is not suitable here. For algorithms with strong exploitation ability, Gaussian mutation can enhance exploration, while for algorithms with strong exploration ability, it can enhance exploitation. In [49], an SMA based on Gaussian mutation is used to solve the 0-1 knapsack problem; since the knapsack problem is an NP-hard discrete optimization problem, improving the exploration ability of SMA is necessary there. For more general optimization problems, however, the exploitation ability in later iterations also needs attention. Cauchy mutation likewise enhances SMA's exploration ability, and with its heavier tails it does so even more strongly than Gaussian mutation. Therefore, inspired by the above literature, this paper applies to SMA a t-distribution mutation that switches between Gaussian-like and Cauchy-like behavior. The degrees of freedom of the t-distribution mutation change adaptively with the number of iterations, which balances the exploration and exploitation of SMA well: when the degrees of freedom are large, the t-distribution is close to the Gaussian distribution, and when the degrees of freedom equal 1, it is the Cauchy distribution, as shown in Eq. (9) and Figure 2.
where trnd(tn) denotes the t-distribution with degrees of freedom tn.
In DTSMA, the position of each slime mould in the dominant swarm →Xgood is perturbed using t-distribution mutation with an adaptive parameter. The t-distribution mutation operator is mathematically formulated as Eq. (10).
where →TX denotes the position vector of slime mould after t-distribution mutation, and tn denotes the degree of freedom parameter of the t-distribution.
In DTSMA, the degree of freedom parameter tn grows nonlinearly with the number of iterations t. The value of tn is calculated as Eq. (11).
The degree-of-freedom parameter tn enables DTSMA to approximate Cauchy mutation in early iterations to enhance exploration, and to approximate Gaussian mutation in late iterations to focus on exploitation. During the iterations of DTSMA, as the degrees of freedom tn increase, the algorithm gradually shifts from emphasizing global exploration to local exploitation. The t-distribution mutation operator combines the advantages of the Gaussian and Cauchy mutation operators, allowing DTSMA to achieve an excellent balance between exploration and exploitation.
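The adaptive mutation can be sketched as below. Both the mutation form of Eq. (10) (TX = Xgood + Xgood · trnd(tn), a common formulation) and the growth schedule of Eq. (11) are assumptions for illustration; a Student-t variate trnd(tn) is generated from a standard normal and a chi-square variate:

```python
import math
import random

def tn_schedule(t, max_t):
    """Illustrative nonlinear degree-of-freedom growth: tn = 1 at t = 0
    (Cauchy-like) rising to about 55 at t = max_t (Gaussian-like).
    The paper's exact Eq. (11) is not reproduced here."""
    return math.exp(4.0 * (t / max_t) ** 2)

def t_mutate(Xgood_i, tn):
    """Perturb one dominant-swarm position with a Student-t step
    (assumed form of Eq. (10))."""
    def trnd(df):
        z = random.gauss(0.0, 1.0)
        chi2 = random.gammavariate(df / 2.0, 2.0)  # chi-square with df d.o.f.
        return z / math.sqrt(chi2 / df)
    return [x + x * trnd(tn) for x in Xgood_i]
```

With df = 1 this construction reduces to a Cauchy variate, and for large df it approaches a standard normal, matching the switching behavior described in the text.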
3.3. Greedy strategy
SMA does not use greedy selection. In DTSMA, a greedy strategy is utilized in each iteration to retain the search agents with better fitness than the current ones and eliminate those with worse fitness, expressed mathematically as Eq. (12).
where S(→X) denotes the fitness of →X, →Xgood represents the position in the dominant swarm.
The use of a greedy strategy may seem to weaken the exploration performance of the algorithm, but the mutation mechanism incorporated in each iteration of DTSMA constantly performs exploration; greedy selection simply discards the individuals that fail in exploration and prepares the population more adequately for the next exploration.
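A minimal sketch of the greedy selection of Eq. (12) for a minimization problem (names are ours):

```python
def greedy_update(X, SX, Xgood, Sgood):
    """Eq. (12), minimization: keep the better of the freshly updated
    position X[i] (fitness SX[i]) and the stored dominant-swarm
    position Xgood[i] (fitness Sgood[i])."""
    for i in range(len(X)):
        if SX[i] < Sgood[i]:
            Xgood[i] = list(X[i])
            Sgood[i] = SX[i]
    return Xgood, Sgood
```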
3.4. Exploitation operator
Finally, a search operator was added in the exploitation phase of DTSMA to increase the population diversity of slime mould, and the exploitation operator was formulated as Eq. (13).
where →Xgood represents the position in the dominant swarm, →vc is a random number vector with the value in [−b,b], and b decreases linearly from 1 to 0 with the number of iterations.
This operator means that each search agent of the slime mould will eventually settle at the best position it has currently found, and in some cases an individual best may converge beyond the current global best position →Xgoodb. Based on the above principles, the mathematical formula for the position update can be organized as Eq. (14).
where q is a parameter that can be adjusted to the specific problem and takes values in [0, 1].
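The q-controlled exploitation step can be sketched as below. The branching on q follows the text, but the algebraic form used for the new operator of Eq. (13) is an assumption (a contraction toward the individual dominant-swarm position, consistent with agents "settling at the best position found"), not the paper's exact formula:

```python
import random

def exploit(Xi, Xgood_i, t, max_t, q=0.9):
    """Sketch of the exploitation phase of Eq. (14): with probability q the
    original SMA contraction vc*X is applied, otherwise the new operator
    (assumed form of Eq. (13)) pulls the agent toward its dominant-swarm
    position; b decreases linearly from 1 to 0 as stated in the text."""
    b = 1.0 - t / max_t
    vc = random.uniform(-b, b)
    if random.random() < q:
        return [vc * x for x in Xi]                                # SMA branch
    return [xg + vc * (xg - x) for x, xg in zip(Xi, Xgood_i)]      # new operator (assumed)
```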
3.5. Sensitivity analysis of parameters
When using DTSMA, two adjustable parameters z and q must be determined; the adjustment of z is the same as in SMA and can be referred to [29]. To illustrate the impact of q on solving optimization problems and to help users tune it for specific problems, values of q from 0 to 1 in steps of 0.1 were compared on the CEC2019 functions. The test results are shown in Table 1. The data in the table are the average best fitness obtained by running the algorithm 30 times on each function, together with their rank among the tested values of q. As can be seen from Table 1, the Friedman mean rank is best when q is 0.9, which also obtained the best results on five functions. This shows that the search ability of DTSMA improves significantly when q is set to 0.9. Therefore, considering the generalization ability of DTSMA, q is set to 0.9 in the following tests. In addition, for most optimization problems, q should be taken in [0.7, 0.9].
The pseudo-code of DTSMA is presented in Algorithm 2, and the flow chart is shown in Figure 3.
3.6. Computational complexity analysis
DTSMA mainly consists of the following components: initialization, fitness evaluation, dominant swarm update, t-distribution mutation, sorting, weight update, and position update. Here, N denotes the number of slime mould agents, Dim denotes the dimension of the variables, and max_t denotes the maximum number of iterations. The computational complexity of initialization is O(N∗Dim), that of the dominant swarm update and t-distribution mutation is O(N), that of sorting is O(N∗logN), and that of the weight update and position update is O(N∗Dim). Therefore, assuming that the complexity of a fitness evaluation is O(F), the total computational complexity is O(max_t∗(N∗Dim+N∗logN+F)), which is the same as SMA. The space complexity of DTSMA is O(N∗Dim).
4. Experimental results on benchmark functions
To verify the improvement, the performance of DTSMA was evaluated using the average best fitness value and its standard deviation; the results on the test functions were ranked, and the Friedman rank of each algorithm across the different test functions was counted. Then, the Wilcoxon rank-sum test was used to evaluate the differences between DTSMA and the comparison algorithms. For a fair comparison, all algorithms were set with the same common parameters: a population size of 30 and a maximum of 1000 iterations. All experiments were executed on Windows 10, and all algorithm codes were run in MATLAB R2019a on an Intel(R) Core(TM) i7-9700 CPU (3.00 GHz) with 16 GB RAM.
4.1. Benchmark functions
In this study, the test functions for the DTSMA comparison experiments are the CEC2019 functions. Their search ranges and minimum values are shown in Table 2, and 3-D maps of the 2-D functions are shown in Figure 4.
4.2. Comparison algorithm parameter setting
To test the effectiveness and efficiency, DTSMA was compared with twenty-two algorithms, including the original SMA [29], classical algorithms (i.e., PSO [18], DE [8], TLBO [20], GWO [21], WOA [22], SSA [23], MVO [14], MFO [62], ALO [63], DA [64], SCA [65]), novel algorithms (i.e., Equilibrium Optimizer (EO) [17], Bald Eagle Search (BES) [28], Harris Hawks Optimization (HHO) [27], Pathfinder Algorithm (PFA) [66], Seagull Optimization Algorithm (SOA) [25]), improved algorithms (i.e., Autonomous Groups Particle Swarm Optimization (AGPSO) [67], Gaussian Quantum-behaved Particle Swarm Optimization (GQPSO) [68], hybrid Particle Swarm Optimization and Gravitational Search Algorithm (PSOGSA) [69], Centroid Opposition-based Differential Evolution (CODE) [70]), and superior performance algorithms (i.e., Multi-trial Vector-based Differential Evolution (MTDE) [71]). The adjustable parameter settings of comparison algorithms are shown in Table 3.
4.3. Experimental results and analysis
The results are reported in Table 4 and Table 5: Table 4 shows the average best fitness obtained over 30 runs of each algorithm, and Table 5 shows the standard deviation of the 30 best fitness values. As can be seen from Table 4, DTSMA achieves the best results on F1-2 and F10 and is significantly superior to the other comparison algorithms in terms of convergence accuracy. In addition, MTDE obtained the best solution on F5-7, EO showed a clear advantage on F8-9, and PFA performed best on F3. Overall, however, DTSMA ranks first in average performance among the 23 compared algorithms, can obtain better solutions, and is far better than SMA, which indicates that the performance improvement of the proposed DTSMA is significant.
Table 5 shows that the stability of MTDE is better than that of DTSMA on the CEC2019 functions, and DTSMA is also inferior to EO and GQPSO in terms of robustness, but its robustness is much better than that of the original SMA. Therefore, the proposed DTSMA is superior to SMA in convergence accuracy and robustness, which verifies its effectiveness and efficiency. In conclusion, the Friedman mean rank shows DTSMA to be a powerful optimization algorithm with good performance not only in finding the best solution but also across most functions, and it is very competitive with MTDE and EO. Therefore, DTSMA can provide high-quality candidate solutions for complex function optimization problems and has strong generalization ability.
The convergence curves of the algorithms on the CEC2019 functions are given in Figure 5 and Figure 6. The results show that DTSMA outperforms most of the compared algorithms, especially the classical metaheuristics, in terms of convergence speed and solution accuracy. In Figure 5, DTSMA achieves the best performance on all tested functions. In Figure 6, DTSMA achieves the best performance on F1-2 and F10 and is less competitive on F4-7 and F9, especially on F7, where MTDE shows its superiority. F7 has many local optima, which easily causes algorithms to fall into local optima and converge prematurely; this indicates that MTDE outperforms DTSMA in exploration ability. On F1-2, DTSMA still has the fastest convergence speed and best solution accuracy, which indicates that DTSMA is clearly superior to MTDE in exploitation ability. Therefore, DTSMA and MTDE can be considered complementary algorithms that can be applied to different real-world optimization problems to obtain more satisfactory results.
Since boxplots illustrate the data distribution, they are excellent graphs for describing the consistency between data. To further compare the distribution states of the optimization results of DTSMA and other algorithms, the best fitness values obtained by 23 algorithms run 30 times independently on each test function are presented in the form of box plots in Figure 7. The results show that DTSMA has the smallest median, upper quartile and lower quartile, the fewest outliers, and the narrowest distribution frame in the comparison of classical algorithms.
In the comparison of advanced algorithms, DTSMA outperforms most algorithms and has strong robustness. In general, the performance of DTSMA and MTDE is the best, and the two algorithms have their own advantages for different functions respectively, which are far better than the other algorithms. Therefore, DTSMA is a good optimization algorithm in the terms of convergence accuracy and robustness.
The Wilcoxon rank-sum test [72] is used to verify whether there is a significant difference between two data sets, i.e., it evaluates whether the observed performance difference is not due to chance. Because of the stochastic nature of metaheuristic algorithms, such statistical comparison is necessary to ensure the validity of the data. The p-value indicates the confidence that there is a significant difference between the two data sets: the smaller the p-value, the higher the confidence. When p < 0.05, there is a significant difference between the two algorithms at the 95% confidence level. The results of the Wilcoxon p-value test of DTSMA against the well-known algorithms are shown in Table 6.
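For reference, a two-sided rank-sum p-value can be computed without a statistics toolbox using the normal approximation (a standard textbook construction, adequate for samples of 30 runs; this is not the paper's code):

```python
import math

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation,
    with ties handled by midranks."""
    combined = sorted((v, 0 if i < len(x) else 1)
                      for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):                 # assign midranks to tied values
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        mid = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[k] = mid
        i = j + 1
    n1, n2 = len(x), len(y)
    R1 = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0            # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    zstat = (R1 - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(zstat) / math.sqrt(2.0))))
```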
The results of the Wilcoxon p-value test show that there are few cases (shown in bold) without significant differences and that DTSMA significantly outperforms the original SMA on six functions. DTSMA is strongly competitive with EO, BES, and TLBO, demonstrating the algorithm's advantages on different functions and different optimization problems. In conclusion, DTSMA is significantly different from and outperforms SMA; the results are statistically meaningful and verify that the performance of DTSMA is not random.
5. Applicability of DTSMA for solving engineering problems
To test the generalization ability of DTSMA, it was evaluated on eight well-known constrained engineering design problems, i.e., the three-bar truss, cantilever beam, pressure vessel, tension/compression spring, welded beam, speed reducer, multi-disc clutch brake, and car side impact design problems. The optimization results of SMA and DTSMA given in the tables are the best results obtained from 30 independent runs of each algorithm with 1000 iterations and 30 individuals. These engineering design problems have various constraints and need to be optimized using constraint handling methods.
5.1. Constraint processing method
In constraint processing techniques, penalty functions are simple and easy to implement. There are different types of penalty functions, such as static, dynamic, annealing, and adaptive penalties, and these methods transform the constrained problem into an unconstrained one by adding a certain penalty value [73]. In this paper, a static penalty function was used to deal with the constraints of the engineering problem. The mathematical model of the penalty function is expressed as Eq. (15).
where O(→x) denotes the objective function, f(→x) denotes the objective function without considering the constraints, m and n denote the number of equation constraints and inequality constraints, respectively, gi(→x) and hi(→x) denote the inequality constraints and equation constraints, respectively, w denotes the penalty factor.
In this study, the penalty factor was set to 1015. The array-indexed mapping approach was used to solve for discrete and integer variables.
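The static penalty transformation of Eq. (15) can be sketched as follows (the squared-violation form and the equality tolerance eps are assumptions for illustration; the text only specifies the penalty factor w = 10^15):

```python
def penalized(f, gs, hs, x, w=1e15, eps=1e-4):
    """Static penalty, sketch of Eq. (15):
    O(x) = f(x) + w * (sum of squared violations).
    gs: inequality constraints g_i(x) <= 0; hs: equality constraints h_i(x) = 0."""
    viol = sum(max(0.0, g(x)) ** 2 for g in gs)
    viol += sum(max(0.0, abs(h(x)) - eps) ** 2 for h in hs)
    return f(x) + w * viol
```

Any infeasible point is thus pushed far above every feasible objective value, so an unconstrained optimizer such as DTSMA can be applied directly to O(x).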
5.2. Three-bar truss design problem
Three-bar truss design optimization is a nonlinear fractional optimization problem [74]. It has only two decision parameters, A1 and A2. The structure of the three-bar truss is presented in Figure 8. The mathematical formulation is defined as Eq. (16).
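As a concrete illustration, the widely used three-bar truss model can be sketched as below (assumed to match Eq. (16); the parameter values l = 100, P = 2, σ = 2 follow the standard statement of this problem in the literature):

```python
import math

def three_bar_truss(x, l=100.0, P=2.0, sigma=2.0):
    """Standard three-bar truss model (assumed form of Eq. (16); see [74]).
    x = (A1, A2), both in [0, 1]. Returns (weight, [g1, g2, g3] with g_i <= 0)."""
    A1, A2 = x
    f = (2.0 * math.sqrt(2.0) * A1 + A2) * l
    denom = math.sqrt(2.0) * A1 ** 2 + 2.0 * A1 * A2
    g1 = (math.sqrt(2.0) * A1 + A2) / denom * P - sigma
    g2 = A2 / denom * P - sigma
    g3 = 1.0 / (A2 + math.sqrt(2.0) * A1) * P - sigma
    return f, [g1, g2, g3]
```

Combined with the static penalty of Section 5.1, this yields a single unconstrained objective that DTSMA can minimize directly.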
Table 7 shows the optimal results obtained by DTSMA and other algorithms in the literature. It can be observed that DTSMA outperforms the original SMA and other comparative algorithms.
5.3. Cantilever beam design problem
The second engineering optimization problem is the cantilever beam design problem, where the main goal of this type of optimization is to reduce the weight of the beam. The structure of the cantilever beam is presented in Figure 9. The mathematical model is defined as Eq. (17) [60].
The comparative results of the different algorithms solved in the literature are shown in Table 8. It can be concluded that DTSMA and SMA are superior to the well-known comparison algorithms, and DTSMA is superior to SMA.
5.4. Pressure vessel design problem
The third engineering design problem used is the pressure vessel design problem [56]. The objective is to minimize the total cost of a cylindrical pressure vessel, including the costs of material, welding, and forming. The problem has four design variables: shell thickness (Ts), head thickness (Th), radius (R), and length of the cylindrical section (L). This problem is presented in Figure 10. The mathematical model is described as Eq. (18).
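The cost model of Eq. (18), in the form widely used in the literature [56], can be sketched as follows; the coefficients are those of the common continuous-variable version and are assumptions of this sketch.

```python
import math

def pressure_vessel(x):
    """Pressure vessel cost model: x = (Ts, Th, R, L);
    feasible when all g_i(x) <= 0."""
    Ts, Th, R, L = x
    f = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
         + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g1 = -Ts + 0.0193 * R            # shell thickness vs. radius
    g2 = -Th + 0.00954 * R           # head thickness vs. radius
    g3 = (-math.pi * R ** 2 * L      # minimum-volume requirement
          - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0)
    g4 = L - 240.0                   # length limit
    return f, (g1, g2, g3, g4)
```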
The optimization results are shown in Table 9, from which it can be concluded that the DTSMA algorithm has better performance than SMA and other comparative algorithms in solving the pressure vessel problem.
5.5. Tension/compression spring design problem
The fourth application is the tension/compression spring design problem, which requires minimizing the weight of the spring subject to constraints on minimum deflection, shear stress, and surge frequency, as well as geometric limits [89]. This problem has three continuous variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). This problem is presented in Figure 11. The mathematical model is described as Eq. (19).
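The spring model of Eq. (19), in the form standard in the literature [89], can be sketched as follows; the numerical coefficients are from that common version and are assumptions of this sketch.

```python
def spring(x):
    """Tension/compression spring model: x = (d, D, N);
    feasible when all g_i(x) <= 0."""
    d, D, N = x
    f = (N + 2.0) * D * d ** 2                      # spring weight
    g1 = 1.0 - D ** 3 * N / (71785.0 * d ** 4)      # minimum deflection
    g2 = ((4.0 * D ** 2 - d * D)                    # shear stress
          / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)            # surge frequency
    g4 = (D + d) / 1.5 - 1.0                        # geometric limit
    return f, (g1, g2, g3, g4)
```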
The optimization results are shown in Table 10, which shows that DTSMA obtains a better optimal weight than SMA.
5.6. Welded beam design problem
Another engineering design problem is the welded beam problem, which is often used as a benchmark case for testing different optimization algorithms [62]. The feasible region of this problem occupies only about 3.5% of the search space. The structure and design variables are illustrated in Figure 12. The objective of this problem is to minimize the fabrication cost subject to constraints on shear stress (τ), bending stress (σ), buckling load (Pc), deflection (δ), and others [17]. This problem has four parameters: thickness of the weld (h), length of the welded part of the beam (l), height of the beam (t), and width of the beam (b). The mathematical model is given as Eq. (20).
Table 11 presents the optimization results of DTSMA and other well-known algorithms in the literature. The results demonstrated that DTSMA excelled in solving the welded beam problem, outperforming many improved algorithms and recently proposed metaheuristic algorithms.
5.7. Speed reducer design problem
Another important engineering design problem is the speed reducer design problem, which is challenging since it involves seven design variables [30]. A graphical illustration of this problem is shown in Figure 13. The objective is to minimize the weight subject to constraints on bending stress, surface stress, lateral deflection of the shaft, and stress in the shaft. The seven design variables considered are: face width (B), module of the teeth (M), number of teeth in the pinion (N), length of the first shaft between bearings (L1), length of the second shaft between bearings (L2), and diameters of the first and second shafts (D1, D2). The third variable is an integer, while the other variables are continuous, and the feasible region occupies only about 0.4% of the search space. The mathematical formulation of this problem is given as Eq. (21).
The optimization results for DTSMA, SMA and several other algorithms are given in Table 12. The results show that the performance of DTSMA is better than that of SMA.
5.8. Multiple-disc clutch brake design problem
The multi-disc clutch brake problem is a well-known problem in engineering constrained optimization, as shown in Figure 14 [89]. Five discrete design variables are considered to minimize the weight of the multi-disc clutch brake: the inner radius (Ri), the outer radius (Ro), the thickness of the disc (Th), the driving force (F), and the number of friction surfaces (Z). There are eight different constraints based on geometry and operating conditions. The mathematical model of this optimization problem is described as Eq. (22).
Table 13 shows the results of DTSMA and other algorithms in the literature for optimizing multiple-disc clutch brakes. Both DTSMA and SMA find better results than other algorithms, indicating that DTSMA has good performance in solving discrete constraint problems.
5.9. Car side crash design problem
The car side crash optimization design problem was originally proposed by Gu et al. [102]. The task is to minimize an objective function of eleven mixed design variables subject to ten constraints, as shown in Figure 15. The simplified mathematical model of the problem can be written as Eq. (23).
The optimization results of different algorithms are given in Table 14. The results show that DTSMA outperforms PSO, GA, GOA, ABC, GWO, CODE and SMA algorithms in optimizing the car side impact design problem and can obtain satisfactory results.
5.10. Statistics analysis of engineering problems
In order to observe the distribution of the best fitness when solving engineering problems by DTSMA and SMA, the results of 30 runs are presented in the form of box plots, as shown in Figure 16. The corresponding Wilcoxon p-value test results are shown in Figure 17, where the number 1 represents three-bar truss problem, the number 2 represents cantilever beam problem, and so on.
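The p-values in Figure 17 come from a paired Wilcoxon signed-rank test over the 30 best-fitness values of the two algorithms. A minimal sketch follows; the data here are synthetic placeholders (the real inputs are the recorded DTSMA and SMA run results).

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder best-fitness samples from 30 runs of each algorithm.
rng = np.random.default_rng(0)
sma = 263.90 + rng.normal(0.0, 0.01, size=30)
dtsma = sma - np.abs(rng.normal(0.01, 0.001, size=30))  # uniformly better

stat, p = wilcoxon(dtsma, sma)  # paired two-sided test
print(f"p-value = {p:.3e}")     # p < 0.05: significant difference
```

Because the test is paired, the i-th run of each algorithm must use the same random seed or at least correspond to the same independent trial index.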
The observations from Figure 16 and Figure 17 can be summarized as follows.
(1) For the three-bar truss, cantilever beam, tension/compression spring, and welded beam engineering design problems, the boxes of DTSMA are significantly lower than those of SMA, and the Wilcoxon p-value test results are much less than 0.05, indicating that the solution accuracy and robustness of DTSMA for these four engineering problems are significantly better than SMA.
(2) For the pressure vessel, speed reducer, and car side crash engineering design problems, DTSMA performs better than SMA, but not significantly so. Although DTSMA obtains better results, it also produces more outliers and is less stable. For the multi-disc clutch brake problem, both DTSMA and SMA find the same optimal solution, because it is a discrete problem and the differences between feasible solutions are small.
(3) On the whole, DTSMA still outperforms SMA. According to the NFL theorem [31], no single algorithm can be best for all problems; nevertheless, DTSMA outperforms SMA on most engineering problems, which indicates that the improvement is meaningful.
6.
Inverse kinematics solution of 7-DOF robot manipulator
Seven-degree-of-freedom (7-DOF) robot manipulators are widely used in industry for their ability to easily avoid obstacles, move flexibly, and work in larger spaces. The inverse kinematics of a robotic arm is defined as finding the joint angle by using the kinematics equations of the desired end-effector position. Due to its complex nonlinear structure, the inverse kinematics problem can be considered as a challenging optimization problem [106].
The most widely used method for kinematic modeling of robot manipulators is the Denavit-Hartenberg (DH) parameter method. The robot manipulator model solved in this paper was proposed by Serkan et al. in [107], and its DH parameter table is listed in Table 15, where {a_i}, {\alpha _i}, {d_i}, {\theta _i} refer to link length, link twist, link offset, and joint angle, respectively. The model structure of the robotic arm can be determined from the DH parameter table, as shown in Figure 18.
The general homogeneous transformation matrix can be expressed as Eq. (24).
where {}_{i - 1}^iT is the transformation matrix relating joint i - 1 to joint i, s and c denote sine and cosine functions, respectively.
The kinematics equations of the serial robot manipulator can be obtained by substituting the values of the DH parameter in Table 15 into Eq. (24) and then multiplying them successively, as shown in Eq. (25).
where Tend-effector is the homogeneous transformation matrix of the end-effector with respect to the base frame, {n_x}, {n_y}, {n_z}, {s_x}, {s_y}, {s_z}, {a_x}, {a_y}, {a_z} denote the rotational elements of the transformation matrix, {P_x}, {P_y}, {P_z} denote the elements of the position vector of end-effector.
The main task of the inverse kinematics problem is to determine the corresponding joint angles based on the desired position of the end-effector in the Cartesian coordinate system. Thus, the objective function can be defined as minimizing the Euclidean distance between the desired position and the predicted position, as shown in Eq. (26).
where Pdesired represents the desired position vector of end-effector of robot manipulator, Ppredicted represents the predicted position vector.
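Eqs. (24)-(26) can be sketched as follows. The code assumes the standard DH convention; the two-link DH rows in the usage example are illustrative placeholders, not the Table 15 parameters.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Standard DH homogeneous transform from frame i-1 to frame i (Eq. (24))."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct],
        [ st,  ct * ca, -ct * sa, a * st],
        [0.0,       sa,       ca,      d],
        [0.0,      0.0,      0.0,    1.0],
    ])

def forward_kinematics(thetas, dh_rows):
    """Chain the link transforms (Eq. (25)); returns the end-effector pose."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_rows, thetas):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

def ik_objective(thetas, dh_rows, p_desired):
    """Euclidean distance between desired and predicted positions (Eq. (26))."""
    T = forward_kinematics(thetas, dh_rows)
    return float(np.linalg.norm(T[:3, 3] - p_desired))

# Usage with a hypothetical planar two-link arm (unit link lengths):
links = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]  # (a, alpha, d) per joint
err = ik_objective([0.0, 0.0], links, np.array([2.0, 0.0, 0.0]))
```

An optimizer such as DTSMA then searches the joint-angle vector that minimizes `ik_objective` for the desired position.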
To verify the performance of the proposed DTSMA, two different desired position vectors, {P_1} = {[-0.25, 1.00, 0.50]^{\text{T}}} and {P_2} = {[0.50, -0.25, 0.75]^{\text{T}}}, were selected for testing. DTSMA was compared with SMA, PSO, DE, GQPSO [68], CODE [70], GWO, and HHO, and each algorithm was run independently 30 times with a population of 30 individuals and 1000 iterations. Table 16 presents the numerical optimization results of DTSMA and the compared algorithms for the inverse kinematics problem with the two desired end-effector positions.
From the optimization results in Table 16, it can be seen that the solution accuracy of DTSMA is better than that of the comparison algorithms for the inverse kinematics problem, although its computation time is longer. The optimal solution of the HHO algorithm for Case 2 is better than that of DTSMA, but its average solution is worse and less stable. The convergence history of the algorithms is shown in Figure 19; DTSMA converges faster than the other comparison algorithms. The statistical results of the comparison algorithms are shown in Figure 20 and Figure 21.
It can be seen that the performance of DTSMA is significantly better than SMA and the other six comparison algorithms in optimizing the inverse kinematics problem of the robot manipulator, which reflects the applicability of DTSMA to practical problems.
7.
Conclusions and future works
In this paper, an improved slime mould algorithm, DTSMA, is proposed to address the shortcomings of the SMA: slow convergence, weak exploration ability, and a tendency to fall into local optima. In DTSMA, a dominant swarm strategy is first introduced to retain the historical optimal position of each slime mould individual. In the position updating formula of the exploration stage, both the historical optimal position of the population and the historical optimal positions of individuals are used to steer the population toward regions where the optimal solution is likely to be found. Second, by making full use of SMA's fitness ranking information, the dominant swarm is further divided into dominant and inferior sub-populations, which cooperate to make the search more extensive in the exploration stage. Then, a nonlinear adaptive t-distribution mutation strategy is introduced to perturb the dominant swarm and avoid premature convergence. Finally, an exploitation mechanism that converges toward individual historical optima is added to improve population diversity and the robustness and generalization ability of DTSMA. The effectiveness and efficiency of DTSMA in solving numerical optimization problems were tested on the CEC2019 functions. DTSMA was then applied to eight classical engineering problems and the inverse kinematics problem of a 7-DOF robot manipulator. Experimental results show that the solution accuracy of DTSMA on CEC2019 ranks first overall among 23 algorithms, significantly outperforming SMA and numerous comparative algorithms. In the eight engineering instances, DTSMA obtains better optimal solutions, far outperforming SMA on the three-bar truss, cantilever beam, tension/compression spring, and welded beam problems, and slightly outperforming SMA on the remaining four problems.
In the inverse kinematics of the robot manipulator, DTSMA significantly outperforms SMA, PSO, DE, GQPSO, CODE, GWO, and HHO in terms of solution accuracy and stability. The statistical results demonstrate that DTSMA has the following advantages.
(1) The test function results show that DTSMA converges quickly and accurately, can escape from local optima, and maintains a good balance between exploration and exploitation. The convergence curves show that the proposed method converges fast and avoids premature convergence and stagnation in local optima.
(2) The Friedman and Wilcoxon rank tests show that DTSMA performs better than SMA and other well-known algorithms, and that the differences are statistically significant.
(3) Experimental results on the engineering problems show that DTSMA is a strong choice for solving continuous and discrete constrained optimization problems as well as inverse kinematics problems of robot manipulators.
Although DTSMA overcomes many drawbacks of the original SMA, its long running time makes it unsuitable for real-time control systems. A more in-depth study on reducing the time complexity of DTSMA will be conducted in the future. DTSMA will then be applied to the inverse kinematics of robot manipulators with comprehensive consideration of both the position and posture of the end-effector. In addition, DTSMA has strong scalability and can also be applied to high- or ultra-high-dimensional problems, such as the traveling salesman problem, the job shop scheduling problem, and time series forecasting.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant No. 62066005, the Hainan Provincial Natural Science Foundation of China under Grant No. 620QN237, and the Education Department of Hainan Province under Grant No. Hnky2020-5.
Conflict of interest
The authors declare no conflict of interest.