
Unlike prior solvency prediction studies conducted in Egypt, this study aims to present a realistic picture of companies' financial performance in the Egyptian insurance market. To that end, 11 financial ratios commonly used by the NAIC, AM Best Company, and S&P Global Ratings were calculated for all property-liability insurance companies in Egypt from 2010 to 2020. These ratios were used to measure the companies' financial performance efficiency by comparing them with international standard limits. The financial analysis revealed that property-liability insurers in Egypt do not share the same level of financial performance efficiency; the companies fall into three groups: excellent, good, and poor. Furthermore, this paper applies a stepwise logistic regression model to determine which of the selected financial ratios most influence the companies' financial performance. The results suggest that only three ratios were statistically significant predictors: "Risk retention rate", "Insurance account receivable to total assets", and "Net profit after tax to total assets". Finally, this paper presents a multi-layer artificial neural network with a backpropagation algorithm as a new solvency prediction model with a perfect classification accuracy of 100%. The trained ANN predicted the next fiscal year with an accuracy of 91.67%, a favorable result compared to other solvency prediction models used in Egypt.
Citation: Eid Elghaly Hassan, Diping Zhang. The usage of logistic regression and artificial neural networks for evaluation and predicting property-liability insurers' solvency in Egypt[J]. Data Science in Finance and Economics, 2021, 1(3): 215-234. doi: 10.3934/DSFE.2021012
Optimization refers to the process of finding the best combination of a set of decision variables to solve a specific problem. For the complex optimization problems emerging in fields such as engineering, economics, and medicine, it is not easy to find the global optimum with traditional methods of mathematical optimization. Swarm intelligence optimization algorithms, which simulate the behavior of natural organisms, have, however, successfully solved many complex optimization problems [1,2,3,4]. With a deepening understanding of biological organisms, researchers have successively developed a series of swarm intelligence optimization algorithms, such as Particle Swarm Optimization (PSO) [5], Ant Colony Optimization (ACO) [6], Artificial Bee Colony (ABC) [7], Firefly Algorithm (FA) [8], Grey Wolf Optimizer (GWO) [9], Seagull Optimization Algorithm (SOA) [10], and Slime Mould Algorithm (SMA) [11].
Mirjalili et al. [12] proposed the Salp Swarm Algorithm (SSA), a new swarm intelligence optimization algorithm, in 2017. Its optimization idea comes from the chain-foraging mechanism of salp swarms in the ocean. Since its introduction, SSA has attracted extensive attention from scholars because of its simple principle and easy implementation. The algorithm has been widely used in fields such as feature extraction [13,14], image segmentation [15,16], dispatch optimization [17], and node localization [18].
In essence, SSA is a random search optimization algorithm. It suffers from low accuracy in the later iterations and easily gets stuck at local optima. As a meta-heuristic, its search behavior divides into two main phases: exploration and exploitation. In the exploration phase, it discovers the search space efficiently, mostly through randomization, but may face abrupt changes. In the exploitation phase, it converges toward the most promising region. However, SSA often becomes trapped in local optima because of its stochastic nature and the lack of balance between exploration and exploitation. Many studies have therefore been presented to improve the performance of SSA and overcome these defects.
Scholars have applied single-strategy improvements to enhance the performance of SSA. Sayed et al. [19] replaced the random parameter with a chaotic mapping sequence, which significantly improved the convergence rate and resulting precision of SSA. Abbassi et al. [20] proposed an Opposition-based Learning Modified Salp Swarm Algorithm (OLMSSA) for the accurate identification of circuit parameters. Singh et al. [21] updated the positions of the salp swarm with sine-cosine operators to enhance exploration and exploitation capability. Syed et al. [22] proposed a weighted-distance position update, the Weighted Salp Swarm Algorithm (WSSA), to enhance the performance and convergence rate of SSA. Singh et al. [23] proposed a hybrid SSA-PSO algorithm (HSSAPSO) that adds the velocity update of particle swarm optimization to the salp position-update stage to avoid premature convergence to a suboptimal solution in the search space.
Moreover, as research and application of SSA have developed, multi-strategy improvements have been adopted to enhance it, with good results. Zhang et al. [24] used Gaussian Barebone and stochastic fractal search mechanisms to balance the global and local search abilities of the basic SSA. Liu et al. [25] proposed a new SSA-based method named MCSSA, in which the structure of SSA is rearranged using a chaos-assisted exploitation strategy and a multi-population foundation to enhance its performance. Zhang et al. [26] proposed an ensemble composite mutation strategy to boost the exploitation and exploration trends of SSA, as well as a restart strategy to help salps escape local optima. Zhao et al. [27] made an improvement of SSA called AGSSA, in which an adaptive control parameter is introduced into the position-update stage of the followers to boost the local exploitative ability of the population, and an elite grey wolf domination strategy is introduced in the last stage of the position update to help the population find the global optimum faster. Zhang et al. [28] presented a chaotic SSA with differential evolution (CDESSA); in this framework, chaotic initialization produces a better initial population aimed at locating a better global optimum, and differential evolution builds up the search capability of each agent. Xia et al. [29] proposed QBSSA, in which an adaptive barebones strategy helps reach both fast convergence and high solution quality, while quasi-oppositional-based learning keeps the population from being trapped in local optima and expands the search space. Zhang et al. [30] proposed an enhanced SSA (ESSA), which improves the performance of SSA by embedding strategies such as orthogonal learning, quadratic interpolation, and generalized oppositional learning.
Although the basic SSA enjoys characteristics such as fast convergence and simple implementation, it may easily become trapped at sub-optimal solutions when handling more complex optimization problems. Improved SSA variants have been provided by the scholars mentioned above, but each algorithm has its own merits and drawbacks. Hence, according to the "No free lunch" theorem [31], no algorithm is guaranteed to be best suited to a specific problem. Practical applications place special requirements on the accuracy or convergence speed of the algorithm, so it is necessary to adopt further strategies.
In the basic SSA, when the whole swarm of salps falls into a sub-optimal solution, the algorithm becomes trapped there and eventually stagnates. We therefore propose a dimension-by-dimension centroid opposition-based learning strategy that gives the salp population a wider search space. Moreover, a random factor is used to increase the randomness of the population distribution, and PSO's social learning strategy is added to speed up convergence.
The main contributions of this paper are as follows:
1) An improved algorithm which combines dimension-by-dimension centroid opposition-based learning, random factor, and PSO's social learning strategy (DCORSSA-PSO) is proposed.
2) The performance of the proposed DCORSSA-PSO is verified by comparing it with several well-known algorithms on benchmark functions.
3) The proposed DCORSSA-PSO is applied to the design of system reliability optimization based on the T-S fault tree and achieves good results.
The remainder of this article is structured as follows. Section 3 introduces the basic principles of SSA. Section 4 introduces the mathematical principles of dimension-by-dimension centroid opposition-based learning, the added random factor, and PSO's social learning, and proposes DCOSSA, DCORSSA, and DCORSSA-PSO. Section 5 contains the simulation experiments and result analysis. In Section 6, the efficacy of the proposed DCORSSA-PSO is assessed on an engineering design problem of system reliability optimization. Finally, conclusions and future work are summarized in Section 7.
The salp swarm algorithm, proposed by Mirjalili et al. in 2017 [12], is a heuristic swarm intelligence optimization algorithm that simulates the navigating and foraging behavior of salps. The salp chain consists of two types of salps: leaders and followers. The leader is the salp at the head of the chain, and the other salps are considered followers. To enhance the population diversity of the algorithm and its ability to jump out of local optima, half of the salps are selected as leaders.
The position update equation of the leader is as follows [32]:
x_j^i = \begin{cases} F_j + c_1\left((ub_j - lb_j)c_2 + lb_j\right), & 0 \le c_3 < 0.5 \\ F_j - c_1\left((ub_j - lb_j)c_2 + lb_j\right), & 0.5 \le c_3 \le 1 \end{cases} | (1) |
where x_j^i is the position of the ith leader in the jth dimension; F_j is the position of the food source in the jth dimension; ub_j and lb_j indicate the upper and lower bounds of the jth dimension, respectively; c_2 and c_3 are random numbers in the range [0, 1], which decide the step size and the moving direction (positive or negative) in the jth dimension, respectively. c_1 is the convergence factor, which balances the exploration and exploitation abilities of the algorithm over the course of the iterations. It is defined as follows:
c_1 = 2e^{-(4t/T)^2} | (2) |
where t is the current number of iterations and T is the maximum number of iterations.
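As a quick illustration, Eq (2) can be computed directly. The following Python sketch is illustrative only (the paper's experiments use MATLAB, and the function name is hypothetical); it shows how c_1 decays from about 2 at the start of a run toward 2e^{-16} at the final iteration:

```python
import math

def convergence_factor(t, T):
    """Convergence factor c1 from Eq (2): decays from ~2 toward 0
    as the current iteration t approaches the maximum T."""
    return 2.0 * math.exp(-(4.0 * t / T) ** 2)
```

At t = 0 the factor is exactly 2, and it decreases monotonically, shrinking the leaders' step size so the search shifts from exploration to exploitation.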
The position update equation of followers is as follows:
x_j^i = \frac{1}{2}\left(x_j^i + x_j^{i-1}\right) | (3) |
where i ≥ 2 and x_j^i is the position of the ith follower in the jth dimension. Eq (3) shows that the ith follower's position in the jth dimension is updated to the midpoint of the previous generation's ith and (i−1)th salps in that dimension.
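The two update rules (1) and (3) can be sketched per dimension as follows; this is a minimal Python illustration (function names are hypothetical, and the experiments in the paper were run in MATLAB):

```python
import random

def leader_update(F_j, lb_j, ub_j, c1):
    """Leader position update, Eq (1): step away from the food
    source F_j by a random scaled amount; the random number c3
    decides the direction."""
    c2, c3 = random.random(), random.random()
    step = c1 * ((ub_j - lb_j) * c2 + lb_j)
    return F_j + step if c3 < 0.5 else F_j - step

def follower_update(x_ij, x_prev_j):
    """Follower position update, Eq (3): midpoint of the salp's own
    previous position and that of its predecessor in the chain."""
    return 0.5 * (x_ij + x_prev_j)
```

Note that with c_1 = 0 (the end of a run) the leader collapses onto the food source, which is exactly the late-iteration exploitation behavior described above.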
Although SSA reaches good accuracy compared with recent meta-heuristics, it may still get trapped in local optima and is not well suited to highly complex optimization functions. To extend the search capability of SSA, a new hybrid salp swarm algorithm based on a dimension-by-dimension centroid opposition-based learning strategy, a random factor, and PSO's social learning strategy (DCORSSA-PSO) is proposed to solve engineering problems.
Opposition-based learning is a learning strategy proposed by Tizhoosh in 2005 [33]. The principle is that the current solution and its opposition-based counterpart are evaluated simultaneously during the population iteration, and the better of the two is retained for the next generation according to the fitness value. This search method improves population diversity and enhances the algorithm's ability to jump out of local solutions.
Definition 1: Let x ∈ R be a real number with x ∈ [lb, ub]. The opposite number \tilde{x} is defined as follows:
\tilde{x} = lb + ub - x | (4) |
Analogously, the opposite number in a multidimensional case is also defined.
Definition 2: Let X = (x_1, x_2, \cdots, x_D) be a point in a D-dimensional coordinate system with x_1, ..., x_D ∈ R and x_j ∈ [lb_j, ub_j]. The opposite point \tilde{X} = (\tilde{x}_1, \tilde{x}_2, \cdots, \tilde{x}_D) is defined as:
\tilde{x}_j = lb_j + ub_j - x_j, \quad 1 \le j \le D | (5) |
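Definition 2 can be sketched coordinate-wise in Python (an illustrative helper, not part of the paper's MATLAB code):

```python
def opposite_point(x, lb, ub):
    """Opposite point of Eq (5), applied coordinate-wise:
    x_j -> lb_j + ub_j - x_j."""
    return [l + u - xj for xj, l, u in zip(x, lb, ub)]
```

A point at a bound maps to the opposite bound, and the midpoint of the box maps to itself, which is why opposition-based learning explores the "mirror" region of the search space.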
When the opposition-based learning method calculates the opposite point, the two bounds (min and max) of every dimension are taken from two extreme points in the population; the remaining points of the population are not considered, which is a weakness in terms of convergence speed. Centroid Opposition-Based Computing (COBC), proposed by Rahnamayan [34], uses the entire population to generate the opposite points and hence improves convergence speed and solution accuracy.
Let (X_1, ..., X_n) be n points in a D-dimensional search space, with each point carrying a unit mass. The centroid of this body is defined as follows:
M = \frac{X_1 + X_2 + \cdots + X_n}{n} | (6) |
Equivalently, for each dimension,
M_j = \frac{1}{n}\sum_{i=1}^{n} x_{i,j} | (7) |
where x_{i,j} is the jth dimension of the ith point and M_j is the jth-dimension centroid of all n points.
The opposite point \tilde{X}_i of the point X_i is calculated as follows:
\tilde{X}_i = 2M - X_i, \quad \text{i.e.,} \quad \tilde{x}_{i,j} = 2M_j - x_{i,j}, \quad 1 \le j \le D | (8) |
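Eqs (6)–(8) amount to reflecting a point through the population centroid; a minimal Python sketch (illustrative names, assuming a list-of-lists population):

```python
def centroid(population):
    """Centroid M of the population, Eqs (6)/(7): the per-dimension
    mean of all n points."""
    n, d = len(population), len(population[0])
    return [sum(p[j] for p in population) / n for j in range(d)]

def centroid_opposite(point, M):
    """Centroid-based opposite point, Eq (8): reflect the point
    through the population centroid, x -> 2*M - x."""
    return [2.0 * mj - xj for xj, mj in zip(point, M)]
```

Because M is computed from the whole population rather than from two extreme points, the generated opposite points track where the population actually is, which is the convergence-speed advantage COBC claims over plain opposition-based learning.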
At present, most algorithms mutate all dimensions of an individual at once and then select or discard the result by comparing fitness values computed over the individual as a whole. It is difficult to ensure that every dimension of the selected individual is better than the corresponding dimension of the discarded one. Inter-dimensional interference in the fitness calculation therefore masks the information of the improving dimensions and reduces both solution quality and convergence speed.
The dimension-by-dimension update evaluation strategy evaluates the fitness value separately for each dimension, which reduces inter-dimensional interference and avoids low mutation efficiency. Applying this operation to every individual gives a more accurate fitness evaluation in each iteration, but for high-dimensional test functions it greatly increases the time complexity of the algorithm. Testing shows that good results can still be achieved by updating the dimensional information of the optimal individual only, instead of every individual.
This paper combines the dimension-by-dimension update strategy with the centroid opposition-based learning strategy and proposes a Dimension-by-dimension Centroid Opposition-based learning Salp Swarm Algorithm (DCOSSA). The basic steps are as follows. First, calculate the opposite point of the centroid of the current population through Eqs (4)–(8). Then, replace the first dimension of the food source with the first dimension of that opposite point; if the fitness value of the new food source is better than that of the original, retain the opposite-solution information in this dimension, otherwise discard the update for this dimension. Finally, continue updating the food source dimension by dimension, in order, until all dimensions have been updated.
The pseudo-code of the strategy of dimension-by-dimension centroid opposition-based learning is as follows:
Calculate the center position M of the iterative population according to Eq (6)
for j = 1: Dim
Calculate the opposite solution ˜Fj of the food source position according to Eq (8)
If f (˜Fj) is better than f (Fj) then
Fj = ˜Fj
end if
end for
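The pseudo-code above can be sketched as a Python function (an illustrative sketch assuming minimization; names are hypothetical, not the paper's MATLAB code):

```python
def dimensionwise_cobl(F, population, fitness):
    """Dimension-by-dimension centroid opposition-based learning:
    try replacing the food source F one dimension at a time with the
    centroid-opposite value, keeping a change only if it improves
    the fitness (minimization assumed)."""
    n, d = len(population), len(F)
    # Centroid M of the population, Eqs (6)/(7)
    M = [sum(p[j] for p in population) / n for j in range(d)]
    F = list(F)
    best = fitness(F)
    for j in range(d):
        old = F[j]
        F[j] = 2.0 * M[j] - old      # Eq (8), dimension j only
        new = fitness(F)
        if new < best:
            best = new               # keep the improved dimension
        else:
            F[j] = old               # discard this dimension's update
    return F, best
```

By construction the returned fitness is never worse than the input food source's fitness, and only d extra objective evaluations are spent, matching the O(Cof·d) overhead discussed later in the complexity analysis.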
According to the followers' position update Eq (3), a follower's position is the midpoint of its own previous-generation position and that of the adjacent preceding salp. The first follower uses the leader's information in addition to its own from the previous generation, so the whole population of followers is gradually affected by the leader. Eq (1) shows that the leader's position is updated near the food source. Therefore, when the food source has not fallen into a local solution, this end-to-end search mechanism enables the followers to carry out thorough local exploitation; but the gradual transmission of the global optimal information is not conducive to rapid convergence of the algorithm. Moreover, when the food source falls into a local solution, the followers are stuck searching an unproductive region, the population loses diversity, and the algorithm easily falls into local extremes.
The above search mechanism is fixed and lacks dynamic adjustment. Therefore, this paper proposes the DCORSSA algorithm on the basis of DCOSSA: a random factor is added to the followers' update equation to enhance the randomness of the update, so that the population gets more chances to jump out of local optima.
A random factor c_4 between 0 and 1 replaces the constant 1 in Eq (3), giving the new position update equation of the followers:
x_j^i = \frac{c_4}{2}\left(x_j^i + x_j^{i-1}\right) | (9) |
As mentioned in Section 4.2, the one-by-one transfer mechanism of the salp chain makes convergence slow. In this section, we therefore introduce the social learning strategy of PSO on the basis of DCORSSA. Particle Swarm Optimization (PSO) is a practical swarm optimization algorithm proposed by Kennedy and Eberhart [5]. Particles search through an information-sharing mechanism: by exchanging information, each particle combines its own historical experience (the individual best p_i) and the group's experience (the global best p_g) to guide the optimization. The velocity and position update formulas are as follows:
v_j^i(t+1) = \omega v_j^i(t) + c_1 r_1(t)\left[p_j^i(t) - x_j^i(t)\right] + c_2 r_2(t)\left[p_j^g(t) - x_j^i(t)\right] | (10) |
x_j^i(t+1) = x_j^i(t) + v_j^i(t+1) | (11) |
where v_j^i is the velocity of the ith particle in the jth dimension and x_j^i is its position; t is the iteration number; w is the inertia weight, which determines the effect of previous velocities on the current velocity; c_1 is the individual learning factor and c_2 the social learning factor; r_1 and r_2 are random variables used to increase the randomness of particle flight, uniformly distributed in [0, 1]; p_j^i and p_j^g are the jth-dimension elements of the individual and global optimal locations, respectively.
Equation (10) includes three parts: the first is the "inertia" part, the motion inertia of the particle, reflecting its tendency to maintain its previous velocity; the second is the "individual learning" part, reflecting the particle's tendency to move toward its own historical best position; the third is the "social learning" part, reflecting the particle's tendency to move toward the best position found so far by the population.
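Eqs (10) and (11) can be written as a small Python sketch (illustrative names; r_1 and r_2 may be passed explicitly for reproducibility, otherwise fresh uniform randoms are drawn):

```python
import random

def pso_update(x, v, p_best, g_best, w=0.7, c1=1.49, c2=1.49, r1=None, r2=None):
    """PSO update, Eqs (10)-(11): inertia term + individual
    learning term + social learning term, then position update."""
    r1 = random.random() if r1 is None else r1
    r2 = random.random() if r2 is None else r2
    v_new = [w * vj + c1 * r1 * (pj - xj) + c2 * r2 * (gj - xj)
             for vj, xj, pj, gj in zip(v, x, p_best, g_best)]
    x_new = [xj + vj for xj, vj in zip(x, v_new)]
    return x_new, v_new
```

Only the third (social learning) term of this update is carried over into DCORSSA-PSO.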
Finally, on the basis of DCORSSA, this paper further proposes the DCORSSA-PSO algorithm: the social learning term of PSO, the third part of Eq (10), is introduced into the position update equation of the followers of DCORSSA. On top of the increased randomness of each follower's own position, the improved algorithm makes full use of global information and strengthens the tendency of individuals to move toward the food source.
The position update equation of followers of proposed DCORSSA-PSO is as follows:
x_j^i = x_j^i + c_5 \times rand(0,1) \times (F_j - x_j^i) | (12) |
where x_j^i is the position of the ith follower in the jth dimension as updated by Eq (9), F_j is the food source position in the jth dimension, c_4 is a random number between 0 and 1, and c_5 is the social learning factor, taken as c_5 = 1.49.
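The combined follower update of Eqs (9) and (12) can be sketched in Python (an illustrative helper; c_4 and the random number of Eq (12) default to fresh uniform draws but can be fixed for testing):

```python
import random

def dcorssa_pso_follower(x_ij, x_prev_j, F_j, c5=1.49, c4=None, r=None):
    """Follower update of DCORSSA-PSO: Eq (9) adds the random factor
    c4 to the midpoint rule, then Eq (12) pulls the result toward
    the food source F_j via PSO's social-learning term."""
    c4 = random.random() if c4 is None else c4
    r = random.random() if r is None else r
    x = 0.5 * c4 * (x_ij + x_prev_j)      # Eq (9)
    return x + c5 * r * (F_j - x)         # Eq (12)
```

With c_4 = 1 and r = 0 the rule reduces to the original midpoint update of Eq (3), so the basic SSA behavior is a special case of the improved update.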
The flow chart of the algorithm DCORSSA-PSO is shown in Figure 1.
The pseudo-code of DCORSSA-PSO is as follows:
Set the initial parameters of the algorithm: population numbers N, population dimensions Dim, population iteration times T, search upper bound ub and search lower bound lb;
Initialize the population randomly, calculate the fitness value of each individual, and take the position with the optimal fitness value as the food source position F;
for t = 1: T
for i = 1: N
if i <= N/2
Update the position of leader by Eq (1)
else
Update the position of followers by Eqs (9) and (12)
end if
end for
Update the food source position F and its fitness value
Calculate the center position M of the iterative population according to Eq (6)
for j = 1: Dim
Calculate the opposite solution ˜Fj of the food source position according to Eq (8)
If f (˜Fj) is better than f (Fj) then
Fj = ˜Fj
end if
end for
end for
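The full pseudo-code above can be condensed into a compact Python sketch (minimization; the paper's experiments use MATLAB, so this is only an illustrative reimplementation with hypothetical names, scalar bounds for simplicity, and an added clamp to keep salps inside the search range):

```python
import math
import random

def dcorssa_pso(f, lb, ub, dim, n=30, T=200):
    """Illustrative DCORSSA-PSO loop: leaders follow Eq (1),
    followers follow Eqs (9) + (12) with c5 = 1.49, and the food
    source is refined by dimension-by-dimension centroid
    opposition-based learning each iteration."""
    pop = [[lb + random.random() * (ub - lb) for _ in range(dim)]
           for _ in range(n)]
    fit = [f(x) for x in pop]
    best_i = min(range(n), key=lambda i: fit[i])
    F, F_fit = list(pop[best_i]), fit[best_i]
    for t in range(1, T + 1):
        c1 = 2.0 * math.exp(-(4.0 * t / T) ** 2)        # Eq (2)
        for i in range(n):
            if i < n // 2:                               # leaders, Eq (1)
                for j in range(dim):
                    c2, c3 = random.random(), random.random()
                    step = c1 * ((ub - lb) * c2 + lb)
                    pop[i][j] = F[j] + step if c3 < 0.5 else F[j] - step
            else:                                        # followers, Eqs (9) + (12)
                c4 = random.random()
                for j in range(dim):
                    x = 0.5 * c4 * (pop[i][j] + pop[i - 1][j])
                    pop[i][j] = x + 1.49 * random.random() * (F[j] - x)
            pop[i] = [min(max(v, lb), ub) for v in pop[i]]  # clamp to bounds
            fit[i] = f(pop[i])
            if fit[i] < F_fit:                           # update food source
                F, F_fit = list(pop[i]), fit[i]
        # dimension-by-dimension centroid opposition-based learning
        M = [sum(p[j] for p in pop) / n for j in range(dim)]
        for j in range(dim):
            old = F[j]
            F[j] = 2.0 * M[j] - old                      # Eq (8), one dimension
            new = f(F)
            if new < F_fit:
                F_fit = new
            else:
                F[j] = old
    return F, F_fit
```

For example, on the sphere function f1 in a few dimensions this sketch converges to a near-zero objective value within a couple hundred iterations.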
According to the literature [12], the computational complexity of the SSA algorithm is O(t(d∗n + Cof∗n)), where t is the number of iterations, d the number of variables (dimension), n the number of solutions, and Cof the cost of the objective function. In DCORSSA-PSO, the fitness function is recalculated for each dimension of the food source in the dimension-by-dimension centroid opposition-based learning step, which adds O(Cof∗d) operations. The random factor and PSO's social learning strategy, introduced to increase the search vitality and speed of the algorithm, do not increase the number of code executions. The computational complexity of DCORSSA-PSO is therefore O(t(d∗n + Cof∗n + Cof∗d)), higher than that of standard SSA, and it increases with the population dimension.
Test environment: an Intel Core i5-7200U CPU with 12 GB of memory, running Windows 10 and MATLAB R2019b. The common parameters of the different algorithms are set identically: population size N = 30, population dimension Dim = 30, and maximum number of iterations T = 500. The remaining parameter settings for the algorithms involved are shown in Table 1.
Algorithm | Parameters |
SSA | c1∈[2 × e-16, 2 × e]; c2∈[0, 1]; c3∈[0, 1]; |
DCOSSA | c1∈[2 × e-16, 2 × e]; c2∈[0, 1]; c3∈[0, 1]; |
DCORSSA | c1∈[2 × e-16, 2 × e]; c2∈[0, 1]; c3∈[0, 1]; c4∈[0, 1]; |
DCORSSA-PSO | c1∈[2 × e-16, 2 × e]; c2∈[0, 1]; c3∈[0, 1]; c4∈[0, 1]; c5 = 1.49; |
PSO | w∈[0.4, 0.9]; c1 = c2 = 1.49; |
GWO | r1∈[0, 1]; r2∈[0, 1] |
Ten classical benchmark functions, shown in Table 2, are used to evaluate the performance of DCORSSA-PSO, with SSA, PSO, DCOSSA, and GWO as comparison algorithms. Among the ten functions, f1~f4 are unimodal test functions that test the optimization accuracy of the algorithms, f5~f8 are multimodal test functions that test global optimization ability and convergence speed, and f9~f10 are ill-conditioned test functions that test exploration and exploitation capabilities.
Benchmark function | Range | fmin |
f_1(x)=\sum_{i=1}^{n}x_i^2 | [-100,100] | 0
f_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i| | [-10, 10] | 0
f_3(x)=\max_{1\le i\le n}\{|x_i|\} | [-100,100] | 0
f_4(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2 | [-100,100] | 0
f_5(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right] | [-5.12, 5.12] | 0
f_6(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\frac{x_i}{\sqrt{i}}+1 | [-600,600] | 0
f_7(x)=1-\cos\left(2\pi\sqrt{\sum_{i=1}^{n}x_i^2}\right)+0.1\sqrt{\sum_{i=1}^{n}x_i^2} | [-32, 32] | 0
f_8(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e | [-100,100] | 0
f_9(x)=\sum_{i=1}^{n}ix_i^4+rand | [-1.28, 1.28] | 0
f_{10}(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right] | [-30, 30] | 0
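For reference, a few of the benchmark functions in Table 2 can be written directly in Python (illustrative function names; each has its global minimum 0 at the origin):

```python
import math

def f1_sphere(x):
    """f1: sum of squares."""
    return sum(v * v for v in x)

def f5_rastrigin(x):
    """f5: Rastrigin function."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def f8_ackley(x):
    """f8: Ackley function."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return (-20.0 * math.exp(-0.2 * math.sqrt(s1))
            - math.exp(s2) + 20.0 + math.e)
```

Evaluating each at the origin recovers the fmin column of Table 2, which is a quick sanity check when reimplementing the test suite.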
To objectively test the optimization performance of DCORSSA-PSO, the same initial population is used for all algorithms, and the average fitness value, optimal fitness value, standard deviation of the fitness value, and average running time over 30 independent runs are recorded to evaluate each algorithm comprehensively. The comparison results are shown in Table 3.
Functions | Measure | SSA | DCOSSA | DCORSSA | DCORSSA-PSO | PSO | GWO |
f1 | Mean | 2.40×10-7 | 2.28×10-9 | 2.70×10-27 | 5.80×10-44* | 2.08×102 | 1.42×10-27 |
Best | 3.70×10-8 | 9.29×10-12 | 6.45×10-32 | 1.42×10-46* | 5.82×101 | 3.07×10-29 | |
Std | 4.09×10-7 | 5.16×10-9 | 5.86×10-27 | 1.17×10-43* | 1.08×102 | 1.82×10-27 | |
Time/s | 1.01×10-1* | 1.99×10-1 | 2.01×10-1 | 2.11×10-1 | 2.71×10-1 | 1.70×10-1 | |
f2 | Mean | 1.96 | 1.36×10-5 | 1.00×10-14 | 7.80×10-23* | 6.30 | 7.96×10-17 |
Best | 1.71×10-1 | 1.03×10-6 | 6.24×10-16 | 2.12×10-24* | 3.57 | 4.54×10-18 | |
Std | 1.46 | 3.95×10-5 | 1.18×10-14 | 9.73×10-23* | 1.75 | 4.69×10-17 | |
Time/s | 9.37×10-2* | 1.83×10-1 | 1.80×10-1 | 1.93×10-1 | 2.41×10-1 | 1.47×10-1 | |
f3 | Mean | 1.20×101 | 1.08 | 9.05×10-15 | 7.16×10-23* | 1.24×101 | 8.01×10-7 |
Best | 5.19 | 1.36×10-1 | 6.79×10-17 | 1.98×10-24* | 4.76 | 1.16×10-7 | |
Std | 3.35 | 6.96×10-1 | 1.29×10-14 | 8.06×10-23* | 3.19 | 7.47×10-7 | |
Time/s | 9.07×10-2* | 1.75×10-1 | 1.72×10-1 | 1.85×10-1 | 2.36×10-1 | 1.41×10-1 | |
f4 | Mean | 1.08×10-7 | 9.70×10-10* | 3.39×10-7 | 6.50×10-8 | 1.67×102 | 7.97×10-1 |
Best | 2.54×10-8 | 9.62×10-12* | 2.56×10-8 | 2.41×10-8 | 5.07×101 | 2.57×10-1 | |
Std | 8.12×10-8 | 2.23×10-9* | 8.51×10-7 | 2.66×10-8 | 8.92×101 | 2.57×10-1 | |
Time/s | 9.29×10-2* | 1.79×10-1 | 1.76×10-1 | 1.91×10-1 | 2.42×10-1 | 1.44×10-1 | |
f5 | Mean | 6.16×101 | 4.64×10-1 | 0* | 0* | 1.31×102 | 3.42 |
Best | 2.79×101 | 2.80×10-12 | 0* | 0* | 8.73×101 | 0* | |
Std | 1.80×101 | 5.68×10-1 | 0* | 0* | 2.41×101 | 4.36 | |
Time/s | 1.05×10-1* | 1.98×10-1 | 1.92×10-1 | 2.00×10-1 | 2.59×10-1 | 1.52×10-1 | |
f6 | Mean | 9.54×10-1 | 1.00×10-1 | 0* | 0* | 2.65 | 4.12×10-13 |
Best | 8.49×10-1 | 4.44×10-2 | 0* | 0* | 1.57 | 5.00×10-15 | |
Std | 4.30×10-2 | 3.49×10-2 | 0* | 0* | 1.04 | 4.65×10-13 | |
Time/s | 1.22×10-1* | 2.42×10-1 | 2.36×10-1 | 2.46×10-1 | 2.77×10-1 | 1.70×10-1 | |
f7 | Mean | 1.96 | 1.04 | 2.96×10-15 | 2.25×10-23* | 2.75 | 1.83×10-1 |
Best | 1.00 | 7.00×10-1 | 1.22×10-16 | 1.86×10-25* | 1.80 | 9.99×10-2 | |
Std | 4.18×10-1 | 2.40×10-1 | 2.99×10-15 | 2.90×10-23* | 4.88×10-1 | 3.79×10-2 | |
Time/s | 9.56×10-2* | 1.94×10-1 | 1.91×10-1 | 2.00×10-1 | 2.46×10-1 | 1.51×10-1 | |
f8 | Mean | 2.46 | 7.58×10-6 | 8.23×10-15 | 8.88×10-16* | 5.84 | 9.80×10-14 |
Best | 9.31×10-1 | 7.50×10-7 | 8.88×10-16* | 8.88×10-16* | 4.45 | 7.55×10-14 | |
Std | 7.42×10-1 | 1.01×10-5 | 1.23×10-14 | 0* | 8.80×10-1 | 1.49×10-14 | |
Time/s | 1.08×10-1* | 2.04×10-1 | 1.98×10-1 | 2.10×10-1 | 2.62×10-1 | 1.52×10-1 | |
f9 | Mean | 1.51 | 5.48×10-2 | 6.72×10-5* | 7.24×10-4 | 1.22 | 1.72×10-2 |
Best | 5.85×10-1 | 1.73×10-2 | 1.32×10-7* | 7.12×10-6 | 3.09×10-2 | 8.50×10-4 | |
Std | 6.72×10-1 | 1.91×10-2 | 6.83×10-5* | 1.55×10-3 | 1.23 | 1.55×10-2 | |
Time/s | 1.59×10-1* | 3.16×10-1 | 3.18×10-1 | 3.27×10-1 | 3.06×10-1 | 2.09×10-1 | |
f10 | Mean | 2.77×102 | 5.65×101 | 2.79×101 | 2.64×101* | 5.39×103 | 2.69×101 |
Best | 2.49×101* | 3.96×10-2 | 2.74×101 | 2.62×101 | 5.89×102 | 2.57×101 | |
Std | 4.33×102 | 4.31×101 | 2.05×10-1 | 1.33×10-1* | 5.25×103 | 7.83×10-1 | |
Time/s | 1.24×10-1* | 2.47×10-1 | 2.47×10-1 | 2.60×10-1 | 2.81×10-1 | 1.78×10-1 | |
Note: the mark "*" at the top right of the data indicates the best result obtained by all algorithms. |
From the experimental results of the 30 independent runs in Table 3, the optimization performance of the algorithms on the standard test functions differs. The optimal and average values measure the accuracy of an optimization algorithm. On the multimodal functions f5 and f6, DCORSSA-PSO and DCORSSA find the theoretical optimum 0, showing excellent optimization ability. On the test functions f1~f3 and f7, DCORSSA-PSO is superior to the other algorithms in both average and optimal value. On f8 and f10, DCORSSA-PSO is superior to the other algorithms in average value. On the unimodal function f1, the accuracy of DCORSSA-PSO's optimal value is more than 10 orders of magnitude higher than that of the other algorithms. Compared with SSA, the convergence accuracy of DCORSSA and DCORSSA-PSO is also greatly improved, which indicates that adding the different optimization strategies is very helpful for improving SSA. At the same time, on the ill-conditioned function f9, the optimal value found by DCORSSA-PSO reaches 7.12 × 10-6; although this is an improvement over SSA, it is worse than DCORSSA. On the ill-conditioned function f10, DCORSSA-PSO improves the mean accuracy over SSA, but its optimal value is still inferior to SSA's. On the unimodal function f4, DCORSSA-PSO is inferior to DCOSSA in both average and optimal value. The cases in which DCORSSA-PSO does not achieve the best performance indicate that the algorithm is still deficient in searching some functions.
The standard deviation measures the optimization stability of an algorithm. Except for f4 and f9, the standard deviation of DCORSSA-PSO over 30 independent runs is always smaller than that of the other algorithms, which shows that the improved DCORSSA-PSO can maintain optimization stability when dealing with unimodal, multimodal, and even ill-conditioned functions.
In terms of average running time, the SSA algorithm runs faster than the PSO and GWO algorithms, which shows that the improved DCOSSA, DCORSSA, and DCORSSA-PSO have an inherent advantage in operation speed. The average running time of DCORSSA-PSO is only slightly longer than that of DCOSSA, indicating that the addition of the random factor and PSO's social learning strategy has little impact on the time complexity of the algorithm. The average running time of DCORSSA-PSO and DCORSSA is longer than that of SSA, mainly due to the added dimension-by-dimension centroid opposition-based learning strategy.
The Wilcoxon signed-rank test [35] with a significance level of 0.05 was used to judge the statistical difference between the improved DCORSSA-PSO and the comparison algorithms such as SSA. The statistical results are shown in Table 4, in which "+" indicates that the test result of DCORSSA-PSO is superior to the corresponding comparison algorithm, "=" indicates that the two results are statistically similar, with no significant difference, and "-" indicates that the DCORSSA-PSO result is inferior to that of the corresponding comparison algorithm.
Comparison group | +/=/- | Comparison group | +/=/- |
DCORSSA-PSO VS SSA | 9/1/0 | DCORSSA-PSO VS PSO | 10/0/0 |
DCORSSA-PSO VS DCOSSA | 9/0/1 | DCORSSA-PSO VS GWO | 10/0/0 |
DCORSSA-PSO VS DCORSSA | 7/2/1 | --- |
According to the Wilcoxon signed-rank test results in Table 4, DCORSSA-PSO wins in 45 (= 9 + 9 + 7 + 10 + 10) cases, loses in 2 cases, and ties in the remaining 3 cases out of the total 50 (= 5 × 10) cases. In general, DCORSSA-PSO is better than the other algorithms, such as SSA, on most functions, which proves the effectiveness of the proposed improvements.
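As a sketch, the per-function "+/=/-" marks of Table 4 can be reproduced with SciPy's implementation of the test; the two 30-run result vectors below are illustrative placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 30-run fitness results for two algorithms on one test function
# (minimization, lower is better); values are placeholders, not the paper's data.
rng = np.random.default_rng(0)
dcorssa_pso = rng.normal(1e-3, 1e-4, 30)
ssa = rng.normal(5e-3, 1e-3, 30)

stat, p = wilcoxon(dcorssa_pso, ssa)
if p >= 0.05:
    mark = "="                               # no statistically significant difference
elif np.median(dcorssa_pso) < np.median(ssa):
    mark = "+"                               # DCORSSA-PSO significantly better
else:
    mark = "-"                               # DCORSSA-PSO significantly worse
```

With such well-separated samples the paired test rejects the null hypothesis and the mark comes out as "+".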
In addition, in order to further compare the optimization performance of the algorithms statistically, the Friedman test [36] is used to study the differences between them, as shown in Table 5. The average ranking value (ARV) of an algorithm is its Friedman-test ranking averaged over 30 independent runs of all test functions. The smaller the ARV, the better the optimization performance of the algorithm.
Algorithm | SSA | DCOSSA | DCORSSA | DCORSSA-PSO | PSO | GWO |
ARV | 4.8533 | 3.5700 | 2.2567 | 1.4300 | 5.8700 | 3.0200 |
rank | 5 | 4 | 2 | 1 | 6 | 3 |
From Table 5, we can clearly see the statistical results of the Friedman test. The ARV of DCORSSA-PSO, which integrates the three learning strategies, is 1.4300 and its rank is No.1, indicating that DCORSSA-PSO is significantly better than the other comparison algorithms on these test functions. In addition, the ranks of DCORSSA and DCOSSA, which combine the other strategies, are No.2 and No.4 respectively, indicating that the above optimization strategies are of great help in improving the optimization accuracy of the SSA algorithm.
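Assuming SciPy is available, the ARV computation can be sketched as follows; the result matrix is an illustrative placeholder (shortened to four runs and four algorithms for brevity), with rows standing for independent runs and columns for algorithms.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical fitness matrix: rows = independent runs, columns = algorithms;
# lower is better (minimization). Values are placeholders, not the paper's data.
results = np.array([
    [3.0, 2.0, 1.0, 0.5],
    [4.0, 2.5, 1.5, 0.7],
    [3.5, 2.2, 1.2, 0.6],
    [3.8, 2.8, 1.8, 0.4],
])

# Average ranking value (ARV): rank each run, then average down the columns.
arv = rankdata(results, axis=1).mean(axis=0)

# Friedman test over the per-run rankings: each argument is one algorithm's
# column of results across the runs.
stat, p = friedmanchisquare(*results.T)
```

Here the last column always ranks first, so its ARV is 1.0 and the test reports a significant difference between the algorithms.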
Figure 2 shows the average convergence curve of each algorithm in 10 standard test functions. To better observe the optimization effect of the algorithm, the logarithm based on 10 is taken for the optimization fitness values of f1~f10.
In Figure 2, it can be seen that the optimization speed and accuracy of DCOSSA, with the added dimension-by-dimension centroid opposition-based learning strategy, are greatly improved compared with SSA, which shows that this strategy is of great benefit in improving population diversity and the ability to jump out of local solutions. Compared with DCOSSA and the other three algorithms, DCORSSA-PSO, which adds a random factor and integrates the social learning strategy of PSO, declines rapidly in the middle of the iterations, and its optimization speed is significantly ahead. Especially in the early and middle iterations, DCORSSA-PSO can almost immediately search out the optimal value, and it continues to show high search activity in the later iterations. In the multimodal functions f5 and f6, the curves are interrupted because DCORSSA-PSO reaches the theoretical optimal value of 0 (the argument of lg cannot be 0).
All the above show that the DCORSSA-PSO algorithm is effective in dealing with unimodal, multimodal, and ill-conditioned test functions, and it has better optimization accuracy and speed, which is very helpful to solve the problems to be optimized in engineering practice.
Nowadays, more and more engineering problems adopt optimization methods to obtain optimal performance [37], and system reliability optimization is one of the most useful application fields. System reliability optimization refers to finding an optimal design under certain resource constraints that either maximizes system reliability or minimizes the investment while meeting specific reliability requirements, thereby obtaining the maximum economic benefit. Practice shows that optimal redundancy allocation is one of the most commonly used methods to reduce system failure probability and improve system reliability. Redundancy design means that when a part of the system fails, a redundant part is activated through a monitoring and switching mechanism to take over the same function, thus reducing the failure probability of the system.
Many scholars have used intelligent optimization algorithms to solve reliability optimization problems. In [38], an enhanced cuckoo optimization algorithm was used to study system reliability redundancy allocation with a cold-standby strategy. In [39], reliability optimization of a fuzzy multi-objective system was carried out based on a genetic algorithm and cluster analysis. In [40], a new particle swarm optimization algorithm based on fuzzy adaptive inertia weight was proposed to solve the reliability redundancy allocation problem.
Fault tree analysis is one of the most commonly used reliability analysis methods; it is oriented by system failure and unit failure. A fault tree is composed of events and gates. It is so named because its fault logic is represented graphically like a tree, with the top event as the root, the event logic causality represented by gates as the branches, and the bottom events as the leaves. The T-S model [41] was proposed by Takagi and Sugeno in 1985; through if-then rules, a series of local linear subsystems and membership functions are used to accurately describe nonlinear systems. Song et al. [42] constructed T-S gates to describe event relations based on the T-S model, proposing the T-S fault tree analysis method. Yao et al. [43] proposed a new reliability optimization method based on the T-S fault tree and EPSO (extended PSO).
Suppose a mechanical system consists of two subsystems, each of which can improve system reliability through redundant design. In this paper, the T-S fault tree analysis method is used to construct the reliability allocation optimization model of the system, and the DCORSSA-PSO algorithm is used to optimize its reliability allocation. The T-S fault tree of the mechanical system is shown in Figure 3.
In Figure 3, x1~x5 are the bottom events, y1~y2 are the intermediate events, and y3 is the top event. G1~G3 are T-S gates. Fuzzy numbers 0, 0.5 and 1 represent the three states of normal, semi failure, and complete failure of each part, respectively. The fault states of each part are independent of each other. According to expert experience and historical data, the rule tables of the T-S gate are defined as shown in Tables 6–8.
rules | x1 | x2 | x3 | y1 = 0 | y1 = 0.5 | y1 = 1 | rules | x1 | x2 | x3 | y1 = 0 | y1 = 0.5 | y1 = 1 |
1 | 0 | 0 | 0 | 1 | 0 | 0 | 15 | 0.5 | 0.5 | 1 | 0 | 0 | 1 |
2 | 0 | 0 | 0.5 | 0.2 | 0.5 | 0.3 | 16 | 0.5 | 1 | 0 | 0 | 0 | 1 |
3 | 0 | 0 | 1 | 0 | 0 | 1 | 17 | 0.5 | 1 | 0.5 | 0 | 0 | 1 |
4 | 0 | 0.5 | 0 | 0.3 | 0.5 | 0.2 | 18 | 0.5 | 1 | 1 | 0 | 0 | 1 |
5 | 0 | 0.5 | 0.5 | 0.2 | 0.3 | 0.5 | 19 | 1 | 0 | 0 | 0 | 0 | 1 |
6 | 0 | 0.5 | 1 | 0 | 0 | 1 | 20 | 1 | 0 | 0.5 | 0 | 0 | 1 |
7 | 0 | 1 | 0 | 0 | 0 | 1 | 21 | 1 | 0 | 1 | 0 | 0 | 1 |
8 | 0 | 1 | 0.5 | 0 | 0 | 1 | 22 | 1 | 0.5 | 0 | 0 | 0 | 1 |
9 | 0 | 1 | 1 | 0 | 0 | 1 | 23 | 1 | 0.5 | 0.5 | 0 | 0 | 1 |
10 | 0.5 | 0 | 0 | 0.2 | 0.5 | 0.3 | 24 | 1 | 0.5 | 1 | 0 | 0 | 1 |
11 | 0.5 | 0 | 0.5 | 0.1 | 0.4 | 0.5 | 25 | 1 | 1 | 0 | 0 | 0 | 1 |
12 | 0.5 | 0 | 1 | 0 | 0 | 1 | 26 | 1 | 1 | 0.5 | 0 | 0 | 1 |
13 | 0.5 | 0.5 | 0 | 0.1 | 0.5 | 0.4 | 27 | 1 | 1 | 1 | 0 | 0 | 1 |
14 | 0.5 | 0.5 | 0.5 | 0.1 | 0.4 | 0.5 | - | - | - | - | - | - | - |
rules | x4 | x5 | y2 = 0 | y2 = 0.5 | y2 = 1 | rules | x4 | x5 | y2 = 0 | y2 = 0.5 | y2 = 1 |
1 | 0 | 0 | 1 | 0 | 0 | 6 | 0.5 | 1 | 0 | 0 | 1 |
2 | 0 | 0.5 | 0.4 | 0.4 | 0.2 | 7 | 1 | 0 | 0.1 | 0.2 | 0.7 |
3 | 0 | 1 | 0.1 | 0.1 | 0.8 | 8 | 1 | 0.5 | 0 | 0 | 1 |
4 | 0.5 | 0 | 0.8 | 0.1 | 0.1 | 9 | 1 | 1 | 0 | 0 | 1 |
5 | 0.5 | 0.5 | 0.1 | 0.5 | 0.4 | - | - | - | - | - | - |
rules | y1 | y2 | y3 = 0 | y3 = 0.5 | y3 = 1 | rules | y1 | y2 | y3 = 0 | y3 = 0.5 | y3 = 1 |
1 | 0 | 0 | 1 | 0 | 0 | 6 | 0.5 | 1 | 0 | 0 | 1 |
2 | 0 | 0.5 | 0.4 | 0.5 | 0.1 | 7 | 1 | 0 | 0.1 | 0.2 | 0.7 |
3 | 0 | 1 | 0.1 | 0.1 | 0.8 | 8 | 1 | 0.5 | 0 | 0 | 1 |
4 | 0.5 | 0 | 0.8 | 0.1 | 0.1 | 9 | 1 | 1 | 0 | 0 | 1 |
5 | 0.5 | 0.5 | 0.1 | 0.5 | 0.4 | - | - | - | - | - | - |
Taking Table 6 as an example, each row of rules 1~27 represents a G1 gate rule. For example, in rule 1, the fault states of the bottom events x1, x2, and x3 are 0, 0, and 0 respectively; then the probability that y1 is in fault state 0 is P1(y1 = 0) = 1, the probability of fault state 0.5 is P1(y1 = 0.5) = 0, and the probability of fault state 1 is P1(y1 = 1) = 0. Under any given rule, the probabilities of the fault states of the superior event y1 sum to 1, that is, P1(y1 = 0) + P1(y1 = 0.5) + P1(y1 = 1) = 1.
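The rule-based computation described above can be sketched in Python using the G2 gate of Table 7; the bottom-event state probabilities below are illustrative values, not taken from the paper.

```python
# Assumed inputs: marginal state probabilities of the bottom events x4 and x5
# over the fuzzy states (0, 0.5, 1); the numbers are illustrative.
P_x4 = {0: 0.90, 0.5: 0.06, 1: 0.04}
P_x5 = {0: 0.92, 0.5: 0.05, 1: 0.03}

# G2 rules from Table 7: (state of x4, state of x5) -> (P(y2=0), P(y2=0.5), P(y2=1))
rules_G2 = {
    (0, 0):     (1.0, 0.0, 0.0),
    (0, 0.5):   (0.4, 0.4, 0.2),
    (0, 1):     (0.1, 0.1, 0.8),
    (0.5, 0):   (0.8, 0.1, 0.1),
    (0.5, 0.5): (0.1, 0.5, 0.4),
    (0.5, 1):   (0.0, 0.0, 1.0),
    (1, 0):     (0.1, 0.2, 0.7),
    (1, 0.5):   (0.0, 0.0, 1.0),
    (1, 1):     (0.0, 0.0, 1.0),
}

# P(y2 = s) = sum over rules of (rule execution degree) * P_l(y2 = s),
# where the execution degree P_l^0 is the product of the input-state
# probabilities (bottom events are assumed independent).
P_y2 = [0.0, 0.0, 0.0]
for (s4, s5), out in rules_G2.items():
    degree = P_x4[s4] * P_x5[s5]
    for k in range(3):
        P_y2[k] += degree * out[k]

# Since each rule's outputs sum to 1 and the execution degrees sum to 1,
# the state probabilities of y2 also sum to 1.
assert abs(sum(P_y2) - 1.0) < 1e-12
```

The resulting distribution of y2 can then feed the G3 gate of Table 8 in the same way, propagating probabilities up to the top event.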
According to the T-S fault tree and the corresponding rule gate of the system, a system reliability optimization model is constructed with the lowest system fault probability as the objective function and the overall cost of the system as the constraint. Among them, the system cost is the sum of the expenses of each component unit, its connectors, and switching equipment. The unit cost increases nonlinearly with the improvement of its reliability. The objective function and cost constraint expression are as follows:
$\min\;\sum_{q=2}^{3}P(T=T_q)=\sum_{l=1}^{9}P_l^{0}\,P_l(T=0.5)+\sum_{l=1}^{9}P_l^{0}\,P_l(T=1)$ | (13) |
$\text{s.t.}\;\sum_{i=1}^{5}\alpha_i\left\{\dfrac{-\mu}{\ln\!\left[1-\sum_{a_i=2}^{3}P\!\left(x_i^{a_i}\right)\right]}\right\}^{\beta_i}\times\left[n_i+\exp\!\left(\dfrac{n_i}{4}\right)\right]\leqslant C_0$ | (14) |
where P(T = Tq) is the probability of the top event T being in fault state Tq; Pl(T = 0.5) and Pl(T = 1) respectively denote the probabilities that the fault state of the top event T is 0.5 and 1 under rule l; Pl0 is the execution degree of T-S rule l; P(xi^ai) is the probability of bottom event xi being in fault state ai; ni is the redundancy number of unit i; μ is the fault-free operation time, taken as μ = 1000 h; C0 is the system cost constraint value, taken as C0 = 175. The values of αi and βi are given in Table 9.
i | 1 | 2 | 3 | 4 | 5 |
αi | 2.540 × 10-5 | 2.483 × 10-5 | 6.420 × 10-5 | 7.160 × 10-5 | 2.417 × 10-5 |
βi | 1.500 | 1.500 | 1.500 | 1.500 | 1.500 |
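Under the stated assumptions, the cost constraint of Eq (14) can be sketched as follows. Reading the inner bracket as −μ divided by the log of the unit reliability follows the classic redundancy-allocation cost model; the αi and βi values come from Table 9, while the trial failure probabilities in the test are illustrative.

```python
import math

# Coefficients from Table 9; mu and C0 as stated in the text.
ALPHA = [2.540e-5, 2.483e-5, 6.420e-5, 7.160e-5, 2.417e-5]
BETA = [1.5] * 5
MU, C0 = 1000.0, 175.0

def system_cost(P_fail, n):
    """Left-hand side of Eq (14). P_fail[i] is the total failure probability of
    bottom event x_i (fault states 0.5 and 1 combined); n[i] is its redundancy
    number. The unit cost grows nonlinearly as reliability improves, and the
    bracket n_i + exp(n_i/4) accounts for connectors and switching equipment."""
    cost = 0.0
    for a, b, p, ni in zip(ALPHA, BETA, P_fail, n):
        unit = a * (-MU / math.log(1.0 - p)) ** b
        cost += unit * (ni + math.exp(ni / 4.0))
    return cost

def feasible(P_fail, n):
    """Cost constraint check of Eq (14)."""
    return system_cost(P_fail, n) <= C0
```

Lowering the failure probabilities raises the cost, so a design can become infeasible purely by demanding more reliable units.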
The penalty function method is one of the main constraint-handling techniques available at present; its core idea is to transform the original constrained optimization problem into an unconstrained one by constructing auxiliary functions. In this paper, the cost constraint in the system reliability optimization model is handled by introducing a penalty function: a penalty factor is added to the fitness value of any salp that does not satisfy the cost constraint, so that infeasible solutions are eliminated in the course of evolution. The maximum system fault probability N (N = 1) is used as the penalty factor, and the failure-probability fitness function is constructed as follows:
$\text{fitness}=\begin{cases}\sum_{q=2}^{3}P(T=T_q), & C\leqslant C_0\\ \sum_{q=2}^{3}P(T=T_q)+N, & \text{otherwise}\end{cases}$ | (15) |
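Eq (15) translates directly into a small helper; the argument names below are hypothetical.

```python
def fitness(P_top_half, P_top_full, cost, C0=175.0, N=1.0):
    """Penalized objective of Eq (15): the top-event failure probability
    (fault states 0.5 and 1 combined), plus the penalty factor N whenever
    the cost constraint is violated."""
    f = P_top_half + P_top_full
    return f if cost <= C0 else f + N
```

An infeasible salp thus scores at least N = 1, far above any feasible failure probability, and is driven out of the population during evolution.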
The DCORSSA-PSO algorithm is compared with the SSA, PSO, and GWO algorithms, with the maximum number of iterations of the four algorithms set to T = 500. The reliability optimization results are shown in Table 10.
Optimized parameters | SSA | DCORSSA-PSO | PSO | GWO | ||||
P(xi) | ni | P(xi) | ni | P(xi) | ni | P(xi) | ni | |
x1 | 1.28 × 10-1 | 3 | 1.09 × 10-1 | 3 | 1.58 × 10-1 | 3 | 1.37 × 10-1 | 3 |
x2 | 9.22 × 10-2 | 3 | 1.08 × 10-1 | 3 | 1.58 × 10-1 | 3 | 1.68 × 10-1 | 3 |
x3 | 1.38 × 10-1 | 3 | 9.00 × 10-2 | 3 | 1.07 × 10-1 | 3 | 1.88 × 10-1 | 3 |
x4 | 1.26 × 10-1 | 3 | 1.41 × 10-1 | 3 | 1.70 × 10-1 | 3 | 1.57 × 10-1 | 3 |
x5 | 1.16 × 10-1 | 3 | 9.66 × 10-2 | 3 | 1.11 × 10-1 | 3 | 1.59 × 10-1 | 3 |
P | 5.03 × 10-3 | 3.71 × 10-3* | 8.03 × 10-3 | 1.22 × 10-2 | ||||
C | 175.00 | 175.00 | 115.76 | 105.89* | ||||
Time/s | 4.91 × 10-1* | 6.05 × 10-1 | 8.17 × 10-1 | 5.09 × 10-1 |
According to the optimization results in Table 10, when the minimum system failure probability is taken as the objective function, the failure probability optimized by DCORSSA-PSO is lower than that of the other algorithms, including SSA, PSO, and GWO, which demonstrates the feasibility and superiority of the improved algorithm. In terms of running time, SSA is the fastest and DCORSSA-PSO the slowest, indicating that the multi-strategy improvement of DCORSSA-PSO costs more time; however, the running time of DCORSSA-PSO remains under one second, which meets the needs of practical engineering.
In addition, in order to more intuitively show the reliability optimization process of the four algorithms, the iterative curve is shown in Figure 4.
To test the stability of the DCORSSA-PSO algorithm, all the algorithms were run 30 times under the same initial conditions. Table 11 shows the statistical results of the failure probability.
Measure | SSA | DCORSSA-PSO | PSO | GWO |
Mean | 6.956 × 10-3 | 3.746 × 10-3* | 5.196 × 10-3 | 8.304 × 10-3 |
Best | 3.710 × 10-3 | 3.709 × 10-3* | 4.387 × 10-3 | 4.404 × 10-3 |
Std | 3.703 × 10-3 | 3.886 × 10-5* | 8.625 × 10-4 | 2.518 × 10-3 |
Time/s | 5.466 × 10-1* | 6.795 × 10-1 | 8.975 × 10-1 | 5.617 × 10-1 |
From Table 11, we find that, in a statistical sense, DCORSSA-PSO still obtains the best failure-probability results compared with SSA, PSO, and GWO, including the mean value, best value, and standard deviation, though not the running time. The average failure probability obtained by DCORSSA-PSO is 46.14% lower than that of SSA, which shows that DCORSSA-PSO greatly improves the optimization performance of SSA by integrating the multi-strategy improvements.
In this paper, a box plot is used to analyze the data distribution of the system failure probability. A box plot consists of five parts: the upper limit, upper quartile, median, lower quartile, and lower limit. The upper limit is connected to the upper quartile by a dashed line, and likewise for the lower limit and the lower quartile; the center mark indicates the median. Figure 5 shows the box plots of the different algorithms.
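The five box-plot parts described above can be computed directly; the sample below is an illustrative set of failure probabilities (not the paper's data), with the whisker limits and outliers determined by the usual 1.5 × IQR rule that box plots use.

```python
import numpy as np

# Illustrative failure probabilities from 10 hypothetical runs; the last value
# is deliberately far from the rest so that it shows up as an outlier.
p = np.array([3.7, 3.7, 3.8, 3.8, 3.9, 3.9, 4.0, 4.1, 4.2, 9.0]) * 1e-3

q1, med, q3 = np.percentile(p, [25, 50, 75])   # lower quartile, median, upper quartile
iqr = q3 - q1

# Whisker limits: the most extreme data points within 1.5*IQR of the box.
upper = p[p <= q3 + 1.5 * iqr].max()
lower = p[p >= q1 - 1.5 * iqr].min()

# Points beyond the whiskers are drawn as outliers ("+" marks in Figure 5).
outliers = p[(p > q3 + 1.5 * iqr) | (p < q1 - 1.5 * iqr)]
```

A compact box (small q3 − q1) with a low median is exactly the pattern the text attributes to DCORSSA-PSO, while an isolated point like 9.0 × 10-3 here appears as a "+" outlier.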
In the statistical results of Figure 5, the y-axis P denotes the failure probability. DCORSSA-PSO has the lowest median P value, which means that the reliability calculated by DCORSSA-PSO is the highest. In addition, the box obtained by DCORSSA-PSO is very compact, that is, the range between its upper and lower quartiles is the smallest, indicating that DCORSSA-PSO has less volatility than the other algorithms. Therefore, DCORSSA-PSO outperforms the other algorithms. On the other hand, outliers (+) appear in the failure probabilities optimized by SSA, DCORSSA-PSO, and PSO, indicating that further research on performance improvement is still needed for DCORSSA-PSO.
This paper proposes the DCORSSA-PSO algorithm, which hybridizes a dimension-by-dimension centroid opposition-based learning strategy, a random factor, and PSO's social learning strategy on the basis of the standard SSA. The improved algorithm modifies the standard SSA in three ways: a) a dimension-by-dimension centroid opposition-based learning strategy is added to the food-source update, which expands the population search range, strengthens the dimensional evolution information, and enhances the ability to jump out of local solutions; b) a random factor is added to the followers' update equation to enhance the diversity of the population distribution; c) drawing on PSO's social learning strategy, the food source is added to the followers' update equation to guide them directly and improve the convergence speed. The comparison results on the ten standard test functions and the reliability optimization example show that DCORSSA-PSO is superior to the other algorithms, which proves that the above improvement strategies are feasible and effective for improving the optimization performance of SSA. In future work, methods for increasing population diversity, such as Lévy flight and chaotic mapping, will be introduced into DCORSSA-PSO. At the same time, DCORSSA-PSO can be employed to optimize pattern classification, fuzzy control, machine learning, etc.
This project is supported by the National Natural Science Foundation of China (Grant No. 51975508), Natural Science Foundation of Hebei Province (Grant No. E2021203061).
All authors declare that there is no conflict of interests in this paper.
4 | 0.5 | 0 | 0.8 | 0.1 | 0.1 | 9 | 1 | 1 | 0 | 0 | 1 |
5 | 0.5 | 0.5 | 0.1 | 0.5 | 0.4 | - | - | - | - | - | - |
rules | y1 | y2 | y3 | rules | y1 | y2 | y3 | ||||
0 | 0.5 | 1 | 0 | 0.5 | 1 | ||||||
1 | 0 | 0 | 1 | 0 | 0 | 6 | 0.5 | 1 | 0 | 0 | 1 |
2 | 0 | 0.5 | 0.4 | 0.5 | 0.1 | 7 | 1 | 0 | 0.1 | 0.2 | 0.7 |
3 | 0 | 1 | 0.1 | 0.1 | 0.8 | 8 | 1 | 0.5 | 0 | 0 | 1 |
4 | 0.5 | 0 | 0.8 | 0.1 | 0.1 | 9 | 1 | 1 | 0 | 0 | 1 |
5 | 0.5 | 0.5 | 0.1 | 0.5 | 0.4 | - | - | - | - | - | - |
i | 1 | 2 | 3 | 4 | 5 |
105αi | 2.540×10-5 | 2.483 × 10-5 | 6.420 × 10-5 | 7.160 × 10-5 | 2.417 × 10-5 |
βi | 1.500 | 1.500 | 1.500 | 1.500 | 1.500 |
Optimized parameters | SSA | DCORSSA-PSO | PSO | GWO | ||||
P(xi) | ni | P(xi) | ni | P(xi) | ni | P(xi) | ni | |
x1 | 1.28 × 10-1 | 3 | 1.09 × 10-1 | 3 | 1.58 × 10-1 | 3 | 1.37 × 10-1 | 3 |
x2 | 9.22 × 10-2 | 3 | 1.08 × 10-1 | 3 | 1.58 × 10-1 | 3 | 1.68 × 10-1 | 3 |
x3 | 1.38 × 10-1 | 3 | 9.00 × 10-2 | 3 | 1.07 × 10-1 | 3 | 1.88 × 10-1 | 3 |
x4 | 1.26 × 10-1 | 3 | 1.41 × 10-1 | 3 | 1.70 × 10-1 | 3 | 1.57 × 10-1 | 3 |
x5 | 1.16 × 10-1 | 3 | 9.66 × 10-2 | 3 | 1.11 × 10-1 | 3 | 1.59 × 10-1 | 3 |
P | 5.03 × 10-3 | 3.71 × 10-3* | 8.03 × 10-3 | 1.22 × 10-2 | ||||
C | 175.00 | 175.00 | 115.76 | 105.89* | ||||
Time/s | 4.91 × 10-1* | 6.05 × 10-1 | 8.17 × 10-1 | 5.09 × 10-1 |
Measure | SSA | DCORSSA-PSO | PSO | GWO |
Mean | 6.956 × 10-3 | 3.746 × 10-3* | 5.196 × 10-3 | 8.304 × 10-3 |
Best | 3.710 × 10-3 | 3.709 × 10-3* | 4.387 × 10-3 | 4.404 × 10-3 |
Std | 3.703 × 10-3 | 3.886 × 10-5* | 8.625 × 10-4 | 2.518 × 10-3 |
Time/s | 5.466 × 10-1* | 6.795 × 10-1 | 8.975 × 10-1 | 5.617 × 10-1 |
Algorithm | Parameters
SSA | c1 ∈ [2e^-16, 2e]; c2 ∈ [0, 1]; c3 ∈ [0, 1]
DCOSSA | c1 ∈ [2e^-16, 2e]; c2 ∈ [0, 1]; c3 ∈ [0, 1]
DCORSSA | c1 ∈ [2e^-16, 2e]; c2 ∈ [0, 1]; c3 ∈ [0, 1]; c4 ∈ [0, 1]
DCORSSA-PSO | c1 ∈ [2e^-16, 2e]; c2 ∈ [0, 1]; c3 ∈ [0, 1]; c4 ∈ [0, 1]; c5 = 1.49
PSO | w ∈ [0.4, 0.9]; c1 = c2 = 1.49
GWO | r1 ∈ [0, 1]; r2 ∈ [0, 1]
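The c1 range listed for the SSA variants corresponds to the standard salp-swarm leader coefficient, which decays as the iteration counter l advances toward the maximum L. A minimal sketch of that schedule (the formula c1 = 2·exp(−(4l/L)²) is the canonical SSA definition and is assumed here, since the table only lists the ranges):

```python
import math

def ssa_c1(l, L):
    """Standard SSA leader coefficient: starts at 2 when l = 0 and
    decays toward 2*e^-16 when l = L."""
    return 2.0 * math.exp(-((4.0 * l / L) ** 2))
```

This monotone decay is what shifts the leader salp from exploration early in the run to exploitation near the end.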
Benchmark function | Range | fmin |
f1(x)=n∑i=1x2i | [-100,100] | 0 |
f2(x)=n∑i=1|xi|+n∏i=1|xi| | [-10, 10] | 0 |
f3(x)=nmaxi=1{|xi|} | [-100,100] | 0 |
f4(x)=n∑i=1(⌊xi+0.5⌋)2 | [-100,100] | 0 |
f5(x)=n∑i=1[x2i−10cos(2π xi)+10] | [-5.12, 5.12] | 0 |
f6(x)=14000n∑i=1x2i−n∏i=1cosxi√i+1 | [-600,600] | 0 |
f7(x)=1−cos(2π√n∑i=1x2i)+0.1√n∑i=1x2i | [-32, 32] | 0 |
f8(x)=−20exp(−0.2√1nn∑i=1x2i)−exp(1nn∑i=1cos(2πxi))+20+e | [-100,100] | 0 |
f9(x)=n∑i=1ix4i+rand | [-1.28, 1.28] | 0 |
f10(x)=n−1∑i=1[100(xi+1−x2i)2+(xi−1)2] | [-30, 30] | 0 |
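For readers wanting to reproduce the experiments, the benchmark definitions above translate directly to code. A sketch of four of them (f1 Sphere, f5 Rastrigin, f8 Ackley, f10 Rosenbrock) in NumPy, following the formulas in the table:

```python
import numpy as np

def f1_sphere(x):
    # f1: sum of squares; global minimum 0 at x = 0
    return np.sum(x ** 2)

def f5_rastrigin(x):
    # f5: sum of [x_i^2 - 10*cos(2*pi*x_i) + 10]; minimum 0 at x = 0
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f8_ackley(x):
    # f8: -20*exp(-0.2*sqrt(mean(x^2))) - exp(mean(cos(2*pi*x))) + 20 + e
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def f10_rosenbrock(x):
    # f10: sum of 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2; minimum 0 at x = 1
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)
```

Each "Mean"/"Best"/"Std" entry in the results table is a statistic of such function values over repeated independent runs of one algorithm.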
Functions | Measure | SSA | DCOSSA | DCORSSA | DCORSSA-PSO | PSO | GWO
f1 | Mean | 2.40×10^-7 | 2.28×10^-9 | 2.70×10^-27 | 5.80×10^-44* | 2.08×10^2 | 1.42×10^-27
 | Best | 3.70×10^-8 | 9.29×10^-12 | 6.45×10^-32 | 1.42×10^-46* | 5.82×10^1 | 3.07×10^-29
 | Std | 4.09×10^-7 | 5.16×10^-9 | 5.86×10^-27 | 1.17×10^-43* | 1.08×10^2 | 1.82×10^-27
 | Time/s | 1.01×10^-1* | 1.99×10^-1 | 2.01×10^-1 | 2.11×10^-1 | 2.71×10^-1 | 1.70×10^-1
f2 | Mean | 1.96 | 1.36×10^-5 | 1.00×10^-14 | 7.80×10^-23* | 6.30 | 7.96×10^-17
 | Best | 1.71×10^-1 | 1.03×10^-6 | 6.24×10^-16 | 2.12×10^-24* | 3.57 | 4.54×10^-18
 | Std | 1.46 | 3.95×10^-5 | 1.18×10^-14 | 9.73×10^-23* | 1.75 | 4.69×10^-17
 | Time/s | 9.37×10^-2* | 1.83×10^-1 | 1.80×10^-1 | 1.93×10^-1 | 2.41×10^-1 | 1.47×10^-1
f3 | Mean | 1.20×10^1 | 1.08 | 9.05×10^-15 | 7.16×10^-23* | 1.24×10^1 | 8.01×10^-7
 | Best | 5.19 | 1.36×10^-1 | 6.79×10^-17 | 1.98×10^-24* | 4.76 | 1.16×10^-7
 | Std | 3.35 | 6.96×10^-1 | 1.29×10^-14 | 8.06×10^-23* | 3.19 | 7.47×10^-7
 | Time/s | 9.07×10^-2* | 1.75×10^-1 | 1.72×10^-1 | 1.85×10^-1 | 2.36×10^-1 | 1.41×10^-1
f4 | Mean | 1.08×10^-7 | 9.70×10^-10* | 3.39×10^-7 | 6.50×10^-8 | 1.67×10^2 | 7.97×10^-1
 | Best | 2.54×10^-8 | 9.62×10^-12* | 2.56×10^-8 | 2.41×10^-8 | 5.07×10^1 | 2.57×10^-1
 | Std | 8.12×10^-8 | 2.23×10^-9* | 8.51×10^-7 | 2.66×10^-8 | 8.92×10^1 | 2.57×10^-1
 | Time/s | 9.29×10^-2* | 1.79×10^-1 | 1.76×10^-1 | 1.91×10^-1 | 2.42×10^-1 | 1.44×10^-1
f5 | Mean | 6.16×10^1 | 4.64×10^-1 | 0* | 0* | 1.31×10^2 | 3.42
 | Best | 2.79×10^1 | 2.80×10^-12 | 0* | 0* | 8.73×10^1 | 0*
 | Std | 1.80×10^1 | 5.68×10^-1 | 0* | 0* | 2.41×10^1 | 4.36
 | Time/s | 1.05×10^-1* | 1.98×10^-1 | 1.92×10^-1 | 2.00×10^-1 | 2.59×10^-1 | 1.52×10^-1
f6 | Mean | 9.54×10^-1 | 1.00×10^-1 | 0* | 0* | 2.65 | 4.12×10^-13
 | Best | 8.49×10^-1 | 4.44×10^-2 | 0* | 0* | 1.57 | 5.00×10^-15
 | Std | 4.30×10^-2 | 3.49×10^-2 | 0* | 0* | 1.04 | 4.65×10^-13
 | Time/s | 1.22×10^-1* | 2.42×10^-1 | 2.36×10^-1 | 2.46×10^-1 | 2.77×10^-1 | 1.70×10^-1
f7 | Mean | 1.96 | 1.04 | 2.96×10^-15 | 2.25×10^-23* | 2.75 | 1.83×10^-1
 | Best | 1.00 | 7.00×10^-1 | 1.22×10^-16 | 1.86×10^-25* | 1.80 | 9.99×10^-2
 | Std | 4.18×10^-1 | 2.40×10^-1 | 2.99×10^-15 | 2.90×10^-23* | 4.88×10^-1 | 3.79×10^-2
 | Time/s | 9.56×10^-2* | 1.94×10^-1 | 1.91×10^-1 | 2.00×10^-1 | 2.46×10^-1 | 1.51×10^-1
f8 | Mean | 2.46 | 7.58×10^-6 | 8.23×10^-15 | 8.88×10^-16* | 5.84 | 9.80×10^-14
 | Best | 9.31×10^-1 | 7.50×10^-7 | 8.88×10^-16* | 8.88×10^-16* | 4.45 | 7.55×10^-14
 | Std | 7.42×10^-1 | 1.01×10^-5 | 1.23×10^-14 | 0* | 8.80×10^-1 | 1.49×10^-14
 | Time/s | 1.08×10^-1* | 2.04×10^-1 | 1.98×10^-1 | 2.10×10^-1 | 2.62×10^-1 | 1.52×10^-1
f9 | Mean | 1.51 | 5.48×10^-2 | 6.72×10^-5* | 7.24×10^-4 | 1.22 | 1.72×10^-2
 | Best | 5.85×10^-1 | 1.73×10^-2 | 1.32×10^-7* | 7.12×10^-6 | 3.09×10^-2 | 8.50×10^-4
 | Std | 6.72×10^-1 | 1.91×10^-2 | 6.83×10^-5* | 1.55×10^-3 | 1.23 | 1.55×10^-2
 | Time/s | 1.59×10^-1* | 3.16×10^-1 | 3.18×10^-1 | 3.27×10^-1 | 3.06×10^-1 | 2.09×10^-1
f10 | Mean | 2.77×10^2 | 5.65×10^1 | 2.79×10^1 | 2.64×10^1* | 5.39×10^3 | 2.69×10^1
 | Best | 2.49×10^1* | 3.96×10^-2 | 2.74×10^1 | 2.62×10^1 | 5.89×10^2 | 2.57×10^1
 | Std | 4.33×10^2 | 4.31×10^1 | 2.05×10^-1 | 1.33×10^-1* | 5.25×10^3 | 7.83×10^-1
 | Time/s | 1.24×10^-1* | 2.47×10^-1 | 2.47×10^-1 | 2.60×10^-1 | 2.81×10^-1 | 1.78×10^-1
Note: an asterisk (*) marks the best result among all algorithms.
Comparison group | +/=/- | Comparison group | +/=/-
DCORSSA-PSO vs SSA | 9/1/0 | DCORSSA-PSO vs PSO | 10/0/0
DCORSSA-PSO vs DCOSSA | 9/0/1 | DCORSSA-PSO vs GWO | 10/0/0
DCORSSA-PSO vs DCORSSA | 7/2/1 | - | -
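The +/=/- tallies count how many of the ten benchmarks DCORSSA-PSO is significantly better than, statistically indistinguishable from, or worse than each competitor. A sketch of how a single benchmark could be classified from two algorithms' run results (the rank-sum test and the α = 0.05 threshold are assumptions for illustration; the source gives only the tallies):

```python
from scipy.stats import ranksums

def compare(runs_a, runs_b, alpha=0.05):
    """Classify one benchmark for algorithm A vs B (minimization):
    '+' if A is significantly better, '-' if worse, '=' if not distinguishable."""
    stat, p = ranksums(runs_a, runs_b)   # two-sided Wilcoxon rank-sum test
    if p >= alpha:
        return "="
    mean_a = sum(runs_a) / len(runs_a)
    mean_b = sum(runs_b) / len(runs_b)
    return "+" if mean_a < mean_b else "-"
```

Summing these symbols over all ten benchmarks yields a +/=/- triple like those in the table.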
Algorithm | SSA | DCOSSA | DCORSSA | DCORSSA-PSO | PSO | GWO |
ARV | 4.8533 | 3.5700 | 2.2567 | 1.4300 | 5.8700 | 3.0200 |
rank | 5 | 4 | 2 | 1 | 6 | 3 |
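ARV is the average rank value of each algorithm over the ten benchmarks (lower is better); the rank row then orders the algorithms by ARV. Given a matrix of mean results (rows = benchmarks, columns = algorithms), it can be reproduced along these lines (the data passed in is hypothetical, not the paper's):

```python
import numpy as np
from scipy.stats import rankdata

def average_rank(results):
    """results: (n_benchmarks, n_algorithms) array of mean errors.
    Rank 1 = best per benchmark; ties receive average ranks."""
    ranks = np.vstack([rankdata(row) for row in results])
    return ranks.mean(axis=0)
```

With the paper's ARVs, DCORSSA-PSO's 1.43 means it ranked at or near first on almost every benchmark.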
Rules | x1 | x2 | x3 | y1(0) | y1(0.5) | y1(1) | Rules | x1 | x2 | x3 | y1(0) | y1(0.5) | y1(1)
1 | 0 | 0 | 0 | 1 | 0 | 0 | 15 | 0.5 | 0.5 | 1 | 0 | 0 | 1 |
2 | 0 | 0 | 0.5 | 0.2 | 0.5 | 0.3 | 16 | 0.5 | 1 | 0 | 0 | 0 | 1 |
3 | 0 | 0 | 1 | 0 | 0 | 1 | 17 | 0.5 | 1 | 0.5 | 0 | 0 | 1 |
4 | 0 | 0.5 | 0 | 0.3 | 0.5 | 0.2 | 18 | 0.5 | 1 | 1 | 0 | 0 | 1 |
5 | 0 | 0.5 | 0.5 | 0.2 | 0.3 | 0.5 | 19 | 1 | 0 | 0 | 0 | 0 | 1 |
6 | 0 | 0.5 | 1 | 0 | 0 | 1 | 20 | 1 | 0 | 0.5 | 0 | 0 | 1 |
7 | 0 | 1 | 0 | 0 | 0 | 1 | 21 | 1 | 0 | 1 | 0 | 0 | 1 |
8 | 0 | 1 | 0.5 | 0 | 0 | 1 | 22 | 1 | 0.5 | 0 | 0 | 0 | 1 |
9 | 0 | 1 | 1 | 0 | 0 | 1 | 23 | 1 | 0.5 | 0.5 | 0 | 0 | 1 |
10 | 0.5 | 0 | 0 | 0.2 | 0.5 | 0.3 | 24 | 1 | 0.5 | 1 | 0 | 0 | 1 |
11 | 0.5 | 0 | 0.5 | 0.1 | 0.4 | 0.5 | 25 | 1 | 1 | 0 | 0 | 0 | 1 |
12 | 0.5 | 0 | 1 | 0 | 0 | 1 | 26 | 1 | 1 | 0.5 | 0 | 0 | 1 |
13 | 0.5 | 0.5 | 0 | 0.1 | 0.5 | 0.4 | 27 | 1 | 1 | 1 | 0 | 0 | 1 |
14 | 0.5 | 0.5 | 0.5 | 0.1 | 0.4 | 0.5 | - | - | - | - | - | - | - |
Rules | x4 | x5 | y2(0) | y2(0.5) | y2(1) | Rules | x4 | x5 | y2(0) | y2(0.5) | y2(1)
1 | 0 | 0 | 1 | 0 | 0 | 6 | 0.5 | 1 | 0 | 0 | 1 |
2 | 0 | 0.5 | 0.4 | 0.4 | 0.2 | 7 | 1 | 0 | 0.1 | 0.2 | 0.7 |
3 | 0 | 1 | 0.1 | 0.1 | 0.8 | 8 | 1 | 0.5 | 0 | 0 | 1 |
4 | 0.5 | 0 | 0.8 | 0.1 | 0.1 | 9 | 1 | 1 | 0 | 0 | 1 |
5 | 0.5 | 0.5 | 0.1 | 0.5 | 0.4 | - | - | - | - | - | - |
Rules | y1 | y2 | y3(0) | y3(0.5) | y3(1) | Rules | y1 | y2 | y3(0) | y3(0.5) | y3(1)
1 | 0 | 0 | 1 | 0 | 0 | 6 | 0.5 | 1 | 0 | 0 | 1 |
2 | 0 | 0.5 | 0.4 | 0.5 | 0.1 | 7 | 1 | 0 | 0.1 | 0.2 | 0.7 |
3 | 0 | 1 | 0.1 | 0.1 | 0.8 | 8 | 1 | 0.5 | 0 | 0 | 1 |
4 | 0.5 | 0 | 0.8 | 0.1 | 0.1 | 9 | 1 | 1 | 0 | 0 | 1 |
5 | 0.5 | 0.5 | 0.1 | 0.5 | 0.4 | - | - | - | - | - | - |
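Each rule row maps crisp antecedent levels from {0, 0.5, 1} to a membership distribution over the consequent's three levels. A sketch encoding the x4, x5 → y2 table above as a lookup, with a simple weighted-average defuzzification (the defuzzification scheme is an assumption for illustration; the rule values themselves are copied from the table):

```python
# (x4, x5) -> membership of y2 over the levels (0, 0.5, 1), rules 1-9 above
Y2_RULES = {
    (0.0, 0.0): (1.0, 0.0, 0.0),   # rule 1
    (0.0, 0.5): (0.4, 0.4, 0.2),   # rule 2
    (0.0, 1.0): (0.1, 0.1, 0.8),   # rule 3
    (0.5, 0.0): (0.8, 0.1, 0.1),   # rule 4
    (0.5, 0.5): (0.1, 0.5, 0.4),   # rule 5
    (0.5, 1.0): (0.0, 0.0, 1.0),   # rule 6
    (1.0, 0.0): (0.1, 0.2, 0.7),   # rule 7
    (1.0, 0.5): (0.0, 0.0, 1.0),   # rule 8
    (1.0, 1.0): (0.0, 0.0, 1.0),   # rule 9
}

def y2_centroid(x4, x5, levels=(0.0, 0.5, 1.0)):
    """Defuzzify the consequent by a membership-weighted average of its levels."""
    m = Y2_RULES[(x4, x5)]
    return sum(w * v for w, v in zip(m, levels)) / sum(m)
```

For example, rule 2 maps (x4, x5) = (0, 0.5) to memberships (0.4, 0.4, 0.2), which this scheme defuzzifies to 0.4.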
i | 1 | 2 | 3 | 4 | 5
αi | 2.540×10^-5 | 2.483×10^-5 | 6.420×10^-5 | 7.160×10^-5 | 2.417×10^-5
βi | 1.500 | 1.500 | 1.500 | 1.500 | 1.500
Optimized parameters | SSA | | DCORSSA-PSO | | PSO | | GWO |
 | P(xi) | ni | P(xi) | ni | P(xi) | ni | P(xi) | ni
x1 | 1.28×10^-1 | 3 | 1.09×10^-1 | 3 | 1.58×10^-1 | 3 | 1.37×10^-1 | 3
x2 | 9.22×10^-2 | 3 | 1.08×10^-1 | 3 | 1.58×10^-1 | 3 | 1.68×10^-1 | 3
x3 | 1.38×10^-1 | 3 | 9.00×10^-2 | 3 | 1.07×10^-1 | 3 | 1.88×10^-1 | 3
x4 | 1.26×10^-1 | 3 | 1.41×10^-1 | 3 | 1.70×10^-1 | 3 | 1.57×10^-1 | 3
x5 | 1.16×10^-1 | 3 | 9.66×10^-2 | 3 | 1.11×10^-1 | 3 | 1.59×10^-1 | 3
P | 5.03×10^-3 | 3.71×10^-3* | 8.03×10^-3 | 1.22×10^-2
C | 175.00 | 175.00 | 115.76 | 105.89*
Time/s | 4.91×10^-1* | 6.05×10^-1 | 8.17×10^-1 | 5.09×10^-1
Measure | SSA | DCORSSA-PSO | PSO | GWO
Mean | 6.956×10^-3 | 3.746×10^-3* | 5.196×10^-3 | 8.304×10^-3
Best | 3.710×10^-3 | 3.709×10^-3* | 4.387×10^-3 | 4.404×10^-3
Std | 3.703×10^-3 | 3.886×10^-5* | 8.625×10^-4 | 2.518×10^-3
Time/s | 5.466×10^-1* | 6.795×10^-1 | 8.975×10^-1 | 5.617×10^-1