Citation: Jian Guan, Fei Yu, Hongrun Wu, Yingpin Chen, Zhenglong Xiang, Xuewen Xia, Yuanxiang Li. Dynamic allocation of opposition-based learning in differential evolution for multi-role individuals[J]. Electronic Research Archive, 2024, 32(5): 3241-3274. doi: 10.3934/era.2024149
Abstract
Opposition-based learning (OBL) is an optimization technique widely used to enhance algorithms. Analysis shows that different variants of OBL perform differently on different problems, which makes the co-optimization of multiple OBL strategies crucial. Therefore, this study proposed DAODE, a differential evolution algorithm with dynamic allocation of OBL for multi-role individuals. Before each population update in DAODE, the individuals in the population were assigned multiple roles and stored in corresponding archives. Different roles then received their respective rewards through a comprehensive OBL-based ranking mechanism, which assigned each role an OBL strategy to maintain a balance between exploration and exploitation within the population. In addition, a mutation strategy based on the multi-role archives was proposed, in which the individuals used in the mutation operation were selected from the archives, thereby steering the population toward more promising regions. DAODE was compared with state-of-the-art algorithms on the benchmark suite presented at the 2017 IEEE Congress on Evolutionary Computation (CEC2017), and statistical tests were conducted to examine the significance of the differences between DAODE and these algorithms. The experimental results indicated that DAODE surpassed all of the compared state-of-the-art algorithms on more than half of the test functions, and the statistical tests showed that DAODE consistently ranked first in the comprehensive ranking.
1.
Introduction
Many real-world optimization problems exist, such as data analytics [1], medical diagnosis [2], electric load dispatch [3], truss optimization [4], and wind farm layout optimization [5]. Optimization methods primarily fall into traditional and intelligent methods. Traditional optimization methods typically rely on manually crafted rules and strategies, cannot fully leverage the information contained in large-scale data and complex problems, and perform poorly on high-dimensional, nonlinear, and dynamic systems. In contrast, intelligent optimization methods such as metaheuristic algorithms (MAs) are widely used in the real world because of their simplicity and applicability, especially for optimization problems with large-scale, multi-objective, and multi-constraint characteristics. So far, researchers have proposed several MAs, which mainly include evolutionary algorithms, represented by genetic algorithms (GA) [6] and differential evolution (DE), and swarm intelligence algorithms, represented by particle swarm optimization (PSO) [7], ant colony optimization (ACO) [8], artificial bee colony (ABC) [9], and grey wolf optimizer (GWO) [10]. Meanwhile, some intelligent algorithms proposed in recent years have also attracted widespread attention, such as the parrot optimizer (PO) [11], rime optimization algorithm (RIME) [12], weighted mean of vectors (INFO) [13], and hunger games search (HGS) [14], among others. Among these algorithms, DE, proposed by Storn and Price in 1995 [15], has received widespread research attention due to its few parameters and simple structure, which make it easy to implement. The strong optimization capability of DE has also been validated in various real-world applications, such as power system optimization [16], path planning [17], and logistics scheduling [18], among others [19].
Since the proposal of DE, researchers have made a series of improvements, such as parameter adaptation, hybridization with other algorithms, and the introduction of optimization strategies. These improvements have further enhanced the performance of DE and attracted more attention. Among the optimization strategies, opposition-based learning (OBL), proposed by Tizhoosh [20], is a powerful technique for enhancing algorithm performance. OBL generates opposite solutions in evolutionary algorithms and evaluates both the original and opposite solutions with the objective function, thereby enhancing the algorithm's ability to escape local optima and converge quickly toward the global optimum. Rahnamayan et al. [21] first broadened the search space of DE using OBL, and researchers have since proposed several variants of OBL to further enhance its optimization capability. However, these methods often struggle when faced with more complex problems. On the one hand, classical optimization algorithms often grapple with escaping local optima, particularly when the objective function contains numerous local optima or extreme points. On the other hand, most prior studies employ a single OBL strategy to optimize an algorithm rather than exploring multiple OBL strategies (OBLs), even though different OBL variants exhibit varying optimization performance; for example, some OBL variants favor population exploration, while others enhance exploitation. Therefore, in light of this gap, research on algorithms that co-optimize multiple OBL strategies has become crucial.
Coordinating different OBL variants at the various stages of an algorithm is crucial for cooperative optimization with multiple OBLs. Exploration and exploitation are the two cornerstones of population evolution, and their balance significantly affects the performance of evolutionary algorithms [22]. Exploration is the process of thoroughly visiting new regions of the search space; in the early stages of an algorithm, good exploration provides a broad search space for the initial population, but emphasizing exploration alone while neglecting exploitation reduces the convergence speed. Exploitation is the process of searching the space near previously visited points; in the later stages of the algorithm, good exploitation allows the population to converge to the optimal solution quickly, but excessive emphasis on exploitation makes the algorithm premature and traps it in local optima. Hence, to achieve a better balance between exploration and exploitation while employing multiple OBLs cooperatively, this paper observes that the diversity and convergence of the population correspond directly to its exploration and exploitation capabilities [23]. To this end, dynamically allocating an OBL strategy to each individual based on the characteristics and variations of the population at each stage is a promising approach: assigning OBL strategies with different characteristics to satisfy the needs of individuals in the population can keep the algorithm close to its best balanced state [24].
Our previous work has validated the effectiveness of OBL [25]. To further study the collaborative optimization of OBLs, this paper proposes a dynamic allocation of opposition-based learning in differential evolution for multi-role individuals (DAODE), which efficiently exploits the co-optimization effect among multiple OBL strategies. First, the individuals of each generation are divided into multiple roles and stored in the corresponding archives. Then, a comprehensive ranking mechanism based on OBL is used to assign the corresponding rewards to the different roles; specifically, different roles are assigned different OBL strategies to maintain a balance between population exploration and exploitation. Furthermore, this paper regards the archives of the different roles as reflecting how close each role is to the global optimum. Therefore, this paper proposes a mutation strategy based on the multi-role archives, in which the individuals used in the mutation operation are selected from the different archives, which enhances the convergence speed of the algorithm.
The main contributions of this paper are as follows:
1) Introduce a novel individual evaluation metric and assign individuals three roles: elite, commoner, and inferior.
2) Propose a mechanism for dynamically allocating OBL strategies based on the different characteristics of various OBL strategies.
3) To flexibly allocate OBL, define a comprehensive ranking mechanism for multiple OBL strategies.
4) Propose an archive-based mutation strategy that exploits the different roles of individuals.
The remainder of this paper is organized as follows: Section 2 describes the traditional DE framework and OBL strategy, and reviews the application of OBL and its variants in MAs. The details of the proposed DAODE algorithm are presented in Section 3. In Section 4, the experimental results of DAODE are analyzed and discussed. Finally, Section 5 briefly discusses the conclusions and future developments of this study.
2.
Background
In this section, the traditional DE framework and OBL strategies are described, and we review the application of OBL in MAs and the study of classical OBL variants.
2.1. Differential evolution
Due to its simplicity and efficiency, traditional DE has been extensively studied as an optimization algorithm. It first generates an initial population P0 randomly and then produces offspring through mutation, crossover, and selection operations. The details are as follows:
2.1.1. Initialization
The j-th dimension of the i-th vector, x_{i,j}^{0}, in the initial population P0 is defined as follows:
x_{i,j}^{0} = lb_j + \mathrm{rand}(0,1) \cdot (ub_j - lb_j)
(2.1)
where i \in \{1, 2, \dots, NP\} and j \in \{1, 2, \dots, D\}; NP and D denote the size and dimension of the population, respectively; ub_j and lb_j denote the upper and lower boundaries of the j-th dimension, respectively; and rand(0,1) is a uniformly distributed random number in the range [0, 1].
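For concreteness, Eq (2.1) can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the authors' MATLAB implementation; the function name and the use of NumPy's random generator are our own choices.

import numpy as np

def initialize_population(NP, D, lb, ub, rng=None):
    # Eq (2.1): every component is sampled uniformly between its lower and upper bound.
    rng = np.random.default_rng() if rng is None else rng
    lb = np.asarray(lb, dtype=float)   # shape (D,)
    ub = np.asarray(ub, dtype=float)   # shape (D,)
    return lb + rng.random((NP, D)) * (ub - lb)

# Example: 100 individuals in 30 dimensions on the CEC2017 search range [-100, 100]^D.
P0 = initialize_population(100, 30, [-100.0] * 30, [100.0] * 30)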
2.1.2. Mutation
The mutation operation is the most important part of DE. In the classical DE mutation strategy (DE/rand/1), when the population is iteratively updated, the target vector X_i^t generates the mutant vector V_i^{t+1} of the offspring through the mutation operator. Thus far, several types of mutation strategies have been proposed; the six most frequently used are as follows:
(1) "DE/rand/1"
V_i^{t+1} = X_{r1}^{t} + F \cdot (X_{r2}^{t} - X_{r3}^{t})
(2.2)
(2) "DE/best/1"
V_i^{t+1} = X_{best}^{t} + F \cdot (X_{r1}^{t} - X_{r2}^{t})
(2.3)
(3) "DE/rand/2"
V_i^{t+1} = X_{r1}^{t} + F_1 \cdot (X_{r2}^{t} - X_{r3}^{t}) + F_2 \cdot (X_{r4}^{t} - X_{r5}^{t})
(2.4)
(4) "DE/best/2"
V_i^{t+1} = X_{best}^{t} + F_1 \cdot (X_{r1}^{t} - X_{r2}^{t}) + F_2 \cdot (X_{r3}^{t} - X_{r4}^{t})
(2.5)
(5) "DE/current-to-rand/1"
V_i^{t+1} = X_i^{t} + \mathrm{rand} \cdot (X_{r1}^{t} - X_i^{t}) + F \cdot (X_{r2}^{t} - X_{r3}^{t})
(2.6)
(6) "DE/current-to-best/1"
V_i^{t+1} = X_i^{t} + F_1 \cdot (X_{best}^{t} - X_i^{t}) + F_2 \cdot (X_{r1}^{t} - X_{r2}^{t})
(2.7)
where X_{r1}^{t}, X_{r2}^{t}, X_{r3}^{t}, X_{r4}^{t}, and X_{r5}^{t} denote randomly selected vectors; the indices i, r1, r2, r3, r4, and r5 are mutually distinct integers randomly selected from the range [1, NP]; F (and likewise F_1 and F_2) denotes the control parameter of the perturbation vector; and X_{best}^{t} denotes the best vector of the t-th generation.
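The most commonly used strategies, DE/rand/1 (Eq (2.2)) and DE/best/1 (Eq (2.3)), can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code; the helper distinct_indices is a hypothetical convenience for drawing mutually distinct indices.

import numpy as np

def distinct_indices(NP, exclude, count, rng):
    # Draw `count` mutually distinct indices from [0, NP), all different from `exclude`.
    candidates = [k for k in range(NP) if k != exclude]
    return rng.choice(candidates, size=count, replace=False)

def de_rand_1(pop, i, F, rng):
    # Eq (2.2): base vector and difference vector come from three random individuals.
    r1, r2, r3 = distinct_indices(len(pop), i, 3, rng)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_best_1(pop, fitness, i, F, rng):
    # Eq (2.3): the current best individual serves as the base vector.
    best = pop[np.argmin(fitness)]
    r1, r2 = distinct_indices(len(pop), i, 2, rng)
    return best + F * (pop[r1] - pop[r2])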
2.1.3. Crossover
The crossover operation in DE aims to enhance the diversity of the population and promote the generation of structured differences within it. The trial vector U_i^{t+1} generated by crossing the target vector with the mutation vector is defined as follows (binomial crossover):
u_{i,j}^{t+1} = \begin{cases} v_{i,j}^{t+1}, & \text{if } \mathrm{rand}(0,1) \le CR \text{ or } j = j_{rand} \\ x_{i,j}^{t}, & \text{otherwise} \end{cases}
(2.8)
where CR \in [0, 1] denotes the crossover parameter, j \in [1, D] denotes the dimension index of a vector, and j_{rand} is a random integer generated from the range [1, D]; x_{i,j}^{t} denotes the j-th dimension of the original vector X_i^{t}, and v_{i,j}^{t+1} and u_{i,j}^{t+1} are the j-th dimensions of the mutation vector V_i^{t+1} and the trial vector U_i^{t+1} at the (t+1)-th iteration, respectively.
2.1.4. Selection
In the selection operation, DE selects the better of the original vector X_i^{t} and the trial vector U_i^{t+1} to form the next-generation vector X_i^{t+1}. It is defined as follows:
X_i^{t+1} = \begin{cases} U_i^{t+1}, & \text{if } f(U_i^{t+1}) \le f(X_i^{t}) \\ X_i^{t}, & \text{otherwise} \end{cases}
(2.9)
where f(X_i^{t}) and f(U_i^{t+1}) denote the fitness values of the original vector X_i^{t} and the trial vector U_i^{t+1}, respectively. Using this greedy selection method, the better individual survives into the next generation.
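Binomial crossover (Eq (2.8)) and greedy selection (Eq (2.9)) can be sketched as follows; the function names are illustrative and this is not the authors' implementation.

import numpy as np

def binomial_crossover(x, v, CR, rng):
    # Eq (2.8): take the mutant component when rand(0,1) <= CR or j == jrand, else keep the parent's.
    D = len(x)
    jrand = rng.integers(D)          # guarantees at least one component from the mutant
    mask = rng.random(D) <= CR
    mask[jrand] = True
    return np.where(mask, v, x)

def greedy_selection(x, u, fx, fu):
    # Eq (2.9): keep the trial vector only if its fitness is not worse than the target's.
    return (u, fu) if fu <= fx else (x, fx)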
2.2. Opposition-based learning
Inspired by the concept of opposites, OBL is a general and efficient method proposed by Tizhoosh [20]. It is often used in various optimization algorithms to improve their performance and effectively address real-world optimization problems [26]. OBL is defined as follows:
First, the opposite solution is defined as follows in one-dimensional space.
Opposition number: Let x \in [a, b] be a real number, where a and b represent the minimum and maximum values of the variable x, respectively. The opposition number \breve{x} is defined as follows:
\breve{x} = a + b - x
(2.10)
Similarly, the opposite solution can easily be extended to a D-dimensional space, which is defined as follows:
Opposition point: Let X = (x_1, x_2, \dots, x_D) be a point in a D-dimensional space, where x_j \in [a_j, b_j] and j \in [1, D]. The j-th component of the opposition point, \breve{x}_j, is calculated as follows:
\breve{x}_j = a_j + b_j - x_j
(2.11)
Subsequently, to use the OBL strategy more flexibly, Rahnamayan et al. [21] proposed the concept of dynamic boundaries based on the original OBL [20]. As the algorithm iterates, the search space is continuously reduced, allowing OBL to locate the optimal solution more efficiently. The details of OBL with D-dimensional dynamic bounds are as follows.
\breve{x}_{i,j}^{t} = a_j^{t} + b_j^{t} - x_{i,j}^{t}, \qquad a_j^{t} = \min(x_j^{t}), \quad b_j^{t} = \max(x_j^{t})
(2.12)
where \breve{x}_{i,j}^{t} denotes the opposite of the j-th dimension of the i-th vector at the t-th iteration, x_{i,j}^{t}; and a_j^{t} and b_j^{t} denote the minimum and maximum values of the j-th dimension at the t-th iteration, respectively.
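With NumPy broadcasting, Eq (2.12) reduces to a few lines; the sketch below is illustrative and not the authors' code.

import numpy as np

def opposite_population_dynamic(pop):
    # Eq (2.12): the bounds are the current per-dimension minimum and maximum of the population,
    # so the opposition interval shrinks as the population contracts.
    a = pop.min(axis=0)    # a_j^t
    b = pop.max(axis=0)    # b_j^t
    return a + b - pop     # opposite of every individual at once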
2.3. Application of OBL and its variants in MAs
Since the introduction of OBL, it has been used in several optimization algorithms, and several variants of OBL have been proposed to improve its optimization capability. Therefore, this section briefly introduces studies combining OBL with different optimization algorithms and reviews the concepts of some classical OBL variants.
2.3.1. OBL and MAs
In many studies, to improve the diversity and convergence of algorithm populations, OBL is often used in the population initialization and the evolutionary process of various algorithms. Details are summarized in Table 1.
However, as the complexity of a problem increases, traditional OBL exhibits inefficient performance in high-dimensional or multimodal problems. Therefore, several relevant variants of the OBL have been proposed, as shown in Table 2.
Table 2.
Research works on some classic OBL variants.
As described in Section 2.3, OBL has been shown to be a practical and efficient optimization strategy. These studies also show that most works employ only a single OBL strategy in their algorithms, even though different OBL strategies perform differently on different types of problems. For instance, different OBL strategies produce different levels of population diversity and convergence speed in DE: in the early stage of the algorithm, an OBL with better diversity helps the population explore unknown regions, whereas in the later stage, an OBL with better convergence helps the population locate the optimal solution. Research on the cooperative effect of multiple OBL strategies, in which OBLs with different characteristics are matched to the problems they suit, is therefore lacking. Moreover, studying how to dynamically allocate OBL strategies to the population at different stages is highly significant.
Related studies have demonstrated that dynamic allocation methods yield favorable performance for algorithms [53]. In DE, exploration and exploitation significantly affect performance, and methods for the dynamic selection of parameters or strategies have been proposed to maintain the balance between them. A dynamic allocation mechanism can select the parameters or strategies best suited to the current problem, thereby improving the algorithm's ability to solve it. Dynamic allocation based on a ranking mechanism is a common research approach [54], in which the corresponding strategy is adopted at each ranking level to achieve reasonable resource allocation and performance improvement.
3.
The proposed DAODE algorithm
Building upon the aforementioned inspiration, this paper proposes DAODE, a DE variant with dynamic allocation of OBL. Based on the characteristics of the individuals in each generation of the population, DAODE first assigns one of three roles to each individual and stores them in the corresponding archives. In the mutation operation, DAODE then employs a novel mutation strategy based on the multi-role archives to expedite the search for the global optimum. Lastly, the OBL pool, composed of multiple OBL strategies, dynamically assigns an OBL strategy to individuals of different roles. Different OBL strategies are allocated to the roles at different stages, which preserves population diversity while improving convergence speed. The complete framework of DAODE is shown in Figure 1, and the details of each component are described in the following subsections.
3.1. Multi-role individuals and archives (MIA)
DE is a stochastic search algorithm that randomly generates populations without prior knowledge. In several studies, archiving has been used to retain the genes of elite individuals for use in future generations [53], thus reducing the impact of this lack of prior knowledge. Additionally, dividing the population into multiple roles can provide a good balance for the algorithm [55]. Therefore, before each generation's population update, this paper assigns each individual one of three roles, elite, commoner, or inferior, and stores them in the corresponding archives.
In DAODE, the position and fitness of each individual are first evaluated. The position of an individual is measured by its Euclidean distance from the current optimum solution. A comprehensive characteristic of the individual is then derived from its position and fitness. Finally, based on this comprehensive characteristic, the three roles are assigned and the individuals are stored in their corresponding archives; the details are provided in Eq (3.1). As in the real world, the proportions of individuals with high and low abilities are small, while those with average abilities form the majority. Thus, the proportions of the elite, commoner, and inferior roles in DAODE are set to m%, n%, and p% (e.g., m = 20, n = 50, and p = 30), respectively, with the final proportions determined experimentally (see Section 4.5.1).
Elite roles provide suitable genes for their offspring and are beneficial for improving the accuracy of the solution. Inferior roles aim to improve the exploratory ability of the population. Commoner roles balance the exploration and exploitation of the entire population. The multi-role individuals and archives are shown in Figure 2, in which a darker color indicates an individual with smaller fitness that is closer to the optimal solution.
E = \alpha \cdot E_f + \beta \cdot E_p
(3.1)
where E, E_f, and E_p denote the comprehensive evaluation, fitness, and position indices of each individual, respectively; \alpha and \beta are the scaling parameters of E_f and E_p, and their values are determined experimentally.
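A minimal sketch of the role assignment is given below. It assumes that the fitness and position indices are min-max normalized before being combined by Eq (3.1), that a smaller E indicates a better individual, and that the default weights and proportions shown are placeholders; the paper only states Eq (3.1), the role proportions, and that α and β are determined experimentally.

import numpy as np

def assign_roles(pop, fitness, alpha=0.5, beta=0.5, m=0.2, n=0.5):
    # E_p: Euclidean distance to the current best individual; E_f: fitness (smaller is better).
    best = pop[np.argmin(fitness)]
    Ep = np.linalg.norm(pop - best, axis=1)
    normalize = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-12)   # assumed normalization
    E = alpha * normalize(np.asarray(fitness, dtype=float)) + beta * normalize(Ep)   # Eq (3.1)
    order = np.argsort(E)                      # smaller E -> better comprehensive evaluation
    NP = len(pop)
    n_elite, n_commoner = int(m * NP), int(n * NP)
    AE = order[:n_elite]                                   # elite archive (20% by default)
    AC = order[n_elite:n_elite + n_commoner]               # commoner archive (50%)
    AI = order[n_elite + n_commoner:]                      # inferior archive (remaining 30%)
    return AE, AC, AI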
3.2. Dynamic allocation OBL mechanism (DAO)
When generating trial vectors in a population, this paper identifies the role of each individual and assigns it a corresponding OBL strategy. To this end, this paper proposes a dynamic allocation OBL mechanism (DAO). The model of the DAO mechanism is shown in Figure 3, and the details are as follows.
To ensure the flexibility of dynamically allocating OBLs, this paper defines a comprehensive ranking mechanism for OBLs to efficiently allocate the corresponding OBLs to roles. First, the diversity and fitness of each OBL are evaluated and ranked, and the diversity is measured using Eq (3.2), as proposed by Morales-Castañeda [56].
Div_j = \frac{1}{NP}\sum_{i=1}^{NP}\left|\mathrm{median}(x^{j}) - x_i^{j}\right|, \qquad Div = \frac{1}{D}\sum_{j=1}^{D} Div_j
(3.2)
where x_i^{j} denotes the j-th dimension of the i-th vector and \mathrm{median}(x^{j}) is the median of the j-th dimension over the population. The diversity Div is the average of Div_j over all dimensions, and Eq (3.2) is evaluated at every iteration.
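Eq (3.2) translates directly into NumPy (an illustrative sketch):

import numpy as np

def population_diversity(pop):
    # Eq (3.2): mean absolute deviation from the per-dimension median, averaged over all dimensions.
    med = np.median(pop, axis=0)              # median(x^j) for every dimension j
    div_j = np.abs(pop - med).mean(axis=0)    # Div_j
    return div_j.mean()                       # Div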
RD = \mathrm{rank}_{Div}, \qquad RF = i
(3.3)
where RD and RF denote the diversity and fitness ranking values of each OBL, respectively, and i denotes the fitness rank, with a smaller fitness value corresponding to a smaller rank value. The diversity and fitness rankings are thus obtained from Eq (3.3).
As the iterations proceed, the diversity of the DE population gradually decreases and the fitness gradually converges toward the optimal solution. Therefore, this paper designs a weight parameter for the diversity and fitness rankings of each OBL and combines the two rankings into a comprehensive ranking. The comprehensive rank is calculated as follows:
R = \omega \cdot RD + (1 - \omega) \cdot RF, \qquad \omega = \frac{1}{2} + \frac{1}{2}\cos\!\left(\frac{iter}{Max\_Iter} \cdot \theta \cdot \pi\right)
(3.4)
where R denotes the comprehensive ranking of an OBL, \omega is a weight parameter generated by the cosine function, and iter and Max_Iter denote the current iteration number and the maximum number of iterations, respectively. The additional parameter \theta is introduced to normalize \omega.
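The rankings of Eqs (3.3) and (3.4) can be sketched as follows. The assumption that a larger diversity value receives a better (smaller) diversity rank is ours; the paper only states that RD is the diversity ranking of each OBL.

import numpy as np

def comprehensive_rank(div_values, fit_values, it, max_iter, theta=1.0):
    div_values = np.asarray(div_values, dtype=float)
    fit_values = np.asarray(fit_values, dtype=float)
    # Eq (3.3): RD ranks the OBLs by diversity (assumed: larger diversity -> rank 1),
    # RF ranks them by fitness (smaller fitness -> rank 1).
    RD = np.argsort(np.argsort(-div_values)) + 1
    RF = np.argsort(np.argsort(fit_values)) + 1
    # Eq (3.4): the cosine weight starts near 1 (diversity dominates) and decays with iterations.
    w = 0.5 + 0.5 * np.cos(it / max_iter * theta * np.pi)
    return w * RD + (1 - w) * RF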
Finally, based on the comprehensive ranking of the OBLs, the pool of OBL strategies is divided into three levels: excellent OBLs, ordinary OBLs, and inferior OBLs. As shown in Figure 3, a darker color in the OBL pool indicates an OBL with better optimization capability at the current stage. Specifically, after each update of the population, each OBL is evaluated with respect to the diversity and convergence of the current search environment and ranked by the comprehensive ranking mechanism, and an appropriate OBL strategy is then assigned to each individual according to its role. To maintain the balance between exploration and exploitation in DE, elite roles randomly select an OBL from the inferior-level strategy pool, which contains k OBLs, preserving these individuals' superiority. By contrast, inferior roles randomly select an OBL from the excellent-level strategy pool, which contains i OBLs, driving these individuals toward good regions. Commoner roles obtain a suitable OBL via a Gaussian distribution over the ordinary-level strategy pool, which keeps the population in a balanced state. Consequently, after each iteration, individuals may change roles, and as the search environment changes, they also select different OBL strategies from the OBL pool, so OBL is dynamically allocated throughout the entire run of the algorithm, as sketched below.
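The sketch below illustrates this allocation rule. The equal three-way split of the ranked OBL pool and the mean and standard deviation of the Gaussian draw for commoner roles are our assumptions; the paper specifies only that elite roles draw randomly from the inferior-level pool, inferior roles from the excellent-level pool, and commoner roles via a Gaussian distribution over the ordinary-level pool.

import numpy as np

def allocate_obl(role, ranked_obls, rng):
    # ranked_obls: OBL strategy identifiers sorted from best to worst comprehensive rank R.
    n = len(ranked_obls)
    excellent = ranked_obls[: n // 3]              # assumed equal three-way split of the pool
    ordinary = ranked_obls[n // 3: 2 * n // 3]
    inferior = ranked_obls[2 * n // 3:]
    if role == 'elite':                            # elites draw randomly from the inferior-level pool
        return inferior[rng.integers(len(inferior))]
    if role == 'inferior':                         # inferior roles draw from the excellent-level pool
        return excellent[rng.integers(len(excellent))]
    # commoners: Gaussian-distributed index over the ordinary-level pool (assumed mean and std)
    idx = int(np.clip(rng.normal(len(ordinary) / 2, len(ordinary) / 6), 0, len(ordinary) - 1))
    return ordinary[idx]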
3.3. Archive-based mutation strategy (ABM)
Mutation operations play an important role in DE, and the selection of the individuals used in mutation affects the trend of the offspring. A mutation strategy based on the multi-role archives is therefore proposed in this study, named "ABM". In ABM, the base vector of the mutation strategy is randomly selected from the archive of elite roles A_E to ensure the superiority of the offspring, while the difference vector of the mutation perturbation is the difference between two vectors randomly selected from the archive of commoner roles A_C and the archive of inferior roles A_I, which avoids excessive perturbation and accelerates convergence. The ABM mutation strategy in a two-dimensional space is shown in Figure 4.
Figure 4.
ABM strategy in a two-dimensional space.
In Figure 4, the darker regions in the left panel represent positions closer to the optimal solution, and the red pentagon marks the location of the optimal solution in the two-dimensional space. The differently colored regions symbolize the distribution ranges of the different classes of individuals, and the individuals used in the mutation operation are selected from these regions. As the mutation process in the right panel shows, the offspring continually approach the optimal solution.
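Once the three archives from MIA are available, the ABM mutation reduces to a few lines (an illustrative sketch; the archives are assumed to hold population indices):

import numpy as np

def abm_mutation(pop, AE, AC, AI, F, rng):
    # Base vector from the elite archive; perturbation from a commoner-minus-inferior difference.
    base = pop[AE[rng.integers(len(AE))]]
    diff = pop[AC[rng.integers(len(AC))]] - pop[AI[rng.integers(len(AI))]]
    return base + F * diff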
3.4. The entire pseudocode of DAODE
Based on the above discussion, this study constructs DAODE with dynamic allocation of OBL. The entire pseudocode of DAODE is presented in Algorithm 1. At the beginning of DAODE, the OBL with the highest level of diversity is selected as the initial strategy to improve the diversity of the initial population. In line 5, the individuals in the population are assigned their three roles and stored in the respective archives. Subsequently, a mutation vector is generated using the archive-based mutation strategy in line 7, and a trial vector is generated using the crossover operation in line 8. The dynamic allocation OBL mechanism is then applied to the current roles in lines 9 and 10 to generate the opposite vector. Finally, the offspring are evaluated and the best vectors retained in lines 11 and 12, until the iterations of the algorithm are completed. Furthermore, the source code for this study can be found at the following link: https://github.com/gjianzx/DAODE.
Algorithm 1 DAODE
1: Randomly generate NP individuals as the initial population P0 by Eq (2.1);
2: Select the largest-diversity OBL to generate the opposite population OP0;
3: Evaluate the fitness of P0 and OP0;
4: while the termination condition is not satisfied do
5:   Multi-role individuals and archives (MIA);
6:   for i = 1 to NP do
7:     Generate a mutation vector V_i^{t+1} by using the ABM;
8:     Generate a trial vector U_i^{t+1} by Eq (2.8);
9:     Rank the OBLs by Eqs (3.3) and (3.4);
10:    Generate the opposite vector OX_i^{t+1} by using the DAO;
11:    Evaluate the fitness of U_i^{t+1} and OX_i^{t+1};
12:    Retain the best vector X_i^{t+1} from X_i^{t}, U_i^{t+1}, and OX_i^{t+1} by Eq (2.9);
13:  end for
14:  t = t + 1;
15: end while
3.5. Complexity analysis of DAODE
Compared with the original DE, DAODE additionally incurs the computational cost of MIA, DAO, and ABM; their time complexities are analyzed individually below.
In MIA, the individuals in the population are first sorted, which has a complexity of O(NP × log NP). The population is then divided into three levels according to the role ranking, with a complexity of O(NP × NP). Finally, the archiving of individuals is performed simultaneously with the classification and incurs no additional time complexity. Thus, the overall complexity of MIA is O(NP × log NP + NP × NP).
In the DAO, the additional complexity is mainly caused by the ranking of the OBLs and the dynamic allocation OBL mechanism. The sorting of the OBLs is divided into diversity and fitness sorting, both of which have a complexity of O(logN), where N denotes the total number of OBLs. In the dynamic allocation OBL mechanism, each role must select an OBL strategy with complexity O(NP).
In ABM, the three vectors of the mutation operation are randomly selected from the archives, which have the same complexity as the original DE, which is O(NP×D).
The complexity of the original DE is O(Max_Iter × NP × D), where Max_Iter is the maximum number of iterations. The total complexity of DAODE is O(Max_Iter × (NP × log NP + NP × NP + log N + log N + NP + NP × D)), that is, O(Max_Iter × NP × max(NP, log NP, log N, D)). Therefore, DAODE remains computationally efficient compared with the original DE.
4.
Experimental results and analysis
In this section, the performance of DAODE is evaluated on the test functions presented at the 2017 IEEE Congress on Evolutionary Computation (CEC2017), and the experimental results are analyzed and compared in detail. The experiments include comparisons between DAODE and several state-of-the-art algorithms and OBL variants (i.e., the improved OBL strategies introduced in Section 2.3.2). The differences between DAODE and the other algorithms are further evaluated using statistical tests. In addition, this paper analyzes the influence of the proposed strategies on DE. The experimental details are as follows.
4.1. Benchmark functions
The CEC2017 benchmark suite used in this study comprises 29 test functions: unimodal functions (f1−f3, with f2 excluded), simple multimodal functions (f4−f10), hybrid functions (f11−f20), and composition functions (f21−f30). These functions represent real problems with various characteristics. The test function information is listed in Table 3, where N denotes the number of component functions in the hybrid and composition functions, f*_i = f_i(x*) denotes the optimal value of each test function, and [−100, 100]^D denotes the search bound of each test function. More detailed information on the CEC2017 benchmark suite can be found in the literature [57].
Table 3.
CEC2017 benchmark suite.
Type | No. | Function | f*_i = f_i(x*)
Unimodal | f1 | Shifted and Rotated Bent Cigar Function | 100
Unimodal | f2 | Shifted and Rotated Sum of Different Power Function* | 200
Unimodal | f3 | Shifted and Rotated Zakharov Function | 300
Multimodal | f4 | Shifted and Rotated Rosenbrock's Function | 400
Multimodal | f5 | Shifted and Rotated Rastrigin's Function | 500
Multimodal | f6 | Shifted and Rotated Expanded Scaffer's F6 Function | 600
Multimodal | f7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700
Multimodal | f8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800
Multimodal | f9 | Shifted and Rotated Levy Function | 900
Multimodal | f10 | Shifted and Rotated Schwefel's Function | 1000
Hybrid | f11 | Hybrid Function 1 (N = 3) | 1100
Hybrid | f12 | Hybrid Function 2 (N = 3) | 1200
Hybrid | f13 | Hybrid Function 3 (N = 3) | 1300
Hybrid | f14 | Hybrid Function 4 (N = 4) | 1400
Hybrid | f15 | Hybrid Function 5 (N = 4) | 1500
Hybrid | f16 | Hybrid Function 6 (N = 4) | 1600
Hybrid | f17 | Hybrid Function 7 (N = 5) | 1700
Hybrid | f18 | Hybrid Function 8 (N = 5) | 1800
Hybrid | f19 | Hybrid Function 9 (N = 5) | 1900
Hybrid | f20 | Hybrid Function 10 (N = 6) | 2000
Composition | f21 | Composition Function 1 (N = 3) | 2100
Composition | f22 | Composition Function 2 (N = 3) | 2200
Composition | f23 | Composition Function 3 (N = 4) | 2300
Composition | f24 | Composition Function 4 (N = 4) | 2400
Composition | f25 | Composition Function 5 (N = 5) | 2500
Composition | f26 | Composition Function 6 (N = 5) | 2600
Composition | f27 | Composition Function 7 (N = 6) | 2700
Composition | f28 | Composition Function 8 (N = 6) | 2800
Composition | f29 | Composition Function 9 (N = 3) | 2900
Composition | f30 | Composition Function 10 (N = 3) | 3000
Search Range: [−100, 100]^D
*f2 has been excluded because it shows unstable behavior, especially for higher dimensions, and significant performance variations for the same algorithm implemented in MATLAB and C [57].
For a fair comparison of the experiments, the common parameters of the algorithms compared in this study were set as follows:
● Population size (NP): 100
● Mutation factor (F): 0.5
● Crossover factor (CR): 0.9
● The problem dimension (D): 30 and 50
● Maximum number of function evaluations (Max_FES): 10000×D
● Maximum number of algorithm iterations (Max_Iter): Max_FES/NP
● The independent number of runs (Run_Num): 30
For each algorithm, the additional parameters are listed in Table 4, the details of which can be found in the corresponding literature. In addition, the simulation environment for this study comprised Windows 10, an Intel(R) Core(TM) i5-7300HQ CPU, 16 GB of RAM, and MATLAB 2020a.
Table 4.
Additional parameters for the comparison algorithm.
To evaluate the overall performance of DAODE, the seven state-of-the-art algorithms listed in Table 4 are compared and analyzed. Moreover, to verify the flexibility of the dynamic allocation of OBL in DAODE, the performance of each OBL in the OBL pool combined with DE was evaluated and compared with the experimental results of DAODE.
The accuracy of each algorithm was calculated using Eq (4.1), where f_i(x) denotes the result obtained by the algorithm on the i-th test function, and f_i(x*) denotes the known optimal value of the test function (e.g., the optimal value of f1 in Table 3 is f_1(x*) = 100).
F_{Ev} = f_i(x) - f_i(x^*)
(4.1)
In addition, each algorithm was run 30 times independently on each test function, and the mean value (Mean) and standard deviation (Std) were recorded. The data in the tables are presented in scientific notation; for example, 7.14E+08 and 9.47E-16 are equivalent to 7.14×10^8 and 9.47×10^{-16}, respectively. The best result in each comparison is highlighted in bold. Notably, "+", "-", and "=" denote that DAODE performs better than, worse than, or approximately the same as the corresponding algorithm, respectively. In addition, this paper plots the convergence curves of the algorithms on several test functions, which clearly show the convergence trend of each algorithm.
4.3.1. Comparison with state-of-the-art algorithms
These state-of-the-art algorithms include adaptive mechanisms combined with OBL and novel optimization strategies; therefore, they serve as convincing comparison algorithms for DAODE. The seven algorithms are briefly described as follows. JADE [58] is an adaptive DE that improves optimization performance by implementing a new mutation strategy with an optional external archive and adaptively updating the control parameters. ACDE/F [59] proposed a belief space strategy incorporating a generalized OBL and a parameter-adaptive adjustment mechanism, aimed at striking a balance between global exploration and local exploitation. Based on the original dynamic OBL (DOL), OMLDE [60] introduced a mutual learning (ML) method for guiding individuals and proposed an opposite ML. In DODE [61], OBLs are classified and a dual OBL is proposed, containing two OBL strategies and a protection mechanism. OBA [50] mathematically derives the probability that the original and opposite solutions are close to the optimal solution when constructing an opposition-based metaheuristic optimization algorithm. NBODE [27] incorporates new parameters and weight factors into the neighborhood mutation model (DE/current-to-best/1), introducing a mutation strategy (DE/neighbor-to-neighbor/1) that focuses on local mutations for greater efficiency than global mutations. Finally, GODE [62] assumes that more promising solutions exist around the opposite solution, and thus uses Gaussian perturbation to widen the neighborhood of the opposite solution and increase the possibility of finding a better solution.
The comparison results of DAODE and the state-of-the-art algorithms on the 30D and 50D test functions are summarized in Tables 5 and 6, respectively. According to these results, DAODE still exhibited good performance against the other algorithms on the unimodal functions (f1−f3) as the problem dimension increased. For the multimodal functions (f4−f10), although DAODE is not the best among the compared algorithms, it still ranks near the top and is second best on several test functions (e.g., f5, f7, and f8). On the hybrid (f11−f20) and composition (f21−f30) functions, DAODE outperforms most algorithms at 30D, which is strongly related to the proposed DAO. As the dimension increases to 50D, DAODE outperforms the other state-of-the-art algorithms on only half of the test functions; however, it still ranks second or third on the remaining test functions.
Table 5.
Comparison results of DAODE and state-of-the-art algorithms on CEC2017 benchmark suite (D=30).
The convergence curves of the compared algorithms over 30 independent runs on the CEC2017 benchmark suite are shown in Figures 5 and 7. DAODE outperformed the other state-of-the-art algorithms on several test functions and exhibited a faster convergence rate. This also demonstrates that ABM can accelerate the exploitation of promising areas by selecting superior individuals as base vectors for the mutation operation. The convergence curves of f16, f17, and f26 at 30D and of f3, f16, f20, f23, and f26 at 50D still exhibit a decreasing trend, so DAODE has great potential for finding better solutions to these functions. Meanwhile, to facilitate observation of the distribution of the results over 30 runs of each algorithm, partial boxplots of the test functions are provided in Figures 6 and 8, corresponding to the convergence plots. On these test functions, it can be clearly observed that DAODE achieves superior solutions compared with the other algorithms, demonstrating significant performance advantages.
Figure 5.
The convergence curves of 30 independent runs for the comparison algorithm on CEC2017 benchmark suite (D=30).
Therefore, DAODE exhibits good performance compared with the state-of-the-art algorithms on the 30D and 50D CEC2017 benchmark suites. In particular, DAODE has a clear advantage over JADE, ACDE/F, OBA, and NBODE, which contain various state-of-the-art adaptive strategies; this indicates the effectiveness of the optimization strategies proposed in DAODE. Moreover, the experiments suggest that the main reason for the less pronounced performance of DAODE on the multimodal functions (f4−f10) is that these functions possess multiple local optima, so MIA may incorrectly select locally optimal individuals as superior individuals, reducing the possibility of finding the optimal solution.
4.3.2. Comparison of OBL strategies
To test the performance of the dynamically allocated OBL mechanism, this paper evaluates each OBL in the OBL pool combined with DE and compares the results with those of DAODE. A total of 13 strategies are included: the original OBL, QOBL [41], QROBL [42], SOBL [43], GOBL [44], FQROBL [45], POBL [46], COBL [47], RBL [48], OCL [49], EO/REO [50], and COOBL [51]. Their information is briefly described in Table 2, and detailed information can be found in the corresponding literature. Note that, when combined with DE, the algorithm names are: ODE (OBL), QODE (QOBL), QRODE (QROBL), EODE (EO), REODE (REO), FQRODE (FQROBL), GODE (GOBL), RODE (RBL), CODE (COBL), PODE (POBL), SODE (SOBL), OCDE (OCL), and COODE (COOBL).
The comparison results of DAODE and the OBL variants are listed in Tables 7 and 8. For the unimodal functions (f1−f3), both DAODE and the OBL variants performed well on the 30D test functions. When the dimension increased to 50, the performance of the OBL variants exhibited a decreasing trend, but DAODE still found effective solutions. On the multimodal functions (f4−f10), the performance of DAODE was not outstanding, and it only outperformed the OBL variants on f10 at 30D and f7 at 50D. Notably, DAODE performed remarkably well on the hybrid and composition functions at 30D. When D = 50, although the results of DAODE are not always optimal, the data in Table 8 show that DAODE is extremely close to the best-performing OBL variants. In addition, the sign-test results ("+/=/-") show that the comprehensive performance of DAODE surpasses that of each OBL variant, with a significant advantage over ODE, QODE, GODE, RODE, PODE, SODE, OCDE, and COODE, which verifies the robustness of the dynamically allocated OBL mechanism proposed in this study.
Table 7.
The comparison results of DAODE with various OBLs on CEC2017 benchmark suite (D=30).
4.4. Statistical tests
In recent years, statistical tests have been widely used in the field of computational intelligence to further evaluate the performance of algorithms and determine whether a proposed algorithm is significantly different from existing state-of-the-art algorithms [63]. In this study, two nonparametric tests, the Wilcoxon and Friedman tests, were used to verify whether DAODE improves upon the state-of-the-art algorithms.
4.4.1. Wilcoxon-test results
The Wilcoxon test examines whether there is a significant difference between two paired sets of results. In this study, a significance level of 0.05 was used to test the difference between DAODE and the other algorithms, and the comparison results of DAODE with the state-of-the-art algorithms and the OBL variants are presented in Tables 9 and 10, respectively. The terms R+ and R− denote the sums of the ranks for which DAODE outperforms or underperforms the competitor, respectively, and a p-value of less than 0.05 implies that DAODE is significantly different from the competitor. The computational formulas can be found in [63].
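For reference, such a paired Wilcoxon signed-rank comparison can be reproduced with SciPy; the data below are random placeholders rather than the paper's results, and only the call pattern is of interest.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
daode_err = rng.random(29)                           # placeholder mean errors on 29 CEC2017 functions
rival_err = daode_err + rng.normal(0.05, 0.02, 29)   # placeholder competitor errors

stat, p_value = stats.wilcoxon(daode_err, rival_err)   # paired, two-sided by default
print('significant at the 0.05 level' if p_value < 0.05 else 'no significant difference', p_value)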
Table 9.
Wilcoxon-test results for comparing algorithms on CEC2017 benchmark suite.
From the results in Table 9, it can be observed that the R+ values of DAODE are much larger than the R− values for both D = 30 and D = 50, and the p-values are all less than 0.05. Therefore, DAODE differs significantly from these algorithms, which corresponds to the comparison results in Section 4.3.1. Furthermore, Table 10 shows the differences between DAODE and the OBL variants. When D = 30, DAODE outperforms all the OBL variants according to the R+, R−, and p-values. When D = 50, although the R+ values of DAODE are greater than the R− values compared with the QROBL, FQROBL, and COBL variants, the p-values are greater than 0.05; thus, there is no significant difference between DAODE and these three variants. Overall, it can be concluded that DAODE remains effective for high-dimensional problems.
4.4.2. Friedman-test results
The Friedman test determines whether a significant difference exists between the algorithms based on ranks. The test results of DAODE against the state-of-the-art algorithms and the OBL variants are summarized in Tables 11 and 12, respectively. Note that "ARV" is the average ranking value of each algorithm on the CEC2017 benchmark suite. For ease of observation, the "ARV" values are ranked to obtain the actual ranking values, denoted "Rank", where a smaller "Rank" value implies better algorithm performance.
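Similarly, the Friedman test and the average ranking values (ARV) can be computed with SciPy and NumPy. The error matrix below is a random placeholder (rows = test functions, columns = algorithms), not the paper's data, and ties are ranked naively for brevity.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
errors = rng.random((29, 4))                         # placeholder: 29 functions x 4 algorithms

stat, p_value = stats.friedmanchisquare(*errors.T)   # one sample of 29 values per algorithm
ranks = np.argsort(np.argsort(errors, axis=1), axis=1) + 1   # per-function ranks (1 = best)
ARV = ranks.mean(axis=0)                             # average ranking value of each algorithm
print(p_value, ARV)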
Table 11.
Friedman-test results for comparing algorithms on CEC2017 benchmark suite.
From the data in the tables, it can be observed that DAODE is ranked first in both tables, indicating that its overall performance is better than that of the state-of-the-art algorithms and the OBL variants. In addition, as the problem dimension increases, DAODE remains at the top of the ranking, although its "ARV" becomes larger and its performance may decrease slightly. Thus, the overall performance of DAODE is significantly better than that of the other algorithms.
4.5. Performance analysis of newly introduced strategies and the parameter
In this section, the effectiveness and robustness of the strategies proposed in this study, namely ABM, MIA, and their candidate variants, are tested in a comparative performance analysis. The effectiveness of the DAO is verified in Section 4.3.2; therefore, it is not analyzed here. Owing to space limitations, only the 30D CEC2017 benchmark suite is used in these tests.
4.5.1. Effectiveness of MIA
It is well-known that elite and inferior roles make up a small proportion of the population, while commoner roles make up the majority of the population. Therefore, in MIA, the individuals of DAODE are classified and archived in three levels: elite (AE), commoner (AC), and inferior roles (AI), and set AE:AC:AI to 2:5:3. To test the effectiveness of this proportion, MIA was compared with two other candidate proportions for performance: MIA-2 (3:5:2) and MIA-3 (1:5:4). Here, the percentage of commoner roles is set to 50%, such that the impact of elite and inferior roles with different proportions on the performance of the algorithm can be tested.
The comparison results in Table 13 show that DAODE/MIA performed better than DAODE/MIA-2. DAODE/MIA-2 outperforms DAODE/MIA on only a few test functions, such as f16, f28, and f30, and careful observation shows that there is no significant difference between them. This indicates that increasing the proportion of AE can negatively impact the algorithm, possibly because some roles in AE are not truly elite, causing the population to deviate from more promising areas. DAODE/MIA-3 reduces the proportion of AE while increasing that of AI, and its overall performance is better than that of DAODE/MIA-2. Notably, DAODE/MIA-3 performs better than DAODE on some simple multimodal functions (e.g., f5, f8, and f10) and composition functions (e.g., f21 and f22), indicating that decreasing the AE proportion helps achieve better solutions on some functions; however, increasing the AI proportion also affects the algorithm performance. According to the "+/ = /-" results, DAODE/MIA performs best overall. The convergence curves of the three proportions in MIA are shown in Figure 9. Therefore, the experiments indicate that the proportions of AE and AI should be small and similar, verifying the effectiveness of MIA in improving the performance of DAODE.
Table 13.
Comparison results of MIA and candidate strategies on the CEC2017 benchmark suite (D=30).
4.5.2. Effectiveness of ABM
In DE, the mutation strategy significantly affects the performance of the algorithm. In ABM, the individuals used in the mutation operation are selected from the archives of MIA: the base vector is taken from AE, and the difference vector of the perturbation is the difference between vectors selected from AC and AI. The details of ABM are described in Section 3.3. To test the effectiveness of ABM, the performance of DAODE/ABM is compared with that of two other mutation strategies: the first is the classical DE/rand/1, and the second (ABM-2) is similar to ABM, except that the base vector is selected from AC and the difference vector of the perturbation is the difference between vectors selected from AE and AI.
The comparison results of the different mutation strategies are summarized in Table 14, where DAODE/ABM outperforms DAODE/rand/1 and DAODE/ABM-2 on most test functions. In particular, on the unimodal function f3, the simple multimodal functions f8 and f10, the hybrid functions f16 and f17, and the composition functions f20 and f29, DAODE/ABM has an overwhelming performance advantage, as the convergence curves in Figure 10 show. By contrast, DAODE/rand/1 performs better only on individual test functions (e.g., f6, f9, and f13), as does DAODE/ABM-2 (e.g., f12 and f21). In addition, the comparison between DAODE/rand/1 and DAODE/ABM-2 shows that DAODE/ABM-2 has reduced performance on several test functions, and the convergence curves in Figure 10 show that it converges more slowly (e.g., f16, f20, and f29). This is because the difference vector of the perturbation in DAODE/ABM-2 is the difference between vectors selected from AE and AI, which keeps the algorithm under large perturbations at all times and makes it converge excessively slowly. Therefore, DAODE/ABM has two main advantages. First, the base vector of the mutation operation is selected from AE, which helps the offspring inherit good genes and drives the population toward more promising areas. Second, the difference vector of the perturbation is the difference between vectors selected from AC and AI, which avoids excessively large perturbations.
Table 14.
Comparison results of ABM and candidate strategies on CEC2017 benchmark suite (D=30).
5.
Conclusions
Previous research has focused on optimizing algorithms with a single OBL strategy; however, different OBLs perform differently on different problems, making it crucial to study the co-optimization of multiple OBL strategies. Therefore, this study proposed a dynamic allocation of opposition-based learning in differential evolution for multi-role individuals (DAODE). Before each generation of population updates, the individuals in the population are assigned multiple roles and stored in their corresponding archives (MIA), with each role identified by a comprehensive characteristic that combines the individual's fitness value and position. Subsequently, this paper classifies each role in the current population and selects the corresponding OBL strategy for each role using the dynamic allocation OBL mechanism (DAO). In the DAO, a ranking mechanism is defined to characterize the OBLs; it analyzes the diversity and fitness values of each OBL to obtain a comprehensive ranking, providing a foundation for the adaptive selection of strategies for individuals. In addition, this paper proposes an archive-based mutation strategy (ABM) built on MIA to drive the population toward more promising regions.
To evaluate the performance of the proposed algorithm, DAODE was compared with several other algorithms on the CEC2017 benchmark suite, and the significance of the differences between them was further tested. First, the experimental comparisons and statistical tests between DAODE and the state-of-the-art algorithms show that DAODE achieves significant performance gains. The performance of DAODE was then compared with that of the OBL variants, and the experimental results confirm the effectiveness of the dynamic allocation OBL mechanism proposed in DAODE. Finally, the performance of the newly proposed strategies was analyzed, and the experiments demonstrate the effectiveness of MIA and ABM, which significantly improve the performance of the algorithm.
The experiment demonstrates that the comprehensive performance of DAODE surpasses that of its competitors and exhibits significant differences from them. However, DAODE still has some shortcomings, such as premature convergence when dealing with multimodal functions. First, further research is needed to determine the optimal ratio of the three roles in MIA, to obtain more accurate proportions. Second, it is worth exploring in detail whether the mutation operation in ABM is suitable for every individual in the population. Lastly, and most importantly, a more flexible OBL dynamic allocation method must be developed in DAO to effectively utilize each OBL strategy. Therefore, in future work, this study will focus on enhancing DAODE's performance and applying it to real-world problems like industrial safety inspection, image classification, and feature selection. When used for feature selection, DAODE can select appropriate OBL strategies to address different types of data, thereby filtering out the most redundant features from the original data and improving the final classification performance.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This work was supported by National Natural Science Foundation of China (62106092); Natural Science Foundation of Fujian Province (2020J05169, 2022J01916); Natural Science Foundation of the Jiangsu Higher Education Institutions of China (22KJB520023).
Conflict of interest
No conflict of interest exists in the submission of this manuscript, and the manuscript is approved by all authors for publication.
References
[1]
L. Migliorelli, D. Berardini, K. Cela, M. Coccia, L. Villani, E. Frontoni, et al., A store-and-forward cloud-based telemonitoring system for automatic assessing dysarthria evolution in neurological diseases from video-recording analysis, Comput. Biol. Med., 163 (2023), 107194. https://doi.org/10.1016/j.compbiomed.2023.107194 doi: 10.1016/j.compbiomed.2023.107194
[2]
W. Zhu, L. Fang, X. Ye, M. Medani, J. Escorcia-Gutierrez, IDRM: Brain tumor image segmentation with boosted rime optimization, Comput. Biol. Med., 166 (2023), 107551. https://doi.org/10.1016/j.compbiomed.2023.107551 doi: 10.1016/j.compbiomed.2023.107551
[3]
X. Zhang, Z. Wang, Z. Lu, Multi-objective load dispatch for microgrid with electric vehicles using modified gravitational search and particle swarm optimization algorithm, Appl. Energy, 306 (2022), 118018. http://dx.doi.org/10.1016/j.apenergy.2021.118018 doi: 10.1016/j.apenergy.2021.118018
[4]
S. Yin, Q. Luo, Y. Zhou, IBMSMA: An indicator-based multi-swarm slime mould algorithm for multi-objective truss optimization problems, J. Bionic Eng., 20 (2023), 1333–1360. http://dx.doi.org/10.1007/s42235-022-00307-9 doi: 10.1007/s42235-022-00307-9
[5]
X. Ju, F. Liu, L. Wang, W. J. Lee, Wind farm layout optimization based on support vector regression guided genetic algorithm with consideration of participation among landowners, Energy Convers. Manage., 196 (2019), 1267–1281. http://dx.doi.org/10.1016/j.enconman.2019.06.082 doi: 10.1016/j.enconman.2019.06.082
[6]
J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT press, 1992.
[7]
J. Kennedy, R. Eberhart, Particle swarm optimization, in Proceedings of ICNN'95-International Conference on Neural Networks, 4 (1995), 1942–1948. http://dx.doi.org/10.1109/ICNN.1995.488968
[8]
M. Dorigo, V. Maniezzo, A. Colorni, Ant system: Optimization by a colony of cooperating agents, IEEE Trans. Syst. Man Cybern. Part B, 26 (1996), 29–41. http://dx.doi.org/10.1109/3477.484436 doi: 10.1109/3477.484436
[9]
D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[10]
S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software, 69 (2014), 46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007 doi: 10.1016/j.advengsoft.2013.12.007
[11]
J. Lian, G. Hui, L. Ma, T. Zhu, X. Wu, A. A. Heidari, et al., Parrot optimizer: Algorithm and applications to medical problems, Comput. Biol. Med., 172 (2024), 108064. https://doi.org/10.1016/j.compbiomed.2024.108064 doi: 10.1016/j.compbiomed.2024.108064
[12]
H. Su, D. Zhao, A. A. Heidari, L. Liu, X. Zhang, M. Mafarja, et al., Rime: A physics-based optimization, Neurocomputing, 532 (2023), 183–214. https://doi.org/10.1016/j.neucom.2023.02.010 doi: 10.1016/j.neucom.2023.02.010
[13]
I. Ahmadianfar, A. A. Heidari, S. Noshadian, H. Chen, A. H. Gandomi, INFO: An efficient optimization algorithm based on weighted mean of vectors, Expert Syst. Appl., 195 (2022), 116516. https://doi.org/10.1016/j.eswa.2022.116516 doi: 10.1016/j.eswa.2022.116516
[14]
Y. Yang, H. Chen, A. A. Heidari, A. H. Gandomi, Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts, Expert Syst. Appl., 177 (2021), 114864. https://doi.org/10.1016/j.eswa.2021.114864 doi: 10.1016/j.eswa.2021.114864
[15]
R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim., 11 (1997), 341–359. http://dx.doi.org/10.1023/A:1008202821328 doi: 10.1023/A:1008202821328
[16]
D. Liu, Z. Hu, Q. Su, Neighborhood-based differential evolution algorithm with direction induced strategy for the large-scale combined heat and power economic dispatch problem, Inf. Sci., 613 (2022), 469–493. https://doi.org/10.1016/j.ins.2022.09.025 doi: 10.1016/j.ins.2022.09.025
[17]
C. Zhang, W. Zhou, W. Qin, W. Tang, A novel UAV path planning approach: Heuristic crossing search and rescue optimization algorithm, Expert Syst. Appl., 215 (2023), 119243. https://doi.org/10.1016/j.eswa.2022.119243 doi: 10.1016/j.eswa.2022.119243
[18]
M. Sajid, H. Mittal, S. Pare, M. Prasad, Routing and scheduling optimization for UAV assisted delivery system: A hybrid approach, Appl. Soft Comput., 126 (2022), 109225. https://doi.org/10.1016/j.asoc.2022.109225 doi: 10.1016/j.asoc.2022.109225
[19]
L. Abualigah, M. A. Elaziz, D. Yousri, M. A. A. Al-qaness, A. A. Ewees, R. A. Zitar, Augmented arithmetic optimization algorithm using opposite-based learning and lévy flight distribution for global optimization and data clustering, J. Intell. Manuf., 34 (2023), 3523–3561. http://dx.doi.org/10.1007/s10845-022-02016-w doi: 10.1007/s10845-022-02016-w
[20]
H. R. Tizhoosh, Opposition-based learning: A new scheme for machine intelligence, in International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), 1 (2005), 695–701. http://dx.doi.org/10.1109/CIMCA.2005.1631345
[21]
S. Rahnamayan, H. R. Tizhoosh, M. M. A. Salama, Opposition-based differential evolution, IEEE Trans. Evol. Comput., 12 (2008), 64–79. http://dx.doi.org/10.1109/TEVC.2007.894200 doi: 10.1109/TEVC.2007.894200
[22]
M. Črepinšek, S. H. Liu, M. Mernik, Exploration and exploitation in evolutionary algorithms, ACM Comput. Surv., 45 (2013), 1–33. http://dx.doi.org/10.1145/2480741.2480752 doi: 10.1145/2480741.2480752
[23]
H. L. Kwa, J. Philippot, R. Bouffanais, Effect of swarm density on collective tracking performance, Swarm Intell., 17 (2023), 253–281. http://dx.doi.org/10.1007/s11721-023-00225-4 doi: 10.1007/s11721-023-00225-4
[24]
P. Joćko, B. M. Ombuki-Berman, A. P. Engelbrecht, Multi-guide particle swarm optimisation archive management strategies for dynamic optimisation problems, Swarm Intell., 16 (2022), 143–168. http://dx.doi.org/10.1007/s11721-022-00210-3 doi: 10.1007/s11721-022-00210-3
[25]
F. Yu, J. Guan, H. R. Wu, C. Y. Chen, X. W. Xia, Lens imaging opposition-based learning for differential evolution with cauchy perturbation, Appl. Soft Comput., 152 (2023), 111211. https://doi.org/10.1016/j.asoc.2023.111211 doi: 10.1016/j.asoc.2023.111211
[26]
S. Mahdavi, S. Rahnamayan, K. Deb, Opposition based learning: A literature review, Swarm Evol. Comput., 39 (2018), 1–23. http://dx.doi.org/10.1016/j.swevo.2017.09.010 doi: 10.1016/j.swevo.2017.09.010
[27]
W. Deng, S. F. Shang, X. Cai, H. M. Zhao, Y. J. Song, J. J. Xu, An improved differential evolution algorithm and its application in optimization problem, Soft Comput., 25 (2021), 5277–5298. http://dx.doi.org/10.1007/s00500-020-05527-x doi: 10.1007/s00500-020-05527-x
[28]
L. L. Kang, R. S. Chen, W. L. Cao, Y. C. Chen, Non-inertial opposition-based particle swarm optimization and its theoretical analysis for deep learning applications, Appl. Soft Comput., 88 (2020), 10. http://dx.doi.org/10.1016/j.asoc.2019.106038 doi: 10.1016/j.asoc.2019.106038
[29]
S. Dhargupta, M. Ghosh, S. Mirjalili, R. Sarkar, Selective opposition based grey wolf optimization, Expert Syst. Appl., 151 (2020), 13. http://dx.doi.org/10.1016/j.eswa.2020.113389 doi: 10.1016/j.eswa.2020.113389
[30]
A. Chatterjee, S. Ghoshal, V. Mukherjee, Solution of combined economic and emission dispatch problems of power systems by an opposition-based harmony search algorithm, Int. J. Electr. Power Energy Syst., 39 (2012), 9–20. https://doi.org/10.1016/j.ijepes.2011.12.004 doi: 10.1016/j.ijepes.2011.12.004
[31]
B. Kazemi, M. Ahmadi, S. Talebi, Optimum and reliable routing in VANETs: An opposition based ant colony algorithm scheme, in 2013 International Conference on Connected Vehicles and Expo (ICCVE), (2013), 926–930.
[32]
Y. Y. Zhang, Backtracking search algorithm with specular reflection learning for global optimization, Knowl.-Based Syst., 212 (2021), 17. https://doi.org/10.1016/j.knosys.2020.106546 doi: 10.1016/j.knosys.2020.106546
[33]
R. Patel, M. M. Raghuwanshi, L. G. Malik, Decomposition based multi-objective genetic algorithm (DMOGA) with opposition based learning, in 2012 Fourth International Conference on Computational Intelligence and Communication Networks, (2012), 605–610. https://doi.org/10.1109/cicn.2012.79
[34]
M. Tair, N. Bacanin, M. Zivkovic, K. Venkatachalam, A chaotic oppositional whale optimisation algorithm with firefly search for medical diagnostics, Comput. Mater. Continua, 72 (2022). https://doi.org/10.32604/cmc.2022.024989 doi: 10.32604/cmc.2022.024989
[35]
L. Abualigah, A. Diabat, M. A. Elaziz, Improved slime mould algorithm by opposition-based learning and Levy flight distribution for global optimization and advances in real-world engineering problems, J. Ambient Intell. Humanized Comput., 14 (2023), 1163–1202. https://doi.org/10.1007/s12652-021-03372-w doi: 10.1007/s12652-021-03372-w
[36]
S. K. Joshi, Chaos embedded opposition based learning for gravitational search algorithm, Appl. Intell., 53 (2023), 5567–5586. https://doi.org/10.1007/s10489-022-03786-9 doi: 10.1007/s10489-022-03786-9
[37]
V. H. S. Pham, N. T. N. Dang, V. N. Nguyen, Hybrid sine cosine algorithm with integrated roulette wheel selection and opposition-based learning for engineering optimization problems, Int. J. Comput. Intell. Syst., 16 (2023), 171. https://doi.org/10.1007/s44196-023-00350-2 doi: 10.1007/s44196-023-00350-2
[38]
N. Bacanin, U. Arnaut, M. Zivkovic, T. Bezdan, T. A. Rashid, Energy efficient clustering in wireless sensor networks by opposition-based initialization bat algorithm, in Computer Networks and Inventive Communication Technologies. Lecture Notes on Data Engineering and Communications Technologies (eds. S. Smys, R. Bestak, R. Palanisamy and I. Kotuliak), (2022), 1–16. https://doi.org/10.1007/978-981-16-3728-5_1
[39]
T. Bezdan, A. Petrovic, M. Zivkovic, I. Strumberger, V. K. Devi, N. Bacanin, Current best opposition-based learning salp swarm algorithm for global numerical optimization, in 2021 Zooming Innovation in Consumer Technologies Conference (ZINC), (2021), 5–10. https://doi.org/10.1109/ZINC52049.2021.9499275
[40]
S. J. Mousavirad, D. Oliva, S. Hinojosa, G. Schaefer, Differential evolution-based neural network training incorporating a centroid-based strategy and dynamic opposition-based learning, in 2021 IEEE Congress on Evolutionary Computation (CEC), (2021), 1233–1240. https://doi.org/10.1109/CEC45853.2021.9504801
[41]
S. Rahnamayan, H. R. Tizhoosh, M. M. A. Salama, Quasi-oppositional differential evolution, in 2007 IEEE Congress on Evolutionary Computation, (2007), 2229–2236. https://doi.org/10.1109/CEC.2007.4424748
[42]
M. Ergezer, D. Simon, D. Du, Oppositional biogeography-based optimization, in 2009 IEEE International Conference on Systems, Man and Cybernetics, (2009), 1009–1014. https://doi.org/10.1109/ICSMC.2009.5346043
[43]
H. R. Tizhoosh, M. Ventresca, S. Rahnamayan, Opposition-based computing, in Oppositional Concepts in Computational Intelligence (eds. H. R. Tizhoosh and M. Ventresca), Springer, (2008), 11–28. https://doi.org/10.1007/978-3-540-70829-2_2
[44]
H. Wang, Z. Wu, S. Rahnamayan, Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems, Soft Comput., 15 (2010), 2127–2140. http://dx.doi.org/10.1007/s00500-010-0642-7 doi: 10.1007/s00500-010-0642-7
[45]
M. Ergezer, D. Simon, Probabilistic properties of fitness-based quasi-reflection in evolutionary algorithms, Comput. Oper. Res., 63 (2015), 114–124. https://doi.org/10.1016/j.cor.2015.03.013 doi: 10.1016/j.cor.2015.03.013
[46]
Z. Hu, Y. Bao, T. Xiong, Partial opposition-based adaptive differential evolution algorithms: Evaluation on the CEC 2014 benchmark set for real-parameter optimization, in 2014 IEEE congress on evolutionary computation (CEC), (2014), 2259–2265. http://dx.doi.org/10.1109/CEC.2014.6900489
[47]
S. Rahnamayan, J. Jesuthasan, F. Bourennani, G. F. Naterer, H. Salehinejad, Centroid opposition-based differential evolution, Int. J. Appl. Metaheuristic Comput., 5 (2014), 1–25. http://dx.doi.org/10.4018/ijamc.2014100101 doi: 10.4018/ijamc.2014100101
[48]
H. Liu, Z. Wu, H. Li, H. Wang, S. Rahnamayan, C. Deng, Rotation-based learning: A novel extension of opposition-based learning, in PRICAI 2014: Trends in Artificial Intelligence. PRICAI 2014. Lecture Notes in Computer Science (eds. D. N. Pham and S. B. Park), Springer International Publishing, (2014), 511–522. https://doi.org/10.1007/978-3-319-13560-1_41
[49]
H. Xu, C. D. Erdbrink, V. V. Krzhizhanovskaya, How to speed up optimization? Opposite-center learning and its application to differential evolution, Proc. Comput. Sci., 51 (2015), 805–814. http://doi.org/10.1016/j.procs.2015.05.203 doi: 10.1016/j.procs.2015.05.203
[50]
Z. Seif, M. B. Ahmadi, An opposition-based algorithm for function optimization, Eng. Appl. Artifi. Intell., 37 (2015), 293–306. http://dx.doi.org/10.1016/j.engappai.2014.09.009 doi: 10.1016/j.engappai.2014.09.009
[51]
Q. Xu, L. Wang, B. He, N. Wang, Modified opposition-based differential evolution for function optimization, J. Comput. Inf. Syst., 7 (2011), 1582–1591.
[52]
S. Y. Park, J. J. Lee, Stochastic opposition-based learning using a beta distribution in differential evolution, IEEE Trans. Cybern., 46 (2016), 2184–2194. http://dx.doi.org/10.1109/TCYB.2015.2469722 doi: 10.1109/TCYB.2015.2469722
[53]
X. Xia, L. Gui, Y. Zhang, X. Xu, F. Yu, H. Wu, et al., A fitness-based adaptive differential evolution algorithm, Inf. Sci., 549 (2021), 116–141. http://dx.doi.org/10.1016/j.ins.2020.11.015 doi: 10.1016/j.ins.2020.11.015
[54]
H. Deng, L. Peng, H. Zhang, B. Yang, Z. Chen, Ranking-based biased learning swarm optimizer for large-scale optimization, Inf. Sci., 493 (2019), 120–137. http://dx.doi.org/10.1016/j.ins.2019.04.037 doi: 10.1016/j.ins.2019.04.037
[55]
L. Gui, X. Xia, F. Yu, H. Wu, R. Wu, B. Wei, et al., A multi-role based differential evolution, Swarm Evol. Comput., 50 (2019), 100508. https://doi.org/10.1016/j.swevo.2019.03.003 doi: 10.1016/j.swevo.2019.03.003
[56]
B. Morales-Castañeda, D. Zaldívar, E. Cuevas, F. Fausto, A. Rodríguez, A better balance in metaheuristic algorithms: Does it exist?, Swarm Evol. Comput., 54 (2020), 100671. http://dx.doi.org/10.1016/j.swevo.2020.100671 doi: 10.1016/j.swevo.2020.100671
[57]
G. Wu, R. Mallipeddi, P. Suganthan, Problem definitions and evaluation criteria for the CEC 2017 competition and special session on constrained single objective real-parameter optimization, Nanyang Technol. Univ. Singapore Tech. Rep., (2016), 1–18.
[58]
J. Zhang, A. C. Sanderson, JADE: Adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput., 13 (2009), 945–958. http://dx.doi.org/10.1109/tevc.2009.2014613 doi: 10.1109/tevc.2009.2014613
[59]
W. Deng, H. C. Ni, Y. Liu, H. L. Chen, H. M. Zhao, An adaptive differential evolution algorithm based on belief space and generalized opposition-based learning for resource allocation, Appl. Soft Comput., 127 (2022), 20. http://dx.doi.org/10.1016/j.asoc.2022.109419 doi: 10.1016/j.asoc.2022.109419
[60]
Y. L. Xu, X. F. Yang, Z. L. Yang, X. P. Li, P. Wang, R. Z. Ding, et al., An enhanced differential evolution algorithm with a new oppositional-mutual learning strategy, Neurocomputing, 435 (2021), 162–175. http://dx.doi.org/10.1016/j.neucom.2021.01.003 doi: 10.1016/j.neucom.2021.01.003
[61]
J. Li, Y. Gao, K. Wang, Y. Sun, A dual opposition-based learning for differential evolution with protective mechanism for engineering optimization problems, Appl. Soft Comput., 113 (2021), 107942. http://dx.doi.org/10.1016/j.asoc.2021.107942 doi: 10.1016/j.asoc.2021.107942
[62]
X. C. Zhao, S. Feng, J. L. Hao, X. Q. Zuo, Y. Zhang, Neighborhood opposition-based differential evolution with gaussian perturbation, Soft Comput., 25 (2021), 27–46. http://dx.doi.org/10.1007/s00500-020-05425-2 doi: 10.1007/s00500-020-05425-2
[63]
J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput., 1 (2011), 3–18. http://dx.doi.org/10.1016/j.swevo.2011.02.002 doi: 10.1016/j.swevo.2011.02.002
This article has been cited by:
1. Fei Yu, Jian Guan, Hongrun Wu, Hui Wang, Biyang Ma, Multi-population differential evolution approach for feature selection with mutual information ranking, Expert Syst. Appl., 260 (2025), 125404. https://doi.org/10.1016/j.eswa.2024.125404
CEC2017 benchmark functions (continued):

No.   Function                                                      Fi* = Fi(x*)
f2    Shifted and Rotated Sum of Different Power Function*          200
f3    Shifted and Rotated Zakharov Function                         300
Multimodal
f4    Shifted and Rotated Rosenbrock's Function                     400
f5    Shifted and Rotated Rastrigin's Function                      500
f6    Shifted and Rotated Expanded Scaffer's F6 Function            600
f7    Shifted and Rotated Lunacek Bi_Rastrigin Function             700
f8    Shifted and Rotated Non-Continuous Rastrigin's Function       800
f9    Shifted and Rotated Levy Function                             900
f10   Shifted and Rotated Schwefel's Function                       1000
Hybrid
f11   Hybrid Function 1 (N = 3)                                     1100
f12   Hybrid Function 2 (N = 3)                                     1200
f13   Hybrid Function 3 (N = 3)                                     1300
f14   Hybrid Function 4 (N = 4)                                     1400
f15   Hybrid Function 5 (N = 4)                                     1500
f16   Hybrid Function 6 (N = 4)                                     1600
f17   Hybrid Function 7 (N = 5)                                     1700
f18   Hybrid Function 8 (N = 5)                                     1800
f19   Hybrid Function 9 (N = 5)                                     1900
f20   Hybrid Function 10 (N = 6)                                    2000
Composition
f21   Composition Function 1 (N = 3)                                2100
f22   Composition Function 2 (N = 3)                                2200
f23   Composition Function 3 (N = 4)                                2300
f24   Composition Function 4 (N = 4)                                2400
f25   Composition Function 5 (N = 5)                                2500
f26   Composition Function 6 (N = 5)                                2600
f27   Composition Function 7 (N = 6)                                2700
f28   Composition Function 8 (N = 6)                                2800
f29   Composition Function 9 (N = 3)                                2900
f30   Composition Function 10 (N = 3)                               3000
Search range: [−100, 100]^D
*f2 has been excluded because it shows unstable behavior, especially in higher dimensions, and significant performance variations for the same algorithm implemented in Matlab and C [57].