
Content-based image analysis and computer vision techniques are used in various health-care systems to detect diseases. Abnormalities in the human eye are detected through fundus images captured with a fundus camera. Among eye diseases, glaucoma is considered the second leading cause of neurodegenerative illness of the eye. Inappropriate intraocular pressure within the human eye is reported as the main cause of this disease. Glaucoma shows no symptoms at the earlier stages, and if it remains untreated it can lead to complete blindness; early diagnosis can prevent permanent loss of vision. Manual examination of the human eye is a possible solution; however, it depends on human effort. The automatic detection of glaucoma using a combination of image processing, artificial intelligence, and computer vision can help to prevent and detect this disease. In this review article, we aim to present a comprehensive review of the various types of glaucoma, its causes, the possible treatments, the publicly available image benchmarks, performance metrics, and various approaches based on digital image processing, computer vision, and deep learning. The review presents a detailed study of published research models that aim to detect glaucoma, from low-level feature extraction to recent trends based on deep learning. The pros and cons of each approach are discussed in detail, and tabular representations summarize the results of each category. We report our findings and provide possible future research directions for glaucoma detection in the conclusion.
Citation: Amsa Shabbir, Aqsa Rasheed, Huma Shehraz, Aliya Saleem, Bushra Zafar, Muhammad Sajid, Nouman Ali, Saadat Hanif Dar, Tehmina Shehryar. Detection of glaucoma using retinal fundus images: A comprehensive review[J]. Mathematical Biosciences and Engineering, 2021, 18(3): 2033-2076. doi: 10.3934/mbe.2021106
The pursuit of the optimal solution among numerous alternatives to either maximize or minimize the objective function, while adhering to constraints, constitutes an optimization problem. Such challenges manifest ubiquitously across domains including signal processing, image processing, production scheduling, task allocation, pattern recognition, automatic control, and machine design. Optimization algorithms primarily harness the computational power of computers to iteratively explore viable solutions to the problem; after a large number of feasible solutions have been obtained, the most appropriate one is selected to formulate a computational method for solving the problem [1]. Various optimization methods, such as the particle swarm algorithm [2], the whale algorithm [3], and the ant-lion algorithm [4], have been widely used in the above fields and have yielded great economic and social benefits. Given the myriad variables, intricacies, computational overheads, and nonlinearity inherent in optimization challenges, scientists and engineers globally continue their quest for an efficient and versatile optimization methodology.
The grey wolf optimization algorithm (GWO) [5] is a heuristic swarm intelligence optimization algorithm proposed by Mirjalili et al. in 2014 that finds applications across diverse domains, including engineering, medicine, image processing, and the biological sciences. GWO draws inspiration from the social structure and hunting tactics of grey wolves in the wild. As shown in Figure 1, the wolf pack is divided into four ranks. Under the guidance of the three leader wolves (α, β, and δ), the grey wolves conduct collective searches, encirclement, and attacks on prey, realizing the search for targets. Owing to its remarkable efficiency and few adjustable parameters, the GWO algorithm is easy to implement. Consequently, in recent years it has been applied to many fields, such as workshop scheduling [6], path planning [7], power systems [8], fuzzy control systems [9], and image segmentation [10]. However, the GWO algorithm's reliance on the top three wolves for guiding the entire search process often results in rapid convergence towards these wolves. Therefore, GWO suffers from low population diversity, a tendency to fall into local optima, slow convergence in later stages, and an imbalance between exploration and exploitation. Due to these deficiencies, there is a significant gap between GWO and SMAF1 [11] in the optimal tuning of interval type-2 fuzzy controllers. In light of these drawbacks, scholars have proposed a series of improved solutions. For instance, the method proposed in [12], which deletes the less fit half of the search agents and relocates them near the three best wolves, can improve local search and convergence towards promising regions of the search space. In [13], GWO was applied to optimally tune the parameter vectors of a fuzzy control system. With the rapid development of neural networks, more and more improvements have emerged.
In [14], the authors proposed an improved anti-noise adaptive long short-term memory (ANA-LSTM) neural network with high-robustness feature extraction and optimal parameter characterization for accurate remaining useful life (RUL) prediction. In [15], an improved robust multi-time scale singular filtering-Gaussian process regression-long short-term memory (SF-GPR-LSTM) modeling method was proposed for remaining capacity estimation; these methods have achieved good results. Nevertheless, drawing inspiration from [16], there exists a potential to integrate intelligent optimization algorithms with LSTM: GWO could be employed to compute meaningful and optimal hyperparameters for CNN-LSTM networks, yielding notable performance enhancements. In [17], the author introduced random search agents into the position update equation to increase the algorithm's exploration capability. However, in the exploitation stage there is no limit on the influence of the random search agents, which can weaken the local search capability of the algorithm and affect convergence accuracy; the impact is particularly significant in constrained engineering design problems. In [18], the author integrated GWO with PSO to improve convergence accuracy, but the weakness of easily falling into local optima remains. In [19], the author introduced a crossover operator between two random individuals to achieve information sharing among individuals, improving convergence speed and solution quality; however, when the population falls into a local optimum, the individuals gather together, the crossover operator loses its effect, and the ability to escape the local optimum is lacking. In [20], the author incorporated opposition-based learning into GWO to improve convergence speed, but the complexity of the algorithm also increases.
In [21], the author used fuzzy logic to dynamically adjust parameters, changing the weights of the three leader wolves to highlight the leadership disparities within the grey wolf pack and improve convergence accuracy. The original GWO has slow convergence speed, is prone to falling into local optima, has weak search ability, and has low convergence accuracy. To address these shortcomings, this paper proposes an adaptive dynamic self-learning grey wolf optimization algorithm (ASGWO):
1) ASGWO proposes a piecewise nonlinear convergence factor a to achieve a balance between global exploration and local exploitation.
2) ASGWO integrates a dynamic logarithmic spiral into the foundational position update equation, with the spiral's shape gradually shrinking over successive iterations. This broadens the algorithm's search domain while enriching population diversity.
3) ASGWO replaces the static step size of the original position update with an adaptive self-learning step size, dynamically adjusted according to the evolutionary success rate and the iteration count. This enables the algorithm to tune the step size to the current search state, enhancing both convergence speed and the ability to escape local optima.
4) ASGWO also proposes a new position update strategy that uses the global optimal position and randomly generated positions as learning samples, with dual adaptive convergence factors controlling the influence of the two learning samples.
We designed ASGWO to demonstrate robust performance across 23 test functions and to attain excellent outcomes when employed in engineering scenarios; the experimental findings reported below substantiate this.
The rest of the article is structured as follows: Section 2 briefly introduces the mathematical model of the GWO algorithm. Section 3 describes the improvement strategy and implementation steps of ASGWO. Section 4 analyzes the experimental results of the benchmark function. Section 5 shows applications of ASGWO on real engineering problems. Finally, Section 6 presents the conclusions of this article.
In designing GWO, a mathematical model is constructed for the grey wolf population. The wolf with the best fitness is the α wolf, followed by the β wolf and the δ wolf, and the remaining solutions are the ω wolves. During hunting, ω wolves will approach, surround, and attack prey under the guidance of α wolf, β wolf, and δ wolf. For a d-dimensional optimization problem, the population in GWO consists of multiple grey wolves, each representing a candidate solution. The position vector of the grey wolf represents the feature vector of the corresponding candidate solution. The objective function value of the candidate solution corresponds to the fitness of the grey wolf.
To mathematically model the wolves' strategy of encircling the prey during hunting, the following equations are proposed:
$\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|$ | (2.1) |
$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$ | (2.2) |
$a(t) = 2 - \dfrac{2t}{MaxIter}$ | (2.3) |
$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}$ | (2.4) |
$\vec{C} = 2 \cdot \vec{r}_2$ | (2.5) |
where t is the current iteration number, MaxIter is the total number of iterations, $\vec{D}$ is the distance between the wolf and the prey, $\vec{X}_p(t)$ is the position vector of the prey, $\vec{X}(t+1)$ is the position vector of the wolf at iteration t+1, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{r}_1$ and $\vec{r}_2$ are random vectors in [0, 1], the scalar a decreases linearly from 2 to 0 during the iteration process, and $\vec{a}$ is a vector whose components all equal a.
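The encircling step of Eqs (2.1)–(2.5) can be sketched as follows; this is a minimal illustration, and the function and variable names are ours rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def encircle(prey_pos, wolf_pos, t, max_iter):
    """One encircling step of basic GWO, following Eqs (2.1)-(2.5).
    prey_pos and wolf_pos are 1-D NumPy arrays of equal length."""
    a = 2 - 2 * t / max_iter                 # Eq (2.3): a decays linearly from 2 to 0
    r1 = rng.random(wolf_pos.shape)          # r1, r2 ~ U[0, 1]
    r2 = rng.random(wolf_pos.shape)
    A = 2 * a * r1 - a                       # Eq (2.4): components lie in [-a, a]
    C = 2 * r2                               # Eq (2.5)
    D = np.abs(C * prey_pos - wolf_pos)      # Eq (2.1): distance to the prey
    return prey_pos - A * D                  # Eq (2.2): new wolf position
```

Note that at the final iteration a = 0, so A vanishes and the wolf lands exactly on the prey position.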
In an abstract search space, the location of the prey is unknown. To mathematically simulate the hunting behavior of grey wolves, we assume that the α, β, and δ wolves have better knowledge of the potential location of the prey. Therefore, we save the three best solutions obtained so far and require the other search agents to update their positions under the guidance of these three best search agents. In nature, there are also differences in social hierarchy among the three best wolves. Therefore, this article adopts the fitness weights mentioned in the literature [18] to reflect the differences between the three best wolves, making the algorithm more consistent with the social hierarchy of grey wolves. The alpha wolf has the best fitness among the three best wolves, so its inertia weight is the largest, followed by the beta wolf and then the delta wolf. The following formulas are proposed in this regard.
$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|,\quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|,\quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}|$ | (2.6) |
$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha,\quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta,\quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$ | (2.7) |
$\vec{X}(t+1) = W_1 \cdot \vec{X}_1 + W_2 \cdot \vec{X}_2 + W_3 \cdot \vec{X}_3$ | (2.8) |
$W_1 = \dfrac{Z_\alpha}{Z_\alpha + Z_\beta + Z_\delta},\quad W_2 = \dfrac{Z_\beta}{Z_\alpha + Z_\beta + Z_\delta},\quad W_3 = \dfrac{Z_\delta}{Z_\alpha + Z_\beta + Z_\delta}$ | (2.9) |
where $\vec{X}_\alpha$, $\vec{X}_\beta$, and $\vec{X}_\delta$ are the position vectors of the alpha, beta, and delta wolves, and $Z_\alpha$, $Z_\beta$, and $Z_\delta$ are the reciprocals of the fitness of the alpha, beta, and delta wolves, respectively.
When the prey stops moving, the grey wolf attacks the prey to complete the hunt. To model the approach toward the prey, we reduce the value of a, which reduces the fluctuation range of $\vec{A}$: $\vec{A}$ is a random value in $[-2a, 2a]$, where a decreases from 2 to 0 over the iterations. When the random value of $\vec{A}$ lies in $[-1, 1]$, the next position of a search agent can be anywhere between its current position and the prey's position; therefore, when $|\vec{A}| < 1$, the wolf pack converges on the prey and exploits the local region. Grey wolves mainly search based on the positions of the alpha, beta, and delta wolves: they separate from each other to find prey and converge to attack it. To mathematically model this divergence, we use random values with $|\vec{A}| > 1$ to force a search agent to deviate from the location of the prey, thus exploring the search space.
The grey wolf completes hunting by repeating the steps of encirclement and hunting as described above. The pseudocode of the original GWO algorithm is shown in Algorithm 1.
Algorithm 1 Grey Wolf Algorithm.
1: Initialize the grey wolf population
2: repeat
3:   Calculate parameters a, A, and C
4:   Calculate the fitness of each search agent
5:   Find the three best agents: α, β, δ
6:   Update the search agents' positions through Eq (2.8)
7: until the termination condition is met
Output: optimal solution
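Algorithm 1 can be condensed into a short runnable sketch. For brevity this version averages the three leader contributions uniformly instead of applying the fitness weights of Eq (2.9); all function and parameter names are illustrative, not from the authors' code:

```python
import numpy as np

def gwo(obj, dim, lb, ub, n_agents=20, max_iter=200, seed=0):
    """Minimal GWO loop following Algorithm 1 (plain average of the
    three leaders instead of the fitness weights of Eq (2.9))."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((n_agents, dim))   # initialize the pack
    best_pos, best_fit = None, np.inf
    for t in range(max_iter):
        fit = np.array([obj(x) for x in X])            # fitness of each agent
        order = np.argsort(fit)
        if fit[order[0]] < best_fit:                   # track best-so-far (alpha)
            best_fit, best_pos = fit[order[0]], X[order[0]].copy()
        leaders = X[order[:3]].copy()                  # alpha, beta, delta
        a = 2 - 2 * t / max_iter                       # Eq (2.3)
        contrib = np.zeros_like(X)
        for leader in leaders:
            A = 2 * a * rng.random(X.shape) - a        # Eq (2.4)
            C = 2 * rng.random(X.shape)                # Eq (2.5)
            D = np.abs(C * leader - X)                 # Eq (2.6)
            contrib += leader - A * D                  # Eq (2.7)
        X = np.clip(contrib / 3, lb, ub)               # unweighted form of Eq (2.8)
    return best_pos, best_fit
```

On a low-dimensional unimodal function such as the sphere, this sketch converges close to the global optimum within a few hundred iterations.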
In the original grey wolf optimization algorithm, the global exploration and local exploitation abilities are determined by the coefficient $|\vec{A}|$, which in turn is determined by the convergence factor a. The convergence factor a decreases linearly from 2 to 0, giving the algorithm pronounced global exploration capabilities in its early phases and strong local exploitation in its later stages. However, the algorithm's convergence is nonlinear throughout the iterative process, and a linearly decreasing convergence factor cannot fit the real search behavior well. If the linear convergence factor a decreases too fast in the early stage, exploration may be insufficient and the algorithm can easily converge prematurely during exploitation. Conversely, a sluggish reduction of the linear convergence factor a in the later stages can significantly prolong convergence time and diminish convergence efficiency. In the early stages of the search, a nonlinear convergence factor a can decrease at a smaller rate to ensure sufficient global exploration; in the later exploitation stages, it can decrease faster, thereby improving the convergence speed and enhancing local exploitation. Dividing the iterative process into early-stage exploration and late-stage exploitation helps achieve a harmonious equilibrium between global exploration and local exploitation across a wide spectrum of problems. Therefore, this article advocates a segmented nonlinear convergence factor, with the concrete formula as follows:
$a = \begin{cases} 2 - \tan^{1.5}\left(\dfrac{2t}{MaxIter} \cdot \dfrac{\pi}{4}\right), & \text{if } t < \dfrac{MaxIter}{2} \\ \tan^{1.5}\left(\dfrac{2(MaxIter - t)}{MaxIter} \cdot \dfrac{\pi}{4}\right), & \text{otherwise} \end{cases}$ | (3.1) |
For x within the interval [0, 1], $\tan(\frac{\pi}{4}x) \le x$, so $2 - \tan(\frac{\pi}{4}x)$ decreases more slowly than $2 - x$. On the other hand, for x within the interval [1, 2], $\tan(\frac{\pi}{4}x)$ grows faster than x, so the second branch of the factor descends faster than its linear counterpart. To amplify this effect, an exponent greater than 1 (here 1.5) is applied. As per Eq (3.1), during the initial phase of the iteration process, the nonlinear convergence factor a decreases gradually and maintains a larger value than the linear convergence factor, so that the algorithm can explore a broader search space, improve population diversity, and establish a robust groundwork for later exploitation. In the second half of the iteration process, the preceding comprehensive exploration has reduced the possibility of falling into local optima, and the nonlinear convergence factor a decreases faster, enabling the algorithm to quickly enter fine local exploitation and improve the convergence speed. Figure 2 illustrates the curve of the nonlinear convergence factor a as a function of the iteration count.
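The piecewise factor of Eq (3.1) is straightforward to compute; a minimal sketch (names are ours):

```python
import math

def nonlinear_a(t, max_iter):
    """Piecewise nonlinear convergence factor of Eq (3.1)."""
    if t < max_iter / 2:
        # first half: decays more slowly than the linear factor 2 - 2t/MaxIter
        return 2 - math.tan(2 * t / max_iter * math.pi / 4) ** 1.5
    # second half: decays faster, speeding up local exploitation
    return math.tan(2 * (max_iter - t) / max_iter * math.pi / 4) ** 1.5
```

Both branches meet at a = 1 at the midpoint, so the factor is continuous, starting at 2 and ending at 0 like the linear schedule.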
In the original grey wolf optimization algorithm, the wolf pack moves slowly towards the three best wolves in straight lines to approach the prey, as these top-ranking wolves hold a wealth of prey-related information. However, this style of position update is overly simplistic: crucial information may be overlooked during movement, the search range is small, and premature convergence occurs easily. Given their cautious nature, grey wolves do not move in straight lines but advance slowly in arcs to avoid scaring the prey and causing the hunt to fail. During the final pursuit, straight-line movement enables swift proximity to the prey; however, owing to the prey's evasive maneuvers and the wolves' inability to maintain straight-line pursuit, the final chase is also a curved movement. Therefore, incorporating a logarithmic spiral into the position update offers a more faithful emulation of grey wolf locomotion. Concurrently, as the distance to the prey decreases, the curvature of the wolves' motion also becomes smaller. Therefore, the shape of the logarithmic spiral is regulated by combining the cosine function with $\sqrt{t/MaxIter}$ to produce a monotonically decreasing parameter. As the number of iterations increases, the spiral's shape shrinks, aligning more closely with the movement patterns of grey wolves. Therefore, this article proposes a dynamic spiral position update, with the concrete formula as follows:
$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha \cdot e^{b l_1} \cdot \cos(2\pi r_3),\quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta \cdot e^{b l_2} \cdot \cos(2\pi r_4),\quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta \cdot e^{b l_3} \cdot \cos(2\pi r_5)$ | (3.2) |
$b = \cos\left(\sqrt{\dfrac{t}{MaxIter}} \cdot \pi\right)$ | (3.3) |
where $l_1$, $l_2$, $l_3$ are random values in [−1, 1] and $r_3$, $r_4$, $r_5$ are random values in [0, 1], respectively.
Utilizing Eq (3.2), wolves can update their location via the logarithmic spiral, traversing regions inaccessible to linear movement, obtaining additional information along the path, expanding the search range of the algorithm, and improving the diversity of the population. The dynamic spiral parameter b in Eq (3.3) depends on the number of iterations, resulting in a larger spiral shape during the initial stages of iteration. This enables the wolves to explore a larger range, further expanding the search range of the algorithm and circumventing premature convergence with better population diversity; in the later stages of iteration, the spiral shape becomes smaller, enhancing the local exploitation ability of the wolves and improving the convergence speed of the algorithm.
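A sketch of the spiral modulation of Eqs (3.2) and (3.3); the helper names and the scalar A, C arguments are illustrative simplifications, not the paper's code:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def spiral_factor(t, max_iter):
    """Dynamic spiral modulation e^{b*l} * cos(2*pi*r) of Eqs (3.2)-(3.3)."""
    b = math.cos(math.sqrt(t / max_iter) * math.pi)  # Eq (3.3): shrinks from 1 to -1
    l = rng.uniform(-1.0, 1.0)                       # random spiral parameter
    r = rng.random()                                 # random angle parameter
    return math.exp(b * l) * math.cos(2 * math.pi * r)

def spiral_leader_term(leader, wolf, A, C, t, max_iter):
    """One leader's contribution in Eq (3.2): the usual GWO term
    X_leader - A*D, scaled by the dynamic spiral factor."""
    D = np.abs(C * leader - wolf)                    # as in Eq (2.6)
    return leader - A * D * spiral_factor(t, max_iter)
```

Early in the run b ≈ 1, so the factor can reach magnitudes up to e, giving wide spiral arcs; late in the run b ≈ −1 and the arcs shrink.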
Equation (2.8) stipulates that the step size of the traditional grey wolf optimization algorithm is fixed. In scenarios where a longer step size is warranted for convergence, a short current step size mandates a gradual approach to the optimal point, thereby impeding convergence speed. Conversely, if a smaller step size is needed for convergence, an excessively large algorithmic step size may result in the search agent's oscillation around the optimal point, perpetually advancing and retreating. Moreover, a single fixed step size cannot make reasonable use of current information, leading the algorithm to fall into locally optimal solutions. Therefore, modulating the algorithm's step size based on the current evolutionary success rate (ratio) and iteration count offers a potential solution. When the algorithm needs a longer step size, increasing the step size can improve the convergence speed. When the algorithm needs a shorter step size, reducing the step size can prevent the search agent from oscillating around the optimal point. In the early stages, the step size should be larger to conduct wide-area exploration of the search space, and in the later stages, the step size should be smaller to achieve fine local development of the search space. In addition, when the algorithm falls into local optimal solutions, the ratio will decrease significantly. In this case, a larger step size is needed to help the algorithm escape from locally optimal solutions. Therefore, this article proposes adaptive step adjustment based on evolutionary success rate, with the concrete formula as follows:
$ratio(t+1) = \dfrac{k(t)}{SearchAgents}$ | (3.4) |
$\vec{X}(t+1) = (W_1 \cdot \vec{X}_1 + W_2 \cdot \vec{X}_2 + W_3 \cdot \vec{X}_3) \cdot S(t+1)$ | (3.5) |
$S(t) = 1 - \dfrac{ratio(t) - \zeta}{|ratio(t) - \zeta|} \cdot \dfrac{t}{MaxIter} \cdot \left(ratio(t) + 0.02\right)^{\frac{1}{ratio(t)^2 + 0.01}}$ | (3.6) |
where k(t) is the number of search agents whose fitness improved in the t-th iteration, SearchAgents is the total number of search agents, ratio(t+1) is the evolutionary success rate of generation t+1, and S(t) is the step size of generation t.
In Eq (3.4), the concept of evolutionary success rate (ratio) is proposed. The evolutionary success rate refers to the ratio of search agents with improved fitness in the previous iteration to the total number of search agents. Through the adaptive adjustment of the wolf pack's step size based on the evolutionary success rate, the algorithm can better handle different situations. When the ratio is less than the threshold value ζ (ζ = 0.67), the evolutionary success rate of the wolf pack is relatively low, indicating that the algorithm may be trapped in a local optimum. Under such circumstances, a larger step size is obtained by subtracting a negative number from 1, which enhances the algorithm's ability to escape local optima. When the ratio is greater than or equal to the threshold value ζ, the evolutionary success rate of the wolf pack is high, indicating that most wolves have found better positions and the current search direction aligns with the optimization process. Therefore, by subtracting a large positive number from 1, the step size of the grey wolves is reduced, which enhances their local exploitation ability and allows for more precise refinement. In addition, this article also takes into account the impact of the iteration count on the step size. In the early stages of iteration, t is small, so the step size is large and the wolf pack tends to conduct global exploration; in the later stages, t is large, so the step size is small and the wolf pack tends to focus on local exploitation.
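The step rule of Eq (3.6) can be sketched as below. The grouping of the last factor follows our reading of the formula, and the function name is illustrative:

```python
def adaptive_step(ratio, t, max_iter, zeta=0.67):
    """Adaptive self-learning step size, a sketch of Eq (3.6).
    ratio is the evolutionary success rate of Eq (3.4)."""
    # sign term (ratio - zeta)/|ratio - zeta|: -1 below the threshold, +1 at or above it
    sign = 1.0 if ratio >= zeta else -1.0
    mag = (ratio + 0.02) ** (1.0 / (ratio ** 2 + 0.01))
    return 1.0 - sign * (t / max_iter) * mag
```

A low success rate yields a step above 1 (helping escape local optima), a high success rate yields a step below 1 (finer exploitation), and early iterations keep the step near 1 regardless of the ratio.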
In the traditional grey wolf optimization algorithm, the positions of the wolf pack are completely guided by the alpha, beta, and delta wolves. However, as the number of iterations increases, the wolf pack tends to concentrate in a limited region. This phenomenon significantly heightens the risk of falling into local optima and makes it difficult to jump out of them when facing complex problems. The randomness of evolutionary algorithms makes most of them black-box optimizers: we cannot accurately judge whether the algorithm is exploring the search space, exploiting a region, or has fallen into a local optimum. Therefore, when the evolutionary success rate is low, the algorithm may be exploring the search space or may have fallen into a local optimum. When the algorithm is in the exploration stage, randomly generated positions can help it explore a broader search space; when the algorithm falls into a local optimum, randomly generated positions can increase population diversity and help it jump out. In both cases, randomly generated positions provide effective assistance. By attaching a convergence factor that decreases with the number of iterations to the randomly generated position, we prioritize exploration in the early stages and retain the ability to jump out of local optima in the later stages. The global optimal position can better guide the search direction of the agents; when the evolutionary success rate is low, the algorithm may have fallen into a local optimum, so we should reduce the influence of the global optimal position, and vice versa. Therefore, this article proposes a new position update strategy that adds convergence factors to both the global optimal position and the randomly generated positions. The equation is as follows:
$\vec{X}(t+1) = \begin{cases} (1 + e^{ratio(t)}) \cdot \vec{X}_p - e^{-4t^2/MaxIter^2} \cdot \left((\vec{ub} - \vec{lb}) \cdot \vec{r}_6 + \vec{lb}\right), & r_7 < 0.5 \\ (1 + e^{ratio(t)}) \cdot \vec{X}_p + e^{-4t^2/MaxIter^2} \cdot \left((\vec{ub} - \vec{lb}) \cdot \vec{r}_6 + \vec{lb}\right), & r_7 \ge 0.5 \end{cases}$ | (3.7) |
where $\vec{r}_6$ is a random vector in [0, 1], $\vec{ub}$ and $\vec{lb}$ are the upper and lower bounds, respectively, $\vec{X}_p$ is the global optimal position, and $r_7$ is a random value in [0, 1].
In the new position update strategy, we replace guidance by the three best wolves with two learning samples: the global optimal position and a randomly generated position. The global optimal position as a learning sample ensures that the search agents evolve in the correct direction, while the randomly generated position increases population diversity, expands the search range of the algorithm, and greatly enhances the ability to jump out of local optima. Second, we use the ratio to control the inertia weight of the global optimal position, ensuring that this weight is always greater than 1 so that its influence is preserved, and we use the exponential function $e^x$ to amplify that influence. When the ratio is small, the inertia weight of the global optimal position is small, which increases the influence of the randomly generated position and improves the ability to escape local optima. Finally, in the early stages of the search, the algorithm should focus on exploration to ensure that the search agents cover the search space as thoroughly as possible, so the inertia weight of the randomly generated position is relatively large early on. As x increases linearly on [0, 1], $e^{-4x^2}$ decreases nonlinearly: its slope is small in magnitude near x = 0, so the weight decays slowly at first, and the decay then accelerates as x grows, leaving only a small residual weight (about $e^{-4} \approx 0.018$) at the end of the iterations.
As the number of iterations increases, the inertia weight of the randomly generated position therefore decreases nonlinearly: it decreases slowly in the early stages, maintaining a large weight for exploring the search space, and becomes small in the later stages so that it does not interfere with exploitation while still offering a chance to jump out of local optima.
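The update of Eq (3.7) can be sketched as follows; the function and variable names are ours, and lb/ub are taken as scalars for simplicity:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def self_learning_update(best_pos, lb, ub, ratio, t, max_iter):
    """Position update sketch of Eq (3.7): the global best position and a
    freshly generated random position act as learning samples, each scaled
    by its own adaptive weight."""
    rand_pos = (ub - lb) * rng.random(best_pos.shape) + lb  # random sample in bounds
    w_best = 1 + math.exp(ratio)                    # always > 1, grows with the success rate
    w_rand = math.exp(-4 * t ** 2 / max_iter ** 2)  # decays nonlinearly over iterations
    r7 = rng.random()                               # branch selector of Eq (3.7)
    sign = -1.0 if r7 < 0.5 else 1.0
    return w_best * best_pos + sign * w_rand * rand_pos
```

By the final iteration the random component's weight has decayed to $e^{-4}$, so the update is dominated by the global best.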
Based on Eqs (3.2) and (3.5), we can derive the position update value of the j-th dimension for the i-th wolf as follows:
$x_{ij}^{t+1} = w_1 s^t \left[x_{\alpha j} - (2a^t r_{11} - a^t) \left|2 r_{12} x_{\alpha j} - x_{ij}^t\right| \cdot spiral_1\right] + w_2 s^t \left[x_{\beta j} - (2a^t r_{21} - a^t) \left|2 r_{22} x_{\beta j} - x_{ij}^t\right| \cdot spiral_2\right] + w_3 s^t \left[x_{\delta j} - (2a^t r_{31} - a^t) \left|2 r_{32} x_{\delta j} - x_{ij}^t\right| \cdot spiral_3\right] = \left(w_1 x_{\alpha j} + w_2 x_{\beta j} + w_3 x_{\delta j}\right) s^t - a^t s^t \left[w_1 (2 r_{11} - 1) \left|2 r_{12} x_{\alpha j} - x_{ij}^t\right| \cdot spiral_1 + w_2 (2 r_{21} - 1) \left|2 r_{22} x_{\beta j} - x_{ij}^t\right| \cdot spiral_2 + w_3 (2 r_{31} - 1) \left|2 r_{32} x_{\delta j} - x_{ij}^t\right| \cdot spiral_3\right]$ | (3.8) |
where $x_{ij}^{t+1}$ is the value of the j-th dimension of the i-th wolf's position in the next iteration, $x_{ij}^{t}$ is that value in the current iteration, $x_{\alpha j}$, $x_{\beta j}$, $x_{\delta j}$ are the j-th dimension values of the three best wolves' positions in the current iteration, $a^t$ is the value of the convergence factor a in the current iteration, which decreases from 2 to 0 as the number of iterations increases, $s^t$ is the step size in the current iteration, $spiral_1$, $spiral_2$, $spiral_3$ denote the dynamic spiral terms of Eq (3.2), and $r_{11}, r_{12}, r_{21}, r_{22}, r_{31}, r_{32}$ are random values in [0, 1].
In ASGWO, as the number of iterations increases, $a^t$ gradually approaches 0. When the number of iterations approaches its maximum, the impact of the second term in Eq (3.8) on $x_{ij}^{t+1}$ can be ignored; at that time, $s^t$ also approaches 1. Assuming that the positions of the three leader wolves remain unchanged, $x_{ij}^{t+1}$ approaches a constant value, so ASGWO converges.
To evaluate the performance of ASGWO, we used two sets of test functions from the literature [22] to benchmark the algorithm's exploration and exploitation. The first set consists of the classic unimodal functions (f1–f7) in Table 1, which have a single global optimum and no local minima to fall into; they are used to test the algorithm's exploitation. The second set consists of the common multimodal functions (f8–f13) in Table 2 and the fixed-dimension multimodal benchmark functions (f14–f23) in Table 3, which have many local minima that an algorithm is highly likely to fall into; they are used to test the algorithm's ability to jump out of local optima and to examine its exploration [23]. In Tables 1–3, the third column (Dim) gives the dimension of the benchmark function, the fourth column (Range) gives its lower and upper limits, and the fifth column (fmin) gives its global minimum value.
Function | Dim | Range | fmin |
$f_1(x)=\sum_{i=1}^{n}x_i^2$ | 30 | [−100, 100] | 0 |
$f_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30 | [−10, 10] | 0 |
$f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0 |
$f_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0 |
$f_5(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30 | [−30, 30] | 0 |
$f_6(x)=\sum_{i=1}^{n}([x_i+0.5])^2$ | 30 | [−100, 100] | 0 |
$f_7(x)=\sum_{i=1}^{n}ix_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0 |
Function | Dim | Range | fmin |
$f_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | 0 |
$f_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0 |
$f_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 30 | [−32, 32] | 0 |
$f_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0 |
$f_{12}(x)=\frac{\pi}{n}\left\{10\sin(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a\le x_i\le a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0 |
$f_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_i+1)\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0 |
Function | Dim | Range | fmin |
$f_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1 |
$f_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.0003 |
$f_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316 |
$f_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398 |
$f_{18}(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\times\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | [−2, 2] | 3 |
$f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [−1, 3] | −3.86 |
$f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32 |
$f_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532 |
$f_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028 |
$f_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363 |
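For reference, a few of the benchmarks above can be implemented directly. The sketch below covers f1 (the sphere function), f9 (Rastrigin), and f10 (Ackley), each with global minimum 0 at the origin:

```python
import math

def f1_sphere(x):
    """f1: sum of squares; unimodal, fmin = 0 at the origin."""
    return sum(v * v for v in x)

def f9_rastrigin(x):
    """f9: highly multimodal with regularly distributed local minima."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def f10_ackley(x):
    """f10: multimodal with a nearly flat outer region and a deep hole
    at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```

These closed forms make the difficulty classes concrete: the sphere function rewards pure exploitation, while Rastrigin and Ackley surround the global minimum with many local minima that test an algorithm's ability to escape.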
All code in this article was run on a Windows 10 platform using Python 3.8, on a computer with an Intel(R) Core(TM) i5-8300H processor and 8 GB of memory.
To assess ASGWO's convergence accuracy, convergence speed, population diversity, and capability to evade local optima, we compared its optimization performance on both unimodal and multimodal test functions with the original GWO algorithm and with two classical algorithms: PSO [2] and WOA [3]. The parameter settings for the algorithms are recorded in Table 4. For each benchmark function, the four algorithms were run independently 30 times, with 500 iterations per run and a population size of 20; the mean and standard deviation over the 30 runs form the statistical results. The experimental results for the unimodal, multimodal, and fixed-dimension multimodal benchmark functions are shown in Tables 5, 6, and 7, respectively.
Algorithm | Parameters |
GWO | a linearly decreased over iterations from 2 to 0 |
PSO | ω = 1, c1=2, c2 = 2 |
WOA | a linearly decreased over iterations from 2 to 0 |
ASGWO | a decreased nonlinearly over iterations from 2 to 0, ζ = 0.67 |
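The statistical protocol above (independent runs, reporting the mean and standard deviation of the best fitness) can be sketched generically. The toy `random_search` optimizer below is only a stand-in for illustration, not any of the compared algorithms:

```python
import random
import statistics

def random_search(objective, rng, iters=500, dim=2, lo=-100.0, hi=100.0):
    """Toy optimizer used only to illustrate the harness: pure random
    search returning the best fitness it finds."""
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, objective(x))
    return best

def benchmark(optimizer, objective, runs=30, seed=0):
    """Run `optimizer` independently `runs` times (a fresh RNG stream
    per run) and report the mean and standard deviation of the best
    fitness, as in Tables 5-7."""
    best_values = [optimizer(objective, random.Random(seed + r))
                   for r in range(runs)]
    return statistics.mean(best_values), statistics.pstdev(best_values)
```

Seeding each run separately keeps the runs statistically independent while making the whole experiment reproducible.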
Function | GWO Mean | GWO Std | PSO Mean | PSO Std | WOA Mean | WOA Std | ASGWO Mean | ASGWO Std |
f1 | 2.42×10−26 | 3.07×10−26 | 5.89 | 5.27 | 5.28×10−7 | 1.58×10−6 | 0.00 | 0.00 |
f2 | 4.08×10−16 | 2.71×10−16 | 8.85 | 8.00 | 2.42×10−10 | 6.57×10−10 | 9.1×10−243 | 6.7×10−243 |
f3 | 5.89×10−4 | 1.62×10−2 | 22.4 | 7.83 | 2.14×10−2 | 3.60×10−2 | 0.00 | 0.00 |
f4 | 2.83×10−5 | 1.86×10−5 | 1.24 | 0.398 | 1.27×10−2 | 2.59×10−2 | 1.1×10−201 | 5.7×10−201 |
f5 | 27.3 | 0.813 | 2.36×102 | 1.64×102 | 28.6 | 0.330 | 26.3 | 0.314 |
f6 | 1.37 | 0.492 | 9.26 | 3.25 | 22.8 | 53.2 | 4.39×10−5 | 1.66×10−5 |
f7 | 3.65×10−3 | 1.52×10−3 | 1.73×102 | 53.1 | 8.56×10−2 | 0.177 | 3.03×10−3 | 4.71×10−4 |
Function | GWO Mean | GWO Std | PSO Mean | PSO Std | WOA Mean | WOA Std | ASGWO Mean | ASGWO Std |
f8 | −6.2×103 | 6.51×102 | −5.2×103 | 7.00×102 | −3.3×103 | 2.87×102 | −7.0×103 | 4.52×102 |
f9 | 13.4 | 10.6 | 1.73×102 | 22.4 | 9.42×10−12 | 2.74×10−11 | 0.00 | 0.00 |
f10 | 1.38×10−13 | 2.52×10−14 | 2.78 | 0.448 | 1.13×10−8 | 2.96×10−8 | 1.11×10−14 | 2.91×10−15 |
f11 | 5.81×10−3 | 8.93×10−3 | 0.650 | 0.174 | 2.22×10−16 | 3.71×10−13 | 0.00 | 0.00 |
f12 | 6.86×10−2 | 5.72×10−2 | 0.839 | 0.405 | 0.868 | 0.271 | 1.40×10−2 | 9.56×10−3 |
f13 | 0.632 | 0.244 | 1.52 | 0.757 | 2.36 | 0.149 | 3.34×10−5 | 1.27×10−5 |
According to the experimental results in Table 5 for the unimodal test functions, ASGWO performs better than GWO, PSO, and WOA. First, on the test functions f1 and f3, ASGWO found the global optimal value, while the other three algorithms were still distant from it. Second, on the test functions f2, f4, f6, and f7, although ASGWO did not find the global optimal value, it still demonstrated a significant improvement in convergence accuracy over the original GWO, PSO, and WOA. Third, as shown in Figure 11, the contour lines of function f5 form a parabolic valley, and the global optimal value lies inside this valley. While it may be easy for algorithms to find the valley, converging to the global optimal value is extremely challenging because the gradient changes slowly within the narrow valley. Therefore, the performance improvement of ASGWO on function f5 was not as significant as expected, although it still outperformed GWO, PSO, and WOA. Additionally, as shown in Table 1, function f6 is a step function, characterized by plateaus and discontinuities. Since GWO, PSO, and WOA search within local neighborhoods, and all points within such a neighborhood have the same fitness value except near the boundaries between plateaus, it is difficult for them to move from the current plateau to a lower one. ASGWO's adaptive step size, however, produces longer jumps with a higher probability, making it easier for ASGWO to move towards lower plateaus. As shown in Figures 4 and 5, ASGWO's convergence speed far exceeded that of the other algorithms. Finally, the results in Table 5 indicate that, compared to the other three algorithms, ASGWO has a smaller standard deviation, representing more stable convergence and stronger robustness.
In summary, ASGWO significantly improves convergence accuracy, convergence speed, and robustness on the unimodal test functions. This is because ASGWO improves the exploitation ability of the algorithm through the nonlinear convergence factor, which decreases rapidly in the later stages, and makes use of more path information through the spiral, which shrinks in the later stages, thereby improving convergence accuracy. In addition, the dynamic spiral and the adaptive step size also contribute significantly to the improvement in convergence speed.
The experimental results in Table 6 indicate that ASGWO also performs better than GWO, PSO, and WOA on the multimodal test functions. First, on the test functions f9 and f11, ASGWO found the global optimal value, whereas the original GWO, PSO, and WOA showed significant differences in convergence accuracy. The functions f9 and f11 are highly multimodal with regularly distributed minima, suggesting that ASGWO performs well on multimodal functions of this kind. Additionally, from the convergence curve of function f9 in Figure 5, we can observe that even when ASGWO gets trapped in a local optimum in the later stages, it retains the ability to escape and find the global optimal value. For the remaining functions f8, f10, f12, and f13, ASGWO did not find the global optimal value, but its final convergence result is still superior to the other three algorithms. Therefore, ASGWO brings a significant improvement in convergence on multimodal functions with many local minima. This is because the nonlinear convergence factor decreases slowly in the early stage, allowing ASGWO to fully explore the search space and laying a solid foundation for avoiding premature convergence. Because function f12 is similar to function f5, the improvement of ASGWO on f12 did not meet our expectations. From Figures 10 and 11, we can see that, owing to the complexity of multimodal functions, algorithms may still get trapped in local optima. We therefore use the evolution success rate to assess the state of the algorithm: when it is trapped in a local optimum, increasing the step size can help it escape. The new position update strategy also improves population diversity and further helps the algorithm escape from local optima.
Furthermore, function f8 is a typical deceptive problem: there is only one global optimal point, and it lies far away from the local minima, so an algorithm trapped in a local optimum finds it difficult to escape. However, as shown in Figure 5, ASGWO still demonstrates an impressive ability to escape from local optima on function f8. Finally, from Figures 5–8, we can see that ASGWO also exhibits good convergence speed on the multimodal functions.
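The success-rate-driven escape mechanism described above can be sketched as a simple rule. The thresholds and scaling factors below are our own illustrative choices, not the paper's formula:

```python
def update_step_size(step, success_rate, low=0.2, grow=1.5, shrink=0.8):
    """Illustrative rule in the spirit of ASGWO's dynamic self-learning
    step size: when few wolves improved in the last iteration (low
    evolution success rate), the population is likely trapped, so
    lengthen the step to promote an escape; otherwise shorten it to
    refine the current region. Thresholds/factors are illustrative."""
    if success_rate < low:
        return step * grow     # likely trapped in a local optimum
    return step * shrink       # evolving well: exploit locally
```

The resulting alternation between long escape jumps and short refining steps is what produces the descending zigzag shape visible in ASGWO's convergence curves.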
For the fixed-dimension multimodal benchmark functions in Table 7, ASGWO trails WOA slightly on f15 but has significant advantages on the other functions. From Figure 9, it can be seen that ASGWO achieves a significant improvement in convergence speed on complex functions.
Function | GWO Mean | GWO Std | PSO Mean | PSO Std | WOA Mean | WOA Std | ASGWO Mean | ASGWO Std |
f14 | 5.01 | 4.27 | 1.13 | 0.302 | 2.86 | 0.971 | 1.10 | 0.288 |
f15 | 8.38×10−3 | 9.77×10−3 | 1.04×10−2 | 8.53×10−3 | 3.94×10−3 | 3.25×10−3 | 4.53×10−3 | 7.91×10−3 |
f16 | −1.0 | 2.99×10−8 | 0.861 | 0.921 | 0.640 | 0.326 | −1.0 | 5.10×10−8 |
f17 | 0.397 | 6.30×10−5 | 0.861 | 0.921 | 0.640 | 0.326 | 0.397 | 5.10×10−8 |
f18 | 3.00 | 1.46×10−5 | 3.02 | 1.84×10−2 | 4.20 | 2.01 | 3.00 | 5.04×10−5 |
f19 | −3.8 | 3.47×10−3 | −3.8 | 2.98×10−3 | −3.7 | 3.20×10−2 | −3.8 | 3.15×10−3 |
f20 | −3.2 | 9.32×10−2 | −3.0 | 0.271 | −2.5 | 0.608 | −3.2 | 4.76×10−2 |
f21 | −8.0 | 2.46 | −8.1 | 1.62 | −2.3 | 1.91 | −9.0 | 2.00 |
f22 | −7.6 | 3.16 | −6.2 | 1.76 | −1.9 | 1.41 | −8.6 | 2.97 |
f23 | −10 | 3.20×10−3 | −6.7 | 1.36 | −1.8 | 1.57 | −10 | 1.61×10−5 |
As illustrated in Figure 4, ASGWO emphasizes exploring the search space during the early stages, which enables swift convergence in the subsequent phases. Owing to GWO's linearly decreasing convergence factor, the original algorithm suffers from inadequate exploration in the initial phases and slow convergence in the later stages; to mitigate this, we replaced it with a piecewise nonlinear convergence factor. Furthermore, during the exploitation stage, ASGWO prefers a smaller step size to ensure precise convergence, since larger step sizes may induce oscillations and hurt convergence speed. This is achieved through ASGWO's dynamic self-learning step size, computed from the current iteration number and the population's evolution success rate. The resulting enhancement of convergence speed is evident in Figures 5, 6, and 9.
The original GWO strategy has the wolf pack consistently converging towards the three best wolves, leading to premature convergence, diminished population diversity, and a propensity to get trapped in local optima. As depicted in Figures 5 and 6, ASGWO keeps converging even where the conventional algorithms succumb to local optima. This is credited to the dynamic logarithmic spiral, which lets the algorithm gather more information along the search path and thereby enriches population diversity. Moreover, the updated position update equation introduces a dynamic influence factor, giving more weight to randomly generated positions during the initial stages and further bolstering population diversity. In Figures 5 and 9, the descending zigzag shape of ASGWO's convergence curves shows that, through the dynamic self-learning step size, ASGWO enlarges its step size during periods of low evolution success rate; this adjustment strengthens the algorithm's capacity to evade local optima. In summary, these refinements give ASGWO better optimization performance, convergence speed, and population diversity.
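The dynamic logarithmic spiral can be illustrated with the familiar WOA-style spiral term e^{bl} cos(2πl), damped as iterations progress. The damping schedule (1 − t/t_max) used below is an assumption for illustration, not the paper's exact schedule:

```python
import math
import random

def dynamic_spiral(t, t_max, b=1.0, rng=None):
    """Illustrative dynamic logarithmic spiral factor. The term
    e^{b*l} * cos(2*pi*l), for random l in [-1, 1], traces a
    logarithmic spiral; damping it by (1 - t/t_max) gives a wide spiral
    early in the run (diversity) and a tight one late in the run
    (exploitation). The damping schedule is our illustrative choice."""
    rng = rng or random.Random()
    l = rng.uniform(-1.0, 1.0)
    return (1.0 - t / t_max) * math.exp(b * l) * math.cos(2.0 * math.pi * l)
```

Because |e^{bl} cos(2πl)| is bounded by e^{|b|}, the damped factor stays within an envelope that shrinks linearly to zero over the run.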
In Section 4.2, where ASGWO was compared with traditional algorithms, it exhibited notable superiority. To further substantiate ASGWO's optimization ability, we assessed it against two recent variants of the GWO algorithm, SOGWO [24] and EOGWO [25]. SOGWO utilizes Spearman's correlation coefficient to select certain dimensions of the ω wolves for opposition-based learning, thus avoiding unnecessary exploration and enabling rapid convergence without compromising the probability of finding the optimal solution. EOGWO performs a simplex-based opposition on all the wolves: instead of using the upper and lower limits of the function, opposition is performed using the limits spanned by all the wolves. For each benchmark function, the three algorithms were run independently 25 times, with a maximum of 1000 iterations and a population size of 50. The mean and standard deviation were calculated to generate the statistical results shown in Table 8.
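Both variants build on opposition-based learning. In its basic form, shown below for reference, the opposite of a position x in [lb, ub] is lb + ub − x; SOGWO applies this to selected dimensions only, and EOGWO reflects within the extent spanned by the current wolves rather than the fixed function bounds:

```python
def opposite(position, lb, ub):
    """Basic opposition-based learning: reflect each coordinate of a
    candidate within its limits, x_opp = lb + ub - x. Evaluating both a
    candidate and its opposite raises the chance that one of them lies
    near the optimum."""
    return [l + u - x for x, l, u in zip(position, lb, ub)]
```

The mapping is an involution (applying it twice returns the original point), and the midpoint of the bounds is its only fixed point per dimension.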
Function | SOGWO Mean | SOGWO Std | EOGWO Mean | EOGWO Std | ASGWO Mean | ASGWO Std |
f1 | 6.04×10−77 | 1.48×10−76 | 2.81×10−71 | 8.46×10−71 | 0.00 | 0.00 |
f2 | 1.17×10−44 | 1.34×10−44 | 4.31×10−42 | 7.87×10−42 | 0.00 | 0.00 |
f3 | 5.39×10−22 | 2.59×10−21 | 1.52×10−20 | 4.02×10−20 | 0.00 | 0.00 |
f4 | 7.08×10−21 | 1.51×10−19 | 8.06×10−19 | 1.11×10−18 | 0.00 | 0.00 |
f5 | 26.4 | 0.762 | 26.3 | 0.7364 | 25.2 | 0.663 |
f6 | 0.282 | 0.247 | 0.3290 | 0.245 | 1.06×10−6 | 2.37×10−7 |
f7 | 4.93×10−4 | 2.71×10−4 | 6.07×10−4 | 4.32×10−4 | 1.04×10−3 | 6.49×10−4 |
f8 | −6.5×103 | 8.02×102 | −6.27×103 | 7.71×102 | −7.31×103 | 8.67×102 |
f9 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
f10 | 8.88×10−16 | 0.00 | 1.40×10−14 | 3.20×10−15 | 1.36×10−14 | 2.71×10−15 |
f11 | 0.00 | 0.00 | 1.68×10−3 | 4.80×10−3 | 0.00 | 0.00 |
f12 | 5.60×10−2 | 1.42×10−5 | 2.29×10−2 | 1.85×10−2 | 3.91×10−3 | 3.16×10−3 |
f13 | 0.352 | 0.128 | 0.257 | 0.164 | 1.27×10−6 | 4.73×10−7 |
f14 | 3.40 | 3.72 | 3.82 | 3.86 | 0.998 | 8.24×10−13 |
f15 | 2.38×10−3 | 6.02×10−3 | 5.24×10−3 | 8.68×10−3 | 4.31×10−3 | 7.01×10−3 |
f16 | −1.03 | 3.75×10−9 | −1.02 | 3.45×10−9 | −1.03 | 2.42×10−11 |
f17 | 0.397 | 4.85×10−7 | 0.398 | 4.82×10−7 | 0.398 | 1.68×10−8 |
f18 | 3.00 | 4.63×10−6 | 3.00 | 3.60×10−6 | 3.00 | 3.57×10−6 |
f19 | −3.86 | 2.71×10−3 | −3.86 | 2.36×10−3 | −3.86 | 1.03×10−6 |
f20 | −3.26 | 7.37×10−2 | −3.27 | 7.55×10−2 | −3.32 | 2.93×10−8 |
f21 | −9.65 | 1.50 | −9.93 | 2.07 | −9.34 | 1.40 |
f22 | −10.4 | 2.65×10−4 | −10.2 | 1.05 | −10.6 | 1.06×10−5 |
f23 | −10.4 | 0.540 | −10.2 | 1.62 | −10.4 | 1.04×10−5 |
From the experimental results in Table 8, we can see that ASGWO converges to the global optimum on the 30-dimensional unimodal functions f1–f4, the 30-dimensional multimodal functions f9 and f11, and the fixed-dimension multimodal benchmark functions f16–f20, whereas SOGWO and EOGWO reach the global optimum on far fewer of these functions. In addition, ASGWO performs well on functions f5, f6, f8, f12, and f13, with convergence accuracy far exceeding SOGWO and EOGWO. Finally, ASGWO performs slightly worse than SOGWO and EOGWO on functions f7, f10, and f15, but the gap is within a reasonable range. Therefore, compared with these new GWO variants, ASGWO still has good optimization performance.
ASGWO exhibited promising outcomes compared with GWO, PSO, WOA, SOGWO, and EOGWO across the 23 test functions. To ascertain its efficacy in unfamiliar, constrained search domains, we next compare it with diverse algorithms on four practical application problems.
The gear train design problem is an unconstrained discrete design problem in mechanical engineering. It involves arranging and combining multiple gears to transmit rotational motion and force from one shaft to another. To simplify the problem, we consider only the gear ratio, the most basic factor, as shown in Figure 17. The objective is to bring the gear ratio as close as possible to 1/6.931. The gear ratio is defined as the ratio of the angular velocity of the output shaft to that of the input shaft; for meshing gears, this ratio is inversely proportional to the numbers of teeth on the input and output gears. The minimum tooth count for each gear is 12, and the maximum is 60. Treating the tooth counts of gear A (x1), gear B (x2), gear C (x3), and gear D (x4) as the design variables, reasonable selection and optimization of these variables yields better performance of the gear system. Mathematically, the problem is stated as follows:
$$
\begin{aligned}
\min\ f(x) &= \left(\frac{1}{6.931}-\frac{x_3x_2}{x_1x_4}\right)^2\\
\text{s.t.}\ & 12 \le x_1, x_2, x_3, x_4 \le 60
\end{aligned} \tag{5.1}
$$
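The objective in Eq (5.1) is cheap to evaluate. As a sanity check, the sketch below evaluates it at the tooth combination (x1, x2, x3, x4) = (43, 16, 19, 49), a near-optimal assignment widely reported in the gear-train literature (a reference point of our choosing, not this paper's solution):

```python
def gear_objective(x1, x2, x3, x4):
    """Eq (5.1): squared deviation of the gear ratio x3*x2/(x1*x4)
    from the target ratio 1/6.931; tooth counts must lie in [12, 60]."""
    assert all(12 <= x <= 60 for x in (x1, x2, x3, x4))
    return (1.0 / 6.931 - (x3 * x2) / (x1 * x4)) ** 2
```

Because the variables are integers, the problem has a finite (60 − 12 + 1)^4 search space, yet the squared-error objective is flat over large regions, which is what makes it a useful discrete benchmark.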
We solve this problem with ASGWO and compare the results to GWO, WOA, K-WOA [26], IWOA [27], GSA-BBO [28], GSO [29], ABC [30], CAB [31], CS [32], FUZZY [33], and MFO [34] in Table 10. K-WOA uses K-means clustering to create multiple collaborative search sub-groups on top of WOA to explore the search space; IWOA assigns exploration or exploitation to search agents based on their fitness. All the parameters of these algorithms are recorded in Table 9. The results show the average best fitness obtained from 30 independent executions of each algorithm, the standard deviation (SD) of the best fitness across the executions, and the optimization parameters (x1, x2, x3, x4) selected in the best solution of each algorithm. The experimental results for WOA, K-WOA, IWOA, GSA-BBO, GSO, ABC, CAB, CS, FUZZY, and MFO in Table 10 are taken from the literature [26]. As Table 10 shows, the ASGWO algorithm obtains the best objective value (4.14×10−15).
Parameter | Value |
same initialization configuration | Population Size is 50, Maxiter is 50000 |
ASGWO | a decreased nonlinearly over iterations from 2 to 0, ζ = 0.67 |
GWO | a linearly decreased over iterations from 2 to 0 |
WOA | a linearly decreased over iterations from 2 to 0 |
IWOA | Scaling factor for beta (0.2, 0.8), DE mutation scheme(DE/best/1/bin), a linearly decreased over iterations from 2 to 0 |
K-WOA | fixed number of clusters k = 18 |
GSA-BBO | k = 2, I = 1, E1 = 1, Siv = 4, Rnorm = 2, Rpower = 1 |
GSO | the acceleration constants are 2.05 |
ABC | Onlooker 50%, employees 50%, acceleration coefficient upper bound (a)= 1, LL = (0.6×dimensions×population) |
CAB | Mbest = 4, Hp = 0.2 |
CS | β = 1.5, Discover = 0.25 |
FUZZY | Nflames = round(Npop − k(Npop − 1)/kmax) |
MFO | c1=2, c2 = 2, ω decreased from 0.9 to 0.2 |
Algorithm | x1 | x2 | x3 | x4 | Optimal cost | SD |
ASGWO | 24 | 18 | 59 | 53 | 4.14×10−15 | 7.57×10−15 |
GWO | 26 | 18 | 60 | 54 | 1.49×10−14 | 2.75×10−14 |
WOA | 16 | 19 | 49 | 43 | 1.15×10−9 | 1.39×10−9 |
IWOA | 30 | 13 | 51 | 53 | 2.39×10−9 | 2.53×10−9 |
K-WOA | 19 | 16 | 43 | 49 | 2.70×10−12 | 0.00 |
GSA-BBO | 16 | 19 | 49 | 43 | 8.72×10−10 | 8.38×10−10 |
GSO | 60 | 29 | 52 | 60 | 0.732 | 0.00 |
ABC | 16 | 19 | 49 | 43 | 6.62×10−11 | 1.65×10−10 |
CAB | 12 | 12 | 35 | 12 | 0.675 | 0.180 |
CS | 16 | 19 | 43 | 49 | 1.47×10−10 | 2.65×10−10 |
FUZZY | 12 | 23 | 33 | 57 | 2.57×10−3 | 4.87×10−3 |
MFO | 19 | 16 | 49 | 43 | 4.85×10−9 | 6.90×10−9 |
The objective of pressure vessel design (PVD) is to minimize the total cost of materials, forming, and welding while fulfilling production requirements, as shown in Figure 18. This engineering problem involves four constraints and four design variables: shell thickness ($T_s = x_1$), head thickness ($T_h = x_2$), inner radius ($R = x_3$), and vessel length ($L = x_4$). The welding cost is divided into vertical and horizontal welding cost, estimated by multiplying the average cost per pound of welding material by the weight of the required welding material, giving $0.6224x_1x_3x_4 + 1.7781x_2x_3^2$. The material and forming costs are combined into an average cost per forming operation, giving $3.1661x_1^2x_4 + 19.84x_1^2x_3$. Mathematically, the problem is stated as follows:
$$
\begin{aligned}
\min\ f(x) =\ & 0.6224x_1x_3x_4 + 1.7781x_2x_3^2 + 3.1661x_1^2x_4 + 19.84x_1^2x_3\\
\text{s.t.}\ & g_1(x) = -x_1 + 0.0193x_3 \le 0\\
& g_2(x) = -x_2 + 0.00954x_3 \le 0\\
& g_3(x) = -\pi x_3^2x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0\\
& g_4(x) = x_4 - 240 \le 0\\
& 1\times 0.0625 \le x_1, x_2 \le 99\times 0.0625\\
& 10 \le x_3, x_4 \le 200
\end{aligned} \tag{5.2}
$$
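The cost and constraints of Eq (5.2) can be evaluated directly. As a sanity check, the sketch below plugs in the design (0.8125, 0.4375, 42.0984, 176.6366) widely reported in the pressure-vessel literature (a reference point of our choosing; the rounded published values leave g3, which is active at that design, within a small slack of zero):

```python
import math

def pvd_cost(x1, x2, x3, x4):
    """Objective of Eq (5.2): welding, material and forming cost."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pvd_constraints(x1, x2, x3, x4):
    """Constraints g1..g4 of Eq (5.2); a point is feasible when all
    returned values are <= 0."""
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0
    g4 = x4 - 240.0
    return g1, g2, g3, g4
```

Note that g1 and g3 are both nearly active at good designs, which is why small differences in x3 and x4 move the reported costs of the compared algorithms by only fractions of a unit.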
To find the optimal cost, the ASGWO algorithm was run 30 times on this problem, and the recorded results are shown in Table 12. We take the results of GWO, WOA, PSO, GA, SSA, ES [35], SC-GWO [36], mGWO [37], wGWO [38], and chaotic SSA from the literature [36]; they are presented in the same table. SC-GWO combines the SCA, which integrates social and cognitive components, with GWO, which balances exploration and exploitation; mGWO uses adaptive methods to strike a balance between exploration and exploitation. All the parameters of these algorithms are recorded in Table 11. From Table 12, it can be observed that the optimal cost of the proposed algorithm (6010.9908) is better than that of all other reported algorithms.
Parameter | Value |
same initialization configuration | Population Size is 25, Maxiter is 500 |
SC-GWO | ω decreased from 0.7 to 0.2 |
mGWO | a = 2(1 − t/Maxiter) |
wGWO | a linearly decreased over iterations from 2 to 0 |
GA | Crossover Rate = 0.7, Mutation Rate = 0.01 |
SSA | c1 decreased nonlinearly over iterations |
Chaotic SSA | c1 = logistic chaotic map(c1) |
ES | σ = 3.0, μ = 100, λ = 300 |
Algorithm | x1 | x2 | x3 | x4 | Optimal cost |
ASGWO | 0.8327 | 0.4122 | 43.1396 | 168.1458 | 6010.9908 |
GWO | 0.8750 | 0.4375 | 44.9807 | 144.1081 | 6136.6600 |
SC-GWO | 0.8125 | 0.4375 | 42.0984 | 176.6370 | 6059.7179 |
mGWO | 0.8125 | 0.4375 | 42.0982 | 176.6386 | 6059.7359 |
wGWO | 0.8125 | 0.4375 | 42.09842 | 176.637 | 6059.7207 |
PSO | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 6061.0777 |
GA | 0.9375 | 0.5000 | 48.3290 | 112.6790 | 6410.3810 |
SSA | 0.8125 | 0.4375 | 42.09836 | 176.6376 | 6059.7254 |
Chaotic SSA | 0.8750 | 0.4375 | 45.33679 | 140.2539 | 6090.527 |
WOA | 0.8125 | 0.4375 | 42.0982 | 176.6389 | 6059.7410 |
ES | 0.8125 | 0.437 | 42.0980 | 176.6405 | 6059.7456 |
The design of car crashworthiness poses a challenge in the context of car side-impact mitigation, aiming to minimize vehicle weight, passenger impact forces, and the average velocity of the V-shaped pillar. The problem has ten constraints, including limits on abdominal load, pubic force, V-pillar velocity, rib deflection, and so on. There are eleven design variables, describing the thickness of the B-pillar inner panel (x1), the B-pillar reinforcement (x2), the floor inner panel thickness (x3), the crossbeam (x4), the door beam (x5), the door belt line reinforcement (x6), the roof longitudinal beam (x7), the B-pillar inner panel (x8), the floor inner panel (x9), the guardrail height (x10), and the collision position (x11). The optimization problem is formulated as follows:
$$
\begin{aligned}
\min\ f(x) =\ & 1.98 + 4.90x_1 + 6.67x_2 + 6.98x_3 + 4.01x_4 + 1.78x_5 + 2.73x_7\\
\text{s.t.}\ & g_1(x) = 1.16 - 0.3717x_2x_4 - 0.00931x_2x_{10} - 0.484x_3x_9 + 0.01343x_6x_{10} - 1 \le 0\\
& g_2(x) = 46.36 - 9.9x_2 - 12.9x_1x_8 + 0.1107x_3x_{10} - 32 \le 0\\
& g_3(x) = 33.86 + 2.95x_3 + 0.1792x_{10} - 5.057x_1x_2 - 11.0x_2x_8 - 0.0215x_5x_{10} - 9.98x_7x_8 + 22.0x_8x_9 - 32 \le 0\\
& g_4(x) = 28.98 + 3.818x_3 - 4.2x_1x_2 + 0.0207x_5x_{10} + 6.63x_6x_9 - 7.7x_7x_8 + 0.32x_9x_{10} - 32 \le 0\\
& g_5(x) = 0.261 - 0.0159x_1x_2 - 0.188x_1x_8 - 0.019x_2x_7 + 0.0144x_3x_5 + 0.0008757x_5x_{10} + 0.08045x_6x_9\\
&\qquad\ + 0.00139x_8x_{11} + 0.00001575x_{10}x_{11} - 0.32 \le 0\\
& g_6(x) = 0.214 + 0.00817x_5 - 0.131x_1x_8 - 0.0704x_1x_9 + 0.03099x_2x_6 - 0.018x_2x_7 + 0.0208x_3x_8\\
&\qquad\ + 0.121x_3x_9 - 0.00364x_5x_6 + 0.0007715x_5x_{10} - 0.0005354x_6x_{10} + 0.00121x_8x_{11} - 0.32 \le 0\\
& g_7(x) = 0.74 - 0.61x_2 - 0.163x_3x_8 + 0.001232x_3x_{10} - 0.166x_7x_9 + 0.227x_2^2 - 0.32 \le 0\\
& g_8(x) = 4.72 - 0.5x_4 - 0.19x_2x_3 - 0.0122x_4x_{10} + 0.009325x_6x_{10} + 0.000191x_{11}^2 - 4 \le 0\\
& g_9(x) = 10.58 - 0.674x_1x_2 - 1.95x_2x_8 + 0.02054x_3x_{10} - 0.0198x_4x_{10} + 0.028x_6x_{10} - 9.9 \le 0\\
& g_{10}(x) = 16.45 - 0.489x_3x_7 - 0.843x_5x_6 + 0.0432x_9x_{10} - 0.0556x_9x_{11} - 0.000786x_{11}^2 - 15.7 \le 0\\
& 0.5 \le x_1, \dots, x_7 \le 1.5,\quad 0.192 \le x_8, x_9 \le 0.345,\quad -30 \le x_{10}, x_{11} \le 30
\end{aligned} \tag{5.3}
$$
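Metaheuristics such as ASGWO optimize an unconstrained fitness, so constrained problems like Eqs (5.2) and (5.3) are typically fed to them through a constraint-handling scheme. One standard option (the paper does not detail its exact handling, so this is an illustrative sketch, not the authors' method) is a static quadratic penalty:

```python
def penalized(objective, constraints, weight=1e6):
    """Static penalty wrapper for  min f(x) s.t. g_i(x) <= 0:
    feasible points keep their objective value, while violations add
    weight * sum(max(0, g_i(x))^2). The wrapped function can then be
    handed to any unconstrained optimizer."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return objective(x) + weight * violation
    return fitness
```

With a large enough weight, any infeasible point scores far worse than every feasible one, steering the search back into the feasible region; a toy example with objective x² and constraint x ≥ 1 behaves as expected.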
Through the literature [39], the optimal experimental results of IROA [39], SMA [40], HHOCM [41], ROLGWO [42], and MALO [43] on the car crashworthiness design problem are obtained. IROA introduces an autonomous foraging mechanism, giving each search agent a small chance to search for food randomly or based on the current food location. ROLGWO proposes a modified parameter "C" strategy to balance exploration and exploitation in GWO, and additionally introduces a random opposition-based learning strategy to help the population escape local optima. All the parameters of these algorithms are recorded in Table 14. Under the same experimental parameters (500 iterations and 30 search agents), ASGWO was tested on this problem; its best result of 22.871876 ranks first among these algorithms. Therefore, ASGWO has outstanding advantages in solving the car crashworthiness design problem.
Algorithm | ASGWO | IROA | SMA | HHOCM | ROLGWO | MALO |
x1 | 0.500041 | 0.5 | 0.5 | 0.50016380 | 0.5012548 | 0.5 |
x2 | 1.1345446 | 1.23105679 | 1.22739249 | 1.248612358 | 1.2455510 | 1.22810442 |
x3 | 0.5000862 | 0.5 | 0.5 | 0.65955791 | 0.50004578 | 0.5 |
x4 | 1.2790514 | 1.19766142 | 1.20428741 | 1.098515362 | 1.18025396 | 1.21264054 |
x5 | 0.5002007 | 0.5 | 1.20428741 | 0.757988599 | 0.50003477 | 0.5 |
x6 | 1.4999609 | 1.07429465 | 1.04185969 | 0.76726834 | 1.16588047 | 1.30804056 |
x7 | 0.5000544 | 0.5 | 0.5 | 0.500055187 | 0.50008827 | 0.5 |
x8 | 0.3449606 | 0.3449999 | 0.345 | 0.34310489 | 0.3448952 | 0.34499984 |
x9 | 0.3324805 | 0.3443286 | 0.3424831 | 0.19203186 | 0.2995826 | 0.28040129 |
x10 | −16.33320 | 0.9523965 | 0.2967546 | 2.89880509 | 3.5950796 | 0.42429341 |
x11 | −2.149117 | 1.0114033 | 1.1579641 | −4.5511746 | 2.2901802 | 4.65653809 |
fmin | 22.871876 | 23.188937 | 23.191021 | 24.483584 | 23.222427 | 23.229404 |
Parameter | Value |
same initialization configuration | Population Size is 25, Maxiter is 500 |
IROA | C = 0.1; α∈[−1, 9]; μ = 0.499; z = 0.07; y = 0.1 |
SMA | z = 0.03 |
HHOCM | The value of escaping energy decreases from 2 to 0, mutation rate decreases linearly from 1 to 0 |
ROLGWO | r3∈ [0, 1] |
MALO | Switch possibility = 0.5 |
Data mining is currently a highly discussed topic, with the aim of acquiring and processing large datasets to extract actionable knowledge. However, the high dimensionality of feature space poses a significant challenge in data mining, mainly due to the computational complexity involved. Feature selection has emerged as a solution to overcome this challenge. It aims to choose the most relevant subset of features from the original feature set to reduce dimensionality, lower computational costs, and significantly enhance the efficiency of models. Moreover, feature selection can reduce feature redundancy, thereby improving the generalization ability of models. Therefore, feature selection is an indispensable part of the machine learning process, enabling the construction of simpler, more efficient, and more interpretable machine learning models.
This paper treats feature selection as a multi-objective problem: minimizing the number of selected features while maximizing the F-measure. The goal of feature selection is to decide, for each feature, whether it is selected, which makes it a binary problem. However, the positions generated by ASGWO are continuous and cannot be applied directly to feature selection. Therefore, this paper sets the search space of ASGWO to [0, 1] and maps the positions of the standard ASGWO agents to the binary space using the simplest transformation function, shown in the equation below.
$$
x_{ij}^{binary}=\begin{cases}0, & x_{ij}\le 0.5\\ 1, & x_{ij}>0.5\end{cases} \tag{5.4}
$$
where $x_{ij}$ is the position value of the j-th dimension of the i-th search agent, and $x_{ij}^{binary}$ is that value mapped to the binary space.
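Eq (5.4) and the two objectives can be sketched together. The scalarization in `fs_fitness`, a weighted sum of F-measure and the fraction of selected features, is our illustrative choice; the paper itself treats the two objectives as a multi-objective pair:

```python
def to_binary(position, threshold=0.5):
    """Eq (5.4): map a continuous agent position in [0, 1]^d to a
    binary feature mask (1 = feature selected)."""
    return [1 if v > threshold else 0 for v in position]

def fs_fitness(position, f_measure_of, alpha=0.99):
    """Illustrative scalarization (our own weighted sum, not the
    paper's formulation): reward the classifier's F-measure on the
    selected subset and lightly penalize the fraction of features
    kept."""
    mask = to_binary(position)
    frac = sum(mask) / len(mask)
    return alpha * f_measure_of(mask) - (1.0 - alpha) * frac
```

Here `f_measure_of` stands in for the cross-validated KNN evaluation described below; any callable that scores a binary mask can be plugged in.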
This paper evaluates the performance of ASGWO using a KNN classifier with ten-fold cross-validation on the seven UCI datasets listed in Table 15. In each run, the F-measure value and the number of selected features are recorded, and the averages are taken over the ten runs. These results are then compared with those obtained for BASO, BGA, BPSO, and BGWO from the literature [44], and the comparisons are tabulated in Table 17. To ensure fairness, the same test parameters, listed in Table 16, are used for all five algorithms.
Datasets | No. of attributes | No. of samples |
Breast-w | 9 | 699 |
Credit-g | 20 | 1000 |
Dermatology | 34 | 366 |
Glass | 9 | 214 |
Ionosphere | 34 | 351 |
Lymphography | 18 | 148 |
Sonar | 60 | 208 |
Parameter | Value |
K for KNN | 3 |
Dimension of population | 10 |
Number of iterations | 100 |
Number of runs | 10 |
Acceleration constants in PSO | [2,2] |
Inertia w in BPSO | [0.9, 0.4] |
Parameter A in BGWO | min=0, max=2 |
Dataset | Breast-w | Credit-g | Dermato | Glass | Ionosph | Lymph. | Sonar | |
KNN | 0.965 | 0.593 | 0.873 | 0.591 | 0.817 | 0.712 | 0.816 | |
Average F-measure | BASO | 0.982 | 0.829 | 0.988 | 0.778 | 0.887 | 0.896 | 0.892 |
BGA | 0.983 | 0.831 | 0.988 | 0.750 | 0.887 | 0.896 | 0.892 | |
BPSO | 0.981 | 0.824 | 0.987 | 0.753 | 0.870 | 0.893 | 0.880 | |
BGWO | 0.981 | 0.825 | 0.989 | 0.754 | 0.853 | 0.868 | 0.865 | |
ASGWO | 0.988 | 0.733 | 0.998 | 0.705 | 0.958 | 0.955 | 0.969 | |
Selected feature | BASO | 6.5 | 11.1 | 19.7 | 7.6 | 11.4 | 10.8 | 27.5 |
BGA | 6.3 | 10.6 | 19.4 | 6.3 | 11.4 | 10.2 | 28.8 | |
BPSO | 6.5 | 9.9 | 20 | 7.8 | 11.1 | 9.9 | 28.9 | |
BGWO | 7.1 | 13.9 | 25.6 | 7.4 | 11.7 | 13.3 | 41.6 | |
ASGWO | 4.74 | 10.04 | 17.82 | 4.92 | 13.24 | 7.6 | 27.46 |
Analyzing Table 17, we can observe that the F-measure results of the KNN classifier with ASGWO-based feature selection significantly outperform those of the KNN classifier applied directly, while the number of features is effectively reduced. Therefore, ASGWO can be effectively applied to feature selection, improving classification accuracy and reducing computational complexity. Notably, on the Lymphography and Sonar datasets, ASGWO outperforms BASO, BGA, BPSO, and BGWO in F-measure while also selecting the fewest features. On the Breast-w and Dermatology datasets, although ASGWO has only a slight advantage in F-measure, the number of features is reduced by 24% and 30.3%, respectively, significantly improving the efficiency of the classifier. This is due to ASGWO's self-learning ability, which allows it to fully associate the current state with each feature, enhancing its assessment of feature usefulness. Furthermore, on the Ionosphere dataset, ASGWO achieves the best F-measure, albeit with a slightly higher number of features than the other algorithms; this tradeoff of slightly increased computational cost for an improved F-measure is entirely acceptable. However, on the Credit-g dataset, ASGWO performs poorly: the sample size is large relative to the feature size, and ASGWO's simple binary mapping cannot make good decisions when dealing with low-dimensional features in multi-class classification tasks.
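For reference, the F-measure reported throughout this comparison is the harmonic mean of precision and recall; a minimal computation from a binary confusion matrix:

```python
def f_measure(tp, fp, fn):
    """F-measure (F1): harmonic mean of precision tp/(tp+fp) and
    recall tp/(tp+fn), computed from true positives, false positives
    and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)
```

Because the harmonic mean is dominated by the smaller of the two rates, a feature subset that trades many false positives for a few true positives is penalized more strongly than plain accuracy would suggest.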
To address the low convergence accuracy, slow convergence speed, and tendency to become trapped in local optima of the original grey wolf optimizer (GWO), this article proposes an adaptive dynamic self-learning grey wolf optimizer (ASGWO). First, a nonlinear piecewise convergence factor is proposed to ensure sufficient exploration and rapid exploitation. Second, a dynamic logarithmic spiral whose shape depends on the iteration count is used to guide the wolves toward the best wolf, expanding the search range and improving population diversity in the early iterations, and enhancing the algorithm's local exploitation ability to accelerate convergence in the later iterations. Third, a dynamic self-learning step size, based on rational learning of the evolution success rate and the iteration count, is introduced to improve convergence speed. By self-learning from current information, the algorithm computes an appropriate step size, preventing the step from being either too cautious or too aggressive and thus avoiding oscillation and degraded convergence; when the algorithm becomes trapped in a local optimum, increasing the step size helps it escape. Finally, a new position update strategy is proposed: based on the evolution success rate, either the original or the new position update strategy is selected. The new strategy adds a randomly generated search agent as a learning sample, which helps improve population diversity and expand the search range in the early stage of the algorithm and helps escape local optima in the later stage. Its learning samples also include the global optimal position, which provides effective guidance for the direction of evolution.
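The success-rate-based choice between the original and the new position update can be sketched as follows. This is a hedged illustration only: the threshold, the random coefficients, and the random-agent bounds are assumptions for demonstration, since the article's exact formulas are not reproduced in this section.

```python
import numpy as np

# Hedged sketch of the strategy switch described above. The threshold,
# coefficients, and random-agent bounds are illustrative assumptions,
# not the article's exact formulas.
def update_position(x, best, success_rate, rng, lo=-5.0, hi=5.0, thresh=0.2):
    if success_rate >= thresh:
        # original-style move: step toward the current best wolf
        return x + rng.random(x.size) * (best - x)
    # new strategy: a randomly generated search agent widens the search in
    # the early stage and helps jump out of local optima later, while the
    # global best still guides the direction of evolution
    random_agent = rng.uniform(lo, hi, x.size)
    r1, r2 = rng.random(x.size), rng.random(x.size)
    return x + r1 * (best - x) + r2 * (random_agent - x)

rng = np.random.default_rng(0)
x, best = np.zeros(4), np.ones(4)
print(update_position(x, best, success_rate=0.9, rng=rng))
```

When the recent success rate is high, the move stays between the current position and the best wolf; when it is low, the extra random agent can pull the wolf outside that segment, which is what enables escape from local optima.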
In addition, dual convergence factors control the influence of the two learning samples according to the algorithm's state, which is crucial in the position update strategy: one convergence factor ensures the leadership of the global optimum, while the other expands exploration in the early stage and increases the chance of jumping out of local optima in the later stage without hindering exploitation. The performance of ASGWO was tested on 23 benchmark functions and compared with the classical algorithms GWO, PSO, and WOA, and with the newer GWO variants SOGWO and EOGWO. The experimental results show that ASGWO achieves higher convergence accuracy, a faster convergence rate, and a stronger ability to escape local optima than the original GWO, the classical algorithms, and the newer variants. Moreover, the results on real engineering problems show that ASGWO also performs well in unknown search spaces, demonstrating its applicability to real problems and to feature selection. However, on valley-shaped test functions, where changes near the local optimum are not obvious, there is still considerable room to improve the convergence accuracy of ASGWO; this will be our future research direction.
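For reference, the kind of benchmark test described above can be illustrated with the original GWO update driven by a nonlinear convergence factor. This is a sketch under stated assumptions: the cosine decay stands in for the article's piecewise factor, and the spiral term, self-learning step, and new position update strategy are not reproduced here.

```python
import numpy as np

# Illustrative sketch only: the original GWO update with a nonlinear
# convergence factor substituted for the usual linear decay. The cosine
# form is an assumption; ASGWO's piecewise factor, spiral term, and
# self-learning step are not reproduced here.
def sphere(x):
    """A standard unimodal benchmark function with optimum f(0) = 0."""
    return float(np.sum(x ** 2))

def gwo(f, dim=5, n=20, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        leaders = X[np.argsort(fit)[:3]]          # alpha, beta, delta wolves
        a = 2 * np.cos(np.pi * t / (2 * iters))   # nonlinear decay from 2 to 0
        for i in range(n):
            cand = np.zeros(dim)
            for lead in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                cand += lead - A * np.abs(C * lead - X[i])
            X[i] = cand / 3                       # average of the three guides
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], float(fit.min())

best, val = gwo(sphere)
print(val < 1e-3)   # the pack converges close to the optimum
```

On a unimodal function such as the sphere, the decaying factor drives `A` toward zero, so the pack collapses onto the leaders; the variants discussed above differ mainly in how this decay and the position update are shaped.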
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors are grateful to the editor and reviewers for their constructive comments and suggestions, which have improved the presentation. This work is financially supported by Youth Fund of Fundamental Research Funds of Jiangnan University, JUSRP122034.
The authors declare there is no conflict of interest.
[1] | A. Latif, A. Rasheed, U. Sajid, J. Ahmed, N. Ali, N. I. Ratyal, et al., Content-based image retrieval and feature extraction: a comprehensive review, Math. Probl. Eng., 2019. |
[2] |
B. Gupta, M. Tiwari, S. S. Lamba, Visibility improvement and mass segmentation of mammogram images using quantile separated histogram equalisation with local contrast enhancement, CAAI Trans. Intell. Technol., 4 (2019), 73–79. doi: 10.1049/trit.2018.1006
![]() |
[3] |
S. Maheshwari, V. Kanhangad, R. B. Pachori, S. V. Bhandary, U. R. Acharya, Automated glaucoma diagnosis using bit-plane slicing and local binary pattern techniques, Comput. Biol. Med., 105 (2019), 72–80. doi: 10.1016/j.compbiomed.2018.11.028
![]() |
[4] |
S. Masood, M. Sharif, M. Raza, M. Yasmin, M. Iqbal, M. Younus Javed, Glaucoma disease: A survey, Curr. Med. Imaging, 11 (2015), 272–283. doi: 10.2174/157340561104150727171246
![]() |
[5] |
U. R. Acharya, S. Bhat, J. E. Koh, S. V. Bhandary, H. Adeli, A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images, Comput. Biol. Med., 88 (2017), 72–83. doi: 10.1016/j.compbiomed.2017.06.022
![]() |
[6] |
B. J. Shingleton, L. S. Gamell, M. W. O'Donoghue, S. L. Baylus, R. King, Long-term changes in intraocular pressure after clear corneal phacoemulsification: Normal patients versus glaucoma suspect and glaucoma patients, J. Cataract. Refract. Surg., 25 (1999), 885–890. doi: 10.1016/S0886-3350(99)00107-8
![]() |
[7] |
K. F. Jamous, M. Kalloniatis, M. P. Hennessy, A. Agar, A. Hayen, B. Zangerl, Clinical model assisting with the collaborative care of glaucoma patients and suspects, Clin. Exp. Ophthalmol., 43 (2015), 308–319. doi: 10.1111/ceo.12466
![]() |
[8] | T. Khalil, M. U. Akram, S. Khalid, S. H. Dar, N. Ali, A study to identify limitations of existing automated systems to detect glaucoma at initial and curable stage, Int. J. Imaging Syst. Technol., 8 (2021). |
[9] |
H. A. Quigley, A. T. Broman, The number of people with glaucoma worldwide in 2010 and 2020, Br. J. Ophthalmol., 90 (2006), 262–267. doi: 10.1136/bjo.2005.081224
![]() |
[10] |
C. Costagliola, R. Dell'Omo, M. R. Romano, M. Rinaldi, L. Zeppa, F. Parmeggiani, Pharmacotherapy of intraocular pressure: part i. parasympathomimetic, sympathomimetic and sympatholytics, Expert Opin. Pharmacother., 10 (2009), 2663–2677. doi: 10.1517/14656560903300103
![]() |
[11] | A. A. Salam, M. U. Akram, K. Wazir, S. M. Anwar, M. Majid, Autonomous glaucoma detection from fundus image using cup to disc ratio and hybrid features, in ISSPIT.), IEEE, 2015,370–374. |
[12] | R. JMJ, Leading causes of blindness worldwide, Bull. Soc. Belge. Ophtalmol., 283 (2002), 19–25. |
[13] | M. K. Dutta, A. K. Mourya, A. Singh, M. Parthasarathi, R. Burget, K. Riha, Glaucoma detection by segmenting the super pixels from fundus colour retinal images, in 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom.), IEEE, 2014, 86–90. |
[14] | C. E. Willoughby, D. Ponzin, S. Ferrari, A. Lobo, K. Landau, Y. Omidi, Anatomy and physiology of the human eye: effects of mucopolysaccharidoses disease on structure and function–a review, Clin. Exp. Ophthalmol., 38 (2010), 2–11. |
[15] |
M. S. Haleem, L. Han, J. Van Hemert, B. Li, Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review, Comput. Med. Imaging Graph., 37 (2013), 581–596. doi: 10.1016/j.compmedimag.2013.09.005
![]() |
[16] |
A. Sarhan, J. Rokne, R. Alhajj, Glaucoma detection using image processing techniques: A literature review, Comput. Med. Imaging Graph., 78 (2019), 101657. doi: 10.1016/j.compmedimag.2019.101657
![]() |
[17] |
H. A. Quigley, Neuronal death in glaucoma, Prog. Retin. Eye Res., 18 (1999), 39–57. doi: 10.1016/S1350-9462(98)00014-7
![]() |
[18] |
N. Salamat, M. M. S. Missen, A. Rashid, Diabetic retinopathy techniques in retinal images: A review, Artif. Intell. Med., 97 (2019), 168–188. doi: 10.1016/j.artmed.2018.10.009
![]() |
[19] | F. Bokhari, T. Syedia, M. Sharif, M. Yasmin, S. L. Fernandes, Fundus image segmentation and feature extraction for the detection of glaucoma: A new approach, Curr. Med. Imaging Rev., 14 (2018), 77–87. |
[20] | A. Agarwal, S. Gulia, S. Chaudhary, M. K. Dutta, R. Burget, K. Riha, Automatic glaucoma detection using adaptive threshold based technique in fundus image, in (TSP.), IEEE, 2015,416–420. |
[21] | L. Xiong, H. Li, Y. Zheng, Automatic detection of glaucoma in retinal images, in 2014 9th IEEE Conference on Industrial Electronics and Applications, IEEE, 2014, 1016–1019. |
[22] |
A. Diaz-Pinto, S. Morales, V. Naranjo, T. Köhler, J. M. Mossi, A. Navea, Cnns for automatic glaucoma assessment using fundus images: An extensive validation, Biomed. Eng. Online, 18 (2019), 29. doi: 10.1186/s12938-019-0649-y
![]() |
[23] | T. Kersey, C. I. Clement, P. Bloom, M. F. Cordeiro, New trends in glaucoma risk, diagnosis & management, Indian J. Med. Res., 137 (2013), 659. |
[24] |
A. L. Coleman, S. Miglior, Risk factors for glaucoma onset and progression, Surv. Ophthalmol., 53 (2008), S3–S10. doi: 10.1016/j.survophthal.2008.08.006
![]() |
[25] |
T. Saba, S. T. F. Bokhari, M. Sharif, M. Yasmin, M. Raza, Fundus image classification methods for the detection of glaucoma: A review, Microsc. Res. Tech., 81 (2018), 1105–1121. doi: 10.1002/jemt.23094
![]() |
[26] |
T. Aung, L. Ocaka, N. D. Ebenezer, A. G. Morris, M. Krawczak, D. L. Thiselton, et al., A major marker for normal tension glaucoma: association with polymorphisms in the opa1 gene, Hum. Genet., 110 (2002), 52–56. doi: 10.1007/s00439-001-0645-7
![]() |
[27] | M. A. Khaimi, Canaloplasty: A minimally invasive and maximally effective glaucoma treatment, J. Ophthalmol., 2015. |
[28] |
M. Bechmann, M. J. Thiel, B. Roesen, S. Ullrich, M. W. Ulbig, K. Ludwig, Central corneal thickness determined with optical coherence tomography in various types of glaucoma, Br. J. Ophthalmol., 84 (2000), 1233–1237. doi: 10.1136/bjo.84.11.1233
![]() |
[29] |
D. Ahram, W. Alward, M. Kuehn, The genetic mechanisms of primary angle closure glaucoma, Eye., 29 (2015), 1251–1259. doi: 10.1038/eye.2015.124
![]() |
[30] |
R. Törnquist, Chamber depth in primary acute glaucoma, Br. J. Ophthalmol., 40 (1956), 421. doi: 10.1136/bjo.40.7.421
![]() |
[31] |
H. S. Sugar, F. A. Barbour, Pigmentary glaucoma*: A rare clinical entity, Am. J. Ophthalmol., 32 (1949), 90–92. doi: 10.1016/0002-9394(49)91112-5
![]() |
[32] |
H. S. Sugar, Pigmentary glaucoma: A 25-year review, Am. J. Ophthalmol., 62 (1966), 499–507. doi: 10.1016/0002-9394(66)91330-4
![]() |
[33] |
R. Ritch, U. Schlötzer-Schrehardt, A. G. Konstas, Why is glaucoma associated with exfoliation syndrome?, Prog. Retin. Eye Res., 22 (2003), 253–275. doi: 10.1016/S1350-9462(02)00014-9
![]() |
[34] |
J. L.-O. De, C. A. Girkin, Ocular trauma-related glaucoma., Ophthalmol. Clin. North. Am., 15 (2002), 215–223. doi: 10.1016/S0896-1549(02)00011-1
![]() |
[35] |
E. Milder, K. Davis, Ocular trauma and glaucoma, Int. Ophthalmol. Clin., 48 (2008), 47–64. doi: 10.1097/IIO.0b013e318187fcb8
![]() |
[36] |
T. G. Papadaki, I. P. Zacharopoulos, L. R. Pasquale, W. B. Christen, P. A. Netland, C. S. Foster, Long-term results of ahmed glaucoma valve implantation for uveitic glaucoma, Am. J. Ophthalmol., 144 (2007), 62–69. doi: 10.1016/j.ajo.2007.03.013
![]() |
[37] |
H. C. Laganowski, M. G. K. Muir, R. A. Hitchings, Glaucoma and the iridocorneal endothelial syndrome, Arch. Ophthalmol., 110 (1992), 346–350. doi: 10.1001/archopht.1992.01080150044025
![]() |
[38] |
C. L. Ho, D. S. Walton, Primary congenital glaucoma: 2004 update, J. Pediatr. Ophthalmol. Strabismus., 41 (2004), 271–288. doi: 10.3928/01913913-20040901-11
![]() |
[39] |
M. Erdurmuş, R. Yağcı, Ö. Atış, R. Karadağ, A. Akbaş, İ. F. Hepşen, Antioxidant status and oxidative stress in primary open angle glaucoma and pseudoexfoliative glaucoma, Curr. Eye Res., 36 (2011), 713–718. doi: 10.3109/02713683.2011.584370
![]() |
[40] |
S. S. Hayreh, Neovascular glaucoma, Prog. Retin. Eye Res., 26 (2007), 470–485. doi: 10.1016/j.preteyeres.2007.06.001
![]() |
[41] |
D. A. Lee, E. J. Higginbotham, Glaucoma and its treatment: a review, Am. J. Health. Syst. Pharm., 62 (2005), 691–699. doi: 10.1093/ajhp/62.7.691
![]() |
[42] | X. Wang, R. Khan, A. Coleman, Device-modified trabeculectomy for glaucoma, Cochrane. Database Syst. Rev.. |
[43] |
K. R. Sung, J. S. Kim, G. Wollstein, L. Folio, M. S. Kook, J. S. Schuman, Imaging of the retinal nerve fibre layer with spectral domain optical coherence tomography for glaucoma diagnosis, Br. J. Ophthalmol., 95 (2011), 909–914. doi: 10.1136/bjo.2010.186924
![]() |
[44] |
M. E. Karlen, E. Sanchez, C. C. Schnyder, M. Sickenberg, A. Mermoud, Deep sclerectomy with collagen implant: medium term results, Br. J. Ophthalmol., 83 (1999), 6–11. doi: 10.1136/bjo.83.1.6
![]() |
[45] | J. Carrillo, L. Bautista, J. Villamizar, J. Rueda, M. Sanchez, D. Rueda, Glaucoma detection using fundus images of the eye, 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA), IEEE, 2019, 1–4. |
[46] | N. Sengar, M. K. Dutta, R. Burget, M. Ranjoha, Automated detection of suspected glaucoma in digital fundus images, 2017 40th International Conference on Telecommunications and Signal Processing (TSP), IEEE, 2017,749–752. |
[47] | A. Poshtyar, J. Shanbehzadeh, H. Ahmadieh, Automatic measurement of cup to disc ratio for diagnosis of glaucoma on retinal fundus images, in 2013 6th International Conference on Biomedical Engineering and Informatics, IEEE, 2013, 24–27. |
[48] | F. Khan, S. A. Khan, U. U. Yasin, I. ul Haq, U. Qamar, Detection of glaucoma using retinal fundus images, in The 6th 2013 Biomedical Engineering International Conference, IEEE, 2013, 1–5. |
[49] |
H. Yamada, T. Akagi, H. Nakanishi, H. O. Ikeda, Y. Kimura, K. Suda, et al., Microstructure of peripapillary atrophy and subsequent visual field progression in treated primary open-angle glaucoma, Ophthalmology, 123 (2016), 542–551. doi: 10.1016/j.ophtha.2015.10.061
![]() |
[50] |
J. B. Jonas, Clinical implications of peripapillary atrophy in glaucoma, Curr. Opin. Ophthalmol., 16 (2005), 84–88. doi: 10.1097/01.icu.0000156135.20570.30
![]() |
[51] |
K. H. Park, G. Tomita, S. Y. Liou, Y. Kitazawa, Correlation between peripapillary atrophy and optic nerve damage in normal-tension glaucoma, Ophthalmol., 103 (1996), 1899–1906. doi: 10.1016/S0161-6420(96)30409-0
![]() |
[52] |
F. A. Medeiros, L. M. Zangwill, C. Bowd, R. M. Vessani, R. Susanna Jr, R. N. Weinreb, Evaluation of retinal nerve fiber layer, optic nerve head, and macular thickness measurements for glaucoma detection using optical coherence tomography, Am. J. Ophthalmol., 139 (2005), 44–55. doi: 10.1016/j.ajo.2004.08.069
![]() |
[53] |
G. Wollstein, J. S. Schuman, L. L. Price, A. Aydin, P. C. Stark, E. Hertzmark, et al., Optical coherence tomography longitudinal evaluation of retinal nerve fiber layer thickness in glaucoma, Arch. Ophthalmol., 123 (2005), 464–470. doi: 10.1001/archopht.123.4.464
![]() |
[54] |
M. Armaly, The optic cup in the normal eye: I. cup width, depth, vessel displacement, ocular tension and outflow facility, Am. J. Ophthalmol., 68 (1969), 401–407. doi: 10.1016/0002-9394(69)90702-8
![]() |
[55] | M. Galdos, A. Bayon, F. D. Rodriguez, C. Mico, S. C. Sharma, E. Vecino, Morphology of retinal vessels in the optic disk in a göttingen minipig experimental glaucoma model, Vet. Ophthalmol., 15 (2012), 36–46. |
[56] | W. Zhou, Y. Yi, Y. Gao, J. Dai, Optic disc and cup segmentation in retinal images for glaucoma diagnosis by locally statistical active contour model with structure prior, Comput. Math. Methods. Med., 2019. |
[57] |
P. Sharma, P. A. Sample, L. M. Zangwill, J. S. Schuman, Diagnostic tools for glaucoma detection and management, Surv. Ophthalmol., 53 (2008), S17–S32. doi: 10.1016/j.survophthal.2008.08.003
![]() |
[58] | M. J. Greaney, D. C. Hoffman, D. F. Garway-Heath, M. Nakla, A. L. Coleman, J. Caprioli, Comparison of optic nerve imaging methods to distinguish normal eyes from those with glaucoma, Invest. Ophthalmol. Vis. Sci., 43 (2002), 140–145. |
[59] |
R. Bock, J. Meier, L. G. Nyúl, J. Hornegger, G. Michelson, Glaucoma risk index: automated glaucoma detection from color fundus images, Med. Image. Anal., 14 (2010), 471–481. doi: 10.1016/j.media.2009.12.006
![]() |
[60] |
K. Chan, T.-W. Lee, P. A. Sample, M. H. Goldbaum, R. N. Weinreb, T. J. Sejnowski, Comparison of machine learning and traditional classifiers in glaucoma diagnosis, IEEE Trans. Biomed. Eng., 49 (2002), 963–974. doi: 10.1109/TBME.2002.802012
![]() |
[61] | N. Varachiu, C. Karanicolas, M. Ulieru, Computational intelligence for medical knowledge acquisition with application to glaucoma, in Proceedings First IEEE International Conference on Cognitive Informatics, IEEE, 2002,233–238. |
[62] | J. Yu, S. S. R. Abidi, P. H. Artes, A. McIntyre, M. Heywood, Automated optic nerve analysis for diagnostic support inglaucoma, in CBMS'05., IEEE, 2005, 97–102. |
[63] | R. Bock, J. Meier, G. Michelson, L. G. Nyúl and J. Hornegger, Classifying glaucoma with image-based features from fundus photographs, in Joint Pattern Recognition Symposium., Springer, 2007,355–364. |
[64] | Y. Hatanaka, A. Noudo, C. Muramatsu, A. Sawada, T. Hara, T. Yamamoto, et al., Vertical cup-to-disc ratio measurement for diagnosis of glaucoma on fundus images, in Medical Imaging 2010: Computer-Aided Diagnosis, vol. 7624, International Society for Optics and Photonics, 2010, 76243C. |
[65] | S. S. Abirami, S. G. Shoba, Glaucoma images classification using fuzzy min-max neural network based on data-core, IJISME., 1 (2013), 9–15. |
[66] |
J. Liu, Z. Zhang, D. W. K. Wong, Y. Xu, F. Yin, J. Cheng, et al., Automatic glaucoma diagnosis through medical imaging informatics, Journal of the American Medical Informatics Association, 20 (2013), 1021–1027. doi: 10.1136/amiajnl-2012-001336
![]() |
[67] | A. Almazroa, R. Burman, K. Raahemifar, V. Lakshminarayanan, Optic disc and optic cup segmentation methodologies for glaucoma image detection: A survey, J. Ophthalmol., 2015. |
[68] | A. Dey, S. K. Bandyopadhyay, Automated glaucoma detection using support vector machine classification method, J. Adv. Med. Med. Res., 1–12. |
[69] |
F. R. Silva, V. G. Vidotti, F. Cremasco, M. Dias, E. S. Gomi, V. P. Costa, Sensitivity and specificity of machine learning classifiers for glaucoma diagnosis using spectral domain oct and standard automated perimetry, Arq. Bras. Oftalmol., 76 (2013), 170–174. doi: 10.1590/S0004-27492013000300008
![]() |
[70] |
M. U. Akram, A. Tariq, S. Khalid, M. Y. Javed, S. Abbas, U. U. Yasin, Glaucoma detection using novel optic disc localization, hybrid feature set and classification techniques, Australas. Phys. Eng. Sci. Med., 38 (2015), 643–655. doi: 10.1007/s13246-015-0377-y
![]() |
[71] | A. T. A. Al-Sammarraie, R. R. Jassem, T. K. Ibrahim, Mixed convection heat transfer in inclined tubes with constant heat flux, Eur. J. Sci. Res., 97 (2013), 144–158. |
[72] |
M. R. K. Mookiah, U. R. Acharya, C. M. Lim, A. Petznick, J. S. Suri, Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features, Knowl. Based. Syst., 33 (2012), 73–82. doi: 10.1016/j.knosys.2012.02.010
![]() |
[73] | C. Raja, N. Gangatharan, Glaucoma detection in fundal retinal images using trispectrum and complex wavelet-based features, Eur. J. Sci. Res., 97 (2013), 159–171. |
[74] | G. Lim, Y. Cheng, W. Hsu, M. L. Lee, Integrated optic disc and cup segmentation with deep learning, in ICTAI., IEEE, 2015,162–169. |
[75] | K.-K. Maninis, J. Pont-Tuset, P. Arbeláez, L. Van Gool, Deep retinal image understanding, in International conference on medical image computing and computer-assisted intervention, Springer, 2016,140–148. |
[76] |
B. Al-Bander, W. Al-Nuaimy, B. M. Williams, Y. Zheng, Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc, Biomed. Signal Process Control., 40 (2018), 91–101. doi: 10.1016/j.bspc.2017.09.008
![]() |
[77] |
A. Mitra, P. S. Banerjee, S. Roy, S. Roy, S. K. Setua, The region of interest localization for glaucoma analysis from retinal fundus image using deep learning, Comput. Methods Programs Biomed., 165 (2018), 25–35. doi: 10.1016/j.cmpb.2018.08.003
![]() |
[78] |
S. S. Kruthiventi, K. Ayush, R. V. Babu, Deepfix: A fully convolutional neural network for predicting human eye fixations, IEEE Trans. Image. Process., 26 (2017), 4446–4456. doi: 10.1109/TIP.2017.2710620
![]() |
[79] | M. Norouzifard, A. Nemati, A. Abdul-Rahman, H. GholamHosseini, R. Klette, A comparison of transfer learning techniques, deep convolutional neural network and multilayer neural network methods for the diagnosis of glaucomatous optic neuropathy, in International Computer Symposium, Springer, 2018,627–635. |
[80] | X. Sun, Y. Xu, W. Zhao, T. You, J. Liu, Optic disc segmentation from retinal fundus images via deep object detection networks, in EMBC., IEEE, 2018, 5954–5957. |
[81] | Z. Ghassabi, J. Shanbehzadeh, K. Nouri-Mahdavi, A unified optic nerve head and optic cup segmentation using unsupervised neural networks for glaucoma screening, in EMBC., IEEE, 2018, 5942–5945. |
[82] |
J. H. Tan, S. V. Bhandary, S. Sivaprasad, Y. Hagiwara, A. Bagchi, U. Raghavendra, et al., Age-related macular degeneration detection using deep convolutional neural network, Future Gener. Comput. Syst., 87 (2018), 127–135. doi: 10.1016/j.future.2018.05.001
![]() |
[83] |
J. Zilly, J. M. Buhmann, D. Mahapatra, Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation, Comput. Med. Imaging Graph., 55 (2017), 28–41. doi: 10.1016/j.compmedimag.2016.07.012
![]() |
[84] |
J. Cheng, J. Liu, Y. Xu, F. Yin, D. W. K. Wong, N.-M. Tan, et al., Superpixel classification based optic disc and optic cup segmentation for glaucoma screening, IEEE Trans. Med. Imaging, 32 (2013), 1019–1032. doi: 10.1109/TMI.2013.2247770
![]() |
[85] | H. Ahmad, A. Yamin, A. Shakeel, S. O. Gillani, U. Ansari, Detection of glaucoma using retinal fundus images, in iCREATE., IEEE, 2014,321–324. |
[86] | S. Kavitha, S. Karthikeyan, K. Duraiswamy, Neuroretinal rim quantification in fundus images to detect glaucoma, IJCSNS., 10 (2010), 134–140. |
[87] | Z. Zhang, B. H. Lee, J. Liu, D. W. K. Wong, N. M. Tan, J. H. Lim, et al., Optic disc region of interest localization in fundus image for glaucoma detection in argali, in 2010 5th IEEE Conference on Industrial Electronics and Applications, IEEE, 2010, 1686–1689. |
[88] |
D. Welfer, J. Scharcanski, C. M. Kitamura, M. M. Dal Pizzol, L. W. Ludwig, D. R. Marinho, Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach, Computers Biol. Med., 40 (2010), 124–137. doi: 10.1016/j.compbiomed.2009.11.009
![]() |
[89] |
H. Tjandrasa, A. Wijayanti, N. Suciati, Optic nerve head segmentation using hough transform and active contours, Telkomnika, 10 (2012), 531. doi: 10.12928/telkomnika.v10i3.833
![]() |
[90] | M. Tavakoli, M. Nazar, A. Golestaneh, F. Kalantari, Automated optic nerve head detection based on different retinal vasculature segmentation methods and mathematical morphology, in NSS/MIC., IEEE, 2017, 1–7. |
[91] |
P. Bibiloni, M. González-Hidalgo, S. Massanet, A real-time fuzzy morphological algorithm for retinal vessel segmentation, J. Real Time Image Process., 16 (2019), 2337–2350. doi: 10.1007/s11554-018-0748-1
![]() |
[92] | A. Agarwal, A. Issac, A. Singh, M. K. Dutta, Automatic imaging method for optic disc segmentation using morphological techniques and active contour fitting, in 2016 Ninth International Conference on Contemporary Computing (IC3), IEEE, 2016, 1–5. |
[93] | S. Pal, S. Chatterjee, Mathematical morphology aided optic disk segmentation from retinal images, in 2017 3rd International Conference on Condition Assessment Techniques in Electrical Systems (CATCON), IEEE, 2017,380–385. |
[94] | L. Wang, A. Bhalerao, Model based segmentation for retinal fundus images, in Scandinavian Conference on Image Analysis, Springer, 2003,422–429. |
[95] | G. Deng, L. Cahill, An adaptive gaussian filter for noise reduction and edge detection, in 1993 IEEE conference record nuclear science symposium and medical imaging conference, IEEE, 1993, 1615–1619. |
[96] |
K. A. Vermeer, F. M. Vos, H. G. Lemij, A. M. Vossepoel, A model based method for retinal blood vessel detection, Comput. Biol. Med., 34 (2004), 209–219. doi: 10.1016/S0010-4825(03)00055-6
![]() |
[97] | J. I. Orlando, E. Prokofyeva, M. B. Blaschko, A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images, IEEE Trans. Biomed. Eng., 64 (2016), 16–27. |
[98] | R. Ingle, P. Mishra, Cup segmentation by gradient method for the assessment of glaucoma from retinal image, Int. J. Latest Trends. Eng. Technol., 4 (2013), 2540–2543. |
[99] | G. D. Joshi, J. Sivaswamy, S. Krishnadas, Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment, IEEE Trans. Biomed. Eng., 30 (2011), 1192–1205. |
[100] | W. W. K. Damon, J. Liu, T. N. Meng, Y. Fengshou, W. T. Yin, Automatic detection of the optic cup using vessel kinking in digital retinal fundus images, in 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), IEEE, 2012, 1647–1650. |
[101] | D. Finkelstein, Kinks, J. Math. Phys., 7 (1966), 1218–1225. |
[102] | Y. Xu, L. Duan, S. Lin, X. Chen, D. W. K. Wong, T. Y. Wong, et al., Optic cup segmentation for glaucoma detection using low-rank superpixel representation, in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2014,788–795. |
[103] | Y. Xu, J. Liu, S. Lin, D. Xu, C. Y. Cheung, T. Aung, et al., Efficient optic cup detection from intra-image learning with retinal structure priors, in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2012, 58–65. |
[104] | M. A. Aslam, M. N. Salik, F. Chughtai, N. Ali, S. H. Dar, T. Khalil, Image classification based on mid-level feature fusion, in 2019 15th International Conference on Emerging Technologies (ICET), IEEE, 2019, 1–6. |
[105] |
N. Ali, K. B. Bajwa, R. Sablatnig, S. A. Chatzichristofis, Z. Iqbal, M. Rashid, et al., A novel image retrieval based on visual words integration of sift and surf, PloS. one., 11 (2016), e0157428. doi: 10.1371/journal.pone.0157428
![]() |
[106] | C.-Y. Ho, T.-W. Pai, H.-T. Chang, H.-Y. Chen, An atomatic fundus image analysis system for clinical diagnosis of glaucoma, in 2011 International Conference on Complex, Intelligent, and Software Intensive Systems, IEEE, 2011,559–564. |
[107] |
H.-T. Chang, C.-H. Liu, T.-W. Pai, Estimation and extraction of b-cell linear epitopes predicted by mathematical morphology approaches, J. Mol. Recognit., 21 (2008), 431–441. doi: 10.1002/jmr.910
![]() |
[108] | D. Wong, J. Liu, J. Lim, N. Tan, Z. Zhang, S. Lu, et al., Intelligent fusion of cup-to-disc ratio determination methods for glaucoma detection in argali, in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, 2009, 5777–5780. |
[109] | F. Yin, J. Liu, D. W. K. Wong, N. M. Tan, C. Cheung, M. Baskaran, et al., Automated segmentation of optic disc and optic cup in fundus images for glaucoma diagnosis, in 2012 25th IEEE international symposium on computer-based medical systems (CBMS), IEEE, 2012, 1–6. |
[110] | S. Chandrika, K. Nirmala, Analysis of cdr detection for glaucoma diagnosis, IJERA., 2 (2013), 23–27. |
[111] | N. Annu, J. Justin, Automated classification of glaucoma images by wavelet energy features, IJERA., 5 (2013), 1716–1721. |
[112] |
H. Fu, J. Cheng, Y. Xu, D. W. K. Wong, J. Liu, X. Cao, Joint optic disc and cup segmentation based on multi-label deep network and polar transformation, IEEE Trans. Med. Imaging., 37 (2018), 1597–1605. doi: 10.1109/TMI.2018.2791488
![]() |
[113] | P. K. Dhar, T. Shimamura, Blind svd-based audio watermarking using entropy and log-polar transformation, JISA., 20 (2015), 74–83. |
[114] | D. Wong, J. Liu, J. Lim, H. Li, T. Wong, Automated detection of kinks from blood vessels for optic cup segmentation in retinal images, in Medical Imaging 2009: Computer-Aided Diagnosis, vol. 7260, International Society for Optics and Photonics, 2009, 72601J. |
[115] | A. Murthi, M. Madheswaran, Enhancement of optic cup to disc ratio detection in glaucoma diagnosis, in 2012 International Conference on Computer Communication and Informatics, IEEE, 2012, 1–5. |
[116] |
N. E. A. Khalid, N. M. Noor, N. M. Ariff, Fuzzy c-means (fcm) for optic cup and disc segmentation with morphological operation, Procedia. Comput. Sci., 42 (2014), 255–262. doi: 10.1016/j.procs.2014.11.060
![]() |
[117] | H. A. Nugroho, W. K. Oktoeberza, A. Erasari, A. Utami, C. Cahyono, Segmentation of optic disc and optic cup in colour fundus images based on morphological reconstruction, in 2017 9th International Conference on Information Technology and Electrical Engineering (ICITEE), IEEE, 2017, 1–5. |
[118] |
L. Zhang, M. Fisher, W. Wang, Retinal vessel segmentation using multi-scale textons derived from keypoints, Comput. Med. Imaging Graph., 45 (2015), 47–56. doi: 10.1016/j.compmedimag.2015.07.006
![]() |
[119] | X. Li, B. Aldridge, R. Fisher, J. Rees, Estimating the ground truth from multiple individual segmentations incorporating prior pattern analysis with application to skin lesion segmentation, in 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, IEEE, 2011, 1438–1441. |
[120] | M. M. Fraz, P. Remagnino, A. Hoppe, S. Velastin, B. Uyyanonvara, S. Barman, A supervised method for retinal blood vessel segmentation using line strength, multiscale gabor and morphological features, in 2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), IEEE, 2011,410–415. |
[121] | M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, M. D. Abramoff, Comparative study of retinal vessel segmentation methods on a new publicly available database, in Medical imaging 2004: Image processing, vol. 5370, International Society for Optics and Photonics, 2004,648–656. |
[122] | J. Y. Choi, T. K. Yoo, J. G. Seo, J. Kwak, T. T. Um, T. H. Rim, Multi-categorical deep learning neural network to classify retinal images: a pilot study employing small database, PloS one, 12. |
[123] | L. Zhang, M. Fisher, W. Wang, Comparative performance of texton based vascular tree segmentation in retinal images, in 2014 IEEE International Conference on Image Processing (ICIP), IEEE, 2014,952–956. |
[124] |
A. Septiarini, A. Harjoko, R. Pulungan, R. Ekantini, Automated detection of retinal nerve fiber layer by texture-based analysis for glaucoma evaluation, Healthc. Inform. Res., 24 (2018), 335–345. doi: 10.4258/hir.2018.24.4.335
![]() |
[125] | B. S. Kirar, D. K. Agrawal, Computer aided diagnosis of glaucoma using discrete and empirical wavelet transform from fundus images, IET Image Process., 13 (2018), 73–82. |
[126] | K. Nirmala, N. Venkateswaran, C. V. Kumar, J. S. Christobel, Glaucoma detection using wavelet based contourlet transform, in 2017 International Conference on Intelligent Computing and Control (I2C2), IEEE, 2017, 1–5. |
[127] |
A. A. G. Elseid, A. O. Hamza, Glaucoma detection using retinal nerve fiber layer texture features, J. Clin. Eng., 44 (2019), 180–185. doi: 10.1097/JCE.0000000000000361
![]() |
[128] | M. Claro, L. Santos, W. Silva, F. Araújo, N. Moura, A. Macedo, Automatic glaucoma detection based on optic disc segmentation and texture feature extraction, CLEI Electron. J., 19 (2016), 5. |
[129] | L. Abdel-Hamid, Glaucoma detection from retinal images using statistical and textural wavelet features, J. Digit Imaging., 1–8. |
[130] |
S. Maetschke, B. Antony, H. Ishikawa, G. Wollstein, J. Schuman, R. Garnavi, A feature agnostic approach for glaucoma detection in oct volumes, PloS. one., 14 (2019), e0219126. doi: 10.1371/journal.pone.0219915
![]() |
[131] |
D. C. Hood, A. S. Raza, On improving the use of oct imaging for detecting glaucomatous damage, Br. J. Ophthalmol., 98 (2014), ii1–ii9. doi: 10.1136/bjophthalmol-2014-305156
![]() |
[132] |
D. C. Hood, Improving our understanding, and detection, of glaucomatous damage: an approach based upon optical coherence tomography (oct), Prog.Retin. Eye Res., 57 (2017), 46–75. doi: 10.1016/j.preteyeres.2016.12.002
![]() |
[133] | H. S. Basavegowda, G. Dagnew, Deep learning approach for microarray cancer data classification, CAAI Trans. Intell. Technol., 5 (2020), 22–33. doi: 10.1049/trit.2019.0028 |
[134] | X. Chen, Y. Xu, D. W. K. Wong, T. Y. Wong, J. Liu, Glaucoma detection based on deep convolutional neural network, in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2015, 715–718. |
[135] | U. T. Nguyen, A. Bhuiyan, L. A. Park, K. Ramamohanarao, An effective retinal blood vessel segmentation method using multi-scale line detection, Pattern Recognit., 46 (2013), 703–715. doi: 10.1016/j.patcog.2012.08.009 |
[136] | J. H. Tan, U. R. Acharya, S. V. Bhandary, K. C. Chua, S. Sivaprasad, Segmentation of optic disc, fovea and retinal vasculature using a single convolutional neural network, J. Comput. Sci., 20 (2017), 70–79. doi: 10.1016/j.jocs.2017.02.006 |
[137] | Y. Chai, H. Liu, J. Xu, Glaucoma diagnosis based on both hidden features and domain knowledge through deep learning models, Knowl. Based Syst., 161 (2018), 147–156. doi: 10.1016/j.knosys.2018.07.043 |
[138] | A. Pal, M. R. Moorthy, A. Shahina, G-eyenet: A convolutional autoencoding classifier framework for the detection of glaucoma from retinal fundus images, in 2018 25th IEEE International Conference on Image Processing (ICIP), IEEE, 2018, 2775–2779. |
[139] | R. Asaoka, M. Tanito, N. Shibata, K. Mitsuhashi, K. Nakahara, Y. Fujino, et al., Validation of a deep learning model to screen for glaucoma using images from different fundus cameras and data augmentation, Ophthalmol. Glaucoma, 2 (2019), 224–231. doi: 10.1016/j.ogla.2019.03.008 |
[140] | X. Chen, Y. Xu, S. Yan, D. W. K. Wong, T. Y. Wong, J. Liu, Automatic feature learning for glaucoma detection based on deep learning, in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, 669–677. |
[141] | A. Singh, S. Sengupta, V. Lakshminarayanan, Glaucoma diagnosis using transfer learning methods, in Applications of Machine Learning, vol. 11139, International Society for Optics and Photonics, 2019, 111390U. |
[142] | S. Maheshwari, V. Kanhangad, R. B. Pachori, CNN-based approach for glaucoma diagnosis using transfer learning and LBP-based data augmentation, arXiv preprint. |
[143] | S. Sengupta, A. Singh, H. A. Leopold, T. Gulati, V. Lakshminarayanan, Ophthalmic diagnosis using deep learning with fundus images – a critical review, Artif. Intell. Med., 102 (2020), 101758. doi: 10.1016/j.artmed.2019.101758 |
[144] | U. Raghavendra, H. Fujita, S. V. Bhandary, A. Gudigar, J. H. Tan, U. R. Acharya, Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images, Inf. Sci., 441 (2018), 41–49. doi: 10.1016/j.ins.2018.01.051 |
[145] | R. Asaoka, H. Murata, A. Iwase, M. Araie, Detecting preperimetric glaucoma with standard automated perimetry using a deep learning classifier, Ophthalmol., 123 (2016), 1974–1980. doi: 10.1016/j.ophtha.2016.05.029 |
[146] | Z. Li, Y. He, S. Keel, W. Meng, R. T. Chang, M. He, Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs, Ophthalmol., 125 (2018), 1199–1206. doi: 10.1016/j.ophtha.2018.01.023 |
[147] | V. V. Raghavan, V. N. Gudivada, V. Govindaraju, C. R. Rao, Cognitive computing: Theory and applications, Elsevier, 2016. |
[148] | J. Sivaswamy, S. Krishnadas, G. D. Joshi, M. Jain, A. U. S. Tabish, Drishti-GS: Retinal image dataset for optic nerve head (ONH) segmentation, in 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), IEEE, 2014, 53–56. |
[149] | A. Chakravarty, J. Sivaswamy, Glaucoma classification with a fusion of segmentation and image-based features, in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE, 2016, 689–692. |
[150] | E. Decencière, X. Zhang, G. Cazuguel, B. Lay, B. Cochener, C. Trone, et al., Feedback on a publicly distributed image database: The Messidor database, Image Anal. Stereol., 33 (2014), 231–234. doi: 10.5566/ias.1155 |
[151] | A. Allam, A. Youssif, A. Ghalwash, Automatic segmentation of optic disc in eye fundus images: a survey, ELCVIA, 14 (2015), 1–20. doi: 10.5565/rev/elcvia.762 |
[152] | P. Porwal, S. Pachade, R. Kamble, M. Kokare, G. Deshmukh, V. Sahasrabuddhe, et al., Indian diabetic retinopathy image dataset (IDRiD): A database for diabetic retinopathy screening research, Data, 3 (2018), 25. doi: 10.3390/data3030025 |
[153] | F. Calimeri, A. Marzullo, C. Stamile, G. Terracina, Optic disc detection using fine tuned convolutional neural networks, in 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), IEEE, 2016, 69–75. |
[154] | M. N. Reza, Automatic detection of optic disc in color fundus retinal images using circle operator, Biomed. Signal Process. Control, 45 (2018), 274–283. doi: 10.1016/j.bspc.2018.05.027 |
[155] | Z. Zhang, F. S. Yin, J. Liu, W. K. Wong, N. M. Tan, B. H. Lee, et al., ORIGA-light: An online retinal fundus image database for glaucoma analysis and research, in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, 2010, 3065–3068. |
[156] | Z. Zhang, J. Liu, F. Yin, B.-H. Lee, D. W. K. Wong, K. R. Sung, ACHIKO-K: Database of fundus images from glaucoma patients, in 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), IEEE, 2013, 228–231. |
[157] | F. Yin, J. Liu, D. W. K. Wong, N. M. Tan, B. H. Lee, J. Cheng, et al., ACHIKO-I retinal fundus image database and its evaluation on cup-to-disc ratio measurement, in 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), IEEE, 2013, 224–227. |
[158] | F. Fumero, S. Alayón, J. L. Sanchez, J. Sigut, M. Gonzalez-Hernandez, RIM-ONE: An open retinal image database for optic nerve evaluation, in 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, 2011, 1–6. |
[159] | J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, et al., Optic nerve head segmentation, IEEE Trans. Med. Imaging, 23 (2004), 256–264. doi: 10.1109/TMI.2003.823261 |
[160] | C. C. Sng, L.-L. Foo, C.-Y. Cheng, J. C. Allen Jr, M. He, G. Krishnaswamy, et al., Determinants of anterior chamber depth: the Singapore Chinese Eye Study, Ophthalmol., 119 (2012), 1143–1150. doi: 10.1016/j.ophtha.2012.01.011 |
[161] | Z. Zhang, J. Liu, C. K. Kwoh, X. Sim, W. T. Tay, Y. Tan, et al., Learning in glaucoma genetic risk assessment, in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, 2010, 6182–6185. |
[162] | D. Wong, J. Liu, J. Lim, X. Jia, F. Yin, H. Li, et al., Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI, in 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, 2008, 2266–2269. |
[163] | T. Ge, L. Cui, B. Chang, Z. Sui, F. Wei, M. Zhou, Seri: A dataset for sub-event relation inference from an encyclopedia, in CCF International Conference on Natural Language Processing and Chinese Computing, Springer, 2018, 268–277. |
[164] | M. Haloi, Improved microaneurysm detection using deep neural networks, arXiv preprint. |
[165] | B. Antal, A. Hajdu, An ensemble-based system for automatic screening of diabetic retinopathy, Knowl. Based Syst., 60 (2014), 20–27. doi: 10.1016/j.knosys.2013.12.023 |
[166] | J. Nayak, R. Acharya, P. S. Bhat, N. Shetty, T.-C. Lim, Automated diagnosis of glaucoma using digital fundus images, J. Med. Syst., 33 (2009), 337. doi: 10.1007/s10916-008-9195-z |
[167] | J. V. Soares, J. J. Leandro, R. M. Cesar, H. F. Jelinek, M. J. Cree, Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification, IEEE Trans. Med. Imaging, 25 (2006), 1214–1222. doi: 10.1109/TMI.2006.879967 |
[168] | S. Kankanahalli, P. M. Burlina, Y. Wolfson, D. E. Freund, N. M. Bressler, Automated classification of severity of age-related macular degeneration from fundus photographs, Invest. Ophthalmol. Vis. Sci., 54 (2013), 1789–1796. doi: 10.1167/iovs.12-10928 |
[169] | K. Prasad, P. Sajith, M. Neema, L. Madhu, P. Priya, Multiple eye disease detection using deep neural network, in TENCON 2019-2019 IEEE Region 10 Conference (TENCON), IEEE, 2019, 2148–2153. |
Function | Dim | Range | fmin
$f_1(x)=\sum_{i=1}^{n}x_i^2$ | 30 | [−100, 100] | 0
$f_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30 | [−10, 10] | 0
$f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0
$f_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0
$f_5(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30 | [−30, 30] | 0
$f_6(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | 30 | [−100, 100] | 0
$f_7(x)=\sum_{i=1}^{n}i\,x_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
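The unimodal definitions above translate directly into code. A minimal NumPy sketch of the sphere ($f_1$), Schwefel 2.22 ($f_2$), and Rosenbrock ($f_5$) functions, written here only to illustrate the table (the function names mirror the table, everything else is an illustrative choice):

```python
import numpy as np

def f1(x):
    # Sphere: sum of squares, global minimum 0 at the origin
    return np.sum(x ** 2)

def f2(x):
    # Schwefel 2.22: sum plus product of absolute values, minimum 0 at the origin
    return np.sum(np.abs(x)) + np.prod(np.abs(x))

def f5(x):
    # Rosenbrock: narrow curved valley, minimum 0 at (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

x0 = np.zeros(30)
print(f1(x0), f2(x0))   # 0.0 0.0
print(f5(np.ones(30)))  # 0.0
```

Each vectorized evaluation costs O(n), so these functions are cheap enough for the tens of thousands of evaluations a population-based search performs.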
Function | Dim | Range | fmin
$f_8(x)=\sum_{i=1}^{n}-x_i\sin(\sqrt{|x_i|})$ | 30 | [−500, 500] | −418.9829 × n
$f_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0
$f_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 30 | [−32, 32] | 0
$f_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0
$f_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m & x_i>a\\ 0 & -a\le x_i\le a\\ k(-x_i-a)^m & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0
$f_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
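The multimodal functions above can be sketched the same way; Rastrigin ($f_9$), Ackley ($f_{10}$), and Griewank ($f_{11}$) all attain their minimum of 0 at the origin, which makes a quick correctness check easy:

```python
import numpy as np

def f9(x):
    # Rastrigin: highly multimodal, minimum 0 at the origin
    return np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def f10(x):
    # Ackley: minimum 0 at the origin
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def f11(x):
    # Griewank: minimum 0 at the origin
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

x0 = np.zeros(30)
print(f9(x0), f10(x0), f11(x0))  # all approximately 0
```

The large number of local minima in these functions is what makes them useful for probing an algorithm's exploration behaviour, as opposed to the purely convergence-oriented unimodal set.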
Function | Dim | Range | fmin
$f_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.0003
$f_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
$f_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$f_{18}(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\times\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | [−2, 2] | 3
$f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [−1, 3] | −3.86
$f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
$f_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
$f_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
$f_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
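Unlike the scalable families above, the fixed-dimension functions have known optima at specific points, so a reported fmin can be verified by direct evaluation. A sketch for the six-hump camel-back function ($f_{16}$), evaluated at one of its two symmetric global minimizers (the coordinates are the standard published ones, not taken from this table):

```python
def f16(x1, x2):
    # Six-hump camel-back function, as defined in the table above
    return (4 * x1 ** 2 - 2.1 * x1 ** 4 + x1 ** 6 / 3.0
            + x1 * x2 - 4 * x2 ** 2 + 4 * x2 ** 4)

# Known global minimizer (0.0898, -0.7126); f16 there is about -1.0316,
# matching the fmin column. The mirrored point (-0.0898, 0.7126) gives the same value.
print(f16(0.0898, -0.7126))
```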
Algorithm | Parameters
GWO | a linearly decreased over iterations from 2 to 0
PSO | ω = 1, c1 = 2, c2 = 2
WOA | a linearly decreased over iterations from 2 to 0
ASGWO | a nonlinearly decreased over iterations from 2 to 0, ζ = 0.67
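The table contrasts GWO's linear decay of the control parameter a with ASGWO's nonlinear schedule. The exact nonlinear form (and the precise role of ζ = 0.67) is defined in the paper; the sketch below uses a hypothetical power-law decay, a(t) = 2(1 − (t/T)^ζ), purely to illustrate how a nonlinear schedule differs from the linear one:

```python
def a_linear(t, T):
    # Standard GWO schedule: a falls linearly from 2 to 0
    return 2.0 * (1.0 - t / T)

def a_nonlinear(t, T, zeta=0.67):
    # Hypothetical nonlinear schedule for illustration only (not the
    # paper's formula): with zeta < 1 it decays faster early on,
    # shifting the balance toward exploitation sooner, and still reaches 0 at t = T.
    return 2.0 * (1.0 - (t / T) ** zeta)

T = 500
print(a_linear(0, T), a_linear(T, T))         # 2.0 0.0
print(a_nonlinear(250, T) < a_linear(250, T)) # True: below linear at midpoint
```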
Function | GWO Mean | GWO Std | PSO Mean | PSO Std | WOA Mean | WOA Std | ASGWO Mean | ASGWO Std
f1 | 2.42×10^−26 | 3.07×10^−26 | 5.89 | 5.27 | 5.28×10^−7 | 1.58×10^−6 | 0.00 | 0.00
f2 | 4.08×10^−16 | 2.71×10^−16 | 8.85 | 8.00 | 2.42×10^−10 | 6.57×10^−10 | 9.1×10^−243 | 6.7×10^−243
f3 | 5.89×10^−4 | 1.62×10^−2 | 22.4 | 7.83 | 2.14×10^−2 | 3.60×10^−2 | 0.00 | 0.00
f4 | 2.83×10^−5 | 1.86×10^−5 | 1.24 | 0.398 | 1.27×10^−2 | 2.59×10^−2 | 1.1×10^−201 | 5.7×10^−201
f5 | 27.3 | 0.813 | 2.36×10^2 | 1.64×10^2 | 28.6 | 0.330 | 26.3 | 0.314
f6 | 1.37 | 0.492 | 9.26 | 3.25 | 22.8 | 53.2 | 4.39×10^−5 | 1.66×10^−5
f7 | 3.65×10^−3 | 1.52×10^−3 | 1.73×10^2 | 53.1 | 8.56×10^−2 | 0.177 | 3.03×10^−3 | 4.71×10^−4
Function | GWO Mean | GWO Std | PSO Mean | PSO Std | WOA Mean | WOA Std | ASGWO Mean | ASGWO Std
f8 | −6.2×10^3 | 6.51×10^2 | −5.2×10^3 | 7.00×10^2 | −3.3×10^3 | 2.87×10^2 | −7.0×10^3 | 4.52×10^2
f9 | 13.4 | 10.6 | 1.73×10^2 | 22.4 | 9.42×10^−12 | 2.74×10^−11 | 0.00 | 0.00
f10 | 1.38×10^−13 | 2.52×10^−14 | 2.78 | 0.448 | 1.13×10^−8 | 2.96×10^−8 | 1.11×10^−14 | 2.91×10^−15
f11 | 5.81×10^−3 | 8.93×10^−3 | 0.650 | 0.174 | 2.22×10^−16 | 3.71×10^−13 | 0.00 | 0.00
f12 | 6.86×10^−2 | 5.72×10^−2 | 0.839 | 0.405 | 0.868 | 0.271 | 1.40×10^−2 | 9.56×10^−3
f13 | 0.632 | 0.244 | 1.52 | 0.757 | 2.36 | 0.149 | 3.34×10^−5 | 1.27×10^−5
Function | GWO Mean | GWO Std | PSO Mean | PSO Std | WOA Mean | WOA Std | ASGWO Mean | ASGWO Std
f14 | 5.01 | 4.27 | 1.13 | 0.302 | 2.86 | 0.971 | 1.10 | 0.288
f15 | 8.38×10^−3 | 9.77×10^−3 | 1.04×10^−2 | 8.53×10^−3 | 3.94×10^−3 | 3.25×10^−3 | 4.53×10^−3 | 7.91×10^−3
f16 | −1.0 | 2.99×10^−8 | 0.861 | 0.921 | 0.640 | 0.326 | −1.0 | 5.10×10^−8
f17 | 0.397 | 6.30×10^−5 | 0.861 | 0.921 | 0.640 | 0.326 | 0.397 | 5.10×10^−8
f18 | 3.00 | 1.46×10^−5 | 3.02 | 1.84×10^−2 | 4.20 | 2.01 | 3.00 | 5.04×10^−5
f19 | −3.8 | 3.47×10^−3 | −3.8 | 2.98×10^−3 | −3.7 | 3.20×10^−2 | −3.8 | 3.15×10^−3
f20 | −3.2 | 9.32×10^−2 | −3.0 | 0.271 | −2.5 | 0.608 | −3.2 | 4.76×10^−2
f21 | −8.0 | 2.46 | −8.1 | 1.62 | −2.3 | 1.91 | −9.0 | 2.00
f22 | −7.6 | 3.16 | −6.2 | 1.76 | −1.9 | 1.41 | −8.6 | 2.97
f23 | −10 | 3.20×10^−3 | −6.7 | 1.36 | −1.8 | 1.57 | −10 | 1.61×10^−5
Function | SOGWO Mean | SOGWO Std | EOGWO Mean | EOGWO Std | ASGWO Mean | ASGWO Std
f1 | 6.04×10^−77 | 1.48×10^−76 | 2.81×10^−71 | 8.46×10^−71 | 0.00 | 0.00
f2 | 1.17×10^−44 | 1.34×10^−44 | 4.31×10^−42 | 7.87×10^−42 | 0.00 | 0.00
f3 | 5.39×10^−22 | 2.59×10^−21 | 1.52×10^−20 | 4.02×10^−20 | 0.00 | 0.00
f4 | 7.08×10^−21 | 1.51×10^−19 | 8.06×10^−19 | 1.11×10^−18 | 0.00 | 0.00
f5 | 26.4 | 0.762 | 26.3 | 0.7364 | 25.2 | 0.663
f6 | 0.282 | 0.247 | 0.3290 | 0.245 | 1.06×10^−6 | 2.37×10^−7
f7 | 4.93×10^−4 | 2.71×10^−4 | 6.07×10^−4 | 4.32×10^−4 | 1.04×10^−3 | 6.49×10^−4
f8 | −6.5×10^3 | 8.02×10^2 | −6.27×10^3 | 7.71×10^2 | −7.31×10^3 | 8.67×10^2
f9 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
f10 | 8.88×10^−16 | 0.00 | 1.40×10^−14 | 3.20×10^−15 | 1.36×10^−14 | 2.71×10^−15
f11 | 0.00 | 0.00 | 1.68×10^−3 | 4.80×10^−3 | 0.00 | 0.00
f12 | 5.60×10^−2 | 1.42×10^−5 | 2.29×10^−2 | 1.85×10^−2 | 3.91×10^−3 | 3.16×10^−3
f13 | 0.352 | 0.128 | 0.257 | 0.164 | 1.27×10^−6 | 4.73×10^−7
f14 | 3.40 | 3.72 | 3.82 | 3.86 | 0.998 | 8.24×10^−13
f15 | 2.38×10^−3 | 6.02×10^−3 | 5.24×10^−3 | 8.68×10^−3 | 4.31×10^−3 | 7.01×10^−3
f16 | −1.03 | 3.75×10^−9 | −1.02 | 3.45×10^−9 | −1.03 | 2.42×10^−11
f17 | 0.397 | 4.85×10^−7 | 0.398 | 4.82×10^−7 | 0.398 | 1.68×10^−8
f18 | 3.00 | 4.63×10^−6 | 3.00 | 3.60×10^−6 | 3.00 | 3.57×10^−6
f19 | −3.86 | 2.71×10^−3 | −3.86 | 2.36×10^−3 | −3.86 | 1.03×10^−6
f20 | −3.26 | 7.37×10^−2 | −3.27 | 7.55×10^−2 | −3.32 | 2.93×10^−8
f21 | −9.65 | 1.50 | −9.93 | 2.07 | −9.34 | 1.40
f22 | −10.4 | 2.65×10^−4 | −10.2 | 1.05 | −10.6 | 1.06×10^−5
f23 | −10.4 | 0.540 | −10.2 | 1.62 | −10.4 | 1.04×10^−5
Parameter | Value
same initialization configuration | Population size is 50, Maxiter is 50,000
ASGWO | a nonlinearly decreased over iterations from 2 to 0, ζ = 0.67
GWO | a linearly decreased over iterations from 2 to 0
WOA | a linearly decreased over iterations from 2 to 0
IWOA | Scaling factor for beta (0.2, 0.8), DE mutation scheme (DE/best/1/bin), a linearly decreased over iterations from 2 to 0
K-WOA | fixed number of clusters k = 18
GSA-BBO | k = 2, I = 1, E1 = 1, Siv = 4, Rnorm = 2, Rpower = 1
GSO | the acceleration constants are 2.05
ABC | Onlooker 50%, employees 50%, acceleration coefficient upper bound (a) = 1, LL = 0.6 × dimensions × population
CAB | Mbest = 4, Hp = 0.2
CS | β = 1.5, Discover = 0.25
FUZZY | N_flames = round(N_pop − k(N_pop − 1)/k_max)
MFO | c1 = 2, c2 = 2, ω decreased from 0.9 to 0.2
Algorithm | x1 | x2 | x3 | x4 | Optimal cost | SD
ASGWO | 24 | 18 | 59 | 53 | 4.14×10^−15 | 7.57×10^−15
GWO | 26 | 18 | 60 | 54 | 1.49×10^−14 | 2.75×10^−14
WOA | 16 | 19 | 49 | 43 | 1.15×10^−9 | 1.39×10^−9
IWOA | 30 | 13 | 51 | 53 | 2.39×10^−9 | 2.53×10^−9
K-WOA | 19 | 16 | 43 | 49 | 2.70×10^−12 | 0.00
GSA-BBO | 16 | 19 | 49 | 43 | 8.72×10^−10 | 8.38×10^−10
GSO | 60 | 29 | 52 | 60 | 0.732 | 0.00
ABC | 16 | 19 | 49 | 43 | 6.62×10^−11 | 1.65×10^−10
CAB | 12 | 12 | 35 | 12 | 0.675 | 0.180
CS | 16 | 19 | 43 | 49 | 1.47×10^−10 | 2.65×10^−10
FUZZY | 12 | 23 | 33 | 57 | 2.57×10^−3 | 4.87×10^−3
MFO | 19 | 16 | 49 | 43 | 4.85×10^−9 | 6.90×10^−9
Parameter | Value
same initialization configuration | Population size is 25, Maxiter is 500
SC-GWO | ω decreased from 0.7 to 0.2
mGWO | a = 2(1 − t/Maxiter)
wGWO | a linearly decreased over iterations from 2 to 0
GA | Crossover rate = 0.7, Mutation rate = 0.01
SSA | c1 nonlinearly decreased over iterations
Chaotic SSA | c1 = logistic chaotic map(c1)
ES | σ = 3.0, μ = 100, λ = 300
Algorithm | x1 | x2 | x3 | x4 | Optimal cost
ASGWO | 0.8327 | 0.4122 | 43.1396 | 168.1458 | 6010.9908 |
GWO | 0.8750 | 0.4375 | 44.9807 | 144.1081 | 6136.6600 |
SC-GWO | 0.8125 | 0.4375 | 42.0984 | 176.6370 | 6059.7179 |
mGWO | 0.8125 | 0.4375 | 42.0982 | 176.6386 | 6059.7359 |
wGWO | 0.8125 | 0.4375 | 42.09842 | 176.637 | 6059.7207 |
PSO | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 6061.0777 |
GA | 0.9375 | 0.5000 | 48.3290 | 112.6790 | 6410.3810 |
SSA | 0.8125 | 0.4375 | 42.09836 | 176.6376 | 6059.7254 |
Chaotic SSA | 0.8750 | 0.4375 | 45.33679 | 140.2539 | 6090.527 |
WOA | 0.8125 | 0.4375 | 42.0982 | 176.6389 | 6059.7410 |
ES | 0.8125 | 0.437 | 42.0980 | 176.6405 | 6059.7456 |
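The costs in this table can be sanity-checked against the standard pressure-vessel design objective (constraint handling omitted here). A sketch, assuming the usual four-variable formulation with shell thickness x1, head thickness x2, inner radius x3, and cylinder length x4:

```python
def vessel_cost(x1, x2, x3, x4):
    # Standard pressure-vessel objective: welding, forming, and
    # material cost terms. Design constraints are omitted in this sketch.
    return (0.6224 * x1 * x3 * x4
            + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4
            + 19.84 * x1 ** 2 * x3)

# Plugging in SC-GWO's tabulated solution reproduces a cost near its
# reported 6059.7179.
print(vessel_cost(0.8125, 0.4375, 42.0984, 176.6370))
```

The same check can be run on any row; small cost differences between algorithms with nearly identical solutions come from the fourth decimal of x3 and x4.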
Algorithm | ASGWO | IROA | SMA | HHOCM | ROLGWO | MALO |
x1 | 0.500041 | 0.5 | 0.5 | 0.50016380 | 0.5012548 | 0.5 |
x2 | 1.1345446 | 1.23105679 | 1.22739249 | 1.248612358 | 1.2455510 | 1.22810442 |
x3 | 0.5000862 | 0.5 | 0.5 | 0.65955791 | 0.50004578 | 0.5 |
x4 | 1.2790514 | 1.19766142 | 1.20428741 | 1.098515362 | 1.18025396 | 1.21264054 |
x5 | 0.5002007 | 0.5 | 1.20428741 | 0.757988599 | 0.50003477 | 0.5 |
x6 | 1.4999609 | 1.07429465 | 1.04185969 | 0.76726834 | 1.16588047 | 1.30804056 |
x7 | 0.5000544 | 0.5 | 0.5 | 0.500055187 | 0.50008827 | 0.5 |
x8 | 0.3449606 | 0.3449999 | 0.345 | 0.34310489 | 0.3448952 | 0.34499984 |
x9 | 0.3324805 | 0.3443286 | 0.3424831 | 0.19203186 | 0.2995826 | 0.28040129 |
x10 | −16.33320 | 0.9523965 | 0.2967546 | 2.89880509 | 3.5950796 | 0.42429341 |
x11 | −2.149117 | 1.0114033 | 1.1579641 | −4.5511746 | 2.2901802 | 4.65653809 |
fmin | 22.871876 | 23.188937 | 23.191021 | 24.483584 | 23.222427 | 23.229404 |
Parameter | Value |
same initialization configuration | Population Size is 25, Maxiter is 500 |
IROA | C = 0.1; α∈[−1, 9]; μ = 0.499; z = 0.07; y = 0.1 |
SMA | z = 0.03 |
HHOCM | The value of escaping energy decreases from 2 to 0, mutation rate decreases linearly from 1 to 0 |
ROLGWO | r3∈ [0, 1] |
MALO | Switch possibility = 0.5 |
Datasets | No. of attributes | No. of samples |
Breast-w | 9 | 699 |
Credit-g | 20 | 1000 |
Dermatology | 34 | 366 |
Glass | 9 | 214 |
Ionosphere | 34 | 351 |
Lymphography | 18 | 148 |
Sonar | 60 | 208 |
Average F-measure (the KNN row is the baseline classifier without feature selection)
Dataset | Breast-w | Credit-g | Dermatology | Glass | Ionosphere | Lymphography | Sonar
KNN | 0.965 | 0.593 | 0.873 | 0.591 | 0.817 | 0.712 | 0.816
BASO | 0.982 | 0.829 | 0.988 | 0.778 | 0.887 | 0.896 | 0.892
BGA | 0.983 | 0.831 | 0.988 | 0.750 | 0.887 | 0.896 | 0.892
BPSO | 0.981 | 0.824 | 0.987 | 0.753 | 0.870 | 0.893 | 0.880
BGWO | 0.981 | 0.825 | 0.989 | 0.754 | 0.853 | 0.868 | 0.865
ASGWO | 0.988 | 0.733 | 0.998 | 0.705 | 0.958 | 0.955 | 0.969
Average number of selected features
BASO | 6.5 | 11.1 | 19.7 | 7.6 | 11.4 | 10.8 | 27.5
BGA | 6.3 | 10.6 | 19.4 | 6.3 | 11.4 | 10.2 | 28.8
BPSO | 6.5 | 9.9 | 20 | 7.8 | 11.1 | 9.9 | 28.9
BGWO | 7.1 | 13.9 | 25.6 | 7.4 | 11.7 | 13.3 | 41.6
ASGWO | 4.74 | 10.04 | 17.82 | 4.92 | 13.24 | 7.6 | 27.46
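The binary metaheuristics compared above (BASO, BGA, BPSO, BGWO, ASGWO) all score candidate feature subsets by wrapping a classifier. A minimal sketch of such a wrapper fitness, assuming a 3-nearest-neighbour classifier as in these experiments; the trade-off weight `alpha` and the helper names are illustrative, not taken from the paper:

```python
import numpy as np
from collections import Counter

def knn_accuracy(X_train, y_train, X_test, y_test, k=3):
    # Plain k-NN with Euclidean distance, k = 3 as in the experiments
    correct = 0
    for x, y in zip(X_test, y_test):
        d = np.sqrt(((X_train - x) ** 2).sum(axis=1))
        votes = [y_train[i] for i in np.argsort(d)[:k]]
        if Counter(votes).most_common(1)[0][0] == y:
            correct += 1
    return correct / len(y_test)

def subset_fitness(mask, X_train, y_train, X_test, y_test, alpha=0.99):
    # Wrapper fitness: reward classification quality, penalize subset size.
    # mask is a 0/1 vector over features; alpha is an illustrative weight.
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0  # empty subsets are worthless
    acc = knn_accuracy(X_train[:, idx], y_train, X_test[:, idx], y_test)
    return alpha * acc + (1 - alpha) * (1 - idx.size / mask.size)
```

A binary search algorithm then evolves `mask` vectors to maximize this fitness, which is why the tables report both the F-measure achieved and the average number of features retained.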
Parameter | Value |
K for KNN | 3 |
Dimension of population | 10 |
Number of iterations | 100 |
Number of runs | 10 |
Acceleration constants in PSO | [2,2] |
Inertia w in BPSO | [0.9, 0.4] |
Parameter A in BGWO | min=0, max=2 |