1. Introduction
1.1. Background and present situation
With the rapid development of science and technology, more and more engineering problems can be formulated as strict optimization problems. Early work on such problems focused mainly on classical mathematical techniques, but these methods often fail to find the global optimum effectively. Instead, many intelligent optimization algorithms inspired by natural phenomena have been developed and are now widely used across scientific and engineering fields in place of traditional optimization methods. These intelligent techniques have produced promising results on complex engineering problems, such as structural design, multi-channel steering operation and grinding operation problems, and have therefore attracted the attention of many scholars [1]. To date, a number of outstanding heuristic intelligent optimization algorithms [2,3,4] have been proposed to solve complex problems, as summarized in Table 1. All of these algorithms have been successfully applied to various optimization problems, and their effectiveness has been demonstrated.
In 2010, the Indian scholar Rao et al. [20] proposed a swarm intelligence algorithm, the teaching-learning-based optimization (TLBO) algorithm, inspired by the teaching-learning process of a classroom. Owing to its advantages, it has motivated a wide range of studies and applications: the TLBO algorithm has been successfully applied to function optimization, engineering optimization and other practical problems [21,22,23].
1.2. The research state of TLBO algorithm
This paper studies the TLBO algorithm. Its optimization idea regards the population as a class in which the individual with the best fitness acts as the teacher. The teacher raises the average score of the whole class through teaching activities, thereby driving the evolution of the whole population. Students also communicate with each other through a certain mechanism, which maintains population diversity and helps avoid premature convergence. The principle of TLBO is simple and easy to understand, and it requires few parameters. Since its proposal, TLBO has attracted the attention of many researchers worldwide. It has been successfully applied to the optimization of large-scale continuous nonlinear problems [24], identifying photovoltaic cell model parameters [25], optimizing the distribution of local automatic voltage regulation in distributed systems [26], data clustering [27], assembly sequence planning for industrial robots [28], the set-union knapsack problem [29] and other problems.
However, the TLBO algorithm still has several shortcomings. It performs well on low- and high-dimensional uni-modal functions, but on multi-modal functions it easily becomes trapped in a local optimum, a consequence of the update mechanism in the teaching phase: as iterations progress, the individuals cluster around the current best solution and population diversity is lost. The convergence accuracy, convergence speed and computational speed of the TLBO algorithm therefore still need further improvement.
In recent years, researchers have conducted extensive and in-depth studies on the above issues of the TLBO algorithm. Ghasemi et al. [30] introduced a mutation operator into the learning phase of the TLBO algorithm to enhance population diversity. Li et al. [31] introduced an inertia weight and an acceleration weight into the teaching and learning phases to improve convergence speed and solution quality. Wang et al. [32] designed a sub-population-based teaching phase to enhance particle diversity and improve convergence speed. Yu et al. [33] introduced feedback, mutation and crossover operators from differential evolution and chaotic search into the TLBO algorithm, respectively improving its exploitation capability, population diversity and ability to escape local optima. Tsai [34] constructed a mutation strategy in which the difference vector of two randomly selected individuals serves as the mutation source for a third individual. Rao and Patel [35] introduced an elite mechanism into the TLBO algorithm, improving its convergence accuracy. Zou et al. [36] introduced a dynamic grouping mechanism into the TLBO algorithm to enhance its global search capability. Chen et al. [37] introduced local-learning and self-learning mechanisms into the TLBO algorithm to enhance its search capability. Sultana and Roy [38] introduced opposition-based and quasi-opposition-based learning mechanisms to improve its convergence speed and solution quality. Zou et al. [39] solved global optimization problems by adding a dynamic group strategy to the TLBO algorithm, improving its global search capability. Tuo et al. [40] combined the harmony search and TLBO algorithms to effectively solve complex high-dimensional optimization problems.
To further improve the performance of the TLBO algorithm, three improvement mechanisms are introduced: 1) a chaotic mapping function is used to initialize the population, increasing population diversity and enhancing the global search capability; 2) in the "teaching phase", three parameters, namely an inertia weight, an acceleration coefficient and a self-adaptive teaching factor, are introduced to improve the algorithm's computational speed and solution quality; 3) in the "learning phase", the idea of heredity is used to update the population, with the latest individuals taken as the next iteration's population, which maintains population diversity in the later stage of optimization and improves the global search capability. Twenty IEEE Congress on Evolutionary Computation (CEC) benchmark test functions are used to verify the performance of the proposed algorithm. Compared against several state-of-the-art algorithms, namely GA, ALO, DA, MFO, SCA and TLBO, the experimental results show that the proposed algorithm excels in convergence accuracy and global search capability.
1.3. Boiler combustion optimization by heuristic optimization algorithm
During the combustion process of a power station boiler, large amounts of polluting gases such as NOx, SO2 and CO2 are produced, causing great harm to the human living environment, while a large amount of coal is consumed. Realizing dynamic multi-objective optimal control of the boiler combustion process under variable load is therefore an effective way to reduce environmental pollution and save coal resources; this is known as the boiler combustion optimization problem. It can be classified as a variable-load, multi-variable, constrained dynamic multi-objective optimization problem. In recent years, with the rapid development of artificial intelligence technology, many researchers have tried to use machine learning and heuristic optimization algorithms to optimize the adjustable operating parameters of the boiler combustion process, aiming to improve boiler thermal efficiency or reduce the emission concentration of polluting gases [41,42,43,44,45].
A power station boiler is nonlinear, strongly coupled and has large lag, which makes it difficult for traditional optimization methods to achieve the boiler's energy saving and emission reduction goals. Heuristic intelligent optimization algorithms, however, can perform the optimization and adapt to uncertainty even without an accurate analytical expression or mathematical model of the system. Researchers worldwide have therefore applied heuristic intelligent optimization algorithms to the boiler combustion optimization problem [46,47,48]. Rahat et al. used a novel multi-objective evolutionary algorithm together with a data-driven model to find the trade-off between the emission concentration of nitrogen oxides and the carbon content of fly ash, effectively resolving the conflict between boiler thermal efficiency and NOx emissions [49]. Reference [50] proposed a boiler combustion optimization algorithm based on big-data-driven case matching, using data mining to analyze the data in a supervisory information system (SIS) and establish a combustion case database; online optimization matches the real-time operating data of a distributed control system (DCS) against the case database, finding the best operating parameters for the current working conditions and realizing online optimization of boiler combustion. Reference [51] used a deep neural network and a multi-objective optimization algorithm to achieve multi-objective optimization of the boiler combustion process, effectively balancing thermal efficiency and nitrogen oxide emission concentration. In this paper, the proposed ETLBO is combined with an extreme learning machine to solve the boiler combustion problem and reduce NOx emissions.
The contributions of this paper are summarized as follows:
1) An evolutionary teaching-learning-based optimization (ETLBO) algorithm is proposed.
2) The proposed ETLBO is used to solve 20 benchmark testing functions.
3) The proposed ETLBO-ELM method is applied to optimize the adjustable operating parameters of a 330 MW circulating fluidized bed boiler to reduce NOx emissions.
The structure of this paper is as follows: The basic TLBO algorithm is given in Section 2. The proposed ETLBO is given in Section 3. Section 4 shows the performance evaluation of the ETLBO. Section 5 shows the boiler combustion model and optimization. The conclusion of this paper is in Section 6.
2. Basic TLBO algorithm
The TLBO algorithm is inspired by the teaching-learning phenomenon. Students are regarded as population individuals, their grades in each subject are regarded as the solutions to be optimized, the number of subjects is the solution dimension and the best individual becomes the teacher. The core idea of the TLBO algorithm is to simulate the teaching-learning process of a class. First, the best individual in the population is selected as the teacher, who improves the students' grades through teaching, thus achieving the goal of improving the average grade of the class. Students can learn from each other by comparing their grades, complementing each other's strengths and making progress together. Therefore, the TLBO algorithm is divided into two phases: "teaching phase" and "learning phase".
2.1. Teaching phase
In an ideal situation, students can learn from the teacher's guidance and reach the same level as the teacher. However, due to the interaction of many other factors, the teacher can only help the students reach a certain level. This phenomenon can be expressed mathematically as follows:
Formula (1) represents the difference between the best grade and the average grade; ri is a random number between 0 and 1; Mteacher is the best individual, regarded as the teacher; TF is the teaching factor, shown in formula (2); Mi is the average score at the i-th iteration. In formula (3), Xold,i is the old solution at the i-th iteration and Xnew,i is the updated solution.
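The teaching-phase update described by formulas (1)–(3) can be sketched in a few lines of Python. This is a minimal illustration of the standard TLBO teacher step, not the paper's code; the function and variable names are ours.

```python
import numpy as np

def teaching_phase(pop, fitness, rng):
    """One TLBO teaching step (minimization): every learner moves toward the
    teacher (best individual) relative to the class mean, per formulas (1)-(3).
    A minimal sketch; greedy acceptance of the new solutions is handled elsewhere."""
    teacher = pop[np.argmin(fitness)]          # best individual = M_teacher
    mean = pop.mean(axis=0)                    # class mean M_i
    TF = rng.integers(1, 3)                    # teaching factor, 1 or 2 (formula (2))
    r = rng.random(pop.shape)                  # random numbers r_i in [0, 1)
    # X_new,i = X_old,i + r_i * (M_teacher - TF * M_i)   (formulas (1) and (3))
    return pop + r * (teacher - TF * mean)
```

In practice each candidate produced here is kept only if it improves the learner's fitness, as the text notes for formulas (4) and (5).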
2.2. Learning phase
Students acquire knowledge by communicating with and learning from one another: students with lower grades learn from those with higher grades, complementing each other's strengths and making progress together. Two students, Xi and Xj, are randomly selected. For a minimization problem, if Xi is better than Xj, then Xj learns from Xi; otherwise, Xi learns from Xj. This can be expressed mathematically as follows:
As shown in formulas (4) and (5), if the updated Xnew,i is better than the old Xold,i, then Xnew,i is accepted; otherwise, Xold,i remains unchanged.
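The learner step with its greedy acceptance rule can be sketched as follows. This is an illustrative rendering of the standard TLBO learner phase under a minimization objective; names and structure are ours, not the paper's.

```python
import numpy as np

def learning_phase(pop, fitness, fobj, rng):
    """One TLBO learner step (minimization): each student i picks a random
    peer j and moves toward the better of the two; the new position is kept
    only if it improves the fitness (greedy acceptance, formulas (4)-(5))."""
    new_pop = pop.copy()
    new_fit = fitness.copy()
    n = len(pop)
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])  # random partner j != i
        r = rng.random(pop.shape[1])
        if fitness[i] < fitness[j]:            # X_i is better: move away from X_j
            cand = pop[i] + r * (pop[i] - pop[j])
        else:                                  # X_j is better: move toward X_j
            cand = pop[i] + r * (pop[j] - pop[i])
        f_cand = fobj(cand)
        if f_cand < fitness[i]:                # accept only improvements
            new_pop[i], new_fit[i] = cand, f_cand
    return new_pop, new_fit
```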
The pseudo-code of the original TLBO is given in Algorithm 1.
3. The proposed ETLBO algorithm
To improve the convergence accuracy and convergence speed of the original TLBO algorithm, an ETLBO algorithm is proposed. It adopts three adjustment mechanisms. First, the population is initialized with chaotic mapping sequences, which enhances population diversity. Second, in the teaching phase, three coefficients are introduced to improve convergence speed and solution quality. Finally, in the learning phase, the idea of heredity is used to update the population, and all the latest individuals are taken as the population for the next iteration, which maintains population diversity in the later stage of optimization and further improves the global search capability. The implementation of the ETLBO algorithm is described in the following subsections.
3.1. Using chaos mapping to initialize individuals
In the original TLBO algorithm, the population is initialized with pseudo-random sequences, which leaves the diversity of the population highly uncertain and cannot guarantee that the solutions are uniformly distributed in the search space, ultimately leading to premature convergence. Since chaotic maps possess ergodicity, randomness and regularity, using a chaotic sequence generated by a chaos mapping function as the initial positions of the population improves the uniformity with which the initial solutions traverse the search space, thereby improving population diversity and the global search capability. This paper therefore adopts the logistic chaos map to initialize the population. The standard logistic chaos map is:
In formula (6), the logistic map takes the standard form Zk+1 = μZk(1 − Zk), where μ is the control parameter with values in (0, 4] and Zk is the k-th chaotic variable, with values in the range [0, 1].
After considering various factors, this paper sets μ=4. The specific formula is as follows:
Finally, the formula for converting chaotic solutions into solutions in the solution space of practical problems is as follows:
In formula (8), popold is the population matrix before transformation; range is a matrix consisting of 60 × 1 copies of (ub−lb); lower is a matrix consisting of 60 × 1 copies of lb; ub is the upper constraint; lb is the lower constraint; popnew is the population matrix after transformation.
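The initialization described by formulas (6)–(8) can be sketched as follows. The logistic map with μ = 4 follows the text; seeding the chaotic state randomly per dimension is our assumption, as is avoiding the map's fixed points.

```python
import numpy as np

def logistic_init(pop_size, dim, lb, ub, rng, mu=4.0):
    """Chaotic population initialization: iterate the logistic map
    Z_{k+1} = mu * Z_k * (1 - Z_k) with mu = 4 (formulas (6)-(7)), one step per
    individual, then map the chaotic values into the search space [lb, ub]
    (formula (8): pop_new = lower + range .* Z). The per-dimension seed is an
    illustrative choice, drawn away from the map's fixed points 0, 0.5, 0.75, 1."""
    z = rng.uniform(0.1, 0.9, size=dim)    # initial chaotic state per dimension
    Z = np.empty((pop_size, dim))
    for k in range(pop_size):
        z = mu * z * (1.0 - z)             # one logistic-map step
        Z[k] = z                           # Z_k in [0, 1]
    return lb + (ub - lb) * Z              # scale into the solution space
```

With `pop_size=60` this matches the 60-row `range` and `lower` matrices described for formula (8).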
3.2. Teaching phase
In the original TLBO algorithm, the teaching factor TF affects the search speed and search ability of the algorithm and determines how the mean value changes. A larger TF speeds up the search but makes the algorithm more likely to become trapped in local optima, whereas a smaller TF slows the search but helps the algorithm escape local optima. In addition, TF is a random number taking the value 1 or 2, so students can only fully accept or fully reject the teacher's instruction, which is inconsistent with reality. Considering these issues, this paper proposes a composite individual update mechanism for the teaching stage, with three parameters: an inertia weight ωi, an acceleration coefficient ϕi and a self-adaptive teaching factor TF. The inertia weight and acceleration coefficient improve the search speed and the quality of the solution. The self-adaptive teaching factor is a monotonically decreasing function of the current and maximum iteration counts; it speeds up convergence in the early stage and slows it in the later stage to avoid entrapment in local optima. Together, these mechanisms help prevent premature convergence. The specific formulas are as follows:
In formula (9), ωi is the inertia weight, which acts on Xold,i. In formula (10), ϕi is the acceleration coefficient, which improves the search speed of the "teaching phase". In formula (11), TF is the self-adaptive teaching factor, which governs how early the algorithm converges. In formulas (9) and (10), f(i) is the fitness value of the i-th student and a is the maximum fitness value in the current iteration. t, appearing in formulas (9)–(11), is the current iteration number, and T, in formula (11), is the maximum number of iterations. In formula (12), Mteacher is the best individual in the population, that is, the teacher, and Mi is the average score at the i-th iteration. As formula (12) shows, if the updated Xnew,i is better than the old Xold,i, then Xnew,i is accepted; otherwise, Xold,i remains unchanged.
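A sketch of the modified teaching step is given below. The paper's exact formulas (9)–(12) are not reproduced in this text, so the concrete forms of ωi, ϕi and TF here are our assumptions, chosen only to be consistent with the description: ωi and ϕi depend on the learner's fitness f(i) and the largest fitness a in the iteration, and TF decreases monotonically from 2 toward 1 as t approaches T.

```python
import numpy as np

def etlbo_teaching_phase(pop, fitness, t, T, fobj, rng):
    """Illustrative ETLBO teaching step (minimization). The weight forms below
    are ASSUMED, not the paper's formulas (9)-(11); only the overall structure
    (inertia weight, acceleration coefficient, adaptive TF, greedy acceptance
    per formula (12)) follows the description in the text."""
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    a = fitness.max()                         # largest fitness in this iteration
    TF = 2.0 - t / T                          # adaptive teaching factor (assumed form)
    new_pop = pop.copy()
    for i in range(len(pop)):
        w = fitness[i] / (a + 1e-12)          # inertia weight omega_i (assumed form)
        phi = 1.0 - w                         # acceleration coefficient phi_i (assumed form)
        r = rng.random(pop.shape[1])
        cand = w * pop[i] + phi * r * (teacher - TF * mean)
        if fobj(cand) < fitness[i]:           # greedy acceptance, formula (12)
            new_pop[i] = cand
    return new_pop
```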
3.3. Learning phase
During the learning process of the original TLBO algorithm, the population for the next iteration is generated through mutual communication and learning among students: two students are selected at random, and the one with poorer performance learns from the one with better performance. However, this method reduces population diversity in the later optimization stages, which can easily lead to entrapment in local optima. The idea of heredity, a search technique based on self-adaptive probability, increases the flexibility of the search process. Although its probabilistic nature may produce some low-fitness individuals, more excellent individuals are generated as evolution continues, gradually driving the population toward a state containing an approximately optimal solution. Moreover, the idea of heredity is scalable and easy to combine with other algorithms to form hybrids that integrate the advantages of both. Building on the learning process of the original TLBO algorithm, this paper therefore updates the population using the idea of heredity, with all the latest individuals generated through crossover and mutation operations used as the population of the next iteration. This helps maintain population diversity in the later optimization stages and further improves the global search capability. The specific computation steps of ETLBO are described below, and the flowchart of the ETLBO algorithm is presented in Figure 1.
Step-1: The mutual learning among students is conducted according to the "learning phase" of the original TLBO algorithm, where the students with poor performance learn from those with good performance. This method satisfies the following conditions: if the updated Xnew,i is better than the old Xold,i, then Xnew,i is accepted; otherwise, Xold,i remains unchanged.
Step-2: Sort all individuals in ascending order based on their fitness values and divide them into two groups, A and B, according to certain rules.
Group A: Individuals ranked 1st, 3rd, 5th, 7th, ..., etc.
Group B: Individuals ranked 2nd, 4th, 6th, 8th, ..., etc.
Step-3: Crossover operation: The first half of individuals in groups A and B are crossed over according to certain rules, and the resulting offspring individuals replace the second half of individuals with lower rankings. The specific formula is as follows:
In formula (13), Ai is the i-th student in Group A, Bi is the i-th student in Group B and N is the population size. In formula (14), β is a balancing parameter, r is a random number between 0 and 1 and η is a custom parameter value where the larger the value, the closer the offspring individuals are to their parents.
Step-4: Mutation operation: All students are mutated one by one according to the mutation probability Pm. For a property value of an individual, if the random number r∈[0,1]<Pm, a mutation operation is performed, that is, the property value is inverted. The specific formula is as follows:
In formula (15), Xi,j is the j-th property value of the i-th individual, ub is the upper constraint, lb is the lower constraint, N is the population size and d is the population dimension.
Step-5: Take all the latest individuals generated by crossover and mutation operations as the population for the next iteration.
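Steps 2–5 can be sketched as follows. Two details are assumptions on our part: the β-from-r-and-η rule of formula (14) matches the simulated binary crossover (SBX) form (larger η keeps children near their parents, as the text states), which is what we use; and the "inversion" mutation of formula (15) is taken to reflect a gene within [lb, ub]. The exact formulas (13)–(15) are paraphrased, not reproduced.

```python
import numpy as np

def heredity_update(pop, fitness, lb, ub, eta=2.0, pm=0.05, rng=None):
    """Illustrative heredity update (Steps 2-5): rank the class, interleave
    ranks into groups A and B, recombine the better half with an SBX-style
    crossover, replace the worse half with the offspring, then mutate each
    gene with probability pm by reflecting it inside [lb, ub]."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(fitness)                   # ascending: best individuals first
    pop = pop[order].copy()
    A, B = pop[0::2], pop[1::2]                   # Step-2: odd ranks / even ranks
    n_pairs = len(A) // 2                         # Step-3: cross the first half of each group
    children = []
    for i in range(n_pairs):
        r = rng.random(pop.shape[1])
        beta = np.where(r <= 0.5,                 # SBX spread factor from r and eta
                        (2 * r) ** (1 / (eta + 1)),
                        (1 / (2 * (1 - r))) ** (1 / (eta + 1)))
        children.append(0.5 * ((1 + beta) * A[i] + (1 - beta) * B[i]))
        children.append(0.5 * ((1 - beta) * A[i] + (1 + beta) * B[i]))
    if children:
        pop[-len(children):] = np.clip(children, lb, ub)  # offspring replace the worst
    mask = rng.random(pop.shape) < pm             # Step-4: gene-wise mutation
    pop[mask] = (ub + lb) - pop[mask]             # assumed 'inversion' inside [lb, ub]
    return pop                                    # Step-5: next iteration's population
```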
After calculation, the computational complexity of ETLBO is O(Max_iter×pop_num×dim). Max_iter is the maximum number of iterations, pop_num is the population size and dim is the population dimension.
The pseudo-code of the proposed ETLBO is given in Algorithm 2.
4. Performance testing of the ETLBO algorithm
In this section, the 20 benchmark mathematical functions in Table 2 are used to verify the performance of the ETLBO algorithm. As shown in Table 2, F1–F7 are uni-modal test functions, used to test the convergence accuracy and solution capability of the ETLBO algorithm; F8–F12 are multi-modal test functions; and F13–F20 are fixed-dimension multi-modal test functions. F8–F20 together test the global search capability of the ETLBO algorithm. Several state-of-the-art optimization algorithms serve as comparison algorithms, as recorded in Table 3: this section compares the ETLBO algorithm with GA, ALO, DA, MFO, SCA and the original TLBO. The parameter settings of the seven algorithms for the 20 CEC benchmark test functions are shown in Table 3.
Due to the stochastic nature of meta-heuristics, the result of a single run may be unreliable. Therefore, each algorithm is run 30 times independently to reduce statistical error. The mean and standard deviation of the solutions over the 30 independent runs are reported for the 10-, 30- and 50-dimensional functions. A maximum of 1000 iterations is used as the stopping criterion. All experimental results are recorded in Tables 4–10. Note that the closer the mean is to the theoretical optimum of the test function and the smaller the standard deviation, the better the algorithm's convergence accuracy, solution quality and stability. In addition, the best results among the seven algorithms are highlighted in bold in Tables 4–10.
Figure 2 shows the comparison of the convergence process curves of the seven algorithms on the 20 CEC benchmark test functions, where the horizontal and vertical axes represent the number of iterations and the fitness value, respectively.
The choice of 10, 30 and 50 as dimensions for benchmark functions is generally because they are representative. Lower dimensions, such as 10, can be used to evaluate the performance of algorithms on relatively smaller problem sizes. Higher dimensions, such as 50, can be used to evaluate the performance of algorithms on larger and more complex problem sizes. Additionally, selecting these specific dimensions facilitates the comparison of different algorithm performances. These dimensions have been widely used and have become standard settings for benchmark test functions. By conducting tests on these dimensions, the results become more comparable and help researchers better understand algorithm performance across different problem sizes.
All experiments were run on a 64-bit XiaoXin Air 15IKBR laptop, with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz processor and 8.00 GB of RAM. The simulation software is MATLAB 2021a.
4.1. Experiments on 7 uni-modal test functions
In this subsection, the 7 uni-modal test functions are used to evaluate the performance of ETLBO. The experimental results for the 10-, 30- and 50-dimensional functions are listed in Tables 4, 5 and 6, respectively. From Tables 4–6, it can be seen that, compared to the other six algorithms, the ETLBO algorithm achieves smaller optimal solutions, means and standard deviations on almost all uni-modal functions in 10, 30 and 50 dimensions. The results on F1, F2, F3, F4 and F7 in particular indicate that the ETLBO algorithm produces satisfactory results on uni-modal functions by effectively exploiting the search space, with good convergence accuracy and stability. Additionally, Figures 2–4 graphically compare convergence speed and solution quality on the 7 uni-modal functions (F1–F7) in 10, 30 and 50 dimensions, respectively. These figures make clear that the ETLBO has the fastest convergence speed and the highest convergence accuracy on most functions.
4.2. Experiments on 5 multi-modal test functions
In this subsection, the 5 multi-modal test functions are used to evaluate the performance of ETLBO. The experimental results for the 10-, 30- and 50-dimensional functions are listed in Tables 7, 8 and 9, respectively, with the best results in boldface; smaller means indicate better performance and smaller standard deviations indicate stronger stability. According to these tables, the proposed ETLBO algorithm performs best on most functions. From Table 7, ETLBO outperforms the other algorithms on F8, F9, F10 and F11, while ALO has the smallest mean and standard deviation on F12. From Tables 8 and 9, ETLBO again shows the best performance on F8, F9, F10 and F11. In brief, the proposed ETLBO improves the solution quality on multi-modal functions. Additionally, Figures 5–7 graphically compare convergence speed and solution quality on the 5 multi-modal functions (F8–F12) in 10, 30 and 50 dimensions, respectively; they show that ETLBO has the fastest convergence speed and the highest convergence accuracy on most functions.
4.3. Experiments on 8 fixed dimension test functions
In this subsection, the 8 fixed-dimension functions are used to evaluate the performance of the proposed ETLBO algorithm. The experimental results are listed in Table 10. According to this table, the ETLBO algorithm finds the best solution on five functions (F15, F16, F18, F19, F20). Therefore, the ETLBO is well suited to multi-modal test functions with fixed dimensions.
5. Boiler combustion optimization
In this section, the proposed ETLBO algorithm is used to optimize the boiler's adjustable parameters to reduce the NOx emission concentration. First, based on the boiler operating parameters, an extreme learning machine (ELM) [52] is applied to build the NOx emission model. Second, based on the established model, the ETLBO is used to reduce NOx emissions. The extreme learning machine is an effective modeling method that has been applied in many fields; a detailed description can be found in Reference [52], so this paper only describes how it is applied.
5.1. Analysis and modeling of NOx emissions
This section focuses on using the ELM to establish a prediction model of NOx emissions. For a 330 MW circulating fluidized bed boiler (CFBB), a total of 28,800 sets of operating data were collected, comprising 26 input parameters and one output parameter; a detailed description of these data can be found in reference [53]. Given the small sampling interval and low fluctuation of the CFBB data, this paper samples the data every 80 records, yielding a total of 360 datasets, which is sufficient to ensure the generalization ability of the prediction model. The datasets are divided into training, validation and testing data in proportions of 65, 15 and 20%, respectively. The training data are used to establish the prediction model and determine its parameters, the validation data to verify the model's effectiveness and the test data to test its generalization ability. The ELM model is configured with 26 input nodes, 41 hidden-layer nodes and 1 output node, with the sigmoid function as the hidden-layer activation function. To bring the boiler operating data to a common scale, they are processed by the min-max normalization method.
To demonstrate the accuracy and effectiveness of the NOx emission prediction model established with the extreme learning machine, 30 independent test experiments were conducted and the corresponding average results are recorded in Table 11 (obtained after normalization). From the testing results, the mean value reaches a precision of 10−2, indicating that the prediction model is highly accurate; the S.D. value reaches a precision of 10−3, indicating good stability; and the R2 value is extremely close to 1, demonstrating strong generalization and regression capabilities. In addition, the mean absolute percentage error (MAPE) reaches a precision of 10−4 or 10−5 and the mean absolute error (MAE) a precision of 10−2, showing that the prediction model approximates the target values effectively.
In Figures 8 and 9, the solid line with red asterisks represents the actual NOx emission of one boiler, while the dotted line with black circles represents the predicted NOx emission of the ELM model. Seen from Figures 8 and 9, the predicted NOx emission can almost match the actual NOx emission. Therefore, the NOx emission model built by ELM is effective.
5.2. Optimizing NOx emissions
In this subsection, based on the established NOx emission prediction model by ELM, the ETLBO algorithm is applied to optimize the adjustment parameters of the CFBB for reducing NOx emissions.
In the process of optimizing NOx emissions, the main focus is on optimizing 11 adjustable parameters that have a significant impact on NOx emissions, while keeping the remaining parameters unchanged. The specific details are shown in Table 12. The objective function for optimizing NOx emissions is as follows:
Based on the established NOx emission prediction model, the ETLBO algorithm is applied to tune the adjustable parameters to reduce NOx emissions. In this section, the maximum number of iterations of the ETLBO algorithm is set to 50, the population size to 40 and the dimension to 11; other parameters remain unchanged. The operating-condition data are then optimized. Figure 10 shows the NOx emission curves before and after optimization, where the solid line with red asterisks represents the data before optimization and the dotted line with blue circles the data after optimization. As Figure 10 shows, after optimizing 72 sets of data with the ETLBO algorithm, NOx emissions are reduced to a certain degree. This confirms that the NOx emission model based on the ELM algorithm is effective and that the proposed ETLBO algorithm is an effective strategy for solving complex global optimization problems.
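The objective evaluated by ETLBO in this setup can be sketched as a thin wrapper around the prediction model: the 11 adjustable parameters are the decision variables, while the remaining operating parameters stay fixed at their current values. The function and argument names below are illustrative; the paper's objective formula is not reproduced.

```python
import numpy as np

def make_nox_objective(model, fixed_inputs, adj_idx):
    """Builds the minimization objective: the model's predicted NOx emission
    as a function of the adjustable parameters x, with all other operating
    parameters held at their current values. `model` is any object with a
    predict(X) method (e.g. the ELM); `fixed_inputs` is the full 26-element
    operating vector; `adj_idx` indexes the 11 adjustable parameters."""
    def objective(x):
        inputs = fixed_inputs.copy()
        inputs[adj_idx] = x                       # overwrite adjustable parameters
        return float(model.predict(inputs[None, :])[0])
    return objective
```

ETLBO would then minimize `objective` over an 11-dimensional box given by the parameters' operating limits.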
According to Table 13, three sets of data, D, E and F, were randomly selected from the test data for this experiment. Comparing the data before and after parameter optimization shows that NOx emissions were significantly reduced. After optimization, the coal feed amount decreased, the primary air flow rate increased and the flue gas oxygen concentration decreased for samples D, E and F; the secondary air flow rate of sample D decreased, while those of samples E and F both increased. In the end, the NOx emissions of samples D, E and F are reduced by 93.8604, 75.5935 and 27.0340 mg/Nm3, respectively. Therefore, considering only the reduction of NOx emissions, the ETLBO-ELM method proposed in this paper is an effective strategy.
6. Conclusions
To enhance the performance of the original TLBO algorithm, an evolutionary TLBO (ETLBO) algorithm is proposed. Compared to TLBO, the proposed ETLBO uses a chaotic function to initialize the population, introduces an inertia weight, an acceleration coefficient and a self-adaptive teaching factor into the teaching phase, and uses the idea of heredity to update the population in the learning phase. Twenty benchmark test functions are used to verify the performance of ETLBO, and the experimental results show that the ETLBO outperforms the conventional TLBO on most test functions; the ETLBO therefore has good convergence ability. Additionally, the ETLBO is combined with an extreme learning machine to solve the boiler combustion optimization problem, and experimental results show that NOx emissions can be reduced. In conclusion, the ETLBO algorithm is an effective optimization method.
In the future, the performance of the ETLBO algorithm will be further improved and applied to engineering optimization problems. Additionally, further research is needed to provide rigorous mathematical proofs for the convergence of the ETLBO algorithm. The multi-objective version of the ETLBO algorithm and its application in uncertain engineering problems also deserve further study.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 62203332), the Natural Science Foundation of Tianjin (Grant No. 20JCQNJC00430) and the College Students' Innovative Entrepreneurial Training Plan Program (Grant No. 202310069032).
Conflict of interest
The authors declare there is no conflict of interest.