Research article

Optimizing boiler combustion parameters based on evolution teaching-learning-based optimization algorithm for reducing NOx emission concentration


  • Received: 20 August 2023 Revised: 20 October 2023 Accepted: 02 November 2023 Published: 08 November 2023
  • How to reduce a boiler's NOx emission concentration is an urgent problem for thermal power plants. Therefore, in this paper, we combine an evolution teaching-learning-based optimization algorithm with an extreme learning machine to optimize a boiler's combustion parameters for reducing the NOx emission concentration. The evolution teaching-learning-based optimization algorithm (ETLBO) is a variant of the conventional teaching-learning-based optimization algorithm, which uses a chaotic mapping function to initialize individuals' positions and incorporates the idea of genetic evolution into the learner phase. To verify the effectiveness of ETLBO, 20 IEEE Congress on Evolutionary Computation (CEC) benchmark test functions are applied to test its convergence speed and convergence accuracy. Experimental results reveal that ETLBO shows the best convergence accuracy on most functions compared to other state-of-the-art optimization algorithms. In addition, the ETLBO is used to reduce boilers' NOx emissions by optimizing combustion parameters, such as the coal supply amount and air valve settings. Results show that ETLBO is well-suited to solving the boiler combustion optimization problem.

    Citation: Yunpeng Ma, Shilin Liu, Shan Gao, Chenheng Xu, Wenbo Guo. Optimizing boiler combustion parameters based on evolution teaching-learning-based optimization algorithm for reducing NOx emission concentration[J]. Mathematical Biosciences and Engineering, 2023, 20(11): 20317-20344. doi: 10.3934/mbe.2023899




    With the rapid development of science and technology, more and more engineering problems can be formulated as optimization problems. Early work on these problems relied mainly on classical mathematical techniques, but such methods often cannot find the global optimal solution effectively. Instead, many intelligent optimization algorithms inspired by natural phenomena have been developed and widely used in various scientific and technological fields in place of traditional optimization methods. These intelligent optimization techniques deliver good results on complex engineering problems, such as structural design problems, multi-channel steering operation problems and grinding operation problems, so they have attracted the attention of many scholars [1]. To date, many outstanding heuristic intelligence optimization algorithms [2,3,4] have been proposed to solve complex problems; they are listed in Table 1. All these artificial intelligence optimization algorithms have been successfully applied to various optimization problems, and their effectiveness has been demonstrated.

    Table 1.  Outstanding heuristic intelligence optimization algorithms.
    Abbreviation Full name
    PSO Particle Swarm Optimization [5]
    ACO Ant Colony Optimization [6]
    SFLA Shuffled Frog Leaping Algorithm [7]
    ABC Artificial Bee Colony [8]
    AFSO Artificial Fish Swarm Optimization [9]
    GWO Grey Wolf Optimization [10]
    BFO Bacteria Foraging Optimization [11]
    WOA Whale Optimization Algorithm [12]
    SO Snake Optimizer [13]
    GSA Gravitational Search Algorithm [14]
    GA Genetic Algorithm [15]
    ALO Ant Lion Optimizer [16]
    DA Dragonfly Algorithm [17]
    MFO Moth-Flame Optimization [18]
    SCA Sine Cosine Algorithm [19]
    TLBO Teaching-Learning-Based Optimization [20]
    ETLBO Evolution Teaching-Learning-Based Optimization [our method]


    In 2010, the Indian scholar Rao et al. [20] proposed a swarm intelligence algorithm, the teaching-learning-based optimization (TLBO) algorithm, inspired by the teaching-learning process in a classroom. Owing to its simple structure and few control parameters, it has inspired a wide range of studies and applications. The TLBO algorithm has been successfully applied to function optimization problems, engineering optimization problems and various other practical applications [21,22,23].

    The TLBO algorithm is studied in this paper. Its optimization idea regards the population as a class, in which the individual with the best fitness is the teacher. The teacher improves the average score of the whole class through teaching activities, thereby realizing the optimization evolution of the whole population. Students communicate with each other through a certain mechanism to maintain the diversity of the population and avoid premature convergence of the algorithm. The principle of TLBO is simple and easy to understand, and it requires few parameters to be set. TLBO has attracted the attention of many researchers worldwide since it was proposed. It has been successfully applied to the optimization of large-scale continuous nonlinear problems [24], identifying photovoltaic cell model parameters [25], optimizing the distribution of local automatic voltage adjustment in distributed systems [26], data clustering [27], optimization of assembly sequence planning for industrial robots [28], the set-union knapsack problem [29] and other problems.

    However, the TLBO algorithm still has several shortcomings. For instance, the TLBO algorithm performs well on low-dimensional and high-dimensional uni-modal functions, but on multi-modal functions it easily gets trapped in a local optimum, which is caused by the update mechanism of the teaching phase. As the iterations progress, the population individuals approach the optimal solution, causing a loss of population diversity. The convergence accuracy, convergence speed and arithmetic speed of the TLBO algorithm therefore still need to be improved.

    In recent years, domestic and foreign researchers have conducted extensive and in-depth studies on the above issues of the TLBO algorithm. Ghasemi et al. [30] introduced a mutation operator into the learning phase of the TLBO algorithm to enhance the population diversity. Li et al. [31] introduced inertia weight and acceleration weight into the teaching phase and learning phase to improve its convergence speed and the quality of the solution. Wang et al. [32] designed a sub-population-based teaching phase to enhance particle diversity and improve the convergence speed of the algorithm. Yu et al. [33] introduced feedback, mutation and crossover operators from differential evolution and chaotic wave algorithms into the TLBO algorithm, respectively improving its exploitation capability, the diversity of the population and its ability to escape local optima. Tsai [34] constructed a mutation strategy by randomly selecting the difference vector of two individuals as the third individual's mutation source. Rao and Patel [35] introduced an elite mechanism into the TLBO algorithm, improving its convergence accuracy. Zou et al. [36] introduced a dynamic grouping mechanism into the TLBO algorithm to enhance its global search capability. Chen et al. [37] introduced local learning and self-learning mechanisms into the TLBO algorithm to enhance its search capability. Sultana and Roy [38] introduced reverse learning and quasi-reverse learning mechanisms to improve its convergence speed and the quality of the solution. Zou et al. [39] solved global optimization problems by adding a dynamic group strategy to the TLBO algorithm, thus improving its global search capability. Tuo et al. [40] combined the harmony search and TLBO algorithms to effectively solve complex high-dimensional optimization problems.

    In order to further improve the performance of the TLBO algorithm, three improvement mechanisms are introduced: 1) A chaotic mapping function is used to initialize the population individuals, increasing population diversity and enhancing the global search capability. 2) In the teaching phase, three parameters, namely an inertia weight, an acceleration coefficient and a self-adaptive teaching factor, are introduced to improve the algorithm's arithmetic speed and the quality of the solution. 3) In the learning phase, the idea of heredity is used to update the population: the latest individuals are taken as the next iteration's population to maintain population diversity in the later stage of optimization and improve the global search capability. 20 IEEE Congress on Evolutionary Computation (CEC) benchmark test functions are used to verify the performance of the proposed algorithm. Compared to several state-of-the-art algorithms, namely GA, ALO, DA, MFO, SCA and TLBO, the experimental results show that the proposed algorithm has excellent performance in terms of convergence accuracy and global search capability.

    During the combustion process of a power station boiler, large amounts of polluting gases are produced, such as NOx, SO2 and CO2, which cause great harm to the human living environment; at the same time, a large amount of coal is consumed. Realizing dynamic multi-objective optimal control of the boiler combustion process under variable load is therefore an effective way to reduce environmental pollution and save coal resources; this is known as the boiler combustion optimization problem. It can be classified as a variable-load, multi-variable, constrained dynamic multi-objective optimization problem. In recent years, with the rapid development of artificial intelligence technology, many researchers have tried to use machine learning and heuristic optimization algorithms to optimize the adjustable operating parameters of the boiler combustion process, with the goal of improving the boiler thermal efficiency or reducing the emission concentration of polluting gases [41,42,43,44,45].

    The power station boiler is characterized by nonlinearity, strong coupling and large lag, which make it difficult for traditional optimization methods to achieve the goal of energy saving and emission reduction. However, heuristic intelligent optimization algorithms can perform the optimization and adapt to uncertainty in the absence of accurate analytical expressions or mathematical models of the system. Therefore, domestic and foreign scholars are keen to apply heuristic intelligent optimization algorithms to the boiler combustion optimization problem [46,47,48]. Rahat et al. used a novel multi-objective evolutionary algorithm and a data-driven model to find the trade-off between the emission concentration of nitrogen oxides and the carbon content of fly ash, effectively resolving the conflict between boiler thermal efficiency and NOx [49]. Reference [50] proposed a boiler combustion optimization algorithm based on big-data-driven case matching, using data mining technology to analyze the data in a Supervisory Information System (SIS) and establish a combustion case database. Online optimization matches the real-time operation data of a distributed control system (DCS) against the case database, finding the best operating parameters for the current working conditions and realizing online optimization of boiler combustion. Reference [51] used a deep neural network and a multi-objective optimization algorithm to achieve multi-objective optimization of the boiler combustion process, effectively balancing the thermal efficiency and nitrogen oxide emission concentration. In this paper, the proposed ETLBO is combined with an extreme learning machine to solve the boiler combustion problem for reducing NOx emissions.

    The contributions of this paper are summarized as follows:

    1) A kind of evolutionary teaching-learning-based optimization algorithm (ETLBO) is proposed.

    2) The proposed ETLBO is used to solve 20 benchmark testing functions.

    3) The proposed ETLBO-ELM method is applied to optimize the adjustment operation parameters of a 330 MW circulation fluidized bed boiler for reducing NOx emissions.

    The structure of this paper is as follows: The basic TLBO algorithm is given in Section 2. The proposed ETLBO is given in Section 3. Section 4 shows the performance evaluation of the ETLBO. Section 5 shows the boiler combustion model and optimization. The conclusion of this paper is in Section 6.

    The TLBO algorithm is inspired by the teaching-learning phenomenon. Students are regarded as population individuals, their grades in each subject are regarded as the solutions to be optimized, the number of subjects is the solution dimension and the best individual becomes the teacher. The core idea of the TLBO algorithm is to simulate the teaching-learning process of a class. First, the best individual in the population is selected as the teacher, who improves the students' grades through teaching, thus achieving the goal of improving the average grade of the class. Students can learn from each other by comparing their grades, complementing each other's strengths and making progress together. Therefore, the TLBO algorithm is divided into two phases: "teaching phase" and "learning phase".

    In an ideal situation, students can learn from the teacher's guidance and reach the same level as the teacher. However, due to the interaction of many other factors, the teacher can only help the students reach a certain level. This phenomenon can be expressed mathematically as follows:

    $\mathrm{difference\_mean}_i = r_i (M_{teacher} - T_F M_i)$, (1)
    $T_F = \mathrm{round}[1 + \mathrm{rand}(0, 1)]$, (2)
    $X_{new,i} = X_{old,i} + \mathrm{difference\_mean}_i$. (3)

    Formula (1) represents the difference between the best grade and the average grade; $r_i$ is a random number between 0 and 1; $M_{teacher}$ is the best individual, regarded as the teacher; $T_F$ is a teaching factor, given in formula (2); $M_i$ is the average score at the i-th iteration. In formula (3), $X_{old,i}$ is the old solution at the i-th iteration and $X_{new,i}$ is the new solution after updating.

    Students learn from each other through mutual communication and learning to acquire knowledge. Students with lower grades learn from those with higher grades, complementing each other's strengths and making progress together. Two students, Xi and Xj, are randomly selected. For the minimum optimization problem, if Xi is better than Xj, then Xj learns from Xi; otherwise, if Xj is better than Xi, then Xi learns from Xj. This phenomenon can be expressed mathematically as follows:

    $X_{new,i} = X_{old,i} + r_i (X_i - X_j)$, if $f(X_i) < f(X_j)$, (4)
    $X_{new,i} = X_{old,i} + r_i (X_j - X_i)$, if $f(X_i) \geq f(X_j)$. (5)

    As shown in formulas (4) and (5), if the updated Xnew,i is better than the old Xold,i, then Xnew,i is accepted; otherwise, Xold,i remains unchanged.

    The pseudo-code of the original TLBO is given in Algorithm 1.

    Algorithm 1
    1: Objective function f(x), xi(i=1,2,...,n)
    2: Initialize algorithm parameters.
    3: Generate the initial population of individuals.
    4: Evaluate the fitness of the population.
    5: While the stopping criterion is not met do
    6: Teaching phase
    7: Select the best individual Xbest in the current population.
    8: Calculate the mean value Xmean.
    9: For each student in population do
    10: Xi learns from Xbest and produces a new solution Xnew,i by using Eq (3).
    11: Evaluate new solutions.
    12: Update better solutions.
    13: End For
    14: Learning phase
    15: For each student in population do
    16: Randomly select a learner Xj.
    17: If Xi is superior to Xj then
    18: Produce new solution Xnew,i by using Eq (4).
    19: Else
    20: Produce new solution Xnew,i by using Eq (5).
    21: End If
    22: Evaluate new solutions.
    23: Update better solutions.
    24: End For
    25: gen = gen + 1.
    26: End While
    27: Output the best solution found.
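
    For concreteness, a minimal Python sketch of Algorithm 1 is given below; it is our illustrative rendering of the pseudo-code, with a sphere objective as a placeholder, not code from the original paper.

```python
import numpy as np

def tlbo(f, lb, ub, pop_num=60, dim=10, max_iter=1000, seed=0):
    """Minimal TLBO loop following Algorithm 1 (minimization)."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((pop_num, dim))   # random initial population
    fit = np.apply_along_axis(f, 1, X)                # evaluate fitness
    for _ in range(max_iter):
        # Teaching phase: Eqs (1)-(3)
        teacher = X[np.argmin(fit)]                   # best individual is the teacher
        mean = X.mean(axis=0)                         # class average
        TF = rng.integers(1, 3)                       # round(1 + rand) gives 1 or 2
        Xnew = np.clip(X + rng.random((pop_num, 1)) * (teacher - TF * mean), lb, ub)
        fnew = np.apply_along_axis(f, 1, Xnew)
        better = fnew < fit                           # greedy acceptance
        X[better], fit[better] = Xnew[better], fnew[better]
        # Learning phase: Eqs (4)-(5)
        j = rng.permutation(pop_num)                  # random learning partners
        sign = np.where(fit < fit[j], 1.0, -1.0)[:, None]
        Xnew = np.clip(X + rng.random((pop_num, 1)) * sign * (X - X[j]), lb, ub)
        fnew = np.apply_along_axis(f, 1, Xnew)
        better = fnew < fit
        X[better], fit[better] = Xnew[better], fnew[better]
    return X[np.argmin(fit)], fit.min()

# Example: sphere function F1 in 10 dimensions
best_x, best_f = tlbo(lambda x: np.sum(x**2), lb=-100.0, ub=100.0)
```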

    In order to improve the convergence accuracy and convergence speed of the original TLBO algorithm, a kind of ETLBO algorithm is proposed. The ETLBO algorithm adopts three adjustment mechanisms: First, the initialization of the population individuals is improved by applying chaotic mapping sequences, which can enhance the population diversity. Second, in the teaching phase, three coefficients are introduced to improve the convergence speed and the solution quality. Finally, in the learning phase, the idea of heredity is used to update the population, and all the latest individuals are taken as the population for the next iteration, which helps to maintain the diversity of the population in the later stage of optimization and further improve the global search capability. The specific implementation process of the ETLBO algorithm is described in the following subsections.

    In the original TLBO algorithm, the population individuals are initialized using pseudo-random sequences, which results in high uncertainty in the diversity of the population and cannot guarantee that the solutions are uniformly distributed in the search space, ultimately leading to premature convergence of the algorithm. Since chaotic mapping has the properties of ergodicity, randomness and overall stability, using the chaotic sequence generated by a chaotic mapping function as the initial positions of the population individuals can make the initial solutions traverse the search space more uniformly, thereby improving the diversity of the population and the global search capability. Therefore, this paper adopts logistic chaos mapping to initialize the population; the standard logistic chaos mapping is:

    $Z_{k+1} = \mu Z_k (1 - Z_k)$, (6)
    $Z_0 \notin \{0, 0.25, 0.5, 0.75, 1.0\}$, $\mu \in [0, 4]$.

    In formula (6), $\mu$ is a parameter between 0 and 4; $Z_k$ is the k-th chaotic variable, with values in the range [0, 1].

    After considering various factors, this paper sets μ=4. The specific formula is as follows:

    $Z_{k+1} = 4 Z_k (1 - Z_k)$, (7)
    $Z_0 \notin \{0, 0.25, 0.5, 0.75, 1.0\}$.

    Finally, the formula for converting chaotic solutions into solutions in the solution space of practical problems is as follows:

    $pop_{new} = \frac{range}{2} \times (1 + pop_{old}) + lower$. (8)

    In formula (8), $pop_{old}$ is the population matrix before transformation; $range$ is a matrix consisting of 60 × 1 copies of $(ub - lb)$; $lower$ is a matrix consisting of 60 × 1 copies of $lb$; $ub$ is the upper bound; $lb$ is the lower bound; $pop_{new}$ is the population matrix after transformation.
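
    As a sketch, the chaotic initialization of Eqs (7) and (8) can be coded as follows; the 60-individual population matches Table 3, the helper name is ours, and since the logistic map already yields values in [0, 1] the sketch uses a direct affine map to [lb, ub] rather than the (1 + pop)/2 form of Eq (8).

```python
import numpy as np

def logistic_init(pop_num, dim, lb, ub, z0=0.7):
    """Initialize a population with logistic chaos, Eq (7), then map to [lb, ub]."""
    Z = np.empty((pop_num, dim))
    z = z0                              # z0 must avoid {0, 0.25, 0.5, 0.75, 1.0}
    for i in range(pop_num):
        for j in range(dim):
            z = 4.0 * z * (1.0 - z)     # logistic map with mu = 4
            Z[i, j] = z
    return lb + (ub - lb) * Z           # affine transform into the search space

pop = logistic_init(60, 10, lb=-100.0, ub=100.0)
```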

    In the original TLBO algorithm, the teaching coefficient TF affects the search speed and search ability of the algorithm and determines the change of the mean value. A larger TF value speeds up the search, but it also makes the algorithm more likely to get trapped in local optima; a smaller TF value slows down the search, but it also helps the algorithm escape local optima. In addition, the teaching coefficient TF is a random number with a value of 1 or 2, so students can only fully accept or fully reject the teacher's teaching, which is inconsistent with reality. After considering various situations, this paper proposes a composite individual update mechanism for the teaching phase, with three parameters: an inertia weight $\omega_i$, an acceleration coefficient $\phi_i$ and a self-adaptive teaching coefficient $T_F$. The inertia weight and acceleration coefficient improve the search speed and the quality of the solution. The self-adaptive teaching coefficient is a monotonically decreasing function of the current iteration number and the maximum number of iterations: it speeds up convergence in the early stage and slows it down in the later stage to avoid getting trapped in local optima, and the combination of the two avoids premature convergence of the algorithm. The specific formulas are as follows:

    $\omega_i = \frac{1}{1 + \exp(f(i)/a) \times t}$, (9)
    $\phi_i = \left(\frac{1}{1 + \exp(f(i)/a)}\right)^t$, (10)
    $T_F = 1 + \cos\left(\frac{\pi t}{2T}\right)$, (11)
    $X_{new,i} = \omega_i X_{old,i} + \phi_i (M_{teacher} - T_F M_i)$. (12)

    In formula (9), $\omega_i$ is the inertia weight, which acts on $X_{old,i}$. In formula (10), $\phi_i$ is the acceleration coefficient, which improves the search speed of the teaching phase. In formula (11), $T_F$ is the self-adaptive teaching factor, which governs how early the algorithm converges. In formulas (9) and (10), $f(i)$ is the fitness value of the i-th student and $a$ is the maximum fitness value in the current iteration. $t$ is the current iteration number, used in formulas (9)–(11), and $T$ is the maximum number of iterations, used in formula (11). In formula (12), $M_{teacher}$ is the best individual in the population, that is, the teacher, and $M_i$ is the average score at the i-th iteration. After applying formula (12), if the updated $X_{new,i}$ is better than the old $X_{old,i}$, then $X_{new,i}$ is accepted; otherwise, $X_{old,i}$ remains unchanged.
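
    A hedged sketch of this composite teaching update follows; the parenthesization of Eqs (9) and (10) reflects our reading of the printed formulas.

```python
import numpy as np

def teaching_update(X, fit, t, T):
    """ETLBO teaching phase, Eqs (9)-(12), for a minimization problem."""
    a = fit.max()                                    # maximum fitness in this iteration
    w = 1.0 / (1.0 + np.exp(fit / a) * t)            # inertia weight, Eq (9)
    phi = (1.0 / (1.0 + np.exp(fit / a))) ** t       # acceleration coefficient, Eq (10)
    TF = 1.0 + np.cos(np.pi * t / (2.0 * T))         # self-adaptive teaching factor, Eq (11)
    teacher = X[np.argmin(fit)]                      # best individual acts as the teacher
    mean = X.mean(axis=0)                            # class average M_i
    # Eq (12): weighted old position plus an accelerated step toward the teacher
    return w[:, None] * X + phi[:, None] * (teacher - TF * mean)
```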

    During the learning process of the original TLBO algorithm, the population of the next iteration is generated through mutual communication and learning among students: two students are randomly selected and the one with poorer performance learns from the one with better performance. However, this method reduces the diversity of the population in the later optimization stages, which can easily lead to getting trapped in local optima. Because the idea of heredity is a search technique based on self-adaptive probability, it increases the flexibility of the search process. Although this probabilistic feature may produce some low-fitness individuals, more excellent individuals are generated as the evolution process continues, gradually driving the population toward a state containing an approximately optimal solution. Moreover, the idea of heredity is scalable and easy to combine with other algorithms to produce hybrids that integrate the advantages of both. Based on the learning process of the original TLBO algorithm, this paper therefore adopts the idea of heredity to update the population: all the latest individuals generated through crossover and mutation operations are used as the population of the next iteration. This helps to maintain the diversity of the population in the later optimization stages and further improves the global search capability. The specific computation steps of ETLBO are described below and the flowchart of the ETLBO algorithm is presented in Figure 1.

    Figure 1.  The flowchart of the ETLBO algorithm.

    Step-1: The mutual learning among students is conducted according to the "learning phase" of the original TLBO algorithm, where the students with poor performance learn from those with good performance. This method satisfies the following conditions: if the updated Xnew,i is better than the old Xold,i, then Xnew,i is accepted; otherwise, Xold,i remains unchanged.

    Step-2: Sort all individuals in ascending order based on their fitness values and divide them into two groups, A and B, according to certain rules.

    Group A: Individuals ranked 1st, 3rd, 5th, 7th, ..., etc.

    Group B: Individuals ranked 2nd, 4th, 6th, 8th, ..., etc.

    Step-3: Crossover operation: The first half of individuals in groups A and B are crossed over according to certain rules, and the resulting offspring individuals replace the second half of individuals with lower rankings. The specific formula is as follows:

    $X_{new,j} = \begin{cases} 0.5 \times [(1+\beta) \times A_i + (1-\beta) \times B_i] \\ 0.5 \times [(1-\beta) \times A_i + (1+\beta) \times B_i] \end{cases}, \quad i = 1, 2, \ldots, \frac{N}{4}; \; j = \frac{N}{4}+1, \frac{N}{4}+2, \ldots, \frac{N}{2}$, (13)
    $\beta = \begin{cases} (r+2)^{\frac{1}{1+\eta}}, & r \leq 0.5 \\ \left(\frac{1}{2-2r}\right)^{\frac{1}{1+\eta}}, & r > 0.5. \end{cases}$ (14)

    In formula (13), $A_i$ is the i-th student in group A, $B_i$ is the i-th student in group B and $N$ is the population size. In formula (14), $\beta$ is a balancing parameter, $r$ is a random number between 0 and 1 and $\eta$ is a user-defined parameter; the larger its value, the closer the offspring individuals are to their parents.

    Step-4: Mutation operation: All students are mutated one by one according to the mutation probability $P_m$. For a given property value of an individual, if a random number $r \in [0, 1]$ satisfies $r < P_m$, a mutation operation is performed, that is, the property value is reflected within its bounds. The specific formula is as follows:

    $X_{i,j} = ub + lb - X_{i,j}, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, d.$ (15)

    In formula (15), Xi,j is the j-th property value of the i-th individual, ub is the upper constraint, lb is the lower constraint, N is the population size and d is the population dimension.

    Step-5: Take all the latest individuals generated by crossover and mutation operations as the population for the next iteration.
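
    A sketch of the crossover and mutation of Steps 3 and 4 is given below; Eq (14) closely resembles the simulated binary crossover distribution (where the first branch is conventionally $(2r)^{1/(1+\eta)}$, while the sketch implements the formula as printed), and the settings η = 40 and Pm = 0.01 follow Table 3. The grouping and replacement bookkeeping of Steps 2 and 3 are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(A, B, eta=40.0):
    """Eqs (13)/(14): blend paired parents from groups A and B into two children."""
    r = rng.random(A.shape)
    beta = np.where(r <= 0.5,
                    (r + 2.0) ** (1.0 / (1.0 + eta)),              # first branch as printed
                    (1.0 / (2.0 - 2.0 * r)) ** (1.0 / (1.0 + eta)))
    child1 = 0.5 * ((1.0 + beta) * A + (1.0 - beta) * B)
    child2 = 0.5 * ((1.0 - beta) * A + (1.0 + beta) * B)
    return child1, child2

def mutate(X, lb, ub, pm=0.01):
    """Eq (15): with probability pm, reflect a property value to ub + lb - x."""
    mask = rng.random(X.shape) < pm
    return np.where(mask, ub + lb - X, X)
```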

    The computational complexity of ETLBO is $O(Max\_iter \times pop\_num \times dim)$, where $Max\_iter$ is the maximum number of iterations, $pop\_num$ is the population size and $dim$ is the population dimension.

    The pseudo-code of the proposed ETLBO is given in Algorithm 2.

    Algorithm 2
    1: Objective function f(x), xi(i=1,2,...,n)
    2: Initialize algorithm parameters.
    3: Use logistic chaos mapping to generate the initial population of individuals.
    4: Evaluate the fitness of the population.
    5: While the stopping criterion is not met do
    6: Teaching phase
    7: Select the best individual Xbest in the current population.
    8: Calculate the mean value Xmean.
    9: For each student in population do
    10: Xi learns from Xbest and produces a new solution Xnew,i by using Eq (12).
    11: Evaluate new solutions.
    12: Update better solutions.
    13: End For
    14: Learning phase
    15: For each student in population do
    16: Randomly select a learner Xj.
    17: If Xi is superior to Xj then
    18: Produce new solution Xnew,i by using $X_{new,i} = X_{old,i} + r(X_i - X_j)$.
    19: Else
    20: Produce new solution Xnew,i by using $X_{new,i} = X_{old,i} + r(X_j - X_i)$.
    21: End If
    22: Evaluate new solutions.
    23: Update better solutions.
    24: End For
    25: Crossover and Mutation
    26: Sort by fitness value.
    27: Divide the students into two groups according to Step-2.
    28: For each student in the first half of each group do
    29: Perform crossover operation.
    30: Produce new solution Xnew,i by using Eq (13).
    31: End For
    32: Replace the lower-ranked students with the offspring students obtained.
    33: For each student in population do
    34: Perform mutation operation according to the mutation probability.
    35: Produce new solution Xnew,i by using Eq (15).
    36: End For
    37: Take all the latest students as the population for the next iteration.
    38: gen = gen+1.
    39: End While
    40: Output the best solution found.

    In this section, the 20 benchmark mathematical functions in Table 2 are used to verify the performance of the ETLBO algorithm. As seen from Table 2, F1–F7 are uni-modal test functions used to test the convergence accuracy and solution capability of the ETLBO algorithm, F8–F12 are multi-modal test functions and F13–F20 are fixed-dimension multi-modal test functions. F8–F20 together test the global search capability of the ETLBO algorithm. Several state-of-the-art optimization algorithms are used as comparison algorithms, as recorded in Table 3. This section compares the ETLBO algorithm with the GA, ALO, DA, MFO, SCA and original TLBO algorithms. The parameter settings of the seven algorithms for the 20 CEC benchmark test functions are shown in Table 3.

    Table 2.  20 benchmark test functions.
    Optimization function Value range Optimum Type
    $F_1(x) = \sum_{i=1}^{n} x_i^2$  $[-100, 100]^n$  0  Unimodal
    $F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$  $[-10, 10]^n$  0  Unimodal
    $F_3(x) = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2$  $[-100, 100]^n$  0  Unimodal
    $F_4(x) = \max_i \{|x_i|, 1 \leq i \leq n\}$  $[-100, 100]^n$  0  Unimodal
    $F_5(x) = \sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$  $[-30, 30]^n$  0  Unimodal
    $F_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$  $[-100, 100]^n$  0  Unimodal
    $F_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$  $[-1.28, 1.28]^n$  0  Unimodal
    $F_8(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10]$  $[-5.12, 5.12]^n$  0  Multimodal
    $F_9(x) = -20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$  $[-32, 32]^n$  0  Multimodal
    $F_{10}(x) = \tfrac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$  $[-600, 600]^n$  0  Multimodal
    $F_{11}(x) = \tfrac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2[1 + 10\sin^2(\pi y_{i+1})] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$,
    where $y_i = 1 + \tfrac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \leq x_i \leq a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$
    $[-50, 50]^n$  0  Multimodal
    $F_{12}(x) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2[1 + \sin^2(3\pi x_{i+1})] + (x_n - 1)^2[1 + \sin^2(2\pi x_n)]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$  $[-50, 50]^n$  0  Multimodal
    $F_{13}(x) = \left[\tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right]^{-1}$  $[-65.536, 65.536]$  1  Fixed dimension
    $F_{14}(x) = \sum_{i=1}^{11} \left[a_i - \tfrac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$  $[-5, 5]$  0.0003075  Fixed dimension
    $F_{15}(x) = 4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$  $[-5, 5]$  −1.0316285  Fixed dimension
    $F_{16}(x) = -\sum_{i=1}^{4} c_i \exp\left[-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\right]$  $[0, 1]$  −3.86  Fixed dimension
    $F_{17}(x) = -\sum_{i=1}^{4} c_i \exp\left[-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\right]$  $[0, 1]$  −3.32  Fixed dimension
    $F_{18}(x) = -\sum_{i=1}^{5} [(x - a_i)(x - a_i)^T + c_i]^{-1}$  $[0, 10]$  −10.1532  Fixed dimension
    $F_{19}(x) = -\sum_{i=1}^{7} [(x - a_i)(x - a_i)^T + c_i]^{-1}$  $[0, 10]$  −10.4028  Fixed dimension
    $F_{20}(x) = -\sum_{i=1}^{10} [(x - a_i)(x - a_i)^T + c_i]^{-1}$  $[0, 10]$  −10.5363  Fixed dimension

    Table 3.  Algorithm parameter settings.
    Algorithm Population size Number of iterations Others
    GA 60 1000 pc = 0.8, pm = 0.05
    ALO 60 1000
    DA 60 1000 β = 3/2
    MFO 60 1000 b = 1
    SCA 60 1000 a = 2
    TLBO 60 1000
    ETLBO 60 1000 μ = 4, η = 40, pm = 0.01


    Due to the stochastic nature of meta-heuristics, the results of a single run may be unreliable. Therefore, each algorithm is run 30 times independently to reduce the statistical error. The performance of the different optimization algorithms, in terms of the mean and standard deviation of the solutions, is obtained from the 30 independent runs for the 10, 30 and 50 dimensional functions. A maximum of 1000 iterations is used as the stopping criterion. All experimental results are recorded in Tables 4–10. It should be noted that the closer the mean value is to the theoretical optimum of the test function and the smaller the standard deviation, the better the convergence accuracy, solution quality and stability of the algorithm. In addition, the best results among the seven algorithms are highlighted in bold font in Tables 4–10.
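
    The 30-run protocol can be reproduced with a small driver like the one below; `algorithm` stands for any of the seven optimizers and is assumed to return its best fitness value (names are ours).

```python
import numpy as np

def benchmark(algorithm, f, runs=30, **kwargs):
    """Run a stochastic optimizer `runs` times and report the mean and the
    standard deviation of the best fitness values, as in Tables 4-10."""
    best = np.array([algorithm(f, seed=s, **kwargs) for s in range(runs)])
    return best.mean(), best.std()

# Example with the sphere function F1, assuming an optimizer that returns
# its best fitness: mean_f1, std_f1 = benchmark(opt, lambda x: np.sum(x**2))
```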

    Table 4.  Experiment comparison results on 7 uni-modal testing functions with 10 dimensions.
    Function Item GA ALO DA MFO SCA TLBO ETLBO
    F1 Mean 1.01 × 10−01 1.31 × 10−09 5.24 × 10−01 1.74 × 10−31 6.91 × 10−31 3.73 × 10+00 0.00 × 10+00
    Std 9.05 × 10−02 5.47 × 10−10 1.08 × 10+00 5.33 × 10−31 3.09 × 10−30 1.72 × 10+00 0.00 × 10+00
    F2 Mean 4.32 × 10−02 3.00 × 10−01 1.04 × 10+00 2.07 × 10−19 7.16 × 10−22 4.51 × 10+00 0.00 × 10+00
    Std 2.86 × 10−02 9.98 × 10−01 2.26 × 10+00 2.90 × 10−19 1.39 × 10−21 1.40 × 10+00 0.00 × 10+00
    F3 Mean 1.92 × 10+03 1.46 × 10−07 8.64 × 10+00 7.21 × 10−10 4.64 × 10−15 1.37 × 10+01 0.00 × 10+00
    Std 1.09 × 10+03 2.02 × 10−07 1.17 × 10+01 1.70 × 10−09 1.49 × 10−14 1.15 × 10+01 0.00 × 10+00
    F4 Mean 3.75 × 10+00 3.90 × 10−05 7.67 × 10−01 7.03 × 10−03 2.67 × 10−11 9.06 × 10−01 0.00 × 10+00
    Std 1.45 × 10+00 4.26 × 10−05 8.59 × 10−01 3.14 × 10−02 3.84 × 10−11 3.56 × 10−01 0.00 × 10+00
    F5 Mean 2.39 × 10+02 2.76 × 10+01 2.90 × 10+02 3.30 × 10+01 6.98 × 10+00 2.54 × 10+02 8.72 × 10+00
    Std 5.67 × 10+02 8.00 × 10+01 4.37 × 10+02 8.46 × 10+01 3.07 × 10−01 1.29 × 10+02 5.27 × 10−01
    F6 Mean 1.23 × 10−01 1.10 × 10−09 9.29 × 10−01 1.40 × 10−31 3.06 × 10−01 5.66 × 10+00 6.08 × 10−01
    Std 1.47 × 10−01 4.26 × 10−10 1.43 × 10+00 2.24 × 10−31 1.23 × 10−01 2.80 × 10+00 7.27 × 10−01
    F7 Mean 3.07 × 10−03 4.74 × 10−03 7.74 × 10−03 2.20 × 10−03 1.20 × 10−03 1.73 × 10−01 2.16 × 10−03
    Std 1.99 × 10−03 2.96 × 10−03 6.92 × 10−03 1.26 × 10−03 2.48 × 10−03 1.36 × 10−01 1.40 × 10−03

    Table 5.  Experiment comparison results on 7 uni-modal testing functions with 30 dimensions.
    Function Item GA ALO DA MFO SCA TLBO ETLBO
    F1 Mean 2.30 × 10+03 1.97 × 10−07 2.74 × 10+02 3.00 × 10+03 2.32 × 10−03 2.30 × 10+01 1.37 × 10−21
    Std 1.59 × 10+03 8.53 × 10−08 2.91 × 10+02 4.70 × 10+03 4.98 × 10−03 6.74 × 10+00 4.97 × 10−21
    F2 Mean 3.08 × 10+01 3.01 × 10+01 9.88 × 10+00 3.35 × 10+01 3.33 × 10−06 2.28 × 10+01 0.00 × 10+00
    Std 8.55 × 10+00 4.35 × 10+01 4.28 × 10+00 1.57 × 10+01 6.82 × 10−06 2.77 × 10+00 0.00 × 10+00
    F3 Mean 3.71 × 10+04 1.70 × 10+02 5.65 × 10+03 1.91 × 10+04 1.94 × 10+03 1.60 × 10+02 0.00 × 10+00
    Std 1.04 × 10+04 1.10 × 10+02 7.08 × 10+03 1.22 × 10+04 2.23 × 10+03 7.24 × 10+01 0.00 × 10+00
    F4 Mean 5.58 × 10+01 7.48 × 10+00 1.30 × 10+01 4.63 × 10+01 1.20 × 10+01 1.13 × 10+00 0.00 × 10+00
    Std 9.58 × 10+00 3.98 × 10+00 8.06 × 10+00 1.19 × 10+01 8.92 × 10+00 3.05 × 10−01 0.00 × 10+00
    F5 Mean 2.94 × 10+05 1.25 × 10+02 1.57 × 10+04 1.43 × 10+04 1.13 × 10+02 2.01 × 10+03 2.84 × 10+01
    Std 4.90 × 10+05 2.82 × 10+02 3.05 × 10+04 3.27 × 10+04 2.85 × 10+02 8.93 × 10+02 6.16 × 10−01
    F6 Mean 3.12 × 10+03 1.90 × 10−07 4.23 × 10+02 1.00 × 10+03 4.13 × 10+00 3.18 × 10+01 3.39 × 10+00
    Std 2.40 × 10+03 1.46 × 10−07 3.51 × 10+02 3.08 × 10+03 3.33 × 10−01 6.37 × 10+00 6.10 × 10−01
    F7 Mean 5.61 × 10−01 4.19 × 10−02 8.55 × 10−02 1.66 × 10+00 1.52 × 10−02 1.73 × 10+00 2.13 × 10−03
    Std 7.30 × 10−01 1.32 × 10−02 6.30 × 10−02 3.93 × 10+00 1.32 × 10−02 9.81 × 10−01 8.01 × 10−04

    Table 6.  Experiment comparison results on 7 uni-modal testing functions with 50 dimensions.
    Function Item GA ALO DA MFO SCA TLBO ETLBO
    F1 Mean 3.24 × 10+04 6.24 × 10−05 1.19 × 10+03 6.50 × 10+03 1.68 × 10+01 3.32 × 10+01 2.23 × 10−21
    Std 9.99 × 10+03 3.19 × 10−05 9.88 × 10+02 9.33 × 10+03 2.18 × 10+01 9.84 × 10+00 7.03 × 10−21
    F2 Mean 8.44 × 10+01 8.40 × 10+01 1.67 × 10+01 6.51 × 10+01 2.55 × 10−03 3.78 × 10+01 0.00 × 10+00
    Std 1.83 × 10+01 8.38 × 10+01 8.36 × 10+00 2.66 × 10+01 2.90 × 10−03 4.17 × 10+00 0.00 × 10+00
    F3 Mean 1.02 × 10+05 3.30 × 10+03 1.76 × 10+04 4.05 × 10+04 2.67 × 10+04 3.97 × 10+02 0.00 × 10+00
    Std 2.29 × 10+04 1.05 × 10+03 1.08 × 10+04 1.40 × 10+04 1.20 × 10+04 1.68 × 10+02 0.00 × 10+00
    F4 Mean 7.63 × 10+01 1.57 × 10+01 2.00 × 10+01 7.76 × 10+01 4.92 × 10+01 1.03 × 10+00 6.40 × 10−12
    Std 8.07 × 10+00 2.30 × 10+00 1.11 × 10+01 6.55 × 10+00 1.08 × 10+01 2.71 × 10−01 2.86 × 10−11
    F5 Mean 2.63 × 10+07 2.75 × 10+02 1.65 × 10+05 4.02 × 10+06 2.82 × 10+05 3.29 × 10+03 4.83 × 10+01
    Std 1.49 × 10+07 4.38 × 10+02 2.21 × 10+05 1.79 × 10+07 4.85 × 10+05 1.51 × 10+03 4.89 × 10−01
    F6 Mean 3.33 × 10+04 6.56 × 10−05 1.45 × 10+03 6.50 × 10+03 2.82 × 10+01 6.00 × 10+01 7.46 × 10+00
    Std 8.68 × 10+03 5.14 × 10−05 9.81 × 10+02 8.74 × 10+03 3.21 × 10+01 1.24 × 10+01 5.18 × 10−01
    F7 Mean 2.60 × 10+01 1.22 × 10−01 5.29 × 10−01 2.36 × 10+01 2.48 × 10−01 6.61 × 10+00 2.21 × 10−03
    Std 1.25 × 10+01 4.61 × 10−02 7.07 × 10−01 3.76 × 10+01 1.89 × 10−01 3.48 × 10+00 1.08 × 10−03

    Table 7.  Experiment comparison results on 5 multi-modal testing functions with 10 dimensions.
    Function Item GA ALO DA MFO SCA TLBO ETLBO
    F8 Mean 3.11 × 10+00 1.73 × 10+01 1.74 × 10+01 1.58 × 10+01 1.19 × 10+00 3.71 × 10+01 0.00 × 10+00
    Std 2.23 × 10+00 9.42 × 10+00 1.30 × 10+01 1.03 × 10+01 5.31 × 10+00 8.29 × 10+00 0.00 × 10+00
    F9 Mean 1.50 × 10+00 1.72 × 10−05 1.56 × 10+00 4.00 × 10−15 3.29 × 10−15 3.59 × 10+00 4.44 × 10−16
    Std 3.09 × 10+00 3.91 × 10−06 1.26 × 10+00 0.00 × 10+00 1.46 × 10−15 6.80 × 10−01 0.00 × 10+00
    F10 Mean 6.13 × 10−01 2.09 × 10−01 2.49 × 10−01 1.66 × 10−01 2.71 × 10−03 4.19 × 10−01 0.00 × 10+00
    Std 3.43 × 10−01 1.27 × 10−01 2.43 × 10−01 1.05 × 10−01 8.43 × 10−03 1.53 × 10−01 0.00 × 10+00
    F11 Mean 9.34 × 10−02 1.14 × 10+00 5.46 × 10−01 7.79 × 10−02 6.04 × 10−02 9.32 × 10−01 3.25 × 10−02
    Std 2.82 × 10−01 2.62 × 10+00 6.22 × 10−01 3.48 × 10−01 2.56 × 10−02 4.57 × 10−01 7.27 × 10−02
    F12 Mean 1.42 × 10−02 1.10 × 10−03 3.17 × 10−01 4.39 × 10−03 2.18 × 10−01 2.97 × 10−01 7.43 × 10−01
    Std 1.85 × 10−02 3.38 × 10−03 3.54 × 10−01 5.52 × 10−03 7.16 × 10−02 2.22 × 10−01 3.44 × 10−01

    Table 8.  Experiment comparison results on 5 multi-modal testing functions with 30 dimensions.
    Function Item GA ALO DA MFO SCA TLBO ETLBO
    F8 Mean 1.13 × 10+02 7.08 × 10+01 7.87 × 10+01 1.44 × 10+02 1.37 × 10+01 1.85 × 10+02 0.00 × 10+00
    Std 3.81 × 10+01 2.01 × 10+01 4.55 × 10+01 2.50 × 10+01 2.39 × 10+01 2.32 × 10+01 0.00 × 10+00
    F9 Mean 1.89 × 10+01 1.93 × 10+00 4.99 × 10+00 1.54 × 10+01 1.14 × 10+01 4.48 × 10+00 4.44 × 10−16
    Std 6.46 × 10−01 4.96 × 10−01 1.18 × 10+00 8.03 × 10+00 9.72 × 10+00 3.32 × 10−01 0.00 × 10+00
    F10 Mean 2.71 × 10+01 9.10 × 10−03 3.11 × 10+00 1.81 × 10+01 8.34 × 10−02 7.49 × 10−01 0.00 × 10+00
    Std 2.40 × 10+01 8.14 × 10−03 2.07 × 10+00 3.71 × 10+01 1.39 × 10−01 1.18 × 10−01 0.00 × 10+00
    F11 Mean 1.03 × 10+01 7.59 × 10+00 4.98 × 10+00 4.20 × 10−01 1.96 × 10+01 1.50 × 10+00 2.54 × 10−01
    Std 4.55 × 10+00 3.45 × 10+00 4.26 × 10+00 7.31 × 10−01 8.44 × 10+01 7.26 × 10−01 3.10 × 10−01
    F12 Mean 3.91 × 10+04 7.87 × 10−03 2.60 × 10+02 2.17 × 10−01 6.64 × 10+00 1.28 × 10+00 2.16 × 10+00
    Std 6.87 × 10+04 1.20 × 10−02 5.47 × 10+02 6.43 × 10−01 1.72 × 10+01 2.29 × 10−01 4.94 × 10−01

    Table 9.  Experiment comparison results on 5 multi-modal testing functions with 50 dimensions.
    Function Item GA ALO DA MFO SCA TLBO ETLBO
    F8 Mean 3.97 × 10+02 1.30 × 10+02 1.25 × 10+02 2.86 × 10+02 6.09 × 10+01 3.44 × 10+02 0.00 × 10+00
    Std 5.88 × 10+01 3.16 × 10+01 7.61 × 10+01 6.52 × 10+01 5.34 × 10+01 2.81 × 10+01 0.00 × 10+00
    F9 Mean 1.99 × 10+01 2.89 × 10+00 6.81 × 10+00 1.85 × 10+01 1.52 × 10+01 4.55 × 10+00 4.44 × 10−16
    Std 5.20 × 10−01 1.18 × 10+00 2.08 × 10+00 3.00 × 10+00 8.48 × 10+00 3.74 × 10−01 0.00 × 10+00
    F10 Mean 3.15 × 10+02 1.23 × 10−02 2.23 × 10+01 6.83 × 10+01 1.29 × 10+00 7.39 × 10−01 0.00 × 10+00
    Std 8.03 × 10+01 5.79 × 10−03 2.39 × 10+01 1.01 × 10+02 7.63 × 10−01 1.22 × 10−01 0.00 × 10+00
    F11 Mean 6.24 × 10+06 1.20 × 10+01 9.64 × 10+01 6.40 × 10+07 1.94 × 10+05 1.48 × 10+00 3.77 × 10−01
    Std 6.29 × 10+06 4.26 × 10+00 2.85 × 10+02 1.41 × 10+08 3.57 × 10+05 7.12 × 10−01 5.12 × 10−02
    F12 Mean 5.00 × 10+07 4.27 × 10+01 6.54 × 10+04 1.60 × 10+03 1.34 × 10+06 1.91 × 10+00 4.61 × 10+00
    Std 3.22 × 10+07 3.51 × 10+01 1.33 × 10+05 6.73 × 10+03 4.10 × 10+06 4.58 × 10−01 3.54 × 10−01

    Table 10.  Experiment comparison results on 8 multi-modal testing functions with fixed dimension.
    Function Item GA ALO DA MFO SCA TLBO ETLBO
    F13 Mean 1.05 × 10+00 1.15 × 10+00 1.05 × 10+00 1.29 × 10+00 1.20 × 10+00 1.27 × 10+01 4.49 × 10+00
    Std 2.22 × 10−01 3.64 × 10−01 2.22 × 10−01 1.11 × 10+00 6.11 × 10−01 4.29 × 10−09 5.08 × 10+00
    F14 Mean 5.10 × 10−03 1.74 × 10−03 1.47 × 10−03 9.39 × 10−04 8.67 × 10−04 1.13 × 10−03 3.70 × 10−03
    Std 6.34 × 10−03 4.39 × 10−03 4.72 × 10−04 3.09 × 10−04 3.41 × 10−04 4.64 × 10−04 9.02 × 10−03
    F15 Mean −1.00 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00 −1.03 × 10+00
    Std 5.11 × 10−02 2.00 × 10−14 1.22 × 10−08 2.28 × 10−16 8.50 × 10−06 6.19 × 10−03 5.69 × 10−07
    F16 Mean −3.44 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.86 × 10+00 −3.81 × 10+00 −3.86 × 10+00
    Std 2.41 × 10−01 1.26 × 10−05 1.56 × 10−03 9.31 × 10−15 3.06 × 10−03 7.23 × 10−02 2.28 × 10−15
    F17 Mean −1.73 × 10+00 −3.26 × 10+00 −3.26 × 10+00 −3.21 × 10+00 −3.04 × 10+00 −2.76 × 10+00 −3.24 × 10+00
    Std 4.81 × 10−01 6.03 × 10−02 7.17 × 10−02 5.61 × 10−02 2.18 × 10−01 3.88 × 10−01 4.54 × 10−02
    F18 Mean −1.49 × 10+00 −7.88 × 10+00 −9.39 × 10+00 −5.64 × 10+00 −3.00 × 10+00 −8.80 × 10+00 −1.02 × 10+01
    Std 1.06 × 10+00 2.94 × 10+00 1.85 × 10+00 3.20 × 10+00 2.00 × 10+00 2.41 × 10+00 1.37 × 10−04
    F19 Mean −1.59 × 10+00 −6.62 × 10+00 −1.04 × 10+01 −9.42 × 10+00 −4.85 × 10+00 −8.28 × 10+00 −1.04 × 10+01
    Std 6.67 × 10−01 3.29 × 10+00 5.29 × 10−03 2.43 × 10+00 1.93 × 10+00 2.44 × 10+00 5.51 × 10−05
    F20 Mean −1.58 × 10+00 −7.92 × 10+00 −1.02 × 10+01 −9.77 × 10+00 −5.43 × 10+00 −9.63 × 10+00 −1.05 × 10+01
    Std 5.22 × 10−01 3.37 × 10+00 1.20 × 10+00 2.37 × 10+00 1.11 × 10+00 2.22 × 10+00 5.86 × 10−05


    Figures 2–7 show the comparison of the convergence curves of the seven algorithms on the 20 CEC benchmark test functions, where the horizontal and vertical axes represent the number of iterations and the fitness value, respectively.

    Figure 2.  Simulation curves of 7 algorithms on 7 uni-modal functions with 10 dimensions.

    The choice of 10, 30 and 50 as dimensions for benchmark functions is generally because they are representative. Lower dimensions, such as 10, can be used to evaluate the performance of algorithms on relatively smaller problem sizes. Higher dimensions, such as 50, can be used to evaluate the performance of algorithms on larger and more complex problem sizes. Additionally, selecting these specific dimensions facilitates the comparison of different algorithm performances. These dimensions have been widely used and have become standard settings for benchmark test functions. By conducting tests on these dimensions, the results become more comparable and help researchers better understand algorithm performance across different problem sizes.

    All experiments were run on a 64-bit XiaoXin Air 15IKBR laptop, with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz processor and 8.00 GB of RAM. The simulation software is MATLAB 2021a.

    In this subsection, 7 uni-modal test functions are used to evaluate the performance of ETLBO. The experimental results for the 10, 30 and 50 dimensional functions are listed in Tables 4, 5 and 6, respectively. From Tables 4–6, it can be seen that, compared to the other six algorithms, the ETLBO algorithm achieves smaller optimal solutions, mean values and standard deviations on almost all uni-modal functions in 10, 30 and 50 dimensions. The results on the five functions F1, F2, F3, F4 and F7 are particularly indicative that the ETLBO algorithm produces satisfactory results on uni-modal functions by effectively exploiting the search space, with good convergence accuracy and stability. Additionally, Figures 2–4 graphically compare the convergence speed and solution quality on the 7 uni-modal functions (F1–F7) with 10, 30 and 50 dimensions, respectively. From the three figures, it is obvious that ETLBO has the fastest convergence speed and the highest convergence accuracy on most functions compared to the others.

    Figure 3.  Simulation curves of 7 algorithms on 7 uni-modal functions with 30 dimensions.
    Figure 4.  Simulation curves of 7 algorithms on 7 uni-modal functions with 50 dimensions.

    In this subsection, 5 multi-modal test functions are used to evaluate the performance of ETLBO. The experimental results for the 10, 30 and 50 dimensional functions are listed in Tables 7, 8 and 9, respectively. Boldface in the tables indicates the best results. A smaller mean value indicates better performance, and a lower standard deviation indicates stronger stability. According to these tables, the proposed ETLBO algorithm shows superior performance on most functions. As seen from Table 7, ETLBO outperforms the other algorithms on functions F8, F9, F10 and F11, while ALO has the smallest mean and standard deviation on function F12. As seen from Tables 8 and 9, ETLBO still shows the best performance on functions F8, F9, F10 and F11 compared to the other algorithms. In brief, the proposed ETLBO improves the solution quality on multi-modal functions. Additionally, Figures 5–7 graphically compare the convergence speed and solution quality on the 5 multi-modal functions (F8–F12) with 10, 30 and 50 dimensions, respectively. From the three figures, it is obvious that ETLBO has the fastest convergence speed and the highest convergence accuracy on most functions compared to the other algorithms.

    Figure 5.  Simulation curves of 7 algorithms on 5 multi-modal functions with 10 dimensions.
    Figure 6.  Simulation curves of 7 algorithms on 5 multi-modal functions with 30 dimensions.
    Figure 7.  Simulation curves of 7 algorithms on 5 multi-modal functions with 50 dimensions.

    In this subsection, 8 fixed-dimension functions are applied to evaluate the performance of the proposed ETLBO algorithm. The experimental results are listed in Table 10. According to this table, the ETLBO algorithm finds the best solution on five functions (F15, F16, F18, F19, F20). Therefore, the ETLBO is well-suited to addressing multi-modal test functions with fixed dimensions.

    In this section, the proposed ETLBO algorithm is used to optimize the boiler's adjustable parameters for reducing the NOx emission concentration. First, based on the boiler operation parameters, an extreme learning machine (ELM) [52] is applied to build the NOx emission model. Second, based on the built model, the ETLBO is used to reduce NOx emissions. Note that the extreme learning machine is an effective modeling method that has been applied in many fields; a detailed description can be found in Reference [52], so this paper only outlines how it is applied.

    This section focuses on using ELM to establish a prediction model of NOx emissions. For a 330 MW circulating fluidized bed boiler (CFBB), a total of 28,800 sets of operation data were collected, including 26 input parameters and one output parameter. A detailed description of these data can be found in Reference [53]. Considering the small data sampling interval and low data fluctuation in the CFBB, this paper samples every 80th record, resulting in a total of 360 datasets, which is enough to ensure the generalization ability of the prediction model. The data are divided into three parts: training data, validation data and testing data, with proportions of 65, 15 and 20%, respectively. The training data are used to establish the prediction model and determine the model parameters. The validation data are used to verify the effectiveness of the prediction model. The test data are used to test the generalization ability of the prediction model. In this paper, the ELM model is configured with 26 input nodes, 41 hidden-layer neural nodes and 1 output node, and the sigmoid function is used as the hidden-layer activation function. Note that, to unify the scales of the boiler operation data, they are processed by the min-max normalization method.
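
    As a rough sketch of this modeling step (the full ELM method is given in [52]), the 26-41-1 sigmoid network and the min-max normalization can be coded as follows; the function names are ours.

```python
import numpy as np

def minmax_normalize(X):
    """Min-max normalization, used to unify the scales of the boiler data."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def elm_train(X, y, n_hidden=41, seed=0):
    """Extreme learning machine: random input weights and biases, sigmoid
    hidden layer, output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))   # 26 inputs -> 41 hidden nodes
    b = rng.uniform(-1, 1, n_hidden)                 # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ y                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta                                  # predicted NOx emission
```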

    In order to demonstrate the accuracy and effectiveness of the NOx emission prediction model established using the extreme learning machine, 30 independent test experiments were conducted and the corresponding average results are recorded in Table 11. Note that the experimental results in Table 11 are obtained after normalization. From the testing results, it can be observed that the mean value is on the order of 10−2, indicating that the prediction model has a high level of accuracy. The S.D. value is on the order of 10−3, indicating that the prediction model exhibits good stability. The R2 value is close to 1, demonstrating that the prediction model has strong generalization and regression capabilities. In addition, the mean absolute percentage error (MAPE) is on the order of 10−4 or 10−5 and the mean absolute error (MAE) is on the order of 10−2, showing that the prediction model approximates the target values effectively.

    Table 11.  Performance index statistics for NOx emission model.
    Performance index Training sample Validation sample Testing sample
    Mean 0.0813 0.0969 0.0959
    S.D. 0.0033 0.0069 0.0065
    MAPE 7.01 × 10−05 9.12 × 10−04 9.88 × 10−04
    MAE 0.0633 0.0761 0.0756
    R2 0.8944 0.8356 0.8241
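
    For reference, the MAPE, MAE and R² entries of Table 11 can be computed as in the sketch below (which quantity the "Mean" row averages is not spelled out here, so it is omitted).

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAPE, MAE and R^2 for the (normalized) NOx predictions in Table 11."""
    mape = np.mean(np.abs((y_true - y_pred) / y_true))   # mean absolute percentage error
    mae = np.mean(np.abs(y_true - y_pred))               # mean absolute error
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return mape, mae, 1.0 - ss_res / ss_tot              # last entry is R^2
```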


    In Figures 8 and 9, the solid line with red asterisks represents the actual NOx emission of the boiler, while the dotted line with black circles represents the NOx emission predicted by the ELM model. As seen from Figures 8 and 9, the predicted NOx emission almost matches the actual NOx emission. Therefore, the NOx emission model built by ELM is effective.

    Figure 8.  NOx emission training model curve.
    Figure 9.  NOx emission testing model curve.

    In this subsection, based on the established NOx emission prediction model by ELM, the ETLBO algorithm is applied to optimize the adjustment parameters of the CFBB for reducing NOx emissions.

    In the process of optimizing NOx emissions, the main focus is on optimizing 11 adjustable parameters that have a significant impact on NOx emissions, while keeping the remaining parameters unchanged. The specific details are shown in Table 12. The objective function for optimizing NOx emissions is as follows:

    Table 12.  The 11 key parameters that affect NOx emissions.
    Variable Variable meaning
    x Parameters to be optimized
    x1 Coal feed amount A
    x2 Coal feed amount B
    x3 Coal feed amount C
    x4 Coal feed amount D
    x5 Primary air flow rate at the entrance of the left-side air duct burner
    x6 Primary air flow rate at the entrance of the right-side air duct burner
    x7 Total flow rate of secondary air on the left side
    x8 Total flow rate of secondary air on the right side
    x9 Distribution flow rate of internal secondary air on the left side
    x10 Distribution flow rate of internal secondary air on the right side
    x11 Flue gas oxygen concentration
    ai The lower limit of the i-th parameter to be optimized
    bi The upper limit of the i-th parameter to be optimized

    $\min f(x) = f_{NOx}(x)$, (16)
    $x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}, x_{11}]$, (17)
    $\mathrm{s.t.} \; a_i \leq x_i \leq b_i$. (18)
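
    In code, the objective of Eqs (16)–(18) amounts to wrapping the trained ELM prediction so that only the 11 adjustable inputs vary while the other 15 stay at their measured values; the index positions and names below are illustrative, not taken from the paper.

```python
import numpy as np

ADJ_IDX = np.arange(11)   # hypothetical positions of the 11 tunable inputs among the 26

def make_objective(predict_nox, fixed_inputs):
    """Eq (16): predicted NOx emission as a function of the 11 adjustable
    parameters; the box constraints of Eq (18) are enforced by the optimizer."""
    def f(x11):
        u = fixed_inputs.copy()      # measured values of all 26 ELM inputs
        u[ADJ_IDX] = x11             # substitute candidate adjustable settings
        return float(predict_nox(u[None, :]))
    return f
```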

    Based on the established NOx emission prediction model, the ETLBO algorithm is applied to tune the adjustable parameters in order to reduce NOx emissions. In this section, the maximum number of iterations for the ETLBO algorithm is set to 50, the population size is set to 40 and the dimension is set to 11; the other parameters remain unchanged. The operating condition data are then optimized. Figure 10 shows the comparison curves before and after the optimization of NOx emissions, where the solid line with red asterisks represents the data before optimization and the dotted line with blue circles represents the data after optimization. As seen from Figure 10, after optimizing 72 sets of data using the ETLBO algorithm, there is a clear reduction in NOx emissions. This shows that the NOx emission model based on the ELM algorithm is effective, and that the ETLBO algorithm proposed in this paper is an effective strategy for solving complex global optimization problems.

    Figure 10.  The comparison curve before and after the optimization of NOx emissions.

    According to Table 13, three sets of data, D, E and F, were randomly selected from the test data for this experiment. Comparing the data before and after parameter optimization, it can be seen that the NOx emissions were significantly reduced. After optimization, the coal feed amount decreases, the primary air flow rate increases and the flue gas oxygen concentration decreases for samples D, E and F. The secondary air flow rate of sample D decreases, while those of samples E and F both increase. In the end, the NOx emissions for samples D, E and F are reduced by 93.8604 mg/Nm3, 75.5935 mg/Nm3 and 27.0340 mg/Nm3, respectively. Therefore, considering only the reduction of NOx emissions, the ETLBO-ELM method proposed in this paper is an effective strategy.

    Table 13.  Comparison of parameters before and after optimization of NOx emissions.
    Boiler adjustable parameters Test sample data D Test sample data E Test sample data F
    Before After Before After Before After
    Coal feed amount A 56.065 54.646 52.997 44.653 49.900 48.321
    B 54.653 53.258 44.670 40.369 49.536 48.345
    C 55.083 54.080 44.581 40.656 46.110 44.685
    D 55.434 54.431 52.812 44.089 43.499 42.101
    Primary air flow rate left 260.452 299.130 223.604 239.854 171.193 228.639
    right 210.330 301.190 251.297 316.067 212.389 299.588
    Total flow rate of secondary air left 451.981 433.481 353.973 553.023 155.050 373.982
    Total flow rate of internal secondary air left 613.652 598.490 622.234 785.970 292.283 355.985
    Total flow rate of secondary air right 1135.310 1007.080 1001.858 896.991 721.062 612.566
    Total flow rate of internal secondary air right 718.550 733.712 621.376 629.101 318.889 320.510
    Flue gas oxygen concentration 4.799 3.179 5.371 3.689 5.327 5.018
    NOx emission (mg/Nm3) 189.7310 95.8706 182.1030 106.5095 159.8260 132.7920


    To enhance the performance of the original TLBO algorithm, an evolution TLBO (ETLBO) algorithm is proposed. Compared with TLBO, the proposed ETLBO uses a chaotic function to initialize the population, introduces an inertia weight, an acceleration coefficient and self-adaptive teaching factors into the teaching phase, and applies the idea of heredity to update the population in the learner phase. Twenty benchmark test functions are used to verify the performance of ETLBO, and the experimental results show that it outperforms the conventional TLBO on most test functions; ETLBO therefore has good convergence ability. Additionally, ETLBO is combined with the extreme learning machine to solve the boiler combustion optimization problem, and the experimental results reveal that NOx emissions can be reduced. In conclusion, the ETLBO algorithm is an effective optimization method.
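    As a rough illustration of the heredity idea summarized above, one way such a learner phase can be organized is sketched below: each learner is paired with a random partner, a child is produced by uniform crossover and Gaussian mutation, and the child survives only if it improves the fitness. The crossover mask, the mutation probability `p_mut` and the mutation scale are assumptions made for this sketch, not the exact operators of the ETLBO algorithm.

```python
# Illustrative heredity-style learner phase (crossover + mutation + selection);
# an assumption-laden sketch, not the paper's exact operators.
import numpy as np

def genetic_learner_phase(pop, fitness, objective, a, b, p_mut=0.1, seed=None):
    rng = np.random.default_rng(seed)
    n, dim = pop.shape
    for i, j in enumerate(rng.permutation(n)):
        # Crossover: each gene is inherited from learner i or its partner j.
        mask = rng.random(dim) < 0.5
        child = np.where(mask, pop[i], pop[j])
        # Mutation: perturb a few genes, scaled to the feasible box [a, b].
        mutate = rng.random(dim) < p_mut
        child[mutate] += rng.normal(0.0, 0.1 * (b - a))[mutate]
        child = np.clip(child, a, b)
        # Selection: the child replaces learner i only if it improves fitness.
        f_child = objective(child)
        if f_child < fitness[i]:
            pop[i], fitness[i] = child, f_child
    return pop, fitness
```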

    In the future, we will further improve the performance of the ETLBO algorithm and apply it to more engineering optimization problems. Further research is also needed to provide rigorous mathematical proofs of the convergence of the ETLBO algorithm. A multi-objective version of the ETLBO algorithm and its application to uncertain engineering problems likewise deserve further study.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 62203332), the Natural Science Foundation of Tianjin (Grant No. 20JCQNJC00430) and the College Students' Innovative Entrepreneurial Training Plan Program (Grant No. 202310069032).

    The authors declare there is no conflict of interest.



    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).
