Research article

Ship power load forecasting based on PSO-SVM


  • Compared with the land power grid, the power capacity of a ship power system is small and its power load is highly random. Ship power load forecasting is therefore of great significance to the stability and safety of the ship power system. The support vector machine (SVM) load forecasting algorithm is a common method of ship power load forecasting. In this paper, water flow velocity, wind speed and ship speed are used as SVM features to train the load forecasting algorithm, which strengthens the correlation between the features and the predicted values. At the same time, the regularization parameter C and the kernel width parameter σ of the SVM have a great influence on the prediction accuracy. Therefore, an improved particle swarm optimization algorithm is used to optimize these two parameters in real time, forming a new improved particle swarm optimization support vector machine algorithm (IPSO-SVM), which reduces the load forecasting error, improves the prediction accuracy of the ship power load and improves the performance of the ship energy management system.

    Citation: Xiaoqiang Dai, Kuicheng Sheng, Fangzhou Shu. Ship power load forecasting based on PSO-SVM[J]. Mathematical Biosciences and Engineering, 2022, 19(5): 4547-4567. doi: 10.3934/mbe.2022210




    With the advancement of intelligent ships, the degree of ship automation is getting higher and higher, and the propulsion mode of ships is changing from main-engine propulsion to electric propulsion. The stability of the ship power system is directly related to the safety of the ship. The ship power system has small capacity, complex working conditions and strongly random loads, so power load forecasting bears on the energy management strategy of the ship power system and on its stability, and is of great significance to its safety.

    Load forecasting algorithms for the land power grid are relatively mature, including traditional methods such as regression analysis [1,2,3], grey models [4,5,6], fuzzy prediction [7,8,9] and the autoregressive integrated moving average model [10,11], as well as machine learning methods such as random forests [12,13], support vector machines [14,15], expert systems [16,17] and artificial neural networks [18,19,20]. The support vector machine load forecasting algorithm, with its obvious advantages of requiring few samples and handling high-dimensional problems, is relatively mature in ship power system applications. Compared with the land power system, the ship power system has a generation capacity comparable to its load, complex working conditions and strongly random loads, but its load shows the same periodicity and continuity as that of the land power system. These factors affect the existence and feasibility of solutions for the adjustable parameters of the prediction model [21], as well as the global existence and local convergence of the solutions [22,23]. This paper studies the support vector machine load forecasting algorithm. Parameters such as temperature, water flow velocity, relative wind speed and ship speed are innovatively used as training features of the support vector machine, which strengthens the correlation between the input features and the output load forecast and reduces the prediction error. At the same time, aiming at the difficulty of parameter selection when forecasting the load with a support vector machine, particle swarm optimization is introduced to optimize the parameters, which enhances the accuracy of the ship power load forecasting algorithm.

    As a machine learning method based on statistical learning theory, the support vector machine has strong advantages in solving small-sample, nonlinear and high-dimensional problems [24]. Since its core content was proposed in 1992, the support vector machine was initially widely used in pattern recognition to build decision rules with good generalization performance. Later, with the development of support vector machines and Vapnik's introduction of the ε-insensitive loss function, the algorithm was extended to nonlinear regression problems. The basic idea of the SVM is to define an optimal linear hyperplane and transform the search for this hyperplane into an optimization problem [25]. According to the principle of structural risk minimization, the actual risk is minimized by jointly minimizing the empirical risk and the VC dimension h [26].

    Select the training sample set S for support vector machine regression prediction [27]:

    S = {(x_1, y_1), …, (x_i, y_i), …, (x_l, y_l)} ⊂ R^n × R   (1)

    In Eq (1), S represents the training sample set of the system; x_i ∈ R^n represents the sample input information of the ship's historical power load, y_i ∈ R represents the output corresponding to that input, and l is the total number of training samples in the data set. Establish the mapping f: R^n → R from the input space R^n to the output space R satisfying f(x) = y; the regression function is then shown in Eq (2):

    y = f(x) = ω·φ(x) + b   (2)

    In Eq (2), ω represents the weight vector, b the regression bias and φ(x) the mapping function of samples from the low-dimensional to the high-dimensional space. According to modern statistical theory, the regression prediction error of the support vector machine is mainly caused by the empirical risk and ‖ω‖². The total error can be expressed as Eq (3):

    R(ω) = (1/2)‖ω‖² + Σ_{i=1}^{l} e(f(x_i) − y_i)   (3)

    In Eq (3), e(·) is called the loss function, which can be expressed as Eq (4):

    e(f(x_i) − y_i) = { 0, if |f(x_i) − y_i| < ε;  |f(x_i) − y_i| − ε, otherwise }   (4)

    In this equation, ε is called the insensitive parameter. When using the support vector machine for regression prediction, improving the prediction accuracy only requires minimizing the value of R(ω) within the constraint range. According to the structural risk minimization criterion, risk minimization for the support vector machine load forecasting algorithm can be expressed as:

    min R(ω) = min [ (1/2)‖ω‖² + Σ_{i=1}^{l} e(f(x_i) − y_i) ]   (5)
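As a concrete illustration, the ε-insensitive loss of Eq (4) can be written as a small Python function (a minimal sketch; the function name and the default ε value are ours, not from the paper):

```python
def epsilon_insensitive_loss(pred, target, eps=0.1):
    """ε-insensitive loss of Eq (4): zero inside the ε-tube, linear outside it."""
    residual = abs(pred - target)
    return 0.0 if residual < eps else residual - eps
```

Predictions within ε of the target incur no penalty, which is what makes the SVM regression solution sparse.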

    When fitting the regression model, in order to enhance its accuracy, the possible errors in the model calculation must be considered. Two slack variables ξ_i, ξ_i* are introduced to express the prediction model error; the optimization problem of Eq (5) can then be expressed as Eq (6):

    min [ (1/2)‖ω‖² + C Σ_{i=1}^{l} (ξ_i + ξ_i*) ]
    s.t.  y_i − ω·φ(x_i) − b ≤ ε + ξ_i
          ω·φ(x_i) + b − y_i ≤ ε + ξ_i*
          ξ_i, ξ_i* ≥ 0,  i = 1, …, l   (6)

    In Eq (6), the first term represents the generalization ability of the model, and the second term reduces the error in the training process. C (C > 0) is the penalty parameter for the regression error, called the regularization parameter. The minimization problem shown is a typical optimization problem with inequality constraints. To solve for the minimum, it can be transformed into a dual problem by introducing a Lagrange function. By introducing the Lagrange multipliers α_i, α_i*, the Lagrange equation shown in Eq (7) can be established:

    L(ω, ξ, ξ*, α, α*, λ, λ*) = (1/2)‖ω‖² + C Σ_{i=1}^{l} (ξ_i + ξ_i*) − Σ_{i=1}^{l} α_i (ε + ξ_i − y_i + f(x_i))
    − Σ_{i=1}^{l} α_i* (ε + ξ_i* + y_i − f(x_i)) − Σ_{i=1}^{l} (λ_i ξ_i + λ_i* ξ_i*)   (7)

    In Eq (7), α_i, α_i*, λ_i, λ_i* ≥ 0, i = 1, …, l.

    According to the optimality conditions of the Lagrange equation, ∂L/∂ω = 0, ∂L/∂b = 0, ∂L/∂ξ_i = 0 and ∂L/∂ξ_i* = 0. Expanding and simplifying gives the set of equations shown in Eq (8):

    ∂L/∂ω = ω − Σ_{i=1}^{l} (α_i − α_i*) x_i = 0
    ∂L/∂b = Σ_{i=1}^{l} (α_i − α_i*) = 0
    ∂L/∂ξ_i = C − α_i − λ_i = 0
    ∂L/∂ξ_i* = C − α_i* − λ_i* = 0,   i = 1, …, l   (8)

    Substituting Eq (8) into Eq (7), the dual of the optimization problem is obtained as Eq (9):

    L_max = −(1/2) Σ_{i,j=1}^{l} (α_i − α_i*)(α_j − α_j*) ⟨x_i, x_j⟩ + Σ_{i=1}^{l} (α_i − α_i*) y_i − ε Σ_{i=1}^{l} (α_i + α_i*)
    s.t.  Σ_{i=1}^{l} (α_i − α_i*) = 0   (9)

    In this equation, 0 ≤ α_i, α_i* ≤ C.

    By solving the quadratic programming problem of Eq (9), a further expression of the regression function f(x) can be obtained, which falls into two categories: linear optimization and nonlinear optimization [28].

    When the training sample is a linear model:

    L_max = −(1/2) Σ_{i,j=1}^{l} (α_i − α_i*)(α_j − α_j*) (x_i · x_j) + Σ_{i=1}^{l} (α_i − α_i*) y_i − ε Σ_{i=1}^{l} (α_i + α_i*)
    s.t.  Σ_{i=1}^{l} (α_i − α_i*) = 0   (10)

    By further solving Eq (10), we can get:

    f(x) = Σ_{x_i ∈ N_sv} (α_i − α_i*) (x_i · x) + b   (11)

    N_sv is the set of support vectors in the regression model.

    When the regression model is nonlinear:

    When solving the regression problem of a nonlinear model, a nonlinear mapping is usually used to map the original training samples into a high-dimensional space, where linear regression of the training samples is carried out. Since the samples are mapped from the low-dimensional to the high-dimensional space, the inner product ⟨φ(x_i), φ(x_j)⟩ is needed in the high-dimensional space; Eq (10) can then be expressed for the nonlinear case as follows:

    L_max = −(1/2) Σ_{i,j=1}^{l} (α_i − α_i*)(α_j − α_j*) K(x_i, x_j) + Σ_{i=1}^{l} (α_i − α_i*) y_i − ε Σ_{i=1}^{l} (α_i + α_i*)
    s.t.  Σ_{i=1}^{l} (α_i − α_i*) = 0   (12)

    K(x_i, x_j) = φ(x_i)·φ(x_j) represents the kernel function. The regression model is then:

    f(x) = Σ_{x_i ∈ N_sv} (α_i − α_i*) K(x_i, x) + b   (13)

    The least squares support vector machine (LSSVM) is a newer algorithm based on the support vector machine. The analysis shows that solving the SVM model is actually solving a quadratic programming problem with inequality constraints. In the LSSVM algorithm, the squared error term is selected as the optimization index and the inequality constraints of the SVM are replaced by equality constraints, which converts the quadratic programming problem of the SVM into the solution of a linear system [29].

    The basic idea of LSSVM regression is the same as that of the nonlinear SVM: samples are mapped into a high-dimensional space through a nonlinear mapping, and linear regression is performed on them there. Its basic expression is the same as that of the SVM:

    y = f(x) = ω·φ(x) + b   (14)

    Then the optimization problem of LSSVM can be expressed as Eq (15):

    min J(ω, b, e) = (1/2)‖ω‖² + (1/2) C Σ_{i=1}^{l} e_i²
    s.t.  ω^T φ(x_i) + b + e_i = y_i,  i = 1, …, l   (15)

    In this equation, e_i ∈ R is the regression error and e ∈ R^{l×1} is the error vector composed of the errors e_i. C represents the regularization parameter, which controls the penalty on the error in the regression process.

    Similarly, introducing Lagrange multipliers λ_i into Eq (15), it can be expressed as:

    L(ω, b, e, λ) = J(ω, b, e) − Σ_{i=1}^{l} λ_i (ω^T φ(x_i) + b + e_i − y_i)   (16)

    For this constrained minimization problem, the KKT conditions can be used to obtain:

    ∂L/∂ω = ω − Σ_{i=1}^{l} λ_i φ(x_i) = 0
    ∂L/∂b = Σ_{i=1}^{l} λ_i = 0
    ∂L/∂e_i = C e_i − λ_i = 0
    ∂L/∂λ_i = ω^T φ(x_i) + b + e_i − y_i = 0,   i = 1, …, l   (17)

    Eliminating ω and e from the equations and solving Eq (17), we get:

    [b; λ] = [0, E^T; E, K + I/C]^{-1} [0; Y]   (18)

    In this equation, E = [1, 1, …, 1]^T; K is the kernel matrix with entries K(x_i, x_j) = φ(x_i)^T φ(x_j); I is the identity matrix; and Y = [y_1, y_2, …, y_l]^T. The prediction model of the LSSVM can then be expressed as:

    y = Σ_{i=1}^{l} λ_i K(x, x_i) + b   (19)

    λ_i and b can be obtained by solving the linear system; K(x_i, x_j) represents the kernel function mapping samples from the low-dimensional to the high-dimensional space. Commonly used kernel functions include the linear kernel, the RBF kernel and the polynomial kernel. The Gaussian radial basis function (RBF) kernel is widely used in SVM regression estimation because of its simple expression, radial symmetry and good analytical properties [30,31]. In this paper, the RBF kernel is selected:

    K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))   (20)

    In this equation, x_i represents the input vector, x_j represents the center of the j-th Gaussian basis function, and σ is the width parameter that determines how tightly the Gaussian function surrounds its center point. ‖x_i − x_j‖ is the norm of the vector (x_i − x_j), i.e., the distance between the vectors x_i and x_j.
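As an illustration of Eqs (18)-(20), the LSSVM training and prediction steps can be sketched with NumPy (a minimal sketch under our own naming; it solves the linear system of Eq (18) directly, without the PSO parameter tuning described later):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Gaussian RBF kernel of Eq (20): K(xi, xj) = exp(-||xi - xj||^2 / (2 sigma^2))."""
    sq_dist = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def lssvm_fit(X, y, C, sigma):
    """Solve the linear system of Eq (18) for the bias b and the multipliers lambda."""
    l = len(y)
    A = np.zeros((l + 1, l + 1))
    A[0, 1:] = 1.0                                      # E^T
    A[1:, 0] = 1.0                                      # E
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(l) / C  # K + I/C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                              # b, lambda

def lssvm_predict(X_new, X_train, lam, b, sigma):
    """Prediction model of Eq (19): y = sum_i lambda_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ lam + b
```

With a very large C the model interpolates the training points almost exactly; in practice C and σ are exactly the two parameters that the improved PSO described below is used to tune.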

    When using the LSSVM for load forecasting, the values of C and σ have a great impact on the prediction accuracy. In order to improve the accuracy of the load regression model, particle swarm optimization is introduced to optimize the parameters of the LSSVM load forecasting algorithm, forming a new improved particle swarm optimization support vector machine algorithm that enhances the accuracy of the prediction model [32].

    Particle swarm optimization (PSO) is a swarm intelligence algorithm that simulates the foraging behavior of bird flocks [33]. The algorithm searches the solution space of the problem by simulating how birds forage. Each particle in the algorithm shares information, and a particle can change its position according to its fitness value in the environment. The position of each particle in the search space represents a feasible solution of the problem; each particle moves through the search space at a certain velocity, which represents the solution-seeking process. The fitness value of the problem's objective function is used to judge the quality of a particle's position. Through multiple iterations, the population obtains the best particle position in the search space, which represents obtaining the optimal solution of the problem.

    First, some particles are randomly generated in the search space, and the swarm size (i.e., the number of particles) and the search dimension are set. If the total number of particles is n and the dimension is Q [34], each particle can be expressed as x_i = (x_i1, x_i2, x_i3, …, x_iQ) and its velocity as v_i = (v_i1, v_i2, v_i3, …, v_iQ), where i ∈ {1, 2, …, n}. Each particle tracks the historical optimal solution p_i = (p_i1, p_i2, p_i3, …, p_iQ) found in its own iterative process and the global optimal solution p_g = (p_g1, p_g2, p_g3, …, p_gQ) found by all particles in the population.

    In the iterative updating process, Eq (21) is used by PSO to update the velocity and position of the particles in the population [35]:

    v_id^{k+1} = ω v_id^k + c1 ξ (p_id^k − x_id^k) + c2 η (p_gd^k − x_id^k)
    x_id^{k+1} = x_id^k + γ v_id^{k+1}   (21)

    In this equation, v_id^k and v_id^{k+1} represent the velocity of a particle at times k and k+1; similarly, x_id^k and x_id^{k+1} represent its position at those two moments. ω is the inertia weight, c1 and c2 are learning factors, and ξ and η are random numbers uniformly distributed in [0, 1]. γ is the constraint factor for the position update, generally set to 1.

    It can be seen from Eq (21) that the velocity of a particle in the search space is affected by three factors. The first part, ω v_id^k, is the particle's velocity in the previous iteration; it represents the flight inertia of the particle and balances local and global search. When v_id^k is small, the global search ability of the particle is weak but its local search ability is strong; when v_id^k is large, the opposite holds. The second part is the particle's learning from its own flight experience: the historical optimal solution p_id^k attracts the particle and changes its current search tendency, with the learning factor c1 adjusting the step the particle takes toward p_id^k. The random adjustment of ξ reduces the chance that the particle falls into a local optimum and improves its global search ability. The third part is the "social cognitive ability" of the particle, i.e., the process of learning from the flight experience of the whole population. This process is directly related to the global optimal solution p_gd^k, with the learning factor c2 controlling the step the particle takes toward p_gd^k. Through the random adjustment of η, particles achieve mutual cooperation and information sharing. Under the joint action of these three factors, particles continuously adjust their positions and velocities through their own experience and the population's information-sharing mechanism, and finally reach the optimal particle position, obtaining the optimal solution of the problem to be solved [36].

    When ω = 0, there is no memory term in the velocity formula, which becomes:

    v_id^{k+1} = c1 ξ (p_id^k − x_id^k) + c2 η (p_gd^k − x_id^k)   (22)

    Particles then have no inertial motion and lose the ability to expand into new space; they may be unable to reach certain regions of the search space. In the whole optimization process, the swarm performs too much local search and does not exhibit the ability to search the entire search space [37].

    When c1 = 0, the individual cognition term in the velocity formula disappears: the particle loses the ability of self-evaluation and judgment, retaining only the "social cognition" of the best historical position of the swarm.

    v_id^{k+1} = ω v_id^k + c2 η (p_gd^k − x_id^k)   (23)

    Particles move rapidly through the search space, and through information exchange and mutual learning they can enter new regions for optimization, so the algorithm quickly converges. However, on some complex problems the optimization performance is poor, and the result may be a local optimum in the space rather than the global one [38].

    When c2 = 0, the group cognition term in the velocity formula disappears: the particles cannot communicate and learn, there is no information transmission within the population, and each particle searches the space as an independent individual.

    v_id^{k+1} = ω v_id^k + c1 ξ (p_id^k − x_id^k)   (24)

    This is equivalent to a number of particles searching the space blindly; the result each particle obtains is merely its own extremum, so a satisfactory result often cannot be obtained [39].

    When c1 = 0 and c2 = 0, neither individual cognition nor group cognition exists in the velocity formula; only the particle's own memory term remains.

    v_id^{k+1} = ω v_id^k   (25)

    The basic particle swarm optimization process is as follows:

    Step 1: Initialization: randomly generate the positions and velocities of the particles in the Q-dimensional problem space.

    Step 2: Evaluate particles: compute the fitness value of each particle under the Q-dimensional optimization function.

    Step 3: Update the optimal positions: 1) Compare the current fitness value of each particle with its individual historical optimum; if it is better than p_i, update p_i to the current particle position. 2) Compare the current fitness value of each particle with the global optimum of the population; if it is better than p_g, update the global optimal position to the current particle position.

    Step 4: Update particles: update the particle velocities and positions according to Eq (21).

    Step 5: Stop condition: loop back to Step 2 until the termination condition is satisfied, which is usually reaching the maximum number of iterations.

    Corresponding to the above algorithm flow, the basic framework of particle swarm algorithm is shown in Figure 1.

    Figure 1.  The basic framework of particle swarm algorithm.
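The five steps above can be sketched as a basic PSO loop in Python (a minimal illustration; the swarm size, iteration count and the values w = 0.7, c1 = c2 = 1.5 are our hypothetical choices, and the constraint factor γ of Eq (21) is taken as 1):

```python
import random

def pso(fitness, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Basic PSO loop following Steps 1-5, with the Eq (21) velocity update."""
    lo, hi = bounds
    # Step 1: random positions and zero velocities
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p = [xi[:] for xi in x]                       # personal bests p_i
    p_val = [fitness(xi) for xi in x]             # Step 2: evaluate
    g_idx = min(range(n_particles), key=lambda i: p_val[i])
    g, g_val = p[g_idx][:], p_val[g_idx]          # global best p_g
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()  # the random ξ, η
                # Eq (21): inertia + individual cognition + social cognition
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))
                x[i][d] += v[i][d]                # position update, gamma = 1
            f = fitness(x[i])
            if f < p_val[i]:                      # Step 3-1: personal best
                p_val[i], p[i] = f, x[i][:]
                if f < g_val:                     # Step 3-2: global best
                    g_val, g = f, x[i][:]
    return g, g_val
```

On a simple convex test function such as the 2-D sphere function, this loop converges to a near-zero objective value within a few hundred iterations.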

    During the operation of particle swarm optimization, the parameters affect the change of particle velocity and the update of particle position, and finally affect the optimization result. The performance of the algorithm mainly depends on two key parts: local search and global search. When solving a practical optimization problem, the two must be balanced, with local search embedded in global search. To address this [40], the inertia weight ω is introduced into the flight-inertia part v_id^k of the velocity update to balance local and global search. The larger the value of ω, the more the particle inherits its previous velocity and the faster it moves, which helps avoid falling into a local optimum and favors global optimization. Conversely, the smaller the value of ω, the weaker the particle's ability to explore the space; the particle then tends to optimize rapidly within a local region, which is unfavorable for global optimization. If the inertia weight ω is not a constant but decreases dynamically over the iterations, then in the early stage of the algorithm ω is large and the particles can perform rapid exploratory optimization over the whole search space, approaching the area around the global optimum in a short time; in the later stage ω is small, so the particles focus on intensively exploiting the area around the optimal position.

    In this paper, a commonly used concave decreasing strategy, shown in Eq (26), is adopted so that the inertia weight ω stays large enough at the beginning of particle optimization for the particles to approach the optimal solution quickly, while in the later stage of the algorithm ω is continuously reduced to make the algorithm more stable and the optimization effect better.

    ω = minω + (maxω − minω) · 1 / (1 + 10 · CurCount / LoopCount)   (26)

    In this formula, minω and maxω represent the minimum and maximum values of ω, respectively; CurCount represents the current iteration number of the particle, and LoopCount the total number of iterations.
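Assuming the additive form of the concave decreasing strategy, Eq (26) can be sketched as follows (the default ω range [0.4, 0.9] is our illustrative choice, not from the paper):

```python
def inertia_weight(cur_count, loop_count, w_min=0.4, w_max=0.9):
    """Concave decreasing inertia weight of Eq (26): large early, small late."""
    return w_min + (w_max - w_min) / (1.0 + 10.0 * cur_count / loop_count)
```

The weight starts at w_max at the first iteration and decays monotonically toward w_min, shifting the swarm from exploration to exploitation.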

    In order to prevent particle swarm optimization from falling into local optima, the average particle distance and the fitness variance are used as evaluation indexes for re-initializing the particles [41,42,43]. During swarm initialization, the density of particles in the space is judged by the average particle distance: if it is less than a set value, the particle distribution is too dense and not evenly spread over the space.

    In the later stage of particle swarm optimization, the particles converge to some optimal solution and the average particle distance becomes small. The fitness variance is used to evaluate whether the particles have converged to a solution that is not the global optimum: when both the average particle distance and the fitness variance are small, the particles have lost their activity prematurely and fallen into a local optimum. At this point, the particles are re-initialized and made to search again.

    During particle initialization, the distribution of particles in space is random, so most particles may concentrate near a certain value, which affects the final optimization result. In order to make the particles search the whole space and prevent them from falling into a local optimum in the early stage, the concept of average particle distance is introduced into the parameter-optimization process to guide the spatial distribution of the initial population. Its basic definition is shown in Eq (27):

    D(t) = (1 / (n·L)) Σ_{i=1}^{n} √( Σ_{j=1}^{Q} (p_ij − p̄_j)² )   (27)

    In this formula, n is the number of particles in the swarm; L is the length of the diagonal of the search space, which in the program can be expressed as L = √((200 − 0.1)² + (20 − 0.1)²); Q is the dimension of a particle; p_ij is the coordinate of the i-th particle in the j-th dimension; and p̄_j is the average of all particles in the j-th dimension. The average particle distance describes the distribution of particles in the search space: the smaller D(t), the denser the particles; conversely, a larger value indicates more dispersed particles. Through the average particle distance, the particles can optimize the parameters more thoroughly across the search space.
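Eq (27) can be sketched as a small helper (our own naming; the search-space diagonal L is passed in by the caller):

```python
import math

def average_particle_distance(positions, diagonal):
    """Average particle distance D(t) of Eq (27): mean distance of the particles
    from the swarm centroid, scaled by n and the search-space diagonal L."""
    n, Q = len(positions), len(positions[0])
    centroid = [sum(p[j] for p in positions) / n for j in range(Q)]
    total = sum(math.sqrt(sum((p[j] - centroid[j]) ** 2 for j in range(Q)))
                for p in positions)
    return total / (n * diagonal)
```

A swarm collapsed onto a single point yields D(t) = 0; a widely scattered swarm yields a value approaching the scale of the search space.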

    In the process of particle iteration, the particles approach the global optimal solution. At the initial stage of operation, the convergence speed of the particles is fast; when the particles encounter a local extreme point, they may approach it, causing the particle velocity to drop rapidly. The swarm then loses its ability to evolve iteratively, and the algorithm falls into a local optimum from which it cannot escape. Therefore, the fitness variance is added to evaluate the concentration of particle fitness values and judge the current state of the population. If the current iteration number is k, the fitness value of the i-th particle in this generation is f_i^k (i ∈ {1, 2, …, n}), and f̄ is the current average fitness of the population, expressed as:

    f̄ = (1/n) Σ_{i=1}^{n} f_i^k   (28)

    σ_k² is the fitness variance of the k-th generation of the population, defined as:

    σ_k² = Σ_{i=1}^{n} (f_i^k − f̄)²   (29)

    Let σ² be the population fitness variance after introducing the normalization factor, which characterizes the convergence degree of the particles; σ² is defined as:

    σ² = Σ_{i=1}^{n} ((f_i^k − f̄) / f)²   (30)

    In this equation, f is the normalization scaling factor, which limits the size of the variance and is defined as:

    f = { max|f_i^k − f̄|, if max|f_i^k − f̄| > 1;  1, otherwise }   (31)
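Eqs (28), (30) and (31) can be combined into one small function (a sketch with our own naming):

```python
def fitness_variance(fitnesses):
    """Normalized fitness variance of Eqs (28), (30) and (31)."""
    n = len(fitnesses)
    mean = sum(fitnesses) / n                            # Eq (28)
    spread = max(abs(fi - mean) for fi in fitnesses)
    f = spread if spread > 1.0 else 1.0                  # normalization factor, Eq (31)
    return sum(((fi - mean) / f) ** 2 for fi in fitnesses)  # Eq (30)
```

When every particle has the same fitness the variance is zero, signalling that the swarm has fully converged.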

    Let the population reach the global extreme point p_gd in the t-th generation, that is,

    lim_{t→∞} p_g(t) = p_gd   (32)

    Obviously, if the global extremum is p_gd, then through the term c2 η (p_gd^k − x_id^k), the third part of the velocity update formula, i.e., the process of the particle learning the flight experience of the whole population, the particle keeps moving closer to p_gd, and its individual extremum p_i is updated continuously. If the particle does not find a better position than p_gd, its individual extremum will become equal to p_gd, that is,

    lim_{t→∞} p_i(t) = p_gd   (33)

    The above formula shows that for any particle in the swarm, its final convergence position is the optimal extremum found by the whole swarm, and all particles gather at this position.

    Let the population reach the global optimum in the t-th generation, that is, p_gd is the global optimum, and each particle converges to p_gd under the influence of "social cognitive ability". At this time, the fitness value of each particle is

    f_i^k = f(p_gd)   (34)

    Therefore, the average fitness of particles is

    f̄^k = f(p_gd)   (35)

    At this time:

    σ² → 0   (36)

    To sum up, during the particle iterations the particle fitness values get closer and closer and the value of σ² becomes smaller and smaller, reflecting the continuous aggregation of the particles. In order to make each particle traverse the search space as much as possible and continuously update the global optimal solution of the population, thresholds are set in the algorithm. When the average particle distance D(t) and the fitness variance σ² of the population are both below the given thresholds, the particles have gathered at an extreme point, and the particle velocity calculated from Eq (21) is so small that the particles can hardly change position significantly. At this time, the algorithm forcibly updates the flight velocity of the particles with v_max: the n particles in the population are mutated with v_max so that they leave the extreme point in different directions and are redistributed over all positions of the search space. Since the particles have memory, the best position found so far is still stored, and the particles then begin a new round of searching the solution space. In this search process, D(t) and σ² begin to decrease again; when they fall below the given thresholds, the algorithm enters the mutation stage again. The whole evolutionary process of the particles is thus continuous searching and mutating until the iterations end and the best position is found. The specific algorithm flow is shown in Figure 2.

    Figure 2.  Flowchart of the improved PSO algorithm.

    In this paper, 42 days of load data from a 60 m Zhenyang steam ferry are selected to form the training data set of its load forecasting model. The data set contains the weather and load information for each day of operation: the weather information comprises relative flow velocity and relative wind speed, and the load information is the ship's load power collected every hour during the daily operating time. The data of one day are selected as an example and shown in Table 1.

    Table 1.  Typical load data.
    | Date | Time | Average air temperature (℃) | Flow velocity (m/s) | Relative wind speed (m/s) | Average speed (kn) | Load power (kW) |
    |------|------|------|------|------|------|------|
    | 2.14 | 0  | 7.2  | 0.73 | 2.21 | 8.56 | 802 |
    | 2.14 | 2  | 7.1  | 0.75 | 2.23 | 8.51 | 821 |
    | 2.14 | 23 | 7.6  | 0.82 | 2.45 | 7.84 | 865 |
    | 2.15 | 0  | 7.5  | 0.74 | 2.25 | 8.25 | 793 |
    | 3.27 | 15 | 13.3 | 0.58 | 1.25 | 8.62 | 856 |
    | 3.27 | 23 | 9.3  | 0.63 | 0.95 | 8.15 | 816 |


    In the typical load data of Table 1, on 2.14 at time 0 the average temperature is 7.2 ℃, the flow velocity 0.73 m/s, the relative wind speed 2.21 m/s, the average sailing speed 8.56 kn, and the total load 802 kW. Of this, the two propulsion motors consume about 600 kW, and the other loads such as lighting, control and pumps total 202 kW. At time 23, the two propulsion motors demand 625 kW in total and the other lighting, control and pump loads 196 kW.

    Because the data types have different units, the data must be normalized so that no quantity distorts the load forecasting results merely through its magnitude. Table 1 contains temperature, flow velocity, relative wind speed, average speed and load data. The flow velocity values already lie between 0 and 1 and need no normalization, so only the parameters affecting the ship's power load, namely the average temperature, relative wind speed, average sailing speed and load value, need to be normalized [44].

    1) Normalization of parameters affecting ship power load

    The typical (min-max) normalization is:

    T′ij = (Tij − Tjmin) / (Tjmax − Tjmin) (37)

    In this equation: T′ij represents the normalized data; Tij represents the original data; Tjmin is the minimum of the j-th quantity in the collected data and Tjmax its maximum; i=1,2,…,n, j=1,2,…,m.

    2) Normalization of load value

    When normalizing the load data, the logarithmic processing method is adopted [45]:

    x′ij = lg(xij) (38)

    In this equation: x′ij is the normalized load value; xij represents the original load value at time i on day j, i=1,2,…,n, j=1,2,…,m.
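The two normalization steps of Eqs (37) and (38) can be written compactly as follows (a sketch; the function names are ours):

```python
import numpy as np

def minmax_normalize(T):
    """Eq (37): column-wise min-max scaling of the weather/speed features
    (temperature, wind speed, sailing speed) onto [0, 1]."""
    t_min, t_max = T.min(axis=0), T.max(axis=0)
    return (T - t_min) / (t_max - t_min)

def log_normalize(load):
    """Eq (38): logarithmic (base-10) compression of the load values in kW."""
    return np.log10(load)
```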

    After the ships' historical load data are collected and processed, the load forecasting model can be fitted by regression. When establishing the model, the continuity and periodicity of the load are considered in summarizing the input training features.

    The selected input features are therefore: 1) the average temperature, 2) flow velocity, 3) relative wind speed, 4) average speed and 5) load value at the same time on the previous day; 6) the load value two time steps before the forecast time on the forecast day; 7) the load value one time step before the forecast time; and 8)–11) the average temperature, flow velocity, relative wind speed and average speed one time step before the forecast time on the forecast day. These 11 values form the training input, and the output is the load value at the forecast time on the forecast day.
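A sketch of how the 11-dimensional input vector could be assembled for one forecast time. The `hist` data layout and every name here are hypothetical, chosen only to illustrate the feature ordering described in the text:

```python
def build_features(hist, day, t):
    """Assemble the 11 training inputs for forecast time t on day `day`.

    `hist[d][t]` is assumed to be a dict with keys 'temp', 'flow',
    'wind', 'speed', 'load' (an illustrative layout, not the paper's).
    """
    prev_day = hist[day - 1]
    today = hist[day]
    return [
        prev_day[t]['temp'],    # 1  average temperature, same time yesterday
        prev_day[t]['flow'],    # 2  flow velocity, same time yesterday
        prev_day[t]['wind'],    # 3  relative wind speed, same time yesterday
        prev_day[t]['speed'],   # 4  average speed, same time yesterday
        prev_day[t]['load'],    # 5  load, same time yesterday
        today[t - 2]['load'],   # 6  load two steps before the forecast time
        today[t - 1]['load'],   # 7  load one step before the forecast time
        today[t - 1]['temp'],   # 8  temperature one step before
        today[t - 1]['flow'],   # 9  flow velocity one step before
        today[t - 1]['wind'],   # 10 relative wind speed one step before
        today[t - 1]['speed'],  # 11 average speed one step before
    ]
```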

    When LSSVM is used for load forecasting, the RBF kernel is chosen [46]: it is radially symmetric, smooth and simple in form, and even with multivariable input it maps the input vector to a high-dimensional space through a nonlinear transformation, so that a nonlinear relationship between input and output can be predicted. Two parameter values must then be selected to carry out the regression of the load data: the penalty parameter C and the standardization parameter σ of the kernel function.

    In this LSSVM training, C = 50 and σ = 3 are selected, the load data from 2.14 to 3.26 in Table 1 are used as the training set, and the load on 3.27 is predicted. The results are shown in Figure 3. To evaluate the prediction, the error curve and the average relative error are selected as evaluation indexes; the obtained curve is shown in Figure 4.
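For readers who want to reproduce the regression step, a minimal LSSVM with an RBF kernel reduces to one linear solve of the KKT system. This is the generic textbook formulation, not the authors' code, and the kernel form exp(−||x−z||²/(2σ²)) is an assumption based on the "standardization parameter σ":

```python
import numpy as np

def rbf(X, Z, sigma):
    """RBF kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, C=50.0, sigma=3.0):
    """LSSVM regression: the KKT conditions form the linear system
    [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=3.0):
    """Prediction f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf(X_new, X_train, sigma) @ alpha + b
```

Larger C weakens the ridge term I/C and fits the training data more tightly, which is exactly why C and σ must be tuned rather than fixed.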

    Figure 3.  LSSVM load forecasting curve.
    Figure 4.  Error curve.

    Figures 3 and 4 show that when LSSVM is used directly for prediction, the maximum error reaches 8%, the minimum error is around 2%, and the average relative error is about 3.2%.

    The particle dimension is set to 2, the swarm size to 50 and the number of iterations to 10; ω ranges over [0.4, 0.9], the parameter C over [0.1, 200] and σ over [0.1, 20]. Taking the prediction error as the fitness and running the LSSVM regression model optimized by particle swarm optimization yields the curves in Figures 5 and 6.
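The PSO loop with these settings can be sketched as follows (a minimal version with a linearly decreasing inertia weight over [0.4, 0.9]; the acceleration coefficients c1 = c2 = 2 are a common default and an assumption here, and the objective would be the LSSVM prediction error as a function of (C, σ)):

```python
import numpy as np

def pso_optimize(objective, bounds, n_particles=50, n_iter=10,
                 w_range=(0.4, 0.9), c1=2.0, c2=2.0, seed=0):
    """Minimal PSO minimizing `objective` over the 2-D vector (C, sigma)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for k in range(n_iter):
        # Inertia weight decreases linearly from 0.9 to 0.4.
        w = w_range[1] - (w_range[1] - w_range[0]) * k / max(1, n_iter - 1)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)           # keep particles inside bounds
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For example, `pso_optimize(error_of_lssvm, [(0.1, 200.0), (0.1, 20.0)])` would return the best (C, σ) pair found and its error.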

    Figure 5.  PSO-SVM load forecasting curve.
    Figure 6.  Load forecasting error curve.

    Figures 5 and 6 show that the regression model obtained by optimizing the two LSSVM parameters with the PSO algorithm is more accurate. The error curve in Figure 6 shows that although the maximum prediction error increases, the minimum prediction error approaches 0, and the average relative error also confirms that the PSO-based improved SVM algorithm yields better predictions. In addition, when the prediction model is established, online and offline performance indicators are added as evaluation criteria for the optimization; their expression is given in Eq (39):

    OnLine(1,k) = sum(MeanAdapt(1, 1:k)) / k,
    OffLine(1,k) = max(MeanAdapt(1, 1:k)) (39)

    In this equation: k=1,2,,LoopCount. MeanAdapt(1,1:k) represents the average fitness value of each generation of particles from the first generation to the k-th generation.
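Eq (39) translates directly into a running mean and a running maximum over the per-generation average fitness (a sketch; the function name is ours):

```python
import numpy as np

def online_offline(mean_adapt):
    """Eq (39): OnLine(k) is the running mean of the per-generation average
    fitness up to generation k; OffLine(k) is its running maximum."""
    mean_adapt = np.asarray(mean_adapt, dtype=float)
    k = np.arange(1, len(mean_adapt) + 1)
    online = np.cumsum(mean_adapt) / k
    offline = np.maximum.accumulate(mean_adapt)
    return online, offline
```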

    At the beginning of the optimization the prediction error rises: the maximum average error over the iterations reaches 2.87%, and the maximum average error of a single iteration reaches 2.89%. As the iterations continue, both the error and the average error shrink steadily, indicating that the optimization keeps improving.

    IPSO-SVM not only lets the particles search the space thoroughly but also keeps them from falling into a local optimal solution. The load forecasting simulation results are shown in Figures 7 and 8.

    Figure 7.  Load forecasting curve.
    Figure 8.  Load forecasting error.

    From Figures 7–9 it can be seen that adding the average particle distance and fitness variance to the particle swarm algorithm further improves the prediction accuracy: the maximum error is about 8%, the minimum error about 0, and the average relative error is reduced by about 0.013%. In the prediction iterations, the maximum average error of a single iteration reaches 2.79% and the maximum average error over the iterations 2.81%. This paper only optimizes two parameters in two dimensions, so the search range is small and the effect is modest; when the parameter dimension grows and the optimization range widens, the parameter optimization method will show greater advantages. After parameter optimization, the IPSO-SVM load prediction is more accurate and its prediction error smaller.

    Figure 9.  Online and offline performance.

    Load forecasting of the ship power system is an important component of its energy management and bears on the stability, security and economy of the ship power system. To further reduce the load forecasting error, a support vector machine model for ship power load forecasting is designed. Temperature, flow velocity, relative wind speed and sailing speed are innovatively used as training features of the support vector machine, which strengthens the correlation between the input features and the predicted output load and improves the prediction accuracy. Because the regularization parameter C and the standardization parameter σ of the support vector machine are difficult to select, the particle swarm optimization algorithm is introduced to tune them, with the average particle distance and the fitness variance as evaluation indexes. When a poor initial particle selection leads to a local optimum, the particles are re-initialized, preventing the particle swarm optimization from stagnating in a local optimum and further improving the accuracy of IPSO-SVM load forecasting.

    This work was supported by the Zhoushan Science and Technology Project (2022C13034) and the Jiangsu Key R & D Plan (BE2018007).

    On behalf of all authors, the corresponding author states that there is no conflict of interest.



    [1] G. E. P. Box, G. M. Jenkins, G. C. Reinsel, Time series analysis: forecasting and control, J. Time, 31 (1976), 238–242. https://doi.org/10.1111/j.1467-9892.2009.00643.x doi: 10.1111/j.1467-9892.2009.00643.x
    [2] J. He, G. Wei, L. L. Xiong, Fuzzy improvement of linear regression analysis for load forecasting, East China Electr. Power, 11 (2003), 21–23.
    [3] C. B. Yuan, T. Zhang, J. L. Zhu, Short-term power load forecasting based on MRA and regression analysis, Inf. Technol., 10 (2007), 88–90.
    [4] Q. Li, Z. Y. Wang, Z. R. Wang, Uncertainty evaluation for the dynamic calibration of pressure transducer, J. Bei-jing Univ. Aeronaut. Astronaut., 41 (2015), 847–856. https://doi.org/10.13700/j.bh.1001-5965.2014.0356 doi: 10.13700/j.bh.1001-5965.2014.0356
    [5] S. L. Guo, Q. L. Shui, X. Y. Gu, Summary of application of gray system theory in load forecasting, Ind. Instrum. Autom., 3 (2017), 24–27.
    [6] Y. Xue, N. Zhang, H. Wu, Z. Yu, R. Li, Short-term load forecasting method for user side microgrid based on UTCI-MIC and amplitude compression grey model, Power Syst. Technol., 44 (2020), 556–563. https://doi.org/10.13335/j.1000-3673.pst.2019.1870 doi: 10.13335/j.1000-3673.pst.2019.1870
    [7] G. J. Zhang, J. J. Qiu, J. H. Li, Multi-factor short-term load forecasting based on fuzzy inference system, Autom. Electr. Power Syst., 26 (2002), 49–53.
    [8] S. K. Ha, K. B. Song, B. S. Kim, Short-term load forecasting for the holidays using fuzzy linear regression method, IEEE Trans. Power Syst., 20 (2005), 96–101. https://doi.org/10.1109/TPWRS.2004.835632 doi: 10.1109/TPWRS.2004.835632
    [9] L. Feng, J. J. Qiu, Short-term load forecasting for anomalous days based on fuzzy multi-objective genetic optimization algorithm, Proc. CSEE, 25 (2005), 29–34. https://doi.org/10.1109/PES.2006.1708902 doi: 10.1109/PES.2006.1708902
    [10] X. AI, Z. Y. Zhou, Y. P. Wei, H. Zhang, L. Li, Bidding strategy of transferable load based on autoregressive integrated moving average model, Autom. Electr. Power Syst., 41 (2017), 26–31. https://doi.org/10.7500/AEPS20170119009 doi: 10.7500/AEPS20170119009
    [11] L. L. Liu, Short-term power load forecasting based on SARI-MA and SVR, Ph.D. thesis, East China University of Technology, 2018.
    [12] X. Y. Wu, J. H. He, P. Zhang, J. Hu, Power system short-term load forecasting based on improved random forest with grey relation projection, Autom. Electr. Power Syst., 39 (2015), 50–55. https://doi.org/10.7500/AEPS20140916005 doi: 10.7500/AEPS20140916005
    [13] N. T. Huang, G. B. Lu, D. G. Xu, A permutation importance-based feature selection method for short-term electricity load forecasting using random forest, Energies, 9 (2016), 767. https://doi.org/10.3390/en9100767 doi: 10.3390/en9100767
    [14] Y. C. Li, T. J. Fang, E. K. Yu, Study of support vector machine method for short-term load forecasting, Proc. CSEE, 6 (2003), 55–59. https://doi.org/10.1109/ICNC.2007.659 doi: 10.1109/ICNC.2007.659
    [15] D. F. Zhao, W. C. Pang, J. S. Zhang, X. F. Wang, Based on Bayesian theory and online learning SVM for short term load forecasting, Proc. CSEE, 25 (2005), 8–13. https://doi.org/10.1109/ICNC.2007.659 doi: 10.1109/ICNC.2007.659
    [16] D. F. Zhao, X. F. Wang, L. Zhou, T. Zhang, D. Z. Xia, Short-term load forecasting using radial basis function networks and expert system, J. Xi'an Jiaotong Univ., 4 (2001), 331–334.
    [17] H. D. Zhao, Research on intelligent power short-term load forecasting system based on fuzzy expert system, Master thesis, South China University of Technology, 2001.
    [18] L. Li, J. Wei, C. B. Li, Y. Cao, B. Fang, Prediction of load model based on artificial neural network, Trans. China Electrotech. Soc., 30 (2015), 225–230. https://doi.org/10.19595/j.cnki.1000-6753.tces.2015.08.028 doi: 10.19595/j.cnki.1000-6753.tces.2015.08.028
    [19] J. R. Zhang, Research on power load forecasting based on the improved elman neural network, Chem. Eng. Trans., 51 (2016), 589–594. https://doi.org/10.3303/CET1651099 doi: 10.3303/CET1651099
    [20] R. Hu, S. Wen, Z. Zeng, T. Huang, A short-term power load forecasting model based on the generalized regression neural network with decreasing step fruit fly optimization algorithm, Neurocomputing, 221 (2017), 24–31. https://doi.org/10.1016/j.neucom.2016.09.027 doi: 10.1016/j.neucom.2016.09.027
    [21] X. Dai, C. Yang, S. Huang, T. Yu, Y. Zhu, Finite time blow-up for wave equation with dynamic boundary condition at critical and high energy levels in control systems, Electron. Res. Arch., 28 (2020), 91–102. https://doi.org/10.3934/era.2020006 doi: 10.3934/era.2020006
    [22] X. Q. Dai, Global existence of solution for multidimensional generalized double dispersion equation, Boundary Value Probl., 1 (2019), 1–4. https://doi.org/10.1186/s13661-019-1266-1 doi: 10.1186/s13661-019-1266-1
    [23] X. Q. Dai, W. K. Li, Non-global solution for visco-elastic dynamical system with nonlinear source term in control problem, Electron. Res. Arch., 29 (2021), 4087–4098. https://doi.org/10.3934/era.2021073 doi: 10.3934/era.2021073
    [24] N. Y. Liu, F. Mu, Short-term power load forecasting based on least squares support vector machine optimized by NRS and PSO algorithm, Mod. Electr. Tech., 42 (2019), 115–118. https://doi.org/10.16652/j.issn.1004-373x.2019.07.028 doi: 10.16652/j.issn.1004-373x.2019.07.028
    [25] W. L. Gong, Short-term load forecasting based on least squares support vector machine, Master thesis, Hunan University, 2014.
    [26] X. G. Zhang, About statistical learning theory and support vector machine, J. Autom., 26 (2000), 32–42. https://doi.org/10.16383/j.aas.2000.01.005 doi: 10.16383/j.aas.2000.01.005
    [27] T. Liu, Y. Wang, W. Liu, Research on Least Squares Support Vector Machine Combinatorial Optimization Algorithm, in 2009 International Forum on Computer Science-Technology and Applications, (2009), 452–454. https://doi.org/10.1109/IFCSTA.2009.116
    [28] W. D. Chang, An improved PSO algorithm for solving nonlinear programing problems with constrained conditions, Int. J. Model. Simul. Sci. Comput., 12 (2021), 2150001. https://doi.org/10.1142/S179396232150001X doi: 10.1142/S179396232150001X
    [29] H. Yang, SVM kernel parameter optimization research and application, Master thesis, Zhejiang University, 2014.
    [30] J. Dong, Y. Zhao, C. Liu, Z. F. Han, C. S. Leung, Orthogonal least squares based center selection for fault-tolerant RBF networks, Neurocomputing, 339 (2019), 217–231. https://doi.org/10.1016/j.neucom.2019.02.039 doi: 10.1016/j.neucom.2019.02.039
    [31] Y. Bai, Kernel function-based interior-point algorithms for conic optimization, Science Press, 2010.
    [32] W. Jiang, SVM parameter optimization and application based on improved particle swarm optimization, Ph.D. thesis, Jiangsu University of Science and Technology, 2020.
    [33] P. W. Li, J. Zhao, Intelligent single particle optimization and particle swarm optimization fusion algorithm, Int. J. Appl. Math. Stat., 45 (2013), 395–403.
    [34] J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.
    [35] Y. Shi, R. Eberhart, Empirical study of particle swarm optimization, in International Conference on Evolutionary Computation, 3 (1999), 1945–1950. https://doi.org/10.1109/CEC.1999.785511
    [36] J. D. Tang, X. Y. Xiong, Y. W. Wu, Reactive power optimization of power system based on improved PSO algorithm, Power Autom. Equip., 7 (2004), 81–84.
    [37] J. D. C. Little, The use of storage Water in a Hydroelectric System, Oper. Res., 3 (1995), 187–197. https://doi.org/10.1287/opre.3.2.187 doi: 10.1287/opre.3.2.187
    [38] K. De Jong, An analysis of the behavior of a class of genetic adaptive systems, Ph.D. thesis, University of Michigan, 1975.
    [39] J. X. Hu, J. H. Zeng, Adjustment strategy of inertia weight in particle swarm optimization, Comput. Eng., 11 (2007), 193–195.
    [40] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in IEEE International Congress on Evolutionary Computation Proceedings, (1998), 69–73. https://doi.org/10.1109/ICEC.1998.699146
    [41] H. Y. Liu, Y. E. Lin, J. S. Zhang, Hybrid particle swarm optimization algorithm based on chaotic search to solve premature convergence, Comput. Eng. Appl., 42 (2006), 77–79.
    [42] Z. S. Lu, Z. R. Hou, Adaptive mutation particle swarm optimization algorithm, J. Electr., 32 (2004), 416–420.
    [43] X. L. Zhang, S. H. Wen, H. N. Li, Q. Lu, M. Wu, X. Wang, Chaos particle swarm optimization algorithm based on Tent mapping and its application, China Mech. Eng., 19 (2008), 2108–2112.
    [44] J. P. He, Application of support vector machine in short-term power load forecasting, Master thesis, Three Gorges University, 2014.
    [45] D. Wang, Short-term load forecasting of power system based on improved least squares support vector machine, Master thesis, Xi'an University of Technology, 2015.
    [46] H. B. Wang, Electric propulsion ship load forecasting research, Master thesis, Jiangsu University of Science and Technology, 2013.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
