
According to the Food and Agriculture Organization of the United Nations (FAO), more than one billion people worldwide rely on fish as an important source of animal protein [1]. Over the past 30 years, aquaculture has been the fastest-growing sector in agriculture. It is one of the pillar industries driving China's economy, creating many jobs in rural areas and bringing stable income to farmers [2]. With the development of artificial intelligence and big data technology, how to increase aquaculture production with modern information technology and improve the information management of fisheries and fishery administration has become a hot research topic.
For aquatic animals, DO is essential to sustaining life; survival and reproduction can only occur under oxygenated conditions. At the same time, dissolved oxygen concentrations that are too high or too low can be fatal to the health of aquatic products, so DO must be kept within a reasonable range [3]. When DO levels are too high, fish are prone to bubble disease [4]. Conversely, when DO stays below the standard index for a long time, the growth of aquatic organisms slows, disease resistance drops, and death results in severe cases. It would therefore be very convenient for aquaculturists if the trend of DO concentration could be accurately predicted in advance. However, accurately predicting DO concentration trends is challenging. Since aquaculture takes place in an open-air environment, some microorganisms in the water increase the DO content through photosynthesis, while fish and phytoplankton accelerate oxygen consumption through respiration [5]. Differences in depth and water temperature also lead to an uneven DO distribution within the culture water [6]. As a result, the DO time series monitored by water quality sensors exhibits nonlinear characteristics [7,8], and as the prediction horizon grows, these nonlinear characteristics gradually degrade the model's accuracy [9,10].
In order to reduce the influence of nonlinear characteristics on prediction results, researchers have designed various water quality prediction models for different application scenarios, which can be divided into mechanistic and nonmechanistic models according to their working principles [11,12]. The mechanistic model is derived from the system structure based on the physical, chemical, biological and other reaction processes of the water environment system, with the help of large amounts of hydrological, water quality, meteorological and other monitoring data [13]. Because the mechanistic model requires a large amount of basic information about the water environment, which is usually very complex, its further application in water quality prediction is limited.
With the development and application of computer technology, more and more nonmechanistic models are being applied to water quality prediction [14]. The nonmechanistic water quality prediction method does not consider the physical and chemical changes of the water body; it builds a model from historical data to predict the changing trend. The process is simple and the results are good. These methods mainly include time series, regression, probabilistic-statistical, machine learning and deep learning models [15,16,17]. For example, Shi et al. [18] proposed a clustering-based softplus extreme learning machine (ELM) model to accurately predict changes in DO given the nonlinear characteristics of DO data in aquaculture waters. The model employs partial least squares and a new softplus activation function to improve the ELM, which addresses the nonlinearity in time series data streams and avoids instability of the output weight coefficients. However, this model is not suitable for training on large amounts of historical data, so deep learning models may be better suited to real aquaculture data. Li et al. [19] developed three deep learning models, the recurrent neural network (RNN), long short-term memory neural network (LSTM) and gated recurrent unit (GRU), to predict DO in fish ponds. The results showed that the GRU performs similarly to the LSTM, but its time cost and parameter count are much lower, making it more suitable for DO prediction in natural fish ponds. Although these three deep learning models achieve excellent prediction results when trained on large amounts of aquaculture data, their accuracy decreases significantly as the prediction horizon increases.
In recent years, many scholars have also first decomposed the raw DO data and then used different machine learning or deep learning models to predict each component of the decomposition. For example, Li et al. [20] proposed a hybrid model based on ensemble empirical mode decomposition with multiscale features. Ren et al. [21] proposed a prediction model combining variational mode decomposition with a deep belief network. Huang et al. [22] proposed a combined prediction model based on complete ensemble empirical mode decomposition with adaptive noise and a GRU tuned by an improved particle swarm algorithm; compared with various benchmark models, this method handles complex time series data effectively and predicts DO trends reliably. Such decomposition-based methods can separate and denoise the original data and enhance the quality of the neural network's input, thus improving prediction accuracy. However, decomposing before predicting can cause boundary distortion, and future data may leak into model training [23]. Because of the excellent performance of the attention mechanism in artificial intelligence, many scholars have introduced it into DO prediction in aquaculture. For example, Bi et al. [24] added an attention mechanism after the LSTM network for multi-step DO prediction, and Liu et al. [25] built a combined model of attention mechanism and RNN that achieved excellent short-term and long-term prediction results.
Building on previous studies, we investigate a combined model of a convolutional neural network, an attention mechanism and a bidirectional long short-term memory network for short-term and long-term prediction of DO concentrations in aquaculture. It consists of a one-dimensional convolutional neural network (1D-CNN), a BiLSTM and the AM, and is called the CNN-BiLSTM-AM model. The 1D-CNN helps the model extract important feature data from multiple input vectors. The BiLSTM adds forward and backward propagation to the LSTM, so it can learn both the backward and the forward adjacency relationships in the input data and thereby fully mine the multiple data features. The BiLSTM network is then combined with the AM: the model's attention is focused on the moving step, capturing the effect of different time steps on DO concentration prediction and improving the accuracy and stability of the model in long-term prediction.
The remainder of this paper is organized as follows: Section 2 describes the materials and methods. Section 3 details the experiments and analysis of results. Finally, Section 4 gives conclusions and directions for future work.
A time series is a sequence of numerical values arranged in chronological order [26]. Time series are divided into univariate and multivariate time series; a multivariate time series is a combination of multiple univariate time series and can be regarded as sampling multiple observed variables from different sources [27]. The multidimensional DO concentration prediction model for aquaculture proposed in this paper addresses a multivariate time series prediction problem. For given time series data with $ N $ feature variables, multivariate single-step prediction can be defined by Eq (2.1):
$ \hat{y}_{t+1} = f\left(x_0^0, x_1^1, \cdots, x_t^{N-1}\right) $ (2.1)
where $ \hat{y}_{t+1} $ represents the model's estimate of the DO concentration at one future moment, and $ x_i^j = \left(x_0^j, x_1^j, \cdots, x_t^j\right)^T $ represents the data vector of the $ j $-th feature variable, $ j \in \left[0, N\right) $, at moment $ i \in \left\{0, 1, 2, \cdots, t\right\} $.
Multivariate multi-step prediction builds on multivariate single-step prediction. It uses the $ k \cdot N $ feature variables of the last $ k $ moments as inputs to predict the DO concentration for the next $ k $ moments. Eq (2.2) shows the $ k $-step prediction.
$ \left(\hat{y}_{t+1}, \cdots, \hat{y}_{t+k}\right) = f\left(X_0, X_1, \cdots, X_k\right) $ (2.2)
where $ X_i \in \left\{X_1, X_2, \cdots, X_k\right\} $ is a multi-step supervised learning dataset constructed $ i $ moments in advance. The model $ f\left(\cdot\right) $ is estimated with a supervised learning strategy, using the training data and the corresponding labels.
The data used in this paper were obtained from two aquaculture farms in Yantai, Shandong Province, China. The farms are equipped with multi-parameter water quality monitoring sensors that collect water environment data in real time, including water temperature, salinity, chlorophyll concentration and DO concentration, at 10-minute intervals. The first 80% of the collected data is used as the training set; the remaining 20% is divided into validation and test sets. The training set is used to adjust the model's internal parameters, such as the weights and bias vectors of the neural network; the validation set checks whether the model is overfitting or underfitting; and the test set is used to evaluate the model's predictive performance.
Multivariate multi-step prediction can be defined as a supervised learning problem, with the multi-dimensional data stitched together to form a matrix. Specifically, the model inputs are the DO concentration, salinity, water temperature and chlorophyll content at past moments, and the model output is the DO concentration at the current moment. $ T $ denotes the current moment and $ n $ represents the prediction time step. Figure 1 illustrates the construction of the supervised learning dataset in this paper, which uses multi-dimensional data from the previous $ \left(T - n\right) $ moments to predict the DO concentration at moment $ T $. The data enclosed in red boxes represent one constructed pair of training data and target label; the sliding-window technique repeats this operation to construct all of the data, as sketched below.
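The sliding-window construction can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the array layout (DO concentration in column 0), the function names and the exact split sizes are our assumptions, with the split mirroring the 80% training / 10% validation / 10% test partition described above.

```python
import numpy as np

def make_supervised(data: np.ndarray, n_steps: int):
    """Slide a window of n_steps past observations over the series and
    pair each window with the DO value that follows it (Figure 1)."""
    X, y = [], []
    for i in range(len(data) - n_steps):
        X.append(data[i:i + n_steps, :])   # past n_steps rows, all features
        y.append(data[i + n_steps, 0])     # DO concentration at the next step
    return np.array(X), np.array(y)

def split(X, y, train_frac=0.8):
    """Chronological split: first 80% for training, last 20% halved
    into validation and test sets."""
    n = len(X)
    n_train = int(n * train_frac)
    n_val = (n - n_train) // 2
    return ((X[:n_train], y[:n_train]),
            (X[n_train:n_train + n_val], y[n_train:n_train + n_val]),
            (X[n_train + n_val:], y[n_train + n_val:]))
```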
Using sliding windows of different sizes, the DO concentration at the current moment can be predicted several time steps ahead, where one time step is 10 minutes. To verify the model's short-term performance, 3-step (30-minute) and 6-step (1-hour) predictions are selected for the experimental analysis: the 3-step prediction uses the historical data of the previous three moments to predict the DO trend 30 minutes ahead, and the 6-step prediction uses the previous six moments to predict the DO trend one hour ahead. On this basis, this paper chooses 36-step (6-hour) and 72-step (12-hour) predictions to verify the performance of the CNN-BiLSTM-AM model in long-term prediction.
The CNN is LeCun's improvement on the multilayer perceptron (MLP) [28]. Owing to its local connectivity, weight sharing and downsampling, the CNN excels in image processing, with applications in image classification, face recognition, autonomous driving and target detection [29,30,31]. There are three types of convolution operations: 1D, 2D and 3D convolution [32]. 1D convolution is used for sequential data, as in natural language processing; 2D convolution is common in computer vision and image processing; and 3D convolution is often used in medicine and video processing.
Since this paper predicts DO concentration sequences in aquaculture, which are one-dimensional data, the 1D-CNN is used for feature extraction. The 1D-CNN is calculated as shown in Eq (2.3):
$ h_t = \sigma\left(W * x_t + b\right) $ (2.3)
where $ W $ denotes the convolution kernel, also called the weight coefficients of the filter in the convolution layer; $ b $ denotes the bias vector; $ x_t $ represents the $ t $-th input sample; $ * $ denotes the convolution operator; $ \sigma $ represents the activation function; and $ h_t $ represents the output of the convolution operation.
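For concreteness, Eq (2.3) can be written out directly in NumPy for a single filter. This is an illustrative sketch: the kernel values and input sequence are made up, and ReLU is used as the activation $ \sigma $ (the activation listed in Table 1).

```python
import numpy as np

def conv1d_single_filter(x, W, b):
    """Valid 1D convolution of sequence x with kernel W plus bias b,
    followed by a ReLU activation, i.e., h = sigma(W * x + b)."""
    k = len(W)
    out = np.array([np.dot(W, x[i:i + k]) + b for i in range(len(x) - k + 1)])
    return np.maximum(out, 0.0)  # ReLU

# Illustrative DO readings and kernel values.
h = conv1d_single_filter(x=np.array([5.8, 6.1, 6.4, 6.2, 5.9]),
                         W=np.array([0.5, -0.2, 0.3]), b=0.1)
```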
Figure 2 shows the structure of the 1D-CNN. In this paper, the 1D-CNN extracts features of the DO concentration from the time series data. First, the historical multi-feature data of the aquaculture water environment are stitched into a matrix. To feed the constructed supervised learning data into the CNN, the matrix is converted into $ k $ tensors, each with $ m $ rows and $ n $ columns, where $ k $ is the size of the one-dimensional convolution kernel, $ m $ is the number of water environment variables, and $ n $ is the length of the time step. After the convolution operation, max pooling is used to retain the strongest features and discard the weak ones, reducing complexity and avoiding overfitting. Max pooling places a window on the sequence, takes the maximum value within the window as the pooling output, then slides the window and repeats until the end of the sequence. A sketch of this convolution-and-pooling block follows.
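In Keras (the framework used in this paper), the convolution block described above might look as follows. The filter sizes [128, 64], kernel size 1, 'same' padding and ReLU activation follow Table 1, while the pooling size and the input shape (past observations of four water-quality variables) are our assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_steps, n_features = 6, 4  # assumed window length and feature count
cnn_block = keras.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.Conv1D(128, kernel_size=1, padding="same", activation="relu"),
    layers.Conv1D(64, kernel_size=1, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),  # keep the strongest local features
])
```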
Although the RNN can learn the relationship between the current moment and earlier information in long time series prediction, the further back in time the information lies, the harder it is for the RNN to learn this relationship. Researchers call this the long-term dependence problem [33]; it resembles a person with a weak memory who cannot recall the distant past. Its root cause is that the gradient tends to vanish or explode after the RNN propagates through many stages. To solve this problem, Hochreiter and Schmidhuber [34] proposed the LSTM in 1997. The LSTM is a modification of the RNN, and Figure 3 illustrates its basic structural unit. The LSTM network consists of several such units, each containing three gating mechanisms: the forget, input and output gates.
The forget gate determines how much of the previous information is discarded. After receiving the previous output $ h_{t-1} $ and the current input $ x_t $, the forget gate decides which past information to drop. The forget gate of the LSTM is calculated as follows:
$ f_t = \sigma\left(W_f \cdot \left[h_{t-1}, x_t\right] + b_f\right) $ (2.4)
where $ f_t $ represents the output of the forget gate and $ \sigma $ represents the activation function. The output of the sigmoid function in the forget gate ranges over $ \left(0, 1\right) $, so the data can be selectively discarded.
The input gate selects which current information is fed into the cell after the forget gate has discarded part of the information [35,36]. It works in two steps: first, Eqs (2.5) and (2.6) determine which values need to be updated; second, Eq (2.7) updates the cell state of the previous moment to the cell state of the current moment.
$ i_t = \sigma\left(W_i \cdot \left[h_{t-1}, x_t\right] + b_i\right) $ (2.5)

$ \tilde{C}_t = \tanh\left(W_C \cdot \left[h_{t-1}, x_t\right] + b_C\right) $ (2.6)

$ C_t = f_t * C_{t-1} + i_t * \tilde{C}_t $ (2.7)
The output gate, the last part of the LSTM unit, produces the model's output according to Eqs (2.8) and (2.9). The initial output value $ o_t $ is first computed from the previous output $ h_{t-1} $ and the current input $ x_t $ and controls which information is output. The hyperbolic tangent activation function ($ \tanh $) then scales the current cell state $ C_t $ to between $ -1 $ and $ 1 $, and the result is multiplied by $ o_t $ to obtain the final output of the output gate.
$ o_t = \sigma\left(W_o \cdot \left[h_{t-1}, x_t\right] + b_o\right) $ (2.8)

$ h_t = o_t * \tanh\left(C_t\right) $ (2.9)
where $ W_f $, $ W_i $, $ W_C $ and $ W_o $ are the weight matrices of the forget gate, input gate, state update and output gate, and $ b_f $, $ b_i $, $ b_C $ and $ b_o $ represent the corresponding bias vectors.
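For concreteness, one step of an LSTM cell implementing Eqs (2.4)–(2.9) can be sketched in NumPy. The weight shapes and random initialization are illustrative only; real frameworks fuse these matrices for efficiency.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """W and b are dicts with keys 'f', 'i', 'c', 'o'; each W[k]
    multiplies the concatenation [h_prev, x_t], as in Eqs (2.4)-(2.9)."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])        # forget gate, Eq (2.4)
    i_t = sigmoid(W["i"] @ z + b["i"])        # input gate, Eq (2.5)
    c_tilde = np.tanh(W["c"] @ z + b["c"])    # candidate state, Eq (2.6)
    c_t = f_t * c_prev + i_t * c_tilde        # cell-state update, Eq (2.7)
    o_t = sigmoid(W["o"] @ z + b["o"])        # output gate, Eq (2.8)
    h_t = o_t * np.tanh(c_t)                  # hidden output, Eq (2.9)
    return h_t, c_t

# Tiny demo with random weights (hidden size 4, input size 3).
hidden, inp = 4, 3
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(hidden, hidden + inp)) for k in "fico"}
b = {k: np.zeros(hidden) for k in "fico"}
h, c = lstm_step(rng.normal(size=inp), np.zeros(hidden), np.zeros(hidden), W, b)
```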
Although the LSTM solves the long-term dependence problem internally by adding a gating mechanism, it can only process temporal information in one direction. In long sequences, the current state may relate not only to previous information but also to subsequent information. The BiLSTM combines the bidirectional recurrent neural network [37] with the LSTM to make full use of both the historical and the future information in a sequence and thereby improve prediction performance [38]. Because BiLSTM networks can capture both the preceding and the following context of the input sequences, they have been widely used in machine translation and speech recognition.
The BiLSTM neural network adds a forward layer and a backward layer to the LSTM structure; the main structure is shown in Figure 4. Each propagation layer in the BiLSTM has exactly the same structure as the one-way LSTM model. As shown in Eq (2.10), the final output is the superposition of the LSTM outputs in both directions.
$ h_t = \vec{h}_t \otimes \overleftarrow{h}_t $ (2.10)
where $ t $ represents the moment in the time series; $ \vec{h} $ and $ \overleftarrow{h} $ represent the output values of the forward and backward propagation layers, respectively; and $ h_t $ represents the superposed forward and backward output at moment $ t $.
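In Keras, the bidirectional wrapper realizes this forward/backward superposition. The unit count (128) follows Table 1; using concatenation for the merge operator $ \otimes $ is our assumption (it is Keras's default).

```python
from tensorflow.keras import layers

# Two LSTMs read the sequence in opposite directions and their per-step
# outputs are merged, as in Eq (2.10). return_sequences=True keeps the
# output at every time step so a downstream attention layer can weight them.
bilstm = layers.Bidirectional(
    layers.LSTM(128, return_sequences=True),
    merge_mode="concat",
)
```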
When we flip through a photo album quickly, we may not take in each whole picture; instead, we tend to focus on its most attractive parts. The AM borrows this idea. In mathematical language, let $ X = \left[x_1, x_2, x_3, \cdots, x_n\right] $ represent $ n $ pieces of input information. To save computational resources, we do not want the neural network to process all of the inputs, but only to select the information most relevant to the task. In recent years, the AM has been widely used in image enhancement, text classification and machine translation [39,40].
In this paper, we use soft attention, the most common attention method: instead of selecting one of the $ n $ pieces of input information, a weighted sum of all $ n $ inputs is computed and then fed into the neural network. Here, attention is assigned to the prediction steps of the model built on the CNN-BiLSTM network.
The calculation proceeds in three steps. First, the similarity between the BiLSTM output $ h_i \; \left(i = 1, 2, \cdots, n\right) $ at each moment and the output $ h_t $ at the current moment is computed to obtain the corresponding score $ s_i \; \left(i = 1, 2, \cdots, n\right) $. Second, the scores $ s_i $ are normalized with the $ \mathrm{softmax}\left(\cdot\right) $ function to obtain the attention weights $ \alpha_i \; \left(i = 1, 2, \cdots, n\right) $ of the outputs $ h_i $. Third, the weights $ \alpha_i $ and the BiLSTM outputs $ h_i $ at each moment are combined in a weighted sum to obtain the final output $ c_t $. The attention mechanism is given by Eqs (2.11)–(2.13).
$ s_i = \tanh\left(W_{h_i} h_t + b_{h_i}\right) $ (2.11)

$ \alpha_i = \mathrm{softmax}\left(s_i\right) $ (2.12)

$ c_t = \sum\limits_{i=1}^{n} \alpha_i h_i $ (2.13)
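A minimal Keras layer implementing Eqs (2.11)–(2.13) might look as follows; the layer and weight names are our own, not the authors'.

```python
import tensorflow as tf
from tensorflow.keras import layers

class SoftAttention(layers.Layer):
    """Soft attention over time steps: tanh scores, softmax weights,
    weighted sum of the BiLSTM outputs."""
    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W = self.add_weight(name="W", shape=(d, 1))
        self.b = self.add_weight(name="b", shape=(1,))

    def call(self, h):                                   # h: (batch, steps, d)
        s = tf.tanh(tf.matmul(h, self.W) + self.b)       # scores, Eq (2.11)
        alpha = tf.nn.softmax(s, axis=1)                 # weights, Eq (2.12)
        return tf.reduce_sum(alpha * h, axis=1)          # context c_t, Eq (2.13)
```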
Figure 5 illustrates the CNN-BiLSTM-AM model framework built in this paper. Table 1 collates the details of the main parameters of the CNN-BiLSTM-AM model. In addition to the input and output layers, the model contains a convolutional layer, a BiLSTM layer, and an Attention layer.
| Hyperparameter | Value |
|---|---|
| Filter size of CNN | [128, 64] |
| Kernel size of CNN | 1 |
| Padding | same |
| Activation function | ReLU |
| Unit number for BiLSTM | 128 |
| Optimization function | Adam |
| Learning rate | 0.001 |
| Batch size | 64 |
| Epoch number | 100 |
The first step is the data input layer, which uses multi-feature data from the aquaculture water environment to construct a supervised learning dataset $ \left\{\left(X_i, Y_i\right) \mid i = 1, 2, \cdots, n\right\} $, building the features $ X $ and labels $ Y $ for model training. The second step is the convolution layer, which extracts the features in the sequence using one-dimensional convolution (Conv1D) and adds a pooling (MaxPooling) layer to reduce the complexity of the features and avoid overfitting. Although the convolution and pooling stages can fully extract the time-series features and enrich their diversity, the CNN only considers the correlations between adjacent data in the sequence and ignores the long-term dependencies of the time series.
To remedy this deficiency, the third step connects the BiLSTM layer to the CNN layer. The forward and reverse LSTM networks in the BiLSTM layer fully consider the past and future information in the sequences. Since the BiLSTM network is built from the bidirectional recurrent neural network and the LSTM, its core is still the LSTM, so the feature information output by the CNN layer must be reshaped so that its tensor form matches the input format required by the LSTM: [number of samples, prediction time step, input feature dimension]. In the BiLSTM layer, the network first traverses the data output by the CNN layer from left to right, then traverses it from right to left, and finally stitches the outputs of the two directions together and passes them to the attention layer.
In this paper, we study multi-dimensional multi-step time series prediction of DO concentration, and different time steps affect the model's accuracy. Therefore, the fourth step incorporates the AM, specifically soft attention, after the BiLSTM layer. The AM captures the importance of different prediction steps in the time series and improves the model's overall prediction accuracy. The fifth step uses a Flatten layer to transform the multi-dimensional data into one dimension. Finally, a fully connected layer is added to obtain the model's output.
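Putting the five steps together, a hedged Keras sketch of the CNN-BiLSTM-AM architecture, reusing the SoftAttention layer sketched earlier and the Table 1 hyperparameters, might read as follows; the exact input shape is our assumption.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_bilstm_am(n_steps: int, n_features: int) -> keras.Model:
    inp = keras.Input(shape=(n_steps, n_features))            # step 1: input layer
    x = layers.Conv1D(128, 1, padding="same", activation="relu")(inp)
    x = layers.Conv1D(64, 1, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)                   # step 2: Conv1D + pooling
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)  # step 3
    x = SoftAttention()(x)                                    # step 4: attention over steps
    x = layers.Flatten()(x)                                   # step 5: flatten
    out = layers.Dense(1)(x)                                  # fully connected output
    return keras.Model(inp, out)
```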
In this paper, the mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination ($ R^2 $) are chosen to assess the prediction accuracy of the model. Their calculation methods are shown in Eqs (2.14)–(2.17).
$ \mathrm{MAE} = \dfrac{1}{N}\sum\limits_{k=1}^{N}\left|\hat{y}_k - y_k\right| $ (2.14)

$ \mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum\limits_{k=1}^{N}\left(y_k - \hat{y}_k\right)^2} $ (2.15)

$ \mathrm{MAPE} = \dfrac{1}{N}\sum\limits_{k=1}^{N}\left|\dfrac{\hat{y}_k - y_k}{y_k}\right| \times 100\% $ (2.16)

$ R^2 = 1 - \dfrac{\sum_{k=1}^{N}\left(y_k - \hat{y}_k\right)^2}{\sum_{k=1}^{N}\left(y_k - \bar{y}\right)^2} $ (2.17)
where $ N $ is the number of samples in the test set, $ y_k $ is the actual value, $ \hat{y}_k $ is the predicted value, and $ \bar{y} $ is the mean of the actual values. Smaller MAE, RMSE and MAPE values indicate higher prediction accuracy, while a higher $ R^2 $ value indicates a better fit of the model on the test set and more accurate predictions.
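These four metrics are straightforward to compute; a NumPy sketch follows, with `y_true` and `y_pred` as the actual and predicted DO series (our own names).

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Compute the four metrics of Eqs (2.14)-(2.17)."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))                                         # Eq (2.14)
    rmse = np.sqrt(np.mean(err ** 2))                                  # Eq (2.15)
    mape = np.mean(np.abs(err / y_true)) * 100                         # Eq (2.16), in %
    r2 = 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)  # Eq (2.17)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}
```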
In the daily production and operation of aquaculture enterprises, power outages, poor network signals, water quality sensor failures and other factors easily lead to missing and abnormal values in the collected raw water environment data. This paper uses the Lagrange interpolation method to fill in the missing and abnormal data, calculated as in Eq (3.1). Table 2 summarizes the statistics of the DO concentrations monitored at the aquaculture farms described above.
| Cases | Datasets | Numbers | Mean | Std. | Max. | Min. | Kurtosis | Skewness |
|---|---|---|---|---|---|---|---|---|
| Dataset 1 | All Samples | 20851 | 8.09 | 1.20 | 10.25 | 5.02 | -0.64 | -0.62 |
| | Training Set | 16681 | 8.57 | 0.77 | 10.25 | 6.37 | -0.37 | -0.47 |
| | Validation Set | 2085 | 6.54 | 0.25 | 7.09 | 5.84 | -0.53 | -0.04 |
| | Testing Set | 2085 | 5.81 | 0.35 | 6.44 | 5.02 | -0.88 | -0.21 |
| Dataset 2 | All Samples | 21673 | 7.28 | 1.35 | 10.32 | 4.12 | -0.99 | 0.46 |
| | Training Set | 17339 | 6.75 | 0.94 | 9.08 | 4.12 | -0.69 | 0.48 |
| | Validation Set | 2167 | 8.99 | 0.24 | 9.71 | 8.49 | -0.03 | 0.97 |
| | Testing Set | 2167 | 9.70 | 0.22 | 10.32 | 9.32 | -1.47 | 0.19 |
At the same time, the different units of the various water environment parameters, as well as possible outliers in the original data, can negatively affect model training. For example, during gradient descent the gradient direction tends to deviate from the direction of the minimum, resulting in a long training time. Therefore, this paper normalizes the preprocessed data; Eq (3.2) describes the calculation.
$ L_n\left(x\right) = \sum\limits_{k=0}^{n} y_k \cdot \dfrac{\left(x - x_0\right)\left(x - x_1\right)\cdots\left(x - x_{k-1}\right)\left(x - x_{k+1}\right)\cdots\left(x - x_n\right)}{\left(x_k - x_0\right)\left(x_k - x_1\right)\cdots\left(x_k - x_{k-1}\right)\left(x_k - x_{k+1}\right)\cdots\left(x_k - x_n\right)} $ (3.1)

$ z' = \dfrac{z - \min\left(z\right)}{\max\left(z\right) - \min\left(z\right)} $ (3.2)
where $ x_k \; \left(k = 0, 1, \cdots, n\right) $ are the interpolation nodes, $ y_k $ is the value of the dependent variable at node $ x_k $, $ x $ is the point being interpolated, and $ L_n\left(x\right) $ is the interpolation result. $ z $ represents a water environment parameter in the original data, $ \min\left(z\right) $ and $ \max\left(z\right) $ represent the minimum and maximum values of the same feature, and $ z' $ represents the normalized result.
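Both preprocessing steps are easy to reproduce. The sketch below evaluates the Lagrange polynomial of Eq (3.1) at a missing point and applies the min-max scaling of Eq (3.2); how many neighbouring nodes to use around a gap is our assumption, not specified in the paper.

```python
import numpy as np

def lagrange_fill(x_nodes, y_nodes, x):
    """Evaluate the Lagrange polynomial through (x_nodes, y_nodes) at x,
    as in Eq (3.1), to fill a missing or abnormal reading."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(x_nodes, y_nodes)):
        basis = np.prod([(x - xj) / (xk - xj)
                         for j, xj in enumerate(x_nodes) if j != k])
        total += yk * basis
    return total

def min_max(z):
    """Scale each column of z to [0, 1], as in Eq (3.2)."""
    z = np.asarray(z, dtype=float)
    return (z - z.min(axis=0)) / (z.max(axis=0) - z.min(axis=0))

# Illustrative fill: DO readings at timestamps 0, 1, 3, 4; timestamp 2 missing.
do_filled = lagrange_fill([0, 1, 3, 4], [6.2, 6.4, 6.1, 5.9], x=2)
```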
In the experiments of this paper, several classical prediction models, namely SVR, RNN, LSTM, CNN-LSTM and CNN-BiLSTM, were trained and used for prediction to compare against and evaluate the prediction performance of the CNN-BiLSTM-AM model.
Meanwhile, to ensure the fairness of the experiments, all models use the same training, validation and test sets, and hyperparameters shared between models take the same values. The MSE is used as the loss function for model training, and the model weights are optimized with the adaptive moment estimation (Adam) method. The learning rate is 0.001 and the models are trained for 100 epochs. All models were implemented in a Python programming environment on the Keras framework.
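Under these settings, the shared training configuration might be sketched as follows, assuming the model builder and the windowed arrays from the earlier sketches; this is an illustration of the stated configuration, not the authors' script.

```python
from tensorflow import keras

# Assumes build_cnn_bilstm_am and the (X_train, y_train), (X_val, y_val)
# splits from the earlier sketches in this paper.
model = build_cnn_bilstm_am(n_steps=6, n_features=4)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")  # MSE loss, Adam optimizer, lr = 0.001
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          batch_size=64, epochs=100, verbose=2)  # batch 64, 100 epochs
```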
To visualize the prediction results, Figure 6 shows the fitting curves of the comparison models and the proposed CNN-BiLSTM-AM model on the test set. The four subplots (a)–(d) represent 3-step, 6-step, 36-step and 72-step prediction, respectively. As Figure 6 shows, the dissolved oxygen concentration in the water environment of this aquaculture farm varies cyclically, fluctuating within a particular range. Although all models predict the overall trend of DO correctly, their accuracy varies considerably at the peaks and valleys of different time steps. The prediction curves of the SVR model deviate farthest from the actual values at the peaks and valleys, while the deep learning and combined models stay closer to the actual values.
Table 3 summarizes the short-term prediction results of the proposed CNN-BiLSTM-AM model for dissolved oxygen concentration in an aquaculture water environment, namely three steps ahead (30 minutes) and six steps ahead (1 hour). For 3-step (30-minute) prediction, the MAE, RMSE and MAPE of the deep learning RNN model are reduced by 10.12, 4.34 and 13.19%, respectively, compared with the machine learning SVR model. The three metrics of the LSTM are reduced by a further 13.85, 9.59 and 12.66%, respectively, compared with the RNN, verifying that the LSTM alleviates the long-term dependence problem of the RNN through its gating mechanism and improves prediction accuracy. None of the three single models reaches the prediction accuracy of the combined models at the 30-minute horizon, which shows that combined models can exploit different network structures to improve short-term prediction.
| Methods | 3-step MAE | 3-step RMSE | 3-step MAPE | 3-step $R^2$ | 6-step MAE | 6-step RMSE | 6-step MAPE | 6-step $R^2$ |
|---|---|---|---|---|---|---|---|---|
| SVR | 0.0514 | 0.0599 | 0.0091 | 0.9711 | 0.1238 | 0.1333 | 0.0219 | 0.8565 |
| RNN | 0.0462 | 0.0573 | 0.0079 | 0.9736 | 0.0555 | 0.0638 | 0.0096 | 0.9671 |
| LSTM | 0.0398 | 0.0518 | 0.0069 | 0.9799 | 0.0552 | 0.0627 | 0.0094 | 0.9683 |
| CNN-LSTM | 0.0309 | 0.0394 | 0.0053 | 0.9875 | 0.0452 | 0.0575 | 0.0078 | 0.9733 |
| CNN-BiLSTM | 0.0231 | 0.0342 | 0.0041 | 0.9907 | 0.0303 | 0.0429 | 0.0053 | 0.9851 |
| Proposed | 0.0135 | 0.0187 | 0.0023 | 0.9971 | 0.0202 | 0.0293 | 0.0034 | 0.9931 |
To verify whether the model's prediction performance improves after adding the AM, the CNN-BiLSTM comparison model is introduced. The experimental results show that for 30-minute short-term prediction, the MAE, RMSE and MAPE of the CNN-BiLSTM-AM model improve by 41.56, 45.32 and 43.9%, respectively, after adding the AM. This demonstrates that focusing the model's attention on the time steps improves short-term prediction. In the 6-step (1-hour) results, the SVR is 11.43% lower in the $ R^2 $ metric than the deep learning LSTM, and the proposed CNN-BiLSTM-AM model improves by 33.33, 31.7 and 35.85% over the best-performing combined network, CNN-BiLSTM.
To verify whether the proposed CNN-BiLSTM-AM model retains its excellent performance in long-term prediction, we predict DO concentration 36 steps (6 hours) and 72 steps (12 hours) ahead. According to Table 4, the prediction accuracy of the SVR model decreases sharply as the prediction horizon grows: its $ R^2 $ drops by 21.97%, from 97.11% for 3-step prediction to 75.77% for 72-step prediction. The RNN and LSTM follow the same trend.
| Methods | 36-step MAE | 36-step RMSE | 36-step MAPE | 36-step $R^2$ | 72-step MAE | 72-step RMSE | 72-step MAPE | 72-step $R^2$ |
|---|---|---|---|---|---|---|---|---|
| SVR | 0.1352 | 0.1490 | 0.0240 | 0.8142 | 0.1365 | 0.1713 | 0.0247 | 0.7577 |
| RNN | 0.0794 | 0.0923 | 0.0135 | 0.9287 | 0.0881 | 0.1063 | 0.0154 | 0.9067 |
| LSTM | 0.0633 | 0.0765 | 0.0109 | 0.9513 | 0.0675 | 0.0797 | 0.0115 | 0.9476 |
| CNN-LSTM | 0.0448 | 0.0599 | 0.0078 | 0.9756 | 0.0479 | 0.0601 | 0.0082 | 0.9702 |
| CNN-BiLSTM | 0.0347 | 0.0451 | 0.0062 | 0.9835 | 0.0452 | 0.0584 | 0.0079 | 0.9719 |
| Proposed | 0.0156 | 0.0219 | 0.0027 | 0.9968 | 0.0283 | 0.0381 | 0.0049 | 0.9883 |
The 36-step predictions of the RNN and LSTM decrease by 3.97 and 1.75% in $ R^2 $, respectively, compared with their 6-step short-term predictions of DO concentration. Although long-term accuracy is reduced, the LSTM is more stable than the RNN, again verifying that its unique gating mechanism alleviates the long-term dependence problem. The combined neural network models all outperform the single models in long-term prediction. Compared with the LSTM, CNN-BiLSTM improves by 45.18, 41.06 and 43.12% in 36-step prediction and by 33.04, 26.73 and 31.3% in 72-step prediction.
Notably, the CNN-BiLSTM-AM model used in this paper outperforms CNN-BiLSTM without the attention mechanism by 55.04, 51.44 and 56.45% in 36-step prediction, and by 37.39, 34.76 and 37.97% in 72-step prediction. These results indicate that the CNN-BiLSTM-AM model has more robust prediction performance over longer prediction horizons and higher stability than comparable models.
Building on the above, this paper explores the generalization ability of the CNN-BiLSTM-AM model on a different dataset: the water environment data monitored at another farm in Yantai. As Figure 8 shows, the periodicity of this dataset is less apparent, and the overall dissolved oxygen content is higher than in the first dataset.
Tables 5 and 6 summarize the short-term and long-term performance metrics of the different models on this dataset. The CNN-BiLSTM model with the added attention mechanism clearly performs best on all metrics. In terms of MAE, it improves on the CNN-BiLSTM model without attention by 25.29, 26.10, 33.43 and 21.59% across the four prediction horizons, and on the RNN model by 30.76, 48.85, 49.31 and 44.24%. The trend across prediction horizons shows that adding the attention mechanism to the CNN-BiLSTM model effectively enhances the model's long-term prediction ability and yields higher prediction accuracy than the benchmark models.
| Methods | 3-step MAE | 3-step RMSE | 3-step MAPE | 3-step $R^2$ | 6-step MAE | 6-step RMSE | 6-step MAPE | 6-step $R^2$ |
|---|---|---|---|---|---|---|---|---|
| SVR | 0.0431 | 0.0514 | 0.0043 | 0.9473 | 0.0518 | 0.0607 | 0.0053 | 0.9264 |
| RNN | 0.0273 | 0.0368 | 0.0028 | 0.9729 | 0.0393 | 0.0502 | 0.0040 | 0.9497 |
| LSTM | 0.0208 | 0.0321 | 0.0021 | 0.9793 | 0.0334 | 0.0452 | 0.0034 | 0.9592 |
| CNN-LSTM | 0.0257 | 0.0323 | 0.0027 | 0.9791 | 0.0329 | 0.0413 | 0.0034 | 0.9659 |
| CNN-BiLSTM | 0.0253 | 0.0314 | 0.0026 | 0.9803 | 0.0272 | 0.0352 | 0.0028 | 0.9753 |
| Proposed | 0.0189 | 0.0257 | 0.0019 | 0.9869 | 0.0201 | 0.0266 | 0.0021 | 0.9858 |
| Methods | 36-step MAE | 36-step RMSE | 36-step MAPE | 36-step $R^2$ | 72-step MAE | 72-step RMSE | 72-step MAPE | 72-step $R^2$ |
|---|---|---|---|---|---|---|---|---|
| SVR | 0.0849 | 0.0971 | 0.0086 | 0.8126 | 0.1013 | 0.0118 | 0.0104 | 0.7519 |
| RNN | 0.0436 | 0.0555 | 0.0045 | 0.9395 | 0.0495 | 0.0642 | 0.0051 | 0.9184 |
| LSTM | 0.0409 | 0.0499 | 0.0042 | 0.9505 | 0.0420 | 0.0532 | 0.0043 | 0.9443 |
| CNN-LSTM | 0.0394 | 0.0494 | 0.0041 | 0.9516 | 0.0422 | 0.0504 | 0.0044 | 0.9497 |
| CNN-BiLSTM | 0.0332 | 0.0402 | 0.0034 | 0.9683 | 0.0352 | 0.0446 | 0.0036 | 0.9606 |
| Proposed | 0.0221 | 0.0298 | 0.0023 | 0.9824 | 0.0276 | 0.0372 | 0.0028 | 0.9726 |
Figures 7 and 9 combine bar and line graphs to visualize the prediction accuracy of the different models on the two datasets, respectively. Subplots (a)–(d) represent 3-step, 6-step, 36-step and 72-step prediction, respectively. In the legend, MAE and RMSE correspond to the orange and green bars, whose values are read on the left y-axis; MAPE corresponds to the purple markers, whose values are read on the right y-axis. The red line represents the coefficient of determination, where higher values indicate a better fit.
In summary, for both short-term prediction (30 minutes, 1 hour) and long-term prediction (6 hours, 12 hours), the CNN-BiLSTM-AM model proposed in this paper delivers excellent prediction performance, with better stability in long-term prediction.
This paper proposed a multi-step prediction model of dissolved oxygen concentration that combines an attention mechanism with a combined neural network to improve the prediction accuracy of dissolved oxygen concentration in an aquaculture water environment. The model incorporates an attention mechanism into the BiLSTM network to focus the model's attention on the prediction time steps, which enhances its long-term prediction ability. Comparisons with the SVR, RNN, LSTM, CNN-LSTM and CNN-BiLSTM prediction models show that the CNN-BiLSTM-AM model established in this paper has higher prediction accuracy, and its superiority becomes more evident as the time step increases.
This study used only four water quality parameters as model inputs. Future studies can consider on-farm weather factors and additional water quality factors for the model.
This work was supported by the Yantai Science and Technology Innovation Development Plan Project (Grant No. 2022XDRH015).
The authors declare that there is no conflict of interests regarding the publication of this paper.
Park JS, Shin E, Hong H, et al. (2015) Characterization of Lactobacillus fermentum PL9988 isolated from healthy elderly Korean in a longevity village. J Microbiol Biotechnol 25: 1510–1518. doi: 10.4014/jmb.1505.05015
![]() |
Table: Hyperparameter settings of the proposed model.

| Hyperparameter | Value |
| --- | --- |
| CNN filter sizes | [128, 64] |
| CNN kernel size | 1 |
| Padding | same |
| Activation function | ReLU |
| BiLSTM units | 128 |
| Optimizer | Adam |
| Learning rate | 0.001 |
| Batch size | 64 |
| Epochs | 100 |
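For concreteness, the settings above map onto a model definition roughly as follows. This is a minimal TensorFlow/Keras sketch, not the authors' exact implementation; the input window length `LOOKBACK` and the univariate input are assumptions, since neither is fixed by the table.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

LOOKBACK = 48    # assumed input window length (not given in the table)
N_FEATURES = 1   # assumed univariate DO input
HORIZON = 6      # prediction steps, e.g., 3/6/36/72 as in the experiments

model = models.Sequential([
    layers.Input(shape=(LOOKBACK, N_FEATURES)),
    # Two stacked Conv1D blocks with the listed filter sizes [128, 64],
    # kernel size 1, and "same" padding.
    layers.Conv1D(128, kernel_size=1, padding="same", activation="relu"),
    layers.Conv1D(64, kernel_size=1, padding="same", activation="relu"),
    # One BiLSTM layer with 128 units per direction.
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(HORIZON),  # one output per prediction step
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
# Training with the listed batch size and epoch count:
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           batch_size=64, epochs=100)
```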
Table: Descriptive statistics of the two dissolved-oxygen datasets.

| Case | Subset | Samples | Mean | Std. | Max. | Min. | Kurtosis | Skewness |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Dataset 1 | All samples | 20851 | 8.09 | 1.20 | 10.25 | 5.02 | -0.64 | -0.62 |
| Dataset 1 | Training set | 16681 | 8.57 | 0.77 | 10.25 | 6.37 | -0.37 | -0.47 |
| Dataset 1 | Validation set | 2085 | 6.54 | 0.25 | 7.09 | 5.84 | -0.53 | -0.04 |
| Dataset 1 | Testing set | 2085 | 5.81 | 0.35 | 6.44 | 5.02 | -0.88 | -0.21 |
| Dataset 2 | All samples | 21673 | 7.28 | 1.35 | 10.32 | 4.12 | -0.99 | 0.46 |
| Dataset 2 | Training set | 17339 | 6.75 | 0.94 | 9.08 | 4.12 | -0.69 | 0.48 |
| Dataset 2 | Validation set | 2167 | 8.99 | 0.24 | 9.71 | 8.49 | -0.03 | 0.97 |
| Dataset 2 | Testing set | 2167 | 9.70 | 0.22 | 10.32 | 9.32 | -1.47 | 0.19 |
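The subset sizes correspond to a chronological split of roughly 80/10/10 (e.g., 16681/2085/2085 of 20851 samples for Dataset 1). The sketch below reproduces that split and the reported indicators; the file name and column name are hypothetical, and the negative kurtosis values in the table indicate the excess (Fisher) convention.

```python
import numpy as np
import pandas as pd
from scipy import stats

def summarize(x: np.ndarray) -> dict:
    """Statistical indicators as reported in the table."""
    return {
        "Samples": len(x),
        "Mean": x.mean(),
        "Std.": x.std(ddof=1),
        "Max.": x.max(),
        "Min.": x.min(),
        "Kurtosis": stats.kurtosis(x),  # excess kurtosis (can be negative)
        "Skewness": stats.skew(x),
    }

do = pd.read_csv("dataset1.csv")["DO"].to_numpy()  # hypothetical file/column
n = len(do)
# Chronological split, approximately 80/10/10 as in the table.
train, val, test = np.split(do, [int(n * 0.8), int(n * 0.9)])
report = pd.DataFrame(
    [summarize(s) for s in (do, train, val, test)],
    index=["All samples", "Training set", "Validation set", "Testing set"],
)
print(report.round(2))
```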
Table: Prediction performance on Dataset 1 for the 3- and 6-step horizons.

| Method | MAE (3) | RMSE (3) | MAPE (3) | $R^2$ (3) | MAE (6) | RMSE (6) | MAPE (6) | $R^2$ (6) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SVR | 0.0514 | 0.0599 | 0.0091 | 0.9711 | 0.1238 | 0.1333 | 0.0219 | 0.8565 |
| RNN | 0.0462 | 0.0573 | 0.0079 | 0.9736 | 0.0555 | 0.0638 | 0.0096 | 0.9671 |
| LSTM | 0.0398 | 0.0518 | 0.0069 | 0.9799 | 0.0552 | 0.0627 | 0.0094 | 0.9683 |
| CNN-LSTM | 0.0309 | 0.0394 | 0.0053 | 0.9875 | 0.0452 | 0.0575 | 0.0078 | 0.9733 |
| CNN-BiLSTM | 0.0231 | 0.0342 | 0.0041 | 0.9907 | 0.0303 | 0.0429 | 0.0053 | 0.9851 |
| Proposed | 0.0135 | 0.0187 | 0.0023 | 0.9971 | 0.0202 | 0.0293 | 0.0034 | 0.9931 |
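The four indicators in these tables admit the following standard definitions. This is a minimal NumPy sketch under the (table-consistent) assumption that MAPE is reported as a fraction rather than a percentage; lower MAE, RMSE, and MAPE and higher $R^2$ indicate better predictions.

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MAE, RMSE, MAPE, and R^2 for a set of predictions."""
    err = y_true - y_pred
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "MAE":  np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAPE": np.mean(np.abs(err / y_true)),  # fraction, matching the ~0.002-0.02 values
        "R2":   1.0 - ss_res / ss_tot,
    }
```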
Table: Prediction performance on Dataset 1 for the 36- and 72-step horizons.

| Method | MAE (36) | RMSE (36) | MAPE (36) | $R^2$ (36) | MAE (72) | RMSE (72) | MAPE (72) | $R^2$ (72) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SVR | 0.1352 | 0.1490 | 0.0240 | 0.8142 | 0.1365 | 0.1713 | 0.0247 | 0.7577 |
| RNN | 0.0794 | 0.0923 | 0.0135 | 0.9287 | 0.0881 | 0.1063 | 0.0154 | 0.9067 |
| LSTM | 0.0633 | 0.0765 | 0.0109 | 0.9513 | 0.0675 | 0.0797 | 0.0115 | 0.9476 |
| CNN-LSTM | 0.0448 | 0.0599 | 0.0078 | 0.9756 | 0.0479 | 0.0601 | 0.0082 | 0.9702 |
| CNN-BiLSTM | 0.0347 | 0.0451 | 0.0062 | 0.9835 | 0.0452 | 0.0584 | 0.0079 | 0.9719 |
| Proposed | 0.0156 | 0.0219 | 0.0027 | 0.9968 | 0.0283 | 0.0381 | 0.0049 | 0.9883 |
Table: Prediction performance on Dataset 2 for the 3- and 6-step horizons.

| Method | MAE (3) | RMSE (3) | MAPE (3) | $R^2$ (3) | MAE (6) | RMSE (6) | MAPE (6) | $R^2$ (6) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SVR | 0.0431 | 0.0514 | 0.0043 | 0.9473 | 0.0518 | 0.0607 | 0.0053 | 0.9264 |
| RNN | 0.0273 | 0.0368 | 0.0028 | 0.9729 | 0.0393 | 0.0502 | 0.0040 | 0.9497 |
| LSTM | 0.0208 | 0.0321 | 0.0021 | 0.9793 | 0.0334 | 0.0452 | 0.0034 | 0.9592 |
| CNN-LSTM | 0.0257 | 0.0323 | 0.0027 | 0.9791 | 0.0329 | 0.0413 | 0.0034 | 0.9659 |
| CNN-BiLSTM | 0.0253 | 0.0314 | 0.0026 | 0.9803 | 0.0272 | 0.0352 | 0.0028 | 0.9753 |
| Proposed | 0.0189 | 0.0257 | 0.0019 | 0.9869 | 0.0201 | 0.0266 | 0.0021 | 0.9858 |
Table: Prediction performance on Dataset 2 for the 36- and 72-step horizons.

| Method | MAE (36) | RMSE (36) | MAPE (36) | $R^2$ (36) | MAE (72) | RMSE (72) | MAPE (72) | $R^2$ (72) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SVR | 0.0849 | 0.0971 | 0.0086 | 0.8126 | 0.1013 | 0.1180 | 0.0104 | 0.7519 |
| RNN | 0.0436 | 0.0555 | 0.0045 | 0.9395 | 0.0495 | 0.0642 | 0.0051 | 0.9184 |
| LSTM | 0.0409 | 0.0499 | 0.0042 | 0.9505 | 0.0420 | 0.0532 | 0.0043 | 0.9443 |
| CNN-LSTM | 0.0394 | 0.0494 | 0.0041 | 0.9516 | 0.0422 | 0.0504 | 0.0044 | 0.9497 |
| CNN-BiLSTM | 0.0332 | 0.0402 | 0.0034 | 0.9683 | 0.0352 | 0.0446 | 0.0036 | 0.9606 |
| Proposed | 0.0221 | 0.0298 | 0.0023 | 0.9824 | 0.0276 | 0.0372 | 0.0028 | 0.9726 |
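All four horizon settings (3, 6, 36, and 72 steps) reduce to the same sliding-window framing of the DO series: each training pair holds a fixed-length history window as input and the next `horizon` readings as the target. A sketch of that construction follows, with the window length `lookback` again an assumed parameter rather than a value from the paper.

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int, horizon: int):
    """Turn a 1-D series into supervised (X, y) pairs: X holds `lookback`
    past points, y the next `horizon` points to predict."""
    n = len(series) - lookback - horizon + 1
    X = np.stack([series[i:i + lookback] for i in range(n)])
    y = np.stack([series[i + lookback:i + lookback + horizon] for i in range(n)])
    return X[..., None], y  # trailing axis = single input feature for Conv1D

# e.g., 72-step-ahead targets from the training portion of a dataset:
# X_train, y_train = make_windows(train, lookback=48, horizon=72)
```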