
A typhoon is a kind of tropical cyclone. When the sustained wind speed near the center of a tropical cyclone reaches category 12 to 13 (32.7–41.4 meters per second), it is called a typhoon [1]. Typhoons usually form on the sea surface in tropical areas 3–5 degrees of latitude away from the equator, such as the North and South Pacific, the North Atlantic, and the Indian Ocean. Their movement is mainly governed by large-scale weather systems; a typhoon eventually dissipates at sea, transitions into an extratropical cyclone, or dissipates after making landfall. A typhoon is sudden and destructive, bringing strong winds and rainstorms wherever it goes. The storm surge disasters caused by typhoons are among the most serious natural disasters in the world. Sun et al. [2] found that typhoons will become stronger, larger, and more destructive in the context of global warming. Kossin [3] reported that the translation speeds of tropical cyclones decreased globally by 10%, and by 30% in Asia, during 1949–2016. The slower a cyclone moves, the more rain falls at a given location and the higher the risk of flooding and other disasters, so slower translation speeds lead to more serious damage. In order to reduce the economic losses and casualties caused by typhoons, it is urgent to establish accurate and timely prediction methods.
The motion of a typhoon is highly complex. Unlike many time series problems, typhoon track prediction cannot be solved by exploiting periodicity [4]. Meteorological departments mainly use numerical simulation for typhoon track prediction, which requires large amounts of typhoon observational data. To build the complex atmospheric equations that simulate the motion and variation of the atmosphere, researchers need a thorough understanding of its internal structural changes, key physical processes, and mutation mechanisms [5]. South Korea, for example, runs its numerical simulations on a Cray XC40 supercomputer with 139,329 CPUs. Because of this difficulty, numerical simulation requires a great deal of computation time, and even then it yields only a numerical solution, not an analytical one. Expensive hardware also implies high maintenance costs.
A deep learning method can run on a single graphics processing unit (GPU): prediction results can be generated within seconds by updating the input data of the prediction model. Compared with simulations using hundreds of thousands of CPUs, this approach is clearly quicker and cheaper. There have been many investigations into deep learning methods for typhoon track prediction. Song et al. [6] used the spatial location information and meteorological factors of typhoons to train a neural network built from convolutional neural networks (CNNs) and gated recurrent units (GRUs), and reported that the proposed model outperforms traditional typhoon track prediction methods. Song et al. [7] used a BiGRU with an attention mechanism to predict typhoon tracks. Gao et al. [8] trained an LSTM with typhoon track data. Besides describing the relevant physical quantities of the typhoon process, satellite images can also effectively reflect its spatial information.
Satellite images capture the distribution of clouds, and the changes between successive images reflect the evolution of weather systems, enabling more accurate weather forecasts. Lee et al. [9] were probably the first to consider using satellite images of tropical cyclones for typhoon track prediction with the help of neural networks. However, the images themselves did not serve as input data for the LSTM; instead, they extracted information such as the Dvorak number, maximum wind speed, and spatial location of the cyclone from the satellite images and fed these into the network for time series prediction. Kovordanyi et al. [10] used satellite images as neural network input data for the first time, predicting the shape of cloud clusters and the direction of movement of the typhoon. Rüttgers et al. [11] marked the typhoon center on the satellite images and used generative adversarial networks (GANs) [12] to generate satellite images predicting the shape of the cloud cluster and the typhoon track.
The prediction of typhoon tracks is affected by many factors, such as the typhoon model structure, terrain, initial field, and boundary conditions of the model [13]. For example, the movement of a typhoon is governed by the steering flow, which is determined by the environment around the storm; when a typhoon is affected by multiple weather systems at the same time, the steering flow is difficult to predict [14]. A single neural network therefore cannot accurately predict the typhoon track, and a fusion of multiple neural networks is better suited to the task [7]. Xu et al. [15] modeled the spatiotemporal relationship by attaching a trained generator to the output of an LSTM to generate cloud images. Hong et al. [16] applied a convolutional LSTM network (ConvLSTM) to typhoon track prediction.
At present, research combining deep learning with satellite images mostly focuses on typhoon eye tracking [17] or typhoon intensity prediction [18], and less on typhoon track prediction. We believe it is necessary to extract the temporal and spatial features of time series satellite images for typhoon track prediction. Therefore, we propose a new typhoon track prediction model based on a CNN and LSTM: DeepTyphoon. The specific contributions of this paper are as follows: We mark the satellite images obtained from the Japan Meteorological Agency to create the data set. According to the characteristics of the images in the data set, we establish a prediction network. The features of the clouds and marked squares in the image are extracted by dilated convolution. In addition, a convolutional block attention module (CBAM) is used to enhance the feature extraction ability of the network along two dimensions: channel and spatial.
The attention module can focus on the local information of the network and suppress invalid information, so as to enhance the expression of effective features. The attention mechanism was initially applied to natural language processing and achieved good results; however, there are few studies applying attention mechanisms to time series prediction in meteorology. The convolutional block attention module is a convolutional attention module proposed by Woo et al. in 2018 [19]. The convolution operation extracts informative features by mixing cross-channel and spatial information. Accordingly, they proposed an attention mechanism integrating channel attention and spatial attention, applying a channel attention module and a spatial attention module along the two respective dimensions. The principle of CBAM is shown in Figure 1.
In this paper, CBAM is applied to the encoder of the neural network to enhance the feature extraction ability of the convolution network, and extracts the features of marked squares and clouds from the two dimensions.
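The channel-then-spatial gating that CBAM performs can be sketched in PyTorch as follows. This is a minimal re-implementation for illustration only; the reduction ratio and kernel size are common defaults, not values taken from this paper's configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze spatial dims with average- and max-pooling, pass both
    # through a shared MLP, sum, and gate the channels with a sigmoid.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    # Pool over channels, convolve the 2-channel map, and gate each
    # spatial location with a sigmoid.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    # CBAM applies channel attention first, then spatial attention.
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```

The module is shape-preserving, so it can be dropped after any convolution layer of the encoder without changing the surrounding architecture.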
The long short-term memory (LSTM) network is a special recurrent neural network. It was first proposed by Hochreiter and Schmidhuber [20] and later improved by Graves [21], and it addresses the problems of vanishing and exploding gradients that arise when training on long sequences. Like RNNs, LSTMs can connect previous information to the current task, such as using previous video frames to understand the current frame. Compared with RNNs, LSTMs perform better at learning long-term dependencies.
The operation of the LSTM is realized mainly by the input gate $i_t$, the forget gate $f_t$, and the output gate $o_t$. First, the LSTM uses the forget gate $f_t$ to decide which information in $c_{t-1}$ to discard; the activation of the forget gate is determined by the activation function $\sigma(\cdot)$:

$f_t=\sigma(W_f\cdot[h_{t-1},x_t]+b_f)$  (1)

where $W_f$ is the weight matrix of the forget gate, $b_f$ is its bias term, $f_t$ represents the forget gate at time $t$, and $x_t$ is the network input at time $t$. Second, the LSTM uses the input gate $i_t$ to determine the information stored in the new memory cell $c_t$:

$i_t=\sigma(W_i\cdot[h_{t-1},x_t]+b_i)$  (2)

$\tilde{c}_t=\tanh(W_c\cdot[h_{t-1},x_t]+b_c)$  (3)

where $W_i$ is the weight matrix of the input gate, $W_c$ is the weight matrix of the cell state, $b_i$ is the bias term of the input gate, $b_c$ is the bias term of the cell state, $i_t$ is the input gate at time $t$, $\tanh$ is the hyperbolic tangent function, and $\tilde{c}_t$ is the candidate value to be added to the new cell state. The LSTM then updates the previous cell state $c_{t-1}$ to the new state $c_t$:

$c_t=f_t\odot c_{t-1}+i_t\odot\tilde{c}_t$  (4)

where $c_t$ is the updated cell state. Finally, the output gate $o_t$ is used to compute the hidden state $h_t$:

$o_t=\sigma(W_o\cdot[h_{t-1},x_t]+b_o)$  (5)

$h_t=o_t\odot\tanh(c_t)$  (6)

where $W_o$ is the weight matrix of the output gate, $b_o$ is its bias term, $\sigma$ is the sigmoid function, and $\odot$ denotes element-wise multiplication.
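Equations (1)–(6) can be traced step by step in a minimal NumPy sketch; the parameter shapes and dictionary layout here are illustrative conventions, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following Eqs (1)-(6).

    W and b hold parameters for the forget (f), input (i),
    candidate (c), and output (o) transforms; each weight matrix
    acts on the concatenated vector [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # Eq (1): forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])       # Eq (2): input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])   # Eq (3): candidate state
    c_t = f_t * c_prev + i_t * c_tilde       # Eq (4): cell-state update
    o_t = sigmoid(W["o"] @ z + b["o"])       # Eq (5): output gate
    h_t = o_t * np.tanh(c_t)                 # Eq (6): hidden state
    return h_t, c_t
```

Because $o_t \in (0,1)$ and $|\tanh(c_t)| < 1$, every component of the returned hidden state is bounded in magnitude by 1.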
The dilated residual network [22] was proposed to enlarge the receptive field of convolutions while maintaining the width and height of input features by setting dilation rates. However, dilation leads to the "gridding issue": as the dilation rate increases, the input sampling becomes very sparse, which is detrimental to feature learning. Therefore, Wang et al. [23] proposed hybrid dilated convolution (HDC), which alleviates the gridding issue by assigning a series of dilation rates to consecutive convolution layers and connecting them. In this paper, the size of the marked square on the satellite images is 3 × 3 pixels. As shown in Figure 2, a three-layer HDC layer with dilation rates of 1/2/3 ensures that the input feature of the square is not lost.
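One such HDC layer can be sketched in PyTorch as below. The channel count is an illustrative placeholder; for 3 × 3 kernels, setting the padding equal to the dilation rate keeps the spatial size unchanged, which is what preserves the 3 × 3-pixel square mark through the stack.

```python
import torch
import torch.nn as nn

def hdc_layer(channels):
    # One HDC layer: three 3x3 convolutions with dilation rates 1, 2, 3.
    # padding == dilation keeps height and width constant, so small
    # features (like the 3x3 square mark) are not lost to downsampling.
    blocks = []
    for d in (1, 2, 3):
        blocks += [
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
            nn.ReLU(inplace=True),
        ]
    return nn.Sequential(*blocks)
```

Stacking two such layers gives the six-convolution 1/2/3/1/2/3 configuration that performs best in the later experiments.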
The self-built typhoon image data set used in this paper is provided by the National Institute of Informatics (NII). The satellite image data are taken from the Himawari 1–8 series and GOES meteorological satellites. Of these, Himawari-8 achieves a 10-minute temporal resolution and 500 m spatial resolution. It has an improved imager observing five bands (one visible (0.64 μm) and four infrared (10.4, 12.4, 6.2 and 3.9 μm)). GOES is the geostationary operational satellite series of NOAA in the United States. It also has an improved imager observing five bands (one visible (0.55–0.75 μm) and four infrared (10.20–11.20, 11.50–12.50, 6.50–7.00 and 3.80–4.00 μm)). Satellite images from the past 40 years are selected as data samples; all are infrared images (10.4 μm) with a size of 128 × 128 pixels, representing an actual range of 2600 × 2600 km. The middle of each image is the typhoon center, and NII provides the longitude and latitude of the typhoon position corresponding to each image.
Because typhoon life cycles differ, typhoon processes lasting less than 60 hours are screened out during data cleaning to ensure that the neural network obtains sufficient information during training. We selected satellite images located in the Northwest Pacific and Southwest Pacific to demonstrate the effectiveness of the model. For the Northwest Pacific, 513 typhoon processes from 1980 to 2020, i.e., 513 typhoon sequences with a total of 13,400 satellite images, are used to construct the training, validation, and test sets of 338, 103 and 72 typhoon sequences, corresponding to 9110, 2680 and 1610 satellite images, respectively. For the Southwest Pacific, 198 typhoon processes from 2000 to 2020, totaling 8420 satellite images, are used to construct training, validation, and test sets of 108, 60 and 30 typhoon sequences, corresponding to 5470, 2110 and 840 satellite images, respectively.
The original infrared image is a single-channel gray image, and the digital numbers of the satellite images range from 0 to 255. A typhoon is sequential from start to finish. Moreover, the same typhoon takes different forms at different lifecycle stages, and the spiral cloud system shown on the satellite cloud map differs accordingly [24]. Figure 3 shows some images from the lifecycle of typhoon Damrey, which lasted 11 days and 18 hours.
As described in Section 2.2.1, the typhoon center in each satellite image provided by NII is located in the middle of the image. However, this is an absolute position; we need the relative positional information between satellite image sequences to judge the movement of clouds. In order to determine the direction of movement and distance of the typhoon from the satellite images, for each satellite image we mark the typhoon position at the previous time (for example, six hours ago) on the satellite image at the current time. The marking method is as follows. The spatial coordinate of the typhoon at the current time is (φ, λ). The horizontal offset Dφ and vertical offset Dλ (in km) between the coordinates of the previous time and those of the current time are calculated according to the haversine formula [25], with the earth radius R taken from the real coordinate system:
$D_\varphi = 2R\arcsin\sqrt{\sin^2\left(\frac{\varphi'-\varphi}{2}\right)}$  (7)

$D_\lambda = 2R\arcsin\sqrt{\cos\varphi\,\cos\varphi'\,\sin^2\left(\frac{\lambda'-\lambda}{2}\right)}$  (8)
Then, according to Dφ and Dλ, we calculate the horizontal and vertical offset Pφ and Pλ at the pixel level on the image using the reference system. This determines the position of the typhoon at the previous time on the satellite image at the current time.
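The offset computation of Eqs (7)–(8) and the conversion to pixels can be sketched as follows. The km-per-pixel scale assumes the stated 2600 km / 128 px image extent, and the fixed mean earth radius is a simplification of the paper's geographic reference system.

```python
import math

R = 6371.0  # mean earth radius in km (simplifying assumption)

def pixel_offset(lat, lon, lat_prev, lon_prev, km_per_pixel=2600 / 128):
    """Offsets (Eqs (7)-(8)) between the previous and current typhoon
    positions, converted to image pixels."""
    phi, phi_p = math.radians(lat), math.radians(lat_prev)
    lam, lam_p = math.radians(lon), math.radians(lon_prev)
    # Eq (7): latitude offset in km
    d_phi = 2 * R * math.asin(math.sqrt(math.sin((phi_p - phi) / 2) ** 2))
    # Eq (8): longitude offset in km
    d_lam = 2 * R * math.asin(math.sqrt(
        math.cos(phi) * math.cos(phi_p) * math.sin((lam_p - lam) / 2) ** 2))
    return d_phi / km_per_pixel, d_lam / km_per_pixel
```

With these pixel offsets, the previous-time position can be marked relative to the image center, which holds the current typhoon center.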
We use black squares to mark the satellite images (gray images) for the experiments. Compared with using RGB images (marked with red squares) as neural network input, gray images mean a shorter training time, which is convenient when tuning the network's parameters. In experimentation, however, we found that red square marks eliminate interference from the gray background, so the position of the typhoon can be determined more accurately. As shown in Figure 4: Figure 4(a) is the original satellite image, Figure 4(b) is the satellite image marked with a black square, and Figure 4(c) is the satellite image marked with a red square. In Figure 4(b), (c), the center of the image is the typhoon center at the current time, and the marked square represents the typhoon center at the previous time. The size of the marked square is 3 × 3 pixels. There are 13,400 single-channel gray images in the typhoon image data set with black square marks, and 13,400 three-channel RGB images in the data set with red square marks. The digital numbers of the images marked with squares (black or red) both range from 0 to 255. The size of the data set does not change as a result of marking. The Northwest Pacific data set comprises 9110 training, 2680 validation, and 1610 test images; the Southwest Pacific data set comprises 5470, 2110 and 840, respectively.
In the research of using deep learning to predict typhoon tracks, most researchers use reanalysis data as the input of neural network. For example, the work in [26] converts the reanalysis data into a density map and uses it as the input of a neural network to determine the spatial position of the typhoon according to the predicted density map. However, compared with the thermodynamic diagram transformed according to physical quantity data, satellite images contain more intuitive information. In order to consider the temporal and spatial characteristics of the typhoon process at the same time, we propose the DeepTyphoon prediction model for satellite image training and prediction. DeepTyphoon combines the ability of a CNN to extract advanced semantic features and the utility of LSTM for time series prediction. It takes the marked satellite image as the input, learns the relative position between the square mark and the cloud cluster and the direction of movement of the cloud cluster in the training process, and finally predicts the position of the typhoon according to the square mark on the predicted image.
The prediction process of DeepTyphoon is shown in Figure 5. The whole neural network model includes a convolution neural network (as encoder), an LSTM, and a deconvolution neural network (as decoder). Taking the satellite images marked with black squares as an example, the prediction process is as follows: First, the marked satellite images (denoted by m) are input into the encoder. We use the hybrid dilated convolution layer to extract the features of the satellite image m. At the same time, we add the channel attention module and the spatial attention module to further improve the feature extraction ability of the coding network. After each image of a satellite image sequence is input into the encoder, the encoder outputs the feature vector of m (denoted by t) as the input of LSTM. The output of the LSTM network t' is the predicted value of t. Finally, we deconvolute t' into an image m' in order to determine the typhoon center according to the prediction result t' of LSTM. Due to the black square mark in the input image m, the final output of the model, the predicted image m', will also have a predicted black square mark.
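The encoder-LSTM-decoder pipeline described above can be sketched compactly in PyTorch. Channel counts, the feature-vector size, and layer depths here are illustrative placeholders, not DeepTyphoon's actual configuration (which uses HDC layers and CBAM in the encoder).

```python
import torch
import torch.nn as nn

class EncoderDecoderLSTM(nn.Module):
    """Minimal sketch: the encoder compresses each marked image m into a
    feature vector t, the LSTM predicts the next feature vector t', and
    the decoder deconvolves t' back into a predicted image m'."""
    def __init__(self, feat=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat),
        )
        self.lstm = nn.LSTM(feat, feat, num_layers=2, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(feat, 32 * 16 * 16), nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, seq):                 # seq: (batch, n, 1, H, W)
        b, n = seq.shape[:2]
        t = self.encoder(seq.flatten(0, 1)).view(b, n, -1)
        out, _ = self.lstm(t)               # encode the whole sequence
        t_next = out[:, -1]                 # predicted feature vector t'
        return self.decoder(t_next)         # predicted image m'
```

Given a sequence of n marked 128 × 128 images, the model emits one predicted 128 × 128 image carrying the predicted square mark.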
The black square in the predicted image is detected by applying a filter on the predicted image and a convolution with the size of the black square on the filtered image. The pixel with the highest value in the convolved image gives the location of the predicted typhoon center. After identifying the pixel coordinates of the predicted black square, they are converted back to coordinates of latitude (φ) and longitude (λ) through the geographic reference system.
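A simple stand-in for this detection step is shown below, using a darkness threshold followed by a sliding square-sum in place of the paper's exact filter; the threshold value is an assumption of this sketch.

```python
import numpy as np

def locate_square(pred_img, square=3, dark_thresh=30):
    """Locate the predicted black square in a gray image.

    Dark pixels are scored 1 (a simple stand-in for the filtering
    step); a square-sized window slides over the mask, and the centre
    of the strongest response is returned as (row, col)."""
    mask = (pred_img < dark_thresh).astype(float)
    h, w = mask.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(h - square + 1):
        for c in range(w - square + 1):
            score = mask[r:r + square, c:c + square].sum()
            if score > best:
                best, best_rc = score, (r, c)
    r, c = best_rc
    return r + square // 2, c + square // 2  # centre pixel of the square
```

The returned pixel coordinates can then be mapped back to latitude and longitude through the geographic reference system.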
The experimental environment of this paper is mainly based on the PyTorch framework and built on the Ubuntu operating system. The specific configuration is shown in Table 1 below. In order to fully verify the performance, the experiments in this paper are completed in the same experimental environment. The experimental hyperparameters are set as follows: The batch size is 100, the learning rate is 0.0001, and the number of epochs is 100. The Adam algorithm is used to optimize the network parameters, and the weight attenuation coefficient is set to 0.00002.
Name | Configuration |
Operating System | Ubuntu 18.04.5 |
Framework | PyTorch 1.8.0 |
Memory | 60 GB |
GPU | GeForce RTX 3080Ti |
CUDA | CUDA 10.1 |
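Under the stated hyperparameters, the optimizer setup might look as follows; the model object is a placeholder for DeepTyphoon's network, not the real architecture.

```python
import torch
import torch.nn as nn

# Training setup matching the stated hyperparameters: batch size 100,
# learning rate 1e-4, 100 epochs, Adam with weight decay 2e-5.
model = nn.Linear(10, 1)  # placeholder for the DeepTyphoon network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=2e-5)
BATCH_SIZE, EPOCHS = 100, 100
```

Adam's `weight_decay` parameter corresponds to the weight attenuation coefficient of 0.00002 given above.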
In order to test the influence of the number of convolution layers on the experimental results, we set different numbers of convolution layers for testing. First, we set the dilation rates of three consecutive convolution layers to 1/2/3, respectively, and treat the three convolution layers as one HDC layer. We use the satellite images marked with red squares as the model input and calculate the mean absolute error (MAE) of the predicted results when one, two, and three HDC layers are used.
As shown in Table 2, we achieved the minimum MAE when using two HDC layers. That is to say, when we use six convolution layers and set the dilation rates to 1/2/3/1/2/3 respectively, the prediction result of the neural network is the best.
Number of HDC layers | MAE (km) |
1 | 78.22 |
2 | 66.38 |
3 | 69.43 |
An important hyperparameter of the model is the number of input images per sequence. If this number (denoted by n) is too large, memory usage and training time increase. Moreover, since the lifecycle of a typhoon is not very long, too large a value of n makes most typhoon processes unusable as training data. Conversely, too small a value of n leads to insufficient input information and degrades prediction performance. We use the satellite images marked with red squares as input; the time interval between satellite images is 6 hours. After training and testing for the three cases n = 5, 10 and 20, the MAE after 6 hours is calculated.
As shown in Table 3, with n = 5 the evolution of the typhoon is apparently not fully captured. With n = 20, the results are worse than with n = 10 due to the reduced number of training sequences. For example, for typhoon Mitag with 29 sequences, n = 10 allows training to start from sequence 11, providing 19 training sequences, whereas n = 20 only allows training to start from sequence 21, i.e., 9 training sequences.
n | MAE (km) |
5 | 96.05 |
10 | 68.49 |
20 | 79.43 |
During the experiment, it is found that different convolution kernel sizes will affect the shape of the final generated square. In order to accurately predict the square in the predicted images, the convolution kernel size is tested. We select the satellite image marked with a black square as the input, which can save calculation time.
Figure 6 shows the prediction results corresponding to different convolution kernels. Figure 6(a) shows the prediction results when the convolution kernel size is 3. It is obvious from the prediction image that a 3 × 3 pixel black square is generated. Figure 6(b), (c) correspond to the shape of the predicted square in the generated image when the convolution kernel size is 2 and 4, respectively. Although it can be seen from the predicted image that the marks are generated, they are blurred to varying degrees and the central position cannot be accurately determined by convolution. Finally, the size of the convolution kernel is set to 3 so that the exact position of the square can be obtained.
For the typhoon satellite image at the later stage of the typhoon lifecycle, as shown in Figure 7(a), if the black square is used to mark the satellite image, the square in the predicted image almost coincides with the gray background of the image. It is impossible to accurately detect the mark position in the prediction image through convolution. Therefore, we choose the satellite image marked by the red square as the model input so that we can accurately detect the location of the marked pixels. The channel attention module is added after the convolution layer to enhance the weight of the pixel value of the image R channel (red). In the predicted image, only the pixel value on the R channel in the generated image is detected, and the position of the marked square can be obtained accurately. As shown in Figure 7(b), the red square in the figure is located and redrawn after convolution on the R channel.
The parameters of the neural network are finally determined through experiments, as shown in Table 4. Of these, the number of HDC layers is obtained from Table 2, which is six convolution layers with different dilation rates. In order to decode the prediction vector into an image, the number of deconvolution layers is the same as that of convolution layers. The kernel size is determined according to Figure 6. Based on previous experience, we set the number of hidden layers of LSTM to two.
Name | Parameter |
HDC layers | 2 |
Deconvolution layers | 6 |
LSTM layers | 2 |
Kernel | 3 × 3 |
In order to evaluate the performance of the prediction model, the root mean square error (RMSE) and the mean absolute error (MAE) are selected to measure the prediction accuracy of the model.
The absolute error (E) describes the distance between the predicted coordinates (φpred, λpred) and the real coordinates (φreal, λreal). It is calculated in kilometers (km) by applying the haversine formula, with the earth radius R taken from the real coordinate position:

$E = 2R\arcsin\sqrt{\sin^2\left(\frac{\varphi_{pred}-\varphi_{real}}{2}\right)+\cos\varphi_{pred}\,\cos\varphi_{real}\,\sin^2\left(\frac{\lambda_{pred}-\lambda_{real}}{2}\right)}$  (9)
The mean absolute error formula is as follows, in km, where $E_i$ represents the absolute error of the $i$-th prediction:

$\mathrm{MAE}=\frac{1}{m}\sum_{i=1}^{m}|E_i|$  (10)
The root mean square error formula is as follows, in km, where $E_i$ represents the absolute error of the $i$-th prediction:

$\mathrm{RMSE}=\sqrt{\frac{1}{m}\sum_{i=1}^{m}E_i^2}$  (11)
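Equations (9)–(11) translate directly into code. Using a fixed mean earth radius of 6371 km (a simplifying assumption; the paper takes R from the real coordinate position), this reproduces, for example, the 44.629 km error of Mitag sequence 11 in Table 5 to within rounding.

```python
import math

R = 6371.0  # mean earth radius in km (simplifying assumption)

def haversine_error(pred, real):
    """Absolute error E (Eq (9)) in km between predicted and real
    (lat, lon) coordinates, via the haversine formula."""
    phi_p, lam_p = map(math.radians, pred)
    phi_r, lam_r = map(math.radians, real)
    a = (math.sin((phi_p - phi_r) / 2) ** 2
         + math.cos(phi_p) * math.cos(phi_r)
         * math.sin((lam_p - lam_r) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def mae(errors):   # Eq (10): mean absolute error
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):  # Eq (11): root mean square error
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

For instance, `haversine_error((13.6, 126.8), (13.7, 127.2))` evaluates to roughly 44.6 km, matching the first row of Table 5.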
As an example of a predicted image, Figure 8 shows the prediction results of typhoon Gray sequence 11, in which Figure 8(a) is the ground truth and Figure 8(b) is a predicted image.
It is obvious that the predicted image does not have the same definition as the real image; in particular, the clouds are blurred compared with the ground truth. However, we focus on accurately predicting the position of the square in the predicted images, so as to accurately predict the spatial position of the typhoon rather than improve the quality of the generated images.
By calculating the relative position between the square and the center of the predicted image, the spatial position of the predicted typhoon is determined. We compare the position of the predicted typhoon with the position of the real typhoon, and finally we can obtain the prediction accuracy. We select an example in the Northwest Pacific and the Southwest Pacific to show the prediction results of DeepTyphoon. Each typhoon is predicted in chronological order, and the sequence is named by the date and time of the corresponding real image UTC. For example, "1993072718" represents 6 p.m. on July 27, 1993 (UTC).
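Parsing such a sequence name into a UTC timestamp is straightforward with the standard library:

```python
from datetime import datetime

def parse_sequence_name(name):
    """Parse a sequence name like '1993072718' (YYYYMMDDHH, UTC)
    into a datetime object."""
    return datetime.strptime(name, "%Y%m%d%H")
```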
Table 5 shows the coordinate prediction results and prediction errors of Typhoon Mitag. As can be seen from the table, this typhoon took 29 steps from occurrence to extinction; each time step is 6 hours, giving a total of 174 hours. Taking ten points as one input sequence for the prediction model, the predicted values of Mitag sequences 11 to 29 are obtained in order. After calculation, the average error of Typhoon Mitag from sequence 11 to sequence 29 is 55.037 km.
Sequence | Real | Predicted | E (km) |
11(2007112300) | 13.7, 127.2 | 13.6, 126.8 | 44.629 |
12(2007112306) | 13.7, 127.0 | 14, 127.1 | 35.062 |
13(2007112312) | 14.0, 126.7 | 14.5, 126.6 | 56.632 |
14(2007112318) | 14.1, 126.5 | 14.3, 126.1 | 48.516 |
15(2007112400) | 14.3, 126.2 | 14.8, 126.4 | 59.619 |
16(2007112406) | 14.5, 125.9 | 15, 126.2 | 64.278 |
17(2007112412) | 14.8, 125.5 | 15, 124.9 | 68.201 |
18(2007112418) | 15.1, 125.2 | 15.4, 125.5 | 46.353 |
19(2007112500) | 15.5, 124.5 | 16.1, 124.7 | 70.065 |
20(2007112506) | 16.4, 123.9 | 16.1, 124.1 | 39.606 |
21(2007112512) | 16.9, 122.9 | 16.8, 122.5 | 43.997 |
22(2007112518) | 17.3, 121.5 | 17, 122.1 | 71.951 |
23(2007112600) | 18.3, 121.0 | 18.7, 121.4 | 61.298 |
24(2007112606) | 19.0, 120.7 | 19.4, 120.5 | 49.187 |
25(2007112612) | 19.5, 120.7 | 19.2, 121.1 | 53.609 |
26(2007112618) | 20.0, 121.1 | 20.6, 121.4 | 73.688 |
27(2007112700) | 20.1, 121.5 | 20.4, 122.6 | 56.719 |
28(2007112706) | 20.5, 122.8 | 20.6, 123.6 | 23.619 |
29(2007112712) | 20.6, 124.0 | 20, 124.4 | 78.685 |
MAE (km) |  |  | 55.037 |
As shown in Figure 9, in order to display the forecast results more intuitively, they are marked in the map according to the real track (yellow) and predicted track (blue) of Typhoon Mitag.
Table 6 shows the coordinate prediction results and prediction errors of Typhoon Marcia. Typhoon Marcia took 22 steps from occurrence to extinction; each time step is 6 hours, giving a total of 132 hours. Taking ten points as one input sequence for the prediction model, the predicted values of Marcia sequences 11 to 22 are obtained in order.
Sequence | Real | Predicted | E (km) |
11(2015021909) | –20.5, 150.6 | –20.4, 150.3 | 33.175 |
12(2015021912) | –20.8, 150.5 | –20.6, 150.9 | 47.177 |
13(2015021915) | –21.1, 150.5 | –21.5, 150.8 | 54.261 |
14(2015021918) | –21.5, 150.5 | –21.5, 151.1 | 70.413 |
15(2015021921) | –22.1, 150.5 | –21.9, 150.9 | 46.853 |
16(2015022000) | –22.7, 150.5 | –22.5, 150.6 | 24.494 |
17(2015022003) | –23.1, 150.5 | –22.7, 150.2 | 54.061 |
18(2015022006) | –23.7, 150.6 | –23.3, 150.9 | 53.983 |
19(2015022009) | –24.2, 150.9 | –24.7, 150.2 | 90.065 |
20(2015022012) | –24.5, 151.3 | –24.9, 150.8 | 67.302 |
21(2015022015) | –24.9, 151.6 | –25.1, 151.0 | 64.426 |
22(2015022018) | –25.3, 151.8 | –25.4, 151.4 | 41.705 |
MAE (km) | | | 53.992 |
As shown in Figure 10, we also display the forecast results for Typhoon Marcia on the map according to Table 6. The real track is represented by yellow points, and the predicted track by blue points.
We tested two test sets, located in the Northwest Pacific and the Southwest Pacific. Table 7 shows the prediction results of DeepTyphoon on both test sets for a 6-hour prediction horizon. The mean absolute error of DeepTyphoon's predictions is less than 70 km on both test sets. In addition, we recorded the computation time of the DeepTyphoon prediction model: after 14 hours of training, it can process 1000 satellite images, i.e., 100 typhoon points, in one minute.
Data set | Test set size | RMSE (km) | MAE (km) | Computational time (s) |
Northwest Pacific | 1610 | 72.72 | 65.39 | 87.42 |
Southwest Pacific | 840 | 69.58 | 62.41 | 49.80 |
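Table 7 reports both metrics per test set. The paper does not spell out the definitions, but under the standard ones (a sketch, assuming per-point track errors in km), RMSE is the root of the mean squared error and MAE the mean absolute error; since RMSE weights large misses more heavily, RMSE ≥ MAE, as seen in both rows:

```python
import math

def rmse(errors):
    """Root-mean-square error over per-point track errors (km)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def mae(errors):
    """Mean absolute error over per-point track errors (km)."""
    return sum(abs(e) for e in errors) / len(errors)

# Toy errors (km), not the actual test-set errors
sample = [40.0, 60.0, 80.0]
print(rmse(sample), mae(sample))  # RMSE ≈ 62.18 exceeds MAE = 60.0
```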
Table 8 shows the prediction results for RNN, GAN, AE-GRU, TrajGRU, and DeepTyphoon. The performance indicators include RMSE and MAE.
Model | Predicted Time Range | RMSE (km) | MAE (km) |
RNN [27] | 12 h | 164.22 | – |
AE-GRU [28] | 12 h | 138.67 | – |
TrajGRU [29] | 6 h | 66.6 | – |
DeepTyphoon | 6 h | 73.96 | 64.17 |
DeepTyphoon | 12 h | 134.70 | 129.82 |
As shown in Table 8, the evaluation indexes for DeepTyphoon are the best at both 6 and 12 hours. The comparison shows that DeepTyphoon can more accurately predict the location of a typhoon from satellite images.
To further verify the validity of the proposed model, we calculated the MAEs in latitude and longitude and compared them with SOTA models. As shown in Table 9, our model achieves the best prediction results at both 6 and 12 hours, which verifies the validity of the proposed model.
Model | Latitude (6 h) | Latitude (12 h) | Longitude (6 h) | Longitude (12 h) |
DeepFR [30] | 0.9100 | 1.1765 | 0.8910 | 1.2430 |
Trj-DMFMG [31] | 0.6602 | 0.9275 | 0.8747 | 1.1329 |
DeepTyphoon | 0.4876 | 0.8812 | 0.6645 | 1.0874 |
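The coordinate-wise MAE of Table 9 can be sketched as follows. The helper `coord_mae` is our own name, and the three Mitag points from Table 5 serve only as toy inputs, not the actual Table 9 test set:

```python
def coord_mae(real, pred):
    """MAE in degrees, computed separately for latitude and longitude."""
    lat_err = sum(abs(r[0] - p[0]) for r, p in zip(real, pred)) / len(real)
    lon_err = sum(abs(r[1] - p[1]) for r, p in zip(real, pred)) / len(real)
    return lat_err, lon_err

# First three sequences of Typhoon Mitag from Table 5 (real vs predicted)
real = [(13.7, 127.2), (13.7, 127.0), (14.0, 126.7)]
pred = [(13.6, 126.8), (14.0, 127.1), (14.5, 126.6)]
print(coord_mae(real, pred))  # latitude MAE ≈ 0.3°, longitude MAE ≈ 0.2°
```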
The experimental results show that marking the typhoon center at the previous time step on the satellite images and using CNN and LSTM for feature extraction can effectively learn cloud motion in the satellite image sequence. We can then use the deconvolution network to detect the typhoon center at a future time, so as to accurately predict the future typhoon path.
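The final step, reading the typhoon center off the predicted image, can be sketched as taking the brightest pixel of the predicted mark and mapping it back to geographic coordinates. The grid extent and north-up orientation below are assumptions, since the paper does not publish its projection:

```python
import numpy as np

def detect_center(pred_img, lat_range, lon_range):
    """Locate the typhoon-center mark in a predicted image as the brightest
    pixel, then map the pixel indices to geographic coordinates.

    lat_range/lon_range give the geographic extent covered by the image;
    both, and the row-to-latitude direction, are assumptions for this sketch."""
    row, col = np.unravel_index(np.argmax(pred_img), pred_img.shape)
    h, w = pred_img.shape
    lat = lat_range[0] + (lat_range[1] - lat_range[0]) * row / (h - 1)
    lon = lon_range[0] + (lon_range[1] - lon_range[0]) * col / (w - 1)
    return lat, lon

# Synthetic 64x64 "predicted image" with a bright center mark at pixel (16, 48)
img = np.zeros((64, 64))
img[16, 48] = 1.0
print(detect_center(img, lat_range=(10.0, 25.0), lon_range=(120.0, 130.0)))
```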
However, we expect to extract more information from the predicted image produced by the deconvolution network. Therefore, future work should focus on further optimizing the neural network to better extract cloud features, so as to obtain more realistic predicted satellite images. In this way, we could not only detect the typhoon center in the predicted image but also estimate the intensity of the typhoon. In addition, the prediction accuracy of the typhoon track should be further improved. How to guarantee both the accuracy of typhoon track prediction and the realism of the predicted image at the same time remains a difficulty for future work.
In this paper, we use a deep learning method combined with satellite images to predict typhoon tracks and propose the DeepTyphoon prediction model. The typhoon center is marked on each satellite image. Combining the characteristics of CNN and LSTM, we extract features from the satellite images with HDC and CBAM and use LSTM to predict the features. After the predicted feature vector is transformed back into an image, the position of the typhoon is detected on the predicted image. More than 20,000 time-series satellite images from two data sets released by the Japan Meteorological Agency in recent years are used for training and prediction. Experimental results on the Northwest Pacific and Southwest Pacific data sets show that our prediction method outperforms existing deep learning methods in typhoon track prediction.
This work was supported by the Shanghai Science and Technology Innovation Plan (Project number: 20dz1203800) and the Capacity Development for Local College Project (Project number: 19050502100).
The authors declare there is no conflict of interest.
[1] | J. Prüss, Evolutionary Integral Equations and Applications, Basel: Birkhäuser Verlag, 87 (1993). https://doi.org/10.1007/978-3-0348-0499-8 |
[2] | A. Lunardi, Analytic Semigroups and Optimal Regularity in Parabolic Problems, Basel: Birkhäuser, 16 (1995). |
[3] | E. Bazhlekova, Fractional Evolution Equations in Banach Spaces, Ph.D. Thesis, Eindhoven University of Technology, Eindhoven, 2001. |
[4] | X. Yang, Y. Tang, Decay estimates of nonlocal diffusion equations in some particle systems, J. Math. Phys., 60 (2019), 043302. https://doi.org/10.1063/1.5085894 |
[5] | C. Gu, Y. Tang, Chaotic characterization of one dimensional stochastic fractional heat equation, Chaos Solitons Fractals, 145 (2021), 110780. https://doi.org/10.1016/j.chaos.2021.110780 |
[6] | C. Gu, Y. Tang, Global solution to the Cauchy problem of fractional drift diffusion system with power-law nonlinearity, Netw. Heterog. Media, 18 (2023), 109–139. https://doi.org/10.3934/nhm.2023005 |
[7] | J. P. C. Dos Santos, S. M. Guzzo, M. N. Rabelo, Asymptotically almost periodic solutions for abstract partial neutral integro-differential equation, Adv. Differ. Equ., 2010 (2010), 1–26. https://doi.org/10.1155/2010/310951 |
[8] | J. P. C. Dos Santos, H. Henríquez, Existence of s−asymptotically ω−periodic solutions to abstract integro-differential equations, Appl. Math. Comput., 256 (2015), 109–118. https://doi.org/10.1016/j.amc.2015.01.005 |
[9] | R. C. Grimmer, A. J. Prichard, Analytic resolvent operators for integral equations in Banach space, J. Differ. Equ., 50 (1983), 234–259. https://doi.org/10.1016/0022-0396(83)90076-1 |
[10] | C. C. Kuo, S. Y. Shaw, C−cosine functions and the abstract Cauchy problem, I, J. Math. Anal. Appl., 210 (1997), 632–646. https://doi.org/10.1006/jmaa.1997.5420 |
[11] | C. C. Kuo, S. Y. Shaw, C−cosine functions and the abstract Cauchy problem, II, J. Math. Anal. Appl., 210 (1997), 647–666. https://doi.org/10.1006/jmaa.1997.5421 |
[12] | A. Lorenzi, F. Messina, Approximation of solutions to linear integro-differential parabolic equations in Lp−spaces, J. Math. Anal. Appl., 333 (2007), 642–656. https://doi.org/10.1016/j.jmaa.2006.11.042 |
[13] | A. Lorenzi, F. Messina, Approximation of solutions to non-linear integro-differential parabolic equations in Lp−spaces, Differ. Integral Equ., 20 (2007), 693–720. https://doi.org/10.57262/die/1356039433 |
[14] | R. N. Wang, D. H. Chen, T. J. Xiao, Abstract fractional Cauchy problems with almost sectorial operators, J. Differ. Equ., 252 (2012), 202–235. https://doi.org/10.1016/j.jde.2011.08.048 |
[15] | A. El-Sayed, M. Herzallah, Continuation and maximal regularity of an arbitrary (fractional) order evolutionary integro-differential equation, Appl. Anal., 84 (2005), 1151–1164. https://doi.org/10.1080/0036810412331310941 |
[16] | R. Ponce, Hölder continuous solutions for fractional differential equations and maximal regularity, J. Differ. Equ., 255 (2013), 3284–3304. https://doi.org/10.1016/j.jde.2013.07.035 |
[17] | M. Conti, V. Pata, M. Squassina, Singular limit of differential systems with memory, Indiana Univ. Math. J., 55 (2006), 169–215. http://www.jstor.org/stable/24902350 |
[18] | R. Agarwal, J. P. C. Dos Santos, C. Cuevas, Analytic resolvent operator and existence results for fractional integro-differential equations, J. Abstr. Differ. Equ. Appl., 2 (2012), 26–47. |
[19] | J. P. C. Dos Santos, H. Henríquez, E. Hernández, Existence results for neutral integro-differential equations with unbounded delay, J. Integral Equ. Appl., 23 (2011), 289–330. http://www.jstor.org/stable/26163698 |
[20] | N. Tatar, Mittag-Leffler stability for a fractional Euler-Bernoulli problem, Chaos Solitons Fractals, 149 (2021), 111077. https://doi.org/10.1016/j.chaos.2021.111077 |
[21] | N. Tatar, Mittag-Leffler stability for a fractional viscoelastic telegraph problem, Math. Methods Appl. Sci., 44 (2021), 14184–14205. https://doi.org/10.1002/mma.7689 |
[22] | P. Bedi, A. Kumar, T. Abdeljawad, A. Khan, J. F. Gómez-Aguilar, Mild solutions of coupled hybrid fractional order system with Caputo-Hadamard derivatives, Fractals, 29 (2021), 2150158. https://doi.org/10.1142/S0218348X21501589 |
[23] | H. Khan, T. Abdeljawad, J. F. Gómez-Aguilar, H. Tajadodi, A. Khan, Fractional order Volterra integro-differential equation with Mittag-Leffler kernel, Fractals, 29 (2021), 2150154. https://doi.org/10.1142/S0218348X2150153X |
[24] | O. Martínez-Fuentes, F. Meléndez-Vázquez, G. Fernández-Anaya, J. F. Gómez-Aguilar, Analysis of fractional-order nonlinear dynamic systems with general analytic kernels: Lyapunov stability and inequalities, Mathematics, 9 (2021), 2084. https://doi.org/10.3390/math9172084 |
[25] | J. Asma, G. Rahman, M. Javed, Stability analysis for fractional order implicit ψ−Hilfer differential equations, Math. Methods Appl. Sci., 45 (2022), 2701–2712. https://doi.org/10.1002/mma.7948 |
[26] | R. Dhayal, J. F. Gómez-Aguilar, J. Jimenez, Stability analysis of Atangana-Baleanu fractional stochastic differential systems with impulses, Int. J. Syst. Sci., 53 (2022), 3481–3495. https://doi.org/10.1080/00207721.2022.2090638 |
[27] | A. González-Calderón, L. X. Vivas-Cruz, M. A. Taneco-Hernández, J. F. Gómez-Aguilar, Assessment of the performance of the hyperbolic-NILT method to solve fractional differential equations, Math. Comput. Simul., 206 (2023), 375–390. https://doi.org/10.1016/j.matcom.2022.11.022 |
[28] | A. Al-Omari, H. Al-Saadi, Existence of the classical and strong solutions for fractional semilinear initial value problems, Bound. Value Probl., 157 (2018), 1–13. https://doi.org/10.1186/s13661-018-1054-3 |
[29] | M. Benchohra, S. Litimein, J. J. Nieto, Semilinear fractional differential equations with infinite delay and non-instantaneous impulses, J. Fixed Point Theory Appl., 21 (2019), 1–16. https://doi.org/10.1007/s11784-019-0660-8 |
[30] | R. Chaudhary, M. Muslim, D. N. Pandey, Approximation of solutions to fractional stochastic integro-differential equations of order α∈(1,2], Stochastics, 92 (2020), 397–417. https://doi.org/10.1080/17442508.2019.1625904 |
[31] | J. V. da C. Sousa, D. F. Gomes, E. C. de Oliveira, A new class of mild and strong solutions of integro-differential equation of arbitrary order in Banach space, arXiv, 2018. https://doi.org/10.48550/arXiv.1812.11197 |
[32] | M. Li, Q. Zheng, On spectral inclusions and approximations of α−times resolvent families, Semigroup Forum, 69 (2004), 356–368. https://doi.org/10.1007/s00233-004-0128-y |
[33] | K. Li, J. Peng, Fractional resolvents and fractional evolution equations, Appl. Math. Lett., 25 (2012), 808–812. https://doi.org/10.1016/j.aml.2011.10.023 |
[34] | B. Li, H. Gou, Weak solutions of nonlinear fractional integrodifferential equations in nonreflexive Banach spaces, Bound. Value Probl., 209 (2016), 1–13. https://doi.org/10.1186/s13661-016-0716-2 |
[35] | Z. D. Mei, J. G. Peng, J. H. Gao, General fractional differential equations of order α∈(1,2) and type β∈[0,1] in Banach spaces, Semigroup Forum, 94 (2017), 712–737. https://doi.org/10.1007/s00233-017-9859-4 |
[36] | S. A. Qasem, R. W. Ibrahim, Z. Siri, On mild and strong solutions of fractional differential equations with delay, AIP Conf. Proc., 1682 (2015), 020049. https://doi.org/10.1063/1.4932458 |
[37] | H. Henríquez, J. Mesquita, J. Pozo, Existence of solutions of the abstract Cauchy problem of fractional order, J. Funct. Anal., 281 (2021), 109028. https://doi.org/10.1016/j.jfa.2021.109028 |
[38] | I. Kim, K. H. Kim, S. Lim, An Lq(Lp)−theory for the time fractional evolution equations with variable coefficients, Adv. Math., 306 (2017), 123–176. https://doi.org/10.1016/j.aim.2016.08.046 |
[39] | P. Quittner, P. Souplet, Superlinear Parabolic Problems: Blow-up, Global Existence and Steady States, Basel: Birkhäuser Verlag, 2007. https://doi.org/10.1007/978-3-7643-8442-5 |
[40] | T. Kato, Blow-up of solutions of some nonlinear hyperbolic equations, Commun. Pure Appl. Math., 33 (1980), 501–505. https://doi.org/10.1002/cpa.3160330403 |
[41] | M. D'Abbicco, M. R. Ebert, T. H. Picon, The critical exponent(s) for the semilinear fractional diffusive equation, J. Fourier Anal. Appl., 25 (2019), 696–731. https://doi.org/10.1007/s00041-018-9627-1 |
[42] | B. T. Yordanov, Q. S. Zhang, Finite time blow-up for critical wave equations in high dimensions, J. Funct. Anal., 231 (2006), 361–374. https://doi.org/10.1016/j.jfa.2005.03.012 |
[43] | B. de Andrade, G. Siracusa, A. Viana, A nonlinear fractional diffusion equation: well-posedness, comparison results and blow-up, J. Math. Anal. Appl., 505 (2022), 125524. https://doi.org/10.1016/j.jmaa.2021.125524 |
[44] | P. M. de Carvalho-Neto, G. Planas, Mild solutions to the time fractional Navier-Stokes equations in Rn, J. Differ. Equ., 259 (2015), 2948–2980. https://doi.org/10.1016/j.jde.2015.04.008 |
[45] | V. Keyantuo, M. Warma, On the interior approximate controllability for fractional wave equations, Discrete Contin. Dyn. Syst. Ser. A, 36 (2016), 3719–3739. https://doi.org/10.3934/dcds.2016.36.3719 |
[46] | E. Alvarez, C. G. Gal, V. Keyantuo, M. Warma, Well-posedness results for a class of semi-linear super-diffusive equations, Nonlinear Anal., 181 (2019), 24–61. https://doi.org/10.1016/j.na.2018.10.016 |
[47] | J. P. C. Dos Santos, Fractional resolvent operator with α∈(0,1) and applications, Frac. Differ. Calc., 9 (2019), 187–208. https://doi.org/10.7153/fdc-2019-09-13 |
[48] | Y. Li, H. Sun, Z. Feng, Fractional abstract Cauchy problem with order α∈(1,2), Dyn. Partial Differ. Equ., 13 (2016), 155–177. https://dx.doi.org/10.4310/DPDE.2016.v13.n2.a4 |
[49] | Y. Li, Regularity of mild solutions for fractional abstract Cauchy problem with order α∈(1,2), Z. Angew. Math. Phys., 66 (2015), 3283–3298. https://doi.org/10.1007/s00033-015-0577-z |
[50] | Q. Zhang, Y. Li, Global well-posedness and blow-up solution of the Cauchy problem for a time-fractional superdiffusion equation, J. Evol. Equ., 19 (2019), 271–303. https://doi.org/10.1007/s00028-018-0475-x |
[51] | S. I. Piskarev, Evolution Equations in Banach Spaces. Theory of Cosine Operator Functions, Internet Notes, (2004), 122. |
[52] | K. Boukerrioua, D. Diabi, B. Kilani, Some new Gronwall-Bihari type inequalities and its application in the analysis for solutions to fractional differential equations, Int. J. Comput. Methods, 5 (2020), 60–68. |
[53] | I. Bihari, A generalisation of a lemma of Bellman and its application to uniqueness problems of differential equations, Acta Math. Hung., 7 (1956), 81–94. https://doi.org/10.1007/bf02022967 |
[54] | V. V. Vasilev, S. I. Piskarev, Differential equations in Banach spaces II. Theory of cosine operator functions, J. Math. Sci., 122 (2004), 3055–3174. https://doi.org/10.1023/B:JOTH.0000029697.92324.47 |
[55] | A. Carpinteri, F. Mainardi, Fractional calculus: some basic problems in continuum and statistical mechanics, in Fractals and Fractional Calculus in Continuum Mechanics (eds. A. Carpinteri, F. Mainardi), Vienna: Springer-Verlag, 378 (1997), 291–348. https://doi.org/10.1007/978-3-7091-2664-6-7 |
[56] | F. Mainardi, Fractional Calculus and Waves in Linear Viscoelasticity, London: Imperial College Press, 2010. |
[57] | W. R. Schneider, W. Wyss, Fractional diffusion and wave equations, J. Math. Phys., 30 (1989), 134–144. https://doi.org/10.1063/1.528578 |
Name | Configuration |
Operating System | Ubuntu 18.04.5 |
Framework | PyTorch 1.8.0 |
Memory | 60 GB |
GPU | GeForce RTX 3080Ti |
CUDA | CUDA 10.1 |

Number of HDC layers | MAE (km) |
1 | 78.22 |
2 | 66.38 |
3 | 69.43 |

n | MAE (km) |
5 | 96.05 |
10 | 68.49 |
20 | 79.43 |

Name | Parameter |
HDC layers | 2 |
Deconvolution layers | 6 |
LSTM layers | 2 |
Kernel | 3 × 3 |