Citation: Ruiqi Li, Yifan Chen, Xiang Zhao, Yanli Hu, Weidong Xiao. TIME SERIES BASED URBAN AIR QUALITY PREDICATION[J]. Big Data and Information Analytics, 2016, 1(2): 171-183. doi: 10.3934/bdia.2016003
While the atmosphere, a complex natural gaseous system, is essential to supporting life on earth, air pollution is recognized as a threat to human health as well as to the earth's ecosystems. Among all the particles in the air, those less than 2.5 micrometers in diameter are called "fine" particles, i.e., PM2.5.
Ever since urban air quality was listed as one of the world's worst toxic pollution problems in the 2008 Blacksmith Institute World's Worst Polluted Places report, an increasing number of air quality monitoring stations have been established to inform people of the real-time concentrations of air pollutants such as PM2.5 and O3.
1 GB3095-2012 Ambient Air Quality Standards, released by the Ministry of Environmental Protection of the People's Republic of China in February 2012.
Unfortunately, the current air quality monitoring stations are still insufficient, because building and maintaining such a station costs a great deal of money, land, and human resources. Even Beijing, the capital of China, has only 22 stations covering its vast urban area.
Although many statistics-based models have been proposed by environmental scientists to approximate the quantitative link from factors like traffic and wind to air quality, the empirical assumptions and parameters on which they are based may not be applicable to all urban environments. Some methodologies, e.g., those based on crowd and participatory sensing using sensor-equipped mobile phones, only work for a very few kinds of gases, such as CO2.
In this paper, we analyse and decompose one year of real-time PM2.5 concentration data according to time series decomposition theory and infer future fine-grained air quality information throughout a city using the historical and real-time air quality data reported by existing monitoring stations. We also apply stochastic modelling for fitting and forecasting. We compare the two methodologies and discuss their respective strengths and weaknesses in PM2.5 prediction.
Contributions. The contributions of this paper are as follows:
1. We propose a practical system for time series based PM2.5 prediction built on limited real-time data and requiring no expensive devices. Predicting fine particles like PM2.5 can effectively support air quality management. Our experimental results demonstrate the effectiveness of our method.
2. We compare and analyse the characteristics of two essentially distinct methods applied to PM2.5 prediction. The variations in PM2.5 are intrinsically driven by complex human activities, and deterministic and stochastic methods uncover different aspects of the hidden patterns of those activities.
Organization. The rest of the paper is organized as follows: Section 2 introduces the background material. Sections 3 and 4 present in detail the processes of deterministic and stochastic prediction, respectively. Section 5 discusses the characteristics of the two methods. Related work and conclusions are given in Sections 6 and 7.
This section presents the basic concepts related to this research.
Definition 2.1. Air Quality Index (AQI). AQI is a number used by government agencies to communicate to the public how polluted the air currently is. As the AQI increases, an increasingly large percentage of the population is likely to experience increasingly severe adverse health effects. Computing the AQI requires an air pollutant concentration from a monitor or model. The function used to convert an air pollutant concentration to an AQI varies by pollutant and differs between countries. Air quality index values are divided into ranges, and each range is assigned a descriptor and a color code. In this paper, we use the standard issued by the Ministry of Environmental Protection of the People's Republic of China2, as shown in Table 1. The descriptor of each AQI level is regarded as the class to be inferred, and the color is employed in the subsequent visualization figures.
Table 1. AQI ranges with their descriptors and color codes (per HJ 633-2012).
2 HJ 633-2012 Technical Regulation on Ambient Air Quality Index (on trial), released by the Ministry of Environmental Protection of the People's Republic of China in February 2012.
Specifically, the calculation for AQI follows Equation (1) below:
$$\mathrm{AQI} = \max\{\mathrm{IAQI}_1, \mathrm{IAQI}_2, \cdots, \mathrm{IAQI}_n\} \tag{1}$$
where $\mathrm{IAQI}_i$ stands for the individual air quality sub-index of the $i$-th pollutant and $n$ is the number of pollutants considered.
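For illustration, a minimal sketch of this computation is given below; the PM2.5 breakpoint table is the one commonly quoted for HJ 633-2012 and should be checked against the standard, and the sub-indices for the other pollutants in the example are hypothetical.

```python
# Sketch of Equation (1): AQI = max of the pollutant sub-indices (IAQI).
# The PM2.5 (24 h) breakpoints below are those commonly quoted for HJ 633-2012;
# verify against the standard before any real use.
import bisect

PM25_BREAKPOINTS = [0, 35, 75, 115, 150, 250, 350, 500]   # concentration, ug/m^3
IAQI_LEVELS      = [0, 50, 100, 150, 200, 300, 400, 500]  # corresponding IAQI values

def iaqi_pm25(c):
    """Piecewise-linear interpolation of the PM2.5 sub-index."""
    c = min(max(c, 0.0), PM25_BREAKPOINTS[-1])
    hi = max(bisect.bisect_left(PM25_BREAKPOINTS, c), 1)
    lo = hi - 1
    c_lo, c_hi = PM25_BREAKPOINTS[lo], PM25_BREAKPOINTS[hi]
    i_lo, i_hi = IAQI_LEVELS[lo], IAQI_LEVELS[hi]
    return (i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo

def aqi(sub_indices):
    """Equation (1): the AQI is the maximum of all sub-indices."""
    return max(sub_indices)

# Example: a day dominated by PM2.5 (the other IAQIs are made-up values)
print(aqi([iaqi_pm25(88.0), 60.0, 45.0]))   # ~116.25
```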
Recalling the AQI of Wuhan in 2013, PM2.5 was the primary pollutant on most days (illustrated in Figure 1(a)). In this paper, we concentrate on predicting the IAQI of PM2.5 only, as it is the main culprit of air pollution. However, our time-series-based method can be straightforwardly extended to AQI prediction.
In order to deconstruct the time series into notional components, we identify and construct a number of component series, each representing a certain characteristic or type of behaviour, as follows:
- the Trend Component T, which reflects the long-term progression of the series;
- the Cyclical Component C, which describes repeated but non-periodic fluctuations;
- the Seasonal Component S, which reflects seasonality;
- the Irregular Component I (or "noise"), which describes random, irregular influences and represents the residuals of the time series after the other components have been removed.
Since cyclicality identification requires a complex process and is less productive, we concentrate here on identifying the components in the order of trend, seasonality, and irregularity.
Trend identification. Figure 2 exhibits the autocorrelation of the time series. We find that the autocorrelation coefficient of the PM2.5 series attenuates slowly, which indicates the existence of a trend component.
Seasonality identification. We plot spring, summer, fall and winter in different colors in Figure 3(a). It is easily observed that significant differences exist in the PM2.5 concentrations across the four seasons, which indicates the existence of a seasonal component.
Irregularity identification. Figure 3(b) shows the data after 5-interval (green) and 20-interval (red) moving-average smoothing. It is clearly discerned that the random fluctuations decrease as the window length increases. Therefore, an irregular component is considered to exist in the time series.
Currently, there are a variety of time-series decomposition models, each of which suits one specific shape. Figure 4 shows the tendency features of two models, namely the additive model and the multiplicative model [9]. We pick the multiplicative model to decompose the PM2.5 time series, as the amplitude of its fluctuations is easily observed to vary with the level of the series. This leads us to assume that PM2.5 can be decomposed with the multiplicative model, i.e., $x_t = T_t \cdot S_t \cdot C_t \cdot I_t$.
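As an illustration of the smoothing and multiplicative split described above, the sketch below (not the authors' exact procedure) uses pandas; the file name and column names are hypothetical assumptions.

```python
# A minimal sketch of a multiplicative split x_t = T_t * S_t * C_t * I_t on
# daily PM2.5 readings; "wuhan_pm25_2013.csv", "date" and "pm25" are
# hypothetical names, not the paper's actual data files.
import pandas as pd

pm25 = (
    pd.read_csv("wuhan_pm25_2013.csv", parse_dates=["date"], index_col="date")["pm25"]
    .asfreq("D")
    .interpolate()  # fill occasional gaps so the smoothing below is defined
)

# Moving averages over 5 and 20 intervals, as in the irregularity check (Figure 3(b))
ma5 = pm25.rolling(window=5, center=True).mean()
ma20 = pm25.rolling(window=20, center=True).mean()

# Under the multiplicative model, the 20-interval average approximates the
# trend (and part of the seasonality); the ratio to it isolates the remaining
# cyclical-irregular factor.
cyc_irregular = pm25 / ma20
```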
Trend analysis. Since both trend and seasonality are observed in the data, we first minimize the influence of irregularity via a 20-interval moving average and then fit the smoothed series with regression models.
Figure 5 illustrates the fitted curves from cubic (Figure 5(a)) and trigonometric (Figure 5(b)) curve fitting separately. According to the goodness-of-fit measures in Table 2, cubic curve fitting achieves the best fit. However, an unrealistic upward trend is observed at the end of the cubic curve. Therefore, although the trigonometric fit is not as good as the cubic one, the trigonometric curve is chosen as the final trend estimate. The trend fitting equation can be written as:
$$S_T(t) = 1130\sin(0.01295t - 1.094) + 1089\sin(0.01412t + 1.803) \tag{2}$$
Table 2. Goodness of fit of the two trend curve fittings.

Curve Fitting | SSE | R-square | Adjusted R-square | RMSE
--- | --- | --- | --- | ---
Cubic Fitting | | 0.9544 | 0.954 | 11.3 |
Trigonometric Fitting | | 0.6862 | 0.6814 | 29.74 |
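A sketch of such a two-term sinusoidal trend fit is given below, continuing from the smoothed series `ma20` of the previous snippet; the initial guesses are taken from the neighbourhood of Equation (2) and are illustrative only.

```python
# A minimal sketch of the trigonometric trend fit behind Equation (2),
# assuming the 20-interval moving average `ma20` from the earlier snippet.
import numpy as np
from scipy.optimize import curve_fit

def two_term_sine(t, a1, b1, c1, a2, b2, c2):
    """Sum of two sinusoids, the same functional form as Equation (2)."""
    return a1 * np.sin(b1 * t + c1) + a2 * np.sin(b2 * t + c2)

smoothed = ma20.dropna()
t = np.arange(len(smoothed), dtype=float)

# Initial guesses near the coefficients reported in Equation (2)
p0 = [1100.0, 0.013, -1.1, 1100.0, 0.014, 1.8]
params, _ = curve_fit(two_term_sine, t, smoothed.values, p0=p0, maxfev=20000)
trend_fit = two_term_sine(t, *params)

# Goodness-of-fit measures of the kind reported in Table 2
sse = float(np.sum((smoothed.values - trend_fit) ** 2))
rmse = float(np.sqrt(sse / len(smoothed)))
```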
Seasonality analysis. During the moving-average process, not only the irregular component but also part of the seasonal component is removed. On the one hand, we aim to remove the irregular component to eliminate its interference with the other components; on the other hand, we need to retain the seasonal component. To guarantee the effectiveness of prediction, we should add back a factor representing the removed seasonal component. Specifically, we define the generalized seasonal index as follows.
Definition 3.1. (Generalized seasonal index). The average remaining PM2.5 concentration, after the trend component has been removed, for each date within each season; the resulting values are listed in Table 3.
Table 3. Generalized seasonal index for each date in each season.

Date | Spring | Summer | Fall | Winter
--- | --- | --- | --- | ---
1st | -44.33 | 9.14 | -3.75 | 9.66 |
2nd | 8.44 | 10.61 | -4.37 | 25.82 |
3rd | -37.76 | 5.76 | -10.01 | 49.94 |
4th | -39.62 | -19.09 | -11.33 | 37.69 |
5th | -45.78 | -48.95 | 2.67 | 79.07 |
6th | 6.07 | -54.47 | -4.02 | 0.41 |
7th | 9.61 | -46.65 | 3.93 | 5.38 |
8th | -2.16 | -23.5 | -7.46 | -62.35 |
9th | -1.25 | -37.69 | -0.54 | -22.83 |
10th | 85.68 | -16.87 | 15.69 | -56.35 |
11th | 70.64 | -18.72 | 19.23 | -65.58 |
12th | 50.94 | -7.57 | 29.42 | -56.18 |
13th | 22.6 | -14.76 | 14.93 | -92.15 |
14th | 31.94 | -40.61 | 14.41 | -79.5 |
15th | 36.3 | -18.13 | 11.21 | -76.88 |
16th | 21.35 | -18.32 | -1.35 | -79.65 |
17th | 20.75 | -17.18 | 15.41 | -68.78 |
18th | 36.49 | -9.71 | 2.14 | -57.29 |
19th | -28.07 | 8.43 | 13.85 | -30.17 |
20th | -6.62 | 7.24 | 4.54 | 9.57 |
21st | 31.84 | 5.7 | 0.54 | 3.27 |
22nd | 55.66 | 31.84 | -23.15 | -91.08 |
23rd | 28.83 | 26.97 | -0.19 | -110.79 |
24th | 14.35 | -11.58 | 6.74 | -100.22 |
25th | 21.88 | -23.79 | 10.32 | -106.02 |
26th | 94.76 | -16 | 2.21 | -106.87 |
27th | 167.66 | -25.22 | 13.4 | -89.42 |
28th | 60.24 | -30.44 | 9.25 | -46.69 |
29th | 8.84 | -11 | 9.73 | -24.06 |
30th | -0.55 | -22.23 | 22.52 | -63.51 |
31st | 19.49 | -14.87 | 38.36 | -42 |
Note that we use the mean to reduce the interference from the irregular and cyclical components. Adding the seasonal correction to the trend of Equation (2), the fitting equation becomes:
$$S_T(t) = 1130\sin(0.01295t - 1.094) + 1089\sin(0.01412t + 1.803) + \alpha\sum_{i=1}^{4} Q_i b_i \tag{3}$$
where $\alpha$ is a weighting coefficient, $b_i$ is the generalized seasonal index of the $i$-th season (Table 3), and $Q_i$ is the seasonal indicator
$$Q_i = \begin{cases} 1, & \text{if } t \in i\text{-th season}, \\ 0, & \text{otherwise.} \end{cases}$$
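The sketch below illustrates how a seasonal correction of the form in Equation (3) can be built on top of the fitted trend; it aggregates by season rather than by date within the month (Table 3 is finer-grained), and the season labelling and the value of $\alpha$ are illustrative assumptions.

```python
# A minimal sketch of the seasonal correction term in Equation (3), reusing
# `smoothed`, `trend_fit`, `params` and `two_term_sine` from the trend snippet.
# The month-to-season mapping and alpha = 1.0 are assumptions for illustration.
import numpy as np
import pandas as pd

def season_of(month):
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer"}.get(month, "fall")

detrended = smoothed - trend_fit                   # remainder after trend removal
seasons = detrended.index.month.map(season_of)

# b_i: mean remaining PM2.5 per season (a coarse version of Definition 3.1)
b = detrended.groupby(seasons).mean()
alpha = 1.0

def s_t(dates, t):
    """Trend of Eq. (2) plus the seasonal term of Eq. (3)."""
    trend = two_term_sine(t, *params)
    seasonal = np.array([alpha * b[season_of(m)] for m in dates.month])
    return trend + seasonal
```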
Cyclicality analysis. The naive method to detect a cyclical component is to observe whether any cyclicality exists in the remaining series after removing the trend and seasonal components. However, most real-world time series do not repeat strictly at every cyclical time point. As in our case, little cyclicality can be detected after removing the trend and seasonality.
In fact, a real-world time series can be regarded as cyclical only to a certain degree of confidence. To detect that kind of cyclicality in PM2.5, we use autoregressive support vector regression (SVR_AR) with the RBF kernel function, i.e., $K(x_i, x_j) = \exp(-\gamma\|x_i - x_j\|^2)$.
We apply cross validation to select and verify the parameter values. Specifically, we divide the dataset equally into 10 parts and repeat the following operation ten times: at each iteration, nine parts are used for training and the remaining part for validation, and the parameter values with the lowest average validation error are selected.
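A sketch of this step is given below; it uses the ratio of the observed series to the smoothed trend (from the earlier snippets) as a stand-in for the cyclical factor, and the lag order and parameter grid are illustrative assumptions (TimeSeriesSplit could replace KFold to respect temporal order).

```python
# A minimal sketch of SVR_AR: lagged values of the cyclical-irregular factor
# are used as features for an RBF-kernel SVR, with 10-fold cross validation
# selecting the penalty term C and the kernel width gamma.
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

def make_ar_dataset(series, n_lags=7):
    """Build (X, y): each row of X holds the previous n_lags values of y."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

cycle = cyc_irregular.dropna().values        # C(t)*I(t) ratio from the earlier snippet
X, y = make_ar_dataset(cycle, n_lags=7)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
    cv=KFold(n_splits=10, shuffle=False),    # ten equal parts, as in the text
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
svr_ar = grid.best_estimator_                # fitted C(t) model used in Equation (4)
```

Multiplying the one-step SVR prediction of the cyclical factor back onto $S_T(t)$ then yields the deterministic forecast of Equation (4).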
Thus, the final prediction model can be written as
$$\mathrm{PM}_{2.5}(t) = S_T(t) \cdot C(t) \tag{4}$$
where $S_T(t)$ is calculated according to Equation (3) and $C(t)$ is fitted by SVR_AR with the RBF kernel (the penalty term being chosen by cross validation).
The basic approach for stochastic modelling is as follows:
Definition 4.1. (Box-Jenkins model identification). The Box-Jenkins method applies autoregressive moving average (ARMA) or autoregressive integrated moving average (ARIMA) models to find the best fit of a time-series model to past values of a time series.
The original model uses an iterative three-stage modeling approach:
(1). Model identification and model selection: ensuring stationarity of the variables, identifying seasonality in the dependent series (seasonally differencing it if necessary), and using plots of the autocorrelation and partial autocorrelation functions of the dependent time series to determine the autoregressive (if any) and moving average components.
(2). Parameter estimation: computationally arriving at the coefficients that best fit the selected ARIMA model. Maximum likelihood estimation and non-linear least-squares estimation are the most common methods.
(3). Model checking: testing whether the estimated model conforms to the specifications of a stationary univariate process. In particular, the residuals should be independent of each other and constant in mean and variance over time. If the model is inadequate, we return to step one and attempt to build a better model.
The ARIMA (autoregressive integrated moving average) model can be used for time series prediction based on a limited number of observations. The basic intuition behind ARIMA is that a non-stationary sequence is first made stationary via differencing of an appropriate order and then fitted with an ARMA model. Since the differenced sequence is a weighted summation of the original sequence, the model can be written as follows.
Definition 4.2. (ARIMA$(p, d, q)$ model).
$$\begin{cases} \Phi(B)\nabla^{d} x_t = \Theta(B)\varepsilon_t \\ E(\varepsilon_t) = 0,\ \mathrm{Var}(\varepsilon_t) = \sigma_\varepsilon^2,\ E(\varepsilon_s\varepsilon_t) = 0,\ s \neq t \\ E(x_s\varepsilon_t) = 0,\ \forall s < t \end{cases} \tag{5}$$
in which $B$ is the backshift operator, $\Phi(B)$ and $\Theta(B)$ are the autoregressive and moving-average polynomials of orders $p$ and $q$, respectively, and $\varepsilon_t$ is a white-noise sequence. Equivalently, Equation (5) can be rewritten as
$$\nabla^{d} x_t = \frac{\Theta(B)}{\Phi(B)}\varepsilon_t \tag{6}$$
where $\nabla^{d} = (1 - B)^{d}$ denotes differencing of order $d$.
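For concreteness, with the usual sign conventions (the sign of $\Theta(B)$ varies across textbooks) the operator polynomials and the differencing operator in Equations (5)-(6) expand as

$$\Phi(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p, \qquad \Theta(B) = 1 + \theta_1 B + \theta_2 B^2 + \cdots + \theta_q B^q, \qquad \nabla x_t = (1 - B)x_t = x_t - x_{t-1},$$

so for $d = 1$ Equation (5) reads $\Phi(B)(x_t - x_{t-1}) = \Theta(B)\varepsilon_t$.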
Order identification. Since the observed data are identified as non-stationary, we use differencing to achieve stationarity. We apply first-order and second-order differencing separately and compare their accuracy. Figures 9 and 10 show the autocorrelation and partial autocorrelation coefficients over 20 lags for the first-order and second-order differenced series, respectively. Twice the standard deviation of the corresponding coefficients is marked by the red lines in each figure.
It can be observed for both the first-order (Figure 9) and second-order (Figure 10) differenced series that the autocorrelation coefficients beyond lag 2 all fall within twice the standard deviation and gradually approach zero (Figures 9(a) and 10(a)); the ACF is therefore considered to cut off at lag 2, thus q = 2. As for the partial autocorrelation coefficients, they fall below twice the standard deviation beyond lag 19 and gradually approach zero (Figures 9(b) and 10(b)); the PACF is therefore considered to cut off at lag 19, thus p = 19. Therefore, according to Table 4, the model can be identified as ARIMA(19, 1, 2) or ARIMA(19, 2, 2), depending on the differencing order.
Table 4. Model identification from the ACF and PACF patterns.

Model | ACF | PACF
--- | --- | ---
White noise | zero at all non-zero lags | zero at all non-zero lags
AR(p) | tails off to zero (geometric or oscillating) | cuts off after the p-th lag
MA(q) | cuts off after the q-th lag | tails off to zero (geometric or oscillating)
ARMA(p, q) | tails off to zero (geometric or oscillating) after the q-th lag | tails off to zero (geometric or oscillating) after the p-th lag
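A sketch of this identification step with statsmodels is given below; the default confidence band drawn by plot_acf/plot_pacf (about $\pm 1.96/\sqrt{n}$) plays the role of the two-standard-deviation bound used above.

```python
# A minimal sketch of order identification: difference the series and inspect
# the sample ACF/PACF over 20 lags, as in Figures 9 and 10. `pm25` is the
# daily series from the earlier snippets.
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

diff1 = pm25.diff().dropna()          # first-order differencing
diff2 = pm25.diff().diff().dropna()   # second-order differencing

fig, axes = plt.subplots(2, 2, figsize=(10, 6))
plot_acf(diff1, lags=20, ax=axes[0, 0], title="ACF, 1st-order differencing")
plot_pacf(diff1, lags=20, ax=axes[0, 1], title="PACF, 1st-order differencing")
plot_acf(diff2, lags=20, ax=axes[1, 0], title="ACF, 2nd-order differencing")
plot_pacf(diff2, lags=20, ax=axes[1, 1], title="PACF, 2nd-order differencing")
plt.tight_layout()
plt.show()
```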
Model fitting and prediction. The prediction results under the first-order and second-order differencing models are obtained and compared against the observed series.
Residual test. For the fitted ARIMA models, we test whether the residual series behaves as white noise, i.e., whether the residuals are mutually independent with constant mean and variance over time.
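A sketch of the fitting, forecasting, and residual-whiteness check with statsmodels is shown below; order (19, 1, 2) follows the identification above, and the second-order variant would use order=(19, 2, 2).

```python
# A minimal sketch of ARIMA fitting, a ten-day forecast (matching the
# December 1-10 evaluation window), and a Ljung-Box residual test.
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

model = ARIMA(pm25, order=(19, 1, 2)).fit()
print(model.summary())

forecast = model.forecast(steps=10)          # ten days ahead

# Large Ljung-Box p-values suggest the residuals behave like white noise,
# i.e. the model has captured the serial structure.
print(acorr_ljungbox(model.resid, lags=[10, 20]))
```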
The two previous models are evaluated on the real-time urban PM2.5 concentration data obtained in Wuhan from December 1 to 10, 2013. As can be seen from Figure 12, the stochastic time series analysis method achieves a better fit.
The deterministic time series analysis method is relatively simple and leads to a more in-depth understanding of the various characteristics of a time series. It allows more flexibility, which on the other hand means that more parameters must be determined empirically. Namely, it works with a certain degree of subjectivity: assumptions are required in advance, and a tiny inaccuracy in an assumption can cause large deviations.
The stochastic time series analysis method yields higher accuracy and stronger generalization ability, and its modelling process is more standardized. However, the opaque process also makes the results harder to understand and analyse.
We briefly review related work in four directions.
Classical bottom-up emission models. There are two major "bottom-up" methods for estimating air quality from observed ground-surface emissions. The most common one is to reference the nearby air quality monitoring stations, as usually done by public websites reporting AQIs. However, it has low accuracy since air quality varies non-linearly, as illustrated before. The other comprises classical dispersion models: Gaussian Plume models, Operational Street Canyon models, and Computational Fluid Dynamics are the most widely used in this methodology. These models are in most cases a function of meteorology, street geometry, receptor locations, traffic volumes, and emission factors (e.g., g/km per single vehicle), based on a number of empirical assumptions and parameters that might not be applicable to all urban environments [6].
Satellite remote sensing. Satellite remote sensing of surface air quality is regarded as a top-down method in this field, as in [4] and [5]. However, besides its high cost, the result can only reflect the air quality of the upper atmosphere rather than that at ground level.
Crowd sensing. Significant efforts [3], [2] have been devoted to crowd sensing, and it may be a potential solution to air pollution monitoring in the future. However, the devices for sensing PM2.5 and NO2 remain expensive and are not yet widely deployed.
Urban computing. Big data has attracted a series of research efforts on urban computing to improve urban life quality, including managing air pollution. Data from various sources, such as human mobility data and POIs [7], taxi trajectories [11], and GPS-equipped vehicles [8], can be used to extract useful patterns of urban life. This kind of method relies on sufficient urban data, sometimes private, which are difficult to acquire. Besides, it requires a long pre-processing time for cleaning and reduction.
Different from classical models and from methods that demand specialized devices or tremendous data processing, our method offers a simple but efficient way of inferring air quality. Effectiveness is achieved on the basis of real-time data, without expensive devices or lengthy pre-processing.
In this paper, from the perspective of time series, we infer the fine-granularity air quality in a city based on the PM2.5 concentrations historically reported by air quality monitoring stations. Using deterministic and stochastic theories, we make two predictions. From the deterministic point of view, we identify and decompose the historically reported PM2.5 concentrations into trend, seasonal, cyclical and irregular factors, based on which we derive the PM2.5 concentration equation. From the stochastic point of view, we compare the first-order and second-order differencing methods and compute the quantitative models. Finally, we analyse the strengths and weaknesses of the deterministic and stochastic methodologies and reach the conclusion that the stochastic approach is more accurate for PM2.5 concentration prediction.
[1] L. Bin-lian, G. Feng and J. Jian-hua, Analysis of PM2.5 current situation and the prevention control measures, Energy and Energy Conservation, 54-54.
[2] D. Hasenfratz, O. Saukh, S. Sturzenegger and L. Thiele, Participatory air pollution monitoring using smartphones, in the 2nd International Workshop on Mobile Sensing.
[3] Y. Jiang, K. Li, L. Tian, R. Piedrahita, X. Yun, O. Mansata, Q. Lv, R. P. Dick, M. Hannigan and L. Shang, MAQS: A personalized mobile sensing system for indoor air quality monitoring, in Proceedings of the 13th International Conference on Ubiquitous Computing, 2011, 271-280.
[4] L. N. Lamsal, R. V. Martin, A. V. Donkelaar, M. Steinbacher, E. A. Celarier, E. Bucsela, E. J. Dunlea and J. P. Pinto, Ground-level nitrogen dioxide concentrations inferred from the satellite-borne Ozone Monitoring Instrument, Journal of Geophysical Research, 113 (2008), 280-288.
[5] R. V. Martin, L. Lamsal and A. Van Donkelaar, Satellite remote sensing of surface air quality, Atmospheric Environment, 42 (2008), 7823-7843.
[6] S. Vardoulakis, B. E. Fisher, K. Pericleous and N. Gonzalez-Flesca, Modelling air quality in street canyons: A review, Atmospheric Environment, 37 (2003), 155-182.
[7] J. Yuan, Y. Zheng and X. Xie, Discovering regions of different functions in a city using human mobility and POIs, in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2012, 186-194.
[8] F. Zhang, D. Wilkie, Y. Zheng and X. Xie, Sensing the pulse of urban refueling behavior, in Proceedings of the ACM International Conference on Ubiquitous Computing (UbiComp), ACM.
[9] Y. Zhang and L. Y. Yang, On the applications of the additive model and multiplicative model of time series analysis, Statistics and Information Tribune.
[10] Y. Zheng, F. Liu and H.-P. Hsieh, U-Air: When urban air quality inference meets big data, in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2013, 1436-1444.
[11] Y. Zheng, Y. Liu, J. Yuan and X. Xie, Urban computing with taxicabs, in Proceedings of the 13th International Conference on Ubiquitous Computing, ACM, 2011, 89-98.