Citation: Michele La Rocca, Cira Perna. Nonlinear autoregressive sieve bootstrap based on extreme learning machines[J]. Mathematical Biosciences and Engineering, 2020, 17(1): 636-653. doi: 10.3934/mbe.2020033
[2] | J. P. Kreiss, Bootstrap procedures for AR(∞)-processes, in Bootstrapping and Related Techniques (eds. K.-H. Jöckel, G. Rothe and W. Sendler), Springer, Heidelberg, (1992), 107-113. |
[2] | P. Bühlmann, Sieve bootstrap for time series, Bernoulli, 3 (1997), 123-148. |
[3] | P. J. Bickel and P. Bühlmann, A new mixing notion and functional central limit theorems for a sieve bootstrap in time series, Bernoulli, 5 (1999), 413-446. |
[4] | A. M. Alonso, D. Peña and J. Romo, Forecasting time series with sieve bootstrap, J. Stat. Plann. Infer., 100 (2002), 1-11. |
[5] | A. M. Alonso, D. Peña and J. Romo, On sieve bootstrap prediction intervals, Stat. Probab. Lett., 65 (2003), 13-20. |
[6] | A. Zagdanski, On the construction and properties of bootstrap-t prediction intervals for stationary time series, Probab. Math. Stat., 25 (2005), 133-154. |
[7] | A. M. Alonso and A. E. Sipols, A time series bootstrap procedure for interpolation intervals, Comput. Stat. Data Anal., 52 (2008), 1792-1805. |
[8] | P. Mukhopadhyay and V. A. Samaranayake, Prediction intervals for time series: a modified sieve bootstrap approach, Commun. Stat. Simul. Comput., 39 (2010), 517-538. |
[9] | G. Ulloa, H. Allende-Cid and H. Allende, Robust sieve bootstrap prediction intervals for contaminated time series, Int. J. Pattern Recognit. Artif. Intell., 28 (2014). |
[10] | Y. Chang and J. Y. Park, A sieve bootstrap for the test of a unit root, J. Time Ser. Anal., 24 (2003), 379-400. |
[11] | Z. Psaradakis, Blockwise bootstrap testing for stationarity, Stat. Probab. Lett., 76 (2006), 562-570. |
[12] | D. S. Poskitt, Properties of the sieve bootstrap for fractionally integrated and non-invertible processes, J. Time Ser. Anal., 29 (2008), 224-250. |
[13] | D. S. Poskitt, G. M. Martin and S. D. Grose, Bias reduction of long memory parameter estimators via the pre-filtered sieve bootstrap, arXiv preprint, (2014). |
[14] | E. Paparoditis, Sieve bootstrap for functional time series, Ann. Stat., 46 (2018), 3510-3538. |
[15] | M. Meyer, C. Jentsch and J. P. Kreiss, Baxter's inequality and sieve bootstrap for random fields, Bernoulli, 23 (2017), 2988-3020. |
[16] | J. P. Kreiss, E. Paparoditis and D. N. Politis, On the range of validity of the autoregressive sieve bootstrap, Ann. Stat., 39 (2011), 2103-2130. |
[17] | M. Fragkeskou and E. Paparoditis, Extending the Range of Validity of the Autoregressive (Sieve) Bootstrap, J. Time Ser. Anal., 39 (2018), 356-379. |
[18] | F. Giordano, M. La Rocca and C. Perna, Forecasting nonlinear time series with neural network sieve bootstrap, Comput. Stat. Data Anal., 51 (2007), 3871-3884. |
[19] | F. Giordano, M. La Rocca and C. Perna, Properties of the neural network sieve bootstrap, J. Nonparametr. Stat., 23 (2011), 803-817. |
[20] | G. B. Huang, Q. Y. Zhu and C. K. Siew, Extreme learning machine: theory and applications, Neurocomputing, 70 (2006), 489-501. |
[21] | G. B. Huang, H. Zhou, X. Ding, et al., Extreme learning machine for regression and multiclass classification, IEEE Trans. Syst. Man Cybern. Part B, 42 (2012), 513-529. |
[22] | W. Haerdle and A. Tsybakov, Local polynomial estimators of the volatility function in nonparametric autoregression, J. Econometrics, 81 (1997), 223-242. |
[23] | J. Franke and M. Diagne, Estimating market risk with neural network, Stat. Decisions, 24 (2006), 233-253. |
[24] | A. R. Barron, Universal approximation bounds for superpositions of a sigmoidal function, IEEE Trans. Inf. Theory, 39 (1993), 930-945. |
[25] | K. Hornik, M. Stinchcombe and P. Auer, Degree of approximation results for feedforward networks approximating unknown mappings and their derivatives, Neural Comput., 6 (1994), 1262-1275. |
[26] | Y. Makovoz, Random approximants and neural networks, J. Approx. Theory, 85 (1996), 98-109. |
[27] | X. Chen and H. White, Improved Rates and Asymptotic Normality for Nonparametric Neural Network Estimators, IEEE Trans. Inf. Theory, 45 (1999), 682-691. |
[28] | X. Chen and X. Shen, Asymptotic Properties of Sieve Extremum Estimates for Weakly Dependent Data with Applications, Econometrica, 66 (1998), 299-315. |
[29] | J. Zhang, Sieve Estimates via Neural Network for Strong Mixing Processes, Stat. Inference Stochastic Processes, 7 (2004), 115-135. |
[30] | S. F. Crone and N. Kourentzes, Feature selection for time series prediction-A combined filter and wrapper approach for neural networks, Neurocomputing, 73 (2010), 1923-1936. |
[31] | C. Wang, Y. Qi, M. Shao, et al., A fitting model for feature selection with fuzzy rough sets, IEEE Trans. Fuzzy Syst., 25 (2017), 741-753. |
[32] | D. Yu and L. Deng, Efficient and effective algorithms for training single hidden-layer neural networks, Pattern Recognit. Lett., 33 (2012), 554-558. |
[33] | K. Li, J. X. Peng and G. W. Irwin, A fast nonlinear model identification method, IEEE Trans. Autom. Control, 50 (2005), 1211-1216. |
[34] | X. Yao, A review of evolutionary artificial neural networks, Int. J. Intell. Syst., 8 (1993), 539-567. |
[35] | G. B. Huang, D. H. Wang and Y. Lan, Extreme learning machines: a survey, Int. J. Mach. Learn. Cybern., 2 (2011), 107-122. |
[36] | S. Ding, H. Zhao, Y. Zhang, et al., Extreme learning machine: algorithm, theory and applications, Artif. Intell. Rev., 44 (2015), 103-115. doi: 10.1007/s10462-013-9405-z |
[37] | G. Huang, G. B. Huang, S. Song, et al., Trends in extreme learning machines: A review, Neural Networks, 61 (2015), 32-48. |
[38] | G. B. Huang, H. Zhou, X. Ding, et al., Extreme learning machine for regression and multiclass classification, IEEE Trans. Syst. Man Cybern. Part B, 42 (2012), 513-529. |
[39] | G. B. Huang, L. Chen and C. K. Siew, Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Trans. Neural Networks, 17 (2006), 879-892. |
[40] | G. B. Huang and L. Chen, Convex incremental extreme learning machine, Neurocomputing, 70 (2007), 3056-3062. |
[41] | G. B. Huang and L. Chen, Enhanced random search based incremental extreme learning machine, Neurocomputing, 71 (2008), 3460-3468. |
[42] | J. Lin, J. Yin, Z. Cai, et al., A secure and practical mechanism of outsourcing extreme learning machine in cloud computing, IEEE Intell. Syst., 28 (2013), 35-38. |
[43] | E. Cule and S. Moritz, ridge: Ridge Regression with Automatic Selection of the Penalty Parameter, R package, (2019), https://CRAN.R-project.org/package=ridge. |
[44] | Z. Cai, J. Fan and Q. Yao, Functional-coefficient regression models for nonlinear time series, J. Am. Stat. Assoc., 95 (2000), 941-956. |
[45] | H. Kuswanto and P. Sibbertsen, Can we distinguish between common nonlinear time series models and long memory?, Discussion Papers, School of Economics and Management, Leibniz University Hannover, (2007). |