
Telerehabilitation for community-dwelling middle-aged and older adults after musculoskeletal trauma: A systematic review

  • Background: Musculoskeletal trauma at midlife and beyond has a significant impact on function and quality of life. Rehabilitation is key to supporting early and sustained recovery. There are frequent barriers to attending in-person rehabilitation that may be overcome by recent advances in technology (telerehabilitation). Therefore, we conducted a systematic review of published evidence on telerehabilitation as a delivery mode for adults and older adults with musculoskeletal trauma. Methods: We followed established guidelines for conducting and reporting systematic reviews. We searched the following databases up to June 23, 2018: Cochrane Library, Cumulative Index to Nursing and Allied Health Literature, Embase, Google Scholar, MEDLINE (Ovid and PubMed), PsycINFO, and SportDiscus. We included publications across all available years and languages for community-dwelling adults (50 years and older) with musculoskeletal trauma, and interventions using the following delivery modes: apps, computer, telephone, videophone, videoconference, webcam, webpage, or similar media. Results: Six studies met the inclusion criteria: five studies for hip fracture (n = 260) and one study for proximal humeral fracture rehabilitation (n = 17). Four of the studies used telephone as the delivery mode, one used a computer and another used videoconferencing. Two of the studies were pre-post designs with no comparator group, and the remaining four studies were randomized controlled trials with low or unclear risk of bias. Studies established some modes of remote delivery as feasible, but the generalizability of the findings was limited. Two studies observed significant between-group differences (favoring the intervention) for physical activity, quality of life, and self-efficacy. Conclusion: Very few studies have tested the effect of telerehabilitation for recovery after musculoskeletal trauma later in life. Given the global burden imposed by musculoskeletal trauma, this review underscores an important gap in clinical knowledge.

    Citation: Maureen C. Ashe, Christina L. Ekegren, Anna M. Chudyk, Lena Fleig, Tiffany K. Gill, Dolores Langford, Lydia Martin-Martin, Patrocinio Ariza-Vega. Telerehabilitation for community-dwelling middle-aged and older adults after musculoskeletal trauma: A systematic review[J]. AIMS Medical Science, 2018, 5(4): 316-336. doi: 10.3934/medsci.2018.4.316



    Volatility forecasting is essential in asset pricing, portfolio allocation and risk management research. Early volatility forecasting was based on economic models. The most famous economic models are the ARCH model [1] and the GARCH model [2], which can capture volatility clustering and heavy-tail features. However, they fail to capture asymmetry, such as leverage effects. The leverage effect is due to the fact that negative returns have a more significant impact on future volatility than positive returns. To overcome this drawback, the exponential GARCH (EGARCH) model [3] and GJR model [4] were proposed. In the following years, new volatility models based on the GARCH model emerged, such as the stochastic volatility model [5] proposed by Hull and White and the realized volatility model [6] offered by Blair et al. They formed a class of GARCH-type volatility models for financial markets.

    The traditional GARCH model has strict constraints and requires the financial time series to satisfy the stationarity condition. It usually assumes that the conditional variance has a linear relationship with previous errors and previous variances. However, many financial time series show nonstationary and nonlinear characteristics in practice. Consequently, extended models of the GARCH family are necessary to study the volatility of such series.

    With the development of computer and big data technologies, machine learning brings new ideas to volatility forecasting [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. In particular, artificial neural networks (ANNs) have shown outstanding performance. They derive their computational ideas from biological neurons and are now widely used in various fields.

    In financial risk analysis, researchers have utilized neural networks to study the volatility of financial markets. Hamid and Iqbal [22] apply ANN to predict the S&P500 index implied volatility, finding that ANN's forecasting performance surpasses that of the American option pricing model. Livieris proposes an artificial neural network prediction model for forecasting gold prices and trends [23]. Additionally, Dunis and Huang [24] explore neural network regression (NNR), recurrent neural networks, and their collaborative NNR-RNN models for predicting and trading the volatility of daily exchange rates of GBP/USD and USD/JPY, with results indicating that RNNs have the best volatility forecasting performance. Beyond the direct application of neural networks, researchers have investigated a series of mixture models [25,26,27,28,29,30,31] that combine ANNs and GARCH models. Liu et al. [32] introduce a volatility forecasting model based on the recurrent neural network (RNN) and the GARCH model. Experiments reveal that such mixture models enhance the predictive capabilities of traditional GARCH models, capturing normality, skewness and kurtosis of financial volatility more accurately.

    This study employs a mixture model (DeepAR-GMM-GARCH) that combines the deep autoregressive network, the Gaussian mixture model and the GARCH model for probabilistic volatility forecasting. First, we discuss the design of the mixture model; second, we present the model's inference and training algorithm; third, we conduct a simulation experiment using artificial data and compare the outcomes with traditional GARCH models, finding that our model yields smaller RMSE and MAE; last, we investigate the correlation between the squared extreme values and the squared returns for the CSI 300 index. The empirical data are partitioned into training and test sets. After training and testing, we analyze the prediction results and observe that our proposed model outperforms the other models in both in-sample and out-of-sample analyses.

    The key contributions of this article can be summarized as follows. First, it introduces a novel conditional volatility probability prediction model, built upon a deep autoregressive network combined with a Gaussian mixture distribution, which addresses the leptokurtic and heavy-tailed traits of financial volatility. Second, we incorporate extreme values into the mixture model via the neural network and find that their inclusion enhances the accuracy of volatility predictions.

    The structure of this paper is as follows: Section 2 outlines the GARCH model and the deep autoregressive network. Section 3 delves into the mixture model, elaborating on inference, prediction and the relevant algorithm. Section 4 presents the simulation studies. Lastly, Section 5 focuses on the empirical analysis of our proposed model.

    Stock price and stock index returns are usually considered nonlinear, asymmetric and heavy-tailed, while the returns themselves are generally uncorrelated. Volatility, however, exhibits clustering, a feature first captured by Engle (1982) and Bollerslev (1986) in the ARCH and GARCH models. The GARCH model is defined as follows:

    $ r_{t} = \varepsilon_{t}\sqrt{h_{t}}, \qquad h_{t} = \alpha_{0}+\sum_{i=1}^{q}\alpha_{i}r_{t-i}^{2}+\sum_{j=1}^{p}\beta_{j}h_{t-j}, $
    (2.1)

    where $ h_{t} $ is the conditional heteroscedastic variance of return series $ r_{t} $.

    Although there are many criteria for choosing $ p $ and $ q $ in GARCH($ p $, $ q $) models, the GARCH($ 1 $, $ 1 $) model is usually sufficient to characterize the conditional volatilities.
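    As a concrete illustration of the recursion in (2.1), the short sketch below simulates a GARCH(1, 1) path in Python with NumPy; the parameter values and the initialization at the unconditional variance are illustrative assumptions rather than values taken from this paper.

```python
import numpy as np

def simulate_garch11(n, alpha0=0.02, alpha1=0.10, beta1=0.85, seed=0):
    """Simulate r_t = eps_t * sqrt(h_t), h_t = alpha0 + alpha1*r_{t-1}^2 + beta1*h_{t-1}."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    h = np.empty(n)
    h[0] = alpha0 / (1.0 - alpha1 - beta1)        # start at the unconditional variance
    r[0] = rng.standard_normal() * np.sqrt(h[0])
    for t in range(1, n):
        h[t] = alpha0 + alpha1 * r[t - 1] ** 2 + beta1 * h[t - 1]
        r[t] = rng.standard_normal() * np.sqrt(h[t])
    return r, h

returns, cond_var = simulate_garch11(1000)
```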

    The DeepAR model [33], illustrated in Figure 1, is a time series forecasting model that employs a deep autoregressive recurrent network architecture. Distinct from other time series forecasting models, DeepAR generates probabilistic predictions.

    Figure 1.  The network structure of the DeepAR model with $ p $ inputs and $ q $ outputs.

    Consider a time series $ [x_{1}, \dots, x_{t_{0}-1}, x_{t_{0}}, \dots, x_{T}] : = x_{1:T} $. Given its past values $ [x_{1}, \dots, x_{t_{0}-2}, x_{t_{0}-1}] : = x_{1:t_{0}-1} $, our objective is to predict the future values $ [x_{t_0}, \dots, x_{T}] : = x_{t_{0}:T} $. The DeepAR model constructs the conditional distribution $ P_{\Theta}(x_{t_{0}:T} \mid x_{1:t_{0}-1}) $ using a latent state $ z $, implemented by a deep recurrent network architecture. This conditional distribution comprises a product of likelihood factors:

    $ P_{\Theta}(x_{t_{0}:T}\,|\,x_{1:t_{0}-1}) = \prod_{t=t_{0}}^{T}P_{\Theta}(x_{t}\,|\,x_{1:t-1}) = \prod_{t=t_{0}}^{T}p\left(x_{t}\,|\,\theta(z_{t}, \Theta)\right). $
    (2.2)

    The likelihood $ p(x_{t}|\theta(z_{t})) $ is a fixed distribution with parameters determined by a function $ \theta(z_{t}, \Theta) $ of the network output $ z_{t} $. As suggested by the model's authors, Gaussian likelihood is appropriate for real-valued data.
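    A minimal sketch of such a likelihood layer, assuming a PyTorch implementation, is given below: two linear maps take the network state $ z_{t} $ to the Gaussian mean and scale, with a softplus keeping the scale positive. The class name, the `hidden_size` argument and the use of `torch.distributions` are our assumptions, not details from the DeepAR paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianLikelihood(nn.Module):
    """theta(z_t, Theta): map the RNN state z_t to a Gaussian distribution over x_t."""
    def __init__(self, hidden_size):
        super().__init__()
        self.mu_layer = nn.Linear(hidden_size, 1)
        self.sigma_layer = nn.Linear(hidden_size, 1)

    def forward(self, z_t):
        mu = self.mu_layer(z_t)
        sigma = F.softplus(self.sigma_layer(z_t))   # softplus keeps the scale positive
        return torch.distributions.Normal(mu, sigma)
```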

    Forecasting the probability distribution of volatility in finance and economics is an important problem. There are two main approaches. The first relies on statistical models such as the ARCH and GARCH models, which are specifically designed to capture the dynamic nature of volatility over time and help to predict future levels of volatility based on past patterns.

    The second strategy involves machine learning models, such as neural networks, which can analyze vast amounts of data and uncover patterns that may not be readily apparent to human analysts. A case in point is the DeepAR model, a series-to-series probabilistic forecasting model. Its advantages are that it produces probabilistic forecasts and allows additional covariates to be introduced. Owing to these advantages, it can be used to predict financial volatility ($ h_{t} $) based on the series $ r_{t}^{2} $. However, the DeepAR model usually assumes that $ p(x_{t}|\theta(z_{t})) $ (given in (2.2)) follows a Gaussian distribution, which may be unreasonable due to the non-negative, leptokurtic and heavy-tailed characteristics of financial volatility. To avoid this problem, the Gaussian mixture distribution can be used to describe the density of $ p(ln(x_{t})|\theta(z_{t})) $; see reference [34]. Motivated by the above results, this paper proposes an improved mixture model: DeepAR-GMM-GARCH.

    The conditional distribution of $ ln(h_{t}) $ can be expressed as:

    $ P(\ln(h_{t})\,|\,r_{1:t-1}^{2}, {\bf x}_{1:t-1}), $
    (3.1)

    where $ h_{t} $ represents the future volatility at time $ t $, $ [r_{1}, ..., r_{t-2}, r_{t-1}]: = r_{1:t-1} $ denotes the past return series during the $ [1:t-1] $ period, and $ {\bf x}_{1:t-1} $ refers to the covariate, which is observable at all times. The past time horizon is represented by $ [1:t-1] $.

    The proposed hybrid model assumes that the conditional density for the logarithm of the volatility is given by $ p(ln(h_{t})|r_{1:t-1}^{2}, {\bf x}_{1:t-1}) $, which involves a set of latent factors, denoted as $ z_{t} $. A recurrent neural network with parameters $ \Theta_{1} $, specifically an LSTM, encodes the squared returns $ r_{t}^{2} $, the input features $ {\bf x}_{t} $ and the previous latent factors $ z_{t-1} $, generating the updated latent factors $ z_{t} $. The likelihood $ p(ln(h_{t})|\theta (z_{t})) $ follows a Gaussian mixture distribution with parameters determined by a function $ \theta(z_{t}, \Theta_{2}) $ of the network output $ z_{t} $. The network architecture of the DeepAR-GMM-GARCH model is depicted in Figure 2.

    Figure 2.  The network structure of the DeepAR-GMM-GARCH model with $ m $ inputs and one output.

    Due to the complex interplay between volatility and the factors that influence it, the central component of the model assumes that the volatility $ h_{t} $ of a time series at time $ t $ is derived from the latent variable $ z_{t-1} $ at time $ t-1 $, the squared return $ r_{t-1}^{2} $ and the covariates $ {\bf x}_{t-1} $, and that $ p(\ln(h_{t})|\theta(z_{t})) $ follows a Gaussian mixture distribution with $ K $ components. In the empirical analysis, $ {\bf x}_{t-1} $ will be substituted with a vector of extreme values. A nonlinear mapping function $ g $ is used to establish this relationship. The DeepAR-GMM-GARCH model proposed in this paper is as follows.

    $ \begin{aligned}
    z_{t} &= g(z_{t-1}, r_{t-1}^{2}, {\bf x}_{t-1}, \Theta_{1}),\\
    \mu_{k, t} &= \log(1+\exp(w_{k, \mu}^{T}z_{t}+b_{k, \mu})),\\
    \sigma_{k, t} &= \log(1+\exp(w_{k, \sigma}^{T}z_{t}+b_{k, \sigma})),\\
    \pi_{k, t} &= \log(1+\exp(w_{k, \pi}^{T}z_{t}+b_{k, \pi})),\\
    P(\ln(h_{t})\,|\,z_{t}, \Theta_{2}) &\sim \sum_{k=1}^{K}\pi_{k, t}\,N(\mu_{k, t}, \sigma_{k, t}),\\
    r_{t} &= \varepsilon_{t}\sqrt{h_{t}}, \qquad \sum_{k=1}^{K}\pi_{k, t} = 1.
    \end{aligned} $
    (3.2)

    The model can be viewed as a general structure for nonlinear volatility prediction, since the conditional distribution of the perturbation $ \varepsilon_{t} $ in the model can be chosen as $ N(0, 1) $ or $ T(0, 1, v) $. This gives rise to two distinct models, referred to as DeepAR-GMM-GARCH and DeepAR-GMM-GARCH-t, respectively.
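    The architecture implied by (3.2) can be sketched as below, again assuming PyTorch: an LSTM plays the role of $ g(\cdot) $, and three softplus-transformed linear maps produce the mixture parameters from $ z_{t} $. The explicit normalization of the mixture weights is an illustrative device for enforcing $ \sum_{k}\pi_{k, t} = 1 $, since the text only states the constraint; the class and argument names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepARGMMGARCH(nn.Module):
    """Sketch of (3.2): an LSTM encoder g(.) followed by a Gaussian-mixture head."""
    def __init__(self, n_covariates, hidden_size, n_components):
        super().__init__()
        # Input at each step: the squared return r_{t-1}^2 plus the covariates x_{t-1}.
        self.lstm = nn.LSTM(input_size=1 + n_covariates,
                            hidden_size=hidden_size, batch_first=True)
        self.mu = nn.Linear(hidden_size, n_components)
        self.sigma = nn.Linear(hidden_size, n_components)
        self.pi = nn.Linear(hidden_size, n_components)

    def forward(self, r2_lagged, x_lagged):
        # r2_lagged: (batch, m, 1), x_lagged: (batch, m, n_covariates)
        z, _ = self.lstm(torch.cat([r2_lagged, x_lagged], dim=-1))
        z_t = z[:, -1, :]                                   # latent state z_t
        mu_k = F.softplus(self.mu(z_t))                     # softplus maps as in (3.2)
        sigma_k = F.softplus(self.sigma(z_t))
        pi_k = F.softplus(self.pi(z_t))
        pi_k = pi_k / pi_k.sum(dim=-1, keepdim=True)        # enforce sum_k pi_k = 1
        mix = torch.distributions.Categorical(probs=pi_k)
        comp = torch.distributions.Normal(mu_k, sigma_k)
        return torch.distributions.MixtureSameFamily(mix, comp)  # distribution of ln(h_t)
```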

    If $ p(ln(h_{t})|\theta(z_{t})) $ is assumed to follow a single Gaussian distribution, model (3.2) reduces to a simpler version:

    $ \begin{aligned}
    z_{t} &= g(z_{t-1}, r_{t-1}^{2}, {\bf x}_{t-1}, \Theta_{1}),\\
    \mu_{t} &= \log(1+\exp(w_{\mu}^{T}z_{t}+b_{\mu})),\\
    \sigma_{t} &= \log(1+\exp(w_{\sigma}^{T}z_{t}+b_{\sigma})),\\
    P(\ln(h_{t})\,|\,z_{t}, \Theta_{2}) &\sim N(\mu_{t}, \sigma_{t}^{2}),\\
    r_{t} &= \varepsilon_{t}\sqrt{h_{t}}.
    \end{aligned} $
    (3.3)

    For simplicity, we refer to the above as the DeepAR-GARCH model.

    For a given time series, our goal is to estimate the parameters $ \Theta_{1} $ of the LSTM cells and the parameters $ \Theta_{2} $ of the function $ \theta $, which applies an affine transformation followed by a softplus activation. We employ a quasi-maximum likelihood estimation method with the likelihood function $ \Theta = \arg\max_{\Theta_{1}, \Theta_{2}}\sum_{i}\log p(\widetilde{h}_{i}|\Theta_{1}, \Theta_{2}) $. Inference based on this likelihood function necessitates taking the latent variable $ z_{t} $ into account.

    The flowchart for the training algorithm of the model is described below. First, we use the BIC criterion to identify the number of mixture components, $ K $, for all samples. Each data point is assigned a label from 1 to $ K $, and each cluster $ k $ has its own mean vector and covariance matrix. Based on these results, we set the initial $ \pi_{k} $ to the proportion of data points labelled $ k $, and the initial mean vector $ \mu_{k} $ and covariance matrix $ \Sigma_{k} $ to the mean vector and covariance matrix within cluster $ k $. As a result, we obtain the parameter values $ \theta = (\widetilde \pi_{k, 0}, \widetilde \mu_{k, 0}, \widetilde \sigma_{k, 0}^{2}) $ from the initial clustering and use them to pre-train the DeepAR-GMM-GARCH model, which helps the model converge quickly. Next, we partition the training data into multiple batches, select a sample from a batch and use the sequence $ (r_{t0-m}^{2}, \dots, r_{t0-1}^{2}) $ as the input of the DeepAR-GMM-GARCH model. The model computes a set of $ \widetilde \pi_{k, t}, \widetilde \mu_{k, t}, \widetilde \sigma_{k, t}^{2} $, after which we sample from this Gaussian mixture, compute the loss, and update the parameters by gradient descent. Since direct differentiation of the sampling step is infeasible, we apply the reparameterization trick when adjusting the model's parameters. We continue this process until the end of the training cycle. Last, we feed the training samples into the trained model for prediction evaluation. The model sequentially computes the parameters of the latent variable and of the Gaussian mixture and then draws samples; the prediction results are derived from these sampled values.

    The training algorithm is shown in Algorithm 1.

    Algorithm 1 Training Procedure for DeepAR-GMM-GARCH Mixture Model
    1: for each batch do
    2:  for each $ t \in [t0-m, t0-1] $ do
    3:    if $ t $ is $ t0-m $ then
    4:      $ z_{t-1} = 0 $
    5:    else {$ t $ is not $ t0-m $}
    6:      $ z_{t} = g(z_{t-1}, r_{t-1}^{2}, x_{t-1}, \Theta_{1}) $
    7:    end if
    8:    for each $ k \in [1, K] $ do
    9:      $ \widetilde \mu_{k, t} = log(1+exp(w_{k, \mu}^{T}z_{t}+b_{k, \mu})) $
    10:      $ \widetilde \sigma_{k, t}^{2} = log(1+exp(w_{k, \sigma}^{T}z_{t}+b_{k, \sigma})) $
    11:      $ \widetilde \pi_{k, t} = log(1+exp(w_{k, \pi}^{T}z_{t}+b_{k, \pi})) $
    12:    end for
    13:    sample $ ln(\widetilde{h_{t}}) \sim GMM(\widetilde \pi_{k, t}, \widetilde \mu_{k, t}, \widetilde \sigma_{k, t}^{2}) $
    14:  end for
    15:  compute the loss and adjust the model parameters $ \Theta_{1} $, $ \Theta_{2} $ using gradient descent.
    16: end for
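    Line 13 of Algorithm 1 draws $ ln(\widetilde{h_{t}}) $ from the mixture, and direct differentiation of this sampling step is infeasible. One common way to apply the reparameterization trick here, sketched below, is to draw the component index from a categorical distribution and use the pathwise form $ \mu+\sigma\varepsilon $, so that gradients flow through the selected component's mean and scale; the exact scheme used for this step is not spelled out in the text, so this is an assumption.

```python
import torch

def sample_log_h(pi_k, mu_k, sigma_k):
    """Reparameterized draw of ln(h_t) from GMM(pi_k, mu_k, sigma_k); inputs are (batch, K)."""
    k = torch.distributions.Categorical(probs=pi_k).sample()   # component index, (batch,)
    idx = k.unsqueeze(-1)
    mu = torch.gather(mu_k, -1, idx).squeeze(-1)
    sigma = torch.gather(sigma_k, -1, idx).squeeze(-1)
    eps = torch.randn_like(mu)                                 # eps ~ N(0, 1)
    return mu + sigma * eps                                    # gradients flow via mu, sigma
```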

    During the training process, the definition of the loss function determines the prediction quality of the model. We use the average negative log-likelihood as the loss function, $ Loss = L_{h} $. In the GARCH model, we usually assume that $ \varepsilon_{t} $ follows a Gaussian distribution or a Student-t distribution; the two corresponding loss functions are as follows:

    (1) When $ \varepsilon_{t} \sim N(0, 1) $, the loss function is:

    $ L_{h} = \frac{1}{N}\sum_{t=1}^{N}\left[\log\left(\sqrt{\widetilde{h}_{t}}\right)+\frac{r_{t}^{2}}{2\widetilde{h}_{t}}\right]. $
    (3.4)

    (2) When $ \varepsilon_{t} \sim t(0, 1, v) $, the loss function is:

    $ L_{h} = \frac{1}{N}\sum_{t=1}^{N}\left[\log\left(\sqrt{\widetilde{h}_{t}}\right)+\frac{1}{2}(v+1)\log\left(1+\frac{r_{t}^{2}}{\widetilde{h}_{t}(v-2)}\right)\right]. $
    (3.5)

    To calculate the above loss functions, we need samples of $ \widetilde{h_{t}} $ obtained from Algorithm 1 in Section 3.2.1. In practice, if $ \varepsilon_{t} $ follows another distribution, then, following the idea of QMLE, we can still use the loss function given in (3.4) (see Liu and So, 2020).
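    For concreteness, the two loss functions translate directly into code; the sketch below assumes PyTorch tensors of aligned returns and sampled volatilities, and drops the normalizing constants as in (3.4) and (3.5).

```python
import torch

def gaussian_nll(h_tilde, r):
    """Average negative log-likelihood (3.4) for eps_t ~ N(0, 1), constants dropped."""
    return torch.mean(torch.log(torch.sqrt(h_tilde)) + r ** 2 / (2.0 * h_tilde))

def student_t_nll(h_tilde, r, v=6.0):
    """Average negative log-likelihood (3.5) for eps_t ~ t(0, 1, v), constants dropped."""
    return torch.mean(torch.log(torch.sqrt(h_tilde))
                      + 0.5 * (v + 1.0) * torch.log(1.0 + r ** 2 / (h_tilde * (v - 2.0))))
```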

    Experiments are carried out on volatility inference using simulated time series. These series exhibit flexibility, with both volatility and mixing coefficients changing over time, as detailed below:

    $ \begin{aligned}
    r_{t} &= \varepsilon_{t}\sqrt{h_{t}}, \qquad \varepsilon_{t}\sim N(0, 1),\\
    p(h_{t}\,|\,F_{t-1}) &= \eta_{1, t}\,\phi(\mu_{t}, \sigma_{1, t}^{2})+\eta_{2, t}\,\phi(\mu_{t}, \sigma_{2, t}^{2}),\\
    \mu_{t} &= a_{0}+a_{1}h_{t-1},\\
    \sigma_{1, t}^{2} &= \alpha_{01}+\alpha_{11}r_{t-1}^{2}+\beta_{1}\sigma_{1, t-1}^{2},\\
    \sigma_{2, t}^{2} &= \alpha_{02}+\alpha_{12}r_{t-1}^{2}+\beta_{2}\sigma_{2, t-1}^{2},\\
    \pi_{1, t} &= c_{0}+c_{1}h_{t-1},\\
    \eta_{1, t} &= \exp(\pi_{1, t})/(1+\exp(\pi_{1, t})),
    \end{aligned} $
    (4.1)

    where $ F_{t-1} $ denotes the information set through time $ t-1 $ and $ \phi $ is the Gaussian density function. $ \eta_{1, t} $ and $ \eta_{2, t} $ are the mixing coefficients of the two Gaussian distributions and satisfy $ \eta_{2, t} = 1-\eta_{1, t} $. When generating the simulation data, we set $ \alpha_{01} = 0.01 $, $ \alpha_{11} = 0.1 $, $ \beta_{1} = 0.15 $, $ \alpha_{02} = 0.04 $, $ \alpha_{12} = 0.15 $, $ \beta_{2} = 0.82 $, $ c_{0} = 0.02 $, $ c_{1} = 0.90 $, $ a_{0} = 0.02 $, $ a_{1} = 0.6 $. The time series has initial values $ r_{0} = 0.1 $, $ \sigma_{0}^{2} = 0 $, $ h_{0} = 0 $. Sample sizes of T = 500, 1000 and 1500 are considered, and the number of replications is 1000.
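    A sketch of the data-generating process (4.1) with the stated parameter values is given below. The small positive floor on the sampled $ h_{t} $ is a practical safeguard of our own, since the Gaussian mixture in (4.1) can in principle produce negative draws; it is not part of the stated design.

```python
import numpy as np

def simulate_dgp_41(T, seed=0):
    """Generate (r_t, h_t) from (4.1) with the parameter values stated in the text."""
    a01, a11, b1 = 0.01, 0.10, 0.15
    a02, a12, b2 = 0.04, 0.15, 0.82
    c0, c1, a0, a1 = 0.02, 0.90, 0.02, 0.60
    rng = np.random.default_rng(seed)
    r, h = np.zeros(T + 1), np.zeros(T + 1)
    s1, s2 = np.zeros(T + 1), np.zeros(T + 1)        # component variances, initialized at 0
    r[0] = 0.1                                       # initial values from the text
    for t in range(1, T + 1):
        mu_t = a0 + a1 * h[t - 1]
        s1[t] = a01 + a11 * r[t - 1] ** 2 + b1 * s1[t - 1]
        s2[t] = a02 + a12 * r[t - 1] ** 2 + b2 * s2[t - 1]
        pi1 = c0 + c1 * h[t - 1]
        eta1 = np.exp(pi1) / (1.0 + np.exp(pi1))
        sd = np.sqrt(s1[t]) if rng.uniform() < eta1 else np.sqrt(s2[t])
        h[t] = max(rng.normal(mu_t, sd), 1e-6)       # floor keeps the variance positive
        r[t] = rng.standard_normal() * np.sqrt(h[t])
    return r[1:], h[1:]
```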

    For the series simulated from (4.1), we apply three models to forecast their volatility, namely, the GARCH model with $ \epsilon_{t} \sim N(0, 1) $ (GARCH-n), the GARCH model with $ \epsilon_{t} \sim T(0, 1, v) $ (GARCH-t) and the DeepAR-GMM-GARCH model. Using an MCMC sampling method, the degrees of freedom for the Student-t distribution are determined to be 6. For the DeepAR-GMM-GARCH model, we use a recurrent neural network with three LSTM layers and 24 hidden nodes. For the number of input nodes m in Figure 2, we use a grid search to find the optimal value. We use the BIC rule to choose K based on the Mclust package [35]. Our model's hyperparameters are tuned with Optuna, a commonly used automatic hyperparameter optimization framework.

    Table 1 reports the average in-sample errors (RMSE and MAE) of the three volatility forecasting models. The GARCH-n and GARCH-t models show similar performance, and our DeepAR-GMM-GARCH model stands out; all models obtain a decreasing RMSE as the sample size increases. These results suggest that the proposed estimation is asymptotically convergent. Table 2 reports the average out-of-sample errors of the three volatility forecasting models. Similar to the in-sample results, our DeepAR-GMM-GARCH model is superior to the GARCH-n and GARCH-t models.

    Table 1.  The average in-sample errors of the GARCH-n, the GARCH-t and the DeepAR-GMM-GARCH models.
    Sample size    Model               RMSE                     MAE
    $ T=500 $      GARCH-n             0.1364                   0.0628
                   GARCH-t             0.1211                   0.0591
                   DeepAR-GMM-GARCH    $ \underline{0.1068} $   $ \underline{0.0548} $
    $ T=1000 $     GARCH-n             0.0621                   0.0437
                   GARCH-t             0.0578                   0.0419
                   DeepAR-GMM-GARCH    $ \underline{0.0398} $   $ \underline{0.0331} $
    $ T=1500 $     GARCH-n             0.0604                   0.0428
                   GARCH-t             0.0652                   0.0401
                   DeepAR-GMM-GARCH    $ \underline{0.0300} $   $ \underline{0.0325} $
    Note: Number of replications = 1000; the smallest error in each block is underlined.

    Table 2.  The average out-of-sample errors of the GARCH-n, the GARCH-t and the DeepAR-GMM-GARCH models.
    Sample size    Model               RMSE                     MAE
    $ T=500 $      GARCH-n             0.3564                   0.3028
                   GARCH-t             0.3271                   0.3091
                   DeepAR-GMM-GARCH    $ \underline{0.2761} $   $ \underline{0.2311} $
    $ T=1000 $     GARCH-n             0.2619                   0.2117
                   GARCH-t             0.2318                   0.2033
                   DeepAR-GMM-GARCH    $ \underline{0.2091} $   $ \underline{0.1834} $
    $ T=1500 $     GARCH-n             0.2241                   0.2179
                   GARCH-t             0.2213                   0.1971
                   DeepAR-GMM-GARCH    $ \underline{0.1911} $   $ \underline{0.1722} $
    Note: Number of replications = 1000; the smallest error in each block is underlined.


    A composite stock index summarizes the average economic performance of the whole financial market. In this section, we study the daily OHLC data of the China Shanghai-Shenzhen CSI 300 index. The OHLC data contain the daily high, low, open and close prices. Scholars have pointed out that combining the open, high, low and close prices can yield more effective volatility estimates. Hence, we also introduce OHLC data into our mixture model.

    The data of the CSI 300 index studied in this paper are from January 4, 2010, to December 30, 2021, with a total of 2916 trading days. Let $ r_{t} $ be the returns of the corresponding series, which are calculated using the closing price $ C_{t} $ series of the CSI300 index.

    $ r_{t} = 100\log\frac{C_{t+1}}{C_{t}}. $
    (5.1)
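    In code, (5.1) is a one-liner; the sketch below assumes the closing prices are held in a pandas Series.

```python
import numpy as np
import pandas as pd

def log_returns(close: pd.Series) -> pd.Series:
    """Percentage log returns r_t = 100 * log(C_{t+1} / C_t), as in (5.1)."""
    return (100.0 * np.log(close.shift(-1) / close)).dropna()
```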

    The time series of $ r_{t} $ and $ r_{t}^{2} $ are plotted in Figure 3. We can see that $ r_{t}^{2} $ displays a significant volatility clustering characteristic, and the amplitude of volatility is gradually decreasing.

    Figure 3.  The time series of $ r_{t} $ and $ r_{t}^{2} $ for the CSI 300 index, as shown in (5.1), spans from January 4, 2010, to December 30, 2021.

    The collected data are divided into training data and test data. Table 3 describes the statistical characteristics of the training and test data, respectively. The mean of $ r_{t} $ is relatively small, only 0.007465. The standard deviation is 2.251006, indicating a large degree of variation. The skewness is less than 0 and the kurtosis is greater than 3, indicating that the series is left-skewed and heavy-tailed. The test data share the same characteristics as the training data: dispersion, large volatility, left skewness and a more peaked shape than the normal distribution. This suggests that the normal distribution may not be suitable for our data, and that heavy-tailed distributions, such as the t distribution or a mixed normal distribution, could be more appropriate.

    Table 3.  Descriptive statistics for the training and test sets of CSI 300 returns.
    Data set         Period                      Mean        Std.        Skew.        Kurt.
    Training data    04/01/2010 to 29/12/2017    0.007465    2.251006    -0.755580    5.136707
    Test data        02/01/2018 to 30/12/2021    0.019125    1.717302    -0.427819    3.248347


    Besides the close price($ C_{t} $), we also introduce the high price($ H_{t} $), the open price($ O_{t} $) and the low price($ L_{t} $). Define $ u_{t} = (H_{t}-O_{t})^{2} $, $ d_{t} = (L_{t}-O_{t})^{2} $, $ c_{t} = (C_{t}-O_{t})^{2} $.

    The correlation matrix (5.2) below shows the correlation coefficients between $ u_{t} $, $ d_{t} $, $ c_{t} $ and $ r_{t}^{2} $. The correlation coefficients for the pairs ($ u_{t} $, $ r_{t}^{2} $), ($ d_{t} $, $ r_{t}^{2} $) and ($ c_{t} $, $ r_{t}^{2} $) are large. Intuitively, large values of $ u_{t} $, $ d_{t} $ and $ c_{t} $ usually mean large volatility ($ r_{t}^{2} $). However, classical volatility models do not take such extreme values into account. Consequently, it is reasonable to use a neural network together with $ u_{t} $, $ d_{t} $ and $ c_{t} $ to forecast volatility, because a neural network can incorporate additional covariates and capture the complex relations among them.

    $ \begin{bmatrix}
     & r_{t}^{2} & u_{t} & d_{t} & c_{t}\\
    r_{t}^{2} & 1.000 & 0.394 & 0.475 & 0.716\\
    u_{t} & 0.394 & 1.000 & 0.012 & 0.556\\
    d_{t} & 0.475 & 0.012 & 1.000 & 0.690\\
    c_{t} & 0.716 & 0.556 & 0.690 & 1.000
    \end{bmatrix} $
    (5.2)
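    The squared ranges and the correlation matrix in (5.2) can be reproduced from the OHLC data as sketched below; the column names of the assumed DataFrame are hypothetical.

```python
import pandas as pd

def ohlc_features(df: pd.DataFrame) -> pd.DataFrame:
    """u_t, d_t, c_t and r_t^2 from columns 'open', 'high', 'low', 'close', 'r' (returns)."""
    out = pd.DataFrame(index=df.index)
    out["r2"] = df["r"] ** 2
    out["u"] = (df["high"] - df["open"]) ** 2
    out["d"] = (df["low"] - df["open"]) ** 2
    out["c"] = (df["close"] - df["open"]) ** 2
    return out

# corr = ohlc_features(df).corr()   # correlation matrix corresponding to (5.2)
```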

    This paper uses four evaluation indicators to measure the predictive performance of the models: NMAE, HR, the linear correlation coefficient and the rank correlation coefficient. The first two are defined as follows:

    $ NMAE = \frac{\sum_{t=1}^{N}\left|r_{t+1}^{2}-\widetilde{h}_{t+1}\right|}{\sum_{t=1}^{N}\left|r_{t+1}^{2}-r_{t}^{2}\right|}, $
    (5.3)
    $ HR = \frac{1}{N}\sum_{t=1}^{N}\theta_{t}, \qquad \theta_{t} = \begin{cases} 1: & \left(\widetilde{h}_{t+1}-r_{t}^{2}\right)\left(r_{t+1}^{2}-r_{t}^{2}\right)\geq 0\\ 0: & \text{else}, \end{cases} $
    (5.4)

    where N represents the number of predicted samples. Both NMAE and HR values range between 0 and 1. The smaller the values of these two indicators, the better the model's performance.
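    The two indicators translate into code as follows; the sketch assumes NumPy arrays aligned so that element $ t $ holds $ r_{t+1}^{2} $, $ \widetilde{h}_{t+1} $ and $ r_{t}^{2} $, respectively.

```python
import numpy as np

def nmae(r2_next, h_pred_next, r2_curr):
    """NMAE in (5.3): prediction error normalized by the naive (no-change) error."""
    return np.sum(np.abs(r2_next - h_pred_next)) / np.sum(np.abs(r2_next - r2_curr))

def hit_rate(r2_next, h_pred_next, r2_curr):
    """HR in (5.4): fraction of steps where predicted and realized changes share a sign."""
    return np.mean((h_pred_next - r2_curr) * (r2_next - r2_curr) >= 0)
```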

    Scholars usually use high-frequency data volatility estimates as a proxy for actual volatility to evaluate forecasting models. We also use realized volatility($ \sigma_{RV, t}^{2} $) as a proxy for actual volatility, calculated by summing up the squares of intra-day returns every 5 minutes.

    $ \sigma_{RV, t}^{2} = \sum_{i=1}^{48}\left[\log r_{t, i}-\log r_{t, i-1}\right]^{2}. $
    (5.5)
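    As a sketch, the proxy in (5.5) can be computed from one day's 5-minute price record by summing the squared 5-minute log returns, as described in the text; the input format (a sequence of 49 prices yielding 48 returns) is an assumption.

```python
import numpy as np

def realized_volatility(intraday_prices):
    """Daily realized volatility (5.5) from a day's 5-minute prices (49 points -> 48 returns)."""
    log_p = np.log(np.asarray(intraday_prices, dtype=float))
    return np.sum(np.diff(log_p) ** 2)
```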

    We focus on the out-of-sample predictive performance of the models, so the correlation between realized volatilities $ \sigma_{RV, t+1}^{2} $ and predicted volatilities $ \widetilde{h}_{t+1} $ is measured only on the test set. We calculate Pearson's coefficient

    $ r = \frac{\sum_{t=1}^{N}\left(\sigma_{RV, t+1}^{2}-\sigma_{RV}^{2}\right)\left(\widetilde{h}_{t+1}-\widetilde{h}\right)}{\sqrt{\sum_{t=1}^{N}\left(\sigma_{RV, t+1}^{2}-\sigma_{RV}^{2}\right)^{2}}\sqrt{\sum_{t=1}^{N}\left(\widetilde{h}_{t+1}-\widetilde{h}\right)^{2}}}, $
    (5.6)

    where $ \sigma_{RV}^{2} $ and $ \widetilde{h} $ denote the respective mean values, and Spearman's rank order correlation coefficient $ r_{s} $. $ r_{s} $ is also calculated using Eq (5.6), but with the volatilities replaced by their ranks. Spearman's rank order correlation coefficient is considered more robust than Pearson's coefficient. $ r $ and $ r_{s} $ both lie between $ -1 $ and $ 1 $; a value of $ r $ ($ r_{s} $) around $ 0 $ means that the realized and predicted volatilities are uncorrelated.
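    Both coefficients are available in SciPy, so the comparison on the test set can be sketched as follows.

```python
from scipy.stats import pearsonr, spearmanr

def volatility_correlations(rv_next, h_pred_next):
    """Pearson r as in (5.6) and Spearman rank correlation r_s between aligned arrays."""
    r, _ = pearsonr(rv_next, h_pred_next)
    r_s, _ = spearmanr(rv_next, h_pred_next)
    return r, r_s
```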

    Simulation experiments demonstrate that our proposed model exhibits greater prediction accuracy than the GARCH model. In this section, to highlight the advantages of our model, we compare it to the classic GARCH model, ANN-GARCH (an existing neural network GARCH model) and the DeepAR-GARCH model using empirical data.

    Among the four models, the GARCH and ANN-GARCH models predict conditional volatility, whereas the DeepAR-GARCH and DeepAR-GMM-GARCH models provide probabilistic forecasts for conditional volatility. To facilitate comparison, we will calculate the mean and quantiles of the probability density function for conditional volatility derived from the DeepAR-GARCH and DeepAR-GMM-GARCH models.

    The estimated parameters of the GARCH models (GARCH-n and GARCH-t) are summarized in Tables 4 and 5. For the GARCH-t model, the degrees-of-freedom parameter $ v $ is estimated at around $ 6 $. Both models are estimated with a high value of $ \beta_{1} $ and a low value of $ \alpha_{1} $, and the sum of $ \alpha_{1} $ and $ \beta_{1} $ is close to 1, which implies that the sequence may be nearly non-stationary. Therefore, our model, which does not impose the stationarity constraint, is more suitable. The ANN-GARCH model employs a three-layer ANN structure, featuring two input nodes, 24 hidden nodes and a single output node. Likewise, the DeepAR-GARCH and DeepAR-GMM-GARCH models also have a three-layer design, consisting of 14 input nodes, 24 hidden nodes, and output layers with two and five output nodes, respectively.

    Table 4.  Parameter estimation of the GARCH-n model.
    Data set    $ \alpha_{0} $    $ \alpha_{1} $    $ \beta_{1} $
    CSI 300     1.8365e-04        0.1000            0.8800

    Table 5.  Parameter estimation of the GARCH-t model.
    Data set    $ \alpha_{0} $    $ \alpha_{1} $    $ \beta_{1} $    $ \nu $
    CSI 300     1.8103e-04        0.1000            0.8400           6.4625


    Table 6 lists the performance of five volatility prediction models. In the in-sample study, the DeepAR-GMM-GARCH model has the smallest HR and loss, and the DeepAR-GARCH model has the smallest NMAE. The volatility prediction performance of the DeepAR-GMM-GARCH model is better than the traditional GARCH and DeepAR models.

    Table 6.  In-sample forecasting results for the GARCH-n, GARCH-t, ANN-GARCH, DeepAR-GARCH and DeepAR-GMM-GARCH models.
    Data set     Model               Loss               NMAE               HR
    In-sample    GARCH-n             1.401              0.763              0.704
                 GARCH-t             1.331              0.761              0.637
                 ANN-GARCH           1.603              0.827              0.690
                 DeepAR-GARCH        1.541              $ {\bf 0.748} $    0.717
                 DeepAR-GMM-GARCH    $ {\bf 1.311} $    0.751              $ {\bf 0.630} $


    In Figure 4, we display a portion of the forecasting results from various models and compare them with $ r_{t}^{2} $. As shown in (a), the forecasting results from the GARCH models differ from $ r_{t}^{2} $. The GARCH models fail to capture significant changes in $ r_{t}^{2} $. From (b), (c), it is evident that the neural network models capture the trend of $ r_{t}^{2} $ and more accurately predict large fluctuations, with the DeepAR-GMM-GARCH model demonstrating the best performance. In (d), we observe that the estimated 90% quantiles from the DeepAR-GMM-GARCH model appear to be closer to the observations ($ r_{t}^{2} $).

    Figure 4.  Subplots $ (a), (b) $, $ (c) $ and $ (d) $ are the plots of the in-sample volatility forecasting results of the classic GARCH, ANN-GARCH, DeepAR-GARCH and DeepAR-GMM-GARCH models. (a) is the comparison of volatility forecasting results from the GARCH models. (b) is the comparison of volatility forecasting results between the ANN-GARCH model and $ r_{t}^{2} $. (c) is the comparison of probabilistic forecasting models between the DeepAR-GARCH model and DeepAR-GMM-GARCH model. (d) is the comparison of 90% quantile forecasting results between the DeepAR-GARCH model and DeepAR-GMM-GARCH model.

    To sum up, from the estimation results of Table 6 and the plots in Figure 4, it is shown that introducing the extreme values($ u_{t} $, $ d_{t} $, $ c_{t} $) can help to improve the forecasting accuracy of the mixture volatility models. Hence the proposed approach is of particular practical value.

    In the out-of-sample analysis, as discussed in Section 5.1, the test set comprises the time series of 972 trading days subsequent to the respective training set.

    Table 7 presents the performance of the models using common error measures (loss function, NMAE and HR). The DeepAR-GARCH model attains a lower loss function value than the other models on the test set. The neural network models display lower NMAE and HR values than the GARCH models on the training set. The DeepAR-GMM-GARCH model exhibits the lowest NMAE and HR values on the test data set.

    Table 7.  Out-of-sample forecasting results for the GARCH-n, GARCH-t, ANN-GARCH, DeepAR-GARCH and DeepAR-GMM-GARCH models.
    Data set      Model               Loss               NMAE               HR
    Out-sample    GARCH-n             2.320              0.917              0.868
                  GARCH-t             2.008              0.915              0.859
                  ANN-GARCH           2.517              0.903              0.783
                  DeepAR-GARCH        $ {\bf 1.916} $    0.929              0.801
                  DeepAR-GMM-GARCH    2.100              $ {\bf 0.790} $    $ {\bf 0.722} $


    In Figure 5, we plot part of the forecasting results from the five models on the out-of-sample data set and compare them with $ r_{t}^{2} $. It can be seen from (a) that the GARCH models do not capture significant changes in $ r_{t}^{2} $, consistent with the in-sample results. In (b) and (c), the neural network models capture most of the fluctuations of $ r_{t}^{2} $ well, with the DeepAR-GMM-GARCH model performing best.

    Figure 5.  Subplots $ (a), (b) $, $ (c) $ and $ (d) $ are the plots of the out-sample volatility forecasting results of the classic GARCH, ANN-GARCH, DeepAR-GARCH and DeepAR-GMM-GARCH models. (a) is the comparison of volatility forecasting results from the GARCH models. (b) is the comparison of volatility forecasting results between the ANN-GARCH model and realized volatility($ r_{t}^{2} $). (c) is the comparison of probabilistic forecasting models between the DeepAR-GARCH model and DeepAR-GMM-GARCH model. (d) is the comparison of 90% quantile forecasting results between the DeepAR-GARCH model and DeepAR-GMM-GARCH model.

    From (d), we find that the estimated 90% quantiles from the DeepAR-GMM-GARCH model are more closely aligned with the observations ($ r_{t}^{2} $).

    Section 5.2 mentions that the linear correlation $ r $ and the rank correlation $ r_{s} $ are two measures for comparing realized volatilities. The linear correlation r and the rank correlation $ r_{s} $ between predicted and realized volatilities of the test set are reported in Table 8. On average, the DeepAR-GMM-GARCH model shows the best performance of all models. It obtains the highest rank correlation on the test set. Rank correlation is more robust than linear correlation since it detects correlations nonparametrically.

    Table 8.  A comparison of the linear correlation ($ r $) and rank correlation ($ r_s $) between the realized volatility and the volatility predicted by the GARCH-n, GARCH-t, ANN-GARCH, DeepAR-GARCH and DeepAR-GMM-GARCH models on the CSI 300 test data set. The best model (the highest correlation) is underlined.
    Out-sample    GARCH-n    GARCH-t    ANN-GARCH    DeepAR-GARCH    DeepAR-GMM-GARCH
    $ r $         0.381      0.420      0.490        0.473           $ \underline{0.504} $
    $ r_s $       0.477      0.502      0.500        0.516           $ \underline{0.527} $


    This paper studies a mixture volatility forecasting model based on an autoregressive neural network and the GARCH model to obtain more precise forecasts of conditional volatility. The inference, loss functions and training algorithm of the mixture model are given. The simulation results show that our model performs better, with smaller errors, than the classic GARCH models. The empirical study based on the CSI 300 index shows that, with extreme values included, our model significantly improves forecasting accuracy compared with the usual models.

    Our research findings can offer valuable insights into the prediction of volatility uncertainty. In future studies, our model can be employed for various high-frequency volatility analyses, where it is anticipated to exhibit enhanced performance.

    This work is partially supported by Guangdong Basic and Applied Basic Research Foundation (2022A1515010046) and Funding by Science and Technology Projects in Guangzhou (SL2022A03J00654).

    The authors declare no conflict of interest.
