
Functional data analysis (FDA) (e.g., [1]) has drawn considerable attention in recent years, owing to its great flexibility and wide applicability in handling high-dimensional data sets. A fundamental and important tool for FDA is the functional linear model.
There is a large literature on the inference of functional linear models and their extensions; see, among others, [2,3,4] for earlier works, and [5,6,7,8,9,10] for recent works. As is well known, in the estimation of regression models the choice of loss function is essential to obtaining a highly efficient and robust estimator. Most earlier works employed the squared loss function and obtained ordinary least squares (OLS) estimators. In recent years, many other loss functions have been considered in the estimation of functional linear models and their extensions. Kato [6] and Tang and Cheng [11] studied quantile regression (QR) for functional linear models and partial functional linear models, respectively. Yu et al. [12] proposed a robust exponential squared loss (ESL) estimation procedure and established the asymptotic properties of the proposed estimators. Cai et al. [13] introduced a new robust estimation procedure for the partial functional linear model by employing a modified Huber function whose tail function is replaced by the exponential squared loss (H-ESL).
It is well known that the squared loss function reflects the features of the entire distribution, whereas the QR, ESL and H-ESL methods focus on bounding the influence of outliers when the data are heavy-tailed. Thus, a method that can both reflect distributional features and bound the influence of outliers effectively is highly desirable in data analysis. We note that, in the context of principal component analysis (PCA), Lim and Oh [14] proposed a new approach using a weighted linear combination of asymmetric Huber loss functions that captures the distributional features of the data while remaining robust to outliers. The asymmetric Huber loss functions are defined as
\begin{equation} \rho_{\tau}(u) = \begin{cases} (\tau-1)(u+0.5c^*), & u < -c^*, \\ 0.5(1-\tau)u^{2}/c^*, & -c^*\leq u < 0, \\ 0.5\tau u^{2}/c^*, & 0\leq u < c^*, \\ \tau(u-0.5c^*), & c^*\leq u, \end{cases} \tag{1.1} \end{equation}
with c^* = 1.345 , and \tau\in(0, 1) a parameter controlling the degree of skewness. The function \rho_{\tau}(\cdot) is equivalent to the Huber loss function (see [15]) when \tau equals 0.5, and nearly coincides with the quantile loss function when c^* is small enough.
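As an illustration, loss (1.1) takes only a few lines to implement; the following minimal NumPy sketch (the names `rho_tau` and `C_STAR` are ours, not from the paper) also makes it easy to check that the four pieces join continuously at u = -c^* , 0 and c^* :

```python
import numpy as np

C_STAR = 1.345  # threshold c* used throughout the paper

def rho_tau(u, tau, c=C_STAR):
    """Asymmetric Huber loss (1.1): quadratic near zero, linear in the
    tails, with the two sides weighted by tau and 1 - tau."""
    u = np.asarray(u, dtype=float)
    conds = [u < -c, (u >= -c) & (u < 0), (u >= 0) & (u < c), u >= c]
    vals = [(tau - 1.0) * (u + 0.5 * c),   # linear left tail
            0.5 * (1.0 - tau) * u**2 / c,  # quadratic left body
            0.5 * tau * u**2 / c,          # quadratic right body
            tau * (u - 0.5 * c)]           # linear right tail
    return np.select(conds, vals)
```

At \tau = 0.5 the function is symmetric (a rescaled Huber loss), while \tau\neq 0.5 penalizes positive and negative residuals unequally, in the ratio \tau:(1-\tau) on the quadratic body.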
Motivated by the appealing characteristics of the asymmetric Huber functions, in this paper we first investigate a new asymmetric Huber regression (AHR) estimation procedure to analyse skewed data in the partial functional linear model, based on functional principal component analysis. To improve the estimation accuracy over a single AHR estimate, we develop a weighted composite asymmetric Huber regression (WCAHR) estimation procedure by combining the strength across multiple asymmetric Huber regression models. A practical algorithm for the WCAHR estimators based on pseudo data is developed to implement the estimation method. The asymptotic properties of the proposed estimators are also derived. Extensive simulations are carried out to show the superiority of the proposed estimators.
Finally, we apply the proposed methods to two data sets. In the first example, we analyze the electricity data. Figure 1 presents the estimated density of the residuals and the residual diagnostic plot obtained by fitting model (4.1) in Section 4.1 via the OLS method. The distribution of the residuals is skewed and bimodal, and there are some outliers in the data set. Given that WCAHR can effectively handle such data, we use the proposed method to analyze this data set. The other example, in Section 4, considers the Tecator data set. Similarly, Figure 2 presents the density of the residuals and the residual diagnostic plot obtained by fitting model (4.2) in Section 4.2 via the OLS method, which shows that the distribution of the residuals is skewed and far from normal. WCAHR regression is thus well suited to analyzing this data set on account of its appealing features.
To our knowledge, this is the first work to discuss asymmetric Huber regression problems in the functional model framework. The proposed WCAHR method is robust to outliers while reflecting the relationship between the potential explanatory variables and the entire distribution of the response. It retains the advantages in analysing skewed data, and the obtained estimators rely on the shape of the entire distribution rather than merely on the data near a specific quantile or skewness level of the asymmetric Huber loss, thereby avoiding the limitations of those methods. These advantages are supported by both the theoretical conclusions and the numerical results. The relevant algorithm is data-adaptive, capable of reflecting the distributional features of the data without prior information, and robust to outliers.
The rest of this paper is organized as follows. In Section 2, we formally describe the estimation procedures and develop a new algorithm. We also establish the asymptotic behaviors of the proposed estimators, together with a list of technical assumptions needed in the theoretical analysis. In Section 3, the finite sample performance of the proposed estimators is evaluated through simulations. Section 4 illustrates the use of the proposed methods in the analyses of the electricity data and the Tecator data. Brief conclusions on the proposed methods are drawn in Section 5. All technical proofs are provided in Section A.
Let Y be a real-valued random variable and \mathit{\boldsymbol{Z}} = (Z_1, \cdots, Z_p)^{\rm T} a p -dimensional random vector with zero mean and finite second moment. Let \{X(t):t\in T\} be a zero-mean, second-order stochastic process with sample paths in L^2(T) , the space of square integrable functions on T with inner product \langle x, y\rangle = \int_T x(t)y(t)dt and norm \|x\| = \langle x, x\rangle^{1/2} , where T is a bounded closed interval. Without loss of generality, we suppose T = [0, 1] throughout the paper. The dependence between Y and (X, \mathit{\boldsymbol{Z}}) is expressed by the partial functional linear regression model
\begin{equation} Y = \mathit{\boldsymbol{Z}}^{\rm T}\mathit{\boldsymbol{\alpha}}+\int_0^1\beta(t)X(t)dt+e. \tag{2.1} \end{equation}
Here, the random error e is assumed to be independent of \mathit{\boldsymbol{Z}} and X , \mathit{\boldsymbol{\alpha}} = (\alpha_1, \cdots, \alpha_p)^{\rm T} is an unknown p -dimensional parameter vector, and the slope function \beta(\cdot) is an unknown square integrable function on [0, 1] .
Let (\mathit{\boldsymbol{Z}}_i, X_i(\cdot), Y_i), \ i = 1, \cdots, n , be independent observations generated by model (2.1), and let e_i = Y_i-\mathit{\boldsymbol{Z}}_i^{\rm T}\mathit{\boldsymbol{\alpha}}-\int_0^1\beta(t)X_i(t)dt, \ i = 1, \cdots, n . The covariance function and the empirical covariance function of X(\cdot) are defined as c_X(t, s) = {\rm Cov}(X(t), X(s)) and \hat{c}_X(t, s) = \frac{1}{n}\sum_{i = 1}^n X_i(t)X_i(s) , respectively. By Mercer's theorem, c_X and \hat{c}_X can be represented as
c_X(t, s) = \sum\limits_{i = 1}^{\infty}\lambda_i v_i(t)v_i(s), \qquad \hat{c}_X(t, s) = \sum\limits_{i = 1}^{\infty}\hat{\lambda}_i\hat{v}_i(t)\hat{v}_i(s),
where \lambda_1 > \lambda_2 > \cdots > 0 and \hat{\lambda}_1\geq\hat{\lambda}_2\geq\cdots\geq\hat{\lambda}_{n+1} = \cdots = 0 are, respectively, the ordered eigenvalue sequences of the covariance operator C_X and of its estimator \hat{C}_X with kernels c_X and \hat{c}_X , defined by C_Xf(s) = \int_0^1 c_X(t, s)f(t)dt and \hat{C}_Xf(s) = \int_0^1\hat{c}_X(t, s)f(t)dt , with C_X assumed strictly positive. Here \{v_i(\cdot)\} and \{\hat{v}_i(\cdot)\} are the corresponding orthonormal eigenfunction sequences, and (\hat{v}_i(\cdot), \hat{\lambda}_i) is treated as an estimator of (v_i(\cdot), \lambda_i) .
Similarly, we can define c_{YX}(\cdot) = {\rm Cov}(Y, X(\cdot)) , c_{\mathit{\boldsymbol{Z}}} = {\rm Var}(\mathit{\boldsymbol{Z}}) = {\rm E}[\mathit{\boldsymbol{Z}}\mathit{\boldsymbol{Z}}^{\rm T}] , c_{\mathit{\boldsymbol{Z}}Y} = {\rm Cov}(\mathit{\boldsymbol{Z}}, Y) , and c_{\mathit{\boldsymbol{Z}}X}(\cdot) = {\rm Cov}(\mathit{\boldsymbol{Z}}, X(\cdot)) = (c_{Z_1X}(\cdot), \cdots, c_{Z_pX}(\cdot))^{\rm T} . The corresponding empirical counterparts, defined below, can be used as their estimators:
\hat{c}_{YX} = \frac{1}{n}\sum\limits_{i = 1}^n Y_iX_i, \quad \hat{c}_{\mathit{\boldsymbol{Z}}} = \frac{1}{n}\sum\limits_{i = 1}^n \mathit{\boldsymbol{Z}}_i\mathit{\boldsymbol{Z}}_i^{\rm T}, \quad \hat{c}_{\mathit{\boldsymbol{Z}}Y} = \frac{1}{n}\sum\limits_{i = 1}^n \mathit{\boldsymbol{Z}}_iY_i, \quad \hat{c}_{\mathit{\boldsymbol{Z}}X} = \frac{1}{n}\sum\limits_{i = 1}^n \mathit{\boldsymbol{Z}}_iX_i.
By the Karhunen-Loève representation, X_i(t) and \beta(t) can be expanded as
\begin{equation} \beta(t) = \sum\limits_{j = 1}^{\infty}\gamma_j v_j(t), \qquad X_i(t) = \sum\limits_{j = 1}^{\infty}\xi_{ij}v_j(t), \quad i = 1, \cdots, n, \tag{2.2} \end{equation}
where \gamma_j = \langle\beta(\cdot), v_j(\cdot)\rangle = \int_0^1\beta(t)v_j(t)dt , and \xi_{ij} = \langle X_i(\cdot), v_j(\cdot)\rangle .
Owing to the orthogonality of \{v_1(\cdot), ..., v_m(\cdot)\} and Eq (2.2), model (2.1) can be transformed into
Y_i = \mathit{\boldsymbol{Z}}_i^{\rm T}\mathit{\boldsymbol{\alpha}}+\sum\limits_{j = 1}^m\gamma_j\xi_{ij}+\tilde{e}_i = \mathit{\boldsymbol{Z}}_i^{\rm T}\mathit{\boldsymbol{\alpha}}+\mathit{\boldsymbol{U}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}+\tilde{e}_i, \quad i = 1, \cdots, n,
where \mathit{\boldsymbol{U}}_i = (\xi_{i1}, ..., \xi_{im})^{\rm T} , \mathit{\boldsymbol{\gamma}} = (\gamma_1, ..., \gamma_m)^{\rm T} , \tilde{e}_i = \sum_{j = m+1}^{\infty}\gamma_j\xi_{ij}+e_i , and the tuning parameter m may increase with the sample size n .
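The truncated representation above is what makes estimation practical: only the first m empirical eigenpairs and scores are needed. A minimal NumPy sketch of this step on a discretized grid might look as follows (all names are ours; integrals are approximated by Riemann sums, and the sample paths are assumed centered):

```python
import numpy as np

def fpca_scores(X, t, m):
    """Empirical FPCA on discretized sample paths.

    X : (n, T) array, row i = X_i evaluated on the grid t (centered)
    t : (T,) equally spaced grid on [0, 1]
    m : truncation level
    Returns (lam, V, U): leading eigenvalues, eigenfunctions on the grid
    (columns of V), and scores U with U[i, j] ~ <X_i, v_j>.
    """
    n, _ = X.shape
    dt = t[1] - t[0]
    c_hat = X.T @ X / n                      # empirical covariance on the grid
    lam, vecs = np.linalg.eigh(c_hat * dt)   # discretized covariance operator
    idx = np.argsort(lam)[::-1][:m]          # keep the m largest eigenvalues
    lam = lam[idx]
    V = vecs[:, idx] / np.sqrt(dt)           # rescale so ||v_j|| = 1 in L2
    U = X @ V * dt                           # scores xi_hat_{ij} = <X_i, v_j>
    return lam, V, U
```

The rescaling by \sqrt{dt} converts Euclidean-orthonormal eigenvectors into (approximately) L^2 -orthonormal eigenfunctions, so the scores match the inner products \langle X_i, \hat v_j\rangle .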
Replacing v_j(\cdot) with its estimator \hat{v}_j(\cdot) , the \tau th AHR estimators \bar{\mathit{\boldsymbol{\alpha}}} and \bar{\beta}(t) = \sum_{j = 1}^m\bar{\gamma}_j\hat{v}_j(t) can be obtained by minimizing the loss function over \mathit{\boldsymbol{\alpha}} , \mathit{\boldsymbol{\gamma}} and b_{\tau} as follows:
(\bar{b}, \bar{\mathit{\boldsymbol{\alpha}}}, \bar{\mathit{\boldsymbol{\gamma}}})\triangleq\underset{(b_{\tau}, \mathit{\boldsymbol{\alpha}}, \mathit{\boldsymbol{\gamma}})}{\operatorname{argmin}}\sum\limits_{i = 1}^{n}\rho_{\tau}\Big(Y_i-b_{\tau}-\mathit{\boldsymbol{Z}}_i^{\rm T}\mathit{\boldsymbol{\alpha}}-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}\Big),
where the asymmetric Huber loss function \rho_{\tau}(u) is defined in ( 1.1 ), and \hat{\mathit{\boldsymbol{U}}}_i = (\hat{\xi}_{i1}, \cdots, \hat{\xi}_{im})^{\rm T} with \hat{\xi}_{ij} = \langle X_i(\cdot), \hat{v}_j(\cdot)\rangle . Here the true value of b_{\tau} is defined as the solution that minimizes the loss function {E}\left\{\rho_{\tau}(e-\theta) \right\} over \theta \in \mathbb{R} , and we call it the \tau th number of e with respect to (1.1).
Remark 1. In model (2.1), we suppose the intercept term is zero. In fact, if there is an intercept, we may absorb it into the distribution of e . Thus, the main aim of model (2.1) is to find the contribution of the predictors to the response, and the zero mean assumption for e is not needed.
Noting that the regression coefficients are the same across asymmetric Huber regression models with different skewness parameters, and inspired by [14] and [16], we combine the strength across multiple asymmetric Huber regression models and propose the WCAHR method. Specifically, the WCAHR estimators \hat{\mathit{\boldsymbol{\alpha}}} and \hat{\beta}(t) = \sum_{j = 1}^{m}\hat{\gamma}_j\hat{v}_j(t) can be obtained by minimizing the following loss function with respect to (\mathit{\boldsymbol{b}}, \mathit{\boldsymbol{\alpha}}, \mathit{\boldsymbol{\gamma}}) :
\begin{equation*} \label{2.1} Q_n(\mathit{\boldsymbol{b}}, \mathit{\boldsymbol{\alpha}}, \mathit{\boldsymbol{\gamma}})\triangleq\sum\limits_{i = 1}^{n}\sum\limits_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}\Big), \end{equation*} |
where \left\{\tau_{k}\right\}_{k = 1}^{K} are predetermined levels over (0, 1), b_k = b_{\tau_k} for brevity, \mathit{\boldsymbol{b}} = \left(b_1, \cdots, b_K\right)^{\rm T} , and the weights w_{1}, \ldots, w_{K} , which control the contribution of each loss function, are positive constants satisfying \sum_{k} w_{k} = 1 .
Remark 2. Generally speaking, one can choose the equidistant levels as \tau_{k} = k /(K+1) for k = 1, 2, \ldots, K for a given K , similar to what often has been done in composite quantile regression. Although one can also apply data-adaptive methods, such as cross validation, to select K , we do not pursue this topic here. As for the weights, we consider two choices. The first is using the equal weights w_{1} = \cdots = w_{K} = 1/K . The obtained estimators are called composite asymmetric Huber regression (CAHR) estimators. As the second choice in this study, to preferably describe the distribution information of the data, we consider a K -dimensional weight vector (w_{1}, \cdots, w_{K}) = \big({f}({b}_{01}), \cdots, {f}({b}_{0K})\big)\Big/\sum_{{k} = 1}^K{f}({b}_{0k}) , where \mathit{\boldsymbol{b}}_0 = (b_{01}, \cdots, b_{0K}) is the true value vector of \mathit{\boldsymbol{b}} , and f(\cdot) is the density function of the random error. In practice, we estimate f(\cdot) by kernel density estimation method.
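As a rough illustration of the second weight choice in Remark 2, the weights can be estimated from residuals by combining a one-dimensional grid search for each b_k with a Gaussian kernel density estimate. The bandwidth rule (Silverman's rule of thumb) and the grid search are our choices for the sketch; the paper does not prescribe them:

```python
import numpy as np

def adaptive_weights(res, taus, c=1.345):
    """Data-adaptive weights: w_k proportional to a kernel density
    estimate of the residual density at b_k, where b_k minimizes
    sum_i rho_{tau_k}(res_i - b)."""
    res = np.asarray(res, dtype=float)
    grid = np.linspace(res.min(), res.max(), 400)

    def rho(u, tau):
        conds = [u < -c, (u >= -c) & (u < 0), (u >= 0) & (u < c), u >= c]
        vals = [(tau - 1.0) * (u + 0.5 * c), 0.5 * (1.0 - tau) * u**2 / c,
                0.5 * tau * u**2 / c, tau * (u - 0.5 * c)]
        return np.select(conds, vals)

    # grid search for each b_k (any 1-d convex minimizer would do)
    b = np.array([grid[np.argmin([rho(res - g, tau).sum() for g in grid])]
                  for tau in taus])
    # Gaussian KDE with Silverman's rule-of-thumb bandwidth
    h = 1.06 * res.std() * len(res) ** (-1 / 5)
    fhat = np.array([np.exp(-0.5 * ((res - bk) / h) ** 2).mean()
                     / (h * np.sqrt(2.0 * np.pi)) for bk in b])
    return fhat / fhat.sum()
```

For symmetric residuals the middle levels receive the largest weights, since the density is highest near the center, which is exactly the intended behavior of the adaptive scheme.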
Denote \mathcal{S} = \big\{(Y_i, \mathit{\boldsymbol{Z}}_i, X_i(\cdot)):1\leq i\leq n\big\} . Given a new copy of (\mathit{\boldsymbol{Z}}, X) , namely the predictor variables (\mathit{\boldsymbol{Z}}_{n+1}, {X_{n+1}(\cdot)}) , once we obtain the estimates of \mathit{\boldsymbol{\alpha}} and \beta(t) , the mean squared prediction error (MSPE) can be computed. Taking asymmetric Huber regression as an example,
\begin{equation*} \begin{split} &{\rm{MSPE}}_{AHR}\\ = &{E\Bigg[\bigg\{\left(\bar{b}+\mathit{\boldsymbol{Z}}_{n+1}^{\rm T}\bar{\mathit{\boldsymbol{\alpha}}}+\int^{1}_{0}\bar{\beta}(t)X_{n+1}(t)dt\right)-\left(b_\tau+\mathit{\boldsymbol{Z}}_{n+1}^{\rm T}{\mathit{\boldsymbol{\alpha}}_0}+\int^{1}_{0}{\beta}_0(t)X_{n+1}(t)dt\right)\bigg\}^2\Bigg|\mathcal{S}\Bigg]}, \end{split} \end{equation*} |
where \mathit{\boldsymbol{\alpha}}_0 and \beta_0(t) are the true values of \mathit{\boldsymbol{\alpha}} and \beta(t) , respectively. The MSPEs of CAHR and WCAHR have analogous definitions, and are denoted by {\rm{MSPE}}_{CAHR} and {\rm{MSPE}}_{WCAHR} , respectively.
Note that the minimization problems for the AHR and CAHR estimators are special cases of the WCAHR method. Here, we present only the practical algorithm for WCAHR based on pseudo data; a similar argument can be found in [14] for implementing data-adaptive principal component analysis. The algorithm is described as follows.
Step 1. Choose initial estimators \hat{\boldsymbol{\alpha}}^{(0)} and \hat{\boldsymbol\gamma}^{(0)} for \boldsymbol{\alpha}_{0} and \boldsymbol\gamma_{0} , respectively.
Step 2. Iterate the following three steps for L = 0, 1, \ldots until convergence.
(a) Compute the residuals as \hat{e}_{i}^{(L)} = Y_{i}-{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T} \hat{\boldsymbol\alpha}^{(L)}-\hat{{\mathit{\boldsymbol{U}}}}_{i}^{\rm T} \hat{\boldsymbol\gamma}^{(L)} . (b) Calculate the empirical pseudo data vector \mathit{\boldsymbol{G}}^{(L)} = ({G}_1^{(L)}, \cdots, {G}_n^{(L)})^{\rm T} in the element-wise way, G_{i}^{(L)} = {{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T} \hat{\boldsymbol\alpha}^{(L)}+\hat{{\mathit{\boldsymbol{U}}}}_{i}^{\rm T} \hat{\boldsymbol\gamma}^{(L)}+ \sum_{k = 1}^{K} w_{k} \psi_{\tau_{k}}\big(\hat{e}_{i}^{(L)}-\hat{b}_k^{(L)}\big), for given weights \left(w_{1}, \cdots, w_{K}\right) and \hat{b}_k^{(L)} = {\rm argmin}_\mu\sum_{i = 1}^n \rho_{\tau_k}\left(\hat{e}_{i}^{(L)}-\mu\right) at each k . Here \psi_{\tau_k}(u) = \rho'_{\tau_k}(u) = ({\tau_k}-1)I(u < -c^*)+\frac{(1-{\tau_k})}{c^*}uI(-c^*\leq u < 0)+\frac{\tau_k}{c^*}uI(0\leq u < c^*)+\tau_kI(u\geq c^*) . (c) Obtain next iterative estimates \hat{\boldsymbol\alpha}^{(L+1)} and \hat{\boldsymbol\gamma}^{(L+1)} by using the OLS method for response variable \tilde{Y}_{i} = G_{i}^{(L)} and covariates {{\mathit{\boldsymbol{Z}}}}_{i}, \hat{{\mathit{\boldsymbol{U}}}}_{i} .
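The iteration above can be sketched in NumPy as follows. This is an illustrative implementation under our own conventions: the one-dimensional argmin in Step 2(b) is solved by bisection on \sum_i\psi_{\tau_k}(\hat e_i^{(L)}-b) = 0 (any one-dimensional solver would do), and all function names are ours:

```python
import numpy as np

C_STAR = 1.345

def psi(u, tau, c=C_STAR):
    """psi_tau = rho'_tau, the derivative of loss (1.1)."""
    u = np.asarray(u, dtype=float)
    conds = [u < -c, (u >= -c) & (u < 0), (u >= 0) & (u < c), u >= c]
    vals = [np.full_like(u, tau - 1.0), (1.0 - tau) * u / c,
            tau * u / c, np.full_like(u, tau)]
    return np.select(conds, vals)

def _b_hat(e, tau, c=C_STAR):
    """Inner argmin of Step 2(b): solve sum_i psi_tau(e_i - b) = 0,
    which is monotone in b, by bisection."""
    lo, hi = e.min() - 1.0, e.max() + 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if psi(e - mid, tau, c).sum() > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def wcahr_fit(Y, Z, U, taus, w, n_iter=200, tol=1e-10):
    """Pseudo-data iteration: Z are scalar covariates (n, p), U are the
    estimated FPCA scores (n, m). Returns (alpha_hat, gamma_hat)."""
    D = np.column_stack([Z, U])                     # design matrix
    theta, *_ = np.linalg.lstsq(D, Y, rcond=None)   # Step 1: OLS start
    for _ in range(n_iter):
        e = Y - D @ theta                           # Step 2(a): residuals
        b = [_b_hat(e, tk) for tk in taus]          # Step 2(b): b_k's
        G = D @ theta + sum(wk * psi(e - bk, tk)    # pseudo data G_i
                            for wk, tk, bk in zip(w, taus, b))
        theta_new, *_ = np.linalg.lstsq(D, G, rcond=None)  # Step 2(c): OLS
        if np.linalg.norm(theta_new - theta) < tol:
            theta = theta_new
            break
        theta = theta_new
    p = Z.shape[1]
    return theta[:p], theta[p:]
```

At a fixed point, the OLS normal equations give D^{\rm T}\sum_k w_k\psi_{\tau_k}(\hat e-\hat b_k) = 0 , which is exactly the first-order condition of the WCAHR objective, so the pseudo-data OLS updates target the WCAHR minimizer.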
In this section, we establish the asymptotic properties of the estimators defined in the previous section. We first present some notation: \mathit{\boldsymbol{\gamma}}_0 = (\gamma_{01}, ..., \gamma_{0m})^{\rm T} is the true value of \mathit{\boldsymbol{\gamma}} , and F(\cdot) is the cumulative distribution function of the random error. In addition, the notation \|\cdot\| represents the \mathcal{L}^2 norm of a function or the Euclidean norm of a vector, and a_n\sim b_n indicates that a_n/b_n is bounded away from zero and infinity as n \rightarrow \infty . For simplicity, throughout this paper C represents a generic positive constant whose value may change from line to line. Next, to obtain the asymptotic properties, some technical assumptions are listed as follows.
C1. The random process X(\cdot) and score \xi_j = \langle X(\cdot), v_j(\cdot)\rangle satisfy the following condition: {\rm {E}}\|X(\cdot)\|^4 < \infty and {\rm{E}}[\xi_j^4]\leq C\lambda_j^2, \ j \geq 1 .
C2. The eigenvalues of C_X and the score coefficients fulfil the conditions below:
{(a)} There exist constants C and a > 1 such that C^{-1}j^{-a}\leq\lambda_j\leq Cj^{-a}, \lambda_j-\lambda_{j+1}\geq Cj^{-a-1}, \ j \geq 1 ;
{(b)} There exist constants C and b > a/2+1 such that |\gamma_j|\leq Cj^{-b}, \ j \geq 1 .
C3. The random vector \mathit{\boldsymbol{Z}} satisfies {\rm {E}}\|\mathit{\boldsymbol{Z}}\|^4 < \infty .
C4. There is some constant C such that \mid \langle {c}_{ {Z}_lX}, v_j\rangle \mid \leq Cj^{-(a+b)}, \; l = 1, \cdots, p, \ j \geq 1 .
C5. Let \eta_{il} = {Z}_{il}-\langle g_l, X_i \rangle with g_l = \sum\limits_{j = 1}^\infty\lambda_j^{-1}\langle { c}_{{Z}_lX}, v_j\rangle v_j , and \boldsymbol \eta_i = (\eta_{i1}, \cdots, \eta_{ip})^{\rm T} , then {\boldsymbol\eta}_{1}, \cdots, {\boldsymbol\eta}_{n} are i.i.d random vectors. We further assume that {{\rm {E}}[{\boldsymbol\eta}_{i}|{X_i}] = 0, \; \; {\rm {E}}[{\boldsymbol \eta_i}{\boldsymbol \eta_i}^{\rm T}| X_i]{\overset{a.s.}{ = }}\mathit{\boldsymbol{\Sigma}}} is a constant positive definite matrix.
C6. b_\tau is the unique solution of {\rm {E}}\left[\rho_{\tau}^{\prime}\left(e-b_{\tau}\right)\right] = 0 , and h_{\tau}(b_{\tau}) = (1-\tau)f(b_{\tau}-c^*)+\frac{1-\tau}{c^*}(F(b_{\tau})-F(b_{\tau}-c^*)) +\frac{\tau}{c^*}(F(b_{\tau}+c^*)-F(b_{\tau}))+\tau f(b_{\tau}+c^*) is continuous at b_{\tau} . Furthermore, we suppose that h_{\tau}(b_{\tau}) > 0 .
C6'. b_{0k} is the unique solution of E\left[\rho_{\tau_{k}}^{\prime}\left(e_i-b_{0 k}\right)\right] = 0 , h_{\tau_k}(b_{0k}) = (1-\tau_k)f(b_{0k}-c^*)+\frac{1-\tau_k}{c^*}(F(b_{0k})-F(b_{0k}-c^*)) +\frac{\tau_k}{c^*}(F(b_{0k}+c^*)-F(b_{0k}))+\tau_kf(b_{0k}+c^*) is continuous at b_{0k} , k = 1, \cdots, K . Furthermore, there exist some positive constants C_1, \; C_2 such that 0 < C_1\leq\underset{1\leq k\leq K}\min h_{\tau_k}(b_{0k})\leq\underset{1\leq k\leq K}\max h_{\tau_k}(b_{0k})\leq C_2 < +\infty .
C1 is the condition commonly used for establishing the consistency of the empirical covariance operator of X in functional linear models and partial functional regression models. For example, it has been adopted in [3,17,18] (mean regression), [6,11] (quantile regression), and [12,13] (robust estimation procedures), among others. C2(a) is used to identify the slope function \beta(t) by preventing the spacings between eigenvalues from being too small, and C2(b) ensures that the slope function \beta(t) is sufficiently smooth. C3–C5 are needed to deal with the vector-type covariate in model ( 2.1 ) (see [19]). More concretely, C3 is for the asymptotic behaviors of \hat{{c}}_{{{\mathit{\boldsymbol{Z}}}}X} and \hat{{c}}_{{{\mathit{\boldsymbol{Z}}}}} . C4 is used to ensure that the effect of truncation on the estimator of {\beta}(\cdot) is sufficiently small. C5 is a commonly used condition in the literature on partial functional regression models (see, for example, [4,20,21]). The assumptions on {\rm {E}}[{\boldsymbol\eta}_{i}|{X_i}] and {\rm {E}}[{\boldsymbol \eta_i}{\boldsymbol \eta_i}^{\rm T}|X_i] are slightly strong, and are used to fix the identifiability of {\boldsymbol \alpha} and to simplify the proofs of the theorems. It is easy to see that \langle g_l, X_i \rangle is the projection of Z_l onto X_i , and that {{\rm{E}}}({\boldsymbol \eta_i}) = 0 , {\rm Cov}\left(\eta_{il}, \langle g_l, X_i \rangle\right) = 0 , {\rm E}\|\boldsymbol \eta_i\|^4 < \infty even without these assumptions. These facts can be used to obtain results similar to the following theorems with more complicated techniques (see, for example, [6]) and more conditions to ensure identifiability. Other types of conditions on {\boldsymbol \eta_i} can be found in [11,22]. C6 and C6 ^\prime are specific to the AHR and WCAHR (CAHR) cases, respectively, and are primarily used to ensure the asymptotic behaviors of our estimators.
The following theorems discuss the convergence rate of the estimated {\beta}(\cdot) , the asymptotic normality of the estimated {\boldsymbol\alpha} , and the convergence rate of the mean squared prediction error. To this end, we further assume that (\mathit{\boldsymbol{Z}}_{n+1}, {X_{n+1}(\cdot)}) is independent of \mathcal{S} .
The next theorem establishes the large sample properties of the AHR estimators.
Theorem 1. Suppose that the Conditions C1–C6 are satisfied, and the tuning parameter m\sim n^{1/(a+2b)} , then
\begin{equation*} \begin{split} & \|\bar{\beta}(\cdot)-\beta_0(\cdot)\|^2 = O_p(n^{-{\frac{2b-1}{a+2b}}}),\\ & \sqrt{n}(\bar{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0) \stackrel{d}{\longrightarrow} N\left(0, \frac{V}{\left\{h_{\tau}\left(b_{\tau}\right)\right\}^2} \mathit{\boldsymbol{\Sigma}}^{-1}\right),\\ &\mathit{MSPE}_{AHR} = O_p(n^{-{\frac{a+2b-1}{a+2b}}}), \end{split} \end{equation*} |
where \stackrel{d}{\longrightarrow} represents convergence in distribution, and V = \mathit{E}\left[\psi_{\tau}^2\left(e-b_{\tau}\right)\right] with \psi_{\tau}(u) = \rho'_{\tau}(u) = ({\tau}-1)I(u < -c^*)+\frac{(1-{\tau})}{c^*}uI(-c^*\leq u < 0)+\frac{\tau}{c^*}uI(0\leq u < c^*)+\tau I(u\geq c^*) .
The asymptotic properties of the proposed WCAHR estimators are presented in the following theorem.
Theorem 2. Under the Conditions C1–C5 and C6 ' , if the tuning parameter is taken as m\sim n^{1/(a+2b)} , then
\begin{equation*} \begin{split} \label{alp} & \|\hat{\beta}(\cdot)-\beta_0(\cdot)\|^2 = O_p(n^{-{\frac{2b-1}{a+2b}}}),\\ &\sqrt{n}(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0) \stackrel{d}{\longrightarrow} N\left(0, \frac{{{\mathit{\boldsymbol{w}}}}^{T} {{\mathit{\boldsymbol{V}}}} {{\mathit{\boldsymbol{w}}}}}{\left\{\sum\limits_{k = 1}^{K}w_{k}h_{\tau_{k}}\left(b_{0 k}\right)\right\}^2} {\boldsymbol\Sigma}^{-1}\right),\\ &\mathit{MSPE}_{WCAHR} = O_p(n^{-{\frac{a+2b-1}{a+2b}}}), \end{split} \end{equation*} |
where {{\mathit{\boldsymbol{w}}}} = \left(w_{1}, \ldots, w_{K}\right)^{T} and \mathit{\boldsymbol{V}} = \left({V_{kl}}\right)_{1 \leq k, \; l \leq K} , here V_{k l} = \mathit{E}\left[\psi_{\tau_{k}}\left(e-b_{0 k}\right)\psi_{\tau_{l}}\left(e-b_{0 l}\right)\right] with 1 \leq k, l \leq K .
Remark 3. The results illustrate that the slope function estimator has the same convergence rate as the estimators in [6] and [4], which is optimal in the minimax sense. Note that, as in quantile regression, no moment condition on the error term is needed here. In addition, we notice that the rate attained in predicting {Y}_{n+1} is faster than the rate attained in estimating \beta(t) . The reason, taking MSPE_{AHR} as an example, is that the integral operator provides additional smoothness in computing \int^{1}_{0}\bar{\beta}(t)X_{n+1}(t)dt from \bar\beta(t) .
Remark 4. If all w_{k} s are equal, then Theorem 2 reduces to the asymptotic properties of the CAHR estimators. Taking \tau_{1} = \tau , it is easy to see that there is a weight vector {{\mathit{\boldsymbol{w}}}} such that \frac{{{\mathit{\boldsymbol{w}}}}^{T} {{\mathit{\boldsymbol{V}}}} {{\mathit{\boldsymbol{w}}}}}{\left\{\sum_{k = 1}^{K}w_{k}h_{\tau_{k}}\left(b_{0 k}\right)\right\}^2} < \frac{V}{\left\{h_{\tau}\left(b_{01}\right)\right\}^2} . Note that the right hand side of the inequality is just the asymptotic variance given in Theorem 1.
In this section, a Monte Carlo simulation is used to investigate the finite sample properties of the proposed estimation approaches. The data sets used in the simulation are generated from the following model,
\begin{equation*} Y = \mathit{\boldsymbol{Z}}^{{\rm T}}\boldsymbol\alpha+\int_0^1 X(t)\beta(t) dt +\sigma (\mathit{\boldsymbol{Z}},X)e, \end{equation*} |
where the slope function \beta(t) = \sqrt 2 \sin(\pi t/2)+3 \sqrt 2 \sin(3 \pi t/2) , and X(t) = \sum\limits_{j = 1}^{50} \xi_j \phi_j(t) , here \phi_j(t) = \sqrt 2 \sin((j-0.5)\pi t) , and \xi_j s are mutually independent normal random variables with mean 0 and variance \lambda_j = ((j-0.5)\pi)^{-2} . The true values of parameters are set as \mathit{\boldsymbol{\alpha}} = (\alpha_1, \alpha_2)^{\rm T} = (10, 5)^{\rm T} , and {\mathit{\boldsymbol{Z}}}\sim N\left(0, \Sigma_{{\mathit{\boldsymbol{Z}}}}\right) with \left(\Sigma_{{\mathit{\boldsymbol{Z}}}}\right)_{i, j} = 0.75^{|i-j|} for i, j = 1, 2 .
Five different distributions for e are considered: (a) standard normal distribution N(0, 1) ; (b) positively skewed normal distribution {SN}(0, 1, 15) ; (c) positively skewed t -distribution {St}(0, 1, 5, 3) ; (d) mixture of normals (MN) 0.95 N(0, 1)+0.05 N\left(0, 10^{2}\right) , which produces outliers in the response; (e) bimodal distribution ( Bimodal ) {\tilde{\eta}}N(-1.2, 1) + (1-\tilde{\eta})N(1.2, 1) with \tilde{\eta}\sim Binomial(1, 0.5) . The multiplier \sigma(\mathit{\boldsymbol{Z}}, X) is generated from either of the following two models:
(A) (homoscedastic) \sigma(\mathit{\boldsymbol{Z}}, X) = 1 ;
(B) (heteroscedastic) \sigma(\mathit{\boldsymbol{Z}}, X) = { \left|1+0.1\left({Z_{1}\alpha_1^*+Z_{2}\alpha_2^*+\int_0^1 X(t)\beta^*(t) dt}\right)\right|} , where \alpha_1^* = \alpha_2^* = 1 , and \beta^*(t) = \sqrt 2 \sin(\pi t/2)+\sqrt 2 \sin(3 \pi t/2) .
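A data set from this simulation design can be generated as in the sketch below (shown for the standard normal error case; the other error laws (b)–(e) plug in the same way). The grid size and the Riemann-sum approximation of \int_0^1 X(t)\beta(t)dt are our choices for the illustration:

```python
import numpy as np

def simulate(n, hetero=False, rng=None):
    """One data set from the Section 3 design with N(0, 1) errors."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 1.0, 101)
    dt = t[1] - t[0]
    J = 50
    j = np.arange(1, J + 1)
    lam = ((j - 0.5) * np.pi) ** -2.0
    phi = np.sqrt(2) * np.sin(np.outer(t, (j - 0.5) * np.pi))  # (T, J) basis
    xi = rng.normal(0.0, np.sqrt(lam), size=(n, J))            # scores
    X = xi @ phi.T                                             # sample paths
    beta = (np.sqrt(2) * np.sin(np.pi * t / 2)
            + 3 * np.sqrt(2) * np.sin(3 * np.pi * t / 2))
    alpha = np.array([10.0, 5.0])
    Sz = np.array([[1.0, 0.75], [0.75, 1.0]])
    Z = rng.multivariate_normal(np.zeros(2), Sz, size=n)
    fint = (X * beta).sum(axis=1) * dt          # Riemann sum for the integral
    e = rng.normal(0.0, 1.0, n)
    if hetero:                                   # model (B)
        beta_s = (np.sqrt(2) * np.sin(np.pi * t / 2)
                  + np.sqrt(2) * np.sin(3 * np.pi * t / 2))
        sig = np.abs(1.0 + 0.1 * (Z[:, 0] + Z[:, 1]
                                  + (X * beta_s).sum(axis=1) * dt))
    else:                                        # model (A)
        sig = 1.0
    Y = Z @ alpha + fint + sig * e
    return Y, Z, X, t
```

Note that \beta = \phi_1 + 3\phi_2 in the eigenbasis \phi_j(t) = \sqrt{2}\sin((j-0.5)\pi t) , so the true score coefficients are \gamma_1 = 1 and \gamma_2 = 3 .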
Implementing the proposed estimation method requires the predetermined levels over (0, 1), i.e., \left\{\tau_{k}\right\}_{k = 1}^{K} . Similar to the setting in [14], we take K = 19 , and choose the equidistant levels \tau_{k} = k /(K+1) , k = 1, 2, \ldots, K . In addition, for the WCAHR estimator, we employ the adaptive weights given in Remark 2.
For comparison, we also calculate the OLS estimator, the least absolute deviation (LAD) estimator, the ESL estimator, the H-ESL estimator, the Huber estimator (which corresponds to the case of AHR estimator at \tau = 0.5 ), the CAHR estimator, and the AHR estimators at \tau = 0.25 and 0.75 . In this study, the sample size n is set as 200 or 400.
To implement these methods, we need to choose the tuning parameter m . In this paper, m is selected by the cumulative percentage of total variability (CPV) method, that is,
m = \underset{p}{\operatorname{argmin}}\bigg\{ \sum\limits_{i = 1}^{p}\hat{\lambda}_i\Big/\sum\limits_{i = 1}^{\infty}\hat{\lambda}_i\geqslant\delta \bigg\}, |
where \delta equals 85\% . Other criteria, such as AIC and BIC, can also be employed.
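The CPV rule amounts to a one-liner over the estimated eigenvalues (helper name ours):

```python
import numpy as np

def select_m_cpv(lam_hat, delta=0.85):
    """Smallest m whose leading eigenvalues explain at least a
    fraction delta of the total estimated variance."""
    lam_hat = np.asarray(lam_hat, dtype=float)
    frac = np.cumsum(lam_hat) / lam_hat.sum()
    return int(np.searchsorted(frac, delta) + 1)
```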
For each setting and each method, the bias (Bias) and standard deviation (Sd) of the estimated \alpha_1 and \alpha_2 , the mean squared error (MSE) of the estimated \mathit{\boldsymbol{\alpha}} with {\rm{MSE}} = \frac{1}{S}\sum_{s = 1}^S\sum_{j = 1}^2(\hat{\alpha}_j^s-{\alpha}_j)^2 , and the mean integrated squared error (MISE) of the estimated \beta(t) over S = 500 repetitions are summarized, where {\rm{MISE}} = \frac{1}{100S}\sum_{s = 1}^S\sum_{i = 1}^{100} {\left(\hat{\beta}^s(t_i)-{\beta}(t_i)\right)}^2 with t_i s being 100 equally spaced grid points in [0, 1]. Here \hat{\alpha}_j^s and \hat{\beta}^s(\cdot) are the estimates of {\alpha}_j and {\beta}(\cdot) from the s th replication, j = 1, 2 .
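The two Monte Carlo summaries can be computed as follows (function names ours; `alpha_hats` and `beta_hats` stack the S replications row-wise):

```python
import numpy as np

def mse_alpha(alpha_hats, alpha_true):
    """MSE over S replications: mean of ||alpha_hat^s - alpha||^2."""
    d = np.asarray(alpha_hats, dtype=float) - np.asarray(alpha_true, dtype=float)
    return (d ** 2).sum(axis=1).mean()

def mise_beta(beta_hats, beta_true):
    """MISE on the 100-point grid: mean over replications and grid points
    of (beta_hat^s(t_i) - beta(t_i))^2."""
    d = np.asarray(beta_hats, dtype=float) - np.asarray(beta_true, dtype=float)
    return (d ** 2).mean()
```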
Table 1 presents the results in the homoscedastic case. From Table 1, we can see the following facts: (a) The Sd, MSE and MISE decrease as the sample size n increases from 200 to 400. (b) The proposed estimators are almost unbiased, which, combined with fact (a), further illustrates their consistency. (c) The proposed adaptively weighted estimator performs similarly to the OLS estimator under the normal error, is comparable to the corresponding H-ESL estimator under the mixture of normal distributions, and is significantly better than the other estimators considered when the error distribution is skewed or bimodal, while remaining robust to outliers. This demonstrates that the proposed WCAHR estimator adapts well to different error distributions and is thus more useful in practice.
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | Bias(\hat{\alpha}_1) | Sd(\hat{\alpha}_1) | Bias(\hat{\alpha}_2) | Sd(\hat{\alpha}_2)
N(0, 1) | 200 | OLS | 0.2690 | 0.0229 | -0.0048 | 0.1074 | 0.0096 | 0.1059 |
LAD | 0.3294 | 0.0354 | -0.0099 | 0.1320 | 0.0115 | 0.1333 | ||
ESL | 0.3434 | 0.0394 | 0.0009 | 0.1405 | -0.0013 | 0.1403 | ||
H-ESL | 0.2824 | 0.0259 | 0.0055 | 0.1141 | -0.0059 | 0.1133 | ||
AHR(0.25) | 0.3294 | 0.0786 | -0.0077 | 0.2014 | 0.0105 | 0.1948 | ||
Huber | 0.2758 | 0.0234 | -0.0063 | 0.1090 | 0.0102 | 0.1067 | ||
AHR(0.75) | 0.5251 | 0.0754 | -0.0092 | 0.1894 | 0.0150 | 0.1981 | ||
CAHR | 0.2689 | 0.0233 | -0.0051 | 0.1095 | 0.0091 | 0.1060 | ||
WCAHR | 0.2693 | 0.0230 | -0.0055 | 0.1080 | 0.0101 | 0.1058 | ||
400 | OLS | 0.1004 | 0.0105 | -0.0011 | 0.0710 | 0.0015 | 0.0738 | |
LAD | 0.1304 | 0.0163 | -0.0027 | 0.0880 | 0.0045 | 0.0924 | ||
ESL | 0.1349 | 0.0175 | 0.0010 | 0.0918 | 0.0008 | 0.0955 | ||
H-ESL | 0.1031 | 0.0113 | 0.0011 | 0.0727 | 0.0010 | 0.0773 | ||
AHR(0.25) | 0.1304 | 0.0358 | -0.0048 | 0.1313 | -0.0013 | 0.1361 | ||
Huber | 0.1048 | 0.0107 | -0.0018 | 0.0720 | 0.0027 | 0.0742 | ||
AHR(0.75) | 0.2337 | 0.0405 | 0.0043 | 0.1454 | 0.0019 | 0.1390 | ||
CAHR | 0.1006 | 0.0107 | -0.0014 | 0.0721 | 0.0020 | 0.0743 | ||
WCAHR | 0.1005 | 0.0105 | -0.0014 | 0.0709 | 0.0019 | 0.0736 | ||
{SN}(0, 1, 15) | 200 | OLS | 0.2929 | 0.0241 | -0.0037 | 0.1127 | -0.0012 | 0.1065 |
LAD | 0.2452 | 0.0137 | -0.0001 | 0.0844 | -0.0007 | 0.0813 | ||
ESL | 0.3665 | 0.0387 | -0.0031 | 0.1412 | -0.0023 | 0.1368 | ||
H-ESL | 0.3377 | 0.0305 | -0.0009 | 0.1244 | -0.0028 | 0.1225 | ||
AHR(0.25) | 0.2260 | 0.0086 | 0.0012 | 0.0652 | -0.0022 | 0.0655 | ||
Huber | 0.1998 | 0.0098 | -0.0011 | 0.0698 | -0.0007 | 0.0702 | ||
AHR(0.75) | 0.2314 | 0.0172 | -0.0038 | 0.0949 | -0.0001 | 0.0903 | ||
CAHR | 0.2122 | 0.0099 | -0.0007 | 0.0717 | -0.0010 | 0.0691 | ||
WCAHR | 0.1884 | 0.0086 | 0.0002 | 0.0659 | -0.0017 | 0.0652 | ||
400 | OLS | 0.0994 | 0.0122 | -0.0024 | 0.0794 | 0.0052 | 0.0769 | |
LAD | 0.0718 | 0.0065 | -0.0004 | 0.0569 | 0.0001 | 0.0568 | ||
ESL | 0.1372 | 0.0193 | -0.0012 | 0.1003 | 0.0031 | 0.0962 | ||
H-ESL | 0.1022 | 0.0139 | -0.0043 | 0.0832 | 0.0056 | 0.0832 | ||
AHR(0.25) | 0.0796 | 0.0037 | 0.0028 | 0.0418 | -0.0010 | 0.0437 | ||
Huber | 0.0688 | 0.0045 | 0.0005 | 0.0468 | 0.0003 | 0.0483 | ||
AHR(0.75) | 0.0912 | 0.0082 | -0.0010 | 0.0627 | 0.0019 | 0.0652 | ||
CAHR | 0.0712 | 0.0043 | 0.0007 | 0.0454 | 0.0009 | 0.0471 | ||
WCAHR | 0.0567 | 0.0035 | -0.0009 | 0.0418 | 0.0018 | 0.0424 | ||
{St}(0, 1, 5, 3) | 200 | OLS | 0.4781 | 0.0695 | 0.0047 | 0.1815 | -0.0185 | 0.1902 |
LAD | 0.2461 | 0.0215 | 0.0057 | 0.1052 | -0.0094 | 0.1015 | ||
ESL | 0.3793 | 0.0451 | 0.0023 | 0.1538 | -0.0062 | 0.1462 | ||
H-ESL | 0.3682 | 0.0443 | 0.0059 | 0.1533 | -0.0078 | 0.1437 | ||
AHR(0.25) | 0.2541 | 0.0203 | 0.0014 | 0.0987 | 0.0039 | 0.1025 | ||
Huber | 0.2829 | 0.0284 | 0.0082 | 0.1204 | -0.0013 | 0.1175 | ||
AHR(0.75) | 1.8541 | 0.3745 | 0.0128 | 0.4566 | -0.0243 | 0.4065 | ||
CAHR | 0.3606 | 0.0286 | 0.0037 | 0.1188 | -0.0001 | 0.1202 | ||
WCAHR | 0.2205 | 0.0169 | 0.0018 | 0.0920 | -0.0089 | 0.0915 | ||
400 | OLS | 0.2310 | 0.0325 | -0.0021 | 0.1296 | 0.0008 | 0.1252 | |
LAD | 0.1004 | 0.0109 | 0.0006 | 0.0742 | -0.0001 | 0.0735 | ||
ESL | 0.1563 | 0.0212 | -0.0053 | 0.1002 | 0.0027 | 0.1054 | ||
H-ESL | 0.1516 | 0.0178 | -0.0045 | 0.0917 | 0.0011 | 0.0966 | ||
AHR(0.25) | 0.1000 | 0.0088 | 0.0028 | 0.0671 | -0.0004 | 0.0659 | ||
Huber | 0.1108 | 0.0116 | 0.0019 | 0.0781 | 0.0021 | 0.0743 | ||
AHR(0.75) | 1.5565 | 0.3644 | -0.0416 | 0.4269 | 0.0216 | 0.4242 | ||
CAHR | 0.1496 | 0.0153 | 0.0016 | 0.0873 | 0.0007 | 0.0874 | ||
WCAHR | 0.0838 | 0.0076 | -0.0015 | 0.0616 | -0.0000 | 0.0618 | ||
MN | 200 | OLS | 0.8806 | 0.1464 | -0.0134 | 0.2675 | -0.0016 | 0.2732 |
LAD | 0.3783 | 0.0358 | -0.0005 | 0.1320 | -0.0013 | 0.1355 | ||
ESL | 0.3719 | 0.0363 | 0.0025 | 0.1331 | -0.0044 | 0.1361 | ||
H-ESL | 0.3101 | 0.0280 | -0.0018 | 0.1175 | -0.0028 | 0.1189 | ||
AHR(0.25) | 0.3297 | 0.1148 | -0.0120 | 0.2346 | 0.0105 | 0.2439 | ||
Huber | 0.3685 | 0.0499 | -0.0042 | 0.1613 | 0.0071 | 0.1543 | ||
AHR(0.75) | 0.7037 | 0.1060 | 0.0078 | 0.2321 | -0.0033 | 0.2281 | ||
CAHR | 0.7857 | 0.1035 | 0.0031 | 0.2292 | 0.0035 | 0.2257 | ||
WCAHR | 0.3252 | 0.0340 | -0.0046 | 0.1289 | -0.0033 | 0.1316 | ||
400 | OLS | 0.3822 | 0.0715 | 0.0063 | 0.1887 | -0.0099 | 0.1892 | |
LAD | 0.1307 | 0.0190 | 0.0055 | 0.0983 | -0.0060 | 0.0965 | ||
ESL | 0.1268 | 0.0180 | 0.0032 | 0.0963 | -0.0043 | 0.0932 | ||
H-ESL | 0.1032 | 0.0129 | 0.0048 | 0.0807 | -0.0067 | 0.0793 | ||
AHR(0.25) | 0.1491 | 0.0604 | 0.0052 | 0.1731 | 0.0055 | 0.1742 | ||
Huber | 0.1332 | 0.0156 | -0.0007 | 0.0880 | 0.0033 | 0.0887 | ||
AHR(0.75) | 0.3391 | 0.0505 | -0.0049 | 0.1597 | 0.0040 | 0.1581 | ||
CAHR | 0.3435 | 0.0419 | -0.0030 | 0.1442 | 0.0074 | 0.1449 | ||
WCAHR | 0.1127 | 0.0157 | 0.0055 | 0.0894 | -0.0077 | 0.0872 | ||
{Bimodal} | 200 | OLS | 0.4317 | 0.0560 | -0.0001 | 0.1690 | -0.0018 | 0.1657 |
LAD | 0.8634 | 0.1201 | -0.0040 | 0.2438 | 0.0008 | 0.2463 | ||
ESL | 2.2921 | 0.4163 | -0.0148 | 0.4541 | -0.0002 | 0.4582 | ||
H-ESL | 0.4417 | 0.0558 | -0.0004 | 0.1687 | -0.0017 | 0.1653 | ||
AHR(0.25) | 0.9240 | 0.3169 | -0.0258 | 0.3973 | 0.0352 | 0.3964 | ||
Huber | 0.5694 | 0.0776 | -0.0056 | 0.2018 | 0.0129 | 0.1914 | ||
AHR(0.75) | 1.8171 | 0.3150 | 0.0110 | 0.4044 | -0.0107 | 0.3889 | ||
CAHR | 0.5191 | 0.0546 | -0.0075 | 0.1652 | 0.0116 | 0.1647 | ||
WCAHR | 0.4215 | 0.0552 | 0.0016 | 0.1679 | -0.0016 | 0.1642 | ||
400 | OLS | 0.1861 | 0.0296 | 0.0032 | 0.1170 | -0.0017 | 0.1262 | |
LAD | 0.4069 | 0.0670 | -0.0068 | 0.1765 | 0.0061 | 0.1891 | ||
ESL | 1.5317 | 0.2511 | -0.0185 | 0.3481 | 0.0033 | 0.3600 | ||
H-ESL | 0.1860 | 0.0298 | 0.0032 | 0.1175 | -0.0021 | 0.1265 | ||
AHR(0.25) | 0.4420 | 0.1620 | -0.0081 | 0.2835 | -0.0025 | 0.2855 | ||
Huber | 0.2369 | 0.0454 | 0.0023 | 0.1483 | -0.0082 | 0.1529 | ||
AHR(0.75) | 0.8504 | 0.1523 | 0.0142 | 0.2774 | -0.0021 | 0.2742 | ||
CAHR | 0.2047 | 0.0296 | 0.0025 | 0.1195 | -0.0014 | 0.1239 | ||
WCAHR | 0.1825 | 0.0291 | 0.0033 | 0.1162 | -0.0019 | 0.1249 |
Table 2 presents the results in the more challenging heteroscedastic case, which violates the homoscedasticity condition assumed in this paper. The proposed WCAHR estimator outperforms the other estimators considered under the normal, skew-normal and skew-t error distributions, and is comparable to the corresponding H-ESL estimator under the mixture of normal distributions and the bimodal distribution. This further illustrates the broad applicability of the proposed WCAHR estimator. Although the simulation results show appealing performance under the considered heteroscedastic errors, general theoretical results remain challenging, and additional conditions on the conditional moments of e may be helpful.
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | Bias(\hat{\alpha}_1) | Sd(\hat{\alpha}_1) | Bias(\hat{\alpha}_2) | Sd(\hat{\alpha}_2)
N(0, 1) | 200 | OLS | 0.2560 | 0.0237 | 0.0049 | 0.1105 | -0.0052 | 0.1069 |
LAD | 0.2988 | 0.0300 | 0.0054 | 0.1228 | -0.0009 | 0.1222 | ||
ESL | 0.2990 | 0.0316 | 0.0041 | 0.1272 | -0.0021 | 0.1240 | ||
H-ESL | 0.2655 | 0.0258 | 0.0060 | 0.1163 | -0.0052 | 0.1104 | ||
AHR(0.25) | 0.2988 | 0.1065 | -0.0860 | 0.2178 | -0.0965 | 0.2058 | ||
Huber | 0.2531 | 0.0234 | 0.0050 | 0.1099 | -0.0045 | 0.1062 | ||
AHR(0.75) | 0.5960 | 0.0911 | 0.0958 | 0.1880 | 0.0836 | 0.1990 | ||
CAHR | 0.2562 | 0.0236 | 0.0055 | 0.1099 | -0.0051 | 0.1070 | ||
WCAHR | 0.2519 | 0.0227 | 0.0051 | 0.1081 | -0.0045 | 0.1047 | ||
400 | OLS | 0.1056 | 0.0119 | -0.0029 | 0.0767 | -0.0002 | 0.0777 | |
LAD | 0.1269 | 0.0153 | -0.0027 | 0.0865 | 0.0008 | 0.0882 | ||
ESL | 0.1257 | 0.0157 | -0.0026 | 0.0884 | 0.0019 | 0.0888 | ||
H-ESL | 0.1061 | 0.0116 | -0.0036 | 0.0760 | 0.0018 | 0.0762 | ||
AHR(0.25) | 0.1269 | 0.0588 | -0.1020 | 0.1418 | -0.0889 | 0.1427 | ||
Huber | 0.1060 | 0.0117 | -0.0032 | 0.0764 | 0.0001 | 0.0766 | ||
AHR(0.75) | 0.2695 | 0.0581 | 0.0990 | 0.1428 | 0.0859 | 0.1434 | ||
CAHR | 0.1064 | 0.0120 | -0.0023 | 0.0769 | -0.0005 | 0.0777 | ||
WCAHR | 0.1046 | 0.0115 | -0.0030 | 0.0754 | 0.0002 | 0.0762 | ||
{SN}(0, 1, 15) | 200 | OLS | 0.3164 | 0.0357 | 0.0862 | 0.1070 | 0.0748 | 0.1057 |
LAD | 0.2417 | 0.0209 | 0.0695 | 0.0789 | 0.0586 | 0.0803 | ||
ESL | 0.3578 | 0.0306 | 0.0140 | 0.1234 | 0.0119 | 0.1228 | ||
H-ESL | 0.3390 | 0.0316 | 0.0308 | 0.1211 | 0.0304 | 0.1228 | ||
AHR(0.25) | 0.2417 | 0.0139 | 0.0555 | 0.0651 | 0.0485 | 0.0650 | ||
Huber | 0.2344 | 0.0209 | 0.0780 | 0.0711 | 0.0684 | 0.0713 | ||
AHR(0.75) | 0.2809 | 0.0427 | 0.1122 | 0.1026 | 0.0969 | 0.1010 | ||
CAHR | 0.2454 | 0.0211 | 0.0774 | 0.0725 | 0.0684 | 0.0718 | ||
WCAHR | 0.2155 | 0.0174 | 0.0727 | 0.0644 | 0.0623 | 0.0642 | ||
400 | OLS | 0.1180 | 0.0242 | 0.0859 | 0.0754 | 0.0776 | 0.0718 | |
LAD | 0.0776 | 0.0149 | 0.0683 | 0.0573 | 0.0622 | 0.0553 | ||
ESL | 0.1267 | 0.0143 | 0.0180 | 0.0841 | 0.0001 | 0.0833 | ||
H-ESL | 0.1199 | 0.0179 | 0.0527 | 0.0831 | 0.0393 | 0.0819 | ||
AHR(0.25) | 0.0776 | 0.0096 | 0.0532 | 0.0478 | 0.0493 | 0.0453 | ||
Huber | 0.0763 | 0.0163 | 0.0753 | 0.0511 | 0.0740 | 0.0500 | ||
AHR(0.75) | 0.1060 | 0.0330 | 0.1043 | 0.0689 | 0.1117 | 0.0700 | ||
CAHR | 0.0813 | 0.0158 | 0.0732 | 0.0491 | 0.0755 | 0.0483 | ||
WCAHR | 0.0672 | 0.0130 | 0.0678 | 0.0451 | 0.0668 | 0.0441 | ||
{St}(0, 1, 5, 3) | 200 | OLS | 0.5695 | 0.0947 | 0.1071 | 0.1861 | 0.1158 | 0.1876 |
LAD | 0.3044 | 0.0287 | 0.0704 | 0.0970 | 0.0738 | 0.0941 | ||
ESL | 0.4205 | 0.0400 | -0.0035 | 0.1404 | -0.0162 | 0.1415 | ||
H-ESL | 0.3927 | 0.0405 | 0.0125 | 0.1409 | 0.0038 | 0.1431 | ||
AHR(0.25) | 0.3044 | 0.0219 | 0.0477 | 0.0944 | 0.0466 | 0.0925 | ||
Huber | 0.3450 | 0.0355 | 0.0764 | 0.1077 | 0.0784 | 0.1093 | ||
AHR(0.75) | 2.6347 | 0.5711 | 0.1826 | 0.5139 | 0.1919 | 0.4867 | ||
CAHR | 0.4548 | 0.0423 | 0.0776 | 0.1227 | 0.0826 | 0.1200 | ||
WCAHR | 0.2863 | 0.0246 | 0.0667 | 0.0868 | 0.0703 | 0.0875 | ||
400 | OLS | 0.2663 | 0.0577 | 0.1027 | 0.1286 | 0.1196 | 0.1278 | |
LAD | 0.1092 | 0.0199 | 0.0732 | 0.0694 | 0.0738 | 0.0657 | ||
ESL | 0.1423 | 0.0190 | -0.0083 | 0.0973 | -0.0090 | 0.0968 | ||
H-ESL | 0.1429 | 0.0202 | 0.0184 | 0.0971 | 0.0177 | 0.1003 | ||
AHR(0.25) | 0.1092 | 0.0148 | 0.0456 | 0.0725 | 0.0507 | 0.0698 | ||
Huber | 0.1447 | 0.0237 | 0.0777 | 0.0762 | 0.0784 | 0.0755 | ||
AHR(0.75) | 2.4856 | 0.6166 | 0.2100 | 0.5037 | 0.1899 | 0.5317 | ||
CAHR | 0.1858 | 0.0264 | 0.0726 | 0.0833 | 0.0831 | 0.0852 | ||
WCAHR | 0.0932 | 0.0160 | 0.0645 | 0.0596 | 0.0694 | 0.0592 | ||
MN | 200 | OLS | 0.9780 | 0.1653 | -0.0191 | 0.2838 | 0.0090 | 0.2904 |
LAD | 0.3672 | 0.0409 | -0.0111 | 0.1436 | 0.0065 | 0.1420 | ||
ESL | 0.3644 | 0.0390 | -0.0092 | 0.1407 | 0.0091 | 0.1381 | ||
H-ESL | 0.3186 | 0.0321 | -0.0072 | 0.1260 | 0.0047 | 0.1271 | ||
AHR(0.25) | 0.3672 | 0.1579 | -0.1021 | 0.2647 | -0.0797 | 0.2665 | ||
Huber | 0.3700 | 0.0407 | -0.0134 | 0.1412 | 0.0106 | 0.1431 | ||
AHR(0.75) | 0.7890 | 0.1350 | 0.0802 | 0.2419 | 0.0878 | 0.2498 | ||
CAHR | 0.8570 | 0.0989 | -0.0114 | 0.2229 | 0.0077 | 0.2214 | ||
WCAHR | 0.3324 | 0.0352 | -0.0109 | 0.1326 | 0.0071 | 0.1322 | ||
400 | OLS | 0.4215 | 0.0781 | 0.0126 | 0.2002 | -0.0081 | 0.1944 | |
LAD | 0.1244 | 0.0189 | 0.0000 | 0.0980 | 0.0028 | 0.0964 | ||
ESL | 0.1247 | 0.0173 | 0.0004 | 0.0940 | 0.0028 | 0.0921 | ||
H-ESL | 0.1073 | 0.0131 | 0.0010 | 0.0808 | 0.0019 | 0.0808 | ||
AHR(0.25) | 0.1244 | 0.0759 | -0.0849 | 0.1701 | -0.0953 | 0.1752 | ||
Huber | 0.1222 | 0.0147 | 0.0054 | 0.0861 | -0.0011 | 0.0854 | ||
AHR(0.75) | 0.3546 | 0.0735 | 0.1014 | 0.1642 | 0.0908 | 0.1673 | ||
CAHR | 0.3496 | 0.0531 | 0.0116 | 0.1647 | -0.0069 | 0.1604 | ||
WCAHR | 0.1114 | 0.0148 | 0.0038 | 0.0860 | 0.0001 | 0.0857 | ||
{Bimodal} | 200 | OLS | 0.4259 | 0.0604 | -0.0118 | 0.1733 | 0.0020 | 0.1738 |
LAD | 0.7261 | 0.1323 | -0.0248 | 0.2573 | 0.0096 | 0.2558 | ||
ESL | 1.9087 | 0.3505 | -0.0283 | 0.4120 | 0.0078 | 0.4241 | ||
H-ESL | 0.4337 | 0.0614 | -0.0118 | 0.1758 | 0.0031 | 0.1743 | ||
AHR(0.25) | 0.7261 | 0.5567 | -0.1960 | 0.5100 | -0.1893 | 0.4716 | ||
Huber | 0.4839 | 0.0763 | -0.0180 | 0.1974 | 0.0064 | 0.1924 | ||
AHR(0.75) | 2.4130 | 0.4776 | 0.1823 | 0.4534 | 0.1885 | 0.4509 | ||
CAHR | 0.6745 | 0.1187 | -0.0223 | 0.2449 | 0.0087 | 0.2411 | ||
WCAHR | 0.4361 | 0.0642 | -0.0096 | 0.1797 | 0.0068 | 0.1783 | ||
400 | OLS | 0.1934 | 0.0295 | 0.0087 | 0.1251 | -0.0113 | 0.1169 | |
LAD | 0.3626 | 0.0578 | 0.0145 | 0.1715 | -0.0169 | 0.1670 | ||
ESL | 1.0701 | 0.1718 | 0.0200 | 0.2893 | -0.0187 | 0.2955 | ||
H-ESL | 0.1948 | 0.0299 | 0.0098 | 0.1256 | -0.0113 | 0.1178 | ||
AHR(0.25) | 0.3626 | 0.3363 | -0.2023 | 0.3440 | -0.2239 | 0.3562 | ||
Huber | 0.2316 | 0.0383 | 0.0093 | 0.1438 | -0.0130 | 0.1320 | ||
AHR(0.75) | 1.2586 | 0.3302 | 0.2280 | 0.3504 | 0.1848 | 0.3484 | ||
CAHR | 0.3292 | 0.0506 | 0.0163 | 0.1645 | -0.0174 | 0.1516 | ||
WCAHR | 0.2058 | 0.0313 | 0.0119 | 0.1298 | -0.0092 | 0.1193 |
To examine the effect of the choice of levels on the performance of the WCAHR estimators, especially under skewed error distributions, we also vary in the simulation the number K_1 of levels placed over (0, 0.5) for a given total number K of levels. Specifically, for K = 19 and different values of K_1 , we set \tau_i = \frac{i}{2K_1} for i = 1, \cdots, K_1 , and \tau_i = \frac{K+1-2K_1+i}{2(K+1-K_1)} for i = K_1+1, \cdots, K . Table 3 presents the estimation results. We find that the choice of the levels does not undermine the performance of the WCAHR estimators, although placing fewer levels over (0, 0.5) leads to slightly larger MSE and MISE for the positively skewed error distributions. In addition, the MISEs and MSEs decrease as K_1 increases and eventually stabilize. This suggests that one may place relatively more levels over (0, 0.5) when dealing with positively skewed error distributions.
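The level construction above can be written out directly; a minimal sketch (the function name is illustrative). Note that K_1 = 10 with K = 19 recovers the equidistant levels \tau_k = k/(K+1) used earlier:

```python
import numpy as np

def make_levels(K=19, K1=10):
    """Quantile levels with K1 of them over (0, 0.5], as in the text:
    tau_i = i/(2*K1) for i <= K1,
    tau_i = (K+1-2*K1+i)/(2*(K+1-K1)) for i > K1."""
    i = np.arange(1, K + 1)
    lower = i[:K1] / (2 * K1)
    upper = (K + 1 - 2 * K1 + i[K1:]) / (2 * (K + 1 - K1))
    return np.concatenate([lower, upper])

taus = make_levels(19, 10)  # coincides with k/20, k = 1, ..., 19
```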
Errors | n | K_1 | MISE | MSE(\hat{\mathit{\boldsymbol{\alpha}}}) | Bias(\hat{\alpha}_1) | Sd(\hat{\alpha}_1) | Bias(\hat{\alpha}_2) | Sd(\hat{\alpha}_2)
{SN}(0, 1, 15) | 200 | 4 | 0.2321 | 0.0088 | -0.0034 | 0.0663 | 0.0044 | 0.0665
6 | 0.2295 | 0.0084 | -0.0036 | 0.0646 | 0.0042 | 0.0645
8 | 0.2279 | 0.0081 | -0.0038 | 0.0635 | 0.0042 | 0.0633
10 | 0.2269 | 0.0079 | -0.0040 | 0.0629 | 0.0041 | 0.0626
12 | 0.2264 | 0.0078 | -0.0041 | 0.0627 | 0.0041 | 0.0621
14 | 0.2262 | 0.0078 | -0.0043 | 0.0627 | 0.0041 | 0.0620
400 | 4 | 0.0626 | 0.0039 | 0.0071 | 0.0450 | -0.0054 | 0.0425
6 | 0.0611 | 0.0037 | 0.0068 | 0.0440 | -0.0050 | 0.0415
8 | 0.0601 | 0.0036 | 0.0065 | 0.0434 | -0.0046 | 0.0410
10 | 0.0595 | 0.0036 | 0.0063 | 0.0432 | -0.0043 | 0.0409
12 | 0.0591 | 0.0036 | 0.0060 | 0.0432 | -0.0039 | 0.0409
14 | 0.0589 | 0.0036 | 0.0059 | 0.0433 | -0.0038 | 0.0411
{St}(0, 1, 5, 3) | 200 | 4 | 0.2445 | 0.0178 | -0.0027 | 0.0961 | -0.0023 | 0.0927
6 | 0.2388 | 0.0169 | -0.0025 | 0.0938 | -0.0024 | 0.0900
8 | 0.2349 | 0.0163 | -0.0025 | 0.0923 | -0.0023 | 0.0880
10 | 0.2321 | 0.0158 | -0.0025 | 0.0911 | -0.0023 | 0.0866
12 | 0.2301 | 0.0155 | -0.0024 | 0.0903 | -0.0022 | 0.0856
14 | 0.2287 | 0.0153 | -0.0023 | 0.0898 | -0.0022 | 0.0849
400 | 4 | 0.0925 | 0.0090 | -0.0079 | 0.0678 | 0.0076 | 0.0652
6 | 0.0899 | 0.0085 | -0.0078 | 0.0659 | 0.0076 | 0.0637
8 | 0.0880 | 0.0082 | -0.0076 | 0.0646 | 0.0076 | 0.0626
10 | 0.0867 | 0.0080 | -0.0076 | 0.0636 | 0.0076 | 0.0619
12 | 0.0857 | 0.0078 | -0.0075 | 0.0629 | 0.0077 | 0.0615
14 | 0.0850 | 0.0077 | -0.0074 | 0.0623 | 0.0077 | 0.0611
In this section, we apply the proposed estimation method, together with the competing methods mentioned in Section 3, to the electricity price data and the Tecator data set. In both applications, all observations are centered before the regression analysis.
The data set consists of the daily average hourly electricity spot prices of the German electricity market ( Y ), the hourly values of Germany's wind power infeed ( X(t) ), the precipitation height ( Z_1 ), and the sunshine duration ( Z_2 ). Here we only consider the working days spanning January 1, 2006 to September 30, 2008. The hourly wind power infeed curves are shown in the left panel of Figure 3. The data set can be obtained from the online supplements of Liebl [23]. We adopt the following partial functional linear regression model to fit the data:
\begin{equation} Y = Z_1 \alpha_{1}+Z_2 \alpha_{2}+\int_{1}^{24} X(t) \beta(t) dt+e. \end{equation} | (4.1) |
First, the OLS method is applied to fit model (4.1). Then the Shapiro-Wilk test is applied to test the normality of the residuals, and the p -value is less than 2.2 \times 10^{-16} . In addition, we also present the estimated density of the residuals and the residual diagnostic plot (see Figure 1). Both the test and the plots clearly indicate that e follows a skewed distribution with outliers. Notice that the density of the residuals resembles the {Bimodal} error distribution discussed in Section 3, for which the simulation results show that the proposed method provides reliable inference, so the method is well suited to this kind of data.
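This kind of residual diagnostic is straightforward to reproduce; a minimal sketch using synthetic residuals (the skew-normal-with-outliers sample below is a stand-in for the actual OLS residuals of model (4.1), which are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic residuals standing in for the OLS fit: a skewed sample
# contaminated with a few large outliers (illustrative only).
resid = np.concatenate([
    stats.skewnorm.rvs(5, size=500, random_state=rng),
    rng.normal(10, 1, size=10),
])

# Shapiro-Wilk test of normality; a tiny p-value rejects normal errors.
stat, pval = stats.shapiro(resid)
print(f"Shapiro-Wilk W = {stat:.3f}, p-value = {pval:.2e}")
```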
To evaluate the predictions obtained with different estimation methods, we randomly divide the data set into a training sample of 478 subjects and a testing sample of the remaining 160 subjects (indexed by \mathcal{J} ). The data are randomly split N = 100, 200, and 400 times, respectively. We use the median quadratic error of prediction ( \rm{MEDQEP} ), defined below, as the criterion to evaluate the performances:
{\rm{MEDQEP}} = \frac{1}{N}\sum\limits_{i = 1}^N{\rm{median}}\left\{{(Y_{ij}-\hat{Y}_{ij})^2}/{\textsf{Var}}_{\mathcal{J}}(Y_{ij}), j\in \mathcal{J}\right\}. |
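The criterion averages, over the random splits, the median of the squared prediction errors on the test fold scaled by the test-fold variance of the response; a minimal sketch (the function name is illustrative, and the population-variance normalization of \textsf{Var}_{\mathcal{J}} is an assumption):

```python
import numpy as np

def medqep(splits):
    """MEDQEP = (1/N) * sum over splits of
    median{(y - yhat)^2 / Var_J(y) over the test fold}.
    `splits` is an iterable of (y_test, y_pred) array pairs."""
    vals = [np.median((y_test - y_pred) ** 2 / np.var(y_test))
            for y_test, y_pred in splits]
    return float(np.mean(vals))
```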
The left three columns of Table 4 present the MEDQEPs of the methods mentioned above. According to these results, the WCAHR method is uniformly superior to the other estimators.
Electricity prices | Tecator | |||||
Methods | N=100 | N=200 | N=400 | N=100 | N=200 | N=400 |
OLS | 0.4269 | 0.4132 | 0.4153 | 2.7824\times10^{-3} | 2.7182\times10^{-3} | 2.7257\times10^{-3} |
LAD | 0.4094 | 0.4005 | 0.4068 | 2.9388\times10^{-3} | 2.8611\times10^{-3} | 2.8253\times10^{-3} |
ESL | 0.5751 | 0.5626 | 0.5578 | 2.7268\times10^{-3} | 2.6701\times10^{-3} | 2.6142\times10^{-3} |
H-ESL | 0.4104 | 0.4026 | 0.4064 | 2.7395\times10^{-3} | 2.7336\times10^{-3} | 2.6476\times10^{-3} |
AHR(0.25) | 0.4052 | 0.3985 | 0.4032 | 2.9056\times10^{-3} | 2.7658\times10^{-3} | 2.7753\times10^{-3} |
Huber | 0.4077 | 0.4027 | 0.4074 | 2.7636\times10^{-3} | 2.6491\times10^{-3} | 2.6278\times10^{-3} |
AHR(0.75) | 0.4296 | 0.4198 | 0.4280 | 2.7409\times10^{-3} | 2.6494\times10^{-3} | 2.6063\times10^{-3} |
CAHR | 0.4103 | 0.4019 | 0.4089 | 2.8442\times10^{-3} | 2.8106\times10^{-3} | 2.7457\times10^{-3} |
WCAHR | 0.4030 | 0.3967 | 0.4008 | 2.7236\times10^{-3} | 2.6152\times10^{-3} | 2.5799\times10^{-3} |
Table 5 (the first two columns) presents the estimates of the parametric part obtained by the various estimation methods based on the whole data set. According to the results, both the precipitation height and the sunshine duration have negative effects on the daily average hourly electricity spot prices. In addition, Figure 4(a) plots the estimated slope function obtained by the WCAHR method; the estimates of \beta(\cdot) obtained by the other methods exhibit similar patterns and are thus omitted here. From the figure, we can see that the prices have a larger (in absolute value) linkage with the wind power infeed during the daytime, reflecting price sensitivity: the market is active during the daytime, and thus the correlation between the prices and the wind power infeed is stronger then. Moreover, Germany's wind power infeed has a negative effect on the daily average hourly electricity spot prices, reflecting the supply-demand balance: more wind infeed creates an oversupply of electricity and thus reduces the price.
Electricity prices | Tecator | |||
\hat \alpha_1 | \hat \alpha_2 | \hat \alpha_1 | \hat \alpha_2 | |
OLS | -0.5983 | -0.4672 | -1.1056 | -0.6894 |
LAD | -0.7095 | -0.9438 | -1.0828 | -0.7611 |
ESL | -0.5125 | -0.4629 | -1.0894 | -0.7455 |
H-ESL | -0.6007 | -0.7212 | -1.0983 | -0.7367 |
AHR(0.25) | -0.5618 | -0.4725 | -1.1122 | -0.7026 |
Huber | -0.5799 | -0.4416 | -1.0981 | -0.7235 |
AHR(0.75) | -0.5812 | -0.4302 | -1.0854 | -0.7576 |
CAHR | -0.5394 | -0.4582 | -1.0990 | -0.7274 |
WCAHR | -0.5924 | -0.6182 | -1.0964 | -0.7270 |
The Tecator data set consists of 215 meat samples. For each sample, the moisture, fat, and protein contents are recorded in percent, and a 100-channel spectrum of absorbances is measured by a spectrometer. The data set is available from the R package fda.usc (see [24]). The right panel of Figure 3 shows the spectral trajectories. Here, the objective is to investigate the effects of the spectral trajectories X(t) , the water content Z_1 , and the protein content Z_2 on the fat content Y by fitting the following model:
\begin{equation} Y = Z_1 \alpha_{1}+Z_2 \alpha_{2}+\int_{850}^{1050} X(t) \beta(t) dt+e \end{equation} | (4.2) |
The density of the residuals and the residual diagnostic plot in Figure 2 illustrate that the error follows a skewed distribution with outliers. Similarly, to assess the prediction accuracy, the 215 meat samples are randomly divided into a training set of 180 subjects and a testing set of 35 subjects. We again randomly split the data set N = 100, 200, and 400 times and use MEDQEP as the criterion to evaluate the finite sample performances of the different estimation procedures. The comparison results are shown in the right three columns of Table 4, from which we see that the proposed method outperforms the competing estimation procedures in terms of prediction accuracy.
The estimated coefficients \hat{\alpha}_{1} and \hat{\alpha}_{2} obtained by the various methods based on the whole data set are shown in the last two columns of Table 5. Both the protein content and the water content have negative effects on the fat content. Figure 4(b) displays the estimated slope function based on the WCAHR method: the absorbance spectrum has a negative impact on the fat content. The estimated slope functions obtained by the other methods exhibit similar patterns and are thus omitted here.
In this paper, we study the WCAHR estimation in the partial functional linear regression model. We use the functional principal component basis to approximate the functions, and obtain the estimators of the unknown parameter vector and the slope function through minimizing the weighted asymmetric Huber loss function. The asymptotic normality of the estimated parameter vector and the convergence rate of the estimated slope function are presented.
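For concreteness, the asymmetric Huber score \psi_\tau defined in the appendix, and the loss \rho_\tau it derives from, can be sketched as follows; the threshold value c and the level tau used in the demonstration are illustrative choices:

```python
import numpy as np

def psi(u, tau, c):
    # Derivative of the asymmetric Huber loss, as defined in the appendix:
    # (tau-1) for u < -c; (1-tau)u/c on [-c, 0); tau*u/c on [0, c); tau for u >= c.
    return np.where(u < -c, tau - 1,
           np.where(u < 0, (1 - tau) / c * u,
           np.where(u < c, tau / c * u, tau)))

def rho(u, tau, c):
    # Asymmetric Huber loss: quadratic near zero, linear in the tails,
    # with asymmetry weight tau (antiderivative of psi with rho(0) = 0).
    return np.where(u < -c, (1 - tau) * (-u - c / 2),
           np.where(u < 0, (1 - tau) * u ** 2 / (2 * c),
           np.where(u < c, tau * u ** 2 / (2 * c), tau * (u - c / 2))))
```

The WCAHR objective then sums such losses over the levels \tau_k with weights w_k , which is what the estimation procedure minimizes.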
The proposed approach automatically reflects distributional features and bounds the influence of outliers without requiring prior information about the data. Simulation results show that the proposed method is almost as efficient as OLS when the error follows a normal distribution, while remaining robust to outliers and still working well when the error follows skewed or bimodal distributions. That is, the method is adaptive to the error distribution in the regression model. The analyses of the two real data examples further illustrate the utility of the proposed methods in modeling and forecasting.
The novelty of the method lies in extracting the major features of the data while shielding the estimator from outliers. The proposed WCAHR estimation procedure can be extended to more general situations, including dependent functional data, sparse modeling, partially observed functional data, and high-dimensional settings. In addition, an objective way to select K remains to be developed.
The authors are grateful to the anonymous referees and the editor-in-chief for their valuable comments and suggestions for improving this paper. This research was supported by the National Natural Science Foundation of China (Grant Nos. 11771032 and 11971045), the Natural Science Foundation of Beijing (Grant No. 1202001), and the Natural Science Foundation of Shanxi Province, China (Grant No. 20210302124530).
The authors declare that they have no competing interests.
We only prove Theorem 2; Theorem 1 is a special case of Theorem 2.
Proof of Theorem 2:
Let \delta_n = \sqrt{\frac{m}{n}} , {P}_{n} = \frac{1}{n}\sum_{i = 1}^{n}\mathit{\boldsymbol{\eta}}_i\mathit{\boldsymbol{\eta}}_i^{\rm T} , \mathit{\boldsymbol{V}}_n = \delta_n^{-1}{P}_{n}^{{ 1/2}}(\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0) , \tilde{\boldsymbol\eta}_{i} = P_{n}^{-1 / 2} {\boldsymbol\eta}_{i} , \Lambda = \operatorname{diag}\left\{\lambda_{1}, \cdots, \lambda_{m}\right\} , {{{\mathit{\boldsymbol{A}}}}}_i = {\Lambda}^{-{ 1/2}}{{{\mathit{\boldsymbol{U}}}}_i} , {\hat{{\mathit{\boldsymbol{A}}}}}_i = {\Lambda}^{-{ 1/2}}\hat{{{\mathit{\boldsymbol{U}}}}_i} , H_{m} = \left(\lambda_{1}^{-1}\langle {c}_{{\mathit{\boldsymbol{Z}}}, X}, v_{1}\rangle, \ldots, \lambda_{m}^{-1}\langle {c}_{{\mathit{\boldsymbol{Z}}}, X}, v_{m}\rangle\right)^{\mathrm{T}} , \mathit{\boldsymbol{W}}_n = \delta_n^{-1}\Lambda^{ 1/2}[(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)+H_{m}(\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0)] , r_i = \int_{0}^{1}\beta_0(t)X_i(t)dt-{\hat{\mathit{\boldsymbol{U}}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0 , B_i = H_{m}^{\rm T}({{\mathit{\boldsymbol{U}}}}_i-\hat{{\mathit{\boldsymbol{U}}}}_i)+\sum\limits_{j = m+1}^\infty\lambda_j^{-1}\langle {c}_{\mathit{\boldsymbol{Z}}X}, v_j\rangle \xi_{ij} , \tilde{B}_i = \delta_n^{-1}(\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0)^{\rm T}B_i , S_{nk} = \delta_{n}^{-1}\left(\hat{b}_{k}-b_{0 k}\right) , \boldsymbol{S}_{n} = \left(\sqrt{w_1}S_{n 1}, \ldots, \; \sqrt{w_K}S_{n K}\right)^{\rm T} , \mathcal{F}_n = \Big\{(\mathit{\boldsymbol{V}}_n, \mathit{\boldsymbol{W}}_n, \mathit{\boldsymbol{S}}_{n}):\big\|(\mathit{\boldsymbol{V}}_n^{\rm T}, \mathit{\boldsymbol{W}}_n^{\rm T}, \mathit{\boldsymbol{S}}_{n}^{\rm T})^{\rm T}\big\|\leq L\Big\} , T_n = \left\{\left(\mathit{\boldsymbol{Z}}_1, X_1(\cdot)\big), ..., \big(\mathit{\boldsymbol{Z}}_n, X_n(\cdot)\right)\right\} .
Next, we will show that, for any given \eta > 0 , there exists a sufficiently large constant L , such that
\begin{equation} \begin{split} &P\bigg\{\inf\limits_{(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\boldsymbol{S}_{n})\in \mathcal{F}_n}\sum\limits_{i = 1}^{n}\sum\limits_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{0k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}_0-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0-\delta_n\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)\Big)\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; > \sum\limits_{i = 1}^{n}\sum\limits_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{0k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}_0-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0\Big)\bigg\}\geq 1-\eta. \end{split} \end{equation} | (A.1) |
This implies that there is a local minimizer ( \mathit{\boldsymbol{\hat{\alpha}}} , \hat{\mathit{\boldsymbol{b}}} , \mathit{\boldsymbol{\hat{\gamma}}} ) in the ball \Big\{(\mathit{\boldsymbol{V}}_n, \mathit{\boldsymbol{W}}_n, \mathit{\boldsymbol{S}}_{n}):\big\|(\mathit{\boldsymbol{V}}_n^{\rm T}, \mathit{\boldsymbol{W}}_n^{\rm T}, \mathit{\boldsymbol{S}}_{n}^{\rm T})^{\rm T}\big\|\leq L\Big\} such that \|\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0\| = O_p(\delta_n) , \left|\hat{b}_{k}-b_{0 k}\right| = O_p(\delta_n) , and \|\Lambda^{1/2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\| = O_p(\delta_n) , with probability at least 1-\eta .
First, by \|v_j-\hat{v}_j\|^2 = O_p(n^{-1}j^2) (see [4]), one has
\begin{equation*} \begin{split} |r_i|^2& = \bigg|\int_{0}^{1}\beta_0(t)X_i(t)dt-{\hat{\mathit{\boldsymbol{U}}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0\bigg|^2\\ &\leq2 \bigg|\sum\limits_{j = 1}^{m}\langle X_i,\hat{v}_j-{v}_j\rangle\gamma_{0j}\bigg|^2+2\bigg|\sum\limits_{j = m+1}^{ \infty}\langle X_i,v_j\rangle\gamma_{0j}\bigg|^2\triangleq 2\rm{D}_1+2\rm{D}_2. \end{split} \end{equation*} |
For \rm{D}_1 , using Conditions C1, C2, and the Hölder inequality, one can obtain
\begin{equation*} \begin{split} \rm{D}_1& = \bigg|\sum\limits_{j = 1}^{m}\langle X_i,{v}_j-\hat{v}_j\rangle\gamma_{0j}\bigg|^2\\ &\leq Cm\sum\limits_{j = 1}^{m}\|{v}_j-\hat{v}_j\|^2|\gamma_{0j}|^2\leq Cm\sum\limits_{j = 1}^{m}O_p(n^{-1}j^{2-2b}) = O_p\left(\frac{m}{n}\right) = O_p(\delta_n^2). \end{split} \end{equation*} |
As for \rm{D}_2 , due to \mathrm{E}\Big \{\sum_{j = m+1}^{ \infty}\langle X_i, v_j\rangle\gamma_{0j}\Big \} = 0, \; \mathrm{Var}\Big \{\sum_{j = m+1}^{ \infty}\langle X_i, {v}_j\rangle\gamma_{0j}\Big \} = \sum_{j = m+1}^{ \infty}\lambda_j{\gamma_{0j}}^2 \leq C\sum_{j = m+1}^{ \infty}j^{-(a+2b)} = O(n^{-\frac{a+2b-1}{a+2b}}), one has D_2 = O_p(n^{-\frac{a+2b-1}{a+2b}}) = O_p(\delta_n^2). To sum up, we have |r_i|^2 = O_p(\delta_n^2) .
Now consider B_i . Due to
\begin{equation*} \begin{split} \label{hm} \|B_i\|^2& = \left\|H_{m}^T({{\mathit{\boldsymbol{U}}}}_i-\hat{{\mathit{\boldsymbol{U}}}}_i)+\sum\limits_{j = m+1}^\infty\lambda_j^{-1}\langle {c}_{\mathit{\boldsymbol{Z}}X},v_j\rangle \xi_{ij}\right\|^2\\ &\leq 2\sum\limits_{l = 1}^d\left\{\left\|\sum\limits_{j = 1}^m\lambda_j^{-1}\langle {c}_{{Z}_lX},v_j\rangle \langle X_i,\hat{v}_j-{v}_j\rangle\right\|^2+\left\|\sum\limits_{j = m+1}^{ \infty}\lambda_j^{-1}\langle {c}_{{Z_l}X},v_j\rangle\langle X_i,v_j\rangle\right\|^2\right\}, \end{split} \end{equation*} |
by Conditions C1, C2, C4, and the Hölder inequality, we obtain
\begin{equation*} \left\|\sum\limits_{j = 1}^m\lambda_j^{-1}\langle {c}_{{Z}_lX},v_j\rangle \langle X_i,\hat{v}_j-{v}_j\rangle\right\|^2\leq Cm\sum\limits_{j = 1}^{m}\|{v}_j-\hat{v}_j\|^2|\lambda_j^{-1}\langle {c}_{{Z}_lX},v_j\rangle|^2\\ \leq Cm\sum\limits_{j = 1}^{m}O_p(n^{-1}j^{2-2b}) = O_p\left(\frac{m}{n}\right) = O_p\left(\delta_n^2\right). \end{equation*} |
In addition, noting that
\begin{equation*} \begin{split} &\mathrm{E}\Big \{\sum\limits_{j = m+1}^{ \infty}\lambda_j^{-1}\langle {c}_{{Z_l}X},v_j\rangle\langle X_i,v_j\rangle\Big \} = 0,\\ &\mathrm{Var}\Big \{\sum\limits_{j = m+1}^{ \infty}\lambda_j^{-1}\langle {c}_{{Z_l}X},v_j\rangle\langle X_i,v_j\rangle\Big \} = \sum\limits_{j = m+1}^{ \infty}\lambda_j^{-1}\langle {c}_{{Z}_lX},v_j\rangle^2 = O(n^{-\frac{a+2b-1}{a+2b}}), \end{split} \end{equation*} |
together with the above inequality, one has
\begin{equation} \begin{split} \|B_i\|^2 = O_p\left(\frac{m}{n}\right) = O_p(\delta_n^2). \end{split} \end{equation} | (A.2) |
Recall that \psi_{\tau_k}(u) = \rho'_{\tau_k}(u) = ({\tau_k}-1)I(u < -c^*)+\frac{(1-{\tau_k})}{c^*}uI(-c^*\leq u < 0)+\frac{\tau_k}{c^*}uI(0\leq u < c^*)+\tau_kI(u\geq c^*), and denote \widetilde{Q}_{n}\big(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\big) = \sum_{i = 1}^{n}\sum_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{0k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}_0-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0-\delta_n\big(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\big)\Big) -\sum_{i = 1}^{n}\sum_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{0k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}_0-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0\Big) . Then \widetilde{Q}_{n}\big(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\big) can be transformed into
\begin{align*} \widetilde{Q}_{n}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right) & = E[\widetilde{Q}_{n}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)|T_n]+\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}R_{nik}\left(\boldsymbol{V}_{n},\boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)\\ & \; \; -\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)\psi_{\tau_k}(e_{i}-b_{0 k})\\ &\triangleq D_1^*+D_2^*+D_3^*, \end{align*} |
where
\begin{align*} &R_{nik}\big(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\big)\\ = &\rho_{\tau_k}\bigg(r_i+e_{ i}-{b_{0k}}- \delta_{n}\big(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i +{\tilde B}_i+S_{n k}\big)\bigg)-\rho_{\tau_k}(r_i+e_{i}-{b_{0k}})\\ & -E\bigg[\bigg\{\rho_{\tau_k}\big(r_i+e_{ i}-{b_{0k}}-\delta_{n}\big(\mathit{\boldsymbol{V}}_n^{\rm T}\tilde{\boldsymbol\eta}_{i} +\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\big)\big)-\rho_{\tau_k}(r_i+e_{i}-{b_{0k}})\bigg\}\bigg|T_n\bigg]\\ & +\delta_{n}(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k})\psi_{\tau_k}(e_{i}-b_{0 k}). \end{align*} |
Consider D_1^* . According to (A.2), we have
\begin{equation} \begin{split} \mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k} = \left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+S_{n k}\right)(1+o_p(1)).\end{split} \end{equation} | (A.3) |
The proof of Theorem 3.1 in [6] indicates that \left|\frac{1}{n}\sum\limits_{i = 1}^{n}\left[\left({\mathit{\boldsymbol{A}}}_{i}^{{\rm T}}h^m\right)^{2}\right]-1\right| = O_{P}\left(n^{-1 / 4} m^{1 / 2}(\log n)^{1 / 2}\right) = o_{P}(1) with h^{m} = \mathit{\boldsymbol{W}}_n/\|\mathit{\boldsymbol{W}}_n\| , which leads to
\begin{align} \frac{1}{n}\sum\limits_{i = 1}^{n}\left(\hat{{\mathit{\boldsymbol{A}}}}_i^{\rm T}\mathit{\boldsymbol{W}}_n\right)^{2} = \|\mathit{\boldsymbol{W}}_n\|^2(1+o_{P}(1)). \end{align} | (A.4) |
Observe that \sum_{i = 1}^{n} h_{\tau_k}(b_{0k})\mathit{\boldsymbol{V}}_n^{\rm T}\tilde{\boldsymbol\eta}_{i} {\mathit{\boldsymbol{A}}}_i^{\rm T}\boldsymbol{W}_{n} = \mathit{\boldsymbol{V}}_n^{\rm T} P_{n}^{-1 / 2} \sum_{i = 1}^{n} h_{\tau_k}(b_{0k}) {\boldsymbol\eta}_{i} {{\mathit{\boldsymbol{U}}}}_{i}^{{\rm T}} \Lambda^{-1 / 2}\mathit{\boldsymbol{W}}_n, then by Conditions C1–C3, C5, E \left[\sum_{i = 1}^{n} h_{\tau_k}(b_{0k}) {\eta}_{il} {{\mathit{\boldsymbol{U}}}}_{i}^{{\rm T}} \Lambda^{-1 / 2}\mathit{\boldsymbol{W}}_n\right] = 0 and E\left(\left[\sum_{i = 1}^{n} h_{\tau_k}(b_{0k}) {\eta}_{il} {{\mathit{\boldsymbol{U}}}}_{i}^{{\rm T}} \Lambda^{-1 / 2}\mathit{\boldsymbol{W}}_n\right]^{2}\right) = O(n m) . Hence,
\begin{equation} \begin{split} \sum\limits_{i = 1}^{n} h_{\tau_k}(b_{0k})\mathit{\boldsymbol{V}}_n^{\rm T}\tilde{\boldsymbol\eta}_{i} {\mathit{\boldsymbol{A}}}_i^{\rm T}\boldsymbol{W}_{n} = O_{p}\left(n^{1 / 2} m^{1/2}\right). \end{split} \end{equation} | (A.5) |
Similarly, \sum_{i = 1}^{n} h_{\tau_k}(b_{0k})\mathit{\boldsymbol{V}}_n^{\rm T}\tilde{\boldsymbol\eta}_{i}S_{nk} = O_{p}\left(n^{1 / 2}\right), \; \sum_{i = 1}^{n} h_{\tau_k}(b_{0k}) {\mathit{\boldsymbol{A}}}_i^{\rm T}\boldsymbol{W}_{n}S_{nk} = O_{p}\left(n^{1 / 2} m^{1 / 2}\right). Then, together with formulas (A.3)–(A.5), we have
\begin{align*} \label{ex} &D_1^* = E[\widetilde{Q}_{n}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)|T_n]\nonumber\\ & = \sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\int_{r_i}^{r_i-\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)} E[\psi_{\tau_k}(e_{ i}-b_{0k}+t)|T_n]dt\nonumber\\ & = \frac{1}{2}\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}h_{\tau_k}(b_{0k})\left\{\left(r_i-\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)\right)^2-{r_i}^2\right\}(1+o_p(1))\nonumber\\ &\geq Cn\delta_n^2\left(\left\|\mathit{\boldsymbol{V}}_{n}\right\|^{2}+\left\|\mathit{\boldsymbol{W}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{S}}}}_{n}\right\|^{2}\right) (1+o_p(1)). \end{align*} |
As for D_2^* , due to the continuity of \psi_{\tau_{k}}(\cdot) , we have
\begin{equation*} \begin{split} Var\left(\sum\limits_{i = 1}^{n}R_{nik}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)\bigg|T_n\right) { = o_p}\left(n\delta_n^2(\left\|{{\mathit{\boldsymbol{V}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{W}}}}_{n}\right\|^{2}+|S_{nk}|^{2})\right), \end{split} \end{equation*} |
then
\begin{equation*} \begin{split} Var\left(\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}R_{nik}\left(\boldsymbol{V}_{n},\boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)\bigg|T_n\right){ = o_p}\left(n\delta_n^2\left(\left\|{{\mathit{\boldsymbol{V}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{W}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{S}}}}_{n}\right\|^{2}\right)\right), \end{split} \end{equation*} |
from which we get
\begin{equation*} \begin{split} \label{ns} \sup\limits_{\|(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\mathit{\boldsymbol{S}}_{n})\|\leq L} \left|D^*_2\right| = \sup\limits_{\|(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\mathit{\boldsymbol{S}}_{n})\|\leq L} \left|\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}R_{nik}\left(\boldsymbol{V}_{n},\boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)\right| = o_p(\sqrt{n}\delta_nL). \end{split} \end{equation*} |
For the term D_3^* , it is easy to show that
\begin{equation*} \begin{split} E\left[\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}{\mathit{\boldsymbol{A}}}_i+S_{n k}\right)\psi_{\tau_k}(e_{i}-b_{0 k})\bigg|T_n\right] = 0, \end{split} \end{equation*} |
and
\begin{equation*} \begin{split} &\; \; \; E\left[\left\{\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}{\mathit{\boldsymbol{A}}}_i+S_{n k}\right)\psi_{\tau_k}(e_{i}-b_{0 k})\right\}^2\bigg|T_n\right]\\ &\leq Cn\delta_n^2\left(\left\|{{\mathit{\boldsymbol{V}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{W}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{S}}}}_{n}\right\|^{2}\right) (1+o_p(1)). \end{split} \end{equation*} |
Combining this with Eq (A.4), we obtain
\begin{align*} \sup\limits_{\|(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\mathit{\boldsymbol{S}}_{n})\|\leq L} \left|D^*_3\right| = \sup\limits_{\|(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\mathit{\boldsymbol{S}}_{n})\|\leq L} \left|\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)\psi_{\tau_k}(e_{i}-b_{0 k})\right| = O_p(\delta_n n^{1/2}L). \end{align*} |
Combining the results for D_1^*, D_2^* and D^*_3 , we see that \widetilde{Q}_{n}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right) is dominated by the positive quadratic term Cn\delta_n^2\left(\left\|{{\mathit{\boldsymbol{V}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{W}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{S}}}}_{n}\right\|^{2}\right) . Hence, Eq (A.1) is established, and there exists a local minimizer (\hat{\mathit{\boldsymbol{\alpha}}}, \hat{\boldsymbol{b}}, \hat{\mathit{\boldsymbol{\gamma}}}) such that
\begin{align} \|\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0\| = O_p(\delta_n),\quad \left|\hat{b}_{k}-b_{0 k}\right| = O_p(\delta_n),\quad \|{\Lambda}^{1/2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2 = O_p(\delta_n^2). \end{align} | (A.6) |
Now we consider the convergence rate of \hat{\beta} . Since
\begin{align*} \|{\Lambda}^\frac{1}{2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2 = \sum\limits_{j = 1}^m \lambda_j({\hat{\gamma}_{j}}-{\gamma}_{0 j})^2\geq \lambda_m\sum\limits_{j = 1}^m (\hat{\gamma}_{j}-{\gamma}_{0 j})^2 , \end{align*} |
and by Condition C2, we have
\begin{align} \|\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0\|^2 \leq O_p(\lambda_m^{-1}\frac{m}{n}) = O_p(m^{a+1}n^{-1}) = O_p(n^{-\frac{2b-1}{{a+2b}}}). \end{align} | (A.7) |
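The chain of equalities in (A.7) can be verified directly; this sketch assumes the polynomial eigenvalue decay \lambda_j \asymp j^{-a} implicit in Condition C2 and the truncation m \asymp n^{1/(a+2b)} (assumptions inferred from the surrounding rates):

```latex
\lambda_m^{-1}\,\frac{m}{n} \asymp m^{a}\cdot\frac{m}{n}
 = \frac{m^{a+1}}{n} \asymp n^{\frac{a+1}{a+2b}-1} = n^{-\frac{2b-1}{a+2b}} .
```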
Note that
\begin{equation*} \begin{split} \label{hatbeta} \|\hat{\beta}-\beta_0\|^2 & = \bigg\|\sum\limits_{j = 1}^{m}\hat{\gamma}_{j}\hat{v}_j-\sum\limits_{j = 1}^{\infty}{\gamma}_{0j}{v}_j\bigg\|^2\\ &\leq4\bigg\|\sum\limits_{j = 1}^{m}(\hat{\gamma}_j-\gamma_{0j})\hat{v}_j\bigg\|^2+ 4\bigg\|\sum\limits_{j = 1}^{m}{\gamma}_{0j}(\hat{v}_j-v_j)\bigg\|^2+2\sum\limits_{j = m+1}^{\infty}{\gamma}_{0j}^2\\ &\triangleq4D_1^{**}+4D_2^{**}+2D_3^{**}. \end{split} \end{equation*} |
Based on Condition C2, Eq (A.7), the orthonormality of \{\hat{v}_j\} , and the fact that \|v_j-\hat{v}_j\|^2 = O_p(n^{-1}j^2) , we obtain
\begin{equation*} \begin{split} \label{13} D_1^{**} = \bigg\|\sum\limits_{j = 1}^{m}(\hat{\gamma}_j-\gamma_{0j})\hat{v}_j\bigg\|^2 \leq\sum\limits_{j = 1}^{m}\big|\hat{\gamma}_j-\gamma_{0j}\big|^2 = \|\hat{\mathit{\boldsymbol{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0\|^2 = O_p(n^{-\frac{2b-1}{{a+2b}}}), \end{split} \end{equation*} |
\begin{equation} \begin{split} D_2^{**}& = \bigg\|\sum\limits_{j = 1}^{m}{\gamma}_{0j}(\hat{v}_j-v_j)\bigg\|^2\leq m\sum\limits_{j = 1}^{m}\|\hat{v}_j-v_j\|^2{\gamma}_{0j}^2\leq \frac{m}{n}O_p\big(\sum\limits_{j = 1}^{m}j^2{\gamma}_{0j}^2\big)\\ & = O_p\Big(n^{-1}m\sum\limits_{j = 1}^{m}j^{2-2b}\Big) = O_p(n^{-1}m) = o_p(n^{-\frac{2b-1}{a+2b}}),\\ D_3^{**}& = \sum\limits_{j = m+1}^{\infty}\gamma_{0j}^2\leq C\sum\limits_{j = m+1}^{\infty}j^{-2b} = O_p(n^{-\frac{2b-1}{a+2b}}). \end{split} \end{equation} | (A.8) |
These lead to
\|\hat{\beta}-\beta_0\|^2 = O_p(n^{-{\frac{2b-1}{a+2b}}}). |
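The rate of D_3^{**} above rests on the tail bound \sum_{j>m} j^{-2b} = O(m^{1-2b}). The following is a quick numerical sanity check of that bound, illustrative only and not part of the proof; the value b = 1.5 and the truncation point J are arbitrary choices, not quantities from the paper:

```python
# Numeric sanity check (illustrative): for b > 1/2 the tail sum
# sum_{j=m+1}^infty j^(-2b) behaves like m^(1-2b)/(2b-1), the bound
# behind the O_p(n^{-(2b-1)/(a+2b)}) order of D_3**.

def tail_sum(m: int, b: float, J: int = 10**5) -> float:
    """Truncated tail sum sum_{j=m+1}^J j^(-2b); J large enough that
    the remainder beyond J is negligible for the m, b used here."""
    return sum(j ** (-2.0 * b) for j in range(m + 1, J + 1))

b = 1.5
for m in (50, 100, 200):
    approx = m ** (1 - 2 * b) / (2 * b - 1)  # asymptotic value m^(1-2b)/(2b-1)
    print(m, round(tail_sum(m, b) / approx, 3))  # ratio tends to 1 as m grows
```

The printed ratios move toward 1 as m increases, consistent with the integral-comparison bound used in (A.8).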
Next, we turn to the asymptotic normality of \hat{\alpha} . Note that Q_{n}(\boldsymbol{\alpha}, \boldsymbol{\gamma}, \boldsymbol{b}) attains its minimum at (\hat{\boldsymbol{\alpha}}, \hat{\boldsymbol{\gamma}}, \hat{\boldsymbol{b}}) with probability tending to 1 as n tends to infinity. Then, we have the following score equations:
\begin{equation} \begin{split} &\frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i} \psi_{\tau_{k}}\left(Y_i-\hat{b}_{k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\hat{\alpha}}}-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\hat{\gamma}}}\right) = 0, \end{split} \end{equation} | (A.9) |
\begin{equation} \frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\hat{{\mathit{\boldsymbol{U}}}}}}_{i} \psi_{\tau_{k}}\left(Y_i-\hat{b}_{k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\hat{\alpha}}}-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\hat{\gamma}}}\right) = 0. \end{equation} | (A.10) |
Further, we can write (A.9) as H_{n}+\sum_{k = 1}^{K}w_k B_{n 1}^{(k)}+\sum_{k = 1}^{K} w_k B_{n 2}^{(k)} = 0 , with
\begin{equation*} \begin{split} \label{ro333} & H_{n} = \frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i} \psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right), \\ & B_{n 1}^{(k)} = \frac{1}{n} \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i}E\bigg[\psi_{\tau_k}\bigg(e_{i}-b_{0 k}+r_{i}-(\hat{b}_k-b_{0k})-{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -\hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\bigg)-\psi_{\tau_k}\left(e_{i}-b_{0 k}\right)\bigg|T_n\bigg], \\ \end{split} \end{equation*} |
\begin{equation*} \begin{split} & B_{n 2}^{(k)} = \frac{1}{n} \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i}\Bigg[\bigg\{\psi_{\tau_k}\bigg(e_{i}-b_{0 k}+r_{i}-(\hat{b}_k-b_{0k})-{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -\hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\bigg)-\psi_{\tau_k}\left(e_{i}-b_{0 k}\right)\bigg\}\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -E\bigg\{\psi_{\tau_k}\bigg(e_{i}-b_{0 k}+r_{i}-(\hat{b}_k-b_{0k})-{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -\hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\bigg)-\psi_{\tau_k}\left(e_{i}-b_{0 k}\right)\Big|T_n\bigg\}\Bigg]. \end{split} \end{equation*} |
By simple calculations, we have B_{n 1}^{(k)} = -\frac{1}{n} \sum_{i = 1}^{n} h_{\tau_k}(b_{0k})\left[{{\mathit{\boldsymbol{Z}}}}_{i}{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)+{{\mathit{\boldsymbol{Z}}}}_{i} \hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\right](1+o_p(1)).
By calculating the mean and variance directly, we obtain B_{n 2}^{(k)} = o_{p}\left(\delta_{n}\right) . Then, Eq (A.9) can be written as
\begin{equation} \begin{split} &\; \; \; \; \frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i} \psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)\\ & = \frac{1}{n}\sum\limits_{k = 1}^{K} \sum\limits_{i = 1}^{n}w_k h_{\tau_k}(b_{0k})\left[{{\mathit{\boldsymbol{Z}}}}_{i}{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)+{{\mathit{\boldsymbol{Z}}}}_{i} \hat{{{\mathit{\boldsymbol{U}}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\right](1+o_p(1)). \end{split} \end{equation} | (A.11) |
Similarly, Eq (A.10) can be rewritten as
\begin{equation*} \begin{split} \label{bn2} &\; \; \; \; \frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\hat{{\mathit{\boldsymbol{U}}}}}}_{i} \psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)\\ & = \frac{1}{n}\sum\limits_{k = 1}^{K} \sum\limits_{i = 1}^{n}w_k h_{\tau_k}(b_{0k})\left[{{\hat{{\mathit{\boldsymbol{U}}}}}}_{i}{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)+{{\hat{{\mathit{\boldsymbol{U}}}}}}_{i} \hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\right](1+o_p(1)). \end{split} \end{equation*} |
Now let \Phi_{n} = \frac{1}{n} \sum_{i = 1}^{n} \hat{\mathit{\boldsymbol{U}}}_{i} \hat{\mathit{\boldsymbol{U}}}_{i}^{\rm T}, \; \Gamma_{n} = \frac{1}{n} \sum_{i = 1}^{n} \hat{{\mathit{\boldsymbol{U}}}}_{i} {{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}, \; \Gamma = E({\mathit{\boldsymbol{U}}}_{i} \mathit{\boldsymbol{Z}}_{i}^{\rm T}), \; \Sigma_{n} = \frac{1}{n} \sum_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T}, \; \widetilde{\mathit{\boldsymbol{Z}}}_{i} = \mathit{\boldsymbol{Z}}_{i}-\Gamma_{n}^{\rm T} \Phi_{n}^{-1} \hat{\mathit{\boldsymbol{U}}}_{i}, \; \Upsilon_{n k} = \frac{1}{n} \sum_{i = 1}^{n} \hat{\boldsymbol{U}}_{i}\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right). Then, the above equation yields \sum_{k = 1}^{K}w_kh_{\tau_k}(b_{0k})(\hat{\mathit{\boldsymbol{\gamma}}}-\mathit{\boldsymbol{\gamma}}_{0}) = \sum_{k = 1}^{K}w_k\left(\Phi_{n}+o_{p}(1)\right)^{-1}[\Upsilon_{nk} +h_{\tau_k}(b_{0k})\Gamma_{n}(\mathit{\boldsymbol{\alpha}}_{0}-\hat{\mathit{\boldsymbol{\alpha}}})] . Substituting this into (A.11), we obtain
\begin{equation*} \begin{split} \label{frac} & \frac{1}{n} \sum\limits_{k = 1}^{K} w_{k} \sum\limits_{i = 1}^{n} h_{\tau_{k}}\left(b_{0 k}\right) \mathit{\boldsymbol{Z}}_{i}\left[\mathit{\boldsymbol{Z}}_{i}-\Gamma_{n}^{\rm T} \Phi_{n}^{-1} \hat{\mathit{\boldsymbol{U}}}_{i}\right]^{\rm T}\left(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_{0}\right)\left(1+o_{p}(1)\right) \\ = & \frac{1}{n} \sum\limits_{k = 1}^{K} w_{k} \sum\limits_{i = 1}^{n} \mathit{\boldsymbol{Z}}_{i}\left[\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)- \hat{\mathit{\boldsymbol{U}}}_{i}^{\rm T}\left(\Phi_{n}\right)^{-1} \Upsilon_{nk}\right]. \end{split} \end{equation*} |
Noting that
\begin{equation*} \begin{split} &\frac{1}{n} \sum\limits_{i = 1}^{n} \sum\limits_{k = 1}^{K}w_k h_{\tau_k}(b_{0 k}) \Gamma_{n}^{\rm T} \Phi_{n}^{-1} \hat{\mathit{\boldsymbol{U}}}_{i}\big[\mathit{\boldsymbol{Z}}_{i}-\Gamma_{n}^{\rm T}\Phi_{n}^{-1}\hat{\mathit{\boldsymbol{U}}}_{i}\big]^{\rm T} = 0,\\ &\frac{1}{n}\sum\limits_{i = 1}^{n}\sum\limits_{k = 1}^{K} w_k\Gamma_{n}^{\rm T} \Phi_{n}^{-1}\hat{\boldsymbol{U}}_{i}\big\{\psi_{\tau_{k}}(e_{i}-b_{0 k})-\hat{\boldsymbol{U}}_{i}^{\rm T} \Phi_{n}^{-1} \Upsilon_{nk}\big\} = 0, \end{split} \end{equation*} |
then, it is easy to conclude that
\begin{equation*} \begin{split} &\left(\frac{1}{n}\sum\limits_{k = 1}^{K} w_k \sum\limits_{i = 1}^{n} h_{\tau_k}\left(b_{0 k}\right) \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T}\right) \sqrt{n}(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_{0})\\ = &\frac{1}{\sqrt{n}} \sum\limits_{k = 1}^{K}w_k\sum\limits_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i}\left[\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)- \hat{\mathit{\boldsymbol{U}}}_{i}^{\rm T}\left(\Phi_{n}\right)^{-1} \Upsilon_{nk}\right] (1+o_{p}(1)). \end{split} \end{equation*} |
Note that
\frac{1}{\sqrt{n}} \sum\limits_{k = 1}^{K} w_{k} \sum\limits_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i} \hat{\mathit{\boldsymbol{U}}}_{i}^{\rm T} = 0. |
Then we have
\begin{equation} \begin{split} &\; \; \; \; \left(\frac{1}{n}\sum\limits_{k = 1}^{K} w_k \sum\limits_{i = 1}^{n} h_{\tau_k}\left(b_{0 k}\right) \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T}\right) \sqrt{n}\left(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_{0}\right) \\ & = \frac{1}{\sqrt{n}} \sum\limits_{k = 1}^{K}w_k\sum\limits_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i}\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)(1+o_{p}(1)). \end{split} \end{equation} | (A.12) |
It is easy to see that \Phi_{n} = \Lambda+o_p(1) and \Gamma_{n} = \Gamma+o_p(1) . Based on Lemma 1 in [8] and Condition C5, we obtain \frac{1}{n} \sum_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T} \stackrel{p}{\longrightarrow} \boldsymbol{\Sigma} as n \rightarrow \infty .
Through some simple calculations, one has
\begin{equation} \begin{split} &\frac{1}{n}\sum\limits_{k = 1}^{K} w_k \sum\limits_{i = 1}^{n} h_{\tau_k}\left(b_{0 k}\right) \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T}\stackrel{p}{\longrightarrow} \sum\limits_{k = 1}^{K}w_{k}h_{\tau_{k}}\left(b_{0 k}\right)\boldsymbol{\Sigma}, \\ &\operatorname{Var}\left(\sum\limits_{k = 1}^{K} w_{k} \psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)\right) = {{\mathit{\boldsymbol{w}}}}^{\rm T} \boldsymbol{V} {{\mathit{\boldsymbol{w}}}}, \end{split} \end{equation} | (A.13) |
where \boldsymbol{V} = \left(V_{k, l}\right)_{1 \leq k, l \leq K} with V_{k l} = E[\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)\psi_{\tau_{l}}\left(e_{i}-b_{0 l}\right)] and {{\mathit{\boldsymbol{w}}}} = \left(w_{1}, \ldots, w_{K}\right)^{\rm T} . Then, according to Eqs (A.12) and (A.13), Slutsky's theorem, and the properties of multivariate normal distributions, we can obtain
\sqrt{n}(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0) \stackrel{d}{\longrightarrow} N\left(0, \frac{{{\mathit{\boldsymbol{w}}}}^{\rm T} {{\mathit{\boldsymbol{V}}}} {{\mathit{\boldsymbol{w}}}}}{\left\{\sum\limits_{k = 1}^{K}w_{k}h_{\tau_{k}}\left(b_{0 k}\right)\right\}^2} {\boldsymbol\Sigma}^{-1}\right). |
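The last step can be spelled out in one more line. This is a sketch: the multivariate CLT for the score term (the Lindeberg condition for the triangular array \{\widetilde{\mathit{\boldsymbol{Z}}}_i\} is taken for granted here), combined with the limit of the left-hand matrix in (A.12), gives

```latex
% Write c = \sum_{k=1}^{K} w_k h_{\tau_k}(b_{0k}); by (A.13) the score satisfies
\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\widetilde{\boldsymbol{Z}}_{i}
 \sum_{k=1}^{K}w_{k}\psi_{\tau_{k}}(e_{i}-b_{0k})
 \stackrel{d}{\longrightarrow} N\left(0,\ \boldsymbol{w}^{\rm T}\boldsymbol{V}\boldsymbol{w}\,\boldsymbol{\Sigma}\right),
% and the sandwich covariance collapses because the "bread" is a scalar multiple of \Sigma:
(c\,\boldsymbol{\Sigma})^{-1}\,\boldsymbol{w}^{\rm T}\boldsymbol{V}\boldsymbol{w}\,\boldsymbol{\Sigma}\,(c\,\boldsymbol{\Sigma})^{-1}
 = \frac{\boldsymbol{w}^{\rm T}\boldsymbol{V}\boldsymbol{w}}{c^{2}}\,\boldsymbol{\Sigma}^{-1}.
```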
Lastly, we prove the third conclusion of Theorem 2. By the definition of {\rm{MSPE}}_{WCAHR} , we have
\begin{align*} \label{pe} &{\rm{MSPE}}_{WCAHR}\\ &\leq 5\sum\limits_{j = 1}^m (\hat{\gamma}_{j}-{\gamma}_{0j})^2\lambda_j +5C\left\|\sum\limits_{j = 1}^{m}\hat{\gamma}_{j}(v_j-\hat{v}_j)\right\|^2\nonumber +5\sum\limits_{j = m+1}^{\infty}{\gamma}_{0j}^2\lambda_j +5C\|\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0\|^2 {+5\left\{\sum\limits_{k = 1}^Kw_{k}\left|\hat{b}_{k}-b_{0k}\right|\right\}^2}\nonumber\\ &\triangleq 5F_1+5CF_2+5F_3+5CF_4{+5F_5}. \end{align*} |
First, by the preceding arguments, we have F_1 = \|{\Lambda}^{1/2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2 = O_p(\frac{m}{n}) . As for F_2 , by the triangle inequality and the C_r inequality,
\begin{align*} F_2& = \left\|\sum\limits_{j = 1}^{m}\hat{\gamma}_{j}(v_j-\hat{v}_j)\right\|^2 = \left\|\sum\limits_{j = 1}^{m}{\gamma}_{0j}(v_j-\hat{v}_j)+(\hat{\gamma}_{j}-{\gamma}_{0j})(v_j-\hat{v}_j)\right\|^2\\ &\leq2m\sum\limits_{j = 1}^{m}{\gamma}_{0j}^2\|v_j-\hat{v}_j\|^2+2\|{\Lambda}^\frac{1}{2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2\sum\limits_{j = 1}^{m}\lambda_j^{-1}\|v_j-\hat{v}_j\|^2\\ &\triangleq 2F_{21}+2F_{22}. \end{align*} |
By Eq (A.8), F_{21} = m\sum\limits_{j = 1}^{m}{\gamma}_{0 j}^2\|v_j-\hat{v}_j\|^2 = O_p\left(\frac{m}{n}\right) . As for F_{22} , it is easy to see that {\sum\limits_{j = 1}^{m}\lambda_j^{-1}\|v_j-\hat{v}_j\|^2 = o_p(1)} , so F_{22} = \|{\Lambda}^\frac{1}{2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2\sum\limits_{j = 1}^{m}\lambda_j^{-1}\|v_j-\hat{v}_j\|^2 = o_p\left(\frac{m}{n}\right). Next, by (A.8), we have F_3 = \sum\limits_{j = m+1}^{\infty}{\gamma}_{0 j}^2\lambda_j = O(m^{-a-2b+1}) . By (A.6), we know F_4 = O_p\left(\frac{m}{n}\right) and F_5 = O_p\left(\frac{m}{n}\right) . Then, we have
{\rm{MSPE}}_{WCAHR} = O_p(n^{-{\frac{a+2b-1}{a+2b}}}). |
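To see why all five terms balance to this single rate, note (a sketch, under the same assumption m \asymp n^{1/(a+2b)} used throughout):

```latex
F_1,\,F_2,\,F_4,\,F_5 = O_p\!\left(\frac{m}{n}\right) = O_p\!\left(n^{-\frac{a+2b-1}{a+2b}}\right),
\qquad
F_3 = O\!\left(m^{-(a+2b-1)}\right) = O\!\left(n^{-\frac{a+2b-1}{a+2b}}\right).
```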
The proof of Theorem 2 is complete.
[1] |
A. Khwaja, M. Bjorkholm, R. E. Gale, R. L. Levine, C. T. Jordan, G. Ehninger, et al., Acute myeloid leukaemia, Nat. Rev. Dis. Primers, 2 (2016), 16010. https://doi.org/10.1038/nrdp.2016.10 doi: 10.1038/nrdp.2016.10
![]() |
[2] |
E. Estey, H. Dohner, Acute myeloid leukaemia, Lancet, 368 (2006), 1894-1907. https://doi.org/10.1016/S0140-6736(06)69780-8 doi: 10.1016/S0140-6736(06)69780-8
![]() |
[3] |
L. Bullinger, K. Dohner, E. Bair, S. Frohling, R. F. Schlenk, R. Tibshirani, et al., Use of gene-expression profiling to identify prognostic subclasses in adult acute myeloid leukemia, N. Eng. J. Med., 350 (2004), 1605-1616. https://doi.org/10.1056/NEJMoa031046 doi: 10.1056/NEJMoa031046
![]() |
[4] |
E. Papaemmanuil, M. Gerstung, L. Bullinger, V. I. Gaidzik, P. Paschka, N. D. Roberts, et al., Genomic classification and prognosis in acute myeloid leukemia, N. Eng. J. Med., 374 (2016), 2209-2221. https://doi.org/10.1056/NEJMoa1516192 doi: 10.1056/NEJMoa1516192
![]() |
[5] |
C. C. Coombs, M. S. Tallman, R. L. Levine, Molecular therapy for acute myeloid leukaemia, Nat. Rev. Clin. Oncol., 13 (2016), 305-318. https://doi.org/10.1038/nrclinonc.2015.210 doi: 10.1038/nrclinonc.2015.210
![]() |
[6] |
J. W. Tyner, C. E. Tognon, D. Bottomly, B. Wilmot, S. E. Kurtz, S. L. Savage, et al., Functional genomic landscape of acute myeloid leukaemia, Nature, 562 (2018), 526-531. https://doi.org/10.1038/s41586-018-0623-z doi: 10.1038/s41586-018-0623-z
![]() |
[7] |
S. Abelson, G. Collord, S. W. K. Ng, O. Weissbrod, N. M. Cohen, E. Niemeyer, et al., Prediction of acute myeloid leukaemia risk in healthy individuals, Nature, 559 (2018), 400-404. https://doi.org/10.1038/s41586-018-0317-6 doi: 10.1038/s41586-018-0317-6
![]() |
[8] |
S. C. Meyer, R. L. Levine, Translational implications of somatic genomics in acute myeloid leukaemia, Lancet Oncol., 15 (2014), e382-394. https://doi.org/10.1016/S1470-2045(14)70008-7 doi: 10.1016/S1470-2045(14)70008-7
![]() |
[9] |
T. Bratkovic, J. Bozic, B. Rogelj, Functional diversity of small nucleolar RNAs, Nucleic Acids Res., 48 (2020), 1627-1651. https://doi.org/10.1093/nar/gkz1140 doi: 10.1093/nar/gkz1140
![]() |
[10] |
J. Ni, A. L. Tien, M. J. Fournier, Small nucleolar RNAs direct site-specific synthesis of pseudouridine in ribosomal RNA, Cell, 89 (1997), 565-573. https://doi.org/10.1016/s0092-8674(00)80238-x doi: 10.1016/s0092-8674(00)80238-x
![]() |
[11] |
V. Chikne, K. S. Rajan, M. Shalev-Benami, K. Decker, S. Cohen-Chalamish, H. Madmoni, et al., Small nucleolar RNAs controlling rRNA processing in Trypanosoma brucei, Nucleic Acids Res., 47 (2019), 2609-2629. https://doi.org/10.1093/nar/gky1287 doi: 10.1093/nar/gky1287
![]() |
[12] |
L. Xing, X. Zhang, X. Zhang, D. Tong, Expression scoring of a small-nucleolar-RNA signature identified by machine learning serves as a prognostic predictor for head and neck cancer, J. Cell Phys., 235 (2020), 8071-8084. https://doi.org/10.1002/jcp.29462 doi: 10.1002/jcp.29462
![]() |
[13] |
Y. Zhao, Y. Yan, R. Ma, X. Lv, L. Zhang, J. Wang, et al., Expression signature of six-snoRNA serves as novel non-invasive biomarker for diagnosis and prognosis prediction of renal clear cell carcinoma, J. Cell Mol. Med., 24 (2020), 2215-2228. https://doi.org/10.1111/jcmm.14886 doi: 10.1111/jcmm.14886
![]() |
[14] | L. Huang, X. Z. Liang, Y. Deng, Y. B. Liang, X. Zhu, X. Y. Liang, et al., Prognostic value of small nucleolar RNAs (snoRNAs) for colon adenocarcinoma based on RNA sequencing data, Pathol. Res. Pract., 216 (2020), 152937. https: //doi.org/10.1016/j.prp.2020.152937 |
[15] |
J. Gong, Y. Li, C. J. Liu, Y. Xiang, C. Li, Y. Ye, et al., A pan-cancer analysis of the expression and clinical relevance of small nucleolar RNAs in human cancer, Cell Rep., 21 (2017), 1968-1981. https://doi.org/10.1016/j.celrep.2017.10.070 doi: 10.1016/j.celrep.2017.10.070
![]() |
[16] | The Cancer Genome Atlas Research Network, Genomic and epigenomic landscapes of adult de novo acute myeloid leukemia, N. Eng. J. Med., 368 (2013), 2059-2074. https://doi.org/10.1056/NEJMoa1301689 |
[17] |
M. D. Robinson, D. J. McCarthy, G. K. Smyth, edgeR: a Bioconductor package for differential expression analysis of digital gene expression data, Bioinformatics, 26 (2010), 139-140. https://doi.org/10.1093/bioinformatics/btp616 doi: 10.1093/bioinformatics/btp616
![]() |
[18] |
R. Huang, X. Liao, Q. Li, Identification and validation of potential prognostic gene biomarkers for predicting survival in patients with acute myeloid leukemia, Oncol. Targets Ther., 10 (2017), 5243-5254. https://doi.org/10.2147/OTT.S147717 doi: 10.2147/OTT.S147717
![]() |
[19] |
X. Liao, X. Wang, K. Huang, C. Yang, T. Yu, C. Han, et al., Genome-scale analysis to identify prognostic microRNA biomarkers in patients with early stage pancreatic ductal adenocarcinoma after pancreaticoduodenectomy, Cancer Manage. Res., 10 (2018), 2537-2551. https://doi.org/10.2147/CMAR.S168351 doi: 10.2147/CMAR.S168351
![]() |
[20] |
X. Liao, X. Wang, K. Huang, C. Han, J. Deng, T. Yu, et al., Integrated analysis of competing endogenous RNA network revealing potential prognostic biomarkers of hepatocellular carcinoma, J. Cancer, 10 (2019), 3267-3283. https://doi.org/10.7150/jca.29986 doi: 10.7150/jca.29986
![]() |
[21] |
W. H. Da, B. T. Sherman, R. A. Lempicki, Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources, Nat. Protoc., 4 (2009), 44-57. https://doi.org/10.1038/nprot.2008.211 doi: 10.1038/nprot.2008.211
![]() |
[22] |
V. K. Mootha, C. M. Lindgren, K. F. Eriksson, A. Subramanian, S. Sihag, J. Lehar, et al., PGC-1alpha-responsive genes involved in oxidative phosphorylation are coordinately downregulated in human diabetes, Nat. Genet., 34 (2003), 267-273. https://doi.org/10.1038/ng1180 doi: 10.1038/ng1180
![]() |
[23] |
A. Subramanian, P. Tamayo, V. K. Mootha, S. Mukherjee, B. L. Ebert, M. A. Gillette, et al., Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles, Proc. Nat. Acad. Sci. U. S. A., 102 (2005), 15545-15550. https://doi.org/10.1073/pnas.0506580102 doi: 10.1073/pnas.0506580102
![]() |
[24] |
A. Liberzon, C. Birger, H. Thorvaldsdottir, M. Ghandi, J. P. Mesirov, P. Tamayo, The molecular signatures database (MSigDB) hallmark gene set collection, Cell Syst., 1 (2015), 417-425. https://doi.org/10.1016/j.cels.2015.12.004 doi: 10.1016/j.cels.2015.12.004
![]() |
[25] |
A. Liberzon, A. Subramanian, R. Pinchback, H. Thorvaldsdottir, P. Tamayo, J. P. Mesirov, Molecular signatures database (MSigDB) 3.0, Bioinformatics, 27 (2011), 1739-1740. https://doi.org/10.1093/bioinformatics/btr260 doi: 10.1093/bioinformatics/btr260
![]() |
[26] |
J. Lamb, The connectivity map: a new tool for biomedical research, Nat. Rev. Cancer, 7 (2007), 54-60. https://doi.org/10.1038/nrc2044 doi: 10.1038/nrc2044
![]() |
[27] |
J. Lamb, E. D. Crawford, D. Peck, J. W. Modell, I. C. Blat, M. J. Wrobel, et al., The connectivity map: using gene-expression signatures to connect small molecules, genes, and disease, Science, 313 (2006), 1929-1935. https://doi.org/10.1126/science.1132939 doi: 10.1126/science.1132939
![]() |
[28] | E. W. Sayers, J. Beck, J. R. Brister, E. E. Bolton, K. Canese, D. C. Comeau, et al., Database resources of the national center for biotechnology information, Nucleic Acids Res., 48 (2020), D9-D16. https://doi.org/10.1093/nar/gkz899 |
[29] | S. Kim, J. Chen, T. Cheng, A. Gindulyte, J. He, S. He, et al., PubChem 2019 update: improved access to chemical data, Nucleic Acids Res., 47 (2019), D1102-D1109. https://doi.org/10.1093/nar/gky1033 |
[30] | M. Kuhn, D. Szklarczyk, A. Franceschini, M. Campillos, C. V. Mering, L. J. Jensen, et al., STITCH 2: an interaction network database for small molecules and proteins, Nucleic Acids Res., 38 (2010), D552-556. https://doi.org/10.1093/nar/gkp937 |
[31] | M. Kuhn, C. V. Mering, M. Campillos, L. J. Jensen, P. Bork, STITCH: interaction networks of chemicals and proteins, Nucleic Acids Res., 36 (2008), D684-688. https://doi.org/10.1093/nar/gkm795 |
[32] | D. Szklarczyk, A. Santos, C. V. Mering, L. J. Jensen, P. Bork, M. Kuhn, STITCH 5: augmenting protein-chemical interaction networks with tissue and affinity data, Nucleic Acids Res., 44 (2016), D380-384. https://doi.org/10.1093/nar/gkv1277 |
[33] | K. Yoshihara, M. Shahmoradgoli, E. Martinez, R. Vegesna, H. Kim, W. Torres-Garcia, et al., Inferring tumour purity and stromal and immune cell admixture from expression data, Nat. Commun., 4 (2013), 2612. https://doi.org/10.1038/ncomms3612 |
[34] | B. Chen, M. S. Khodadoust, C. L. Liu, A. M. Newman, A. A. Alizadeh, Profiling tumor infiltrating immune cells with CIBERSORT, Methods Mol. Biol., 1711 (2018), 243-259. https://doi.org/10.1007/978-1-4939-7493-1_12 |
[35] | Y. Benjamini, D. Drai, G. Elmer, N. Kafkafi, I. Golani, Controlling the false discovery rate in behavior genetics research, Behav. Brain Res., 125 (2001), 279-284. https://doi.org/10.1016/s0166-4328(01)00297-2 |
[36] | P. Shannon, A. Markiel, O. Ozier, N. S. Baliga, J. T. Wang, D. Ramage, et al., Cytoscape: a software environment for integrated models of biomolecular interaction networks, Genome Res., 13 (2003), 2498-2504. https://doi.org/10.1101/gr.1239303 |
[37] | D. Otasek, J. H. Morris, J. Boucas, A. R. Pico, B. Demchak, Cytoscape automation: empowering workflow-based network analysis, Genome Biol., 20 (2019), 185. https://doi.org/10.1186/s13059-019-1758-4 |
[38] | D. Ronchetti, L. Mosca, G. Cutrona, G. Tuana, M. Gentile, S. Fabris, et al., Small nucleolar RNAs as new biomarkers in chronic lymphocytic leukemia, BMC Med. Genomics, 6 (2013), 27. https://doi.org/10.1186/1755-8794-6-27 |
[39] | E. Bignotti, S. Calza, R. A. Tassi, L. Zanotti, E. Bandiera, E. Sartori, et al., Identification of stably expressed reference small non-coding RNAs for microRNA quantification in high-grade serous ovarian carcinoma tissues, J. Cell Mol. Med., 20 (2016), 2341-2348. https://doi.org/10.1111/jcmm.12927 |
[40] | L. H. Mao, S. Y. Chen, X. Q. Li, F. Xu, J. Lei, Q. L. Wang, et al., LncRNA-LALR1 upregulates small nucleolar RNA SNORD72 to promote growth and invasion of hepatocellular carcinoma, Aging (Albany NY), 12 (2020), 4527-4546. https://doi.org/10.18632/aging.102907 |
[41] | F. G. Lafaille, O. Harschnitz, Y. S. Lee, P. Zhang, M. L. Hasek, G. Kerner, et al., Human SNORA31 variations impair cortical neuron-intrinsic immunity to HSV-1 and underlie herpes simplex encephalitis, Nat. Med., 25 (2019), 1873-1884. https://doi.org/10.1038/s41591-019-0672-3 |
[42] | H. Davanian, A. Balasiddaiah, R. Heymann, M. Sundstrom, P. Redenstrom, M. Silfverberg, et al., Ameloblastoma RNA profiling uncovers a distinct non-coding RNA signature, Oncotarget, 8 (2017), 4530-4542. https://doi.org/10.18632/oncotarget.13889 |
[43] | I. Nepstad, K. J. Hatfield, I. S. Gronningsaeter, H. Reikvam, The PI3K-Akt-mTOR signaling pathway in human acute myeloid leukemia (AML) cells, Int. J. Mol. Sci., 21 (2020), 2907. https://doi.org/10.3390/ijms21082907 |
[44] | L. Herschbein, J. L. Liesveld, Dueling for dual inhibition: Means to enhance effectiveness of PI3K/Akt/mTOR inhibitors in AML, Blood Rev., 32 (2018), 235-248. https://doi.org/10.1016/j.blre.2017.11.006 |
[45] | J. Bertacchini, N. Heidari, L. Mediani, S. Capitani, M. Shahjahani, A. Ahmadzadeh, et al., Targeting PI3K/AKT/mTOR network for treatment of leukemia, Cell Mol. Life Sci., 72 (2015), 2337-2347. https://doi.org/10.1007/s00018-015-1867-5 |
[46] | Y. Su, X. Li, J. Ma, J. Zhao, S. Liu, G. Wang, et al., Targeting PI3K, mTOR, ERK and Bcl-2 signaling network shows superior antileukemic activity against AML ex vivo, Biochem. Pharmacol., 148 (2018), 13-26. https://doi.org/10.1016/j.bcp.2017.11.022 |
[47] | Y. Tabe, A. Tafuri, K. Sekihara, H. Yang, M. Konopleva, Inhibition of mTOR kinase as a therapeutic target for acute myeloid leukemia, Expert Opin. Ther. Targets, 21 (2017), 705-714. https://doi.org/10.1080/14728222.2017.1333600 |
[48] | N. Guo, M. Azadniv, M. Coppage, M. Nemer, J. Mendler, M. Becker, et al., Effects of neddylation and mTOR inhibition in acute myelogenous leukemia, Transl. Oncol., 12 (2019), 602-613. https://doi.org/10.1016/j.tranon.2019.01.001 |
[49] | J. Wu, G. Hu, Y. Dong, R. Ma, Z. Yu, S. Jiang, et al., Matrine induces Akt/mTOR signalling inhibition-mediated autophagy and apoptosis in acute myeloid leukaemia cells, J. Cell Mol. Med., 21 (2017), 1171-1181. https://doi.org/10.1111/jcmm.13049 |
[50] | Y. Feng, L. Wu, mTOR up-regulation of PFKFB3 is essential for acute myeloid leukemia cell survival, Biochem. Biophys. Res. Commun., 483 (2017), 897-903. |
[51] | J. Bertacchini, C. Frasson, F. Chiarini, D. D'Avella, B. Accordi, L. Anselmi, et al., Dual inhibition of PI3K/mTOR signaling in chemoresistant AML primary cells, Adv. Biol. Regul., 68 (2018), 2-9. https://doi.org/10.1016/j.jbior.2018.03.001 |
[52] | V. Stavropoulou, S. Kaspar, L. Brault, M. A. Sanders, S. Juge, S. Morettini, et al., MLL-AF9 expression in hematopoietic stem cells drives a highly invasive AML expressing EMT-related genes linked to poor outcome, Cancer Cell, 30 (2016), 43-58. https://doi.org/10.1016/j.ccell.2016.05.011 |
[53] | T. J. Zhang, J. D. Zhou, J. C. Ma, Z. Q. Deng, Z. Qian, D. M. Yao, et al., CDH1 (E-cadherin) expression independently affects clinical outcome in acute myeloid leukemia with normal cytogenetics, Clin. Chem. Lab. Med., 55 (2017), 123-131. https://doi.org/10.1515/cclm-2016-0205 |
[54] | S. Wu, Y. Du, J. Beckford, H. Alachkar, Upregulation of the EMT marker vimentin is associated with poor clinical outcome in acute myeloid leukemia, J. Transl. Med., 16 (2018), 170. https://doi.org/10.1186/s12967-018-1539-y |
[55] | L. Zhong, J. Chen, X. Huang, Y. Li, T. Jiang, Monitoring immunoglobulin heavy chain and T-cell receptor gene rearrangement in cfDNA as minimal residual disease detection for patients with acute myeloid leukemia, Oncol. Lett., 16 (2018), 2279-2288. https://doi.org/10.3892/ol.2018.8966 |
[56] | A. G. Chapuis, D. N. Egan, M. Bar, T. M. Schmitt, M. S. McAfee, K. G. Paulson, et al., T cell receptor gene therapy targeting WT1 prevents acute myeloid leukemia relapse post-transplant, Nat. Med., 25 (2019), 1064-1072. https://doi.org/10.1038/s41591-019-0472-9 |
[57] | H. J. Stauss, S. Thomas, M. Cesco-Gaspere, D. P. Hart, S. A. Xue, A. Holler, et al., WT1-specific T cell receptor gene therapy: improving TCR function in transduced T cells, Blood Cells Mol. Dis., 40 (2008), 113-116. https://doi.org/10.1016/j.bcmd.2007.06.018 |
[58] | Y. Wang, A. V. Krivtsov, A. U. Sinha, T. E. North, W. Goessling, Z. Feng, et al., The Wnt/beta-catenin pathway is required for the development of leukemia stem cells in AML, Science, 327 (2010), 1650-1653. https://doi.org/10.1126/science.1186624 |
[59] | A. M. Gruszka, D. Valli, M. Alcalay, Wnt signalling in acute myeloid leukaemia, Cells, 8 (2019), 1403. https://doi.org/10.3390/cells8111403 |
[60] | F. J. Staal, F. Famili, L. G. Perez, K. Pike-Overzet, Aberrant Wnt signaling in leukemia, Cancers (Basel), 8 (2016), 78. https://doi.org/10.3390/cancers8090078 |
[61] | A. Valencia, J. Roman-Gomez, J. Cervera, E. Such, E. Barragan, P. Bolufer, et al., Wnt signaling pathway is epigenetically regulated by methylation of Wnt antagonists in acute myeloid leukemia, Leukemia, 23 (2009), 1658-1666. https://doi.org/10.1038/leu.2009.86 |
[62] | E. A. Griffiths, S. D. Gore, C. Hooker, M. A. McDevitt, J. E. Karp, B. D. Smith, et al., Acute myeloid leukemia is characterized by Wnt pathway inhibitor promoter hypermethylation, Leuk. Lymphoma, 51 (2010), 1711-1719. https://doi.org/10.3109/10428194.2010.496505 |
[63] | C. Gasparini, C. Celeghini, L. Monasta, G. Zauli, NF-kappaB pathways in hematological malignancies, Cell Mol. Life Sci., 71 (2014), 2083-2102. https://doi.org/10.1007/s00018-013-1545-4 |
[64] | M. Breccia, G. Alimena, NF-kappaB as a potential therapeutic target in myelodysplastic syndromes and acute myeloid leukemia, Expert Opin. Ther. Targets, 14 (2010), 1157-1176. https://doi.org/10.1517/14728222.2010.522570 |
[65] | M. C. Bosman, J. J. Schuringa, E. Vellenga, Constitutive NF-kappaB activation in AML: Causes and treatment strategies, Crit. Rev. Oncol. Hematol., 98 (2016), 35-44. https://doi.org/10.1016/j.critrevonc.2015.10.001 |
[66] | J. Zhou, Y. Q. Ching, W. J. Chng, Aberrant nuclear factor-kappa B activity in acute myeloid leukemia: from molecular pathogenesis to therapeutic target, Oncotarget, 6 (2015), 5490-5500. https://doi.org/10.18632/oncotarget.3545 |
[67] | C. H. Choi, H. Xu, H. Bark, T. B. Lee, J. Yun, S. I. Kang, et al., Balance of NF-kappaB and p38 MAPK is a determinant of radiosensitivity of the AML-2 and its doxorubicin-resistant cell lines, Leuk. Res., 31 (2007), 1267-1276. https://doi.org/10.1016/j.leukres.2006.11.006 |
[68] | A. Volk, J. Li, J. Xin, D. You, J. Zhang, X. Liu, et al., Co-inhibition of NF-kappaB and JNK is synergistic in TNF-expressing human AML, J. Exp. Med., 211 (2014), 1093-1108. https://doi.org/10.1084/jem.20130990 |
[69] | M. C. Bosman, H. Schepers, J. Jaques, A. Z. Brouwers-Vos, W. J. Quax, J. J. Schuringa, et al., The TAK1-NF-kappaB axis as therapeutic target for AML, Blood, 124 (2014), 3130-3140. https://doi.org/10.1182/blood-2014-04-569780 |
[70] | M. Ma, X. Wang, N. Liu, F. Shan, Y. Feng, Low-dose naltrexone inhibits colorectal cancer progression and promotes apoptosis by increasing M1-type macrophages and activating the Bax/Bcl-2/caspase-3/PARP pathway, Int. Immunopharmacol., 83 (2020), 106388. https://doi.org/10.1016/j.intimp.2020.106388 |
[71] | N. Liu, M. Ma, N. Qu, R. Wang, H. Chen, F. Hu, et al., Low-dose naltrexone inhibits the epithelial-mesenchymal transition of cervical cancer cells in vitro and effects indirectly on tumor-associated macrophages in vivo, Int. Immunopharmacol., 86 (2020), 106718. https://doi.org/10.1016/j.intimp.2020.106718 |
[72] | A. C. Menezes, M. Carvalheiro, J. M. P. F. de Oliveira, A. Ascenso, H. Oliveira, Cytotoxic effect of the serotonergic drug 1-(1-Naphthyl)piperazine against melanoma cells, Toxicol. In Vitro, 47 (2018), 72-78. https://doi.org/10.1016/j.tiv.2017.11.011 |
[73] | G. G. Wei, L. Gao, Z. Y. Tang, P. Lin, L. B. Liang, J. J. Zeng, et al., Drug repositioning in head and neck squamous cell carcinoma: An integrated pathway analysis based on connectivity map and differential gene expression, Pathol. Res. Pract., 215 (2019), 152378. https://doi.org/10.1016/j.prp.2019.03.007 |
[74] | J. Takezawa, Y. Ishimi, K. Yamada, Proteasome inhibitors remarkably prevent translesion replication in cancer cells but not normal cells, Cancer Sci., 99 (2008), 863-871. https://doi.org/10.1111/j.1349-7006.2008.00764.x |
[75] | P. G. Richardson, C. Mitsiades, T. Hideshima, K. C. Anderson, Bortezomib: proteasome inhibition as an effective anticancer therapy, Annu. Rev. Med., 57 (2006), 33-47. https://doi.org/10.1146/annurev.med.57.042905.122625 |
[76] | I. Zavrski, C. Naujokat, K. Niemoller, C. Jakob, U. Heider, C. Langelotz, et al., Proteasome inhibitors induce growth inhibition and apoptosis in myeloma cell lines and in human bone marrow myeloma cells irrespective of chromosome 13 deletion, J. Cancer Res. Clin. Oncol., 129 (2003), 383-391. https://doi.org/10.1007/s00432-003-0454-6 |
[77] | W. X. Wang, B. H. Kong, P. Li, K. Song, X. Qu, B. X. Cui, et al., Effect of extracellular signal regulated kinase signal pathway on apoptosis induced by MG262 in ovarian cancer cells, Zhonghua Fu Chan Ke Za Zhi, 43 (2008), 690-694 |
[78] | J. Y. Wu, S. S. Lin, F. T. Hsu, J. G. Chung, Fluoxetine inhibits DNA repair and NF-kB-modulated metastatic potential in non-small cell lung cancer, Anticancer Res., 38 (2018), 5201-5210. https://doi.org/10.21873/anticanres.12843 |
[79] | L. C. Hsu, H. F. Tu, F. T. Hsu, P. F. Yueh, I. T. Chiang, Beneficial effect of fluoxetine on anti-tumor progression on hepatocellular carcinoma and non-small cell lung cancer bearing animal model, Biomed. Pharmacother., 126 (2020), 110054. https://doi.org/10.1016/j.biopha.2020.110054 |
[80] | A. R. Mun, S. J. Lee, G. B. Kim, H. S. Kang, J. S. Kim, S. J. Kim, Fluoxetine-induced apoptosis in hepatocellular carcinoma cells, Anticancer Res., 33 (2013), 3691-3697 |
[81] | D. Sun, L. Zhu, Y. Zhao, Y. Jiang, L. Chen, Y. Yu, et al., Fluoxetine induces autophagic cell death via eEF2K-AMPK-mTOR-ULK complex axis in triple negative breast cancer, Cell Prolif., 51 (2018), e12402. https://doi.org/10.1111/cpr.12402 |
[82] | A. M. Kabel, A. A. Elkhoely, Ameliorative potential of fluoxetine/raloxifene combination on experimentally induced breast cancer, Tissue Cell, 48 (2016), 89-95. https://doi.org/10.1016/j.tice.2016.02.002 |
[83] | M. Bowie, P. Pilie, J. Wulfkuhle, S. Lem, A. Hoffman, S. Desai, et al., Fluoxetine induces cytotoxic endoplasmic reticulum stress and autophagy in triple negative breast cancer, World J. Clin. Oncol., 6 (2015), 299-311. https://doi.org/10.5306/wjco.v6.i6.299 |
[84] | T. M. Khing, W. W. Po, U. D. Sohn, Fluoxetine enhances anti-tumor activity of paclitaxel in gastric adenocarcinoma cells by triggering apoptosis and necroptosis, Anticancer Res., 39 (2019), 6155-6163. https://doi.org/10.21873/anticanres.13823 |
[85] | P. P. Khin, W. W. Po, W. Thein, U. D. Sohn, Apoptotic effect of fluoxetine through the endoplasmic reticulum stress pathway in the human gastric cancer cell line AGS, Naunyn Schmiedebergs Arch. Pharmacol., 393 (2020), 537-549. https://doi.org/10.1007/s00210-019-01739-7 |
[86] | M. Marcinkute, S. Afshinjavid, A. A. Fatokun, F. A. Javid, Fluoxetine selectively induces p53-independent apoptosis in human colorectal cancer cells, Eur. J. Pharmacol., 857 (2019), 172441. https://doi.org/10.1016/j.ejphar.2019.172441 |
[87] | V. Kannen, S. B. Garcia, W. A. Silva, M. Gasser, R. Monch, E. J. Alho, et al., Oncostatic effects of fluoxetine in experimental colon cancer models, Cell Signal, 27 (2015), 1781-1788. https://doi.org/10.1016/j.cellsig.2015.05.008 |
[88] | V. Kannen, H. Hintzsche, D. L. Zanette, W. A. Silva, S. B. Garcia, A. M. Waaga-Gasser, et al., Antiproliferative effects of fluoxetine on colon cancer cells and in a colonic carcinogen mouse model, PLoS One, 7 (2012), e50043. https://doi.org/10.1371/journal.pone.0050043 |
[89] | H. Stopper, S. B. Garcia, A. M. Waaga-Gasser, V. Kannen, Antidepressant fluoxetine and its potential against colon tumors, World J. Gastrointest. Oncol., 6 (2014), 11-21. https://doi.org/10.4251/wjgo.v6.i1.11 |
[90] | S. J. Koh, J. M. Kim, I. K. Kim, N. Kim, H. C. Jung, I. S. Song, et al., Fluoxetine inhibits NF-kappaB signaling in intestinal epithelial cells and ameliorates experimental colitis and colitis-associated colon cancer in mice, Am. J. Physiol. Gastrointest. Liver Phys., 301 (2011), G9-19. https://doi.org/10.1152/ajpgi.00267.2010 |
[91] | K. H. Liu, S. T. Yang, Y. K. Lin, J. W. Lin, Y. H. Lee, J. Y. Wang, et al., Fluoxetine, an antidepressant, suppresses glioblastoma by evoking AMPAR-mediated calcium-dependent apoptosis, Oncotarget, 6 (2015), 5088-5101. https://doi.org/10.18632/oncotarget.3243 |
[92] | J. Ma, Y. R. Yang, W. Chen, M. H. Chen, H. Wang, X. D. Wang, et al., Fluoxetine synergizes with temozolomide to induce the CHOP-dependent endoplasmic reticulum stress-related apoptosis pathway in glioma cells, Oncol. Rep., 36 (2016), 676-684. https://doi.org/10.3892/or.2016.4860 |
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | \hat{\alpha}_1 | \hat{\alpha}_2 | ||
Bias | Sd | Bias | Sd | |||||
N(0, 1) | 200 | OLS | 0.2690 | 0.0229 | -0.0048 | 0.1074 | 0.0096 | 0.1059 |
LAD | 0.3294 | 0.0354 | -0.0099 | 0.1320 | 0.0115 | 0.1333 | ||
ESL | 0.3434 | 0.0394 | 0.0009 | 0.1405 | -0.0013 | 0.1403 | ||
H-ESL | 0.2824 | 0.0259 | 0.0055 | 0.1141 | -0.0059 | 0.1133 | ||
AHR(0.25) | 0.3294 | 0.0786 | -0.0077 | 0.2014 | 0.0105 | 0.1948 | ||
Huber | 0.2758 | 0.0234 | -0.0063 | 0.1090 | 0.0102 | 0.1067 | ||
AHR(0.75) | 0.5251 | 0.0754 | -0.0092 | 0.1894 | 0.0150 | 0.1981 | ||
CAHR | 0.2689 | 0.0233 | -0.0051 | 0.1095 | 0.0091 | 0.1060 | ||
WCAHR | 0.2693 | 0.0230 | -0.0055 | 0.1080 | 0.0101 | 0.1058 | ||
400 | OLS | 0.1004 | 0.0105 | -0.0011 | 0.0710 | 0.0015 | 0.0738 | |
LAD | 0.1304 | 0.0163 | -0.0027 | 0.0880 | 0.0045 | 0.0924 | ||
ESL | 0.1349 | 0.0175 | 0.0010 | 0.0918 | 0.0008 | 0.0955 | ||
H-ESL | 0.1031 | 0.0113 | 0.0011 | 0.0727 | 0.0010 | 0.0773 | ||
AHR(0.25) | 0.1304 | 0.0358 | -0.0048 | 0.1313 | -0.0013 | 0.1361 | ||
Huber | 0.1048 | 0.0107 | -0.0018 | 0.0720 | 0.0027 | 0.0742 | ||
AHR(0.75) | 0.2337 | 0.0405 | 0.0043 | 0.1454 | 0.0019 | 0.1390 | ||
CAHR | 0.1006 | 0.0107 | -0.0014 | 0.0721 | 0.0020 | 0.0743 | ||
WCAHR | 0.1005 | 0.0105 | -0.0014 | 0.0709 | 0.0019 | 0.0736 | ||
SN(0, 1, 15) | 200 | OLS | 0.2929 | 0.0241 | -0.0037 | 0.1127 | -0.0012 | 0.1065
LAD | 0.2452 | 0.0137 | -0.0001 | 0.0844 | -0.0007 | 0.0813 | ||
ESL | 0.3665 | 0.0387 | -0.0031 | 0.1412 | -0.0023 | 0.1368 | ||
H-ESL | 0.3377 | 0.0305 | -0.0009 | 0.1244 | -0.0028 | 0.1225 | ||
AHR(0.25) | 0.2260 | 0.0086 | 0.0012 | 0.0652 | -0.0022 | 0.0655 | ||
Huber | 0.1998 | 0.0098 | -0.0011 | 0.0698 | -0.0007 | 0.0702 | ||
AHR(0.75) | 0.2314 | 0.0172 | -0.0038 | 0.0949 | -0.0001 | 0.0903 | ||
CAHR | 0.2122 | 0.0099 | -0.0007 | 0.0717 | -0.0010 | 0.0691 | ||
WCAHR | 0.1884 | 0.0086 | 0.0002 | 0.0659 | -0.0017 | 0.0652 | ||
400 | OLS | 0.0994 | 0.0122 | -0.0024 | 0.0794 | 0.0052 | 0.0769 | |
LAD | 0.0718 | 0.0065 | -0.0004 | 0.0569 | 0.0001 | 0.0568 | ||
ESL | 0.1372 | 0.0193 | -0.0012 | 0.1003 | 0.0031 | 0.0962 | ||
H-ESL | 0.1022 | 0.0139 | -0.0043 | 0.0832 | 0.0056 | 0.0832 | ||
AHR(0.25) | 0.0796 | 0.0037 | 0.0028 | 0.0418 | -0.0010 | 0.0437 | ||
Huber | 0.0688 | 0.0045 | 0.0005 | 0.0468 | 0.0003 | 0.0483 | ||
AHR(0.75) | 0.0912 | 0.0082 | -0.0010 | 0.0627 | 0.0019 | 0.0652 | ||
CAHR | 0.0712 | 0.0043 | 0.0007 | 0.0454 | 0.0009 | 0.0471 | ||
WCAHR | 0.0567 | 0.0035 | -0.0009 | 0.0418 | 0.0018 | 0.0424 | ||
St(0, 1, 5, 3) | 200 | OLS | 0.4781 | 0.0695 | 0.0047 | 0.1815 | -0.0185 | 0.1902
LAD | 0.2461 | 0.0215 | 0.0057 | 0.1052 | -0.0094 | 0.1015 | ||
ESL | 0.3793 | 0.0451 | 0.0023 | 0.1538 | -0.0062 | 0.1462 | ||
H-ESL | 0.3682 | 0.0443 | 0.0059 | 0.1533 | -0.0078 | 0.1437 | ||
AHR(0.25) | 0.2541 | 0.0203 | 0.0014 | 0.0987 | 0.0039 | 0.1025 | ||
Huber | 0.2829 | 0.0284 | 0.0082 | 0.1204 | -0.0013 | 0.1175 | ||
AHR(0.75) | 1.8541 | 0.3745 | 0.0128 | 0.4566 | -0.0243 | 0.4065 | ||
CAHR | 0.3606 | 0.0286 | 0.0037 | 0.1188 | -0.0001 | 0.1202 | ||
WCAHR | 0.2205 | 0.0169 | 0.0018 | 0.0920 | -0.0089 | 0.0915 | ||
400 | OLS | 0.2310 | 0.0325 | -0.0021 | 0.1296 | 0.0008 | 0.1252 | |
LAD | 0.1004 | 0.0109 | 0.0006 | 0.0742 | -0.0001 | 0.0735 | ||
ESL | 0.1563 | 0.0212 | -0.0053 | 0.1002 | 0.0027 | 0.1054 | ||
H-ESL | 0.1516 | 0.0178 | -0.0045 | 0.0917 | 0.0011 | 0.0966 | ||
AHR(0.25) | 0.1000 | 0.0088 | 0.0028 | 0.0671 | -0.0004 | 0.0659 | ||
Huber | 0.1108 | 0.0116 | 0.0019 | 0.0781 | 0.0021 | 0.0743 | ||
AHR(0.75) | 1.5565 | 0.3644 | -0.0416 | 0.4269 | 0.0216 | 0.4242 | ||
CAHR | 0.1496 | 0.0153 | 0.0016 | 0.0873 | 0.0007 | 0.0874 | ||
WCAHR | 0.0838 | 0.0076 | -0.0015 | 0.0616 | -0.0000 | 0.0618 | ||
MN | 200 | OLS | 0.8806 | 0.1464 | -0.0134 | 0.2675 | -0.0016 | 0.2732 |
LAD | 0.3783 | 0.0358 | -0.0005 | 0.1320 | -0.0013 | 0.1355 | ||
ESL | 0.3719 | 0.0363 | 0.0025 | 0.1331 | -0.0044 | 0.1361 | ||
H-ESL | 0.3101 | 0.0280 | -0.0018 | 0.1175 | -0.0028 | 0.1189 | ||
AHR(0.25) | 0.3297 | 0.1148 | -0.0120 | 0.2346 | 0.0105 | 0.2439 | ||
Huber | 0.3685 | 0.0499 | -0.0042 | 0.1613 | 0.0071 | 0.1543 | ||
AHR(0.75) | 0.7037 | 0.1060 | 0.0078 | 0.2321 | -0.0033 | 0.2281 | ||
CAHR | 0.7857 | 0.1035 | 0.0031 | 0.2292 | 0.0035 | 0.2257 | ||
WCAHR | 0.3252 | 0.0340 | -0.0046 | 0.1289 | -0.0033 | 0.1316 | ||
400 | OLS | 0.3822 | 0.0715 | 0.0063 | 0.1887 | -0.0099 | 0.1892 | |
LAD | 0.1307 | 0.0190 | 0.0055 | 0.0983 | -0.0060 | 0.0965 | ||
ESL | 0.1268 | 0.0180 | 0.0032 | 0.0963 | -0.0043 | 0.0932 | ||
H-ESL | 0.1032 | 0.0129 | 0.0048 | 0.0807 | -0.0067 | 0.0793 | ||
AHR(0.25) | 0.1491 | 0.0604 | 0.0052 | 0.1731 | 0.0055 | 0.1742 | ||
Huber | 0.1332 | 0.0156 | -0.0007 | 0.0880 | 0.0033 | 0.0887 | ||
AHR(0.75) | 0.3391 | 0.0505 | -0.0049 | 0.1597 | 0.0040 | 0.1581 | ||
CAHR | 0.3435 | 0.0419 | -0.0030 | 0.1442 | 0.0074 | 0.1449 | ||
WCAHR | 0.1127 | 0.0157 | 0.0055 | 0.0894 | -0.0077 | 0.0872 | ||
Bimodal | 200 | OLS | 0.4317 | 0.0560 | -0.0001 | 0.1690 | -0.0018 | 0.1657
LAD | 0.8634 | 0.1201 | -0.0040 | 0.2438 | 0.0008 | 0.2463 | ||
ESL | 2.2921 | 0.4163 | -0.0148 | 0.4541 | -0.0002 | 0.4582 | ||
H-ESL | 0.4417 | 0.0558 | -0.0004 | 0.1687 | -0.0017 | 0.1653 | ||
AHR(0.25) | 0.9240 | 0.3169 | -0.0258 | 0.3973 | 0.0352 | 0.3964 | ||
Huber | 0.5694 | 0.0776 | -0.0056 | 0.2018 | 0.0129 | 0.1914 | ||
AHR(0.75) | 1.8171 | 0.3150 | 0.0110 | 0.4044 | -0.0107 | 0.3889 | ||
CAHR | 0.5191 | 0.0546 | -0.0075 | 0.1652 | 0.0116 | 0.1647 | ||
WCAHR | 0.4215 | 0.0552 | 0.0016 | 0.1679 | -0.0016 | 0.1642 | ||
400 | OLS | 0.1861 | 0.0296 | 0.0032 | 0.1170 | -0.0017 | 0.1262 | |
LAD | 0.4069 | 0.0670 | -0.0068 | 0.1765 | 0.0061 | 0.1891 | ||
ESL | 1.5317 | 0.2511 | -0.0185 | 0.3481 | 0.0033 | 0.3600 | ||
H-ESL | 0.1860 | 0.0298 | 0.0032 | 0.1175 | -0.0021 | 0.1265 | ||
AHR(0.25) | 0.4420 | 0.1620 | -0.0081 | 0.2835 | -0.0025 | 0.2855 | ||
Huber | 0.2369 | 0.0454 | 0.0023 | 0.1483 | -0.0082 | 0.1529 | ||
AHR(0.75) | 0.8504 | 0.1523 | 0.0142 | 0.2774 | -0.0021 | 0.2742 | ||
CAHR | 0.2047 | 0.0296 | 0.0025 | 0.1195 | -0.0014 | 0.1239 | ||
WCAHR | 0.1825 | 0.0291 | 0.0033 | 0.1162 | -0.0019 | 0.1249 |
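The Bias, Sd and MSE(\hat{\boldsymbol{\alpha}}) columns in the table above are standard Monte Carlo summaries of the coefficient estimates across replications. A minimal sketch of how such summaries are typically computed (the replication count, true coefficients and noise scale below are hypothetical, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true = np.array([1.0, -1.0])   # hypothetical true coefficients

# Hypothetical estimates from 500 Monte Carlo replications, 2 coefficients each
est = alpha_true + rng.normal(scale=0.1, size=(500, 2))

bias = est.mean(axis=0) - alpha_true                      # the "Bias" columns
sd = est.std(axis=0, ddof=1)                              # the "Sd" columns
mse = np.mean(np.sum((est - alpha_true) ** 2, axis=1))    # MSE(alpha_hat)
```

Under this (assumed) definition, MSE decomposes exactly into squared bias plus variance summed over the coefficients, which is why methods with small Bias and Sd entries also show small MSE(\hat{\boldsymbol{\alpha}}) in the tables.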
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | Bias(\hat{\alpha}_1) | Sd(\hat{\alpha}_1) | Bias(\hat{\alpha}_2) | Sd(\hat{\alpha}_2)
N(0, 1) | 200 | OLS | 0.2560 | 0.0237 | 0.0049 | 0.1105 | -0.0052 | 0.1069 |
LAD | 0.2988 | 0.0300 | 0.0054 | 0.1228 | -0.0009 | 0.1222 | ||
ESL | 0.2990 | 0.0316 | 0.0041 | 0.1272 | -0.0021 | 0.1240 | ||
H-ESL | 0.2655 | 0.0258 | 0.0060 | 0.1163 | -0.0052 | 0.1104 | ||
AHR(0.25) | 0.2988 | 0.1065 | -0.0860 | 0.2178 | -0.0965 | 0.2058 | ||
Huber | 0.2531 | 0.0234 | 0.0050 | 0.1099 | -0.0045 | 0.1062 | ||
AHR(0.75) | 0.5960 | 0.0911 | 0.0958 | 0.1880 | 0.0836 | 0.1990 | ||
CAHR | 0.2562 | 0.0236 | 0.0055 | 0.1099 | -0.0051 | 0.1070 | ||
WCAHR | 0.2519 | 0.0227 | 0.0051 | 0.1081 | -0.0045 | 0.1047 | ||
400 | OLS | 0.1056 | 0.0119 | -0.0029 | 0.0767 | -0.0002 | 0.0777 | |
LAD | 0.1269 | 0.0153 | -0.0027 | 0.0865 | 0.0008 | 0.0882 | ||
ESL | 0.1257 | 0.0157 | -0.0026 | 0.0884 | 0.0019 | 0.0888 | ||
H-ESL | 0.1061 | 0.0116 | -0.0036 | 0.0760 | 0.0018 | 0.0762 | ||
AHR(0.25) | 0.1269 | 0.0588 | -0.1020 | 0.1418 | -0.0889 | 0.1427 | ||
Huber | 0.1060 | 0.0117 | -0.0032 | 0.0764 | 0.0001 | 0.0766 | ||
AHR(0.75) | 0.2695 | 0.0581 | 0.0990 | 0.1428 | 0.0859 | 0.1434 | ||
CAHR | 0.1064 | 0.0120 | -0.0023 | 0.0769 | -0.0005 | 0.0777 | ||
WCAHR | 0.1046 | 0.0115 | -0.0030 | 0.0754 | 0.0002 | 0.0762 | ||
SN(0, 1, 15) | 200 | OLS | 0.3164 | 0.0357 | 0.0862 | 0.1070 | 0.0748 | 0.1057
LAD | 0.2417 | 0.0209 | 0.0695 | 0.0789 | 0.0586 | 0.0803 | ||
ESL | 0.3578 | 0.0306 | 0.0140 | 0.1234 | 0.0119 | 0.1228 | ||
H-ESL | 0.3390 | 0.0316 | 0.0308 | 0.1211 | 0.0304 | 0.1228 | ||
AHR(0.25) | 0.2417 | 0.0139 | 0.0555 | 0.0651 | 0.0485 | 0.0650 | ||
Huber | 0.2344 | 0.0209 | 0.0780 | 0.0711 | 0.0684 | 0.0713 | ||
AHR(0.75) | 0.2809 | 0.0427 | 0.1122 | 0.1026 | 0.0969 | 0.1010 | ||
CAHR | 0.2454 | 0.0211 | 0.0774 | 0.0725 | 0.0684 | 0.0718 | ||
WCAHR | 0.2155 | 0.0174 | 0.0727 | 0.0644 | 0.0623 | 0.0642 | ||
400 | OLS | 0.1180 | 0.0242 | 0.0859 | 0.0754 | 0.0776 | 0.0718 | |
LAD | 0.0776 | 0.0149 | 0.0683 | 0.0573 | 0.0622 | 0.0553 | ||
ESL | 0.1267 | 0.0143 | 0.0180 | 0.0841 | 0.0001 | 0.0833 | ||
H-ESL | 0.1199 | 0.0179 | 0.0527 | 0.0831 | 0.0393 | 0.0819 | ||
AHR(0.25) | 0.0776 | 0.0096 | 0.0532 | 0.0478 | 0.0493 | 0.0453 | ||
Huber | 0.0763 | 0.0163 | 0.0753 | 0.0511 | 0.0740 | 0.0500 | ||
AHR(0.75) | 0.1060 | 0.0330 | 0.1043 | 0.0689 | 0.1117 | 0.0700 | ||
CAHR | 0.0813 | 0.0158 | 0.0732 | 0.0491 | 0.0755 | 0.0483 | ||
WCAHR | 0.0672 | 0.0130 | 0.0678 | 0.0451 | 0.0668 | 0.0441 | ||
St(0, 1, 5, 3) | 200 | OLS | 0.5695 | 0.0947 | 0.1071 | 0.1861 | 0.1158 | 0.1876
LAD | 0.3044 | 0.0287 | 0.0704 | 0.0970 | 0.0738 | 0.0941 | ||
ESL | 0.4205 | 0.0400 | -0.0035 | 0.1404 | -0.0162 | 0.1415 | ||
H-ESL | 0.3927 | 0.0405 | 0.0125 | 0.1409 | 0.0038 | 0.1431 | ||
AHR(0.25) | 0.3044 | 0.0219 | 0.0477 | 0.0944 | 0.0466 | 0.0925 | ||
Huber | 0.3450 | 0.0355 | 0.0764 | 0.1077 | 0.0784 | 0.1093 | ||
AHR(0.75) | 2.6347 | 0.5711 | 0.1826 | 0.5139 | 0.1919 | 0.4867 | ||
CAHR | 0.4548 | 0.0423 | 0.0776 | 0.1227 | 0.0826 | 0.1200 | ||
WCAHR | 0.2863 | 0.0246 | 0.0667 | 0.0868 | 0.0703 | 0.0875 | ||
400 | OLS | 0.2663 | 0.0577 | 0.1027 | 0.1286 | 0.1196 | 0.1278 | |
LAD | 0.1092 | 0.0199 | 0.0732 | 0.0694 | 0.0738 | 0.0657 | ||
ESL | 0.1423 | 0.0190 | -0.0083 | 0.0973 | -0.0090 | 0.0968 | ||
H-ESL | 0.1429 | 0.0202 | 0.0184 | 0.0971 | 0.0177 | 0.1003 | ||
AHR(0.25) | 0.1092 | 0.0148 | 0.0456 | 0.0725 | 0.0507 | 0.0698 | ||
Huber | 0.1447 | 0.0237 | 0.0777 | 0.0762 | 0.0784 | 0.0755 | ||
AHR(0.75) | 2.4856 | 0.6166 | 0.2100 | 0.5037 | 0.1899 | 0.5317 | ||
CAHR | 0.1858 | 0.0264 | 0.0726 | 0.0833 | 0.0831 | 0.0852 | ||
WCAHR | 0.0932 | 0.0160 | 0.0645 | 0.0596 | 0.0694 | 0.0592 | ||
MN | 200 | OLS | 0.9780 | 0.1653 | -0.0191 | 0.2838 | 0.0090 | 0.2904 |
LAD | 0.3672 | 0.0409 | -0.0111 | 0.1436 | 0.0065 | 0.1420 | ||
ESL | 0.3644 | 0.0390 | -0.0092 | 0.1407 | 0.0091 | 0.1381 | ||
H-ESL | 0.3186 | 0.0321 | -0.0072 | 0.1260 | 0.0047 | 0.1271 | ||
AHR(0.25) | 0.3672 | 0.1579 | -0.1021 | 0.2647 | -0.0797 | 0.2665 | ||
Huber | 0.3700 | 0.0407 | -0.0134 | 0.1412 | 0.0106 | 0.1431 | ||
AHR(0.75) | 0.7890 | 0.1350 | 0.0802 | 0.2419 | 0.0878 | 0.2498 | ||
CAHR | 0.8570 | 0.0989 | -0.0114 | 0.2229 | 0.0077 | 0.2214 | ||
WCAHR | 0.3324 | 0.0352 | -0.0109 | 0.1326 | 0.0071 | 0.1322 | ||
400 | OLS | 0.4215 | 0.0781 | 0.0126 | 0.2002 | -0.0081 | 0.1944 | |
LAD | 0.1244 | 0.0189 | 0.0000 | 0.0980 | 0.0028 | 0.0964 | ||
ESL | 0.1247 | 0.0173 | 0.0004 | 0.0940 | 0.0028 | 0.0921 | ||
H-ESL | 0.1073 | 0.0131 | 0.0010 | 0.0808 | 0.0019 | 0.0808 | ||
AHR(0.25) | 0.1244 | 0.0759 | -0.0849 | 0.1701 | -0.0953 | 0.1752 | ||
Huber | 0.1222 | 0.0147 | 0.0054 | 0.0861 | -0.0011 | 0.0854 | ||
AHR(0.75) | 0.3546 | 0.0735 | 0.1014 | 0.1642 | 0.0908 | 0.1673 | ||
CAHR | 0.3496 | 0.0531 | 0.0116 | 0.1647 | -0.0069 | 0.1604 | ||
WCAHR | 0.1114 | 0.0148 | 0.0038 | 0.0860 | 0.0001 | 0.0857 | ||
Bimodal | 200 | OLS | 0.4259 | 0.0604 | -0.0118 | 0.1733 | 0.0020 | 0.1738
LAD | 0.7261 | 0.1323 | -0.0248 | 0.2573 | 0.0096 | 0.2558 | ||
ESL | 1.9087 | 0.3505 | -0.0283 | 0.4120 | 0.0078 | 0.4241 | ||
H-ESL | 0.4337 | 0.0614 | -0.0118 | 0.1758 | 0.0031 | 0.1743 | ||
AHR(0.25) | 0.7261 | 0.5567 | -0.1960 | 0.5100 | -0.1893 | 0.4716 | ||
Huber | 0.4839 | 0.0763 | -0.0180 | 0.1974 | 0.0064 | 0.1924 | ||
AHR(0.75) | 2.4130 | 0.4776 | 0.1823 | 0.4534 | 0.1885 | 0.4509 | ||
CAHR | 0.6745 | 0.1187 | -0.0223 | 0.2449 | 0.0087 | 0.2411 | ||
WCAHR | 0.4361 | 0.0642 | -0.0096 | 0.1797 | 0.0068 | 0.1783 | ||
400 | OLS | 0.1934 | 0.0295 | 0.0087 | 0.1251 | -0.0113 | 0.1169 | |
LAD | 0.3626 | 0.0578 | 0.0145 | 0.1715 | -0.0169 | 0.1670 | ||
ESL | 1.0701 | 0.1718 | 0.0200 | 0.2893 | -0.0187 | 0.2955 | ||
H-ESL | 0.1948 | 0.0299 | 0.0098 | 0.1256 | -0.0113 | 0.1178 | ||
AHR(0.25) | 0.3626 | 0.3363 | -0.2023 | 0.3440 | -0.2239 | 0.3562 | ||
Huber | 0.2316 | 0.0383 | 0.0093 | 0.1438 | -0.0130 | 0.1320 | ||
AHR(0.75) | 1.2586 | 0.3302 | 0.2280 | 0.3504 | 0.1848 | 0.3484 | ||
CAHR | 0.3292 | 0.0506 | 0.0163 | 0.1645 | -0.0174 | 0.1516 | ||
WCAHR | 0.2058 | 0.0313 | 0.0119 | 0.1298 | -0.0092 | 0.1193 |
Errors | n | K_1 | MISE | MSE(\hat{\boldsymbol{\alpha}}) | Bias(\hat{\alpha}_1) | Sd(\hat{\alpha}_1) | Bias(\hat{\alpha}_2) | Sd(\hat{\alpha}_2)
SN(0, 1, 15) | 200 | 4 | 0.2321 | 0.0088 | -0.0034 | 0.0663 | 0.0044 | 0.0665
6 | 0.2295 | 0.0084 | -0.0036 | 0.0646 | 0.0042 | 0.0645
8 | 0.2279 | 0.0081 | -0.0038 | 0.0635 | 0.0042 | 0.0633
10 | 0.2269 | 0.0079 | -0.0040 | 0.0629 | 0.0041 | 0.0626
12 | 0.2264 | 0.0078 | -0.0041 | 0.0627 | 0.0041 | 0.0621
14 | 0.2262 | 0.0078 | -0.0043 | 0.0627 | 0.0041 | 0.0620
400 | 4 | 0.0626 | 0.0039 | 0.0071 | 0.0450 | -0.0054 | 0.0425
6 | 0.0611 | 0.0037 | 0.0068 | 0.0440 | -0.0050 | 0.0415
8 | 0.0601 | 0.0036 | 0.0065 | 0.0434 | -0.0046 | 0.0410
10 | 0.0595 | 0.0036 | 0.0063 | 0.0432 | -0.0043 | 0.0409
12 | 0.0591 | 0.0036 | 0.0060 | 0.0432 | -0.0039 | 0.0409
14 | 0.0589 | 0.0036 | 0.0059 | 0.0433 | -0.0038 | 0.0411
St(0, 1, 5, 3) | 200 | 4 | 0.2445 | 0.0178 | -0.0027 | 0.0961 | -0.0023 | 0.0927
6 | 0.2388 | 0.0169 | -0.0025 | 0.0938 | -0.0024 | 0.0900
8 | 0.2349 | 0.0163 | -0.0025 | 0.0923 | -0.0023 | 0.0880
10 | 0.2321 | 0.0158 | -0.0025 | 0.0911 | -0.0023 | 0.0866
12 | 0.2301 | 0.0155 | -0.0024 | 0.0903 | -0.0022 | 0.0856
14 | 0.2287 | 0.0153 | -0.0023 | 0.0898 | -0.0022 | 0.0849
400 | 4 | 0.0925 | 0.0090 | -0.0079 | 0.0678 | 0.0076 | 0.0652
6 | 0.0899 | 0.0085 | -0.0078 | 0.0659 | 0.0076 | 0.0637
8 | 0.0880 | 0.0082 | -0.0076 | 0.0646 | 0.0076 | 0.0626
10 | 0.0867 | 0.0080 | -0.0076 | 0.0636 | 0.0076 | 0.0619
12 | 0.0857 | 0.0078 | -0.0075 | 0.0629 | 0.0077 | 0.0615
14 | 0.0850 | 0.0077 | -0.0074 | 0.0623 | 0.0077 | 0.0611
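The AHR(\tau) rows correspond to a single asymmetry level (\tau = 0.25 or 0.75), while the composite variants CAHR and WCAHR combine several levels. A minimal sketch of one common asymmetric (quantile-weighted) Huber loss, stated as an assumption for illustration rather than the paper's exact definition (the tuning constant c = 1.345 is likewise assumed):

```python
import numpy as np

def asym_huber_loss(u, tau, c=1.345):
    """Asymmetric (quantile-weighted) Huber loss: the Huber core is
    weighted by tau for positive residuals and (1 - tau) for negative
    ones, so tau = 0.5 recovers a symmetric (rescaled) Huber loss."""
    u = np.asarray(u, dtype=float)
    core = np.where(np.abs(u) <= c, 0.5 * u ** 2, c * (np.abs(u) - 0.5 * c))
    weight = np.where(u >= 0, tau, 1.0 - tau)
    return weight * core
```

Under this form, \tau = 0.25 penalizes negative residuals more heavily and \tau = 0.75 does the opposite, which is consistent with the single-\tau AHR rows degrading under skewed errors while the composite estimators remain competitive.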