
Functional data analysis (FDA) (e.g., [1]) has drawn considerable attention in recent years, owing to its great flexibility and wide applicability in handling high-dimensional data sets. A fundamental and important tool in FDA is the functional linear model.
There is a large literature on the inference of functional linear models and their extensions; see, among others, [2,3,4] for earlier works and [5,6,7,8,9,10] for recent ones. As is well known, in the estimation of regression models, the choice of loss function is essential for obtaining a highly efficient and robust estimator. Most early works employed the square loss function and obtained ordinary least squares (OLS) estimators. In recent years, many other loss functions have been considered in the estimation of functional linear models and their extensions. Kato [6] and Tang and Cheng [11] studied quantile regression (QR) for functional linear models and partial functional linear models, respectively. Yu et al. [12] proposed a robust exponential squared loss (ESL) estimation procedure and established the asymptotic properties of the proposed estimators. Cai et al. [13] introduced a new robust estimation procedure for the partial functional linear model by employing a modified Huber function whose tail function is replaced by the exponential squared loss (H-ESL).
It is well known that the square loss function reflects the features of the entire distribution, whereas the QR, ESL and H-ESL methods focus on bounding the influence of outliers when the data are heavy-tailed. Thus, a method that can both reflect distributional features and bound outliers effectively is highly desirable in data analysis. We note that, in the context of principal component analysis (PCA), Lim and Oh [14] proposed a new approach using a weighted linear combination of asymmetric Huber loss functions to capture the distributional features of the data while remaining robust to outliers. The asymmetric Huber loss functions are defined as
\begin{equation} \rho_{\tau}(u) = \begin{cases} (\tau-1)(u+0.5c^*), & u < -c^*,\\ 0.5(1-\tau)u^2/c^*, & -c^*\leq u < 0,\\ 0.5\tau u^2/c^*, & 0\leq u < c^*,\\ \tau(u-0.5c^*), & c^*\leq u, \end{cases} \end{equation} | (1.1) |
with c^* = 1.345 , and \tau\in(0, 1) a parameter controlling the degree of skewness. The function \rho_{\tau}(\cdot) is equivalent to the Huber loss function (see [15]) when \tau = 0.5 , and it approaches the quantile (check) loss function as c^* tends to zero.
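For concreteness, here is a minimal NumPy sketch of \rho_\tau and its derivative \psi_\tau = \rho_\tau' (both reappear in the algorithm of Section 2); the function names asym_huber_loss and asym_huber_psi are ours, not from any established library.

```python
import numpy as np

C_STAR = 1.345  # the constant c* of Eq (1.1)

def asym_huber_loss(u, tau, c=C_STAR):
    """Asymmetric Huber loss rho_tau(u): quadratic on [-c, c), linear tails,
    with tau in (0, 1) controlling the degree of skewness."""
    u = np.asarray(u, dtype=float)
    conds = [u < -c, (u >= -c) & (u < 0), (u >= 0) & (u < c), u >= c]
    vals = [(tau - 1.0) * (u + 0.5 * c),
            0.5 * (1.0 - tau) * u ** 2 / c,
            0.5 * tau * u ** 2 / c,
            tau * (u - 0.5 * c)]
    return np.select(conds, vals)

def asym_huber_psi(u, tau, c=C_STAR):
    """Derivative psi_tau = rho_tau'; it is bounded, which yields the robustness."""
    u = np.asarray(u, dtype=float)
    conds = [u < -c, (u >= -c) & (u < 0), (u >= 0) & (u < c), u >= c]
    vals = [(tau - 1.0) * np.ones_like(u),
            (1.0 - tau) * u / c,
            tau * u / c,
            tau * np.ones_like(u)]
    return np.select(conds, vals)
```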
Motivated by the appealing characteristics of the asymmetric Huber functions, in this paper we first develop a new asymmetric Huber regression (AHR) estimation procedure, based on functional principal component analysis, to analyze skewed data in the partial functional linear model. To improve on the accuracy of a single AHR estimate, we then develop a weighted composite asymmetric Huber regression (WCAHR) estimation procedure by combining the strength across multiple asymmetric Huber regression models. A practical algorithm for the WCAHR estimators based on pseudo data is developed to implement the estimation method. The asymptotic properties of the proposed estimators are also derived, and extensive simulations are carried out to show the superiority of the proposed estimators.
Finally, we apply the proposed methods to two data sets. In the first example, we analyze the electricity data. Figure 1 presents the estimated density of the residuals and the residual diagnostic plot obtained by fitting model (4.1) in Section 4.1 via the OLS method. The distribution of the residuals is skewed and bimodal, and there are some outliers in the data set. Given that WCAHR can effectively handle such data, we use the proposed method to analyze this data set. The second example, in Section 4.2, considers the Tecator data set. Similarly, Figure 2 presents the density of the residuals and the residual diagnostic plot obtained by fitting model (4.2) via the OLS method, which shows that the distribution of the residuals is skewed and far from normal. WCAHR regression is therefore well suited to analyzing this data set on account of its appealing features.
To our knowledge, this is the first work to discuss asymmetric Huber regression under the functional model framework. The proposed WCAHR method is robust to outliers while reflecting the relationship between the explanatory variables and the entire distribution of the response. It retains the advantages of the asymmetric Huber loss in analyzing skewed data, and the obtained estimators rely on the shape of the entire distribution rather than merely on the data near a specific quantile or skewness level, thereby avoiding the limitations of the single-level methods. These advantages are supported by both the theoretical conclusions and the numerical results. The associated algorithm is data-adaptive, capable of reflecting the distributional features of the data without prior information, and robust to outliers.
The rest of this paper is organized as follows. In Section 2, we formally describe the estimation procedures and develop a new algorithm; we also establish the asymptotic behavior of the proposed estimators and list the technical assumptions needed in the theoretical analysis. In Section 3, the finite sample performance of the proposed estimators is evaluated through simulations. Section 4 illustrates the use of the proposed methods in the analyses of the electricity data and the Tecator data. Brief conclusions are drawn in Section 5. All technical proofs are provided in Appendix A.
Let Y be a real-valued random variable and \mathit{\boldsymbol{Z}} = (Z_1, \cdots, Z_p)^{\rm T} a p -dimensional random vector with zero mean and finite second moment. Let \{X(t): t\in T\} be a zero mean, second-order stochastic process with sample paths in L^2(T) , the space of square integrable functions on T with inner product \langle x, y\rangle = \int_{T}x(t)y(t)dt and norm \|x\| = \langle x, x\rangle^{1/2} , where T is a bounded closed interval. Without loss of generality, we take T = [0, 1] throughout the paper. The dependence between Y and (X, \mathit{\boldsymbol{Z}}) is expressed by the following partial functional linear regression,
\begin{equation} Y = \mathit{\boldsymbol{Z}}^{\rm T}\mathit{\boldsymbol{\alpha}}+\int_0^1\beta(t)X(t)dt+e. \end{equation} | (2.1) |
Here, the random error e is assumed to be independent of \mathit{\boldsymbol{Z}} and X , \mathit{\boldsymbol{\alpha}} = (\alpha_1, \cdots, \alpha_p)^{\rm T} is an unknown p -dimensional parameter vector, and the slope function \beta(\cdot) is an unknown square integrable function on [0, 1].
Let (\mathit{\boldsymbol{Z}}_i, X_i(\cdot), Y_i), i = 1, \cdots, n , be independent observations generated by model (2.1), and let e_i = Y_i-\mathit{\boldsymbol{Z}}_i^{\rm T}\mathit{\boldsymbol{\alpha}}-\int_0^1\beta(t)X_i(t)dt, i = 1, \cdots, n . The covariance function of X(\cdot) and its empirical counterpart are defined as c_X(t, s) = {\rm Cov}(X(t), X(s)) and \hat{c}_X(t, s) = \frac{1}{n}\sum_{i = 1}^nX_i(t)X_i(s) , respectively. By Mercer's theorem, c_X and \hat{c}_X can be represented as
c_X(t, s) = \sum\limits_{i = 1}^{\infty}\lambda_iv_i(t)v_i(s), \quad \hat{c}_X(t, s) = \sum\limits_{i = 1}^{\infty}\hat{\lambda}_i\hat{v}_i(t)\hat{v}_i(s),
where \lambda_1 > \lambda_2 > \cdots > 0 and \hat{\lambda}_1\geq\hat{\lambda}_2\geq\cdots\geq\hat{\lambda}_{n+1} = \cdots = 0 are the ordered eigenvalue sequences of the covariance operator C_X and of its estimator \hat{C}_X with kernels c_X and \hat{c}_X , which are defined by C_Xf(s) = \int_0^1c_X(t, s)f(t)dt and \hat{C}_Xf(s) = \int_0^1\hat{c}_X(t, s)f(t)dt with C_X assumed to be strictly positive, and \{v_i(\cdot)\} and \{\hat{v}_i(\cdot)\} are the corresponding orthonormal eigenfunction sequences. Besides, (\hat{v}_i(\cdot), \hat{\lambda}_i) is treated as an estimator of (v_i(\cdot), \lambda_i) .
Similarly, we can define c_{YX}(\cdot) = {\rm Cov}(Y, X(\cdot)) , c_{\mathit{\boldsymbol{Z}}} = {\rm Var}(\mathit{\boldsymbol{Z}}) = {\rm E}[\mathit{\boldsymbol{Z}}\mathit{\boldsymbol{Z}}^{\rm T}] , c_{\mathit{\boldsymbol{Z}}Y} = {\rm Cov}(\mathit{\boldsymbol{Z}}, Y) , and c_{\mathit{\boldsymbol{Z}}X}(\cdot) = {\rm Cov}(\mathit{\boldsymbol{Z}}, X(\cdot)) = (c_{Z_1X}(\cdot), \cdots, c_{Z_pX}(\cdot))^{\rm T} . The corresponding empirical counterparts defined below can be used as their estimators,
\hat{c}_{YX} = \frac{1}{n}\sum\limits_{i = 1}^nY_iX_i, \quad \hat{c}_{\mathit{\boldsymbol{Z}}} = \frac{1}{n}\sum\limits_{i = 1}^n\mathit{\boldsymbol{Z}}_i\mathit{\boldsymbol{Z}}_i^{\rm T}, \quad \hat{c}_{\mathit{\boldsymbol{Z}}Y} = \frac{1}{n}\sum\limits_{i = 1}^n\mathit{\boldsymbol{Z}}_iY_i, \quad \hat{c}_{\mathit{\boldsymbol{Z}}X} = \frac{1}{n}\sum\limits_{i = 1}^n\mathit{\boldsymbol{Z}}_iX_i.
By the Karhunen-Loève representation, X_i(t) and \beta(t) can be expanded as
\beta(t) = \sum\limits_{j = 1}^{\infty}\gamma_jv_j(t), \quad X_i(t) = \sum\limits_{j = 1}^{\infty}\xi_{ij}v_j(t), \quad i = 1, \cdots, n, | (2.2) |
where \gamma_j = \langle\beta(\cdot), v_j(\cdot)\rangle = \int_0^1\beta(t)v_j(t)dt and \xi_{ij} = \langle X_i(\cdot), v_j(\cdot)\rangle .
Owing to the orthonormality of \{v_1(\cdot), \ldots, v_m(\cdot)\} and Eq (2.2), model (2.1) can be transformed into
Y_i = \mathit{\boldsymbol{Z}}_i^{\rm T}\mathit{\boldsymbol{\alpha}}+\sum\limits_{j = 1}^m\gamma_j\xi_{ij}+\tilde{e}_i = \mathit{\boldsymbol{Z}}_i^{\rm T}\mathit{\boldsymbol{\alpha}}+\mathit{\boldsymbol{U}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}+\tilde{e}_i, \quad i = 1, \cdots, n,
where \mathit{\boldsymbol{U}}_i = (\xi_{i1}, \ldots, \xi_{im})^{\rm T} , \mathit{\boldsymbol{\gamma}} = (\gamma_1, \ldots, \gamma_m)^{\rm T} , \tilde{e}_i = \sum_{j = m+1}^{\infty}\gamma_j\xi_{ij}+e_i , and the tuning parameter m may increase with the sample size n .
Replacing v_j(\cdot) with its estimator \hat{v}_j(\cdot) , the \tau th AHR estimators \bar{\mathit{\boldsymbol{\alpha}}} and \bar{\beta}(t) = \sum_{j = 1}^{m}\bar{\gamma}_j\hat{v}_j(t) can be obtained by minimizing the following loss function over b_{\tau} , \mathit{\boldsymbol{\alpha}} and \mathit{\boldsymbol{\gamma}} :
\begin{equation*} (\bar{b}, \bar{\mathit{\boldsymbol{\alpha}}}, \bar{\mathit{\boldsymbol{\gamma}}})\triangleq\underset{(b_{\tau}, \mathit{\boldsymbol{\alpha}}, \mathit{\boldsymbol{\gamma}})}{\operatorname{argmin}}\; \sum\limits_{i = 1}^{n}\rho_{\tau} \Big(Y_i-b_{\tau}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}\Big), \end{equation*}
where the asymmetric Huber loss function \rho_{\tau}(u) is defined in ( 1.1 ), and \hat{\mathit{\boldsymbol{U}}}_i = (\hat{\xi}_{i1}, \cdots, \hat{\xi}_{im})^{\rm T} with \hat{\xi}_{ij} = \langle X_i(\cdot), \hat{v}_j(\cdot)\rangle . Here the true value of b_{\tau} is defined as the solution that minimizes the loss function {E}\left\{\rho_{\tau}(e-\theta) \right\} over \theta \in \mathbb{R} , and we call it the \tau th number of e with respect to (1.1).
Remark 1. In model (2.1), we assume the intercept term is zero. In fact, if there is an intercept, we may absorb it into the distribution of e . The main purpose of model (2.1) is thus to capture the contribution of the predictors to the response, and a zero mean assumption on e is not needed.
Noting that the regression coefficients are the same across the asymmetric Huber regression models at different skewness levels, and inspired by [14] and [16], we combine the strength across multiple asymmetric Huber regression models and propose the WCAHR method. Specifically, the WCAHR estimators \hat{\mathit{\boldsymbol{\alpha}}} and \hat{\beta}(t) = \sum_{j = 1}^{m}\hat{\gamma}_j\hat{v}_j(t) can be obtained by minimizing the following loss function with respect to (\mathit{\boldsymbol{b}}, \mathit{\boldsymbol{\alpha}}, \mathit{\boldsymbol{\gamma}}) :
\begin{equation*} \label{2.1} Q_n(\mathit{\boldsymbol{b}}, \mathit{\boldsymbol{\alpha}}, \mathit{\boldsymbol{\gamma}})\triangleq\sum\limits_{i = 1}^{n}\sum\limits_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}\Big), \end{equation*} |
where \left\{\tau_{k}\right\}_{k = 1}^{K} are predetermined levels over (0, 1), b_k = b_{\tau_k} for brevity, \mathit{\boldsymbol{b}} = \left(b_1, \cdots, b_K\right)^{\rm T} , and the weights w_{1}, \ldots, w_{K} , which control the contribution of each loss function, are positive constants satisfying \sum_{k} w_{k} = 1 .
Remark 2. Generally speaking, for a given K , one can choose the equidistant levels \tau_{k} = k /(K+1) for k = 1, 2, \ldots, K , as is often done in composite quantile regression. Although one can also apply data-adaptive methods, such as cross validation, to select K , we do not pursue this topic here. As for the weights, we consider two choices. The first is the equal weights w_{1} = \cdots = w_{K} = 1/K ; the resulting estimators are called composite asymmetric Huber regression (CAHR) estimators. As the second choice in this study, to better describe the distributional information of the data, we consider the K -dimensional weight vector (w_{1}, \cdots, w_{K}) = \big({f}({b}_{01}), \cdots, {f}({b}_{0K})\big)\Big/\sum_{{k} = 1}^K{f}({b}_{0k}) , where \mathit{\boldsymbol{b}}_0 = (b_{01}, \cdots, b_{0K}) is the true value vector of \mathit{\boldsymbol{b}} , and f(\cdot) is the density function of the random error. In practice, we estimate f(\cdot) by the kernel density estimation method.
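As an illustration of the second (adaptive) choice, the sketch below replaces f(\cdot) by a Gaussian kernel density estimate built on residuals and evaluates it at estimates of the b_{0k} ; the inputs residuals and b_hat are hypothetical placeholders for quantities produced by a fitted model.

```python
import numpy as np
from scipy.stats import gaussian_kde

def adaptive_weights(residuals, b_hat):
    """Data-adaptive weights w_k proportional to f(b_{0k}) (Remark 2),
    with f replaced by a kernel density estimate of the residuals."""
    f_hat = gaussian_kde(residuals)      # Gaussian KDE of the error density
    fk = f_hat(np.asarray(b_hat))        # estimated density at each b_{0k}
    return fk / fk.sum()                 # normalize so the weights sum to 1
```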
Denote \mathcal{S} = \big\{(Y_i, \mathit{\boldsymbol{Z}}_i, X_i(\cdot)):1\leq i\leq n\big\} . Given a new copy (\mathit{\boldsymbol{Z}}_{n+1}, {X_{n+1}(\cdot)}) of the predictor variables, once the estimates of \mathit{\boldsymbol{\alpha}} and \beta(t) are obtained, the mean squared prediction error (MSPE) can be computed. Taking asymmetric Huber regression as an example,
\begin{equation*} \begin{split} &{\rm{MSPE}}_{AHR}\\ = &{E\Bigg[\bigg\{\left(\bar{b}+\mathit{\boldsymbol{Z}}_{n+1}^{\rm T}\bar{\mathit{\boldsymbol{\alpha}}}+\int^{1}_{0}\bar{\beta}(t)X_{n+1}(t)dt\right)-\left(b_\tau+\mathit{\boldsymbol{Z}}_{n+1}^{\rm T}{\mathit{\boldsymbol{\alpha}}_0}+\int^{1}_{0}{\beta}_0(t)X_{n+1}(t)dt\right)\bigg\}^2\Bigg|\mathcal{S}\Bigg]}, \end{split} \end{equation*} |
where \mathit{\boldsymbol{\alpha}}_0 and \beta_0(t) are the true values of \mathit{\boldsymbol{\alpha}} and \beta(t) , respectively. The MSPEs of CAHR and WCAHR are defined analogously and are denoted by {\rm{MSPE}}_{CAHR} and {\rm{MSPE}}_{WCAHR} , respectively.
Note that the minimization problems for the AHR and CAHR estimators are special cases of the WCAHR method, so we only present the practical algorithm for WCAHR based on pseudo data. A similar argument was used in [14] to implement data-adaptive principal component analysis. The algorithm is described as follows.
Step 1. Choose initial estimators \hat{\boldsymbol{\alpha}}^{(0)} and \hat{\boldsymbol\gamma}^{(0)} of \boldsymbol{\alpha}_{0} and \boldsymbol{\gamma}_{0} , respectively.
Step 2. For L = 0, 1, \ldots , iterate the following three steps until convergence.
(a) Compute the residuals \hat{e}_{i}^{(L)} = Y_{i}-{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T} \hat{\boldsymbol\alpha}^{(L)}-\hat{{\mathit{\boldsymbol{U}}}}_{i}^{\rm T} \hat{\boldsymbol\gamma}^{(L)} .

(b) Calculate the empirical pseudo data vector \mathit{\boldsymbol{G}}^{(L)} = ({G}_1^{(L)}, \cdots, {G}_n^{(L)})^{\rm T} element-wise as G_{i}^{(L)} = {{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T} \hat{\boldsymbol\alpha}^{(L)}+\hat{{\mathit{\boldsymbol{U}}}}_{i}^{\rm T} \hat{\boldsymbol\gamma}^{(L)}+ \sum_{k = 1}^{K} w_{k} \psi_{\tau_{k}}\big(\hat{e}_{i}^{(L)}-\hat{b}_k^{(L)}\big), for given weights \left(w_{1}, \cdots, w_{K}\right) and \hat{b}_k^{(L)} = {\rm argmin}_\mu\sum_{i = 1}^n \rho_{\tau_k}\left(\hat{e}_{i}^{(L)}-\mu\right) at each k . Here \psi_{\tau_k}(u) = \rho'_{\tau_k}(u) = ({\tau_k}-1)I(u < -c^*)+\frac{(1-{\tau_k})}{c^*}uI(-c^*\leq u < 0)+\frac{\tau_k}{c^*}uI(0\leq u < c^*)+\tau_kI(u\geq c^*) .

(c) Obtain the next iterates \hat{\boldsymbol\alpha}^{(L+1)} and \hat{\boldsymbol\gamma}^{(L+1)} by applying the OLS method to the response \tilde{Y}_{i} = G_{i}^{(L)} and covariates {{\mathit{\boldsymbol{Z}}}}_{i}, \hat{{\mathit{\boldsymbol{U}}}}_{i} .
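A compact Python sketch of Steps 1 and 2 follows, assuming the functions asym_huber_loss and asym_huber_psi from the sketch after Eq (1.1) are in scope; scipy.optimize.minimize_scalar stands in for the one-dimensional argmin defining \hat{b}_k^{(L)} . This is a simplified illustration under those assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def wcahr_fit(y, Z, U_hat, taus, weights, n_iter=100, tol=1e-8):
    """Iterative pseudo-data algorithm for WCAHR (Steps 1-2 above).
    Z: n x p scalar covariates; U_hat: n x m estimated FPCA scores."""
    D = np.column_stack([Z, U_hat])               # covariates (Z_i, U_hat_i)
    theta = np.linalg.lstsq(D, y, rcond=None)[0]  # Step 1: OLS initial values
    for _ in range(n_iter):
        resid = y - D @ theta                     # step (a): residuals e_hat_i
        # b_hat_k = argmin_mu sum_i rho_{tau_k}(e_hat_i - mu), one per level
        b = [minimize_scalar(lambda mu, t=t: asym_huber_loss(resid - mu, t).sum()).x
             for t in taus]
        # step (b): pseudo data G_i built from the weighted psi-scores
        G = D @ theta + sum(w * asym_huber_psi(resid - bk, t)
                            for w, t, bk in zip(weights, taus, b))
        theta_new = np.linalg.lstsq(D, G, rcond=None)[0]  # step (c): OLS on pseudo data
        if np.max(np.abs(theta_new - theta)) < tol:
            theta = theta_new
            break
        theta = theta_new
    p = Z.shape[1]
    return theta[:p], theta[p:]                   # estimates of alpha and gamma
```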
In this section, we establish the asymptotic properties of the estimators defined in the previous section. We first introduce some notation. Suppose \mathit{\boldsymbol{\gamma}}_0 = (\gamma_{01}, \ldots, \gamma_{0m})^{\rm T} is the true value of \mathit{\boldsymbol{\gamma}} , and let F(\cdot) denote the cumulative distribution function of the random error. In addition, the notation \|\cdot\| represents the \mathcal{L}^2 norm of a function or the Euclidean norm of a vector, and a_n\sim b_n indicates that a_n/b_n is bounded away from zero and infinity as n \rightarrow \infty . For simplicity, in this paper, C represents a generic positive constant whose value may change from line to line. Next, we list the technical assumptions needed to obtain the asymptotic properties.
C1. The random process X(\cdot) and score \xi_j = \langle X(\cdot), v_j(\cdot)\rangle satisfy the following condition: {\rm {E}}\|X(\cdot)\|^4 < \infty and {\rm{E}}[\xi_j^4]\leq C\lambda_j^2, \ j \geq 1 .
C2. The eigenvalues of C_X and the score coefficients fulfil the conditions below:
{(a)} There exist constants C and a > 1 such that C^{-1}j^{-a}\leq\lambda_j\leq Cj^{-a}, \lambda_j-\lambda_{j+1}\geq Cj^{-a-1}, \ j \geq 1 ;
{(b)} There exist constants C and b > a/2+1 such that |\gamma_j|\leq Cj^{-b}, \ j \geq 1 .
C3. The random vector \mathit{\boldsymbol{Z}} satisfies {\rm {E}}\|\mathit{\boldsymbol{Z}}\|^4 < \infty .
C4. There is some constant C such that \mid \langle {c}_{ {Z}_lX}, v_j\rangle \mid \leq Cj^{-(a+b)}, \; l = 1, \cdots, p, \ j \geq 1 .
C5. Let \eta_{il} = {Z}_{il}-\langle g_l, X_i \rangle with g_l = \sum\limits_{j = 1}^\infty\lambda_j^{-1}\langle { c}_{{Z}_lX}, v_j\rangle v_j , and \boldsymbol \eta_i = (\eta_{i1}, \cdots, \eta_{ip})^{\rm T} , then {\boldsymbol\eta}_{1}, \cdots, {\boldsymbol\eta}_{n} are i.i.d random vectors. We further assume that {{\rm {E}}[{\boldsymbol\eta}_{i}|{X_i}] = 0, \; \; {\rm {E}}[{\boldsymbol \eta_i}{\boldsymbol \eta_i}^{\rm T}| X_i]{\overset{a.s.}{ = }}\mathit{\boldsymbol{\Sigma}}} is a constant positive definite matrix.
C6. b_\tau is the unique solution of {\rm {E}}\left[\rho_{\tau}^{\prime}\left(e-b_{\tau}\right)\right] = 0 , and h_{\tau}(b_{\tau}) = (1-\tau)f(b_{\tau}-c^*)+\frac{1-\tau}{c^*}(F(b_{\tau})-F(b_{\tau}-c^*)) +\frac{\tau}{c^*}(F(b_{\tau}+c^*)-F(b_{\tau}))+\tau f(b_{\tau}+c^*) is continuous at b_{\tau} . Furthermore, we suppose that h_{\tau}(b_{\tau}) > 0 .
C6'. b_{0k} is the unique solution of E\left[\rho_{\tau_{k}}^{\prime}\left(e_i-b_{0 k}\right)\right] = 0 , h_{\tau_k}(b_{0k}) = (1-\tau_k)f(b_{0k}-c^*)+\frac{1-\tau_k}{c^*}(F(b_{0k})-F(b_{0k}-c^*)) +\frac{\tau_k}{c^*}(F(b_{0k}+c^*)-F(b_{0k}))+\tau_kf(b_{0k}+c^*) is continuous at b_{0k} , k = 1, \cdots, K . Furthermore, there exist some positive constants C_1, \; C_2 such that 0 < C_1\leq\underset{1\leq k\leq K}\min h_{\tau_k}(b_{0k})\leq\underset{1\leq k\leq K}\max h_{\tau_k}(b_{0k})\leq C_2 < +\infty .
C1 is a commonly used condition for establishing the consistency of the empirical covariance operator of X in functional linear and partial functional regression models. For example, it has been adopted in [3,17,18] (mean regression), [6,11] (quantile regression), and [12,13] (robust estimation procedures), among others. C2(a) is used to identify the slope function \beta(t) by preventing the spacings between eigenvalues from being too small, and C2(b) ensures that the slope function \beta(t) is sufficiently smooth. C3–C5 are needed to deal with the vector-type covariate in model ( 2.1 ) (see [19]). More concretely, C3 governs the asymptotic behavior of \hat{{c}}_{{{\mathit{\boldsymbol{Z}}}}X} and \hat{{c}}_{{{\mathit{\boldsymbol{Z}}}}} . C4 ensures that the effect of truncation on the estimator of {\beta}(\cdot) is sufficiently small. C5 is a commonly used condition in the literature on partial functional regression models (see, for example, [4,20,21]). The assumptions on {\rm {E}}[{\boldsymbol\eta}_{i}|{X_i}] and {\rm {E}}[{\boldsymbol \eta_i}{\boldsymbol \eta_i}^{\rm T}|X_i] are slightly strong, and are used to fix the identifiability of {\boldsymbol \alpha} and simplify the proofs of the theorems. It is easy to see that \langle g_l, X_i \rangle is the projection of Z_l onto X_i , and {{\rm{E}}}({\boldsymbol \eta_i}) = 0 , {\rm Cov}\left(\eta_{il}, \langle g_l, X_i \rangle\right) = 0 , {\rm E}\|\boldsymbol \eta_i\|^4 < \infty even without these assumptions. These facts can be used to obtain results similar to the following theorems with more complicated techniques (see, for example, [6]) and more conditions ensuring identifiability. Other types of conditions on {\boldsymbol \eta_i} can be found in [11,22]. C6 and C6 ^\prime are specific to the AHR and WCAHR (CAHR) cases, respectively, and are primarily used to establish the asymptotic behavior of our estimators.
The following theorems discuss the convergence rate of the estimated {\beta}(\cdot) , the asymptotic normality of the estimated {\boldsymbol\alpha} and the convergence rate of the mean squared prediction error. To obtain this, we further assume (\mathit{\boldsymbol{Z}}_{n+1}, {X_{n+1}(\cdot)}) is independent of \mathcal{S} in this paper.
The next theorem establishes the large sample properties of the AHR estimators.
Theorem 1. Suppose that the Conditions C1–C6 are satisfied, and the tuning parameter m\sim n^{1/(a+2b)} , then
\begin{equation*} \begin{split} & \|\bar{\beta}(\cdot)-\beta_0(\cdot)\|^2 = O_p(n^{-{\frac{2b-1}{a+2b}}}),\\ & \sqrt{n}(\bar{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0) \stackrel{d}{\longrightarrow} N\left(0, \frac{V}{\left\{h_{\tau}\left(b_{\tau}\right)\right\}^2} \mathit{\boldsymbol{\Sigma}}^{-1}\right),\\ &\mathit{MSPE}_{AHR} = O_p(n^{-{\frac{a+2b-1}{a+2b}}}), \end{split} \end{equation*} |
where \stackrel{d}{\longrightarrow} represents convergence in distribution, and V = \mathit{E}\left[\psi_{\tau}^2\left(e-b_{\tau}\right)\right] with \psi_{\tau}(u) = \rho'_{\tau}(u) = ({\tau}-1)I(u < -c^*)+\frac{(1-{\tau})}{c^*}uI(-c^*\leq u < 0)+\frac{\tau}{c^*}uI(0\leq u < c^*)+\tau I(u\geq c^*) .
The asymptotic properties of the proposed WCAHR estimators are presented in the following theorem.
Theorem 2. Under the Conditions C1–C5 and C6 ' , if the tuning parameter is taken as m\sim n^{1/(a+2b)} , then
\begin{equation*} \begin{split} \label{alp} & \|\hat{\beta}(\cdot)-\beta_0(\cdot)\|^2 = O_p(n^{-{\frac{2b-1}{a+2b}}}),\\ &\sqrt{n}(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0) \stackrel{d}{\longrightarrow} N\left(0, \frac{{{\mathit{\boldsymbol{w}}}}^{T} {{\mathit{\boldsymbol{V}}}} {{\mathit{\boldsymbol{w}}}}}{\left\{\sum\limits_{k = 1}^{K}w_{k}h_{\tau_{k}}\left(b_{0 k}\right)\right\}^2} {\boldsymbol\Sigma}^{-1}\right),\\ &\mathit{MSPE}_{WCAHR} = O_p(n^{-{\frac{a+2b-1}{a+2b}}}), \end{split} \end{equation*} |
where {{\mathit{\boldsymbol{w}}}} = \left(w_{1}, \ldots, w_{K}\right)^{T} and \mathit{\boldsymbol{V}} = \left({V_{kl}}\right)_{1 \leq k, \; l \leq K} , here V_{k l} = \mathit{E}\left[\psi_{\tau_{k}}\left(e-b_{0 k}\right)\psi_{\tau_{l}}\left(e-b_{0 l}\right)\right] with 1 \leq k, l \leq K .
Remark 3. The results show that the slope function estimator has the same convergence rate as the estimators in [6] and [4], which is optimal in the minimax sense. Note that, as in quantile regression, no moment condition on the error term is needed here. In addition, the rate attained in predicting {Y}_{n+1} is faster than the rate attained in estimating \beta(t) . To trace the root, take MSPE_{AHR} as an example: the gain comes from the integral operator providing additional smoothness when computing \int^{1}_{0}\bar{\beta}(t)X_{n+1}(t)dt from \bar\beta(t) .
Remark 4. If all w_{k} s are equal, then Theorem 2 reduces to the asymptotic properties of the CAHR estimators. Taking \tau_{1} = \tau , it is easy to see that there is a weight vector {{\mathit{\boldsymbol{w}}}} such that \frac{{{\mathit{\boldsymbol{w}}}}^{T} {{\mathit{\boldsymbol{V}}}} {{\mathit{\boldsymbol{w}}}}}{\left\{\sum_{k = 1}^{K}w_{k}h_{\tau_{k}}\left(b_{0 k}\right)\right\}^2} < \frac{V}{\left\{h_{\tau}\left(b_{01}\right)\right\}^2} . Note that the right-hand side of the inequality is just the asymptotic variance factor given in Theorem 1.
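One way to see this, as a sketch that ignores the positivity constraint on the weights and assumes \mathit{\boldsymbol{V}} is positive definite: write \mathit{\boldsymbol{h}} = (h_{\tau_1}(b_{01}), \ldots, h_{\tau_K}(b_{0K}))^{\rm T} . The variance factor is invariant to rescaling \mathit{\boldsymbol{w}} , so minimizing it amounts to minimizing \mathit{\boldsymbol{w}}^{\rm T}\mathit{\boldsymbol{V}}\mathit{\boldsymbol{w}} subject to \mathit{\boldsymbol{w}}^{\rm T}\mathit{\boldsymbol{h}} = 1 , and a Lagrange multiplier step gives
\begin{equation*} \min\limits_{\mathit{\boldsymbol{w}}:\, \mathit{\boldsymbol{w}}^{\rm T}\mathit{\boldsymbol{h}} = 1}\mathit{\boldsymbol{w}}^{\rm T}\mathit{\boldsymbol{V}}\mathit{\boldsymbol{w}} = \frac{1}{\mathit{\boldsymbol{h}}^{\rm T}\mathit{\boldsymbol{V}}^{-1}\mathit{\boldsymbol{h}}} \quad\text{at}\quad \mathit{\boldsymbol{w}}^{*}\propto\mathit{\boldsymbol{V}}^{-1}\mathit{\boldsymbol{h}}, \qquad \mathit{\boldsymbol{h}}^{\rm T}\mathit{\boldsymbol{V}}^{-1}\mathit{\boldsymbol{h}}\geq\frac{h_{\tau_1}(b_{01})^2}{V_{11}}, \end{equation*}
where the last step is the Cauchy-Schwarz inequality \big(\mathit{\boldsymbol{e}}_1^{\rm T}\mathit{\boldsymbol{h}}\big)^2\leq\big(\mathit{\boldsymbol{e}}_1^{\rm T}\mathit{\boldsymbol{V}}\mathit{\boldsymbol{e}}_1\big)\big(\mathit{\boldsymbol{h}}^{\rm T}\mathit{\boldsymbol{V}}^{-1}\mathit{\boldsymbol{h}}\big) , with equality only when \mathit{\boldsymbol{h}}\propto\mathit{\boldsymbol{V}}\mathit{\boldsymbol{e}}_1 ; outside that degenerate case, the composite variance factor at \mathit{\boldsymbol{w}}^{*} is strictly smaller than the single-level one.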
In this section, a Monte Carlo simulation is used to investigate the finite sample properties of the proposed estimation approaches. The data sets used in the simulation are generated from the following model,
\begin{equation*} Y = \mathit{\boldsymbol{Z}}^{{\rm T}}\boldsymbol\alpha+\int_0^1 X(t)\beta(t) dt +\sigma (\mathit{\boldsymbol{Z}},X)e, \end{equation*} |
where the slope function \beta(t) = \sqrt 2 \sin(\pi t/2)+3 \sqrt 2 \sin(3 \pi t/2) , and X(t) = \sum\limits_{j = 1}^{50} \xi_j \phi_j(t) , here \phi_j(t) = \sqrt 2 \sin((j-0.5)\pi t) , and \xi_j s are mutually independent normal random variables with mean 0 and variance \lambda_j = ((j-0.5)\pi)^{-2} . The true values of parameters are set as \mathit{\boldsymbol{\alpha}} = (\alpha_1, \alpha_2)^{\rm T} = (10, 5)^{\rm T} , and {\mathit{\boldsymbol{Z}}}\sim N\left(0, \Sigma_{{\mathit{\boldsymbol{Z}}}}\right) with \left(\Sigma_{{\mathit{\boldsymbol{Z}}}}\right)_{i, j} = 0.75^{|i-j|} for i, j = 1, 2 .
Five different distributions for e are considered as follows: (a) standard normal distribution N(0, 1) ; (b) positively skewed normal distribution {SN}(0, 1, 15) ; (c) positively skewed t -distribution {St}(0, 1, 5, 3) ; (d) mixture of normals (MN) 0.95 N(0, 1)+0.05 N\left(0, 10^{2}\right) , which produces a distribution with outliers of response; (e) bimodal distribution ( Bimodal ) {\tilde{\eta}}N(-1.2, 1) + (1-\tilde{\eta})N(1.2, 1) with \tilde{\eta}\sim Binomial(1, 0.5) . The multiplier \sigma(\mathit{\boldsymbol{Z}}, X) can be generated from either of the following two models:
(A) (homoscedastic) \sigma(\mathit{\boldsymbol{Z}}, X) = 1 ;
(B) (heteroscedastic) \sigma(\mathit{\boldsymbol{Z}}, X) = { \left|1+0.1\left({Z_{1}\alpha_1^*+Z_{2}\alpha_2^*+\int_0^1 X(t)\beta^*(t) dt}\right)\right|} , where \alpha_1^* = \alpha_2^* = 1 , and \beta^*(t) = \sqrt 2 \sin(\pi t/2)+\sqrt 2 \sin(3 \pi t/2) .
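To fix ideas, the following Python sketch reproduces this data-generating mechanism for the N(0, 1) error case (a) under either variance model; it assumes a uniform grid of 100 points on [0, 1] (matching the MISE evaluation grid) and approximates the integrals by grid averages. The function name generate_sample is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_sample(n, hetero=False):
    """Draw one sample of size n from the simulation model with standard
    normal errors (case (a)); other error laws can be substituted for e."""
    t = np.linspace(0.0, 1.0, 100)                     # uniform grid on [0, 1]
    j = np.arange(1, 51)
    lam = ((j - 0.5) * np.pi) ** -2                    # eigenvalues lambda_j
    phi = np.sqrt(2) * np.sin(np.outer(t, (j - 0.5) * np.pi))  # phi_j(t) on grid
    xi = rng.normal(0.0, np.sqrt(lam), size=(n, 50))   # scores xi_j ~ N(0, lambda_j)
    X = xi @ phi.T                                     # X_i(t), truncated at 50 terms
    beta = np.sqrt(2) * np.sin(np.pi * t / 2) + 3 * np.sqrt(2) * np.sin(3 * np.pi * t / 2)
    Sigma_Z = np.array([[1.0, 0.75], [0.75, 1.0]])     # (Sigma_Z)_{ij} = 0.75^{|i-j|}
    Z = rng.multivariate_normal(np.zeros(2), Sigma_Z, size=n)
    alpha = np.array([10.0, 5.0])
    lin = Z @ alpha + (X * beta).mean(axis=1)          # grid average ~ integral
    e = rng.standard_normal(n)
    if hetero:                                         # variance model (B)
        beta_star = np.sqrt(2) * np.sin(np.pi * t / 2) + np.sqrt(2) * np.sin(3 * np.pi * t / 2)
        sigma = np.abs(1 + 0.1 * (Z[:, 0] + Z[:, 1] + (X * beta_star).mean(axis=1)))
    else:                                              # variance model (A)
        sigma = 1.0
    return lin + sigma * e, Z, X
```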
Implementing the proposed estimation method requires the predetermined levels over (0, 1), i.e., \left\{\tau_{k}\right\}_{k = 1}^{K} . Similar to the setting in [14], we take K = 19 , and choose the equidistant levels \tau_{k} = k /(K+1) , k = 1, 2, \ldots, K . In addition, for the WCAHR estimator, we employ the adaptive weights given in Remark 2.
For comparison, we also calculate the OLS estimator, the least absolute deviation (LAD) estimator, the ESL estimator, the H-ESL estimator, the Huber estimator (which corresponds to the case of AHR estimator at \tau = 0.5 ), the CAHR estimator, and the AHR estimators at \tau = 0.25 and 0.75 . In this study, the sample size n is set as 200 or 400.
To implement these methods, we need to choose the tuning parameter m . In this paper, m is selected by the cumulative percentage of total variability (CPV) method, that is,
m = \underset{p}{\operatorname{argmin}}\bigg\{ \sum\limits_{i = 1}^{p}\hat{\lambda}_i\Big/\sum\limits_{i = 1}^{\infty}\hat{\lambda}_i\geqslant\delta \bigg\}, |
where \delta equals 85\% . Other criteria, such as AIC and BIC, can also be employed.
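The CPV rule can be made concrete with a small sketch, assuming X is observed on a uniform grid of T points on [0, 1], so that the eigenpairs of the empirical covariance operator are approximated by an eigendecomposition of the discretized covariance matrix; the function name fpca_scores_cpv is ours.

```python
import numpy as np

def fpca_scores_cpv(X, delta=0.85):
    """FPCA on an n x T data matrix observed on a uniform grid of [0, 1];
    returns estimated scores, eigenfunctions, eigenvalues and the CPV cut m."""
    n, T = X.shape
    Xc = X - X.mean(axis=0)                 # center (the model assumes zero mean)
    w = 1.0 / T                             # quadrature weight of the grid
    C_hat = (Xc.T @ Xc / n) * w             # discretized covariance operator
    evals, evecs = np.linalg.eigh(C_hat)
    evals, evecs = evals[::-1], evecs[:, ::-1]   # sort eigenvalues descending
    evals = np.clip(evals, 0.0, None)
    cpv = np.cumsum(evals) / evals.sum()
    m = int(np.searchsorted(cpv, delta)) + 1     # smallest m with CPV >= delta
    v_hat = evecs[:, :m] / np.sqrt(w)       # eigenfunctions with unit L2[0,1] norm
    U_hat = Xc @ v_hat * w                  # scores <X_i, v_hat_j>
    return U_hat, v_hat, evals[:m], m
```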
For each setting and each method, the bias (Bias) and standard deviation (Sd) of the estimates of \alpha_1 and \alpha_2 , the mean squared error (MSE) of the estimated \mathit{\boldsymbol{\alpha}} with {\rm{MSE}} = \frac{1}{S}\sum_{s = 1}^S\sum_{j = 1}^2(\hat{\alpha}_j^s-{\alpha}_j)^2 , as well as the mean integrated squared error (MISE) of the estimated \beta(t) over S = 500 repetitions are summarized, where {\rm{MISE}} = \frac{1}{100S}\sum_{s = 1}^S\sum_{i = 1}^{100} {\left(\hat{\beta}^s(t_i)-{\beta}(t_i)\right)}^2 with t_i s being 100 equally spaced grid points in [0, 1]; here \hat{\alpha}_j^s and \hat{\beta}^s(\cdot) are the estimates of {\alpha}_j and {\beta}(\cdot) from the s th repetition, j = 1, 2 .
Table 1 presents the results for the homoscedastic case. From Table 1, we can see the following facts: (a) The Sd, MSE and MISE decrease as the sample size n increases from 200 to 400. (b) The proposed estimators are almost unbiased, which, combined with fact (a), further supports their consistency. (c) The proposed adaptively weighted estimator performs similarly to the OLS estimator under the normal error and is comparable to the corresponding H-ESL estimator for the mixture of normals, but it is significantly better than the other estimators considered when the distribution of the model error is skewed or bimodal, while remaining robust to outliers. This demonstrates that the proposed WCAHR estimator adapts well to different error distributions and is thus more useful in practice.
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | \hat{\alpha}_1: Bias | \hat{\alpha}_1: Sd | \hat{\alpha}_2: Bias | \hat{\alpha}_2: Sd
N(0, 1) | 200 | OLS | 0.2690 | 0.0229 | -0.0048 | 0.1074 | 0.0096 | 0.1059 |
LAD | 0.3294 | 0.0354 | -0.0099 | 0.1320 | 0.0115 | 0.1333 | ||
ESL | 0.3434 | 0.0394 | 0.0009 | 0.1405 | -0.0013 | 0.1403 | ||
H-ESL | 0.2824 | 0.0259 | 0.0055 | 0.1141 | -0.0059 | 0.1133 | ||
AHR(0.25) | 0.3294 | 0.0786 | -0.0077 | 0.2014 | 0.0105 | 0.1948 | ||
Huber | 0.2758 | 0.0234 | -0.0063 | 0.1090 | 0.0102 | 0.1067 | ||
AHR(0.75) | 0.5251 | 0.0754 | -0.0092 | 0.1894 | 0.0150 | 0.1981 | ||
CAHR | 0.2689 | 0.0233 | -0.0051 | 0.1095 | 0.0091 | 0.1060 | ||
WCAHR | 0.2693 | 0.0230 | -0.0055 | 0.1080 | 0.0101 | 0.1058 | ||
400 | OLS | 0.1004 | 0.0105 | -0.0011 | 0.0710 | 0.0015 | 0.0738 | |
LAD | 0.1304 | 0.0163 | -0.0027 | 0.0880 | 0.0045 | 0.0924 | ||
ESL | 0.1349 | 0.0175 | 0.0010 | 0.0918 | 0.0008 | 0.0955 | ||
H-ESL | 0.1031 | 0.0113 | 0.0011 | 0.0727 | 0.0010 | 0.0773 | ||
AHR(0.25) | 0.1304 | 0.0358 | -0.0048 | 0.1313 | -0.0013 | 0.1361 | ||
Huber | 0.1048 | 0.0107 | -0.0018 | 0.0720 | 0.0027 | 0.0742 | ||
AHR(0.75) | 0.2337 | 0.0405 | 0.0043 | 0.1454 | 0.0019 | 0.1390 | ||
CAHR | 0.1006 | 0.0107 | -0.0014 | 0.0721 | 0.0020 | 0.0743 | ||
WCAHR | 0.1005 | 0.0105 | -0.0014 | 0.0709 | 0.0019 | 0.0736 | ||
{SN}(0, 1, 15) | 200 | OLS | 0.2929 | 0.0241 | -0.0037 | 0.1127 | -0.0012 | 0.1065 |
LAD | 0.2452 | 0.0137 | -0.0001 | 0.0844 | -0.0007 | 0.0813 | ||
ESL | 0.3665 | 0.0387 | -0.0031 | 0.1412 | -0.0023 | 0.1368 | ||
H-ESL | 0.3377 | 0.0305 | -0.0009 | 0.1244 | -0.0028 | 0.1225 | ||
AHR(0.25) | 0.2260 | 0.0086 | 0.0012 | 0.0652 | -0.0022 | 0.0655 | ||
Huber | 0.1998 | 0.0098 | -0.0011 | 0.0698 | -0.0007 | 0.0702 | ||
AHR(0.75) | 0.2314 | 0.0172 | -0.0038 | 0.0949 | -0.0001 | 0.0903 | ||
CAHR | 0.2122 | 0.0099 | -0.0007 | 0.0717 | -0.0010 | 0.0691 | ||
WCAHR | 0.1884 | 0.0086 | 0.0002 | 0.0659 | -0.0017 | 0.0652 | ||
400 | OLS | 0.0994 | 0.0122 | -0.0024 | 0.0794 | 0.0052 | 0.0769 | |
LAD | 0.0718 | 0.0065 | -0.0004 | 0.0569 | 0.0001 | 0.0568 | ||
ESL | 0.1372 | 0.0193 | -0.0012 | 0.1003 | 0.0031 | 0.0962 | ||
H-ESL | 0.1022 | 0.0139 | -0.0043 | 0.0832 | 0.0056 | 0.0832 | ||
AHR(0.25) | 0.0796 | 0.0037 | 0.0028 | 0.0418 | -0.0010 | 0.0437 | ||
Huber | 0.0688 | 0.0045 | 0.0005 | 0.0468 | 0.0003 | 0.0483 | ||
AHR(0.75) | 0.0912 | 0.0082 | -0.0010 | 0.0627 | 0.0019 | 0.0652 | ||
CAHR | 0.0712 | 0.0043 | 0.0007 | 0.0454 | 0.0009 | 0.0471 | ||
WCAHR | 0.0567 | 0.0035 | -0.0009 | 0.0418 | 0.0018 | 0.0424 | ||
{St}(0, 1, 5, 3) | 200 | OLS | 0.4781 | 0.0695 | 0.0047 | 0.1815 | -0.0185 | 0.1902 |
LAD | 0.2461 | 0.0215 | 0.0057 | 0.1052 | -0.0094 | 0.1015 | ||
ESL | 0.3793 | 0.0451 | 0.0023 | 0.1538 | -0.0062 | 0.1462 | ||
H-ESL | 0.3682 | 0.0443 | 0.0059 | 0.1533 | -0.0078 | 0.1437 | ||
AHR(0.25) | 0.2541 | 0.0203 | 0.0014 | 0.0987 | 0.0039 | 0.1025 | ||
Huber | 0.2829 | 0.0284 | 0.0082 | 0.1204 | -0.0013 | 0.1175 | ||
AHR(0.75) | 1.8541 | 0.3745 | 0.0128 | 0.4566 | -0.0243 | 0.4065 | ||
CAHR | 0.3606 | 0.0286 | 0.0037 | 0.1188 | -0.0001 | 0.1202 | ||
WCAHR | 0.2205 | 0.0169 | 0.0018 | 0.0920 | -0.0089 | 0.0915 | ||
400 | OLS | 0.2310 | 0.0325 | -0.0021 | 0.1296 | 0.0008 | 0.1252 | |
LAD | 0.1004 | 0.0109 | 0.0006 | 0.0742 | -0.0001 | 0.0735 | ||
ESL | 0.1563 | 0.0212 | -0.0053 | 0.1002 | 0.0027 | 0.1054 | ||
H-ESL | 0.1516 | 0.0178 | -0.0045 | 0.0917 | 0.0011 | 0.0966 | ||
AHR(0.25) | 0.1000 | 0.0088 | 0.0028 | 0.0671 | -0.0004 | 0.0659 | ||
Huber | 0.1108 | 0.0116 | 0.0019 | 0.0781 | 0.0021 | 0.0743 | ||
AHR(0.75) | 1.5565 | 0.3644 | -0.0416 | 0.4269 | 0.0216 | 0.4242 | ||
CAHR | 0.1496 | 0.0153 | 0.0016 | 0.0873 | 0.0007 | 0.0874 | ||
WCAHR | 0.0838 | 0.0076 | -0.0015 | 0.0616 | -0.0000 | 0.0618 | ||
MN | 200 | OLS | 0.8806 | 0.1464 | -0.0134 | 0.2675 | -0.0016 | 0.2732 |
LAD | 0.3783 | 0.0358 | -0.0005 | 0.1320 | -0.0013 | 0.1355 | ||
ESL | 0.3719 | 0.0363 | 0.0025 | 0.1331 | -0.0044 | 0.1361 | ||
H-ESL | 0.3101 | 0.0280 | -0.0018 | 0.1175 | -0.0028 | 0.1189 | ||
AHR(0.25) | 0.3297 | 0.1148 | -0.0120 | 0.2346 | 0.0105 | 0.2439 | ||
Huber | 0.3685 | 0.0499 | -0.0042 | 0.1613 | 0.0071 | 0.1543 | ||
AHR(0.75) | 0.7037 | 0.1060 | 0.0078 | 0.2321 | -0.0033 | 0.2281 | ||
CAHR | 0.7857 | 0.1035 | 0.0031 | 0.2292 | 0.0035 | 0.2257 | ||
WCAHR | 0.3252 | 0.0340 | -0.0046 | 0.1289 | -0.0033 | 0.1316 | ||
400 | OLS | 0.3822 | 0.0715 | 0.0063 | 0.1887 | -0.0099 | 0.1892 | |
LAD | 0.1307 | 0.0190 | 0.0055 | 0.0983 | -0.0060 | 0.0965 | ||
ESL | 0.1268 | 0.0180 | 0.0032 | 0.0963 | -0.0043 | 0.0932 | ||
H-ESL | 0.1032 | 0.0129 | 0.0048 | 0.0807 | -0.0067 | 0.0793 | ||
AHR(0.25) | 0.1491 | 0.0604 | 0.0052 | 0.1731 | 0.0055 | 0.1742 | ||
Huber | 0.1332 | 0.0156 | -0.0007 | 0.0880 | 0.0033 | 0.0887 | ||
AHR(0.75) | 0.3391 | 0.0505 | -0.0049 | 0.1597 | 0.0040 | 0.1581 | ||
CAHR | 0.3435 | 0.0419 | -0.0030 | 0.1442 | 0.0074 | 0.1449 | ||
WCAHR | 0.1127 | 0.0157 | 0.0055 | 0.0894 | -0.0077 | 0.0872 | ||
{Bimodal} | 200 | OLS | 0.4317 | 0.0560 | -0.0001 | 0.1690 | -0.0018 | 0.1657 |
LAD | 0.8634 | 0.1201 | -0.0040 | 0.2438 | 0.0008 | 0.2463 | ||
ESL | 2.2921 | 0.4163 | -0.0148 | 0.4541 | -0.0002 | 0.4582 | ||
H-ESL | 0.4417 | 0.0558 | -0.0004 | 0.1687 | -0.0017 | 0.1653 | ||
AHR(0.25) | 0.9240 | 0.3169 | -0.0258 | 0.3973 | 0.0352 | 0.3964 | ||
Huber | 0.5694 | 0.0776 | -0.0056 | 0.2018 | 0.0129 | 0.1914 | ||
AHR(0.75) | 1.8171 | 0.3150 | 0.0110 | 0.4044 | -0.0107 | 0.3889 | ||
CAHR | 0.5191 | 0.0546 | -0.0075 | 0.1652 | 0.0116 | 0.1647 | ||
WCAHR | 0.4215 | 0.0552 | 0.0016 | 0.1679 | -0.0016 | 0.1642 | ||
400 | OLS | 0.1861 | 0.0296 | 0.0032 | 0.1170 | -0.0017 | 0.1262 | |
LAD | 0.4069 | 0.0670 | -0.0068 | 0.1765 | 0.0061 | 0.1891 | ||
ESL | 1.5317 | 0.2511 | -0.0185 | 0.3481 | 0.0033 | 0.3600 | ||
H-ESL | 0.1860 | 0.0298 | 0.0032 | 0.1175 | -0.0021 | 0.1265 | ||
AHR(0.25) | 0.4420 | 0.1620 | -0.0081 | 0.2835 | -0.0025 | 0.2855 | ||
Huber | 0.2369 | 0.0454 | 0.0023 | 0.1483 | -0.0082 | 0.1529 | ||
AHR(0.75) | 0.8504 | 0.1523 | 0.0142 | 0.2774 | -0.0021 | 0.2742 | ||
CAHR | 0.2047 | 0.0296 | 0.0025 | 0.1195 | -0.0014 | 0.1239 | ||
WCAHR | 0.1825 | 0.0291 | 0.0033 | 0.1162 | -0.0019 | 0.1249 |
Table 2 presents the results for the more challenging heteroscedastic case, which violates the conditions assumed in this paper. The proposed WCAHR estimator outperforms the other estimators considered for the normal, skewed normal, and skewed t error distributions, and is comparable to the corresponding H-ESL estimator for the mixture of normals and the bimodal distribution. This further illustrates that the proposed WCAHR estimator may be more widely applicable. Although the simulation results show appealing performance for the considered heteroscedastic errors, general theoretical results remain challenging, and additional conditions on the conditional moments of e may be needed.
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | \hat{\alpha}_1: Bias | \hat{\alpha}_1: Sd | \hat{\alpha}_2: Bias | \hat{\alpha}_2: Sd
N(0, 1) | 200 | OLS | 0.2560 | 0.0237 | 0.0049 | 0.1105 | -0.0052 | 0.1069 |
LAD | 0.2988 | 0.0300 | 0.0054 | 0.1228 | -0.0009 | 0.1222 | ||
ESL | 0.2990 | 0.0316 | 0.0041 | 0.1272 | -0.0021 | 0.1240 | ||
H-ESL | 0.2655 | 0.0258 | 0.0060 | 0.1163 | -0.0052 | 0.1104 | ||
AHR(0.25) | 0.2988 | 0.1065 | -0.0860 | 0.2178 | -0.0965 | 0.2058 | ||
Huber | 0.2531 | 0.0234 | 0.0050 | 0.1099 | -0.0045 | 0.1062 | ||
AHR(0.75) | 0.5960 | 0.0911 | 0.0958 | 0.1880 | 0.0836 | 0.1990 | ||
CAHR | 0.2562 | 0.0236 | 0.0055 | 0.1099 | -0.0051 | 0.1070 | ||
WCAHR | 0.2519 | 0.0227 | 0.0051 | 0.1081 | -0.0045 | 0.1047 | ||
400 | OLS | 0.1056 | 0.0119 | -0.0029 | 0.0767 | -0.0002 | 0.0777 | |
LAD | 0.1269 | 0.0153 | -0.0027 | 0.0865 | 0.0008 | 0.0882 | ||
ESL | 0.1257 | 0.0157 | -0.0026 | 0.0884 | 0.0019 | 0.0888 | ||
H-ESL | 0.1061 | 0.0116 | -0.0036 | 0.0760 | 0.0018 | 0.0762 | ||
AHR(0.25) | 0.1269 | 0.0588 | -0.1020 | 0.1418 | -0.0889 | 0.1427 | ||
Huber | 0.1060 | 0.0117 | -0.0032 | 0.0764 | 0.0001 | 0.0766 | ||
AHR(0.75) | 0.2695 | 0.0581 | 0.0990 | 0.1428 | 0.0859 | 0.1434 | ||
CAHR | 0.1064 | 0.0120 | -0.0023 | 0.0769 | -0.0005 | 0.0777 | ||
WCAHR | 0.1046 | 0.0115 | -0.0030 | 0.0754 | 0.0002 | 0.0762 | ||
{SN}(0, 1, 15) | 200 | OLS | 0.3164 | 0.0357 | 0.0862 | 0.1070 | 0.0748 | 0.1057 |
LAD | 0.2417 | 0.0209 | 0.0695 | 0.0789 | 0.0586 | 0.0803 | ||
ESL | 0.3578 | 0.0306 | 0.0140 | 0.1234 | 0.0119 | 0.1228 | ||
H-ESL | 0.3390 | 0.0316 | 0.0308 | 0.1211 | 0.0304 | 0.1228 | ||
AHR(0.25) | 0.2417 | 0.0139 | 0.0555 | 0.0651 | 0.0485 | 0.0650 | ||
Huber | 0.2344 | 0.0209 | 0.0780 | 0.0711 | 0.0684 | 0.0713 | ||
AHR(0.75) | 0.2809 | 0.0427 | 0.1122 | 0.1026 | 0.0969 | 0.1010 | ||
CAHR | 0.2454 | 0.0211 | 0.0774 | 0.0725 | 0.0684 | 0.0718 | ||
WCAHR | 0.2155 | 0.0174 | 0.0727 | 0.0644 | 0.0623 | 0.0642 | ||
400 | OLS | 0.1180 | 0.0242 | 0.0859 | 0.0754 | 0.0776 | 0.0718 | |
LAD | 0.0776 | 0.0149 | 0.0683 | 0.0573 | 0.0622 | 0.0553 | ||
ESL | 0.1267 | 0.0143 | 0.0180 | 0.0841 | 0.0001 | 0.0833 | ||
H-ESL | 0.1199 | 0.0179 | 0.0527 | 0.0831 | 0.0393 | 0.0819 | ||
AHR(0.25) | 0.0776 | 0.0096 | 0.0532 | 0.0478 | 0.0493 | 0.0453 | ||
Huber | 0.0763 | 0.0163 | 0.0753 | 0.0511 | 0.0740 | 0.0500 | ||
AHR(0.75) | 0.1060 | 0.0330 | 0.1043 | 0.0689 | 0.1117 | 0.0700 | ||
CAHR | 0.0813 | 0.0158 | 0.0732 | 0.0491 | 0.0755 | 0.0483 | ||
WCAHR | 0.0672 | 0.0130 | 0.0678 | 0.0451 | 0.0668 | 0.0441 | ||
{St}(0, 1, 5, 3) | 200 | OLS | 0.5695 | 0.0947 | 0.1071 | 0.1861 | 0.1158 | 0.1876 |
LAD | 0.3044 | 0.0287 | 0.0704 | 0.0970 | 0.0738 | 0.0941 | ||
ESL | 0.4205 | 0.0400 | -0.0035 | 0.1404 | -0.0162 | 0.1415 | ||
H-ESL | 0.3927 | 0.0405 | 0.0125 | 0.1409 | 0.0038 | 0.1431 | ||
AHR(0.25) | 0.3044 | 0.0219 | 0.0477 | 0.0944 | 0.0466 | 0.0925 | ||
Huber | 0.3450 | 0.0355 | 0.0764 | 0.1077 | 0.0784 | 0.1093 | ||
AHR(0.75) | 2.6347 | 0.5711 | 0.1826 | 0.5139 | 0.1919 | 0.4867 | ||
CAHR | 0.4548 | 0.0423 | 0.0776 | 0.1227 | 0.0826 | 0.1200 | ||
WCAHR | 0.2863 | 0.0246 | 0.0667 | 0.0868 | 0.0703 | 0.0875 | ||
400 | OLS | 0.2663 | 0.0577 | 0.1027 | 0.1286 | 0.1196 | 0.1278 | |
LAD | 0.1092 | 0.0199 | 0.0732 | 0.0694 | 0.0738 | 0.0657 | ||
ESL | 0.1423 | 0.0190 | -0.0083 | 0.0973 | -0.0090 | 0.0968 | ||
H-ESL | 0.1429 | 0.0202 | 0.0184 | 0.0971 | 0.0177 | 0.1003 | ||
AHR(0.25) | 0.1092 | 0.0148 | 0.0456 | 0.0725 | 0.0507 | 0.0698 | ||
Huber | 0.1447 | 0.0237 | 0.0777 | 0.0762 | 0.0784 | 0.0755 | ||
AHR(0.75) | 2.4856 | 0.6166 | 0.2100 | 0.5037 | 0.1899 | 0.5317 | ||
CAHR | 0.1858 | 0.0264 | 0.0726 | 0.0833 | 0.0831 | 0.0852 | ||
WCAHR | 0.0932 | 0.0160 | 0.0645 | 0.0596 | 0.0694 | 0.0592 | ||
MN | 200 | OLS | 0.9780 | 0.1653 | -0.0191 | 0.2838 | 0.0090 | 0.2904 |
LAD | 0.3672 | 0.0409 | -0.0111 | 0.1436 | 0.0065 | 0.1420 | ||
ESL | 0.3644 | 0.0390 | -0.0092 | 0.1407 | 0.0091 | 0.1381 | ||
H-ESL | 0.3186 | 0.0321 | -0.0072 | 0.1260 | 0.0047 | 0.1271 | ||
AHR(0.25) | 0.3672 | 0.1579 | -0.1021 | 0.2647 | -0.0797 | 0.2665 | ||
Huber | 0.3700 | 0.0407 | -0.0134 | 0.1412 | 0.0106 | 0.1431 | ||
AHR(0.75) | 0.7890 | 0.1350 | 0.0802 | 0.2419 | 0.0878 | 0.2498 | ||
CAHR | 0.8570 | 0.0989 | -0.0114 | 0.2229 | 0.0077 | 0.2214 | ||
WCAHR | 0.3324 | 0.0352 | -0.0109 | 0.1326 | 0.0071 | 0.1322 | ||
400 | OLS | 0.4215 | 0.0781 | 0.0126 | 0.2002 | -0.0081 | 0.1944 | |
LAD | 0.1244 | 0.0189 | 0.0000 | 0.0980 | 0.0028 | 0.0964 | ||
ESL | 0.1247 | 0.0173 | 0.0004 | 0.0940 | 0.0028 | 0.0921 | ||
H-ESL | 0.1073 | 0.0131 | 0.0010 | 0.0808 | 0.0019 | 0.0808 | ||
AHR(0.25) | 0.1244 | 0.0759 | -0.0849 | 0.1701 | -0.0953 | 0.1752 | ||
Huber | 0.1222 | 0.0147 | 0.0054 | 0.0861 | -0.0011 | 0.0854 | ||
AHR(0.75) | 0.3546 | 0.0735 | 0.1014 | 0.1642 | 0.0908 | 0.1673 | ||
CAHR | 0.3496 | 0.0531 | 0.0116 | 0.1647 | -0.0069 | 0.1604 | ||
WCAHR | 0.1114 | 0.0148 | 0.0038 | 0.0860 | 0.0001 | 0.0857 | ||
{Bimodal} | 200 | OLS | 0.4259 | 0.0604 | -0.0118 | 0.1733 | 0.0020 | 0.1738 |
LAD | 0.7261 | 0.1323 | -0.0248 | 0.2573 | 0.0096 | 0.2558 | ||
ESL | 1.9087 | 0.3505 | -0.0283 | 0.4120 | 0.0078 | 0.4241 | ||
H-ESL | 0.4337 | 0.0614 | -0.0118 | 0.1758 | 0.0031 | 0.1743 | ||
AHR(0.25) | 0.7261 | 0.5567 | -0.1960 | 0.5100 | -0.1893 | 0.4716 | ||
Huber | 0.4839 | 0.0763 | -0.0180 | 0.1974 | 0.0064 | 0.1924 | ||
AHR(0.75) | 2.4130 | 0.4776 | 0.1823 | 0.4534 | 0.1885 | 0.4509 | ||
CAHR | 0.6745 | 0.1187 | -0.0223 | 0.2449 | 0.0087 | 0.2411 | ||
WCAHR | 0.4361 | 0.0642 | -0.0096 | 0.1797 | 0.0068 | 0.1783 | ||
400 | OLS | 0.1934 | 0.0295 | 0.0087 | 0.1251 | -0.0113 | 0.1169 | |
LAD | 0.3626 | 0.0578 | 0.0145 | 0.1715 | -0.0169 | 0.1670 | ||
ESL | 1.0701 | 0.1718 | 0.0200 | 0.2893 | -0.0187 | 0.2955 | ||
H-ESL | 0.1948 | 0.0299 | 0.0098 | 0.1256 | -0.0113 | 0.1178 | ||
AHR(0.25) | 0.3626 | 0.3363 | -0.2023 | 0.3440 | -0.2239 | 0.3562 | ||
Huber | 0.2316 | 0.0383 | 0.0093 | 0.1438 | -0.0130 | 0.1320 | ||
AHR(0.75) | 1.2586 | 0.3302 | 0.2280 | 0.3504 | 0.1848 | 0.3484 | ||
CAHR | 0.3292 | 0.0506 | 0.0163 | 0.1645 | -0.0174 | 0.1516 | ||
WCAHR | 0.2058 | 0.0313 | 0.0119 | 0.1298 | -0.0092 | 0.1193 |
To examine the effect of the choice of levels on the performance of the WCAHR estimators, especially for skewed error distributions, we also vary the number K_1 of levels placed in (0, 0.5) for a given total number K of levels. Specifically, for K = 19 and different values of K_1 , we set \tau_i = \frac{i}{2K_1} for i = 1, \cdots, K_1 and \tau_i = \frac{K+1-2K_1+i}{2(K+1-K_1)} for i = K_1+1, \cdots, K . Table 3 presents the estimation results. We find that the choice of the levels does not undermine the performance of the WCAHR estimators, although a smaller number of levels over (0, 0.5) leads to slightly larger MSE and MISE for the positively skewed error distributions. In addition, the MISEs and MSEs decrease as K_1 increases, and eventually stabilize. This suggests that one may appropriately place more levels over (0, 0.5) when dealing with positively skewed error distributions.
Errors | n | K_1 | MISE | MSE(\hat{\mathit{\boldsymbol{\alpha}}}) | \hat{\alpha}_1: Bias | \hat{\alpha}_1: Sd | \hat{\alpha}_2: Bias | \hat{\alpha}_2: Sd
{SN}(0, 1, 15) | 200 | 4 | 0.2321 | 0.0088 | -0.0034 | 0.0663 | 0.0044 | 0.0665
6 | 0.2295 | 0.0084 | -0.0036 | 0.0646 | 0.0042 | 0.0645 | ||
8 | 0.2279 | 0.0081 | -0.0038 | 0.0635 | 0.0042 | 0.0633 | ||
10 | 0.2269 | 0.0079 | -0.0040 | 0.0629 | 0.0041 | 0.0626 | ||
12 | 0.2264 | 0.0078 | -0.0041 | 0.0627 | 0.0041 | 0.0621 | ||
14 | 0.2262 | 0.0078 | -0.0043 | 0.0627 | 0.0041 | 0.0620 | ||
400 | 4 | 0.0626 | 0.0039 | 0.0071 | 0.0450 | -0.0054 | 0.0425 | |
6 | 0.0611 | 0.0037 | 0.0068 | 0.0440 | -0.0050 | 0.0415 | ||
8 | 0.0601 | 0.0036 | 0.0065 | 0.0434 | -0.0046 | 0.0410 | ||
10 | 0.0595 | 0.0036 | 0.0063 | 0.0432 | -0.0043 | 0.0409 | ||
12 | 0.0591 | 0.0036 | 0.0060 | 0.0432 | -0.0039 | 0.0409 | ||
14 | 0.0589 | 0.0036 | 0.0059 | 0.0433 | -0.0038 | 0.0411 | ||
{St}(0, 1, 5, 3) | 200 | 4 | 0.2445 | 0.0178 | -0.0027 | 0.0961 | -0.0023 | 0.0927
6 | 0.2388 | 0.0169 | -0.0025 | 0.0938 | -0.0024 | 0.0900 | ||
8 | 0.2349 | 0.0163 | -0.0025 | 0.0923 | -0.0023 | 0.0880 | ||
10 | 0.2321 | 0.0158 | -0.0025 | 0.0911 | -0.0023 | 0.0866 | ||
12 | 0.2301 | 0.0155 | -0.0024 | 0.0903 | -0.0022 | 0.0856 | ||
14 | 0.2287 | 0.0153 | -0.0023 | 0.0898 | -0.0022 | 0.0849 | ||
400 | 4 | 0.0925 | 0.0090 | -0.0079 | 0.0678 | 0.0076 | 0.0652 | |
6 | 0.0899 | 0.0085 | -0.0078 | 0.0659 | 0.0076 | 0.0637 | ||
8 | 0.0880 | 0.0082 | -0.0076 | 0.0646 | 0.0076 | 0.0626 | ||
10 | 0.0867 | 0.0080 | -0.0076 | 0.0636 | 0.0076 | 0.0619 | ||
12 | 0.0857 | 0.0078 | -0.0075 | 0.0629 | 0.0077 | 0.0615 | ||
14 | 0.0850 | 0.0077 | -0.0074 | 0.0623 | 0.0077 | 0.0611 | ||
In this section, we apply the proposed estimation methods, together with the competing methods mentioned in Section 3, to the electricity data and the Tecator data set. In the applications, all observations are centered before the regression analysis.
The data set consists of the daily average hourly electricity spot prices of the German electricity market ( Y ), the hourly values of Germany's wind power infeed ( X(t) ), the precipitation height ( Z_1 ), and the sunshine duration ( Z_2 ). Here we only consider the working days spanning January 1, 2006 to September 30, 2008. The hourly wind power infeed curves are shown in the left panel of Figure 3. The data set can be obtained from the online supplements of Liebl [23]. We adopt the following partial functional linear regression model to fit the data:
\begin{equation} Y = Z_1 \alpha_{1}+Z_2 \alpha_{2}+\int_{1}^{24} X(t) \beta(t) dt+e. \end{equation} | (4.1) |
First, the OLS method is applied to fit model (4.1). Then the Shapiro-Wilk test is applied to test the normality of the residuals; the p -value is less than 2.2 \times 10^{-16} . In addition, we present the estimated density of the residuals and the residual diagnostic plot (see Figure 1). Both the test and the plots clearly indicate that e follows a skewed distribution with outliers. Notice that the density of the residuals is similar to the {Bimodal} error distribution discussed in Section 3, and the simulation results indicate that the proposed method can be applied to, and provides reliable inference for, this kind of data.
To evaluate the predictions obtained with the different estimation methods, we randomly divide the data set into a training sample of 478 subjects and a testing sample with the remaining 160 subjects (indexed by \mathcal{J} ). The data are split N = 100, 200, 400 times, respectively. We use the median quadratic error of prediction ( \rm{MEDQEP} ) defined below as the criterion to evaluate performance:
{\rm{MEDQEP}} = \frac{1}{N}\sum\limits_{i = 1}^N{\rm{median}}\left\{{(Y_{ij}-\hat{Y}_{ij})^2}/{\textsf{Var}}_{\mathcal{J}}(Y_{ij}), j\in \mathcal{J}\right\}. |
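For concreteness, here is a sketch of the split-and-evaluate loop behind \rm{MEDQEP} ; fit and predict are hypothetical stand-ins for fitting any of the compared procedures on the training portion and predicting on the testing portion.

```python
import numpy as np

def medqep(y, Z, X, fit, predict, n_test, N=100, seed=0):
    """Average over N random splits of the median standardized squared
    prediction error on the held-out test set."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(N):
        idx = rng.permutation(len(y))
        test, train = idx[:n_test], idx[n_test:]
        model = fit(y[train], Z[train], X[train])    # hypothetical fitter
        y_hat = predict(model, Z[test], X[test])     # hypothetical predictor
        scores.append(np.median((y[test] - y_hat) ** 2 / np.var(y[test])))
    return float(np.mean(scores))
```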
The left 3 columns of Table 4 present the MEDQEPs of the methods mentioned above. According to these results, the WCAHR method is uniformly superior to the other estimators.
Methods | Electricity prices: N=100 | N=200 | N=400 | Tecator: N=100 | N=200 | N=400
OLS | 0.4269 | 0.4132 | 0.4153 | 2.7824\times10^{-3} | 2.7182\times10^{-3} | 2.7257\times10^{-3} |
LAD | 0.4094 | 0.4005 | 0.4068 | 2.9388\times10^{-3} | 2.8611\times10^{-3} | 2.8253\times10^{-3} |
ESL | 0.5751 | 0.5626 | 0.5578 | 2.7268\times10^{-3} | 2.6701\times10^{-3} | 2.6142\times10^{-3} |
H-ESL | 0.4104 | 0.4026 | 0.4064 | 2.7395\times10^{-3} | 2.7336\times10^{-3} | 2.6476\times10^{-3} |
AHR(0.25) | 0.4052 | 0.3985 | 0.4032 | 2.9056\times10^{-3} | 2.7658\times10^{-3} | 2.7753\times10^{-3} |
Huber | 0.4077 | 0.4027 | 0.4074 | 2.7636\times10^{-3} | 2.6491\times10^{-3} | 2.6278\times10^{-3} |
AHR(0.75) | 0.4296 | 0.4198 | 0.4280 | 2.7409\times10^{-3} | 2.6494\times10^{-3} | 2.6063\times10^{-3} |
CAHR | 0.4103 | 0.4019 | 0.4089 | 2.8442\times10^{-3} | 2.8106\times10^{-3} | 2.7457\times10^{-3} |
WCAHR | 0.4030 | 0.3967 | 0.4008 | 2.7236\times10^{-3} | 2.6152\times10^{-3} | 2.5799\times10^{-3} |
Table 5 (the first 2 columns) presents the estimates of the parametric part obtained by the various estimation methods based on the whole data set. According to the results, both the precipitation height and the sunshine duration have negative effects on the daily average hourly electricity spot prices. In addition, Figure 4(a) plots the estimated slope function obtained by the WCAHR method; the estimates of \beta(\cdot) obtained by the other methods exhibit similar patterns and are thus omitted here. From the figure, we can see that the prices have a larger (in absolute value) linkage with the wind power infeed in the daytime, which reflects price sensitivity: the market is active during the daytime, so the prices and the wind power infeed are more strongly correlated then. Moreover, Germany's wind power infeed has a negative effect on the daily average hourly electricity spot prices, reflecting the supply-demand balance: more wind infeed creates an oversupply of electricity and thus reduces the price.
Method | Electricity prices: \hat{\alpha}_1 | Electricity prices: \hat{\alpha}_2 | Tecator: \hat{\alpha}_1 | Tecator: \hat{\alpha}_2
OLS | -0.5983 | -0.4672 | -1.1056 | -0.6894 |
LAD | -0.7095 | -0.9438 | -1.0828 | -0.7611 |
ESL | -0.5125 | -0.4629 | -1.0894 | -0.7455 |
H-ESL | -0.6007 | -0.7212 | -1.0983 | -0.7367 |
AHR(0.25) | -0.5618 | -0.4725 | -1.1122 | -0.7026 |
Huber | -0.5799 | -0.4416 | -1.0981 | -0.7235 |
AHR(0.75) | -0.5812 | -0.4302 | -1.0854 | -0.7576 |
CAHR | -0.5394 | -0.4582 | -1.0990 | -0.7274 |
WCAHR | -0.5924 | -0.6182 | -1.0964 | -0.7270 |
The Tecator data set consists of 215 meat samples. For each sample, moisture, fat and protein are recorded in percent, and a 100-channel spectrum of absorbances is measured by the spectrometer. The data set is available from the R package fda.usc (see [24]). The right panel of Figure 3 shows the spectral trajectories. In this paper, the objective is to investigate the effects of the spectral trajectories X(t) , water content Z_1 and protein content Z_2 on the fat content Y by fitting the following model:
\begin{equation} Y = Z_1 \alpha_{1}+Z_2 \alpha_{2}+\int_{850}^{1050} X(t) \beta(t) dt+e. \end{equation} | (4.2) |
The density of the residuals and the residual diagnostic plot in Figure 2 indicate that the error follows a skewed distribution with outliers. Similarly, to assess prediction accuracy, the 215 meat samples are randomly split into a training set of 180 subjects and a testing set of 35 subjects. We again split the data set N = 100, 200, 400 times and use MEDQEP as the criterion to evaluate the finite sample performance of the different estimation procedures. The comparison results are shown in the right 3 columns of Table 4, from which we see that the proposed method outperforms the competing estimation procedures in terms of prediction accuracy.
The estimated coefficients \hat{\alpha}_{1} , \hat{\alpha}_{2} obtained by the various methods based on the whole data set are shown in the last 2 columns of Table 5. Both the protein content and the water content have negative effects on the fat content. Next, the right panel of Figure 4 shows the estimated slope function based on the WCAHR method. It is evident that the spectral absorbance curve has a negative impact on the fat content. The estimated slope functions obtained by the other methods exhibit similar patterns and are thus omitted here.
In this paper, we study the WCAHR estimation in the partial functional linear regression model. We use the functional principal component basis to approximate the functions, and obtain the estimators of the unknown parameter vector and the slope function through minimizing the weighted asymmetric Huber loss function. The asymptotic normality of the estimated parameter vector and the convergence rate of the estimated slope function are presented.
The proposed approach is designed to automatically reflect distributional features as well as bound outliers effectively, without requiring prior information about the data. Simulation results show that the proposed method is almost as efficient as OLS when the error follows a normal distribution, while remaining robust to outliers and still working well when the error follows skewed or bimodal distributions. That is to say, the method adapts to the distribution of the error in the regression model. The analyses of the two examples further illustrate the utility of the proposed methods in modelling and forecasting.
The novelty of the method is that it extracts the major distributional features while shielding the estimator from outliers. The proposed WCAHR estimation procedure can be extended to more general situations, including dependent functional data, sparse modeling, partially observed functional data, and high-dimensional settings. In addition, an objective way to select K remains to be explored.
The authors are highly grateful to the anonymous referees and the editor-in-chief for their valuable comments and suggestions for improving this paper. This research was supported by the National Natural Science Foundation of China (Grant Nos. 11771032 and 11971045), the Natural Science Foundation of Beijing (Grant No. 1202001), and the Natural Science Foundation of Shanxi Province, China (Grant No. 20210302124530).
The authors declare that they have no competing interests.
We only prove Theorem 2, since Theorem 1 is a special case of Theorem 2.
Proof of Theorem 2:
Let \delta_n = \sqrt{\frac{m}{n}} , {P}_{n} = \frac{1}{n}\sum_{i = 1}^{n}\mathit{\boldsymbol{\eta}}_i\mathit{\boldsymbol{\eta}}_i^{\rm T} , \mathit{\boldsymbol{V}}_n = \delta_n^{-1}{P}_{n}^{{ 1/2}}(\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0) , \tilde{\boldsymbol\eta}_{i} = P_{n}^{-1 / 2} {\boldsymbol\eta}_{i} , \Lambda = \operatorname{diag}\left\{\lambda_{1}, \cdots, \lambda_{m}\right\} , {{{\mathit{\boldsymbol{A}}}}}_i = {\Lambda}^{-{ 1/2}}{{{\mathit{\boldsymbol{U}}}}_i} , {\hat{{\mathit{\boldsymbol{A}}}}}_i = {\Lambda}^{-{ 1/2}}\hat{{{\mathit{\boldsymbol{U}}}}_i} , H_{m} = \left(\lambda_{1}^{-1}\langle {c}_{{\mathit{\boldsymbol{Z}}}, X}, v_{1}\rangle, \ldots, \lambda_{m}^{-1}\langle {c}_{{\mathit{\boldsymbol{Z}}}, X}, v_{m}\rangle\right)^{\mathrm{T}} , \mathit{\boldsymbol{W}}_n = \delta_n^{-1}\Lambda^{ 1/2}[(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)+H_{m}(\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0)] , r_i = \int_{0}^{1}\beta_0(t)X_i(t)dt-{\hat{\mathit{\boldsymbol{U}}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0 , B_i = H_{m}^{\rm T}({{\mathit{\boldsymbol{U}}}}_i-\hat{{\mathit{\boldsymbol{U}}}}_i)+\sum\limits_{j = m+1}^\infty\lambda_j^{-1}\langle {c}_{\mathit{\boldsymbol{Z}}X}, v_j\rangle \xi_{ij} , \tilde{B}_i = \delta_n^{-1}(\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0)^{\rm T}B_i , S_{nk} = \delta_{n}^{-1}\left(\hat{b}_{k}-b_{0 k}\right) , \boldsymbol{S}_{n} = \left(\sqrt{w_1}S_{n 1}, \ldots, \; \sqrt{w_K}S_{n K}\right)^{\rm T} , \mathcal{F}_n = \Big\{(\mathit{\boldsymbol{V}}_n, \mathit{\boldsymbol{W}}_n, \mathit{\boldsymbol{S}}_{n}):\big\|(\mathit{\boldsymbol{V}}_n^{\rm T}, \mathit{\boldsymbol{W}}_n^{\rm T}, \mathit{\boldsymbol{S}}_{n}^{\rm T})^{\rm T}\big\|\leq L\Big\} , T_n = \left\{\left(\mathit{\boldsymbol{Z}}_1, X_1(\cdot)\big), ..., \big(\mathit{\boldsymbol{Z}}_n, X_n(\cdot)\right)\right\} .
Next, we will show that, for any given \eta > 0 , there exists a sufficiently large constant L , such that
\begin{equation} \begin{split} &P\bigg\{\inf\limits_{(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\boldsymbol{S}_{n})\in \mathcal{F}_n}\sum\limits_{i = 1}^{n}\sum\limits_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{0k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}_0-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0-\delta_n\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)\Big)\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; > \sum\limits_{i = 1}^{n}\sum\limits_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{0k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}_0-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0\Big)\bigg\}\geq 1-\eta. \end{split} \end{equation} | (A.1) |
This implies that there exists a local minimizer (\mathit{\boldsymbol{\hat{\alpha}}}, \hat{\mathit{\boldsymbol{b}}}, \mathit{\boldsymbol{\hat{\gamma}}}) in the ball \Big\{(\mathit{\boldsymbol{V}}_n, \mathit{\boldsymbol{W}}_n, \mathit{\boldsymbol{S}}_{n}):\big\|(\mathit{\boldsymbol{V}}_n^{\rm T}, \mathit{\boldsymbol{W}}_n^{\rm T}, \mathit{\boldsymbol{S}}_{n}^{\rm T})^{\rm T}\big\|\leq L\Big\} such that \|\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0\| = O_p(\delta_n) , \left|\hat{b}_{k}-b_{0 k}\right| = O_p(\delta_n) , and \|\Lambda^{1/2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\| = O_p(\delta_n) , with probability at least 1-\eta .
First, by \|v_j-\hat{v}_j\|^2 = O_p(n^{-1}j^2) (see [4]), one has
\begin{equation*} \begin{split} |r_i|^2& = \bigg|\int_{0}^{1}\beta_0(t)X_i(t)dt-{\hat{\mathit{\boldsymbol{U}}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0\bigg|^2\\ &\leq2 \bigg|\sum\limits_{j = 1}^{m}\langle X_i,\hat{v}_j-{v}_j\rangle\gamma_{0j}\bigg|^2+2\bigg|\sum\limits_{j = m+1}^{ \infty}\langle X_i,v_j\rangle\gamma_{0j}\bigg|^2\triangleq 2\rm{D}_1+2\rm{D}_2. \end{split} \end{equation*} |
For \rm{D}_1 , using Conditions C1, C2, and the Hölder inequality, one can obtain
\begin{equation*} \begin{split} \rm{D}_1& = \bigg|\sum\limits_{j = 1}^{m}\langle X_i,{v}_j-\hat{v}_j\rangle\gamma_{0j}\bigg|^2\\ &\leq Cm\sum\limits_{j = 1}^{m}\|{v}_j-\hat{v}_j\|^2|\gamma_{0j}|^2\leq Cm\sum\limits_{j = 1}^{m}O_p(n^{-1}j^{2-2b}) = O_p\left(\frac{m}{n}\right) = O_p(\delta_n^2). \end{split} \end{equation*} |
As for \rm{D}_2 , due to \mathrm{E}\Big \{\sum_{j = m+1}^{ \infty}\langle X_i, v_j\rangle\gamma_{0j}\Big \} = 0, \; \mathrm{Var}\Big \{\sum_{j = m+1}^{ \infty}\langle X_i, {v}_j\rangle\gamma_{0j}\Big \} = \sum_{j = m+1}^{ \infty}\lambda_j{\gamma_{0j}}^2 \leq C\sum_{j = m+1}^{ \infty}j^{-(a+2b)} = O(n^{-\frac{a+2b-1}{a+2b}}), one has D_2 = O_p(n^{-\frac{a+2b-1}{a+2b}}) = O_p(\delta_n^2). To sum up, we have |r_i|^2 = O_p(\delta_n^2) .
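The last step is the standard integral-comparison bound; with the tuning m \asymp n^{1/(a+2b)} that is implicit in equating O(m/n) with these rates, one has
\begin{equation*} \sum\limits_{j = m+1}^{\infty}j^{-(a+2b)}\leq\int_{m}^{\infty}t^{-(a+2b)}dt = \frac{m^{1-(a+2b)}}{a+2b-1} = O\left(n^{-\frac{a+2b-1}{a+2b}}\right). \end{equation*} |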
Now consider B_i . Due to
\begin{equation*} \begin{split} \label{hm} \|B_i\|^2& = \left\|H_{m}^T({{\mathit{\boldsymbol{U}}}}_i-\hat{{\mathit{\boldsymbol{U}}}}_i)+\sum\limits_{j = m+1}^\infty\lambda_j^{-1}\langle {c}_{\mathit{\boldsymbol{Z}}X},v_j\rangle \xi_{ij}\right\|^2\\ &\leq 2\sum\limits_{l = 1}^d\left\{\left\|\sum\limits_{j = 1}^m\lambda_j^{-1}\langle {c}_{{Z}_lX},v_j\rangle \langle X_i,\hat{v}_j-{v}_j\rangle\right\|^2+\left\|\sum\limits_{j = m+1}^{ \infty}\lambda_j^{-1}\langle {c}_{{Z_l}X},v_j\rangle\langle X_i,v_j\rangle\right\|^2\right\}, \end{split} \end{equation*} |
by Conditions C1, C2, C4, and the Hölder inequality, we obtain
\begin{equation*} \left\|\sum\limits_{j = 1}^m\lambda_j^{-1}\langle {c}_{{Z}_lX},v_j\rangle \langle X_i,\hat{v}_j-{v}_j\rangle\right\|^2\leq Cm\sum\limits_{j = 1}^{m}\|{v}_j-\hat{v}_j\|^2|\lambda_j^{-1}\langle {c}_{{Z}_lX},v_j\rangle|^2\\ \leq Cm\sum\limits_{j = 1}^{m}O_p(n^{-1}j^{2-2b}) = O_p\left(\frac{m}{n}\right) = O_p\left(\delta_n^2\right). \end{equation*} |
In addition, noting that
\begin{equation*} \begin{split} &\mathrm{E}\Big \{\sum\limits_{j = m+1}^{ \infty}\lambda_j^{-1}\langle {c}_{{Z_l}X},v_j\rangle\langle X_i,v_j\rangle\Big \} = 0,\\ &\mathrm{Var}\Big \{\sum\limits_{j = m+1}^{ \infty}\lambda_j^{-1}\langle {c}_{{Z_l}X},v_j\rangle\langle X_i,v_j\rangle\Big \} = \sum\limits_{j = m+1}^{ \infty}\lambda_j^{-1}\langle {c}_{{Z}_lX},v_j\rangle^2 = O(n^{-\frac{a+2b-1}{a+2b}}), \end{split} \end{equation*} |
together with the above inequality, one has
\begin{equation} \begin{split} \|B_i\|^2 = O_p\left(\frac{m}{n}\right) = O_p(\delta_n^2). \end{split} \end{equation} | (A.2) |
Recall that \psi_{\tau_k}(u) = \rho'_{\tau_k}(u) = ({\tau_k}-1)I(u < -c^*)+\frac{(1-{\tau_k})}{c^*}uI(-c^*\leq u < 0)+\frac{\tau_k}{c^*}uI(0\leq u < c^*)+\tau_kI(u\geq c^*), and denote \widetilde{Q}_{n}\big(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\big) = \sum_{i = 1}^{n}\sum_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{0k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}_0-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0-\delta_n\big(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\big)\Big) -\sum_{i = 1}^{n}\sum_{k = 1}^{K}w_{k}\rho_{\tau_k} \Big(Y_i-b_{0k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\alpha}}_0-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\gamma}}_0\Big) . Then \widetilde{Q}_{n}\big(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\big) can be transformed into
\begin{align*} \widetilde{Q}_{n}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right) & = E[\widetilde{Q}_{n}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)|T_n]+\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}R_{nik}\left(\boldsymbol{V}_{n},\boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)\\ & \; \; -\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)\psi_{\tau_k}(e_{i}-b_{0 k}))\\ &\triangleq D_1^*+D_2^*+D_3^*, \end{align*} |
where
\begin{align*} &R_{nik}\big(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\big)\\ = &\rho_{\tau_k}\bigg(r_i+e_{ i}-{b_{0k}}- \delta_{n}\big(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i +{\tilde B}_i+S_{n k}\big)\bigg)-\rho_{\tau_k}(r_i+e_{i}-{b_{0k}})\\ & -E\bigg[\bigg\{\rho_{\tau_k}\big(r_i+e_{ i}-{b_{0k}}-\delta_{n}\big(\mathit{\boldsymbol{V}}_n^{\rm T}\tilde{\boldsymbol\eta}_{i} +\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\big)\big)-\rho_{\tau_k}(r_i+e_{i}-{b_{0k}})\bigg\}\bigg|T_n\bigg]\\ & +\delta_{n}(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k})\psi_{\tau_k}(e_{i}-b_{0 k}). \end{align*} |
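Since all of the remaining bounds manipulate \psi_{\tau_k} , it may help to see it in executable form. Below is a minimal NumPy version; it is a sketch, with the truncation constant c^* set to a hypothetical 1.345.

```python
import numpy as np

def psi_tau(u, tau, c=1.345):
    # psi_tau(u) = rho'_tau(u): equals tau - 1 for u < -c, tau for u >= c,
    # and a linear ramp through 0 in between, with slope (1 - tau)/c on the
    # negative side and tau/c on the positive side.
    w = np.where(u >= 0.0, tau, 1.0 - tau)
    return w * np.clip(u, -c, c) / c

# Quick check against the piecewise definition in the text:
u = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(psi_tau(u, tau=0.25))   # [-0.75, -0.2788..., 0.0, 0.0929..., 0.25]
```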
Consider D_1^* . According to (A.2), we have
\begin{equation} \begin{split} \mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k} = \left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+S_{n k}\right)(1+o_p(1)).\end{split} \end{equation} | (A.3) |
The proof of Theorem 3.1 in [6] indicates that \left|\frac{1}{n}\sum\limits_{i = 1}^{n}\left[\left({\mathit{\boldsymbol{A}}}_{i}^{{\rm T}}h^m\right)^{2}\right]-1\right| = O_{P}\left(n^{-1 / 4} m^{1 / 2}(\log n)^{1 / 2}\right) = o_{P}(1) with h^{m} = \mathit{\boldsymbol{W}}_n/\|\mathit{\boldsymbol{W}}_n\| , which leads to
\begin{align} \frac{1}{n}\sum\limits_{i = 1}^{n}\left(\hat{{\mathit{\boldsymbol{A}}}}_i^{\rm T}\mathit{\boldsymbol{W}}_n\right)^{2} = \|\mathit{\boldsymbol{W}}_n\|^2(1+o_{P}(1)). \end{align} | (A.4) |
Observe that \sum_{i = 1}^{n} h_{\tau_k}(b_{0k})\mathit{\boldsymbol{V}}_n^{\rm T}\tilde{\boldsymbol\eta}_{i} {\mathit{\boldsymbol{A}}}_i^{\rm T}\boldsymbol{W}_{n} = \mathit{\boldsymbol{V}}_n^{\rm T} P_{n}^{-1 / 2} \sum_{i = 1}^{n} h_{\tau_k}(b_{0k}) {\boldsymbol\eta}_{i} {{\mathit{\boldsymbol{U}}}}_{i}^{{\rm T}} \Lambda^{-1 / 2}\mathit{\boldsymbol{W}}_n, then by Conditions C1–C3, C5, E \left[\sum_{i = 1}^{n} h_{\tau_k}(b_{0k}) {\eta}_{il} {{\mathit{\boldsymbol{U}}}}_{i}^{{\rm T}} \Lambda^{-1 / 2}\mathit{\boldsymbol{W}}_n\right] = 0 and E\left(\left[\sum_{i = 1}^{n} h_{\tau_k}(b_{0k}) {\eta}_{il} {{\mathit{\boldsymbol{U}}}}_{i}^{{\rm T}} \Lambda^{-1 / 2}\mathit{\boldsymbol{W}}_n\right]^{2}\right) = O(n m) . Hence,
\begin{equation} \begin{split} \sum\limits_{i = 1}^{n} h_{\tau_k}(b_{0k})\mathit{\boldsymbol{V}}_n^{\rm T}\tilde{\boldsymbol\eta}_{i} {\mathit{\boldsymbol{A}}}_i^{\rm T}\boldsymbol{W}_{n} = O_{p}\left(n^{1 / 2} m^{1/2}\right). \end{split} \end{equation} | (A.5) |
Similarly, \sum_{i = 1}^{n} h_{\tau_k}(b_{0k})\mathit{\boldsymbol{V}}_n^{\rm T}\tilde{\boldsymbol\eta}_{i}S_{nk} = O_{p}\left(n^{1 / 2}\right), \; \sum_{i = 1}^{n} h_{\tau_k}(b_{0k}) {\mathit{\boldsymbol{A}}}_i^{\rm T}\boldsymbol{W}_{n}S_{nk} = O_{p}\left(n^{1 / 2} m^{1 / 2}\right). Then, together with formulas (A.3)–(A.5), we have
\begin{align*} \label{ex} &D_1^* = E[\widetilde{Q}_{n}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)|T_n]\nonumber\\ & = \sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\int_{r_i}^{r_i-\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)} E[\psi_{\tau_k}(e_{ i}-b_{0k}+t)|T_n]dt\nonumber\\ & = \frac{1}{2}\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}h_{\tau_k}(b_{0k})\left\{\left(r_i-\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)\right)^2-{r_i}^2\right\}(1+o_p(1))\nonumber\\ &\geq Cn\delta_n^2\left(\left\|\mathit{\boldsymbol{V}}_{n}\right\|^{2}+\left\|\mathit{\boldsymbol{W}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{S}}}}_{n}\right\|^{2}\right) (1+o_p(1)). \end{align*} |
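The quadratic lower bound in the last display comes from the elementary identity (writing s_{ik} for the bracketed linear term)
\begin{equation*} \left(r_i-\delta_{n}s_{ik}\right)^2-r_i^2 = \delta_n^2s_{ik}^2-2\delta_nr_is_{ik}, \qquad s_{ik} = \mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}: \end{equation*} |
by (A.3)–(A.5), \sum_{i = 1}^{n}s_{ik}^2 = n\left(\|\mathit{\boldsymbol{V}}_n\|^2+\|\mathit{\boldsymbol{W}}_n\|^2+\|\boldsymbol{S}_{n}\|^2\right)(1+o_p(1)) , while |r_i| = O_p(\delta_n) makes the cross term O_p(n\delta_n^2L) , which is dominated by the quadratic term once L is large.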
As for D_2^* , due to the continuity of \psi_{\tau_{k}}(\cdot) , we have
\begin{equation*} \begin{split} Var\left(\sum\limits_{i = 1}^{n}R_{nik}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)\bigg|T_n\right) { = o_p}\left(n\delta_n^2(\left\|{{\mathit{\boldsymbol{V}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{W}}}}_{n}\right\|^{2}+|S_{nk}|^{2})\right), \end{split} \end{equation*} |
then
\begin{equation*} \begin{split} Var\left(\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}R_{nik}\left(\boldsymbol{V}_{n},\boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)\bigg|T_n\right){ = o_p}\left(n\delta_n^2\left(\left\|{{\mathit{\boldsymbol{V}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{W}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{S}}}}_{n}\right\|^{2}\right)\right), \end{split} \end{equation*} |
from which we get
\begin{equation*} \begin{split} \label{ns} \sup\limits_{\|(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\mathit{\boldsymbol{S}}_{n})\|\leq L} \left|D^*_2\right| = \sup\limits_{\|(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\mathit{\boldsymbol{S}}_{n})\|\leq L} \left|\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}R_{nik}\left(\boldsymbol{V}_{n},\boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right)\right| = o_p(\sqrt{n}\delta_nL). \end{split} \end{equation*} |
For the term D_3^* , it is easy to show that
\begin{equation*} \begin{split} E\left[\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}{\mathit{\boldsymbol{A}}}_i+S_{n k}\right)\psi_{\tau_k}(e_{i}-b_{0 k})\bigg|T_n\right] = 0, \end{split} \end{equation*} |
and
\begin{equation*} \begin{split} &\; \; \; E\left[\left\{\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}{\mathit{\boldsymbol{A}}}_i+S_{n k}\right)\psi_{\tau_k}(e_{i}-b_{0 k})\right\}^2\bigg|T_n\right]\\ &\leq Cn\delta_n^2\left(\left\|{{\mathit{\boldsymbol{V}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{W}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{S}}}}_{n}\right\|^{2}\right) (1+o_p(1)). \end{split} \end{equation*} |
Combining with the equation (A.4), we can obtain
\begin{align*} \sup\limits_{\|(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\mathit{\boldsymbol{S}}_{n})\|\leq L} \left|D^*_3\right| = \sup\limits_{\|(\mathit{\boldsymbol{V}}_n,\mathit{\boldsymbol{W}}_n,\mathit{\boldsymbol{S}}_{n})\|\leq L} \left|\sum\limits_{k = 1}^{K}w_{k}\sum\limits_{i = 1}^{n}\delta_{n}\left(\mathit{\boldsymbol{V}}_n^{\rm T} \tilde{\boldsymbol\eta}_{i}+\boldsymbol{W}_{n}^{\rm T}\hat{\mathit{\boldsymbol{A}}}_i+{\tilde B}_i+S_{n k}\right)\psi_{\tau_k}(e_{i}-b_{0 k})\right| = O_p(\delta_n n^{1/2}L). \end{align*} |
From the results for D_1^*, D_2^* and D^*_3 , it is easy to see that \widetilde{Q}_{n}\left(\boldsymbol{V}_{n}, \boldsymbol{W}_{n}, \boldsymbol{S}_{n}\right) is dominated by the positive quadratic term Cn\delta_n^2\left(\left\|{{\mathit{\boldsymbol{V}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{W}}}}_{n}\right\|^{2}+\left\|{{\mathit{\boldsymbol{S}}}}_{n}\right\|^{2}\right) . Hence, Eq (A.1) is established, and there exists a local minimizer (\mathit{\boldsymbol{\hat{\alpha}}}, \hat{\mathit{\boldsymbol{b}}}, \mathit{\boldsymbol{\hat{\gamma}}}) such that
\begin{align} \|\mathit{\boldsymbol{\hat{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0\| = O_p(\delta_n),\quad \left|\hat{b}_{k}-b_{0 k}\right| = O_p(\delta_n),\quad \|{\Lambda}^{1/2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2 = O_p(\delta_n^2). \end{align} | (A.6) |
Now we consider the convergence rate of \hat{\beta} . Since
\begin{align*} \|{\Lambda}^\frac{1}{2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2 = \sum\limits_{j = 1}^m \lambda_j({\hat{\gamma}_{j}}-{\gamma}_{0 j})^2\geq \lambda_m\sum\limits_{j = 1}^m (\hat{\gamma}_{j}-{\gamma}_{0 j})^2 , \end{align*} |
and based on Condition C2, we have
\begin{align} \|\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0\|^2 \leq O_p(\lambda_m^{-1}\frac{m}{n}) = O_p(m^{a+1}n^{-1}) = O_p(n^{-\frac{2b-1}{{a+2b}}}). \end{align} | (A.7) |
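The exponent bookkeeping in (A.7) only uses \lambda_m^{-1}\asymp m^{a} (Condition C2) and m \asymp n^{1/(a+2b)} :
\begin{equation*} \lambda_m^{-1}\frac{m}{n}\asymp\frac{m^{a+1}}{n} = n^{\frac{a+1}{a+2b}-1} = n^{-\frac{2b-1}{a+2b}}. \end{equation*} |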
Note that
\begin{equation*} \begin{split} \label{hatbeta} \|\hat{\beta}-\beta_0\|^2 & = \bigg\|\sum\limits_{j = 1}^{m}\hat{\gamma}_{j}\hat{v}_j-\sum\limits_{j = 1}^{\infty}{\gamma}_{0j}{v}_j\bigg\|^2\\ &\leq4\bigg\|\sum\limits_{j = 1}^{m}(\hat{\gamma}_j-\gamma_{0j})\hat{v}_j\bigg\|^2+ 4\bigg\|\sum\limits_{j = 1}^{m}{\gamma}_{0j}(\hat{v}_j-v_j)\bigg\|^2+2\sum\limits_{j = m+1}^{\infty}{\gamma}_{0j}^2\\ &\triangleq4D_1^{**}+4D_2^{**}+2D_3^{**}. \end{split} \end{equation*} |
Based on Condition C2, Eq (A.7), the orthogonality of \{\hat{v}_j\} , and \|v_j-\hat{v}_j\|^2 = O_p(n^{-1}j^2) , we can obtain
\begin{equation*} \begin{split} \label{13} D_1^{**} = \bigg\|\sum\limits_{j = 1}^{m}(\hat{\gamma}_j-\gamma_{0j})\hat{v}_j\bigg\|^2 \leq\sum\limits_{j = 1}^{m}\big|\hat{\gamma}_j-\gamma_{0j}\big|^2 = \|\hat{\mathit{\boldsymbol{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0\|^2 = O_p(n^{-\frac{2b-1}{{a+2b}}}), \end{split} \end{equation*} |
\begin{equation} \begin{split} D_2^{**}& = \bigg\|\sum\limits_{j = 1}^{m}{\gamma}_{0j}(\hat{v}_j-v_j)\bigg\|^2\leq m\sum\limits_{j = 1}^{m}\parallel\hat{v}_j-v_j\parallel^2{\gamma}_{0j}^2\leq \frac{m}{n}O_p\big(\sum\limits_{j = 1}^{m}j^2{\gamma}_{0j}^2\big)\\ & = O_p\Big(n^{-1}m\sum\limits_{j = 1}^{m}j^{2-2b}\Big) = O_p(n^{-1}m) = o_p(n^{-\frac{2b-1}{a+2b}}),\\ D_3^{**}& = \sum\limits_{j = m+1}^{\infty}\gamma_{0j}^2\leq C\sum\limits_{j = m+1}^{\infty}j^{-2b} = O_p(n^{-\frac{2b-1}{a+2b}}). \end{split} \end{equation} | (A.8) |
These lead to
\|\hat{\beta}-\beta_0\|^2 = O_p(n^{-{\frac{2b-1}{a+2b}}}). |
Next, we turn to the asymptotic normality of \hat{\alpha} . Note that Q_{n}(\boldsymbol{\alpha}, \boldsymbol{\gamma}, \boldsymbol{b}) attains its minimum at (\hat{\boldsymbol{\alpha}}, \hat{\boldsymbol{\gamma}}, \hat{\boldsymbol{b}}) with probability tending to 1 as n tends to infinity. Then we have the following score equations:
\begin{equation} \begin{split} &\frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i} \psi_{\tau_{k}}\left(Y_i-\hat{b}_{k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\hat{\alpha}}}-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\hat{\gamma}}}\right) = 0, \end{split} \end{equation} | (A.9) |
\begin{equation} \frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\hat{{\mathit{\boldsymbol{U}}}}}}_{i} \psi_{\tau_{k}}\left(Y_i-\hat{b}_{k}-\mathit{\boldsymbol{Z}}_i^{\rm T} \mathit{\boldsymbol{\hat{\alpha}}}-\hat{\mathit{\boldsymbol{U}}}_i^{\rm T}\mathit{\boldsymbol{\hat{\gamma}}}\right) = 0. \end{equation} | (A.10) |
Further, we can write (A.9) as H_{n}+\sum_{k = 1}^{K}w_k B_{n 1}^{(k)}+\sum_{k = 1}^{K} w_k B_{n 2}^{(k)} = 0 , with
\begin{equation*} \begin{split} \label{ro333} & H_{n} = \frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i} \psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right), \\ & B_{n 1}^{(k)} = \frac{1}{n} \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i}E\bigg[\psi_{\tau_k}\bigg(e_{i}-b_{0 k}+r_{i}-(\hat{b}_k-b_{0k})-{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -\hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\bigg)-\psi_{\tau_k}\left(e_{i}-b_{0 k}\right)\bigg|T_n\bigg], \\ \end{split} \end{equation*} |
\begin{equation*} \begin{split} & B_{n 2}^{(k)} = \frac{1}{n} \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i}\Bigg[\bigg\{\psi_{\tau_k}\bigg(e_{i}-b_{0 k}+r_{i}-(\hat{b}_k-b_{0k})-{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -\hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\bigg)-\psi_{\tau_k}\left(e_{i}-b_{0 k}\right)\bigg\}\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -E\bigg\{\psi_{\tau_k}\bigg(e_{i}-b_{0 k}+r_{i}-(\hat{b}_k-b_{0k})-{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)\\ &\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; -\hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\bigg)-\psi_{\tau_k}\left(e_{i}-b_{0 k}\right)\Big|T_n\bigg\}\Bigg]. \end{split} \end{equation*} |
By simple calculations, we have B_{n 1}^{(k)} = -\frac{1}{n} \sum_{i = 1}^{n} h_{\tau_k}(b_{0k})\left[{{\mathit{\boldsymbol{Z}}}}_{i}{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)+{{\mathit{\boldsymbol{Z}}}}_{i} \hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\right](1+o_p(1)).
Through calculating the mean and variance directly, we can obtain B_{n 2}^{(k)} = o_{p}\left(\delta_{n}\right) . Then, Eq (A.9) can be written as
\begin{equation} \begin{split} &\; \; \; \; \frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\mathit{\boldsymbol{Z}}}}_{i} \psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)\\ & = \frac{1}{n}\sum\limits_{k = 1}^{K} \sum\limits_{i = 1}^{n}w_k h_{\tau_k}(b_{0k})\left[{{\mathit{\boldsymbol{Z}}}}_{i}{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)+{{\mathit{\boldsymbol{Z}}}}_{i} \hat{{{\mathit{\boldsymbol{U}}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\right](1+o_p(1)). \end{split} \end{equation} | (A.11) |
Similarly, Eq (A.10) can be rewritten as
\begin{equation*} \begin{split} \label{bn2} &\; \; \; \; \frac{1}{n} \sum\limits_{k = 1}^{K}w_k \sum\limits_{i = 1}^{n} {{\hat{{\mathit{\boldsymbol{U}}}}}}_{i} \psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)\\ & = \frac{1}{n}\sum\limits_{k = 1}^{K} \sum\limits_{i = 1}^{n}w_k h_{\tau_k}(b_{0k})\left[{{\hat{{\mathit{\boldsymbol{U}}}}}}_{i}{{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}\left(\hat{\boldsymbol{\alpha}}-\boldsymbol{\alpha}_{0}\right)+{{\hat{{\mathit{\boldsymbol{U}}}}}}_{i} \hat{\boldsymbol{U}}_{i}^{\rm T}\left(\hat{\boldsymbol{\gamma}}-\boldsymbol{\gamma}_{0}\right)\right](1+o_p(1)). \end{split} \end{equation*} |
Now let \Phi_{n} = \frac{1}{n} \sum_{i = 1}^{n} \hat{\mathit{\boldsymbol{U}}}_{i} \hat{\mathit{\boldsymbol{U}}}_{i}^{\rm T}, \; \Gamma_{n} = \frac{1}{n} \sum_{i = 1}^{n} \hat{{\mathit{\boldsymbol{U}}}}_{i} {{\mathit{\boldsymbol{Z}}}}_{i}^{\rm T}, \; \Gamma = E({\mathit{\boldsymbol{U}}}_{i} \mathit{\boldsymbol{Z}}_{i}^{\rm T}), \; \Sigma_{n} = \frac{1}{n} \sum_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T}, \; \widetilde{\mathit{\boldsymbol{Z}}}_{i} = \mathit{\boldsymbol{Z}}_{i}-\Gamma_{n}^{\rm T} \Phi_{n}^{-1} \hat{\mathit{\boldsymbol{U}}}_{i}, \; \Upsilon_{n k} = \frac{1}{n} \sum_{i = 1}^{n} \hat{\boldsymbol{U}}_{i}\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right). Then, the above equation yields \sum_{k = 1}^{K}w_kh_{\tau_k}(b_{0k})(\hat{\mathit{\boldsymbol{\gamma}}}-\mathit{\boldsymbol{\gamma}}_{0}) = \sum_{k = 1}^{K}w_k\left(\Phi_{n}+o_{p}(1)\right)^{-1}[\Upsilon_{nk} +h_{\tau_k}(b_{0k})\Gamma_{n}(\mathit{\boldsymbol{\alpha}}_{0}-\hat{\mathit{\boldsymbol{\alpha}}})] . Substituting this into (A.11), we obtain
\begin{equation*} \begin{split} \label{frac} & \frac{1}{n} \sum\limits_{k = 1}^{K} w_{k} \sum\limits_{i = 1}^{n} h_{\tau_{k}}\left(b_{0 k}\right) \mathit{\boldsymbol{Z}}_{i}\left[\mathit{\boldsymbol{Z}}_{i}-\Gamma_{n}^{\rm T} \Phi_{n}^{-1} \hat{\mathit{\boldsymbol{U}}}_{i}\right]^{\rm T}\left(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_{0}\right)\left(1+o_{p}(1)\right) \\ = & \frac{1}{n} \sum\limits_{k = 1}^{K} w_{k} \sum\limits_{i = 1}^{n} \mathit{\boldsymbol{Z}}_{i}\left[\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)- \hat{\mathit{\boldsymbol{U}}}_{i}^{\rm T}\left(\Phi_{n}\right)^{-1} \Upsilon_{nk}\right]. \end{split} \end{equation*} |
Noting that
\begin{equation*} \begin{split} &\frac{1}{n} \sum\limits_{i = 1}^{n} \sum\limits_{k = 1}^{K}w_k h_{\tau_k}(b_{0 k}) \Gamma_{n}^{\rm T} \Phi_{n}^{-1} \hat{\mathit{\boldsymbol{U}}}_{i}\big[\mathit{\boldsymbol{Z}}_{i}-\Gamma_{n}^{\rm T}\Phi_{n}^{-1}\hat{\mathit{\boldsymbol{U}}}_{i}\big]^{\rm T} = 0\\ &\frac{1}{n}\sum\limits_{i = 1}^{n}\sum\limits_{k = 1}^{K} w_k\Gamma_{n}^{\rm T} \Phi_{n}^{-1}\hat{\boldsymbol{U}}_{i}\big\{\psi_{\tau_{k}}(e_{i}-b_{0 k})-\hat{\boldsymbol{U}}_{i}^{\rm T} \Phi_{n}^{-1} \Upsilon_{nk}\big\} = 0, \end{split} \end{equation*} |
then, it is easy to conclude that
\begin{equation*} \begin{split} &\left(\frac{1}{n}\sum\limits_{k = 1}^{K} w_k \sum\limits_{i = 1}^{n} h_{\tau_k}\left(b_{0 k}\right) \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T}\right) \sqrt{n}(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_{0})\\ = &\frac{1}{\sqrt{n}} \sum\limits_{k = 1}^{K}w_k\sum\limits_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i}\left[\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)- \hat{\mathit{\boldsymbol{U}}}_{i}^{\rm T}\left(\Phi_{n}\right)^{-1} \Upsilon_{nk}\right] (1+o_{p}(1)). \end{split} \end{equation*} |
Note that
\frac{1}{\sqrt{n}} \sum\limits_{k = 1}^{K} w_{k} \sum\limits_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i} \hat{\mathit{\boldsymbol{U}}}_{i}^{\rm T} = 0. |
Then we have
\begin{equation} \begin{split} &\; \; \; \; \left(\frac{1}{n}\sum\limits_{k = 1}^{K} w_k \sum\limits_{i = 1}^{n} h_{\tau_k}\left(b_{0 k}\right) \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T}\right) \sqrt{n}\left(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_{0}\right) \\ & = \frac{1}{\sqrt{n}} \sum\limits_{k = 1}^{K}w_k\sum\limits_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i}\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)(1+o_{p}(1)). \end{split} \end{equation} | (A.12) |
It is easy to see that \Phi_{n} = \Lambda+o_p(1) and \Gamma_{n} = \Gamma+o_p(1) . Based on Lemma 1 in [8] and Condition \mathrm{C}5, we obtain \frac{1}{n} \sum_{i = 1}^{n} \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T} \stackrel{p}{\longrightarrow} \boldsymbol{\Sigma}\; (n \rightarrow \infty) .
Through some simple calculations, one has
\begin{equation} \begin{split} &\frac{1}{n}\sum\limits_{k = 1}^{K} w_k \sum\limits_{i = 1}^{n} h_{\tau_k}\left(b_{0 k}\right) \widetilde{\mathit{\boldsymbol{Z}}}_{i} \widetilde{\mathit{\boldsymbol{Z}}}_{i}^{\rm T}\stackrel{p}{\longrightarrow} \sum\limits_{k = 1}^{K}w_{k}h_{\tau_{k}}\left(b_{0 k}\right)\boldsymbol{\Sigma}, \\ &\operatorname{Var}\left(\sum\limits_{k = 1}^{K} w_{k} \psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)\right) = {{\mathit{\boldsymbol{w}}}}^{\rm T} \boldsymbol{V} {{\mathit{\boldsymbol{w}}}}, \end{split} \end{equation} | (A.13) |
where \boldsymbol{V} = \left(V_{kl}\right)_{1 \leq k, l \leq K} with V_{kl} = E[\psi_{\tau_{k}}\left(e_{i}-b_{0 k}\right)\psi_{\tau_{l}}\left(e_{i}-b_{0 l}\right)] and {{\mathit{\boldsymbol{w}}}} = \left(w_{1}, \ldots, w_{K}\right)^{\rm T} . Then, according to Eqs (A.12) and (A.13), Slutsky's theorem, and the properties of the multivariate normal distribution, we can obtain
\sqrt{n}(\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0) \stackrel{d}{\longrightarrow} N\left(0, \frac{{{\mathit{\boldsymbol{w}}}}^{\rm T} {{\mathit{\boldsymbol{V}}}} {{\mathit{\boldsymbol{w}}}}}{\left\{\sum\limits_{k = 1}^{K}w_{k}h_{\tau_{k}}\left(b_{0 k}\right)\right\}^2} {\boldsymbol\Sigma}^{-1}\right). |
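The paper does not spell out how to use this limit for inference, but a natural plug-in estimate of the covariance of \hat{\mathit{\boldsymbol{\alpha}}} follows directly from the displayed expression. The sketch below is one way to compute it; the residual estimates e_hat, the projected covariates Z_tilde, and the threshold c = 1.345 are assumed inputs, not quantities defined by the paper.

```python
import numpy as np

def psi_tau(u, tau, c=1.345):
    # Score function rho'_tau, as in the appendix (c is an assumed threshold).
    return np.where(u >= 0.0, tau, 1.0 - tau) * np.clip(u, -c, c) / c

def wcahr_alpha_cov(e_hat, Z_tilde, b_hat, taus, weights, c=1.345):
    """Plug-in covariance estimate for alpha_hat based on the normality
    result above: (w'Vw) / (sum_k w_k h_k)^2 * Sigma^{-1} / n."""
    n = len(e_hat)
    K = len(taus)
    psi = np.column_stack([psi_tau(e_hat - b_hat[k], taus[k], c) for k in range(K)])
    V = psi.T @ psi / n                      # V_{kl} = E[psi_k psi_l]
    # h_{tau_k}(b_{0k}) = E[psi'_{tau_k}(e - b_{0k})]; psi' is piecewise constant.
    h = np.empty(K)
    for k in range(K):
        u = e_hat - b_hat[k]
        h[k] = np.mean((1 - taus[k]) / c * ((u >= -c) & (u < 0))
                       + taus[k] / c * ((u >= 0) & (u < c)))
    Sigma = Z_tilde.T @ Z_tilde / n          # consistent for Sigma by the remark below
    w = np.asarray(weights)
    return (w @ V @ w) / (w @ h) ** 2 * np.linalg.inv(Sigma) / n
```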
Lastly, we prove the third conclusion of Theorem 2. By the definition of {\rm{MSPE}}_{WCAHR} , we have
\begin{align*} \label{pe} &{\rm{MSPE}}_{WCAHR}\\ &\leq 5\sum\limits_{j = 1}^m (\hat{\gamma}_{j}-{\gamma}_{0j})^2\lambda_j +5C\left\|\sum\limits_{j = 1}^{m}\hat{\gamma}_{j}(v_j-\hat{v}_j)\right\|^2\nonumber +5\sum\limits_{j = m+1}^{\infty}{\gamma}_{0j}^2\lambda_j +5C\|\hat{\mathit{\boldsymbol{\alpha}}}-\mathit{\boldsymbol{\alpha}}_0\|^2 {+5\left\{\sum\limits_{k = 1}^Kw_{k}\left|\hat{b}_{k}-b_{0k}\right|\right\}^2}\nonumber\\ &\triangleq 5F_1+5CF_2+5F_3+5CF_4{+5F_5}. \end{align*} |
Firstly, according to the previous proof, we have F_1 = \|{\Lambda}^{1/2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2 = O_p(\frac{m}{n}) . As for F_2 , based on the triangle inequality and the C_r inequality,
\begin{align*} F_2& = \left\|\sum\limits_{j = 1}^{m}\hat{\gamma}_{j}(v_j-\hat{v}_j)\right\|^2 = \left\|\sum\limits_{j = 1}^{m}{\gamma}_{0j}(v_j-\hat{v}_j)+(\hat{\gamma}_{j}-{\gamma}_{0j})(v_j-\hat{v}_j)\right\|^2\\ &\leq2m\sum\limits_{j = 1}^{m}{\gamma}_{0j}^2\|v_j-\hat{v}_j\|^2+2\|{\Lambda}^\frac{1}{2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2\sum\limits_{j = 1}^{m}\lambda_j^{-1}\|v_j-\hat{v}_j\|^2\\ &\triangleq 2F_{21}+2F_{22}. \end{align*} |
By Eq (A.8), F_{21} = m\sum\limits_{j = 1}^{m}{\gamma}_{0 j}^2\|v_j-\hat{v}_j\|^2 = O_p\left(\frac{m}{n}\right) . As for F_{22} , it is easy to see that {\sum\limits_{j = 1}^{m}\lambda_j^{-1}\|v_j-\hat{v}_j\|^2 = o_p(1)} , so F_{22} = \|{\Lambda}^\frac{1}{2}(\mathit{\boldsymbol{\hat{\gamma}}}-\mathit{\boldsymbol{\gamma}}_0)\|^2\sum\limits_{j = 1}^{m}\lambda_j^{-1}\|v_j-\hat{v}_j\|^2 = o_p\left(\frac{m}{n}\right). Next, by (A.8), we have F_3 = \sum\limits_{j = m+1}^{\infty}{\gamma}_{0 j}^2\lambda_j = O(m^{-a-2b+1}) . By (A.6), we know F_4 = O_p\left(\frac{m}{n}\right) and F_5 = O_p\left(\frac{m}{n}\right) . Then, we have
{\rm{MSPE}}_{WCAHR} = O_p(n^{-{\frac{a+2b-1}{a+2b}}}). |
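The common rate of the five terms can be checked directly: with m \asymp n^{1/(a+2b)} ,
\begin{equation*} \frac{m}{n} = n^{-\frac{a+2b-1}{a+2b}}, \qquad m^{-(a+2b-1)} = n^{-\frac{a+2b-1}{a+2b}}, \end{equation*} |
so F_1 , F_3 , F_4 and F_5 all contribute at the rate n^{-\frac{a+2b-1}{a+2b}} , while F_2 is of smaller order.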
The proof of Theorem 2 is complete.
[1] G. Alessandrini, L. Rondi, Determining a sound-soft polyhedral scatterer by a single far-field measurement, Proc. Amer. Math. Soc., 133 (2005), 1685–1691.
[2] H. Liu, J. Zou, Uniqueness in an inverse acoustic obstacle scattering problem for both sound-hard and sound-soft polyhedral scatterers, Inverse Probl., 22 (2006), 515–524. https://doi.org/10.1088/0266-5611/22/2/008
[3] O. Ivanyshyn, R. Kress, Nonlinear integral equations in inverse obstacle scattering, Mathematical Methods in Scattering Theory and Biomedical Engineering, 51 (2006), 39–50. https://doi.org/10.1142/9789812773197_0005
[4] J. Li, H. Liu, Numerical methods for inverse scattering problems, Singapore: Springer, 2023. https://doi.org/10.1007/978-981-99-3772-1
[5] H. Diao, H. Liu, Spectral geometry and inverse scattering theory, Cham: Springer, 2023. https://doi.org/10.1007/978-3-031-34615-6
[6] L. Borcea, H. Kang, H. Liu, G. Uhlmann, Inverse problems and imaging, Panoramas et Synthèses, 2015.
[7] J. Li, H. Liu, J. Zou, An efficient multilevel algorithm for inverse scattering problem, In: Advances in computation and intelligence, Berlin, Heidelberg: Springer, 2007, 234–242. https://doi.org/10.1007/978-3-540-74581-5_25
[8] J. Xiang, G. Yan, The factorization method for a mixed inverse elastic scattering problem, IMA J. Appl. Math., 87 (2022), 407–437. https://doi.org/10.1093/imamat/hxac010
[9] J. Wang, B. Chen, Q. Yu, Y. Sun, A novel sampling method for time domain acoustic inverse source problems, Phys. Scr., 99 (2024), 035221. https://doi.org/10.1088/1402-4896/ad21c7
[10] D. Colton, A. Kirsch, A simple method for solving inverse scattering problems in the resonance region, Inverse Probl., 12 (1996), 383–393. https://doi.org/10.1088/0266-5611/12/4/003
[11] J. Li, J. Yang, B. Zhang, A linear sampling method for inverse acoustic scattering by a locally rough interface, Inverse Probl. Imag., 15 (2021), 1247–1267. https://doi.org/10.3934/ipi.2021036
[12] Y. Gao, H. Liu, X. Wang, K. Zhang, On an artificial neural network for inverse scattering problems, J. Comput. Phys., 448 (2021), 110771. https://doi.org/10.1016/j.jcp.2021.110771
[13] W. Yin, Z. Yang, P. Meng, Solving inverse scattering problem with a crack in inhomogeneous medium based on a convolutional neural network, Symmetry, 15 (2023), 119. https://doi.org/10.3390/sym15010119
[14] P. Zhang, P. Meng, W. Yin, H. Liu, A neural network method for time-dependent inverse source problem with limited-aperture data, J. Comput. Appl. Math., 421 (2023), 114842. https://doi.org/10.1016/j.cam.2022.114842
[15] W. Yin, J. Ge, P. Meng, F. Qu, A neural network method for the inverse scattering problem of impenetrable cavities, Electron. Res. Arch., 28 (2020), 1123–1142. https://doi.org/10.3934/era.2020062
[16] W. Yin, W. Yang, H. Liu, A neural network scheme for recovering scattering obstacles with limited phaseless far-field data, J. Comput. Phys., 417 (2020), 109594. https://doi.org/10.1016/j.jcp.2020.109594
[17] H. Liu, C. Mou, S. Zhang, Inverse problems for mean field games, Inverse Probl., 39 (2023), 085003. https://doi.org/10.1088/1361-6420/acdd90
[18] Y. He, H. Liu, X. Wang, A novel quantitative inverse scattering scheme using interior resonant modes, Inverse Probl., 39 (2023), 085002. https://doi.org/10.1088/1361-6420/acdc49
[19] X. Cao, H. Diao, H. Liu, J. Zou, Two single-measurement uniqueness results for inverse scattering problems within polyhedral geometries, Inverse Probl. Imag., 16 (2022), 1501–1528. https://doi.org/10.3934/ipi.2022023
[20] X. Cao, H. Diao, H. Liu, J. Zou, On nodal and singular structures of Laplacian eigenfunctions and applications to inverse scattering problems, J. Math. Pures Appl., 143 (2020), 116–161. https://doi.org/10.1016/j.matpur.2020.09.011
[21] L. Liu, W. Liu, D. Teng, Y. Xiang, F.-Z. Xuan, A multiscale residual U-net architecture for super-resolution ultrasonic phased array imaging from full matrix capture data, J. Acoust. Soc. Am., 154 (2023), 2044–2054. https://doi.org/10.1121/10.0021171
[22] A. Reed, T. Blanford, D. Brown, S. Jayasuriya, SINR: Deconvolving circular SAS images using implicit neural representations, IEEE J. Sel. Topics Signal Process., 17 (2023), 458–472. https://doi.org/10.1109/JSTSP.2022.3215849
[23] W. Yu, X. Huang, Reconstruction of aircraft engine noise source using beamforming and compressive sensing, IEEE Access, 6 (2018), 11716–11726. https://doi.org/10.1109/ACCESS.2018.2801260
[24] T. Nagata, K. Nakai, K. Yamada, Y. Saito, T. Nonomura, M. Kano, et al., Seismic wavefield reconstruction based on compressed sensing using data-driven reduced-order model, Geophys. J. Int., 233 (2023), 33–50. https://doi.org/10.1093/gji/ggac443
[25] M. Suhonen, A. Pulkkinen, T. Tarvainen, Single-stage approach for estimating optical parameters in spectral quantitative photoacoustic tomography, J. Opt. Soc. Am. A, 41 (2024), 527–542. https://doi.org/10.1364/JOSAA.518768
[26] M. Ding, H. Liu, G. Zheng, Shape reconstructions by using plasmon resonances with enhanced sensitivity, J. Comput. Phys., 486 (2023), 112131. https://doi.org/10.1016/j.jcp.2023.112131
[27] W. Yin, H. Qi, P. Meng, Broad learning system with preprocessing to recover the scattering obstacles with far-field data, Adv. Appl. Math. Mech., 15 (2023), 984–1000. https://doi.org/10.4208/aamm.OA-2021-0352
[28] Y. Yin, W. Yin, P. Meng, H. Liu, The interior inverse scattering problem for a two-layered cavity using the Bayesian method, Inverse Probl. Imag., 16 (2022), 673–690. https://doi.org/10.3934/ipi.2021069
[29] Y. Yin, W. Yin, P. Meng, H. Liu, On a hybrid approach for recovering multiple obstacles, Commun. Comput. Phys., 31 (2022), 869–892. https://doi.org/10.4208/cicp.OA-2021-0124
[30] P. Meng, J. Zhuang, L. Zhou, W. Yin, D. Qi, Efficient synchronous retrieval of OAM modes and AT strength using multi-task neural networks, Opt. Express, 32 (2024), 7816–7831. https://doi.org/10.1364/OE.511098
[31] P. Meng, X. Wang, W. Yin, ODE-RU: A dynamical system view on recurrent neural networks, Electron. Res. Arch., 30 (2022), 257–271. https://doi.org/10.3934/era.2022014
[32] Y. Gao, H. Liu, X. Wang, K. Zhang, A Bayesian scheme for reconstructing obstacles in acoustic waveguides, J. Sci. Comput., 97 (2023), 53. https://doi.org/10.1007/s10915-023-02368-2
[33] D. Colton, R. Kress, Using fundamental solutions in inverse scattering, Inverse Probl., 22 (2006), R49–R66. https://doi.org/10.1088/0266-5611/22/3/R01
[34] F. Cakoni, D. Colton, A qualitative approach to inverse scattering theory, New York: Springer, 2014. https://doi.org/10.1007/978-1-4614-8827-9
[35] J. Li, H. Liu, J. Zou, Multilevel linear sampling method for inverse scattering problems, SIAM J. Sci. Comput., 30 (2008), 1228–1250. https://doi.org/10.1137/060674247
[36] T. Arens, Why linear sampling works, Inverse Probl., 20 (2004), 163–173. https://doi.org/10.1088/0266-5611/20/1/010
[37] Y. Guo, P. Monk, D. Colton, The linear sampling method for sparse small aperture data, Appl. Anal., 95 (2016), 1599–1615. https://doi.org/10.1080/00036811.2015.1065317
[38] P. Meng, L. Su, W. Yin, S. Zhang, Solving a kind of inverse scattering problem of acoustic waves based on linear sampling method and neural network, Alex. Eng. J., 59 (2020), 1451–1462. https://doi.org/10.1016/j.aej.2020.03.047
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | \hat{\alpha}_1 Bias | \hat{\alpha}_1 Sd | \hat{\alpha}_2 Bias | \hat{\alpha}_2 Sd
N(0, 1) | 200 | OLS | 0.2690 | 0.0229 | -0.0048 | 0.1074 | 0.0096 | 0.1059 |
LAD | 0.3294 | 0.0354 | -0.0099 | 0.1320 | 0.0115 | 0.1333 | ||
ESL | 0.3434 | 0.0394 | 0.0009 | 0.1405 | -0.0013 | 0.1403 | ||
H-ESL | 0.2824 | 0.0259 | 0.0055 | 0.1141 | -0.0059 | 0.1133 | ||
AHR(0.25) | 0.3294 | 0.0786 | -0.0077 | 0.2014 | 0.0105 | 0.1948 | ||
Huber | 0.2758 | 0.0234 | -0.0063 | 0.1090 | 0.0102 | 0.1067 | ||
AHR(0.75) | 0.5251 | 0.0754 | -0.0092 | 0.1894 | 0.0150 | 0.1981 | ||
CAHR | 0.2689 | 0.0233 | -0.0051 | 0.1095 | 0.0091 | 0.1060 | ||
WCAHR | 0.2693 | 0.0230 | -0.0055 | 0.1080 | 0.0101 | 0.1058 | ||
400 | OLS | 0.1004 | 0.0105 | -0.0011 | 0.0710 | 0.0015 | 0.0738 | |
LAD | 0.1304 | 0.0163 | -0.0027 | 0.0880 | 0.0045 | 0.0924 | ||
ESL | 0.1349 | 0.0175 | 0.0010 | 0.0918 | 0.0008 | 0.0955 | ||
H-ESL | 0.1031 | 0.0113 | 0.0011 | 0.0727 | 0.0010 | 0.0773 | ||
AHR(0.25) | 0.1304 | 0.0358 | -0.0048 | 0.1313 | -0.0013 | 0.1361 | ||
Huber | 0.1048 | 0.0107 | -0.0018 | 0.0720 | 0.0027 | 0.0742 | ||
AHR(0.75) | 0.2337 | 0.0405 | 0.0043 | 0.1454 | 0.0019 | 0.1390 | ||
CAHR | 0.1006 | 0.0107 | -0.0014 | 0.0721 | 0.0020 | 0.0743 | ||
WCAHR | 0.1005 | 0.0105 | -0.0014 | 0.0709 | 0.0019 | 0.0736 | ||
{SN}(0, 1, 15) | 200 | OLS | 0.2929 | 0.0241 | -0.0037 | 0.1127 | -0.0012 | 0.1065 |
LAD | 0.2452 | 0.0137 | -0.0001 | 0.0844 | -0.0007 | 0.0813 | ||
ESL | 0.3665 | 0.0387 | -0.0031 | 0.1412 | -0.0023 | 0.1368 | ||
H-ESL | 0.3377 | 0.0305 | -0.0009 | 0.1244 | -0.0028 | 0.1225 | ||
AHR(0.25) | 0.2260 | 0.0086 | 0.0012 | 0.0652 | -0.0022 | 0.0655 | ||
Huber | 0.1998 | 0.0098 | -0.0011 | 0.0698 | -0.0007 | 0.0702 | ||
AHR(0.75) | 0.2314 | 0.0172 | -0.0038 | 0.0949 | -0.0001 | 0.0903 | ||
CAHR | 0.2122 | 0.0099 | -0.0007 | 0.0717 | -0.0010 | 0.0691 | ||
WCAHR | 0.1884 | 0.0086 | 0.0002 | 0.0659 | -0.0017 | 0.0652 | ||
400 | OLS | 0.0994 | 0.0122 | -0.0024 | 0.0794 | 0.0052 | 0.0769 | |
LAD | 0.0718 | 0.0065 | -0.0004 | 0.0569 | 0.0001 | 0.0568 | ||
ESL | 0.1372 | 0.0193 | -0.0012 | 0.1003 | 0.0031 | 0.0962 | ||
H-ESL | 0.1022 | 0.0139 | -0.0043 | 0.0832 | 0.0056 | 0.0832 | ||
AHR(0.25) | 0.0796 | 0.0037 | 0.0028 | 0.0418 | -0.0010 | 0.0437 | ||
Huber | 0.0688 | 0.0045 | 0.0005 | 0.0468 | 0.0003 | 0.0483 | ||
AHR(0.75) | 0.0912 | 0.0082 | -0.0010 | 0.0627 | 0.0019 | 0.0652 | ||
CAHR | 0.0712 | 0.0043 | 0.0007 | 0.0454 | 0.0009 | 0.0471 | ||
WCAHR | 0.0567 | 0.0035 | -0.0009 | 0.0418 | 0.0018 | 0.0424 | ||
{St}(0, 1, 5, 3) | 200 | OLS | 0.4781 | 0.0695 | 0.0047 | 0.1815 | -0.0185 | 0.1902 |
LAD | 0.2461 | 0.0215 | 0.0057 | 0.1052 | -0.0094 | 0.1015 | ||
ESL | 0.3793 | 0.0451 | 0.0023 | 0.1538 | -0.0062 | 0.1462 | ||
H-ESL | 0.3682 | 0.0443 | 0.0059 | 0.1533 | -0.0078 | 0.1437 | ||
AHR(0.25) | 0.2541 | 0.0203 | 0.0014 | 0.0987 | 0.0039 | 0.1025 | ||
Huber | 0.2829 | 0.0284 | 0.0082 | 0.1204 | -0.0013 | 0.1175 | ||
AHR(0.75) | 1.8541 | 0.3745 | 0.0128 | 0.4566 | -0.0243 | 0.4065 | ||
CAHR | 0.3606 | 0.0286 | 0.0037 | 0.1188 | -0.0001 | 0.1202 | ||
WCAHR | 0.2205 | 0.0169 | 0.0018 | 0.0920 | -0.0089 | 0.0915 | ||
400 | OLS | 0.2310 | 0.0325 | -0.0021 | 0.1296 | 0.0008 | 0.1252 | |
LAD | 0.1004 | 0.0109 | 0.0006 | 0.0742 | -0.0001 | 0.0735 | ||
ESL | 0.1563 | 0.0212 | -0.0053 | 0.1002 | 0.0027 | 0.1054 | ||
H-ESL | 0.1516 | 0.0178 | -0.0045 | 0.0917 | 0.0011 | 0.0966 | ||
AHR(0.25) | 0.1000 | 0.0088 | 0.0028 | 0.0671 | -0.0004 | 0.0659 | ||
Huber | 0.1108 | 0.0116 | 0.0019 | 0.0781 | 0.0021 | 0.0743 | ||
AHR(0.75) | 1.5565 | 0.3644 | -0.0416 | 0.4269 | 0.0216 | 0.4242 | ||
CAHR | 0.1496 | 0.0153 | 0.0016 | 0.0873 | 0.0007 | 0.0874 | ||
WCAHR | 0.0838 | 0.0076 | -0.0015 | 0.0616 | -0.0000 | 0.0618 | ||
MN | 200 | OLS | 0.8806 | 0.1464 | -0.0134 | 0.2675 | -0.0016 | 0.2732 |
LAD | 0.3783 | 0.0358 | -0.0005 | 0.1320 | -0.0013 | 0.1355 | ||
ESL | 0.3719 | 0.0363 | 0.0025 | 0.1331 | -0.0044 | 0.1361 | ||
H-ESL | 0.3101 | 0.0280 | -0.0018 | 0.1175 | -0.0028 | 0.1189 | ||
AHR(0.25) | 0.3297 | 0.1148 | -0.0120 | 0.2346 | 0.0105 | 0.2439 | ||
Huber | 0.3685 | 0.0499 | -0.0042 | 0.1613 | 0.0071 | 0.1543 | ||
AHR(0.75) | 0.7037 | 0.1060 | 0.0078 | 0.2321 | -0.0033 | 0.2281 | ||
CAHR | 0.7857 | 0.1035 | 0.0031 | 0.2292 | 0.0035 | 0.2257 | ||
WCAHR | 0.3252 | 0.0340 | -0.0046 | 0.1289 | -0.0033 | 0.1316 | ||
400 | OLS | 0.3822 | 0.0715 | 0.0063 | 0.1887 | -0.0099 | 0.1892 | |
LAD | 0.1307 | 0.0190 | 0.0055 | 0.0983 | -0.0060 | 0.0965 | ||
ESL | 0.1268 | 0.0180 | 0.0032 | 0.0963 | -0.0043 | 0.0932 | ||
H-ESL | 0.1032 | 0.0129 | 0.0048 | 0.0807 | -0.0067 | 0.0793 | ||
AHR(0.25) | 0.1491 | 0.0604 | 0.0052 | 0.1731 | 0.0055 | 0.1742 | ||
Huber | 0.1332 | 0.0156 | -0.0007 | 0.0880 | 0.0033 | 0.0887 | ||
AHR(0.75) | 0.3391 | 0.0505 | -0.0049 | 0.1597 | 0.0040 | 0.1581 | ||
CAHR | 0.3435 | 0.0419 | -0.0030 | 0.1442 | 0.0074 | 0.1449 | ||
WCAHR | 0.1127 | 0.0157 | 0.0055 | 0.0894 | -0.0077 | 0.0872 | ||
{Bimodal} | 200 | OLS | 0.4317 | 0.0560 | -0.0001 | 0.1690 | -0.0018 | 0.1657 |
LAD | 0.8634 | 0.1201 | -0.0040 | 0.2438 | 0.0008 | 0.2463 | ||
ESL | 2.2921 | 0.4163 | -0.0148 | 0.4541 | -0.0002 | 0.4582 | ||
H-ESL | 0.4417 | 0.0558 | -0.0004 | 0.1687 | -0.0017 | 0.1653 | ||
AHR(0.25) | 0.9240 | 0.3169 | -0.0258 | 0.3973 | 0.0352 | 0.3964 | ||
Huber | 0.5694 | 0.0776 | -0.0056 | 0.2018 | 0.0129 | 0.1914 | ||
AHR(0.75) | 1.8171 | 0.3150 | 0.0110 | 0.4044 | -0.0107 | 0.3889 | ||
CAHR | 0.5191 | 0.0546 | -0.0075 | 0.1652 | 0.0116 | 0.1647 | ||
WCAHR | 0.4215 | 0.0552 | 0.0016 | 0.1679 | -0.0016 | 0.1642 | ||
400 | OLS | 0.1861 | 0.0296 | 0.0032 | 0.1170 | -0.0017 | 0.1262 | |
LAD | 0.4069 | 0.0670 | -0.0068 | 0.1765 | 0.0061 | 0.1891 | ||
ESL | 1.5317 | 0.2511 | -0.0185 | 0.3481 | 0.0033 | 0.3600 | ||
H-ESL | 0.1860 | 0.0298 | 0.0032 | 0.1175 | -0.0021 | 0.1265 | ||
AHR(0.25) | 0.4420 | 0.1620 | -0.0081 | 0.2835 | -0.0025 | 0.2855 | ||
Huber | 0.2369 | 0.0454 | 0.0023 | 0.1483 | -0.0082 | 0.1529 | ||
AHR(0.75) | 0.8504 | 0.1523 | 0.0142 | 0.2774 | -0.0021 | 0.2742 | ||
CAHR | 0.2047 | 0.0296 | 0.0025 | 0.1195 | -0.0014 | 0.1239 | ||
WCAHR | 0.1825 | 0.0291 | 0.0033 | 0.1162 | -0.0019 | 0.1249 |
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | \hat{\alpha}_1 Bias | \hat{\alpha}_1 Sd | \hat{\alpha}_2 Bias | \hat{\alpha}_2 Sd
N(0, 1) | 200 | OLS | 0.2560 | 0.0237 | 0.0049 | 0.1105 | -0.0052 | 0.1069 |
LAD | 0.2988 | 0.0300 | 0.0054 | 0.1228 | -0.0009 | 0.1222 | ||
ESL | 0.2990 | 0.0316 | 0.0041 | 0.1272 | -0.0021 | 0.1240 | ||
H-ESL | 0.2655 | 0.0258 | 0.0060 | 0.1163 | -0.0052 | 0.1104 | ||
AHR(0.25) | 0.2988 | 0.1065 | -0.0860 | 0.2178 | -0.0965 | 0.2058 | ||
Huber | 0.2531 | 0.0234 | 0.0050 | 0.1099 | -0.0045 | 0.1062 | ||
AHR(0.75) | 0.5960 | 0.0911 | 0.0958 | 0.1880 | 0.0836 | 0.1990 | ||
CAHR | 0.2562 | 0.0236 | 0.0055 | 0.1099 | -0.0051 | 0.1070 | ||
WCAHR | 0.2519 | 0.0227 | 0.0051 | 0.1081 | -0.0045 | 0.1047 | ||
400 | OLS | 0.1056 | 0.0119 | -0.0029 | 0.0767 | -0.0002 | 0.0777 | |
LAD | 0.1269 | 0.0153 | -0.0027 | 0.0865 | 0.0008 | 0.0882 | ||
ESL | 0.1257 | 0.0157 | -0.0026 | 0.0884 | 0.0019 | 0.0888 | ||
H-ESL | 0.1061 | 0.0116 | -0.0036 | 0.0760 | 0.0018 | 0.0762 | ||
AHR(0.25) | 0.1269 | 0.0588 | -0.1020 | 0.1418 | -0.0889 | 0.1427 | ||
Huber | 0.1060 | 0.0117 | -0.0032 | 0.0764 | 0.0001 | 0.0766 | ||
AHR(0.75) | 0.2695 | 0.0581 | 0.0990 | 0.1428 | 0.0859 | 0.1434 | ||
CAHR | 0.1064 | 0.0120 | -0.0023 | 0.0769 | -0.0005 | 0.0777 | ||
WCAHR | 0.1046 | 0.0115 | -0.0030 | 0.0754 | 0.0002 | 0.0762 | ||
{SN}(0, 1, 15) | 200 | OLS | 0.3164 | 0.0357 | 0.0862 | 0.1070 | 0.0748 | 0.1057 |
LAD | 0.2417 | 0.0209 | 0.0695 | 0.0789 | 0.0586 | 0.0803 | ||
ESL | 0.3578 | 0.0306 | 0.0140 | 0.1234 | 0.0119 | 0.1228 | ||
H-ESL | 0.3390 | 0.0316 | 0.0308 | 0.1211 | 0.0304 | 0.1228 | ||
AHR(0.25) | 0.2417 | 0.0139 | 0.0555 | 0.0651 | 0.0485 | 0.0650 | ||
Huber | 0.2344 | 0.0209 | 0.0780 | 0.0711 | 0.0684 | 0.0713 | ||
AHR(0.75) | 0.2809 | 0.0427 | 0.1122 | 0.1026 | 0.0969 | 0.1010 | ||
CAHR | 0.2454 | 0.0211 | 0.0774 | 0.0725 | 0.0684 | 0.0718 | ||
WCAHR | 0.2155 | 0.0174 | 0.0727 | 0.0644 | 0.0623 | 0.0642 | ||
400 | OLS | 0.1180 | 0.0242 | 0.0859 | 0.0754 | 0.0776 | 0.0718 | |
LAD | 0.0776 | 0.0149 | 0.0683 | 0.0573 | 0.0622 | 0.0553 | ||
ESL | 0.1267 | 0.0143 | 0.0180 | 0.0841 | 0.0001 | 0.0833 | ||
H-ESL | 0.1199 | 0.0179 | 0.0527 | 0.0831 | 0.0393 | 0.0819 | ||
AHR(0.25) | 0.0776 | 0.0096 | 0.0532 | 0.0478 | 0.0493 | 0.0453 | ||
Huber | 0.0763 | 0.0163 | 0.0753 | 0.0511 | 0.0740 | 0.0500 | ||
AHR(0.75) | 0.1060 | 0.0330 | 0.1043 | 0.0689 | 0.1117 | 0.0700 | ||
CAHR | 0.0813 | 0.0158 | 0.0732 | 0.0491 | 0.0755 | 0.0483 | ||
WCAHR | 0.0672 | 0.0130 | 0.0678 | 0.0451 | 0.0668 | 0.0441 | ||
{St}(0, 1, 5, 3) | 200 | OLS | 0.5695 | 0.0947 | 0.1071 | 0.1861 | 0.1158 | 0.1876 |
LAD | 0.3044 | 0.0287 | 0.0704 | 0.0970 | 0.0738 | 0.0941 | ||
ESL | 0.4205 | 0.0400 | -0.0035 | 0.1404 | -0.0162 | 0.1415 | ||
H-ESL | 0.3927 | 0.0405 | 0.0125 | 0.1409 | 0.0038 | 0.1431 | ||
AHR(0.25) | 0.3044 | 0.0219 | 0.0477 | 0.0944 | 0.0466 | 0.0925 | ||
Huber | 0.3450 | 0.0355 | 0.0764 | 0.1077 | 0.0784 | 0.1093 | ||
AHR(0.75) | 2.6347 | 0.5711 | 0.1826 | 0.5139 | 0.1919 | 0.4867 | ||
CAHR | 0.4548 | 0.0423 | 0.0776 | 0.1227 | 0.0826 | 0.1200 | ||
WCAHR | 0.2863 | 0.0246 | 0.0667 | 0.0868 | 0.0703 | 0.0875 | ||
400 | OLS | 0.2663 | 0.0577 | 0.1027 | 0.1286 | 0.1196 | 0.1278 | |
LAD | 0.1092 | 0.0199 | 0.0732 | 0.0694 | 0.0738 | 0.0657 | ||
ESL | 0.1423 | 0.0190 | -0.0083 | 0.0973 | -0.0090 | 0.0968 | ||
H-ESL | 0.1429 | 0.0202 | 0.0184 | 0.0971 | 0.0177 | 0.1003 | ||
AHR(0.25) | 0.1092 | 0.0148 | 0.0456 | 0.0725 | 0.0507 | 0.0698 | ||
Huber | 0.1447 | 0.0237 | 0.0777 | 0.0762 | 0.0784 | 0.0755 | ||
AHR(0.75) | 2.4856 | 0.6166 | 0.2100 | 0.5037 | 0.1899 | 0.5317 | ||
CAHR | 0.1858 | 0.0264 | 0.0726 | 0.0833 | 0.0831 | 0.0852 | ||
WCAHR | 0.0932 | 0.0160 | 0.0645 | 0.0596 | 0.0694 | 0.0592 | ||
MN | 200 | OLS | 0.9780 | 0.1653 | -0.0191 | 0.2838 | 0.0090 | 0.2904 |
LAD | 0.3672 | 0.0409 | -0.0111 | 0.1436 | 0.0065 | 0.1420 | ||
ESL | 0.3644 | 0.0390 | -0.0092 | 0.1407 | 0.0091 | 0.1381 | ||
H-ESL | 0.3186 | 0.0321 | -0.0072 | 0.1260 | 0.0047 | 0.1271 | ||
AHR(0.25) | 0.3672 | 0.1579 | -0.1021 | 0.2647 | -0.0797 | 0.2665 | ||
Huber | 0.3700 | 0.0407 | -0.0134 | 0.1412 | 0.0106 | 0.1431 | ||
AHR(0.75) | 0.7890 | 0.1350 | 0.0802 | 0.2419 | 0.0878 | 0.2498 | ||
CAHR | 0.8570 | 0.0989 | -0.0114 | 0.2229 | 0.0077 | 0.2214 | ||
WCAHR | 0.3324 | 0.0352 | -0.0109 | 0.1326 | 0.0071 | 0.1322 | ||
400 | OLS | 0.4215 | 0.0781 | 0.0126 | 0.2002 | -0.0081 | 0.1944 | |
LAD | 0.1244 | 0.0189 | 0.0000 | 0.0980 | 0.0028 | 0.0964 | ||
ESL | 0.1247 | 0.0173 | 0.0004 | 0.0940 | 0.0028 | 0.0921 | ||
H-ESL | 0.1073 | 0.0131 | 0.0010 | 0.0808 | 0.0019 | 0.0808 | ||
AHR(0.25) | 0.1244 | 0.0759 | -0.0849 | 0.1701 | -0.0953 | 0.1752 | ||
Huber | 0.1222 | 0.0147 | 0.0054 | 0.0861 | -0.0011 | 0.0854 | ||
AHR(0.75) | 0.3546 | 0.0735 | 0.1014 | 0.1642 | 0.0908 | 0.1673 | ||
CAHR | 0.3496 | 0.0531 | 0.0116 | 0.1647 | -0.0069 | 0.1604 | ||
WCAHR | 0.1114 | 0.0148 | 0.0038 | 0.0860 | 0.0001 | 0.0857 | ||
{Bimodal} | 200 | OLS | 0.4259 | 0.0604 | -0.0118 | 0.1733 | 0.0020 | 0.1738 |
LAD | 0.7261 | 0.1323 | -0.0248 | 0.2573 | 0.0096 | 0.2558 | ||
ESL | 1.9087 | 0.3505 | -0.0283 | 0.4120 | 0.0078 | 0.4241 | ||
H-ESL | 0.4337 | 0.0614 | -0.0118 | 0.1758 | 0.0031 | 0.1743 | ||
AHR(0.25) | 0.7261 | 0.5567 | -0.1960 | 0.5100 | -0.1893 | 0.4716 | ||
Huber | 0.4839 | 0.0763 | -0.0180 | 0.1974 | 0.0064 | 0.1924 | ||
AHR(0.75) | 2.4130 | 0.4776 | 0.1823 | 0.4534 | 0.1885 | 0.4509 | ||
CAHR | 0.6745 | 0.1187 | -0.0223 | 0.2449 | 0.0087 | 0.2411 | ||
WCAHR | 0.4361 | 0.0642 | -0.0096 | 0.1797 | 0.0068 | 0.1783 | ||
400 | OLS | 0.1934 | 0.0295 | 0.0087 | 0.1251 | -0.0113 | 0.1169 | |
LAD | 0.3626 | 0.0578 | 0.0145 | 0.1715 | -0.0169 | 0.1670 | ||
ESL | 1.0701 | 0.1718 | 0.0200 | 0.2893 | -0.0187 | 0.2955 | ||
H-ESL | 0.1948 | 0.0299 | 0.0098 | 0.1256 | -0.0113 | 0.1178 | ||
AHR(0.25) | 0.3626 | 0.3363 | -0.2023 | 0.3440 | -0.2239 | 0.3562 | ||
Huber | 0.2316 | 0.0383 | 0.0093 | 0.1438 | -0.0130 | 0.1320 | ||
AHR(0.75) | 1.2586 | 0.3302 | 0.2280 | 0.3504 | 0.1848 | 0.3484 | ||
CAHR | 0.3292 | 0.0506 | 0.0163 | 0.1645 | -0.0174 | 0.1516 | ||
WCAHR | 0.2058 | 0.0313 | 0.0119 | 0.1298 | -0.0092 | 0.1193 |
Errors | n | K_1 | MISE | MSE( \hat{\mathit{\boldsymbol{\alpha}}} ) | \hat{\alpha}_1 Bias | \hat{\alpha}_1 Sd | \hat{\alpha}_2 Bias | \hat{\alpha}_2 Sd
{SN}(0, 1, 15) | 200 | 4 | 0.2321 | 0.0088 | -0.0034 | 0.0663 | 0.0044 | 0.0665 |
6 | 0.2295 | 0.0084 | -0.0036 | 0.0646 | 0.0042 | 0.0645 | ||
8 | 0.2279 | 0.0081 | -0.0038 | 0.0635 | 0.0042 | 0.0633 | ||
10 | 0.2269 | 0.0079 | -0.0040 | 0.0629 | 0.0041 | 0.0626 | ||
12 | 0.2264 | 0.0078 | -0.0041 | 0.0627 | 0.0041 | 0.0621 | ||
14 | 0.2262 | 0.0078 | -0.0043 | 0.0627 | 0.0041 | 0.0620 | ||
400 | 4 | 0.0626 | 0.0039 | 0.0071 | 0.0450 | -0.0054 | 0.0425 | |
6 | 0.0611 | 0.0037 | 0.0068 | 0.0440 | -0.0050 | 0.0415 | ||
8 | 0.0601 | 0.0036 | 0.0065 | 0.0434 | -0.0046 | 0.0410 | ||
10 | 0.0595 | 0.0036 | 0.0063 | 0.0432 | -0.0043 | 0.0409 | ||
12 | 0.0591 | 0.0036 | 0.0060 | 0.0432 | -0.0039 | 0.0409 | ||
14 | 0.0589 | 0.0036 | 0.0059 | 0.0433 | -0.0038 | 0.0411 | ||
{St}(0, 1, 5, 3) | 200 | 4 | 0.2445 | 0.0178 | -0.0027 | 0.0961 | -0.0023 | 0.0927 |
6 | 0.2388 | 0.0169 | -0.0025 | 0.0938 | -0.0024 | 0.0900 | ||
8 | 0.2349 | 0.0163 | -0.0025 | 0.0923 | -0.0023 | 0.0880 | ||
10 | 0.2321 | 0.0158 | -0.0025 | 0.0911 | -0.0023 | 0.0866 | ||
12 | 0.2301 | 0.0155 | -0.0024 | 0.0903 | -0.0022 | 0.0856 | ||
14 | 0.2287 | 0.0153 | -0.0023 | 0.0898 | -0.0022 | 0.0849 | ||
400 | 4 | 0.0925 | 0.0090 | -0.0079 | 0.0678 | 0.0076 | 0.0652 | |
6 | 0.0899 | 0.0085 | -0.0078 | 0.0659 | 0.0076 | 0.0637 | ||
8 | 0.0880 | 0.0082 | -0.0076 | 0.0646 | 0.0076 | 0.0626 | ||
10 | 0.0867 | 0.0080 | -0.0076 | 0.0636 | 0.0076 | 0.0619 | ||
12 | 0.0857 | 0.0078 | -0.0075 | 0.0629 | 0.0077 | 0.0615 | ||
14 | 0.0850 | 0.0077 | -0.0074 | 0.0623 | 0.0077 | 0.0611 | ||
Methods | Electricity prices, N=100 | Electricity prices, N=200 | Electricity prices, N=400 | Tecator, N=100 | Tecator, N=200 | Tecator, N=400
OLS | 0.4269 | 0.4132 | 0.4153 | 2.7824\times10^{-3} | 2.7182\times10^{-3} | 2.7257\times10^{-3} |
LAD | 0.4094 | 0.4005 | 0.4068 | 2.9388\times10^{-3} | 2.8611\times10^{-3} | 2.8253\times10^{-3} |
ESL | 0.5751 | 0.5626 | 0.5578 | 2.7268\times10^{-3} | 2.6701\times10^{-3} | 2.6142\times10^{-3} |
H-ESL | 0.4104 | 0.4026 | 0.4064 | 2.7395\times10^{-3} | 2.7336\times10^{-3} | 2.6476\times10^{-3} |
AHR(0.25) | 0.4052 | 0.3985 | 0.4032 | 2.9056\times10^{-3} | 2.7658\times10^{-3} | 2.7753\times10^{-3} |
Huber | 0.4077 | 0.4027 | 0.4074 | 2.7636\times10^{-3} | 2.6491\times10^{-3} | 2.6278\times10^{-3} |
AHR(0.75) | 0.4296 | 0.4198 | 0.4280 | 2.7409\times10^{-3} | 2.6494\times10^{-3} | 2.6063\times10^{-3} |
CAHR | 0.4103 | 0.4019 | 0.4089 | 2.8442\times10^{-3} | 2.8106\times10^{-3} | 2.7457\times10^{-3} |
WCAHR | 0.4030 | 0.3967 | 0.4008 | 2.7236\times10^{-3} | 2.6152\times10^{-3} | 2.5799\times10^{-3} |
Methods | Electricity prices, \hat \alpha_1 | Electricity prices, \hat \alpha_2 | Tecator, \hat \alpha_1 | Tecator, \hat \alpha_2
OLS | -0.5983 | -0.4672 | -1.1056 | -0.6894 |
LAD | -0.7095 | -0.9438 | -1.0828 | -0.7611 |
ESL | -0.5125 | -0.4629 | -1.0894 | -0.7455 |
H-ESL | -0.6007 | -0.7212 | -1.0983 | -0.7367 |
AHR(0.25) | -0.5618 | -0.4725 | -1.1122 | -0.7026 |
Huber | -0.5799 | -0.4416 | -1.0981 | -0.7235 |
AHR(0.75) | -0.5812 | -0.4302 | -1.0854 | -0.7576 |
CAHR | -0.5394 | -0.4582 | -1.0990 | -0.7274 |
WCAHR | -0.5924 | -0.6182 | -1.0964 | -0.7270 |
Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | \hat{\alpha}_1 | \hat{\alpha}_2 | ||
Bias | Sd | Bias | Sd | |||||
N(0, 1) | 200 | OLS | 0.2690 | 0.0229 | -0.0048 | 0.1074 | 0.0096 | 0.1059 |
LAD | 0.3294 | 0.0354 | -0.0099 | 0.1320 | 0.0115 | 0.1333 | ||
ESL | 0.3434 | 0.0394 | 0.0009 | 0.1405 | -0.0013 | 0.1403 | ||
H-ESL | 0.2824 | 0.0259 | 0.0055 | 0.1141 | -0.0059 | 0.1133 | ||
AHR(0.25) | 0.3294 | 0.0786 | -0.0077 | 0.2014 | 0.0105 | 0.1948 | ||
Huber | 0.2758 | 0.0234 | -0.0063 | 0.1090 | 0.0102 | 0.1067 | ||
AHR(0.75) | 0.5251 | 0.0754 | -0.0092 | 0.1894 | 0.0150 | 0.1981 | ||
CAHR | 0.2689 | 0.0233 | -0.0051 | 0.1095 | 0.0091 | 0.1060 | ||
WCAHR | 0.2693 | 0.0230 | -0.0055 | 0.1080 | 0.0101 | 0.1058 | ||
| | 400 | OLS | 0.1004 | 0.0105 | -0.0011 | 0.0710 | 0.0015 | 0.0738 |
| | | LAD | 0.1304 | 0.0163 | -0.0027 | 0.0880 | 0.0045 | 0.0924 |
| | | ESL | 0.1349 | 0.0175 | 0.0010 | 0.0918 | 0.0008 | 0.0955 |
| | | H-ESL | 0.1031 | 0.0113 | 0.0011 | 0.0727 | 0.0010 | 0.0773 |
| | | AHR(0.25) | 0.1304 | 0.0358 | -0.0048 | 0.1313 | -0.0013 | 0.1361 |
| | | Huber | 0.1048 | 0.0107 | -0.0018 | 0.0720 | 0.0027 | 0.0742 |
| | | AHR(0.75) | 0.2337 | 0.0405 | 0.0043 | 0.1454 | 0.0019 | 0.1390 |
| | | CAHR | 0.1006 | 0.0107 | -0.0014 | 0.0721 | 0.0020 | 0.0743 |
| | | WCAHR | 0.1005 | 0.0105 | -0.0014 | 0.0709 | 0.0019 | 0.0736 |
| SN(0, 1, 15) | 200 | OLS | 0.2929 | 0.0241 | -0.0037 | 0.1127 | -0.0012 | 0.1065 |
| | | LAD | 0.2452 | 0.0137 | -0.0001 | 0.0844 | -0.0007 | 0.0813 |
| | | ESL | 0.3665 | 0.0387 | -0.0031 | 0.1412 | -0.0023 | 0.1368 |
| | | H-ESL | 0.3377 | 0.0305 | -0.0009 | 0.1244 | -0.0028 | 0.1225 |
| | | AHR(0.25) | 0.2260 | 0.0086 | 0.0012 | 0.0652 | -0.0022 | 0.0655 |
| | | Huber | 0.1998 | 0.0098 | -0.0011 | 0.0698 | -0.0007 | 0.0702 |
| | | AHR(0.75) | 0.2314 | 0.0172 | -0.0038 | 0.0949 | -0.0001 | 0.0903 |
| | | CAHR | 0.2122 | 0.0099 | -0.0007 | 0.0717 | -0.0010 | 0.0691 |
| | | WCAHR | 0.1884 | 0.0086 | 0.0002 | 0.0659 | -0.0017 | 0.0652 |
| | 400 | OLS | 0.0994 | 0.0122 | -0.0024 | 0.0794 | 0.0052 | 0.0769 |
| | | LAD | 0.0718 | 0.0065 | -0.0004 | 0.0569 | 0.0001 | 0.0568 |
| | | ESL | 0.1372 | 0.0193 | -0.0012 | 0.1003 | 0.0031 | 0.0962 |
| | | H-ESL | 0.1022 | 0.0139 | -0.0043 | 0.0832 | 0.0056 | 0.0832 |
| | | AHR(0.25) | 0.0796 | 0.0037 | 0.0028 | 0.0418 | -0.0010 | 0.0437 |
| | | Huber | 0.0688 | 0.0045 | 0.0005 | 0.0468 | 0.0003 | 0.0483 |
| | | AHR(0.75) | 0.0912 | 0.0082 | -0.0010 | 0.0627 | 0.0019 | 0.0652 |
| | | CAHR | 0.0712 | 0.0043 | 0.0007 | 0.0454 | 0.0009 | 0.0471 |
| | | WCAHR | 0.0567 | 0.0035 | -0.0009 | 0.0418 | 0.0018 | 0.0424 |
| St(0, 1, 5, 3) | 200 | OLS | 0.4781 | 0.0695 | 0.0047 | 0.1815 | -0.0185 | 0.1902 |
| | | LAD | 0.2461 | 0.0215 | 0.0057 | 0.1052 | -0.0094 | 0.1015 |
| | | ESL | 0.3793 | 0.0451 | 0.0023 | 0.1538 | -0.0062 | 0.1462 |
| | | H-ESL | 0.3682 | 0.0443 | 0.0059 | 0.1533 | -0.0078 | 0.1437 |
| | | AHR(0.25) | 0.2541 | 0.0203 | 0.0014 | 0.0987 | 0.0039 | 0.1025 |
| | | Huber | 0.2829 | 0.0284 | 0.0082 | 0.1204 | -0.0013 | 0.1175 |
| | | AHR(0.75) | 1.8541 | 0.3745 | 0.0128 | 0.4566 | -0.0243 | 0.4065 |
| | | CAHR | 0.3606 | 0.0286 | 0.0037 | 0.1188 | -0.0001 | 0.1202 |
| | | WCAHR | 0.2205 | 0.0169 | 0.0018 | 0.0920 | -0.0089 | 0.0915 |
| | 400 | OLS | 0.2310 | 0.0325 | -0.0021 | 0.1296 | 0.0008 | 0.1252 |
| | | LAD | 0.1004 | 0.0109 | 0.0006 | 0.0742 | -0.0001 | 0.0735 |
| | | ESL | 0.1563 | 0.0212 | -0.0053 | 0.1002 | 0.0027 | 0.1054 |
| | | H-ESL | 0.1516 | 0.0178 | -0.0045 | 0.0917 | 0.0011 | 0.0966 |
| | | AHR(0.25) | 0.1000 | 0.0088 | 0.0028 | 0.0671 | -0.0004 | 0.0659 |
| | | Huber | 0.1108 | 0.0116 | 0.0019 | 0.0781 | 0.0021 | 0.0743 |
| | | AHR(0.75) | 1.5565 | 0.3644 | -0.0416 | 0.4269 | 0.0216 | 0.4242 |
| | | CAHR | 0.1496 | 0.0153 | 0.0016 | 0.0873 | 0.0007 | 0.0874 |
| | | WCAHR | 0.0838 | 0.0076 | -0.0015 | 0.0616 | -0.0000 | 0.0618 |
| MN | 200 | OLS | 0.8806 | 0.1464 | -0.0134 | 0.2675 | -0.0016 | 0.2732 |
| | | LAD | 0.3783 | 0.0358 | -0.0005 | 0.1320 | -0.0013 | 0.1355 |
| | | ESL | 0.3719 | 0.0363 | 0.0025 | 0.1331 | -0.0044 | 0.1361 |
| | | H-ESL | 0.3101 | 0.0280 | -0.0018 | 0.1175 | -0.0028 | 0.1189 |
| | | AHR(0.25) | 0.3297 | 0.1148 | -0.0120 | 0.2346 | 0.0105 | 0.2439 |
| | | Huber | 0.3685 | 0.0499 | -0.0042 | 0.1613 | 0.0071 | 0.1543 |
| | | AHR(0.75) | 0.7037 | 0.1060 | 0.0078 | 0.2321 | -0.0033 | 0.2281 |
| | | CAHR | 0.7857 | 0.1035 | 0.0031 | 0.2292 | 0.0035 | 0.2257 |
| | | WCAHR | 0.3252 | 0.0340 | -0.0046 | 0.1289 | -0.0033 | 0.1316 |
| | 400 | OLS | 0.3822 | 0.0715 | 0.0063 | 0.1887 | -0.0099 | 0.1892 |
| | | LAD | 0.1307 | 0.0190 | 0.0055 | 0.0983 | -0.0060 | 0.0965 |
| | | ESL | 0.1268 | 0.0180 | 0.0032 | 0.0963 | -0.0043 | 0.0932 |
| | | H-ESL | 0.1032 | 0.0129 | 0.0048 | 0.0807 | -0.0067 | 0.0793 |
| | | AHR(0.25) | 0.1491 | 0.0604 | 0.0052 | 0.1731 | 0.0055 | 0.1742 |
| | | Huber | 0.1332 | 0.0156 | -0.0007 | 0.0880 | 0.0033 | 0.0887 |
| | | AHR(0.75) | 0.3391 | 0.0505 | -0.0049 | 0.1597 | 0.0040 | 0.1581 |
| | | CAHR | 0.3435 | 0.0419 | -0.0030 | 0.1442 | 0.0074 | 0.1449 |
| | | WCAHR | 0.1127 | 0.0157 | 0.0055 | 0.0894 | -0.0077 | 0.0872 |
| Bimodal | 200 | OLS | 0.4317 | 0.0560 | -0.0001 | 0.1690 | -0.0018 | 0.1657 |
| | | LAD | 0.8634 | 0.1201 | -0.0040 | 0.2438 | 0.0008 | 0.2463 |
| | | ESL | 2.2921 | 0.4163 | -0.0148 | 0.4541 | -0.0002 | 0.4582 |
| | | H-ESL | 0.4417 | 0.0558 | -0.0004 | 0.1687 | -0.0017 | 0.1653 |
| | | AHR(0.25) | 0.9240 | 0.3169 | -0.0258 | 0.3973 | 0.0352 | 0.3964 |
| | | Huber | 0.5694 | 0.0776 | -0.0056 | 0.2018 | 0.0129 | 0.1914 |
| | | AHR(0.75) | 1.8171 | 0.3150 | 0.0110 | 0.4044 | -0.0107 | 0.3889 |
| | | CAHR | 0.5191 | 0.0546 | -0.0075 | 0.1652 | 0.0116 | 0.1647 |
| | | WCAHR | 0.4215 | 0.0552 | 0.0016 | 0.1679 | -0.0016 | 0.1642 |
| | 400 | OLS | 0.1861 | 0.0296 | 0.0032 | 0.1170 | -0.0017 | 0.1262 |
| | | LAD | 0.4069 | 0.0670 | -0.0068 | 0.1765 | 0.0061 | 0.1891 |
| | | ESL | 1.5317 | 0.2511 | -0.0185 | 0.3481 | 0.0033 | 0.3600 |
| | | H-ESL | 0.1860 | 0.0298 | 0.0032 | 0.1175 | -0.0021 | 0.1265 |
| | | AHR(0.25) | 0.4420 | 0.1620 | -0.0081 | 0.2835 | -0.0025 | 0.2855 |
| | | Huber | 0.2369 | 0.0454 | 0.0023 | 0.1483 | -0.0082 | 0.1529 |
| | | AHR(0.75) | 0.8504 | 0.1523 | 0.0142 | 0.2774 | -0.0021 | 0.2742 |
| | | CAHR | 0.2047 | 0.0296 | 0.0025 | 0.1195 | -0.0014 | 0.1239 |
| | | WCAHR | 0.1825 | 0.0291 | 0.0033 | 0.1162 | -0.0019 | 0.1249 |
</gr-replace>
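A note on the losses being compared: AHR(\tau) denotes an asymmetric Huber fit at a fixed skew level \tau, and CAHR/WCAHR denote composite and weighted-composite versions that pool several skew levels. The sketch below shows one standard parameterization of such a loss, with the asymmetry weight |\tau - 1\{u<0\}| applied to the ordinary Huber function; the function names, the threshold c = 1.345, and the equal-weight composite are illustrative assumptions, not the exact definitions used in this paper.

```python
import numpy as np

def huber(u, c=1.345):
    """Ordinary Huber loss; c = 1.345 is the usual 95%-efficiency tuning constant."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= c, 0.5 * u**2, c * np.abs(u) - 0.5 * c**2)

def asymmetric_huber(u, tau=0.25, c=1.345):
    """Asymmetric Huber loss: weight |tau - 1{u < 0}| times the Huber function."""
    u = np.asarray(u, dtype=float)
    return np.abs(tau - (u < 0)) * huber(u, c)

def composite_asymmetric_huber(u, taus=(0.25, 0.5, 0.75), weights=None, c=1.345):
    """Composite version: a (possibly weighted) average over several skew levels."""
    taus = np.asarray(taus, dtype=float)
    w = np.full(taus.size, 1.0 / taus.size) if weights is None else np.asarray(weights, dtype=float)
    return sum(wk * asymmetric_huber(u, tk, c) for wk, tk in zip(w, taus))
```

Under this reading, the large MISE values for AHR(0.75) under the skewed St(0, 1, 5, 3) errors above are what one would expect when a single poorly matched skew level is used, while the composite and weighted-composite fits hedge across levels.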
| Errors | n | Method | MISE | MSE(\hat{\boldsymbol{\alpha}}) | Bias(\hat{\alpha}_1) | Sd(\hat{\alpha}_1) | Bias(\hat{\alpha}_2) | Sd(\hat{\alpha}_2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| N(0, 1) | 200 | OLS | 0.2560 | 0.0237 | 0.0049 | 0.1105 | -0.0052 | 0.1069 |
| | | LAD | 0.2988 | 0.0300 | 0.0054 | 0.1228 | -0.0009 | 0.1222 |
| | | ESL | 0.2990 | 0.0316 | 0.0041 | 0.1272 | -0.0021 | 0.1240 |
| | | H-ESL | 0.2655 | 0.0258 | 0.0060 | 0.1163 | -0.0052 | 0.1104 |
| | | AHR(0.25) | 0.2988 | 0.1065 | -0.0860 | 0.2178 | -0.0965 | 0.2058 |
| | | Huber | 0.2531 | 0.0234 | 0.0050 | 0.1099 | -0.0045 | 0.1062 |
| | | AHR(0.75) | 0.5960 | 0.0911 | 0.0958 | 0.1880 | 0.0836 | 0.1990 |
| | | CAHR | 0.2562 | 0.0236 | 0.0055 | 0.1099 | -0.0051 | 0.1070 |
| | | WCAHR | 0.2519 | 0.0227 | 0.0051 | 0.1081 | -0.0045 | 0.1047 |
| | 400 | OLS | 0.1056 | 0.0119 | -0.0029 | 0.0767 | -0.0002 | 0.0777 |
| | | LAD | 0.1269 | 0.0153 | -0.0027 | 0.0865 | 0.0008 | 0.0882 |
| | | ESL | 0.1257 | 0.0157 | -0.0026 | 0.0884 | 0.0019 | 0.0888 |
| | | H-ESL | 0.1061 | 0.0116 | -0.0036 | 0.0760 | 0.0018 | 0.0762 |
| | | AHR(0.25) | 0.1269 | 0.0588 | -0.1020 | 0.1418 | -0.0889 | 0.1427 |
| | | Huber | 0.1060 | 0.0117 | -0.0032 | 0.0764 | 0.0001 | 0.0766 |
| | | AHR(0.75) | 0.2695 | 0.0581 | 0.0990 | 0.1428 | 0.0859 | 0.1434 |
| | | CAHR | 0.1064 | 0.0120 | -0.0023 | 0.0769 | -0.0005 | 0.0777 |
| | | WCAHR | 0.1046 | 0.0115 | -0.0030 | 0.0754 | 0.0002 | 0.0762 |
| SN(0, 1, 15) | 200 | OLS | 0.3164 | 0.0357 | 0.0862 | 0.1070 | 0.0748 | 0.1057 |
| | | LAD | 0.2417 | 0.0209 | 0.0695 | 0.0789 | 0.0586 | 0.0803 |
| | | ESL | 0.3578 | 0.0306 | 0.0140 | 0.1234 | 0.0119 | 0.1228 |
| | | H-ESL | 0.3390 | 0.0316 | 0.0308 | 0.1211 | 0.0304 | 0.1228 |
| | | AHR(0.25) | 0.2417 | 0.0139 | 0.0555 | 0.0651 | 0.0485 | 0.0650 |
| | | Huber | 0.2344 | 0.0209 | 0.0780 | 0.0711 | 0.0684 | 0.0713 |
| | | AHR(0.75) | 0.2809 | 0.0427 | 0.1122 | 0.1026 | 0.0969 | 0.1010 |
| | | CAHR | 0.2454 | 0.0211 | 0.0774 | 0.0725 | 0.0684 | 0.0718 |
| | | WCAHR | 0.2155 | 0.0174 | 0.0727 | 0.0644 | 0.0623 | 0.0642 |
| | 400 | OLS | 0.1180 | 0.0242 | 0.0859 | 0.0754 | 0.0776 | 0.0718 |
| | | LAD | 0.0776 | 0.0149 | 0.0683 | 0.0573 | 0.0622 | 0.0553 |
| | | ESL | 0.1267 | 0.0143 | 0.0180 | 0.0841 | 0.0001 | 0.0833 |
| | | H-ESL | 0.1199 | 0.0179 | 0.0527 | 0.0831 | 0.0393 | 0.0819 |
| | | AHR(0.25) | 0.0776 | 0.0096 | 0.0532 | 0.0478 | 0.0493 | 0.0453 |
| | | Huber | 0.0763 | 0.0163 | 0.0753 | 0.0511 | 0.0740 | 0.0500 |
| | | AHR(0.75) | 0.1060 | 0.0330 | 0.1043 | 0.0689 | 0.1117 | 0.0700 |
| | | CAHR | 0.0813 | 0.0158 | 0.0732 | 0.0491 | 0.0755 | 0.0483 |
| | | WCAHR | 0.0672 | 0.0130 | 0.0678 | 0.0451 | 0.0668 | 0.0441 |
| St(0, 1, 5, 3) | 200 | OLS | 0.5695 | 0.0947 | 0.1071 | 0.1861 | 0.1158 | 0.1876 |
| | | LAD | 0.3044 | 0.0287 | 0.0704 | 0.0970 | 0.0738 | 0.0941 |
| | | ESL | 0.4205 | 0.0400 | -0.0035 | 0.1404 | -0.0162 | 0.1415 |
| | | H-ESL | 0.3927 | 0.0405 | 0.0125 | 0.1409 | 0.0038 | 0.1431 |
| | | AHR(0.25) | 0.3044 | 0.0219 | 0.0477 | 0.0944 | 0.0466 | 0.0925 |
| | | Huber | 0.3450 | 0.0355 | 0.0764 | 0.1077 | 0.0784 | 0.1093 |
| | | AHR(0.75) | 2.6347 | 0.5711 | 0.1826 | 0.5139 | 0.1919 | 0.4867 |
| | | CAHR | 0.4548 | 0.0423 | 0.0776 | 0.1227 | 0.0826 | 0.1200 |
| | | WCAHR | 0.2863 | 0.0246 | 0.0667 | 0.0868 | 0.0703 | 0.0875 |
| | 400 | OLS | 0.2663 | 0.0577 | 0.1027 | 0.1286 | 0.1196 | 0.1278 |
| | | LAD | 0.1092 | 0.0199 | 0.0732 | 0.0694 | 0.0738 | 0.0657 |
| | | ESL | 0.1423 | 0.0190 | -0.0083 | 0.0973 | -0.0090 | 0.0968 |
| | | H-ESL | 0.1429 | 0.0202 | 0.0184 | 0.0971 | 0.0177 | 0.1003 |
| | | AHR(0.25) | 0.1092 | 0.0148 | 0.0456 | 0.0725 | 0.0507 | 0.0698 |
| | | Huber | 0.1447 | 0.0237 | 0.0777 | 0.0762 | 0.0784 | 0.0755 |
| | | AHR(0.75) | 2.4856 | 0.6166 | 0.2100 | 0.5037 | 0.1899 | 0.5317 |
| | | CAHR | 0.1858 | 0.0264 | 0.0726 | 0.0833 | 0.0831 | 0.0852 |
| | | WCAHR | 0.0932 | 0.0160 | 0.0645 | 0.0596 | 0.0694 | 0.0592 |
| MN | 200 | OLS | 0.9780 | 0.1653 | -0.0191 | 0.2838 | 0.0090 | 0.2904 |
| | | LAD | 0.3672 | 0.0409 | -0.0111 | 0.1436 | 0.0065 | 0.1420 |
| | | ESL | 0.3644 | 0.0390 | -0.0092 | 0.1407 | 0.0091 | 0.1381 |
| | | H-ESL | 0.3186 | 0.0321 | -0.0072 | 0.1260 | 0.0047 | 0.1271 |
| | | AHR(0.25) | 0.3672 | 0.1579 | -0.1021 | 0.2647 | -0.0797 | 0.2665 |
| | | Huber | 0.3700 | 0.0407 | -0.0134 | 0.1412 | 0.0106 | 0.1431 |
| | | AHR(0.75) | 0.7890 | 0.1350 | 0.0802 | 0.2419 | 0.0878 | 0.2498 |
| | | CAHR | 0.8570 | 0.0989 | -0.0114 | 0.2229 | 0.0077 | 0.2214 |
| | | WCAHR | 0.3324 | 0.0352 | -0.0109 | 0.1326 | 0.0071 | 0.1322 |
| | 400 | OLS | 0.4215 | 0.0781 | 0.0126 | 0.2002 | -0.0081 | 0.1944 |
| | | LAD | 0.1244 | 0.0189 | 0.0000 | 0.0980 | 0.0028 | 0.0964 |
| | | ESL | 0.1247 | 0.0173 | 0.0004 | 0.0940 | 0.0028 | 0.0921 |
| | | H-ESL | 0.1073 | 0.0131 | 0.0010 | 0.0808 | 0.0019 | 0.0808 |
| | | AHR(0.25) | 0.1244 | 0.0759 | -0.0849 | 0.1701 | -0.0953 | 0.1752 |
| | | Huber | 0.1222 | 0.0147 | 0.0054 | 0.0861 | -0.0011 | 0.0854 |
| | | AHR(0.75) | 0.3546 | 0.0735 | 0.1014 | 0.1642 | 0.0908 | 0.1673 |
| | | CAHR | 0.3496 | 0.0531 | 0.0116 | 0.1647 | -0.0069 | 0.1604 |
| | | WCAHR | 0.1114 | 0.0148 | 0.0038 | 0.0860 | 0.0001 | 0.0857 |
| Bimodal | 200 | OLS | 0.4259 | 0.0604 | -0.0118 | 0.1733 | 0.0020 | 0.1738 |
| | | LAD | 0.7261 | 0.1323 | -0.0248 | 0.2573 | 0.0096 | 0.2558 |
| | | ESL | 1.9087 | 0.3505 | -0.0283 | 0.4120 | 0.0078 | 0.4241 |
| | | H-ESL | 0.4337 | 0.0614 | -0.0118 | 0.1758 | 0.0031 | 0.1743 |
| | | AHR(0.25) | 0.7261 | 0.5567 | -0.1960 | 0.5100 | -0.1893 | 0.4716 |
| | | Huber | 0.4839 | 0.0763 | -0.0180 | 0.1974 | 0.0064 | 0.1924 |
| | | AHR(0.75) | 2.4130 | 0.4776 | 0.1823 | 0.4534 | 0.1885 | 0.4509 |
| | | CAHR | 0.6745 | 0.1187 | -0.0223 | 0.2449 | 0.0087 | 0.2411 |
| | | WCAHR | 0.4361 | 0.0642 | -0.0096 | 0.1797 | 0.0068 | 0.1783 |
| | 400 | OLS | 0.1934 | 0.0295 | 0.0087 | 0.1251 | -0.0113 | 0.1169 |
| | | LAD | 0.3626 | 0.0578 | 0.0145 | 0.1715 | -0.0169 | 0.1670 |
| | | ESL | 1.0701 | 0.1718 | 0.0200 | 0.2893 | -0.0187 | 0.2955 |
| | | H-ESL | 0.1948 | 0.0299 | 0.0098 | 0.1256 | -0.0113 | 0.1178 |
| | | AHR(0.25) | 0.3626 | 0.3363 | -0.2023 | 0.3440 | -0.2239 | 0.3562 |
| | | Huber | 0.2316 | 0.0383 | 0.0093 | 0.1438 | -0.0130 | 0.1320 |
| | | AHR(0.75) | 1.2586 | 0.3302 | 0.2280 | 0.3504 | 0.1848 | 0.3484 |
| | | CAHR | 0.3292 | 0.0506 | 0.0163 | 0.1645 | -0.0174 | 0.1516 |
| | | WCAHR | 0.2058 | 0.0313 | 0.0119 | 0.1298 | -0.0092 | 0.1193 |
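For readers scanning the columns: MISE, MSE(\hat{\boldsymbol{\alpha}}), Bias, and Sd are presumably the usual Monte Carlo summaries over R replications. Writing \hat{g}^{(r)} for the functional-coefficient estimate and \hat{\boldsymbol{\alpha}}^{(r)} for the scalar-coefficient estimate from replication r, evaluated on a grid t_1, \dots, t_m, one standard set of definitions (our reading of the column labels, not formulas quoted from the text) is

$$ \mathrm{MISE} \approx \frac{1}{R}\sum_{r=1}^{R}\frac{1}{m}\sum_{j=1}^{m}\bigl(\hat g^{(r)}(t_j)-g_0(t_j)\bigr)^2, \qquad \mathrm{MSE}(\hat{\boldsymbol{\alpha}}) = \frac{1}{R}\sum_{r=1}^{R}\bigl\|\hat{\boldsymbol{\alpha}}^{(r)}-\boldsymbol{\alpha}_0\bigr\|_2^2, $$

$$ \mathrm{Bias}(\hat\alpha_k) = \frac{1}{R}\sum_{r=1}^{R}\hat\alpha_k^{(r)} - \alpha_{0k}, \qquad \mathrm{Sd}(\hat\alpha_k) = \Biggl(\frac{1}{R-1}\sum_{r=1}^{R}\Bigl(\hat\alpha_k^{(r)}-\frac{1}{R}\sum_{s=1}^{R}\hat\alpha_k^{(s)}\Bigr)^2\Biggr)^{1/2}, $$

where g_0 and \boldsymbol{\alpha}_0 denote the true parameters of the data-generating model.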
| Errors | n | K_1 | MISE | MSE(\hat{\boldsymbol{\alpha}}) | Bias(\hat{\alpha}_1) | Sd(\hat{\alpha}_1) | Bias(\hat{\alpha}_2) | Sd(\hat{\alpha}_2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SN(0, 1, 15) | 200 | 4 | 0.2321 | 0.0088 | -0.0034 | 0.0663 | 0.0044 | 0.0665 |
| | | 6 | 0.2295 | 0.0084 | -0.0036 | 0.0646 | 0.0042 | 0.0645 |
| | | 8 | 0.2279 | 0.0081 | -0.0038 | 0.0635 | 0.0042 | 0.0633 |
| | | 10 | 0.2269 | 0.0079 | -0.0040 | 0.0629 | 0.0041 | 0.0626 |
| | | 12 | 0.2264 | 0.0078 | -0.0041 | 0.0627 | 0.0041 | 0.0621 |
| | | 14 | 0.2262 | 0.0078 | -0.0043 | 0.0627 | 0.0041 | 0.0620 |
| | 400 | 4 | 0.0626 | 0.0039 | 0.0071 | 0.0450 | -0.0054 | 0.0425 |
| | | 6 | 0.0611 | 0.0037 | 0.0068 | 0.0440 | -0.0050 | 0.0415 |
| | | 8 | 0.0601 | 0.0036 | 0.0065 | 0.0434 | -0.0046 | 0.0410 |
| | | 10 | 0.0595 | 0.0036 | 0.0063 | 0.0432 | -0.0043 | 0.0409 |
| | | 12 | 0.0591 | 0.0036 | 0.0060 | 0.0432 | -0.0039 | 0.0409 |
| | | 14 | 0.0589 | 0.0036 | 0.0059 | 0.0433 | -0.0038 | 0.0411 |
| St(0, 1, 5, 3) | 200 | 4 | 0.2445 | 0.0178 | -0.0027 | 0.0961 | -0.0023 | 0.0927 |
| | | 6 | 0.2388 | 0.0169 | -0.0025 | 0.0938 | -0.0024 | 0.0900 |
| | | 8 | 0.2349 | 0.0163 | -0.0025 | 0.0923 | -0.0023 | 0.0880 |
| | | 10 | 0.2321 | 0.0158 | -0.0025 | 0.0911 | -0.0023 | 0.0866 |
| | | 12 | 0.2301 | 0.0155 | -0.0024 | 0.0903 | -0.0022 | 0.0856 |
| | | 14 | 0.2287 | 0.0153 | -0.0023 | 0.0898 | -0.0022 | 0.0849 |
| | 400 | 4 | 0.0925 | 0.0090 | -0.0079 | 0.0678 | 0.0076 | 0.0652 |
| | | 6 | 0.0899 | 0.0085 | -0.0078 | 0.0659 | 0.0076 | 0.0637 |
| | | 8 | 0.0880 | 0.0082 | -0.0076 | 0.0646 | 0.0076 | 0.0626 |
| | | 10 | 0.0867 | 0.0080 | -0.0076 | 0.0636 | 0.0076 | 0.0619 |
| | | 12 | 0.0857 | 0.0078 | -0.0075 | 0.0629 | 0.0077 | 0.0615 |
| | | 14 | 0.0850 | 0.0077 | -0.0074 | 0.0623 | 0.0077 | 0.0611 |
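The table above indicates the estimator is insensitive to the choice of K_1: MISE, Bias, and Sd move only in the third decimal place as K_1 grows from 4 to 14. Assuming, as is common in this setting, that K_1 counts cubic B-spline basis functions on the domain of the functional covariate, the following sketch shows how the corresponding design matrix could be built; the function name, the [0, 1] domain, and the equally spaced interior knots are our own illustrative choices, not the paper's construction.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, K1, degree=3, a=0.0, b=1.0):
    """Design matrix of K1 B-spline basis functions of the given degree on [a, b].

    K1 = degree + 1 + (number of interior knots); interior knots equally spaced.
    """
    n_interior = K1 - degree - 1
    interior = np.linspace(a, b, n_interior + 2)[1:-1]
    # Clamped knot vector: endpoints repeated degree + 1 times.
    knots = np.r_[[a] * (degree + 1), interior, [b] * (degree + 1)]
    return BSpline.design_matrix(x, knots, degree).toarray()

grid = np.linspace(0.0, 1.0, 101)
for K1 in (4, 6, 8, 10, 12, 14):   # the K_1 values examined in the table
    B = bspline_design(grid, K1)
    print(K1, B.shape)              # (101, K1): one column per basis function
```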
| Method | Electricity prices, N=100 | Electricity prices, N=200 | Electricity prices, N=400 | Tecator, N=100 | Tecator, N=200 | Tecator, N=400 |
| --- | --- | --- | --- | --- | --- | --- |
| OLS | 0.4269 | 0.4132 | 0.4153 | 2.7824\times10^{-3} | 2.7182\times10^{-3} | 2.7257\times10^{-3} |
| LAD | 0.4094 | 0.4005 | 0.4068 | 2.9388\times10^{-3} | 2.8611\times10^{-3} | 2.8253\times10^{-3} |
| ESL | 0.5751 | 0.5626 | 0.5578 | 2.7268\times10^{-3} | 2.6701\times10^{-3} | 2.6142\times10^{-3} |
| H-ESL | 0.4104 | 0.4026 | 0.4064 | 2.7395\times10^{-3} | 2.7336\times10^{-3} | 2.6476\times10^{-3} |
| AHR(0.25) | 0.4052 | 0.3985 | 0.4032 | 2.9056\times10^{-3} | 2.7658\times10^{-3} | 2.7753\times10^{-3} |
| Huber | 0.4077 | 0.4027 | 0.4074 | 2.7636\times10^{-3} | 2.6491\times10^{-3} | 2.6278\times10^{-3} |
| AHR(0.75) | 0.4296 | 0.4198 | 0.4280 | 2.7409\times10^{-3} | 2.6494\times10^{-3} | 2.6063\times10^{-3} |
| CAHR | 0.4103 | 0.4019 | 0.4089 | 2.8442\times10^{-3} | 2.8106\times10^{-3} | 2.7457\times10^{-3} |
| WCAHR | 0.4030 | 0.3967 | 0.4008 | 2.7236\times10^{-3} | 2.6152\times10^{-3} | 2.5799\times10^{-3} |
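The prediction errors above compare the methods at training sizes N = 100, 200, 400 on the two data sets, presumably by averaging squared prediction error over repeated random training/test splits; WCAHR gives the smallest error in most columns. A minimal scaffold for that protocol is sketched below; the 200-split count and the plain least-squares placeholder are assumptions for illustration, and the paper's estimators (LAD, ESL, H-ESL, AHR, Huber, CAHR, WCAHR) would enter as alternative `fit` functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mspe_over_splits(X, y, fit, n_train, n_splits=200):
    """Average mean squared prediction error over repeated random train/test splits."""
    errs = []
    for _ in range(n_splits):
        idx = rng.permutation(len(y))
        train, test = idx[:n_train], idx[n_train:]
        predict = fit(X[train], y[train])   # `fit` returns a predictor callable
        errs.append(np.mean((y[test] - predict(X[test])) ** 2))
    return float(np.mean(errs))

def ols_fit(X, y):
    """Placeholder least-squares fit with an intercept; swap in any robust method."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return lambda Xnew: np.column_stack([np.ones(len(Xnew)), Xnew]) @ beta

# Example call, given covariates X (n x p) and responses y (n,):
# mspe_over_splits(X, y, ols_fit, n_train=100)
```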
| Method | Electricity prices: \hat{\alpha}_1 | Electricity prices: \hat{\alpha}_2 | Tecator: \hat{\alpha}_1 | Tecator: \hat{\alpha}_2 |
| --- | --- | --- | --- | --- |
| OLS | -0.5983 | -0.4672 | -1.1056 | -0.6894 |
| LAD | -0.7095 | -0.9438 | -1.0828 | -0.7611 |
| ESL | -0.5125 | -0.4629 | -1.0894 | -0.7455 |
| H-ESL | -0.6007 | -0.7212 | -1.0983 | -0.7367 |
| AHR(0.25) | -0.5618 | -0.4725 | -1.1122 | -0.7026 |
| Huber | -0.5799 | -0.4416 | -1.0981 | -0.7235 |
| AHR(0.75) | -0.5812 | -0.4302 | -1.0854 | -0.7576 |
| CAHR | -0.5394 | -0.4582 | -1.0990 | -0.7274 |
| WCAHR | -0.5924 | -0.6182 | -1.0964 | -0.7270 |