Research article

Critical regularity of nonlinearities in semilinear effectively damped wave models

  • In this paper we consider the Cauchy problem for the semilinear effectively damped wave equation

$$u_{tt} - u_{xx} + b(t)\,u_t = |u|^3 \mu(|u|), \qquad u(0,x) = u_0(x), \quad u_t(0,x) = u_1(x).$$

    Our goal is to propose sharp conditions on μ to obtain a threshold between global (in time) existence of small data Sobolev solutions (stability of the zero solution) and blow-up behaviour even of small data Sobolev solutions.

    Citation: Abdelhamid Mohammed Djaouti, Michael Reissig. Critical regularity of nonlinearities in semilinear effectively damped wave models[J]. AIMS Mathematics, 2023, 8(2): 4764-4785. doi: 10.3934/math.2023236




Regression is the most frequently employed technique in nonparametric statistics to examine the association between two variables $X$ and $Y$. In this context, $Y$ represents the response variable, while $X$ is a random vector of predictors (covariates) taking values in $\mathbb{R}$. The regression function at a point $x \in \mathbb{R}$ is the conditional expectation of $Y$ given $X = x$, denoted by

$$r(x) := \mathbb{E}(Y \mid X = x).$$

    Various techniques can be employed to estimate a regression function, including kernel estimators, regression spline methods, and others. Nevertheless, these methods lack robustness as they are highly susceptible to outliers. Given that outliers are commonly observed in various fields, such as finance, it is essential to handle outliers properly to emphasize a dataset's unique features. Robust regression is a statistical technique used to address the issue of lack of robustness in regression models. It ensures that the model remains stable and resistant to the influence of outliers.

Robust regression holds significant importance within the realm of statistics. It is employed to overcome certain constraints of non-robust regression, specifically when the data exhibit heteroscedasticity or include outliers. The earliest significant outcome in this field can be traced back to Huber's work in [1]. This regression estimation method has been extensively researched. For empirical data, notable studies include Robinson [2], Collomb and Härdle [3], Boente and Fraiman [4,5], and Fan et al. [6] for earlier findings. Recent advancements and references can be found in Laib and Ould-Saïd [7] and Boente and Rodriguez [8]. Traditional kernel estimators often exhibit significant bias near boundaries because the kernel's support can extend beyond them, resulting in inaccurate estimates. Since their basis functions are supported exactly on the interval of interest, Bernstein estimators do not suffer from this boundary bias, leading to more accurate estimation near the edges.

    The Bernstein polynomial is widely acknowledged as a valuable tool for interpolating functions on a closed interval, rendering it suitable for approximating density functions within that interval.

The use of Bernstein polynomials as density estimators for variables with finite support has been proposed in several articles. Vitale [9] first introduced this concept, followed by Petrone [10,11]. Further studies on this topic were conducted by Babu et al. [12], Petrone and Wasserman [13], and Kakizawa [14].

    Recently, Ouimet [15] studied some asymptotic properties of Bernstein cumulative distribution function and density estimators on the d-dimensional simplex and studied their asymptotic normality and uniform strong consistency. Belalia et al. [16] introduced a two-stage Bernstein estimator for conditional distribution functions. Various other statistical topics related to the Bernstein estimator have been treated; for more references, see Ouimet [15]. Khardani [17] investigated various asymptotic properties (bias, variance, mean squared error, asymptotic normality, uniform strong consistency) for Bernstein estimators of quantiles and cumulative distribution functions when the variable of interest is subject to random right-censoring.

It is essential to mention that several authors have devised Bernstein-based methodologies for addressing non-parametric function estimation problems. Priestley and Chao [18] first proposed the potential application of Bernstein polynomials for regression problems. Tenbusch [19], Brown and Chen [20], Choudhuri, Ghosal, and Roy [21], Chang, Hsiung, Wu, and Yang [22], Kakizawa [23], and Slaoui and Jmaei [24] have all conducted research on various non-parametric function estimation problems.

In this paper, our contribution is to find asymptotic expressions for the bias, variance, and mean squared error (MSE) of the Bernstein robust regression function estimator defined in (2.3) and (2.4), and also to prove its asymptotic normality and convergence. We deduce the asymptotically optimal bandwidth parameter $m$ from the expression for the MSE as well. The results provided by our Bernstein approach for the robust regression function are better than those of the traditional kernel estimators. In future work, we will investigate other kernels, such as the Dirichlet, Wishart, and inverse Gaussian kernels, combined with the robust function, in other spaces such as the simplex, the space of positive definite matrices, and half-spaces.

    The subsequent sections of the paper are structured in the following manner. In the next section, we will introduce our model. Section 3 presents notations, assumptions, and investigates various asymptotic properties of the proposed estimator. Section 4 presents a simulation study that evaluates the proposed approach's performance compared to the Bernstein-Nadaraya-Watson estimator and the Nadaraya-Watson estimator. Section 5 discusses a real data application, while the proofs of the results are provided in the Appendix.

Let $(X,Y),(X_1,Y_1),\ldots,(X_n,Y_n)$ be independent, identically distributed pairs of random variables with joint density function $g(x,y)$, and let $f$ denote the probability density of $X$, which is supported on $[0,1]$. Let $x$ be a fixed point, and let $\rho$ be a real-valued Borel function satisfying the regularity conditions outlined below. The robust method used to study the link between $X$ and $Y$ belongs to the class of M-estimates introduced by Huber [1]. The robust nonparametric parameter studied in this work, denoted by $\theta_x$, is implicitly defined as the unique minimizer with respect to $t$ of

$$r(x,t) := \mathbb{E}\big(\rho(Y - t) \mid X = x\big), \qquad (2.1)$$

    that is

$$\theta_x = \arg\min_{t \in \mathbb{R}} r(x,t). \qquad (2.2)$$

This definition covers many important nonparametric models: $\rho(t) = t^2$ yields the non-robust (mean) regression, $\rho(t) = |t|$ leads to the conditional median function $m(x) = \mathrm{med}(Y \mid X = x)$, and the $\alpha$-th conditional quantile is obtained by setting $\rho(t) = |t| + (2\alpha - 1)t$. We refer to Stone [25] for other examples of the function $\rho$.
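These choices of $\rho$ can be illustrated with a quick numerical sketch (ours, not from the paper; the toy sample and helper names are assumptions): minimizing the empirical analogue of (2.1) over a grid recovers the mean for $\rho(t) = t^2$ and a median for $\rho(t) = |t|$.

```python
import numpy as np

# Loss functions from the examples above (helper names are ours):
rho_l2 = lambda t: t**2                             # non-robust (mean) regression
rho_l1 = lambda t: np.abs(t)                        # conditional median
rho_quant = lambda t, a: np.abs(t) + (2*a - 1)*t    # alpha-th quantile loss

# Minimizing t -> mean(rho(y - t)) recovers the corresponding functional:
y = np.array([1.0, 2.0, 3.0, 10.0])                 # toy sample with one outlier
grid = np.linspace(0.0, 10.0, 1001)
t_l2 = grid[np.argmin([np.mean(rho_l2(y - t)) for t in grid])]  # the mean, 4.0
t_l1 = grid[np.argmin([np.mean(rho_l1(y - t)) for t in grid])]  # a median, in [2, 3]
```

Note how the absolute loss is far less affected by the outlier 10 than the squared loss: the median stays within the bulk of the sample while the mean is pulled toward the outlier.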

    We utilize the techniques outlined in Vitale [9] and Leblanc [26,27] for distribution and density estimation. Additionally, we refer to the work of Slaoui [28] and Tenbusch [19,29] for non-robust regression. Our objective is to establish a Bernstein estimator for robust regression, defined as

$$\hat\theta_x = \arg\min_{t \in \mathbb{R}} \hat r_n(x,t), \qquad (2.3)$$

at a given point $x \in [0,1]$ such that $f(x) \neq 0$, where

$$\hat r_n(x,t) = \frac{\sum_{i=1}^{n} \rho(Y_i - t) \sum_{k=0}^{m_n-1} \mathbb{1}_{\{k/m_n < X_i \le (k+1)/m_n\}}\, B_k(m_n - 1, x)}{\sum_{i=1}^{n} \sum_{k=0}^{m_n-1} \mathbb{1}_{\{k/m_n < X_i \le (k+1)/m_n\}}\, B_k(m_n - 1, x)} = \frac{N_n(x,t)}{f_n(x)}, \qquad (2.4)$$

where $B_k(m,x) = \binom{m}{k} x^k (1-x)^{m-k}$ is the Bernstein polynomial of order $m$. This estimator can be viewed as a generalization of the estimator proposed in Slaoui and Jmaei [28], with

$$N_n(x,t) = \frac{m_n}{n} \sum_{i=1}^{n} \rho(Y_i - t) \sum_{k=0}^{m_n-1} \mathbb{1}_{\{k/m_n < X_i \le (k+1)/m_n\}}\, B_k(m_n - 1, x),$$

where $f_n$ is Vitale's estimator of the density $f$, defined for all $x \in [0,1]$ by

$$f_n(x) = \frac{m_n}{n} \sum_{i=1}^{n} \sum_{k=0}^{m_n-1} \mathbb{1}_{\{k/m_n < X_i \le (k+1)/m_n\}}\, B_k(m_n-1,x) = m_n \sum_{k=0}^{m_n-1} \left\{F_n\!\left(\frac{k+1}{m_n}\right) - F_n\!\left(\frac{k}{m_n}\right)\right\} B_k(m_n-1,x), \qquad (2.5)$$

where $F_n$ is the empirical distribution function of the variable $X$.
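The estimator (2.3)–(2.4) can be sketched in a few lines of NumPy (a minimal illustration under our own naming; the grid minimization stands in for the exact argmin, and no attempt is made at numerical efficiency):

```python
import numpy as np
from math import comb

def bern_weight(x, xi, m):
    """sum_k 1{k/m < xi <= (k+1)/m} B_k(m-1, x): only the bin containing xi
    contributes, so locate that k and evaluate a single Bernstein polynomial."""
    k = min(max(int(np.ceil(xi * m)) - 1, 0), m - 1)
    return comb(m - 1, k) * x**k * (1.0 - x)**(m - 1 - k)

def r_n(x, t, X, Y, m, rho):
    """Estimator (2.4): Bernstein-weighted average of rho(Y_i - t) at x."""
    w = np.array([bern_weight(x, xi, m) for xi in X])
    return np.dot(rho(Y - t), w) / w.sum()

def theta_n(x, X, Y, m, rho, grid):
    """Estimator (2.3): minimize t -> r_n(x, t) over a grid of candidate values."""
    return grid[np.argmin([r_n(x, t, X, Y, m, rho) for t in grid])]
```

With $\rho(t) = t^2$ this reduces to the Bernstein-Nadaraya-Watson estimator (4.1); a robust fit replaces $\rho$ by, e.g., an absolute or Huber-type loss.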

    This paper will use the following notations:

$$\psi(x) = \big[4\pi x(1-x)\big]^{-1/2},$$
$$\Delta_1(x) = \frac{1}{2}\Big[(1-2x) f'(x) + x(1-x) f''(x)\Big],$$
$$\Delta_2(x) = \frac{1}{2}\left\{(1-2x)\Big(\frac{\partial r}{\partial x}(x,t) f(x) + f'(x) r(x,t)\Big) + x(1-x)\Big(2 f'(x) \frac{\partial r}{\partial x}(x,t) + f(x) \frac{\partial^2 r}{\partial x^2}(x,t) + f''(x) r(x,t)\Big)\right\},$$
$$\Delta(x) = \frac{1}{2}\left\{x(1-x) \frac{\partial^2 r}{\partial x^2}(x,t) + \Big[(1-2x) + 2x(1-x)\frac{f'(x)}{f(x)}\Big] \frac{\partial r}{\partial x}(x,t)\right\},$$
$$\delta_1 = \int_0^1 \Delta^2(x)\,dx, \qquad \delta_2 = \int_0^1 \operatorname{Var}\big[\rho(Y-t) \mid X = x\big]\, \frac{\psi(x)}{f(x)}\, dx.$$

Moreover, we denote by $o_x$ a pointwise bound in $x$ (i.e., the error term is not uniform in $x \in [0,1]$).

    Remark 2.1. Robust regression is advantageous in real data settings where outliers, non-normal errors, or heteroscedasticity are present, making it a more flexible and resilient choice.

    To state our results, we will need to gather some assumptions to make reading our results easier. In what follows, we will assume that the following assumptions hold:

Throughout the paper, $C_1, C_2, C_3$ represent positive constants, while $C$ denotes a generic constant independent of $n$. Let $I_0 := \{x \in [0,1] : f(x) > 0\}$ and let $S$ be a compact subset of $I_0$.

H1: $m_n \ge 2$, $m_n \to +\infty$ and $m_n/n \to 0$ as $n \to +\infty$.

    H2: g(s,t) is twice continuously differentiable with respect to s.

H3: For $q \in \{0,1,2\}$, $s \mapsto \int_{\mathbb{R}} t^q g(s,t)\,dt$ is a bounded function, continuous at $s = x$.

H4: For $q > 2$, $s \mapsto \int_{\mathbb{R}} |t|^q g(s,t)\,dt$ is a bounded function.

H5: The function $\rho(\cdot)$ is a bounded, monotone, differentiable function with bounded derivative.

H6: The functions $r$ and $f$ are continuous, with twice continuous and bounded derivatives, such that $\big|\frac{\partial r}{\partial x}(x,t)\big| \ge C > 0$ for all $x$.

H7: $r(x,\cdot)$ is of class $C^1$ on $[\theta_x - \tau, \theta_x + \tau]$, is uniformly continuous there, and satisfies $\inf_{[\theta_x-\tau,\,\theta_x+\tau]} \big|\frac{\partial r}{\partial t}(x,\cdot)\big| > C_3$.

The assumptions we make are typical for this type of framework. Assumption (H1) is a technical requirement imposed to make proofs more concise. Assumptions (H2)–(H4) are necessary conditions for the estimation of the regression function of the couple $(X,Y)$, as outlined in the works of Nadaraya [30], Watson [31], and Slaoui and Jmaei [28]. These assumptions pertain to the regularity of the density function. The condition (H5) controls the robustness properties of our model. It maintains the same conditions on the function $\rho$ as those provided by Collomb and Härdle [3] and Boente and Rodriguez [8] in the multivariate case. Assumptions (H6) and (H7) deal with some regularity of the function $r(\cdot,\cdot)$. Note that condition (H6) is used to get the asymptotic normality of our estimator, and condition (H7) is somewhat less restrictive than that presented in the literature (see Boente and Fraiman [32], L. Aït Hennani, M. Lemdani, and E. Ould Saïd [33], Attouch et al. [34,35]), needed for the consistency result.

Proposition 3.1. Under Assumptions (H1)–(H5), and for $x \in [0,1]$ such that $f(x) > 0$, we have

$$\mathbb{E}[\hat r_n(x,t)] - r(x,t) = \Delta(x)\, m_n^{-1} + o(m_n^{-1}), \qquad (3.1)$$

$$\operatorname{Var}[\hat r_n(x,t)] =
\begin{cases}
\dfrac{m_n^{1/2}}{n}\, \operatorname{Var}\big[\rho(Y-t) \mid X = x\big]\, \dfrac{\psi(x)}{f(x)} + o_x\!\left(\dfrac{m_n^{1/2}}{n}\right) & \text{for } x \in (0,1),\\[2mm]
\dfrac{m_n}{n}\, \dfrac{\operatorname{Var}\big[\rho(Y-t) \mid X = x\big]}{f(x)} + o_x\!\left(\dfrac{m_n}{n}\right) & \text{for } x = 0, 1,
\end{cases} \qquad (3.2)$$

$$\mathrm{MSE}[\hat r_n(x,t)] =
\begin{cases}
\Delta^2(x)\, m_n^{-2} + \dfrac{m_n^{1/2}}{n}\, \operatorname{Var}\big[\rho(Y-t) \mid X = x\big]\, \dfrac{\psi(x)}{f(x)} + o(m_n^{-2}) + o_x\!\left(\dfrac{m_n^{1/2}}{n}\right) & \text{if } x \in (0,1),\\[2mm]
\Delta^2(x)\, m_n^{-2} + \dfrac{m_n}{n}\, \dfrac{\operatorname{Var}\big[\rho(Y-t) \mid X = x\big]}{f(x)} + o(m_n^{-2}) + o_x\!\left(\dfrac{m_n}{n}\right) & \text{if } x = 0, 1.
\end{cases} \qquad (3.3)$$

To minimize the MSE of $\hat r_n$, for $x \in [0,1]$ such that $f(x) > 0$, the order $m_n$ must be equal to

$$m_{opt} =
\begin{cases}
\left[\dfrac{4\,\Delta^2(x)\, f(x)}{\operatorname{Var}\big[\rho(Y-t) \mid X = x\big]\,\psi(x)}\right]^{2/5} n^{2/5} & \text{if } x \in (0,1),\\[2mm]
\left[\dfrac{2\,\Delta^2(x)\, f(x)}{\operatorname{Var}\big[\rho(Y-t) \mid X = x\big]}\right]^{1/3} n^{1/3} & \text{if } x = 0, 1.
\end{cases}$$

Then,

$$\mathrm{MSE}[\hat r_{n,m_{opt}}(x,t)] =
\begin{cases}
\dfrac{5\,(\Delta(x))^{2/5} \big(\operatorname{Var}\big[\rho(Y-t) \mid X = x\big]\,\psi(x)\big)^{4/5}}{(4 f(x))^{4/5}}\, n^{-4/5} + o(n^{-4/5}) & \text{if } x \in (0,1),\\[2mm]
\dfrac{3\,\big(\Delta(x)\operatorname{Var}\big[\rho(Y-t) \mid X = x\big]\big)^{2/3}}{(2 f(x))^{2/3}}\, n^{-2/3} + o(n^{-2/3}) & \text{if } x = 0, 1.
\end{cases}$$

Theorem 3.1. Under the conditions of Proposition 3.1, we have

$$\hat\theta_x \xrightarrow{\ P\ } \theta_x \quad \text{as } n \to +\infty.$$

    Proposition 3.2. Let Assumptions (H1)–(H7) hold.

1) For $x \in (0,1)$, we have:

i) If $n\, m_n^{-5/2} \to c$ as $n \to +\infty$ for some constant $c \ge 0$, then

$$n^{1/2} m_n^{-1/4}\, \big(\hat r_n(x,t) - r(x,t)\big) \xrightarrow{\ \mathcal{D}\ } \mathcal{N}\!\left(\sqrt{c}\,\Delta(x),\ \operatorname{Var}\big[\rho(Y-t) \mid X = x\big]\, \frac{\psi(x)}{f(x)}\right). \qquad (3.4)$$

ii) If $n\, m_n^{-5/2} \to +\infty$, then

$$m_n\, \big(\hat r_n(x,t) - r(x,t)\big) \xrightarrow{\ P\ } \Delta(x). \qquad (3.5)$$

2) For $x \in \{0,1\}$, we have:

i) If $n\, m_n^{-3} \to c$ for some constant $c \ge 0$, then

$$\sqrt{n/m_n}\;\big(\hat r_n(x,t) - r(x,t)\big) \xrightarrow{\ \mathcal{D}\ } \mathcal{N}\!\left(\sqrt{c}\,\Delta(x),\ \frac{\operatorname{Var}\big[\rho(Y-t) \mid X = x\big]}{f(x)}\right). \qquad (3.6)$$

ii) If $n\, m_n^{-3} \to +\infty$, then

$$m_n\, \big(\hat r_n(x,t) - r(x,t)\big) \xrightarrow{\ P\ } \Delta(x), \qquad (3.7)$$

where $\xrightarrow{\mathcal{D}}$ denotes convergence in distribution, $\mathcal{N}$ the Gaussian distribution, and $\xrightarrow{P}$ convergence in probability.

Theorem 3.2. (The Mean Integrated Squared Error (MISE) of $\hat r_n$.)

Let Assumptions (H1)–(H7) hold. Then, we have

$$\mathrm{MISE}(\hat r_n) = \Lambda_1\, m_n^{-2} + \Lambda_2\, \frac{m_n^{1/2}}{n} + o\!\left(\frac{m_n^{1/2}}{n}\right) + o(m_n^{-2}). \qquad (3.8)$$

Hence, the asymptotically optimal choice of $m$ is

$$m_{opt} = \left[\frac{4\Lambda_1}{\Lambda_2}\right]^{2/5} n^{2/5},$$

for which we get

$$\mathrm{MISE}(\hat r_{n,m_{opt}}) = \frac{5\,\Lambda_1^{1/5} \Lambda_2^{4/5}}{4^{4/5}}\, n^{-4/5} + o(n^{-4/5}).$$

Theorem 3.3. Assume that (H1)–(H7) hold. If $\Gamma(x,\theta_x) = \mathbb{E}\big[\rho'(Y-\theta_x) \mid X = x\big] \neq 0$, then $\hat\theta_x$ exists and is unique with high probability, and we have:

i) when $x \in (0,1)$ and $m_n$ is chosen such that $n\, m_n^{-5/2} \to c$, then

$$n^{1/2} m_n^{-1/4}\, (\hat\theta_x - \theta_x) \xrightarrow{\ \mathcal{D}\ } \mathcal{N}\!\left(\frac{\sqrt{c}\,\Delta(x)}{\Gamma(x,\theta_x)},\ \sigma_1^2(x,\theta_x)\right),$$

ii) when $x \in \{0,1\}$ and $m_n$ is chosen such that $n\, m_n^{-3} \to c$, then

$$\sqrt{n/m_n}\;(\hat\theta_x - \theta_x) \xrightarrow{\ \mathcal{D}\ } \mathcal{N}\!\left(\frac{\sqrt{c}\,\Delta(x)}{\Gamma(x,\theta_x)},\ \sigma_2^2(x,\theta_x)\right),$$

where

$$\sigma_1^2(x,\theta_x) = \frac{\operatorname{Var}\big[\rho(Y-\theta_x) \mid X = x\big]}{f(x)\,\Gamma^2(x,\theta_x)}\,\psi(x), \qquad \sigma_2^2(x,\theta_x) = \frac{\operatorname{Var}\big[\rho(Y-\theta_x) \mid X = x\big]}{f(x)\,\Gamma^2(x,\theta_x)},$$

$\xrightarrow{\mathcal{D}}$ denotes convergence in distribution and $\mathcal{N}$ the Gaussian distribution.

The following corollary follows directly from the previous theorem and provides the weak convergence rate of the estimator $\hat\theta_x$ for $x \in [0,1]$ with $f(x) > 0$, in the case when $m_n$ is chosen such that $n\, m_n^{-5/2} \to 0$ for $x \in (0,1)$ and $n\, m_n^{-3} \to 0$ for $x \in \{0,1\}$.

Corollary 3.1. When $x \in (0,1)$ and $m_n$ is chosen such that $n\, m_n^{-5/2} \to 0$, then

$$n^{1/2} m_n^{-1/4}\, (\hat\theta_x - \theta_x) \xrightarrow{\ \mathcal{D}\ } \mathcal{N}\big(0,\ \sigma_1^2(x,\theta_x)\big).$$

When $x \in \{0,1\}$ and $m_n$ is chosen such that $n\, m_n^{-3} \to 0$, then

$$\sqrt{n/m_n}\;(\hat\theta_x - \theta_x) \xrightarrow{\ \mathcal{D}\ } \mathcal{N}\big(0,\ \sigma_2^2(x,\theta_x)\big),$$

where

$$\sigma_1^2(x,\theta_x) = \frac{\operatorname{Var}\big[\rho(Y-\theta_x) \mid X = x\big]}{f(x)\,\Gamma^2(x,\theta_x)}\,\psi(x), \qquad \sigma_2^2(x,\theta_x) = \frac{\operatorname{Var}\big[\rho(Y-\theta_x) \mid X = x\big]}{f(x)\,\Gamma^2(x,\theta_x)}.$$

This section is divided into two parts: the first shows the behavior of our estimator for some particular regression functions, and the second deals with asymptotic normality.

Consider the regression model

$$Y = r(X) + \varepsilon,$$

where $\varepsilon \sim \mathcal{N}(0,1)$.

A simulation was conducted to compare the proposed estimator $\hat\theta_x$ (robust Bernstein polynomial estimator) with $\hat r_n^{BNW}(x)$ (Bernstein-Nadaraya-Watson estimator), introduced by Slaoui and Jmaei [28] and defined by

$$\hat r_n^{BNW}(x) = \frac{\sum_{i=1}^{n} Y_i \sum_{k=0}^{m_n-1} \mathbb{1}_{\{k/m_n < X_i \le (k+1)/m_n\}}\, B_k(m_n-1,x)}{\sum_{i=1}^{n} \sum_{k=0}^{m_n-1} \mathbb{1}_{\{k/m_n < X_i \le (k+1)/m_n\}}\, B_k(m_n-1,x)}, \qquad (4.1)$$

where $B_k(m,x) = \binom{m}{k} x^k (1-x)^{m-k}$ is the Bernstein polynomial of order $m$, and with $\hat r_n^{NW}(x)$ (Nadaraya-Watson estimator), defined for $x \in \mathbb{R}$ such that $f(x) \neq 0$ by

$$\hat r_n^{NW}(x) = \frac{\sum_{i=1}^{n} Y_i\, K\!\left(\frac{x - X_i}{h}\right)}{\sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right)}, \qquad (4.2)$$

where $K : \mathbb{R} \to \mathbb{R}$ is a nonnegative, continuous, bounded function satisfying $\int_{\mathbb{R}} K(z)\,dz = 1$, $\int_{\mathbb{R}} z K(z)\,dz = 0$ and $\int_{\mathbb{R}} z^2 K(z)\,dz < \infty$, and $h = (h_n)$ is a sequence of positive real numbers that goes to zero.

When using the estimator $\hat r_n^{NW}(x)$, we choose the Gaussian kernel $K(x) = (2\pi)^{-1/2}\exp(-x^2/2)$ and the bandwidth $h_n = m_n^{-1}$.
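The benchmark (4.2) with this Gaussian kernel can be sketched in a few lines (a minimal illustration; the function name is ours):

```python
import numpy as np

def nadaraya_watson(x, X, Y, h):
    """Estimator (4.2) with the Gaussian kernel K(u) = (2*pi)^(-1/2) exp(-u^2/2)."""
    K = np.exp(-0.5 * ((x - X) / h) ** 2) / np.sqrt(2.0 * np.pi)
    return np.dot(Y, K) / K.sum()
```

For a linear regression function and a design symmetric around the evaluation point, the kernel weights are symmetric as well, so the fit is essentially exact in the interior; the boundary bias discussed in the introduction appears only near 0 and 1.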

We consider three sample sizes $n = 20$, $n = 100$, and $n = 500$, four regression functions

$$Y_i = 2X_i + 5 + \varepsilon_i \quad \text{(linear case)}, \qquad Y_i = 2X_i^2 - 1 + \varepsilon_i \quad \text{(parabolic case)},$$
$$Y_i = \sin\!\big(\tfrac{3}{2} X_i\big) + \varepsilon_i \quad \text{(sine case)}, \qquad Y_i = \exp\!\big(2X_i - 3\big) + \varepsilon_i \quad \text{(exponential case)},$$

and three densities of $X$: the truncated standard normal density $\mathcal{N}_{[0,1]}(0,1)$ ($X \in [0,1]$), the exponential density $\mathrm{Exp}(2)$ ($X \in [0,\infty)$), and the standard normal density $\mathcal{N}(0,1)$ ($X \in (-\infty,\infty)$). It is also possible to use the transformations $\tilde X = \frac{X}{1+X}$ or $\tilde X = \frac{1}{2} + \frac{1}{\pi}\tan^{-1}(X)$ to cover the cases of random variables $X$ with support $\mathbb{R}^+$ and $\mathbb{R}$, respectively. These transformations allow the Bernstein polynomials to be applied to smooth the empirical distribution function.

The simulation consists of four parts. In the first three parts, the estimators are compared by their average integrated squared error $\overline{\mathrm{AISE}}$. Every $\overline{\mathrm{AISE}}$ is calculated by a Monte-Carlo simulation with $N = 1000$ repetitions of sample size $n$,

$$\overline{\mathrm{AISE}} = \frac{1}{N} \sum_{k=1}^{N} \mathrm{ISE}[\bar r_k],$$

where $\bar r_k$ is the estimator ($\hat\theta_x$, $\hat r_n^{BNW}(x)$, or $\hat r_n^{NW}(x)$) computed from the $k$-th sample, and

$$\mathrm{ISE}[\bar r_k] = \int_0^1 \big\{\bar r_k(x) - r(x)\big\}^2\, dx.$$
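This Monte-Carlo criterion can be sketched as follows (our own minimal setup, not the paper's: a Nadaraya-Watson fit on the linear model, a Riemann approximation of the integral, and a reduced number of repetitions):

```python
import numpy as np

rng = np.random.default_rng(0)

def nw(x, X, Y, h=0.1):
    """Plug-in estimator used for the illustration (Gaussian kernel)."""
    K = np.exp(-0.5 * ((x - X) / h) ** 2)
    return np.dot(Y, K) / K.sum()

r = lambda x: 2 * x + 5                  # linear case from the text
xs = np.linspace(0.05, 0.95, 46)         # Riemann grid inside (0, 1)

def ise(X, Y):
    """Riemann approximation of the integrated squared error on [0, 1]."""
    vals = np.array([nw(x, X, Y) for x in xs])
    return np.mean((vals - r(xs)) ** 2) * (xs[-1] - xs[0])

N, n = 200, 100                          # repetitions and sample size
scores = []
for _ in range(N):
    X = rng.uniform(0.0, 1.0, n)
    Y = r(X) + rng.standard_normal(n)
    scores.append(ise(X, Y))
aise = float(np.mean(scores))            # Monte-Carlo approximation of AISE
```

Replacing `nw` by the robust Bernstein fit (and adding contaminated observations) reproduces the kind of comparison reported in the tables below.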

According to Figures 1–4, it is evident that the robust Bernstein polynomial estimator converges when $n$ is large. This is observed in all cases.

    Figure 1.  Prediction: linear case.
    Figure 2.  Prediction: parabolic case.
    Figure 3.  Prediction: sine case.
    Figure 4.  Prediction: exponential case.

The $\overline{\mathrm{AISE}}$ of the three estimators is graphed in Figure 5 for parameter values ranging from 1 to 200. The estimators are evaluated for two sample sizes, $n = 20$ and $n = 500$. The outcomes are highly comparable when outlier values are not present. Nevertheless, the analysis of Tables 1–4 demonstrates that both the kernel estimator and the Bernstein-Nadaraya-Watson estimator exhibit significant sensitivity towards outlier values. This heightened sensitivity leads to substantial inaccuracies in predictions. In contrast, our robust Bernstein polynomial estimator consistently sustains its performance irrespective of the quantity of outlier values.

    Figure 5.  ¯AISE over the respective parameters in [1,200] for n=20 and n=500.
Table 1. $\overline{\mathrm{AISE}}$: linear case. Each cell gives $\hat r_n^{BNW}(x)$ / $\hat\theta_x$ / $\hat r_n^{NW}(x)$.

| Density of X | Outlier rate | n=20 | n=100 | n=500 |
|---|---|---|---|---|
| (a) N_[0,1](0,1) | 0.00% | 0.37777 / 0.38362 / 0.37289 | 0.0386 / 0.04134 / 0.0333 | 0.01564 / 0.01684 / 0.00896 |
| | 0.05% | 598.916 / 3.57632 / 690.548 | 678.998 / 2.20528 / 668.378 | 674.569 / 0.18818 / 692.737 |
| | 0.10% | 3016.57 / 5.65957 / 3000.05 | 2620.97 / 3.16347 / 2593.3 | 2676.12 / 0.23244 / 2682.89 |
| | 0.25% | 16083.1 / 14.182 / 15878.1 | 15896.3 / 6.29712 / 15930.6 | 16447.1 / 1.74161 / 16344.1 |
| (b) Exp(2) | 0.00% | 0.35574 / 0.35578 / 0.35517 | 0.05012 / 0.05283 / 0.03747 | 0.01611 / 0.01549 / 0.00794 |
| | 0.05% | 748.855 / 4.2097 / 819.161 | 689.539 / 1.90398 / 692.123 | 683.571 / 0.21493 / 644.829 |
| | 0.10% | 2408.65 / 5.93149 / 2284.35 | 2501.28 / 3.57741 / 2432.78 | 2681.96 / 0.31808 / 2586 |
| | 0.25% | 16174.5 / 21.0217 / 16094.6 | 16228.2 / 6.62117 / 16834.3 | 17422.6 / 1.89746 / 17294.2 |
| (c) N(0,1) | 0.00% | 0.3345 / 0.33847 / 0.31945 | 0.05094 / 0.04832 / 0.03983 | 0.01667 / 0.01593 / 0.00886 |
| | 0.05% | 770.807 / 4.51064 / 822.317 | 675.495 / 2.05097 / 665.198 | 698.8 / 0.15089 / 656.339 |
| | 0.10% | 2746.52 / 7.79586 / 2559.23 | 2436.1 / 3.05173 / 2393.47 | 2497.94 / 0.24955 / 2503.83 |
| | 0.25% | 19178.1 / 18.1898 / 18006 | 16413.5 / 8.0909 / 16893.9 | 17372.6 / 1.75941 / 17495.9 |
Table 2. $\overline{\mathrm{AISE}}$: parabolic case. Each cell gives $\hat r_n^{BNW}(x)$ / $\hat\theta_x$ / $\hat r_n^{NW}(x)$.

| Density of X | Outlier rate | n=20 | n=100 | n=500 |
|---|---|---|---|---|
| (a) N_[0,1](0,1) | 0.00% | 1.48199 / 1.48019 / 1.47376 | 0.16874 / 0.25191 / 0.10863 | 0.04375 / 0.05576 / 0.02462 |
| | 0.05% | 29.1125 / 2.49147 / 29.3273 | 14.8531 / 0.74622 / 16.0768 | 15.7507 / 1.66571 / 22.1559 |
| | 0.10% | 64.0281 / 2.90274 / 73.1002 | 53.5638 / 0.91596 / 49.9706 | 98.0555 / 0.90805 / 67.9633 |
| | 0.25% | 393.474 / 6.88132 / 326.212 | 1050.14 / 3.46544 / 700.888 | 1100.17 / 0.99655 / 751.459 |
| (b) Exp(2) | 0.00% | 1.39141 / 1.47957 / 1.33418 | 0.17829 / 0.22055 / 0.11882 | 0.05203 / 0.05178 / 0.02936 |
| | 0.05% | 25.489 / 2.38298 / 28.3225 | 13.7623 / 0.54491 / 14.1639 | 31.6953 / 1.13268 / 11.5467 |
| | 0.10% | 71.5908 / 2.58522 / 75.7357 | 50.8943 / 1.02112 / 58.2273 | 114.747 / 1.50033 / 54.9639 |
| | 0.25% | 355.306 / 6.60588 / 289.937 | 867.431 / 2.26394 / 454.397 | 1327.96 / 0.47757 / 835.553 |
| (c) N(0,1) | 0.00% | 0.98856 / 1.05261 / 0.97223 | 0.16172 / 0.18081 / 0.09312 | 0.03957 / 0.04478 / 0.02101 |
| | 0.05% | 25.528 / 2.25141 / 32.5343 | 22.6682 / 0.5839 / 15.2797 | 24.7481 / 1.48634 / 15.3258 |
| | 0.10% | 61.7528 / 2.64854 / 85.8806 | 75.1123 / 1.22781 / 45.4987 | 111.539 / 1.81279 / 64.8843 |
| | 0.25% | 469.185 / 9.35168 / 398.637 | 692.756 / 3.46427 / 642.386 | 1131.51 / 0.6867 / 526.238 |
Table 3. $\overline{\mathrm{AISE}}$: sine case. Each cell gives $\hat r_n^{BNW}(x)$ / $\hat\theta_x$ / $\hat r_n^{NW}(x)$.

| Density of X | Outlier rate | n=20 | n=100 | n=500 |
|---|---|---|---|---|
| (a) N_[0,1](0,1) | 0.00% | 0.13301 / 0.12525 / 0.1154 | 0.01527 / 0.01522 / 0.01269 | 0.00414 / 0.00436 / 0.00316 |
| | 0.05% | 19.3998 / 0.28717 / 20.5557 | 12.0525 / 0.12604 / 10.3004 | 9.47988 / 0.1339 / 8.91102 |
| | 0.10% | 58.565 / 0.52241 / 49.1399 | 33.1922 / 0.23475 / 34.4606 | 38.2786 / 0.16123 / 43.7728 |
| | 0.25% | 294.817 / 2.25567 / 177.939 | 266.142 / 0.54954 / 210.994 | 249.554 / 0.30488 / 273.319 |
| (b) Exp(2) | 0.00% | 0.14836 / 0.15054 / 0.12541 | 0.01324 / 0.0144 / 0.01196 | 0.0055 / 0.00545 / 0.00438 |
| | 0.05% | 18.8191 / 0.2807 / 25.6982 | 10.7864 / 0.14311 / 9.74908 | 10.5176 / 0.18333 / 9.51805 |
| | 0.10% | 55.1011 / 0.45925 / 48.4941 | 44.8719 / 0.18583 / 33.2084 | 40.9161 / 0.17936 / 39.3012 |
| | 0.25% | 234.994 / 1.24542 / 189.692 | 251.89 / 0.60327 / 234.829 | 261.372 / 0.34955 / 285.042 |
| (c) N(0,1) | 0.00% | 0.13021 / 0.14029 / 0.12257 | 0.01506 / 0.01511 / 0.01375 | 0.00442 / 0.00442 / 0.00328 |
| | 0.05% | 23.6259 / 0.28918 / 22.6131 | 12.1443 / 0.10116 / 10.529 | 9.82171 / 0.15066 / 9.71418 |
| | 0.10% | 56.098 / 0.4286 / 55.5514 | 36.7151 / 0.22296 / 36.6241 | 35.7651 / 0.1501 / 40.4612 |
| | 0.25% | 247.54 / 1.20312 / 237.361 | 224.049 / 0.50768 / 235.812 | 246.816 / 0.30212 / 276.141 |
Table 4. $\overline{\mathrm{AISE}}$: exponential case. Each cell gives $\hat r_n^{BNW}(x)$ / $\hat\theta_x$ / $\hat r_n^{NW}(x)$.

| Density of X | Outlier rate | n=20 | n=100 | n=500 |
|---|---|---|---|---|
| (a) N_[0,1](0,1) | 0.00% | 0.74703 / 0.61209 / 0.56581 | 0.17137 / 0.17318 / 0.1214 | 0.10734 / 0.08411 / 0.01222 |
| | 0.05% | 1.98561 / 1.35548 / 1.86444 | 1.56581 / 0.60637 / 1.98072 | 5.36892 / 0.08762 / 3.43111 |
| | 0.10% | 5.07457 / 1.236 / 4.75581 | 7.88123 / 0.43664 / 7.38618 | 27.4856 / 0.11352 / 12.9594 |
| | 0.25% | 34.2112 / 2.53764 / 30.5081 | 141.657 / 1.78744 / 166.463 | 366.283 / 0.32839 / 243.809 |
| (b) Exp(2) | 0.00% | 0.52866 / 0.5851 / 0.47474 | 0.10082 / 0.13134 / 0.05472 | 0.03985 / 0.07932 / 0.0121 |
| | 0.05% | 1.79819 / 1.06047 / 1.9109 | 1.31466 / 0.45844 / 1.7421 | 4.25689 / 0.07832 / 4.11877 |
| | 0.10% | 3.47787 / 1.33077 / 4.26777 | 10.9025 / 0.44565 / 11.0732 | 46.7737 / 0.09391 / 31.0859 |
| | 0.25% | 66.6146 / 1.7682 / 51.3565 | 196.824 / 1.64135 / 113.157 | 351.074 / 0.30732 / 317.418 |
| (c) N(0,1) | 0.00% | 0.74933 / 0.71865 / 0.5849 | 0.11883 / 0.19364 / 0.11473 | 0.10082 / 0.11145 / 0.01105 |
| | 0.05% | 1.20693 / 0.61275 / 1.51455 | 1.40167 / 0.15919 / 1.27418 | 7.52656 / 0.0597 / 3.79191 |
| | 0.10% | 3.85005 / 1.01982 / 3.50089 | 14.9906 / 0.41717 / 10.9088 | 39.9271 / 0.11825 / 36.1256 |
| | 0.25% | 26.8219 / 2.48908 / 25.5696 | 145.728 / 1.13974 / 123.627 | 422.568 / 0.33036 / 202.644 |

The objective is to demonstrate the asymptotic normality property in the context of the sine regression model. The equation is

$$Y_i = \sin\!\big(\tfrac{3}{2} X_i\big) + \varepsilon_i.$$

Here $r(x) = \sin\big(\tfrac{3}{2} x\big)$. The data are generated as in the previous subsection. The procedure consists of the following steps: we approximate the regression function $r(x_0)$ by $\hat\theta_{x_0}$ and compute the normalized deviation between this approximation and the theoretical regression function (refer to Theorem 3.3) for $x_0 = 0$, $0.5$, and $1$. Under this scheme, we generate $N$ independent samples of size $n$. Next, we compare the shape of the estimated density of the normalized deviations with the shape of the standard normal density. The following figures and table present the density of $\hat\theta_{x_0}$ as well as the $p$-value of the Shapiro-Wilk normality test. We examine various values of $n$, specifically $n = 20$, $n = 100$, and $n = 500$.

Figures 6–8 and Table 5 show that the distribution of the normalized deviations is close to the standard normal law, illustrating the asymptotic normality established above.

    Figure 6.  Illustration of the asymptotic normal distribution for x0=0.
    Figure 7.  Illustration of the asymptotic normal distribution for x0=0.5.
    Figure 8.  Illustration of the asymptotic normal distribution for x0=1.
Table 5. $p$-value of the Shapiro-Wilk normality test.

| | n=20 | n=100 | n=500 |
|---|---|---|---|
| $x_0 = 0$ | 0.0814 | 0.0968 | 0.1728 |
| $x_0 = 0.5$ | 0.5299 | 0.5734 | 0.6603 |
| $x_0 = 1$ | 0.0611 | 0.0702 | 0.0970 |

    Air pollution significantly affects the lives of individuals in developed nations. The source of this issue is increased levels of smoke produced by industries or vehicles, prompting authorities to search for more efficient methods to regulate air quality in real-time. London is experiencing a significant problem with air pollution exceeding legal and World Health Organisation limits. An example of this is the incident in 2010 when air pollution caused various health problems in the city, leading to a financial cost of around £3.7 billion.

This segment analyzes the mean daily levels of gases detected at the Marylebone Road monitoring station in London. The dataset includes the average daily measurements recorded throughout 2022 for five important variables: Ozone (O3), Nitric Oxides (NO), Nitrogen Dioxide (NO2), Sulphur Dioxide (SO2), and Particulate Matter (PM10). The main objective of our research is to determine the most practical forecasting models for air pollutant concentration. The data used in this analysis was obtained from the specified website: https://www.airqualityengland.co.uk/site/data?site_id=MY1.

    To ensure clarity, let us delineate the mathematical expression representing our prediction objective. Let us consider predicting the daily air pollutant concentration, represented by the variable Y, for 365 days, denoted by X. Formally, we assume that the output variable Y and the input variable X are connected by the following equation:

$$Y_i = r(X_i) + \varepsilon_i \quad \text{for } i \in \{1,\ldots,n\}.$$

    A dependable data-dependent rule for order selection is crucial when estimating an unknown regression function in any practical scenario. A widely used and effective method is cross-validation:

$$CV(m) = \frac{1}{n} \sum_{i=1}^{n} \big(Y_i - \bar r_{-i}(X_i)\big)^2,$$

where $\bar r_{-i}$ is the regression estimate computed without the data point $(X_i, Y_i)$.

In practice, choosing the right degree $m$ for a Bernstein polynomial requires balancing the complexity of the model against how well it fits the data. The smoothing parameter is then chosen as the minimizer of $CV(m)$ over a range of candidate degrees.
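A leave-one-out implementation of this selection rule might look as follows (a sketch under our own naming, shown for the Bernstein-Nadaraya-Watson fit (4.1); the robust fit would replace the weighted mean by the minimizer of the weighted $\rho$-criterion):

```python
import numpy as np
from math import comb

def bnw(x, X, Y, m):
    """Bernstein-Nadaraya-Watson fit (4.1) of degree m at the point x."""
    ks = np.clip(np.ceil(X * m).astype(int) - 1, 0, m - 1)   # bin of each X_i
    w = np.array([comb(m - 1, k) * x**k * (1.0 - x)**(m - 1 - k) for k in ks])
    return np.dot(Y, w) / w.sum()

def cv(m, X, Y):
    """Leave-one-out cross-validation score CV(m)."""
    n = len(X)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        errs.append((Y[i] - bnw(X[i], X[mask], Y[mask], m)) ** 2)
    return float(np.mean(errs))

# The degree is then chosen as the minimizer over a candidate range, e.g.:
# m_opt = min(range(2, 201), key=lambda m: cv(m, X, Y))
```

Smaller degrees give smoother (possibly biased) fits, larger degrees give wigglier (higher-variance) fits; the CV score trades the two off automatically.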

For convenience, we assume that the minimum of the days is 1 and the maximum is 365 (the day data are such that $\min_i X_i = 1$ and $\max_i X_i = 365$). Finally, we used the cross-validation method to obtain the results in Figures 9–13 and Table 6.

    Figure 9.  Prediction: Ozone (O3) case.
    Figure 10.  Prediction: Nitric Oxides (NO) case.
    Figure 11.  Prediction: Nitrogen Dioxide (NO2) case.
    Figure 12.  Prediction: Sulphur Dioxide (SO2) case.
    Figure 13.  Prediction: Particulate Matter (PM10) case.
Table 6. Optimal $m$ for each case.

| | Ozone | Nitric Oxides | Nitrogen Dioxide | Sulphur Dioxide | Particulate Matter |
|---|---|---|---|---|---|
| $\hat r_n^{BNW}(x)$ | 181 | 169 | 197 | 197 | 197 |
| $\hat\theta_x$ | 121 | 149 | 101 | 173 | 181 |

Based on the analysis of Figures 9–13, it is evident that the two estimators are nearly identical, except for the scenario depicted in Figure 10. In this case, the non-robust estimator $\hat r_n^{BNW}(x)$ is found to be sensitive to outliers, which provides evidence of the efficiency of our estimator.

    Based on the information in Table 6, we can infer that the parameter m can be adjusted. It does not need to be equal to n. Instead, we can choose a lower-degree polynomial to achieve a more favorable outcome.

In this paper, we proposed a new robust regression estimator based on Bernstein polynomials. Our contribution extends the work of Slaoui and Jmaei [28] to the case of robust regression. The asymptotic properties of this estimator were established. Afterward, we validated the effectiveness of the proposed method through a simulation study and applied it to real data on air pollution.

We found that, in all the models considered, the average ISE of our robust regression estimator $\hat\theta_x$, defined in (2.3) and (2.4), was the smallest. We also noted that the robust regression provided better results than the non-robust method when outliers were present, in the sense that, even as the sample size increases, the average ISE decreases. To conclude, the use of the robust regression estimator with Bernstein polynomials successfully addressed the edge problem, yielding results comparable to those of the non-robust Bernstein and Nadaraya-Watson estimators in the absence of outliers.

    We believe our research provides a foundational step that can be further developed and expanded. It sets the stage for future work to extend our robust regression estimator using the Bernstein polynomial by considering the interest random variable to be truncated. We also plan to work on the robust regression estimation using Lagrange polynomials.

Sihem Semmar: Conceptualization, data curation, formal analysis, investigation, methodology, software, validation, writing original draft, writing – review & editing; Omar Fetitah and Salah Khardani: Conceptualization, supervision, writing – review & editing; Mohammed Kadi Attouch and Ibrahim M. Almanjahie: Resources, validation, writing – review & editing. All authors have read and approved the final version of the manuscript for publication.

The real data used in this application can be found at this link: https://www.airqualityengland.co.uk/site/data?site_id=MY1

    The authors thank and extend their appreciation to the funder of this work. This work was supported by the Deanship of Scientific Research and Graduate Studies at King Khalid University through the Large Research Groups Project under grant number R.G.P. 2/338/45.

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

In this section, we present proofs of the results in Section 3. First, we recall a series of results, proven in Leblanc [26], linked to sums of Bernstein polynomials, defined by

$$S_{m_n}(x) = \sum_{k=0}^{m_n-1} B_k^2(m_n - 1, x).$$

    These results are given in the following lemma.

Lemma A.1. We have

(i) $0 \le S_{m_n}(x) \le 1$ for all $x \in [0,1]$.

(ii) $S_{m_n}(x) = m_n^{-1/2}\big[\psi(x) + o_x(1)\big]$ for $x \in (0,1)$.

(iii) $S_{m_n}(0) = S_{m_n}(1) = 1$.

(iv) Let $g$ be any continuous function on $[0,1]$. Then $m_n^{1/2} \int_0^1 g(x)\, S_{m_n}(x)\,dx = \int_0^1 g(x)\,\psi(x)\,dx + o(1)$.

    Proof. The proof of this lemma is in Leblanc [26] and Babu et al. [12].
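The exact statements (i) and (iii) and the approximation (ii) are easy to observe numerically (our own verification sketch; `S` and `psi` follow the definitions above, with `m` standing for $m_n$):

```python
import numpy as np
from math import comb

def S(m, x):
    """S_m(x) = sum_{k=0}^{m-1} B_k(m-1, x)^2."""
    return sum((comb(m - 1, k) * x**k * (1.0 - x)**(m - 1 - k)) ** 2
               for k in range(m))

psi = lambda x: (4.0 * np.pi * x * (1.0 - x)) ** (-0.5)

m = 400
# (iii) the boundary values are exactly 1, (i) S stays in [0, 1] inside,
# and (ii) m^{1/2} S_m(x) approaches psi(x) for x in (0, 1).
```

At interior points the squared Bernstein weights behave like squared binomial probabilities, whose sum concentrates at the scale $m^{-1/2}$, which is where the factor $\psi(x)$ in the variance statements comes from.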

    Lemma A.2.

$$\mathbb{E}[N_n(x,t)] - N(x,t) = \Delta_2(x)\, m_n^{-1} + o(m_n^{-1}). \qquad (A.1)$$

    Proof.

$$\mathbb{E}[N_n(x,t)] = m_n\, \mathbb{E}\!\left[\rho(Y-t) \sum_{k=0}^{m_n-1} \mathbb{1}_{\{k/m_n < X \le (k+1)/m_n\}}\, B_k(m_n-1,x)\right] = m_n \sum_{k=0}^{m_n-1} \left(\int_{k/m_n}^{(k+1)/m_n} \left(\int_{\mathbb{R}} \rho(y-t)\, g(z,y)\,dy\right) dz\right) B_k(m_n-1,x) = m_n \sum_{k=0}^{m_n-1} \left(\int_{k/m_n}^{(k+1)/m_n} r(z,t)\, f(z)\,dz\right) B_k(m_n-1,x).$$

    Using a Taylor expansion, we have

$$\begin{aligned}
r(z,t)f(z)={}&\Bigg[r(x,t)+(z-x)\frac{\partial r}{\partial z}(x,t)+\frac{(z-x)^2}{2}\frac{\partial^2r}{\partial z^2}(x,t)+o\big((z-x)^2\big)\Bigg]\\
&\times\Bigg[f(x)+(z-x)f'(x)+\frac{(z-x)^2}{2}f''(x)+o\big((z-x)^2\big)\Bigg]\\
={}&r(x,t)f(x)+(z-x)\Bigg[\frac{\partial r}{\partial z}(x,t)f(x)+r(x,t)f'(x)\Bigg]\\
&+\frac{(z-x)^2}{2}\Bigg[\frac{\partial^2r}{\partial z^2}(x,t)f(x)+f''(x)r(x,t)+2\frac{\partial r}{\partial z}(x,t)f'(x)\Bigg]+o\big((z-x)^2\big),
\end{aligned}$$

    and since N(x,t)=r(x,t)f(x), we obtain

$$\begin{aligned}
E[N_n(x,t)]={}&r(x,t)f(x)\,m_n\sum_{k=0}^{m_n-1}\Bigg(\int_{k/m_n}^{(k+1)/m_n}dz\Bigg)B_k(m_n-1,x)\\
&+\Bigg[\frac{\partial r}{\partial x}(x,t)f(x)+f'(x)r(x,t)\Bigg]\frac{m_n}{2}\sum_{k=0}^{m_n-1}\Bigg\{\Big(\frac{k+1}{m_n}-x\Big)^2-\Big(\frac{k}{m_n}-x\Big)^2\Bigg\}B_k(m_n-1,x)\\
&+\Bigg[2f'(x)\frac{\partial r}{\partial x}(x,t)+f(x)\frac{\partial^2r}{\partial x^2}(x,t)+f''(x)r(x,t)\Bigg]\frac{m_n}{6}\sum_{k=0}^{m_n-1}\Bigg\{\Big(\frac{k+1}{m_n}-x\Big)^3-\Big(\frac{k}{m_n}-x\Big)^3\Bigg\}B_k(m_n-1,x)\,[1+o(1)]\\
={}&N(x,t)+\Bigg[\frac{\partial r}{\partial x}(x,t)f(x)+f'(x)r(x,t)\Bigg]\frac{m_n^{-1}}{2}\sum_{k=0}^{m_n-1}\big(2k+1-2m_nx\big)B_k(m_n-1,x)\\
&+\Bigg[2f'(x)\frac{\partial r}{\partial x}(x,t)+f(x)\frac{\partial^2r}{\partial x^2}(x,t)+f''(x)r(x,t)\Bigg]\frac{m_n^{-2}}{6}\sum_{k=0}^{m_n-1}\big\{3(k-m_nx)^2+3(k-m_nx)+1\big\}B_k(m_n-1,x)\,[1+o(1)]\\
={}&N(x,t)+\Bigg[\frac{\partial r}{\partial x}(x,t)f(x)+f'(x)r(x,t)\Bigg]\frac{m_n^{-1}}{2}\big\{2T_{1,m_n-1}(x)+(1-2x)T_{0,m_n-1}(x)\big\}\\
&+\Bigg[2f'(x)\frac{\partial r}{\partial x}(x,t)+f(x)\frac{\partial^2r}{\partial x^2}(x,t)+f''(x)r(x,t)\Bigg]\frac{m_n^{-2}}{6}\big\{3T_{2,m_n-1}(x)+3(1-2x)T_{1,m_n-1}(x)+(3x^2-3x+1)T_{0,m_n-1}(x)\big\}\,[1+o(1)],
\end{aligned}$$

where $T_{j,m_n-1}(x)$ denote the central moments of the binomial distribution of order $j\in\mathbb{N}$, defined as

$$T_{j,m_n-1}(x)=\sum_{k=0}^{m_n-1}\big(k-(m_n-1)x\big)^jB_k(m_n-1,x),\quad j\in\mathbb{N}.$$

    Note that it is easy to obtain

$$T_{0,m_n-1}(x)=1,\qquad T_{1,m_n-1}(x)=0,\qquad T_{2,m_n-1}(x)=(m_n-1)x(1-x).$$

    Then, we have

$$E[N_n(x,t)]=N(x,t)+\Delta_2(x)\,m_n^{-1}+o(m_n^{-1}).$$ (A.2)
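The three moment identities used above can be verified numerically; here is a small illustrative sketch (with the degree written generically as $m$):

```python
from math import comb

def T(j, m, x):
    # Central moment sum_{k=0}^{m} (k - m*x)^j B_k(m, x) of the Bin(m, x) law
    return sum((k - m * x) ** j * comb(m, k) * x**k * (1 - x)**(m - k)
               for k in range(m + 1))

m, x = 50, 0.3
assert abs(T(0, m, x) - 1.0) < 1e-12            # T_0 = 1
assert abs(T(1, m, x)) < 1e-9                   # T_1 = 0
assert abs(T(2, m, x) - m * x * (1 - x)) < 1e-8 # T_2 = m x (1 - x)
```

With the degree $m_n-1$ substituted for $m$, the last identity gives $T_{2,m_n-1}(x)=(m_n-1)x(1-x)$, as used in the proof.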

Lemma A.3. We have

$$\operatorname{Var}[N_n(x,t)]=\begin{cases}\dfrac{m_n^{1/2}}{n}\,E[(\rho(Y-t))^2\mid X=x]\,f(x)\psi(x)+o_x\Big(\dfrac{m_n^{1/2}}{n}\Big)&\text{for }x\in(0,1),\\[2mm]\dfrac{m_n}{n}\,E[(\rho(Y-t))^2\mid X=x]\,f(x)+o_x\Big(\dfrac{m_n}{n}\Big)&\text{for }x=0,1.\end{cases}$$

    Proof. We have

$$\operatorname{Var}[N_n(x,t)]=E[N_n^2(x,t)]-E^2[N_n(x,t)],$$

    where

$$\begin{aligned}
N_n^2(x,t)={}&\frac{m_n^2}{n^2}\sum_{i=1}^n(\rho(Y_i-t))^2\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X_i\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg)^2\\
&+\frac{m_n^2}{n^2}\sum_{\substack{i,j=1\\ i\ne j}}^{n}\rho(Y_i-t)\rho(Y_j-t)\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X_i\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg)\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X_j\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg).
\end{aligned}$$

    So, we have

$$\begin{aligned}
E[N_n^2(x,t)]&=\frac{m_n^2}{n}E\Bigg[(\rho(Y-t))^2\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg)^2\Bigg]+\frac{m_n^2\,n(n-1)}{n^2}E^2\Bigg[\rho(Y-t)\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg]\\
&=\frac{m_n^2}{n}E\Bigg[(\rho(Y-t))^2\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg)^2\Bigg]+\Big(1-\frac1n\Big)E^2[N_n(x,t)],
\end{aligned}$$

    and

$$\begin{aligned}
\operatorname{Var}[N_n(x,t)]&=\frac{m_n^2}{n}E\Bigg[(\rho(Y-t))^2\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg)^2\Bigg]-\frac1nE^2[N_n(x,t)]\\
&=\frac{m_n^2}{n}E\Bigg[(\rho(Y-t))^2\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X\le\frac{k+1}{m_n}\}}B_k^2(m_n-1,x)\Bigg]-\frac1nE^2[N_n(x,t)]\\
&=\frac{m_n^2}{n}\sum_{k=0}^{m_n-1}\int_{k/m_n}^{(k+1)/m_n}\Bigg(\int_{\mathbb{R}}(\rho(y-t))^2g(z,y)\,dy\Bigg)dz\;B_k^2(m_n-1,x)-\frac1nE^2[N_n(x,t)]\\
&=\frac{m_n^2}{n}\sum_{k=0}^{m_n-1}\Bigg(\int_{k/m_n}^{(k+1)/m_n}E[(\rho(Y-t))^2\mid X=z]f(z)\,dz\Bigg)B_k^2(m_n-1,x)-\frac1nE^2[N_n(x,t)]\\
&=\frac{m_n}{n}E[(\rho(Y-t))^2\mid X=x]f(x)\,S_{m_n-1}(x)-\frac1nE^2[N_n(x,t)].
\end{aligned}$$

Using Lemma A.1 (ii) and (iii), we obtain

$$\operatorname{Var}[N_n(x,t)]=\begin{cases}\dfrac{m_n^{1/2}}{n}\,E[(\rho(Y-t))^2\mid X=x]\,f(x)\psi(x)+o_x\Big(\dfrac{m_n^{1/2}}{n}\Big)&\text{for }x\in(0,1),\\[2mm]\dfrac{m_n}{n}\,E[(\rho(Y-t))^2\mid X=x]\,f(x)+o_x\Big(\dfrac{m_n}{n}\Big)&\text{for }x=0,1.\end{cases}$$ (A.3)

Lemma A.4.

$$\operatorname{Cov}(f_n(x),N_n(x,t))=\begin{cases}\dfrac{m_n^{1/2}}{n}\,r(x,t)f(x)\psi(x)+o_x\Big(\dfrac{m_n^{1/2}}{n}\Big)&\text{for }x\in(0,1),\\[2mm]\dfrac{m_n}{n}\,r(x,t)f(x)+o_x\Big(\dfrac{m_n}{n}\Big)&\text{for }x=0,1.\end{cases}$$ (A.4)

    Proof. We have

$$\begin{aligned}
\operatorname{Cov}(f_n(x),N_n(x,t))&=E[f_n(x)N_n(x,t)]-E[f_n(x)]E[N_n(x,t)]\\
&=\frac{m_n^2}{n}E\Bigg[\rho(Y-t)\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg)^2\Bigg]+\frac{n-1}{n}E[f_n(x)]E[N_n(x,t)]-E[f_n(x)]E[N_n(x,t)]\\
&=\frac{m_n^2}{n}E\Bigg[\rho(Y-t)\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg)^2\Bigg]-\frac1nE[f_n(x)]E[N_n(x,t)]\\
&=\frac{m_n^2}{n}\sum_{k=0}^{m_n-1}\int_{k/m_n}^{(k+1)/m_n}\Bigg(\int_{\mathbb{R}}\rho(y-t)g(z,y)\,dy\Bigg)dz\;B_k^2(m_n-1,x)-\frac1nE[f_n(x)]E[N_n(x,t)]\\
&=\frac{m_n}{n}r(x,t)f(x)S_{m_n-1}(x)-\frac1nE[f_n(x)]E[N_n(x,t)].
\end{aligned}$$

Using Lemma A.1 (ii) and (iii), we get

$$\operatorname{Cov}(f_n(x),N_n(x,t))=\begin{cases}\dfrac{m_n^{1/2}}{n}\,r(x,t)f(x)\psi(x)+o_x\Big(\dfrac{m_n^{1/2}}{n}\Big)&\text{for }x\in(0,1),\\[2mm]\dfrac{m_n}{n}\,r(x,t)f(x)+o_x\Big(\dfrac{m_n}{n}\Big)&\text{for }x=0,1.\end{cases}$$ (A.5)

To obtain the bias of $\hat r_n(x,t)$, we let $h(u,v)=u/v$. Using a Taylor expansion, we have

$$\begin{aligned}
h(u,v)={}&h(u_0,v_0)+(u-u_0)\frac{\partial h}{\partial u}(u_0,v_0)+(v-v_0)\frac{\partial h}{\partial v}(u_0,v_0)\\
&+\frac12\Bigg\{(u-u_0)^2\frac{\partial^2h}{\partial u^2}(u_0,v_0)+(v-v_0)^2\frac{\partial^2h}{\partial v^2}(u_0,v_0)+2(u-u_0)(v-v_0)\frac{\partial^2h}{\partial u\,\partial v}(u_0,v_0)\Bigg\}+o\big(\|(u-u_0,v-v_0)\|^2\big).
\end{aligned}$$

    Then, we have

$$\frac{u}{v}=\frac{u_0}{v_0}+\frac{1}{v_0}(u-u_0)-\frac{u_0}{v_0^2}(v-v_0)+\frac{u_0}{v_0^3}(v-v_0)^2-\frac{1}{v_0^2}(u-u_0)(v-v_0)+o\big((u-u_0)^2+(v-v_0)^2\big).$$
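As a quick numerical sanity check of this second-order expansion of $u/v$ (an illustration, not part of the proof), the residual should be third order in the perturbations:

```python
u0, v0 = 1.0, 2.0          # illustrative base point
du, dv = 1e-3, -2e-3       # small perturbations
exact = (u0 + du) / (v0 + dv)
# Second-order expansion of u/v around (u0, v0), term by term as above
approx = (u0 / v0 + du / v0 - u0 * dv / v0**2
          + u0 * dv**2 / v0**3 - du * dv / v0**2)
print(abs(exact - approx))  # third order in (du, dv)
```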

We now set $(u,v)=(N_n(x,t),f_n(x))$ and $(u_0,v_0)=(N(x,t),f(x))$. Since $\hat r_n(x,t)=N_n(x,t)/f_n(x)$ and $r(x,t)=N(x,t)/f(x)$, we infer that

$$\begin{aligned}
\hat r_n(x,t)={}&r(x,t)-\frac{r(x,t)}{f(x)}\big(f_n(x)-f(x)\big)+\frac{1}{f(x)}\big(N_n(x,t)-N(x,t)\big)+\frac{r(x,t)}{f^2(x)}\big(f_n(x)-f(x)\big)^2\\
&-\frac{1}{f^2(x)}\big(f_n(x)-f(x)\big)\big(N_n(x,t)-N(x,t)\big)+o\Big(\big(N_n(x,t)-N(x,t)\big)^2+\big(f_n(x)-f(x)\big)^2\Big).
\end{aligned}$$

    Then,

$$\begin{aligned}
E[\hat r_n(x,t)]={}&r(x,t)-\frac{r(x,t)}{f(x)}\big(E[f_n(x)]-f(x)\big)+\frac{1}{f(x)}\big(E[N_n(x,t)]-N(x,t)\big)+\frac{r(x,t)}{f^2(x)}E\big[(f_n(x)-f(x))^2\big]\\
&-\frac{1}{f^2(x)}E\big[(f_n(x)-f(x))(N_n(x,t)-N(x,t))\big]+o\Big(E\big[(f_n(x)-f(x))^2\big]+E\big[(f_n(x)-f(x))(N_n(x,t)-N(x,t))\big]\Big).
\end{aligned}$$

Using Vitale's estimator $f_n$, we get

$$E[f_n(x)]=f(x)+\Delta_1(x)\,m_n^{-1}+o(m_n^{-1}),\quad x\in[0,1],$$ (A.6)

    and

$$\operatorname{Var}[f_n(x)]=\begin{cases}\dfrac{m_n^{1/2}}{n}\,f(x)\psi(x)+o_x\Big(\dfrac{m_n^{1/2}}{n}\Big)&\text{for }x\in(0,1),\\[2mm]\dfrac{m_n}{n}\,f(x)+o_x\Big(\dfrac{m_n}{n}\Big)&\text{for }x=0,1.\end{cases}$$ (A.7)

To obtain (3.1) of Proposition 3.1, we use (A.6) and (A.2) to obtain

$$E[\hat r_n(x,t)]=r(x,t)+\Bigg(\frac{\Delta_2(x)}{f(x)}-\frac{r(x,t)\Delta_1(x)}{f(x)}\Bigg)m_n^{-1}+o(m_n^{-1})=r(x,t)+\Delta(x)\,m_n^{-1}+o(m_n^{-1}),\quad x\in[0,1].$$

Now, for the variance of $\hat r_n(x,t)$, we have

$$\operatorname{Var}(\hat r_n(x,t))=\operatorname{Var}\Bigg(r(x,t)-\frac{r(x,t)}{f(x)}\big(f_n(x)-f(x)\big)+\frac{1}{f(x)}\big(N_n(x,t)-N(x,t)\big)\Bigg)[1+o(1)],$$

    which ensures that

$$\operatorname{Var}(\hat r_n(x,t))=\Bigg\{\frac{r^2(x,t)}{f^2(x)}\operatorname{Var}(f_n(x))+\frac{1}{f^2(x)}\operatorname{Var}(N_n(x,t))-\frac{2r(x,t)}{f^2(x)}\operatorname{Cov}(N_n(x,t),f_n(x))\Bigg\}[1+o(1)].$$

So, for $x\in(0,1)$, we have

$$\operatorname{Var}[\hat r_n(x,t)]=\frac{m_n^{1/2}}{n}\,\frac{\operatorname{Var}(\rho(Y-t)\mid X=x)}{f(x)}\,\psi(x)+o_x\Big(\frac{m_n^{1/2}}{n}\Big),$$

and, for $x\in\{0,1\}$, we have

$$\operatorname{Var}[\hat r_n(x,t)]=\frac{m_n}{n}\,\frac{\operatorname{Var}(\rho(Y-t)\mid X=x)}{f(x)}+o_x\Big(\frac{m_n}{n}\Big),$$

which completes the proof of Proposition 3.1.
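For intuition, the estimator $\hat r_n(x,t)=N_n(x,t)/f_n(x)$ analysed in this proposition can be sketched in a few lines. The code below is an illustration only, under choices of ours rather than the paper's (uniform design, $\rho(u)=u$, i.e., the $L_2$ score, so that the root in $t$ of $\hat r_n(x,\cdot)$ estimates the conditional mean):

```python
import numpy as np
from math import comb

def bernstein_basis(m, x):
    # B_k(m, x) for k = 0, ..., m
    return np.array([comb(m, k) * x**k * (1 - x)**(m - k) for k in range(m + 1)])

def r_hat(x, t, X, Y, m, rho=lambda u: u):
    # N_n(x, t) / f_n(x): histogram-type indicators smoothed by the
    # Bernstein weights B_k(m - 1, x), as in the definitions above
    n = len(X)
    B = bernstein_basis(m - 1, x)
    # bin index k such that k/m < X_i <= (k+1)/m
    bins = np.clip(np.ceil(X * m).astype(int) - 1, 0, m - 1)
    w = m / n * B[bins]
    f_n = w.sum()                    # Vitale-type density estimate
    N_n = (rho(Y - t) * w).sum()
    return N_n / f_n

rng = np.random.default_rng(0)
n = 20_000
X = rng.uniform(size=n)
Y = 2 * X + rng.normal(0.0, 0.1, size=n)
print(r_hat(0.5, 0.0, X, Y, m=50))  # approximately E[Y - 0 | X = 0.5] = 1
```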

Without loss of generality, we may suppose that $\rho$ is increasing, the decreasing case being obtained by considering $-\rho$. Since $\rho$ is increasing, $t\mapsto r(x,t)$ is decreasing, and for all $\epsilon>0$,

$$r(x,\theta_x+\epsilon)\le r(x,\theta_x)\le r(x,\theta_x-\epsilon).$$

    Proposition 3.1 shows that

$$\hat r(x,t)\xrightarrow{\;P\;}r(x,t),$$

for all real $t\in[\theta_x-\tau,\theta_x+\tau]$. As $r(x,\theta_x)=0$, for sufficiently large $n$ and for all $\epsilon\le\tau$, this implies

$$\hat r(x,\theta_x+\epsilon)\le0\le\hat r(x,\theta_x-\epsilon)\quad\text{in probability}.$$

Since $\hat r(x,\hat\theta_x)=0$, and by the continuity of $\hat r(x,\cdot)$ on $[\theta_x-\tau,\theta_x+\tau]$, we deduce that

$$\theta_x-\epsilon\le\hat\theta_x\le\theta_x+\epsilon\quad\text{in probability}.$$

On the other hand, since $\theta_x$ and $\hat\theta_x$ are the roots of $r(x,\cdot)$ and $\hat r(x,\cdot)$, respectively, we have

$$\hat r(x,\hat\theta_x)=r(x,\theta_x)=0.$$

Under (H7), and by a first-order Taylor expansion of $r(x,\cdot)$ around $\hat\theta_x$, we have

$$\hat r(x,\hat\theta_x)-r(x,\hat\theta_x)=(\theta_x-\hat\theta_x)\frac{\partial r}{\partial t}(x,\xi_n),$$

where $\xi_n$ lies between $\theta_x$ and $\hat\theta_x$. Hence,

$$|\theta_x-\hat\theta_x|\le\frac{1}{\big|\inf_{x\in S}\frac{\partial r}{\partial t}(x,\xi_n)\big|}\,\big|\hat r(x,\hat\theta_x)-r(x,\hat\theta_x)\big|,$$

    which yields

$$\sup_{x\in S}|\theta_x-\hat\theta_x|\le\frac{1}{C_3}\sup_{x\in S}\big|\hat r(x,\hat\theta_x)-r(x,\hat\theta_x)\big|\le\frac{1}{C_3}\sup_{x\in S}\sup_{t\in[\theta_x-\tau,\theta_x+\tau]}\big|\hat r(x,t)-r(x,t)\big|,$$

and the rest of the proof is a consequence of Proposition 3.1.

From (2.4), we adopt the decomposition

$$\begin{aligned}
\hat r_n(x,t)-r(x,t)&=\frac{1}{f_n(x)}\Big[\big(N_n(x,t)-N(x,t)\big)-r(x,t)\big(f_n(x)-f(x)\big)\Big]\\
&=\frac{1}{f_n(x)}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]+\frac{1}{f_n(x)}\Big[\big(E(N_n(x,t))-N(x,t)\big)-r(x,t)\big(E(f_n(x))-f(x)\big)\Big].
\end{aligned}$$

Lemma A.5. Under Assumptions (H1)–(H3), and for $x\in[0,1]$ such that $f(x)>0$, we have

$$f_n(x)\xrightarrow{\;P\;}f(x).$$ (A.8)

Proof. By (A.6) and (A.7), we have

$$E(f_n(x))-f(x)\longrightarrow0,$$

and

$$\operatorname{Var}(f_n(x))\longrightarrow0.$$

Hence,

$$f_n(x)\xrightarrow{\;P\;}f(x),\quad x\in[0,1].$$

Lemma A.6. Under Assumptions (H1)–(H4), and for $x\in(0,1)$ such that $f(x)>0$, we have:

i) if $m_n$ is chosen such that $nm_n^{-5/2}\to c$ for some constant $c\ge0$, then

$$\frac{n^{1/2}m_n^{-1/4}}{f_n(x)}\Big[\big(E(N_n(x,t))-N(x,t)\big)-r(x,t)\big(E(f_n(x))-f(x)\big)\Big]\xrightarrow{\;P\;}\sqrt{c}\,\Delta(x),$$ (A.9)

ii) if $m_n$ is chosen such that $nm_n^{-5/2}\to\infty$, then

$$\frac{m_n}{f_n(x)}\Big[\big(E(N_n(x,t))-N(x,t)\big)-r(x,t)\big(E(f_n(x))-f(x)\big)\Big]\xrightarrow{\;P\;}\Delta(x).$$ (A.10)

Proof. By (A.2) and (A.6), we have:

    i) if nm5/2nc for some constant c0, then

$$\frac{n^{1/2}m_n^{-1/4}}{f_n(x)}\Big[\big(E(N_n(x,t))-N(x,t)\big)-r(x,t)\big(E(f_n(x))-f(x)\big)\Big]=\frac{n^{1/2}m_n^{-5/4}\big(\Delta_2(x)-r(x,t)\Delta_1(x)+o(1)\big)}{f_n(x)}\xrightarrow{\;P\;}\sqrt{c}\,\Delta(x),$$

    ii) if nm5/2n, then

$$\frac{m_n}{f_n(x)}\Big[\big(E(N_n(x,t))-N(x,t)\big)-r(x,t)\big(E(f_n(x))-f(x)\big)\Big]=\frac{\Delta_2(x)-r(x,t)\Delta_1(x)+o(1)}{f_n(x)}\xrightarrow{\;P\;}\Delta(x).$$

Lemma A.7. Under Assumptions (H1)–(H4), and for $x\in(0,1)$ such that $f(x)>0$, we have

$$n^{1/2}m_n^{-1/4}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]\xrightarrow{\;D\;}\mathcal{N}\big(0,\operatorname{Var}(\rho(Y-t)\mid X=x)\,f(x)\psi(x)\big).$$ (A.11)

    Proof. We write

$$n^{1/2}m_n^{-1/4}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]=\sum_{i=1}^n\big(L_i(x)-E(L_i(x))\big),$$

    where

$$L_i(x)=\frac{m_n^{3/4}}{n^{1/2}}\big(\rho(Y_i-t)-r(x,t)\big)\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X_i\le\frac{k+1}{m_n}\}}B_k(m_n-1,x).$$

The proof of this lemma is based on applying the Lyapunov central limit theorem (Feller [36]) to the $L_i(x)$, i.e., it suffices to show, for some $\delta>0$, that

$$\frac{\sum_{i=1}^nE\big[|L_i(x)-E[L_i(x)]|^{2+\delta}\big]}{\big(\operatorname{Var}\big[\sum_{i=1}^nL_i(x)\big]\big)^{(2+\delta)/2}}\longrightarrow0.$$ (A.12)

    Clearly,

$$\operatorname{Var}\Bigg[\sum_{i=1}^nL_i(x)\Bigg]=\frac{n}{m_n^{1/2}}\operatorname{Var}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]=\frac{n}{m_n^{1/2}}\Big[\operatorname{Var}(N_n(x,t))+r^2(x,t)\operatorname{Var}(f_n(x))-2r(x,t)\operatorname{Cov}(N_n(x,t),f_n(x))\Big].$$

    Hence,

$$\operatorname{Var}\Bigg[\sum_{i=1}^nL_i(x)\Bigg]=\operatorname{Var}(\rho(Y-t)\mid X=x)\,f(x)\psi(x)+o(1).$$

Therefore, to complete the proof of this lemma, it is enough to show that the numerator of (A.12) converges to 0. For this, we use the $C_r$-inequality (cf. Loève [37], page 155) to obtain

$$\sum_{i=1}^nE\big[|L_i(x)-E[L_i(x)]|^{2+\delta}\big]\le C_1\sum_{i=1}^nE\big[|L_i(x)|^{2+\delta}\big]+C_2\sum_{i=1}^n\big|E[L_i(x)]\big|^{2+\delta}.$$

Recall that, because of Assumption (H4) and Lemma A.1 (ii), we have

$$\begin{aligned}
\sum_{i=1}^nE\big[|L_i(x)|^{2+\delta}\big]&=n^{-\delta/2}m_n^{\frac{3\delta}{4}+\frac32}E\Bigg[\big|\rho(Y-t)-r(x,t)\big|^{2+\delta}\Bigg(\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X\le\frac{k+1}{m_n}\}}B_k(m_n-1,x)\Bigg)^{2+\delta}\Bigg]\\
&\le n^{-\delta/2}m_n^{\frac{3\delta}{4}+\frac32}\sum_{k=0}^{m_n-1}\int_{k/m_n}^{(k+1)/m_n}\Bigg(2^{1+\delta}\int_{\mathbb{R}}|\rho(y-t)|^{2+\delta}g(z,y)\,dy+2^{1+\delta}|r(x,t)|^{2+\delta}\Bigg)dz\;B_k^{2+\delta}(m_n-1,x)\\
&\le n^{-\delta/2}m_n^{\frac{3\delta}{4}+\frac32}\sum_{k=0}^{m_n-1}\frac{C}{m_n}B_k^{2+\delta}(m_n-1,x)\le n^{-\delta/2}m_n^{\frac{3\delta}{4}+\frac32}\,C\,m_n^{-\frac32}=C\Bigg(\frac{m_n^{3/2}}{n}\Bigg)^{\delta/2}\longrightarrow0.
\end{aligned}$$

Similarly, for the second term, we get

$$\sum_{i=1}^n\big|E[L_i(x)]\big|^{2+\delta}\le C\Bigg(\frac{m_n^{3/2}}{n}\Bigg)^{\delta/2}\longrightarrow0.$$

Finally, (A.9) in Lemma A.6, Lemma A.7, and Slutsky's theorem complete the proof of (3.4) in Proposition 3.2.

Now, if $nm_n^{-5/2}\to\infty$, we have

$$m_n\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]=\big(n^{-1/2}m_n^{5/4}\big)\,n^{1/2}m_n^{-1/4}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big].$$

Since $n^{-1/2}m_n^{5/4}\to0$, (A.10) in Lemma A.6, Lemma A.7, and Slutsky's theorem complete the proof of (3.5) in Proposition 3.2. The case $x\in\{0,1\}$ of Proposition 3.2 relies on the following two lemmas.

Lemma A.8. Under Assumptions (H1)–(H4), and for $x\in\{0,1\}$ such that $f(x)>0$, we have:

i) if $m_n$ is chosen such that $nm_n^{-3}\to c$ for some constant $c\ge0$, then

$$\frac{n^{1/2}m_n^{-1/2}}{f_n(x)}\Big[\big(E(N_n(x,t))-N(x,t)\big)-r(x,t)\big(E(f_n(x))-f(x)\big)\Big]\xrightarrow{\;P\;}\sqrt{c}\,\Delta(x),$$ (A.13)

ii) if $m_n$ is chosen such that $nm_n^{-3}\to\infty$, then

$$\frac{m_n}{f_n(x)}\Big[\big(E(N_n(x,t))-N(x,t)\big)-r(x,t)\big(E(f_n(x))-f(x)\big)\Big]\xrightarrow{\;P\;}\Delta(x).$$ (A.14)

Proof. The proof of this lemma is analogous to that of Lemma A.6.

Lemma A.9. Under Assumptions (H1)–(H4), and for $x\in\{0,1\}$ such that $f(x)>0$, we have

$$n^{1/2}m_n^{-1/2}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]\xrightarrow{\;D\;}\mathcal{N}\big(0,\operatorname{Var}(\rho(Y-t)\mid X=x)\,f(x)\big).$$ (A.15)

    Proof. We write

$$n^{1/2}m_n^{-1/2}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]=\sum_{i=1}^n\big(L_i(x)-E(L_i(x))\big),$$

    where

$$L_i(x):=\frac{m_n^{1/2}}{n^{1/2}}\big(\rho(Y_i-t)-r(x,t)\big)\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X_i\le\frac{k+1}{m_n}\}}B_k(m_n-1,x).$$

The proof of this lemma is based on applying the Lyapunov central limit theorem (Feller [36]) to the $L_i(x)$. Clearly,

$$\operatorname{Var}\Bigg[\sum_{i=1}^nL_i(x)\Bigg]=\frac{n}{m_n}\operatorname{Var}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]=\frac{n}{m_n}\Big[\operatorname{Var}(N_n(x,t))+r^2(x,t)\operatorname{Var}(f_n(x))-2r(x,t)\operatorname{Cov}(N_n(x,t),f_n(x))\Big].$$

    Hence,

$$\operatorname{Var}\Bigg[\sum_{i=1}^nL_i(x)\Bigg]=\operatorname{Var}(\rho(Y-t)\mid X=x)\,f(x)+o(1).$$

Therefore, to complete the proof of this lemma, we follow the same steps as in the proof of Lemma A.7 and find that

$$\sum_{i=1}^nE\big[|L_i(x)-E[L_i(x)]|^{2+\delta}\big]\le C\Bigg(\frac{m_n}{n}\Bigg)^{\delta/2}\longrightarrow0.$$

Finally, (A.13) in Lemma A.8, Lemma A.9, and Slutsky's theorem complete the proof of (3.6) in Proposition 3.2. Now, if $nm_n^{-3}\to\infty$, we have

$$m_n\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big]=\big(n^{-1/2}m_n^{3/2}\big)\,n^{1/2}m_n^{-1/2}\Big[\big(N_n(x,t)-E(N_n(x,t))\big)-r(x,t)\big(f_n(x)-E(f_n(x))\big)\Big].$$

Since $n^{-1/2}m_n^{3/2}\to0$, (A.14) in Lemma A.8, Lemma A.9, and Slutsky's theorem complete the proof of (3.7) in Proposition 3.2.

First, we have

$$\int_0^1\operatorname{Bias}\big(\hat r_n(x,t)\big)^2dx=\int_0^1\big(E[\hat r_n(x,t)]-r(x,t)\big)^2dx=\int_0^1\Delta^2(x)\,m_n^{-2}\,dx+o\big(m_n^{-2}\big)=\Lambda_1m_n^{-2}+o\big(m_n^{-2}\big).$$

    Moreover, we have

$$\operatorname{Var}\big(\hat r_n(x,t)\big)=\Bigg\{\frac{1}{f^2(x)}\operatorname{Var}(N_n(x,t))+\frac{r^2(x,t)}{f^2(x)}\operatorname{Var}(f_n(x))-\frac{2r(x,t)}{f^2(x)}\operatorname{Cov}(N_n(x,t),f_n(x))\Bigg\}[1+o(1)].$$

    Then,

$$\int_0^1\operatorname{Var}\big(\hat r_n(x,t)\big)dx=\Bigg\{\int_0^1\frac{\operatorname{Var}(N_n(x,t))}{f^2(x)}dx+\int_0^1\frac{r^2(x,t)\operatorname{Var}(f_n(x))}{f^2(x)}dx-2\int_0^1\frac{r(x,t)\operatorname{Cov}(N_n(x,t),f_n(x))}{f^2(x)}dx\Bigg\}[1+o(1)].$$ (A.16)

    First, we have

$$\operatorname{Var}[f_n(x)]=\frac1n\big[A_{m_n}(x)-f_{m_n}^2(x)\big],$$

where $f_{m_n}^2(x)=E^2[f_n(x)]=f^2(x)+O(m_n^{-1})$, and

$$A_{m_n}(x)=m_n^2\sum_{k=0}^{m_n-1}\Bigg[F\Big(\frac{k+1}{m_n}\Big)-F\Big(\frac{k}{m_n}\Big)\Bigg]B_k^2(m_n-1,x)=m_n\big[f(x)S_{m_n-1}(x)+O\big(H_{m_n-1}(x)\big)+O\big(m_n^{-1}\big)\big],$$

for $x\in[0,1]$ and $m_n\ge2$, where

$$H_m(x)=\sum_{k=0}^{m}\Big|\frac{k}{m}-x\Big|B_k^2(m,x)=O_x\big(m^{-3/4}\big).$$

Note that this error term is not uniform in $x$. For this reason, we use the Cauchy–Schwarz inequality to write

$$H_{m_n}(x)\le\Bigg[\sum_{k=0}^{m_n}\Big(\frac{k}{m_n}-x\Big)^2B_k(m_n,x)\Bigg]^{1/2}\Bigg[\sum_{k=0}^{m_n}B_k^3(m_n,x)\Bigg]^{1/2}\le\Bigg[\frac{S_{m_n}(x)}{4m_n}\Bigg]^{1/2},$$ (A.17)

for all $m_n\ge1$ and $x\in[0,1]$, since $0\le B_k(m_n,x)\le1$ and

$$\sum_{k=0}^{m_n}\Big(\frac{k}{m_n}-x\Big)^2B_k(m_n,x)=\frac{x(1-x)}{m_n}\le\frac{1}{4m_n}.$$

Then, starting from (A.17) and applying Jensen's inequality and Lemma A.1 (iv), we have

$$\int_0^1g(x)H_{m_n}(x)\,dx\le\int_0^1g(x)\Bigg[\frac{S_{m_n}(x)}{4m_n}\Bigg]^{1/2}dx\le\Bigg[\int_0^1g(x)\,dx\Bigg]^{1/2}\Bigg[\frac{1}{4m_n}\int_0^1g(x)S_{m_n}(x)\,dx\Bigg]^{1/2}=\Bigg[\int_0^1g(x)\,dx\Bigg]^{1/2}\Bigg[\frac{1}{4m_n^{3/2}}\int_0^1g(x)\psi(x)\,dx+o\big(m_n^{-3/2}\big)\Bigg]^{1/2}=O\big(m_n^{-3/4}\big).$$

    Then, we infer that

$$\begin{aligned}
\int_0^1r^2(x,t)\frac{\operatorname{Var}[f_n(x)]}{f^2(x)}dx&=\frac1n\int_0^1r^2(x,t)\frac{A_{m_n}(x)-f_{m_n}^2(x)}{f^2(x)}dx=\frac1n\Bigg[\int_0^1r^2(x,t)\frac{A_{m_n}(x)}{f^2(x)}dx-\int_0^1r^2(x,t)\,dx\Bigg]+O\Big(\frac{1}{m_nn}\Big)\\
&=\frac{m_n}{n}\Bigg[\int_0^1\frac{r^2(x,t)}{f(x)}\Big(S_{m_n-1}(x)+O\big(H_{m_n-1}(x)\big)+O\big(m_n^{-1}\big)\Big)dx\Bigg]-\frac1n\int_0^1r^2(x,t)\,dx+O\Big(\frac{1}{m_nn}\Big)\\
&=\frac{m_n}{n}\Bigg[\int_0^1\frac{r^2(x,t)}{f(x)}S_{m_n-1}(x)\,dx+O\big(m_n^{-3/4}\big)\Bigg]-\frac1n\int_0^1r^2(x,t)\,dx+O\Big(\frac{1}{m_nn}\Big),
\end{aligned}$$

and, using Lemma A.1 (iv), we have

$$\int_0^1r^2(x,t)\frac{\operatorname{Var}[f_n(x)]}{f^2(x)}dx=\frac{m_n^{1/2}}{n}\int_0^1\frac{r^2(x,t)}{f(x)}\psi(x)\,dx-\frac1n\int_0^1r^2(x,t)\,dx+o\Big(\frac{m_n^{1/2}}{n}\Big)+O\Big(\frac{1}{m_nn}\Big).$$ (A.18)

    Second, we have

$$\begin{aligned}
\operatorname{Cov}[f_n(x),N_n(x,t)]&=\frac1n\Bigg\{m_n^2\sum_{k=0}^{m_n-1}\Bigg(\int_{k/m_n}^{(k+1)/m_n}r(z,t)f(z)\,dz\Bigg)B_k^2(m_n-1,x)-E[f_n(x)]E[N_n(x,t)]\Bigg\}\\
&=\frac{m_n^2}{n}\sum_{k=0}^{m_n-1}\Bigg(\int_{k/m_n}^{(k+1)/m_n}\big[r(x,t)f(x)+O(z-x)\big]dz\Bigg)B_k^2(m_n-1,x)-\frac1nf(x)N(x,t)+O\Big(\frac{1}{m_nn}\Big)\\
&=\frac{m_n}{n}\big[r(x,t)f(x)S_{m_n-1}(x)+O\big(H_{m_n-1}(x)\big)+O\big(m_n^{-1}\big)\big]-\frac1nf(x)N(x,t)+O\Big(\frac{1}{m_nn}\Big).
\end{aligned}$$

Then, using the same argument for $H_{m_n-1}(x)$ as previously, we obtain

$$\begin{aligned}
\int_0^1r(x,t)\frac{\operatorname{Cov}[f_n(x),N_n(x,t)]}{f^2(x)}dx&=\frac{m_n}{n}\Bigg[\int_0^1\frac{r^2(x,t)}{f(x)}S_{m_n-1}(x)\,dx+O\big(m_n^{-3/4}\big)\Bigg]-\frac1n\int_0^1r^2(x,t)\,dx+O\Big(\frac{1}{m_nn}\Big)\\
&=\frac{m_n^{1/2}}{n}\int_0^1\frac{r^2(x,t)}{f(x)}\psi(x)\,dx-\frac1n\int_0^1r^2(x,t)\,dx+o\Big(\frac{m_n^{1/2}}{n}\Big)+O\Big(\frac{1}{m_nn}\Big).
\end{aligned}$$ (A.19)

    Third, we have

$$\begin{aligned}
\operatorname{Var}[N_n(x,t)]&=\frac{m_n^2}{n}\sum_{k=0}^{m_n-1}\Bigg(\int_{k/m_n}^{(k+1)/m_n}E[\rho(Y-t)^2\mid X=z]f(z)\,dz\Bigg)B_k^2(m_n-1,x)-\frac1nE^2[N_n(x,t)]\\
&=\frac{m_n^2}{n}\sum_{k=0}^{m_n-1}\Bigg(\int_{k/m_n}^{(k+1)/m_n}\big[E[\rho(Y-t)^2\mid X=x]f(x)+O(z-x)\big]dz\Bigg)B_k^2(m_n-1,x)-\frac1nN^2(x,t)+O\Big(\frac{1}{m_nn}\Big)\\
&=\frac{m_n}{n}\big[E[\rho(Y-t)^2\mid X=x]f(x)S_{m_n-1}(x)+O\big(H_{m_n-1}(x)\big)+O\big(m_n^{-1}\big)\big]-\frac1nN^2(x,t)+O\Big(\frac{1}{m_nn}\Big).
\end{aligned}$$

    Then,

$$\begin{aligned}
\int_0^1\frac{\operatorname{Var}[N_n(x,t)]}{f^2(x)}dx&=\frac{m_n}{n}\Bigg[\int_0^1\frac{E[\rho(Y-t)^2\mid X=x]}{f(x)}S_{m_n-1}(x)\,dx+O\big(m_n^{-3/4}\big)\Bigg]-\frac1n\int_0^1r^2(x,t)\,dx+O\Big(\frac{1}{m_nn}\Big)\\
&=\frac{m_n^{1/2}}{n}\int_0^1\frac{E[\rho(Y-t)^2\mid X=x]}{f(x)}\psi(x)\,dx-\frac1n\int_0^1r^2(x,t)\,dx+o\Big(\frac{m_n^{1/2}}{n}\Big)+O\Big(\frac{1}{m_nn}\Big).
\end{aligned}$$ (A.20)

    Finally, substituting (A.18), (A.19), and (A.20) into (A.16), we obtain

$$\begin{aligned}
\int_0^1\operatorname{Var}[\hat r_n(x,t)]\,dx&=\Bigg(\int_0^1\frac{E[\rho(Y-t)^2\mid X=x]}{f(x)}\psi(x)\,dx-\int_0^1\frac{E^2[\rho(Y-t)\mid X=x]}{f(x)}\psi(x)\,dx\Bigg)\frac{m_n^{1/2}}{n}+o\Big(\frac{m_n^{1/2}}{n}\Big)\\
&=\int_0^1\frac{E[\rho(Y-t)^2\mid X=x]-E^2[\rho(Y-t)\mid X=x]}{f(x)}\psi(x)\,dx\;\frac{m_n^{1/2}}{n}+o\Big(\frac{m_n^{1/2}}{n}\Big)\\
&=\int_0^1\frac{\operatorname{Var}[\rho(Y-t)\mid X=x]}{f(x)}\psi(x)\,dx\;\frac{m_n^{1/2}}{n}+o\Big(\frac{m_n^{1/2}}{n}\Big).
\end{aligned}$$

Then, we obtain

$$\operatorname{MISE}(\hat r_n)=\int_0^1\Big\{\operatorname{Var}\big(\hat r_n(x,t)\big)+\operatorname{Bias}^2\big(\hat r_n(x,t)\big)\Big\}dx=\Lambda_1m_n^{-2}+\Lambda_2\frac{m_n^{1/2}}{n}+o\Big(\frac{m_n^{1/2}}{n}\Big)+o\big(m_n^{-2}\big).$$

Using a Taylor expansion of order one around $\theta_x$, we get

$$\hat r(x,\hat\theta_x)=\hat r(x,\theta_x)+(\hat\theta_x-\theta_x)\frac{\partial\hat r}{\partial t}(x,\xi_n),$$

with $\xi_n$ between $\hat\theta_x$ and $\theta_x$. Because of the definition of $\hat\theta_x$, we have $\hat r(x,\hat\theta_x)=0$, hence

$$\hat\theta_x-\theta_x=-\frac{\hat r(x,\theta_x)}{\frac{\partial\hat r}{\partial t}(x,\xi_n)}.$$

We will prove that the numerator is asymptotically normal, whereas the denominator converges in probability to $\Gamma(x,\theta_x)$; for that, we will use the following decompositions (recall that $r(x,\theta_x)=0$):

i) When $x\in(0,1)$ and $m_n$ is chosen such that $nm_n^{-5/2}\to c$, then

$$n^{1/2}m_n^{-1/4}\big(\hat\theta_x-\theta_x\big)=-\frac{n^{1/2}m_n^{-1/4}\big[\hat r(x,\theta_x)-r(x,\theta_x)\big]}{\frac{\partial\hat r}{\partial t}(x,\xi_n)}.$$

ii) When $x\in\{0,1\}$ and $m_n$ is chosen such that $nm_n^{-3}\to c$, then

$$n^{1/2}m_n^{-1/2}\big(\hat\theta_x-\theta_x\big)=-\frac{n^{1/2}m_n^{-1/2}\big[\hat r(x,\theta_x)-r(x,\theta_x)\big]}{\frac{\partial\hat r}{\partial t}(x,\xi_n)}.$$

Asymptotic normality then follows from Slutsky's theorem together with Proposition 3.2 applied at $t=\theta_x$: the suitably normalized numerator is asymptotically normal, so it suffices to show that the denominator converges in probability to $\Gamma(x,\theta_x)$ (see Lemma A.10).

Lemma A.10. Under Assumptions (H1)–(H3), and for $x\in[0,1]$ such that $f(x)>0$, we have

$$\frac{\partial\hat r}{\partial t}(x,\xi_n)\xrightarrow{\;P\;}\Gamma(x,\theta_x).$$

    Proof. We explore the following decomposition:

$$\Bigg|\frac{\partial\hat r}{\partial t}(x,\xi_n)-\Gamma(x,\theta_x)\Bigg|\le\Bigg|\frac{\partial\hat r}{\partial t}(x,\xi_n)-\frac{\partial\hat r}{\partial t}(x,\theta_x)\Bigg|+\Bigg|\frac{\partial\hat r}{\partial t}(x,\theta_x)-\Gamma(x,\theta_x)\Bigg|=:J_1(x)+J_2(x).$$ (A.21)

For $J_1(x)$, we write

$$J_1(x)\le\sup_{y\in[a,b]}\Bigg|\frac{\partial\rho}{\partial t}(y-\xi_n)-\frac{\partial\rho}{\partial t}(y-\theta_x)\Bigg|\;\frac{m_n}{f_n(x)\,n}\sum_{i=1}^n\sum_{k=0}^{m_n-1}\mathbb{I}_{\{\frac{k}{m_n}<X_i\le\frac{k+1}{m_n}\}}B_k(m_n-1,x).$$

Because $\partial\rho(y-t)/\partial t$ is uniformly continuous in $t$ at $\theta_x$, the use of Theorem 3.1 and the convergence in probability of $f_n(x)$ to $f(x)$ show that the first term of (A.21) converges in probability to 0. The limit of the second term is obtained by evaluating, separately, the bias and the variance of $\partial\hat r/\partial t(x,\theta_x)$. Clearly, an argument similar to the one invoked for proving (3.1) can be used to obtain that

$$\frac{\partial\hat r}{\partial t}(x,\theta_x)\longrightarrow\Gamma(x,\theta_x)\quad\text{in probability}.$$


