Research article

Terminal value problem for nonlinear parabolic equation with Gaussian white noise

  • In this paper, we study the backward in time problem for a nonlinear parabolic equation with time- and space-dependent coefficients. The main purpose of this paper is to determine the initial condition of nonlinear parabolic equations from noisy observations of the final condition, where the final data are perturbed by Gaussian white noise. We introduce a regularized method to construct an approximate solution, and we establish an upper bound on the rate of convergence of the mean integrated squared error. This article is inspired by the work of Tuan and Nane [1].

    Citation: Vinh Quang Mai, Erkan Nane, Donal O'Regan, Nguyen Huy Tuan. Terminal value problem for nonlinear parabolic equation with Gaussian white noise[J]. Electronic Research Archive, 2022, 30(4): 1374-1413. doi: 10.3934/era.2022072




    The forward problem for parabolic equations is to find the distribution at a later time from a known initial distribution. In geophysical exploration, one is often faced with the problem of determining the temperature distribution in an object, or in some part of the Earth, at a time $t_0>0$ from temperature measurements at a later time $t_1>t_0$. This is the backward in time parabolic problem. Backward parabolic problems arise in several practical areas such as image processing, mathematical finance, and physics (see [2,3]). Let $T$ be a positive number and let $\Omega$ be an open, bounded and connected domain in $\mathbb{R}^d$, $d\ge 1$, with a smooth boundary $\partial\Omega$. In this paper, we consider the question of finding the function $u(x,t)$, $(x,t)\in\Omega\times[0,T]$, satisfying the nonlinear problem

    $$\begin{cases}u_t-\nabla\cdot\big(a(x,t)\nabla u\big)=F(x,t,u(x,t)), & (x,t)\in\Omega\times(0,T),\\ u|_{\partial\Omega}=0, & t\in(0,T),\\ u(x,T)=g(x), & x\in\Omega,\end{cases}\qquad(1.1)$$

    where the functions $a(x,t)$ and $g(x)$ are given, and the source function $F$ will be specified later. Here the coefficient $a(x,t)$ is a $C^1$-smooth function and $0<\overline{m}\le a(x,t)\le M$ for all $(x,t)\in\Omega\times(0,T)$, for some finite constants $\overline{m}$, $M$. The problem is well known to be ill-posed in the sense of Hadamard: a solution corresponding to the data does not always exist, and when it exists, it does not depend continuously on the given data. In fact, small noise in the physical measurements can produce solutions with large errors. Hence, one has to resort to regularization. In the simple case of deterministic noise, Problem (1.1) with $a=1$ and $F=0$ was studied by many authors [4,5,6]. However, in the case of random noise, the analysis of regularization methods is still limited. The problem is to determine the initial temperature function $f$ given a noisy version of the temperature distribution $g$ at time $T$:

    $$g^{\rm obs}_{\delta}(x)=g(x)+\delta\,\xi(x),\qquad(1.2)$$

    where $\delta>0$ is the amplitude of the noise and $\xi$ is Gaussian white noise. In practice, we only observe finitely many noisy coefficients, as follows:

    $$\langle g^{\rm obs}_{\delta},\phi_j\rangle=\langle g,\phi_j\rangle+\delta\,\langle\xi,\phi_j\rangle,\qquad j=\overline{1,N}=1,2,3,\dots,N,\qquad(1.3)$$

    where the natural number $N$ is the number of discrete observations and $\phi_j$ is defined in Section 2. The main goal is to find an approximate solution $\widehat{u}_N(0)$ for $u(0)$ and then investigate the rate of convergence of $\mathbb{E}\|\widehat{u}_N(0)-u(0)\|^2$, which is called the mean integrated squared error (MISE). Here $\mathbb{E}$ denotes the expectation with respect to the distribution of the data in the model (1.2).
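To make the observation model concrete, the following minimal Python sketch simulates the discrete data (1.3) on $\Omega=(0,\pi)$, where the white-noise coefficients $\langle\xi,\phi_j\rangle$ are i.i.d. standard normals; the particular test function $g$ and the numerical values of $N$ and $\delta$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the observation model (1.3): the observed Fourier
# coefficients are <g, phi_j> + delta * xi_j, with xi_j ~ N(0, 1) i.i.d.
def noisy_observations(g_coeffs, delta, rng):
    xi = rng.standard_normal(len(g_coeffs))  # <xi, phi_j>, j = 1..N
    return g_coeffs + delta * xi

rng = np.random.default_rng(0)
N, delta = 50, 1e-2
g_coeffs = np.zeros(N)
g_coeffs[0] = 1.0                            # hypothetical datum g = phi_1
obs = noisy_observations(g_coeffs, delta, rng)
```

Each observed coefficient is then an unbiased, variance-$\delta^2$ estimate of the true coefficient, which is exactly the structure exploited by the MISE analysis below.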

    There are two main approaches to modeling noise in inverse problems. The first approach uses a deterministic framework and assumes that the noise is definite and small. The second approach is based on a statistical point of view and does not require small noise levels. In this paper we adopt the statistical point of view for the backward parabolic equation. Our aim is to reconstruct the initial function from perturbed measurements of the final values in a statistical inverse problem framework. There are many different types of random noise, but we are interested in Gaussian noise here. The models (1.2) and (1.3) were considered in some recent papers; see [7,8,9,10,11]. In signal processing, Gaussian white noise is a random signal of equal intensity at different frequencies, giving it a constant power spectral density; the term is used in physics, acoustic engineering, telecommunications, and statistical forecasting.

    The inverse problem with random noise has a long history. The simplest case of (1.1) is the homogeneous linear parabolic equation of finding the initial data $u_0:=u(x,0)$ that satisfies

    $$\begin{cases}u_t-\Delta u=0, & (x,t)\in\Omega\times(0,T),\\ u|_{\partial\Omega}=0, & t\in(0,T),\\ u(x,T)=g(x), & x\in\Omega.\end{cases}\qquad(1.4)$$

    This equation is a special case of a statistical inverse problem, in which the data are obtained by applying a linear operator to the unknown and adding random noise:

    $$g=Ku_0+\text{``noise''},\qquad(1.5)$$

    where $K$ is a bounded linear operator that does not have a continuous inverse. Formula (1.5) is interpreted as saying that $Ku_0$ deviates from the function $g$ by a random error.

    Problem (1.4) was studied by well-known methods, including the spectral cut-off (also called the truncation method) [7,9,12,13], the Tikhonov method [14], iterative regularization methods [15], the Bayes estimation method [16,17], and the Lavrentiev regularization method [18]. In some of these works, the authors show that the error $\mathbb{E}\|\widehat{u}_N(0)-u(0)\|^2$ tends to zero when $N$ is suitably chosen according to the value of $\delta$ and $\delta\to0$. For more details, we refer the reader to [19].
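The spectral cut-off idea for the linear problem (1.4) can be illustrated by a simplified sketch (this is an illustration on $\Omega=(0,\pi)$ with $\lambda_j=j^2$, not the exact estimator analyzed in the cited works): solving backward multiplies the $j$-th coefficient of the final data by $e^{T\lambda_j}$, so noise in high modes is amplified catastrophically unless those modes are discarded.

```python
import numpy as np

# Spectral cut-off sketch for (1.4) on (0, pi): lambda_j = j^2, and the initial
# coefficient equals e^{T * lambda_j} times the final one, so only the first
# few (stable) modes are kept and the rest are set to zero.
def backward_cutoff(obs_coeffs, T, cutoff):
    j = np.arange(1, len(obs_coeffs) + 1)
    amplifier = np.exp(T * j.astype(float)**2)   # e^{T * lambda_j}
    return np.where(j <= cutoff, amplifier * obs_coeffs, 0.0)

rng = np.random.default_rng(1)
T, delta = 0.1, 1e-3
u0 = np.zeros(30); u0[0] = 1.0                   # hypothetical initial datum
g = np.exp(-T * np.arange(1, 31).astype(float)**2) * u0  # exact final data
obs = g + delta * rng.standard_normal(30)        # noisy observation of g
err_truncated = np.linalg.norm(backward_cutoff(obs, T, cutoff=3) - u0)
err_untruncated = np.linalg.norm(backward_cutoff(obs, T, cutoff=30) - u0)
```

With truncation the reconstruction error stays at the order of the noise level, while without truncation the high-mode noise is blown up by factors as large as $e^{T\lambda_{30}}=e^{90}$.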

    To the best of our knowledge, there are no results for the backward problem for nonlinear parabolic equations with Gaussian white noise. There are two types of difficulty in solving our problem. The first difficulty occurs because the problem is nonlinear, and nonlinear problems with random noise are more difficult since well-known methods cannot be applied directly. The second is the random noise in the data, which makes the problem computationally complex. Computation with random data requires some knowledge of stochastic processes, so one has to work with expectations.

    Very recently, in [20], the authors studied a discrete random model for backward nonlinear parabolic problems. However, the problem considered in [20] is posed on a rectangular domain, which is restrictive in practice. The present paper uses another random model and also gives an approximation of the solution for more general bounded smooth domains $\Omega$. Our task in this paper is to show that the expected error between the solution and the approximate solution converges to zero as $N$ tends to infinity.

    This paper is organized as follows. In Section 2, we give a couple of preliminary results. In Section 3, we explain the ill-posedness of the problem. To help the reader, we divide the problem into three cases under various assumptions on the coefficient $a$ and the source function $F$. Case 1: $a:=a(x,t)$ is a constant and $F$ is a globally Lipschitz function. In Section 4, we study this case and give convergence rates in the $L^2$ and $H^p$ norms for $p>0$. The method here is the well-known spectral method. The main idea is to approximate the final data $g$ by the approximate data and to use this function to establish a regularized problem by the truncation method.

    Case 2: $a:=a(x,t)$ depends on $x$ and $t$ and $F$ is a locally Lipschitz function. This problem is more difficult. In most practical problems, the function $F$ is only locally Lipschitz. The difficulty lies in the fact that the solution cannot be expanded in a Fourier series, and therefore we cannot apply well-known methods to find an approximate solution. In Section 5, we study a new form of the quasi-reversibility method to construct a regularized solution and obtain the convergence rate. Our method is new and very different from the method of Lattes and Lions [21]. We approximate the locally Lipschitz function by a sequence of globally Lipschitz functions and use some new techniques to obtain the convergence rate.

    Case 3: Various assumptions on $F$. In practice there are many functions that are not even locally Lipschitz. Hence our analysis in Section 4 cannot be applied in Section 6. Our method in Section 6 is also a quasi-reversibility method and is very similar to the method in Section 4. However, in Section 6 we do not approximate $F$ as we do in Section 4. This leads to a convergence rate that is better than the one in Section 4. One difficulty in this section is showing the existence and uniqueness of the regularized solution. To prove the existence of the regularized solution, we do not follow the previously mentioned methods. Instead, we use the Faedo–Galerkin method and the compactness method introduced by Lions [22]. To the best of our knowledge, this is the first result where $F$ is not necessarily a locally Lipschitz function. Finally, in Section 7, we give some specific equations to which our method can be applied.

    To give some details on the random model (1.2), we give the following definitions (see [12,19]):

    Definition 2.1. Let $H$ be a Hilbert space. Let $g,g_\delta\in H$ satisfy (1.2). We interpret the equality in the formula

    $$g^{\rm obs}_{\delta}(x)=g(x)+\delta\,\xi(x)$$

    as follows:

    $$\langle g_\delta,\chi\rangle=\langle g,\chi\rangle+\delta\,\langle\xi,\chi\rangle,\qquad\forall\chi\in H,\qquad(2.1)$$

    where $\delta$ is the amplitude of the noise. We also assume that $\xi$ is a zero-mean Gaussian random process indexed by $H$ on a probability space, so that $\langle\xi,\chi\rangle\sim\mathcal{N}(0,\|\chi\|^2_H)$. Moreover, given $\chi_1,\chi_2\in H$, we have

    $$\mathbb{E}\big(\langle\xi,\chi_1\rangle\langle\xi,\chi_2\rangle\big)=\langle\chi_1,\chi_2\rangle.\qquad(2.2)$$

    Definition 2.2. The stochastic error $\xi$ is a Hilbert-space process, i.e., a bounded linear operator $\xi:H\to L^2(\Omega,\mathcal{A},\mathbb{P})$, where $(\Omega,\mathcal{A},\mathbb{P})$ is the underlying probability space and $L^2(\Omega,\mathcal{A},\mathbb{P})$ is the space of all square integrable measurable functions.

    Let us recall that the eigenvalue problem

    $$\begin{cases}-\Delta\phi_j(x)=\lambda_j\phi_j(x), & x\in\Omega,\\ \phi_j(x)=0, & x\in\partial\Omega,\end{cases}\qquad(2.3)$$

    admits a family of eigenvalues $0<\lambda_1\le\lambda_2\le\lambda_3\le\cdots\le\lambda_j\le\cdots$ with $\lambda_j\to\infty$ as $j\to\infty$, and corresponding eigenfunctions $\{\phi_j\}$; see page 335 in [23].

    Next, we introduce the abstract Gevrey class of functions of index $\sigma>0$ (see, e.g., [24]), defined by

    $$W_\sigma=\Big\{v\in L^2(\Omega):\ \sum_{j=1}^{\infty}e^{2\sigma\lambda_j}\big|\langle v,\phi_j\rangle_{L^2(\Omega)}\big|^2<\infty\Big\},$$

    which is a Hilbert space equipped with the inner product

    $$\langle v_1,v_2\rangle_{W_\sigma}:=\big\langle e^{-\sigma\Delta}v_1,\,e^{-\sigma\Delta}v_2\big\rangle_{L^2(\Omega)},\qquad\text{for all }v_1,v_2\in W_\sigma;$$

    its corresponding norm is $\|v\|_{W_\sigma}=\Big(\sum_{j=1}^{\infty}e^{2\sigma\lambda_j}\big|\langle v,\phi_j\rangle_{L^2(\Omega)}\big|^2\Big)^{1/2}<\infty$.

    The ill-posedness of the backward heat equation is well known and has appeared in many previous articles. However, in the random case, we need to give an example to illustrate the ill-posedness. Because of the expectation involved, quantifying the instability of the random model is much more complicated than for the deterministic model. Therefore, we choose a simple setting in which a suitable example can be found. In this section, for a special case of Eq (1.1), we show that the nonlinear parabolic equation with random noise is ill-posed in the sense of Hadamard.

    Theorem 3.1. Problem (1.1) is ill-posed in the special case when $a=1$, $\Omega=(0,\pi)$.

    Proof. Let $\Omega=(0,\pi)$ and $a(x,t)=1$; then $\lambda_N=N^2$. Let us consider the following parabolic equation

    $$\begin{cases}\partial_t V^{\delta,N(\delta)}-\Delta V^{\delta,N(\delta)}(t)=F_0\big(V^{\delta,N(\delta)}(x,t)\big), & 0<t<T,\ x\in(0,\pi),\\ V^{\delta,N(\delta)}(0,t)=V^{\delta,N(\delta)}(\pi,t)=0,\\ V^{\delta,N(\delta)}(x,T)=G^{\delta,N(\delta)}(x),\end{cases}\qquad(3.1)$$

    where F0 is

    $$F_0(v(x))=\sum_{j=1}^{\infty}\frac{e^{-Tj^2}}{2T}\,\langle v,\phi_j\rangle\,\phi_j(x)\qquad(3.2)$$

    for any $v\in L^2(\Omega)$, and $\phi_j(x)=\sqrt{2/\pi}\,\sin(jx)$. Let us choose $G^{\delta,N(\delta)}\in L^2(\Omega)$ such that

    $$G^{\delta,N(\delta)}(x)=\sum_{j=1}^{N(\delta)}\langle g_\delta,\phi_j\rangle\,\phi_j(x),\qquad(3.3)$$

    where gδ is defined by

    $$\langle g_\delta,\phi_j\rangle=\delta\,\langle\xi,\phi_j\rangle,\qquad j=\overline{1,N}=\{j\in\mathbb{N},\ 1\le j\le N\}.\qquad(3.4)$$

    By the usual MISE decomposition which involves a variance term and a bias term, we get

    $$\mathbb{E}\|G^{\delta,N(\delta)}\|^2_{L^2(\Omega)}=\mathbb{E}\Big(\sum_{j=1}^{N(\delta)}\langle G^{\delta,N(\delta)},\phi_j\rangle^2\Big)=\delta^2\,\mathbb{E}\Big(\sum_{j=1}^{N(\delta)}\xi_j^2\Big)=\delta^2 N(\delta).\qquad(3.5)$$

    The solution of Problem (3.1) is given by the Fourier series (see [29])

    $$V^{\delta,N(\delta)}(x,t)=\sum_{j=1}^{\infty}\Big[e^{(T-t)\lambda_j}\langle G^{\delta,N(\delta)},\phi_j\rangle-\int_t^T e^{(s-t)\lambda_j}\big\langle F_0(V^{\delta,N(\delta)}(s)),\phi_j\big\rangle\,ds\Big]\phi_j.\qquad(3.6)$$

    We show that Problem (3.6) has a unique solution $V^{\delta,N(\delta)}\in C([0,T];L^2(\Omega))$. Let us consider

    $$\Phi v:=\sum_{j=1}^{\infty}\Big[e^{(T-t)\lambda_j}\langle G^{\delta,N(\delta)},\phi_j\rangle-\int_t^T e^{(s-t)\lambda_j}\big\langle F_0(v(s)),\phi_j\big\rangle\,ds\Big]\phi_j.\qquad(3.7)$$

    For any $v_1,v_2\in C([0,T];L^2(\Omega))$, using the Hölder inequality, we have, for all $t\in[0,T]$,

    $$\begin{aligned}\|\Phi v_1(t)-\Phi v_2(t)\|^2_{L^2(\Omega)}&=\sum_{j=1}^{\infty}\Big[\int_t^T e^{(s-t)\lambda_j}\big\langle F_0(v_1(s))-F_0(v_2(s)),\phi_j\big\rangle\,ds\Big]^2\\&\le T\sum_{j=1}^{\infty}\int_t^T e^{2(s-t)\lambda_j}\big\langle F_0(v_1(s))-F_0(v_2(s)),\phi_j\big\rangle^2\,ds\\&=\frac{T}{4T^2}\sum_{j=1}^{\infty}\int_t^T e^{2(s-t-T)\lambda_j}\big\langle v_1(s)-v_2(s),\phi_j\big\rangle^2\,ds\\&\le\frac{1}{4T}\sum_{j=1}^{\infty}\int_t^T\big\langle v_1(s)-v_2(s),\phi_j\big\rangle^2\,ds\le\frac14\|v_1-v_2\|^2_{C([0,T];L^2(\Omega))}.\end{aligned}\qquad(3.8)$$

    Hence, we obtain that

    $$\|\Phi v_1-\Phi v_2\|_{C([0,T];L^2(\Omega))}\le\frac12\|v_1-v_2\|_{C([0,T];L^2(\Omega))}.\qquad(3.9)$$

    Thus $\Phi$ is a contraction. By the contraction principle, we conclude that the equation $\Phi(w)=w$ has a unique solution $V^{\delta,N(\delta)}\in C([0,T];L^2(\Omega))$. Using the inequality $a^2+b^2\ge\frac12(a-b)^2$ for $a,b\in\mathbb{R}$, we have the following estimate

    $$\|V^{\delta,N(\delta)}\|^2_{L^2(\Omega)}\ge\underbrace{\frac12\Big\|\sum_{j=1}^{\infty}e^{(T-t)\lambda_j}\langle G^{\delta,N(\delta)},\phi_j\rangle\,\phi_j\Big\|^2_{L^2(\Omega)}}_{I_1}-\underbrace{\Big\|\sum_{j=1}^{\infty}\Big(\int_t^T e^{(s-t)\lambda_j}\big\langle F_0(V^{\delta,N(\delta)}(s)),\phi_j\big\rangle\,ds\Big)\phi_j\Big\|^2_{L^2(\Omega)}}_{I_2}.\qquad(3.10)$$

    First, using Hölder's inequality, we get

    $$\begin{aligned}I_2&=\sum_{j=1}^{\infty}\Big(\int_t^T e^{(s-t)\lambda_j}\big\langle F_0(V^{\delta,N(\delta)}(s)),\phi_j\big\rangle\,ds\Big)^2\le T\sum_{j=1}^{\infty}\int_t^T e^{2(s-t)\lambda_j}\big\langle F_0(V^{\delta,N(\delta)}(s)),\phi_j\big\rangle^2\,ds\\&\le\frac{T}{4T^2}\int_t^T\sum_{j=1}^{\infty}e^{2(s-t-T)\lambda_j}\big\langle V^{\delta,N(\delta)}(s),\phi_j\big\rangle^2\,ds\le\frac14\|V^{\delta,N(\delta)}\|^2_{C([0,T];L^2(\Omega))}.\end{aligned}\qquad(3.11)$$

    We have the lower bound for I1:

    $$\mathbb{E}I_1=\frac12\sum_{j=1}^{N(\delta)}e^{2(T-t)\lambda_j}\,\mathbb{E}\langle G^{\delta,N(\delta)},\phi_j\rangle^2=\frac12\sum_{j=1}^{N(\delta)}\delta^2 e^{2(T-t)\lambda_j}\ge\frac12\delta^2 e^{2(T-t)\lambda_{N(\delta)}}.\qquad(3.12)$$

    Combining (3.10), (3.11), and (3.12), we obtain

    $$\mathbb{E}\|V^{\delta,N(\delta)}\|^2_{L^2(\Omega)}+\frac14\,\mathbb{E}\|V^{\delta,N(\delta)}\|^2_{C([0,T];L^2(\Omega))}\ge\frac12\delta^2 e^{2(T-t)\lambda_{N(\delta)}}.\qquad(3.13)$$

    By taking the supremum of both sides over $[0,T]$, we get

    $$\mathbb{E}\|V^{\delta,N(\delta)}\|^2_{C([0,T];L^2(\Omega))}\ge\frac25\sup_{0\le t\le T}\delta^2 e^{2(T-t)\lambda_{N(\delta)}}=\frac25\delta^2 e^{2T\lambda_{N(\delta)}}=\frac25\delta^2 e^{2TN^2(\delta)}.\qquad(3.14)$$

    Choosing $N:=N(\delta)=\sqrt{\frac{3}{2T}\ln\big(\frac1\delta\big)}$, so that $e^{2TN^2(\delta)}=\delta^{-3}$, we obtain

    $$\mathbb{E}\|G^{\delta,N(\delta)}\|^2_{L^2(\Omega)}=\delta^2 N(\delta)=\delta^2\sqrt{\frac{3}{2T}\ln\Big(\frac1\delta\Big)}\to0,\qquad\text{as }\delta\to0,\qquad(3.15)$$

    and

    $$\mathbb{E}\|V^{\delta,N(\delta)}\|^2_{C([0,T];L^2(\Omega))}\ge\frac25\delta^2 e^{2TN^2(\delta)}=\frac{2}{5\delta}\to+\infty,\qquad\text{as }\delta\to0.\qquad(3.16)$$

    From (3.15) and (3.16), we can conclude that Problem (1.1) is ill-posed.
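The two limits (3.15) and (3.16) can be checked numerically. The sketch below uses one admissible choice, $N(\delta)=\sqrt{(3/(2T))\ln(1/\delta)}$ treated as a real parameter, for which $e^{2TN^2(\delta)}=\delta^{-3}$; the data energy then vanishes while the lower bound on the solution blows up like $2/(5\delta)$.

```python
import numpy as np

# Numerical check of the ill-posedness mechanism in Theorem 3.1, with the
# (illustrative) choice N(delta) = sqrt((3/(2T)) * ln(1/delta)), for which
# e^{2 T N^2} = delta^{-3}: the data energy delta^2 * N(delta) tends to 0
# while the lower bound (2/5) * delta^2 * e^{2 T N^2} = 2/(5*delta) blows up.
def data_energy(delta, T):
    N = np.sqrt(3.0 / (2.0 * T) * np.log(1.0 / delta))
    return delta**2 * N

def blowup_bound(delta, T):
    N2 = 3.0 / (2.0 * T) * np.log(1.0 / delta)
    return 0.4 * delta**2 * np.exp(2.0 * T * N2)  # equals 2 / (5 * delta)

T = 1.0
energies = [data_energy(d, T) for d in (1e-2, 1e-4, 1e-6)]
bounds = [blowup_bound(d, T) for d in (1e-2, 1e-4, 1e-6)]
```

So arbitrarily small data (in mean square) can correspond to arbitrarily large solutions, which is the defining feature of Hadamard ill-posedness here.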

    In this section, we consider the question of finding the function u(x,t), (x,t)Ω×[0,T], that satisfies the problem

    $$\begin{cases}u_t-\Delta u=F(x,t,u(x,t)), & (x,t)\in\Omega\times(0,T),\\ u|_{\partial\Omega}=0, & t\in(0,T),\\ u(x,T)=g(x), & x\in\Omega.\end{cases}\qquad(4.1)$$

    In this section, we assume there exists a constant K>0 with

    $$|F(x,t;u)-F(x,t;v)|\le K|u-v|,$$

    where $(x,t)\in\Omega\times[0,T]$ and $u,v\in\mathbb{R}$.

    Lemma 4.1. Let $\overline{G}^{\delta,N(\delta)}\in L^2(\Omega)$ be such that

    $$\overline{G}^{\delta,N(\delta)}=\sum_{j=1}^{N(\delta)}\langle g^{\rm obs}_{\delta},\phi_j\rangle\,\phi_j.\qquad(4.2)$$

    Assume that $g\in H^{2\gamma}(\Omega)$. Then we have the following estimate

    $$\mathbb{E}\|\overline{G}^{\delta,N(\delta)}-g\|^2_{L^2(\Omega)}\le\delta^2 N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}(\Omega)}\qquad(4.3)$$

    for any $\gamma\ge0$. Here $N$ depends on $\delta$ and satisfies $\lim_{\delta\to0}N(\delta)=+\infty$ and $\lim_{\delta\to0}\delta^2 N(\delta)=0$.

    Remark 4.1. Consider the right-hand side of (4.3). In order for it to converge to zero, we require $\lim_{\delta\to0}\delta^2N(\delta)=0$ and the condition

    $$\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\to0,\qquad\delta\to0.\qquad(4.4)$$

    Since $\lambda_k\asymp k^{2/d}$, we see that

    $$\lambda^{2\gamma}_{N(\delta)}\asymp(N(\delta))^{4\gamma/d},$$

    and to verify the condition (4.4) we need the condition $\lim_{\delta\to0}N(\delta)=+\infty$.

    Proof. For the following proof, we consider the genuine model (1.3). By the usual MISE decomposition, which involves a variance term and a bias term, we get

    $$\mathbb{E}\|\overline{G}^{\delta,N(\delta)}-g\|^2_{L^2(\Omega)}=\mathbb{E}\Big(\sum_{j=1}^{N(\delta)}\langle g^{\rm obs}_{\delta}-g,\phi_j\rangle^2\Big)+\sum_{j\ge N(\delta)+1}\langle g,\phi_j\rangle^2=\delta^2\,\mathbb{E}\Big(\sum_{j=1}^{N(\delta)}\xi_j^2\Big)+\sum_{j\ge N(\delta)+1}\lambda_j^{-2\gamma}\lambda_j^{2\gamma}\langle g,\phi_j\rangle^2.\qquad(4.5)$$

    Since $\xi_j=\langle\xi,\phi_j\rangle\overset{\rm iid}{\sim}\mathcal{N}(0,1)$, it follows that $\mathbb{E}\xi_j^2=1$, so

    $$\mathbb{E}\|\overline{G}^{\delta,N(\delta)}-g\|^2_{L^2(\Omega)}\le\delta^2 N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}.\qquad(4.6)$$
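This variance-plus-bias decomposition can be checked by a quick Monte-Carlo sketch; the coefficient sequence $\langle g,\phi_j\rangle=j^{-2}$ below is an arbitrary hypothetical example, and the bound (4.3) then follows by majorizing the tail term.

```python
import numpy as np

# Monte-Carlo check of the MISE identity behind Lemma 4.1:
# E||G_bar - g||^2 = delta^2 * N + sum_{j > N} <g, phi_j>^2.
rng = np.random.default_rng(2)
J, N, delta, trials = 200, 20, 0.05, 20000

g = 1.0 / np.arange(1, J + 1).astype(float)**2   # hypothetical coefficients
bias2 = float(np.sum(g[N:]**2))                  # squared bias (tail term)
noise = rng.standard_normal((trials, N))         # xi_j for each trial
var_part = np.mean(np.sum((delta * noise)**2, axis=1))
mise = var_part + bias2                          # empirical E||G_bar - g||^2
expected = delta**2 * N + bias2                  # theoretical value
```

The empirical MISE concentrates around $\delta^2N$ plus the tail, illustrating the trade-off that drives the choice of $N(\delta)$.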

    Using the truncation method, we give a regularized problem for Problem (1.1) as follows

    $$\begin{cases}\partial_t u^{\delta}_{N(\delta)}-\Delta u^{\delta}_{N(\delta)}=J_{\alpha_{N(\delta)}}F\big(x,t,u^{\delta}_{N(\delta)}(x,t)\big), & (x,t)\in\Omega\times(0,T),\\ u^{\delta}_{N(\delta)}|_{\partial\Omega}=0, & t\in(0,T),\\ u^{\delta}_{N(\delta)}(x,T)=J_{\alpha_{N(\delta)}}\overline{G}^{\delta,N(\delta)}(x), & x\in\Omega,\end{cases}\qquad(4.7)$$

    where $\alpha_{N(\delta)}$ is a regularization parameter and $J_{\alpha_{N(\delta)}}$ is the following operator

    $$J_{\alpha_{N(\delta)}}v:=\sum_{\lambda_j\le\alpha_{N(\delta)}}\langle v,\phi_j\rangle\,\phi_j,\qquad\text{for all }v\in L^2(\Omega).\qquad(4.8)$$
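On the model domain $\Omega=(0,\pi)$, where $\lambda_j=j^2$, the operator (4.8) is simply a hard truncation of the Fourier coefficients. A minimal sketch (the concrete eigenvalues are an assumption tied to this one-dimensional example):

```python
import numpy as np

# Sketch of J_alpha from (4.8) on (0, pi): keep <v, phi_j> when
# lambda_j = j^2 is at most alpha, and zero out the rest.
def J_alpha(coeffs, alpha):
    j = np.arange(1, len(coeffs) + 1)
    return np.where(j**2 <= alpha, coeffs, 0.0)

v = np.ones(10)
projected = J_alpha(v, alpha=10.0)   # keeps j = 1, 2, 3 since 3^2 <= 10 < 4^2
```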

    Our main result in this section is as follows

    Theorem 4.1. Problem (4.7) has a unique solution $u^{\delta}_{N(\delta)}\in C([0,T];L^2(\Omega))$ which satisfies

    $$u^{\delta}_{N(\delta)}(x,t)=\sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[e^{(T-t)\lambda_j}\langle\overline{G}^{\delta,N(\delta)},\phi_j\rangle-\int_t^T e^{(s-t)\lambda_j}\big\langle F(u^{\delta}_{N(\delta)}(s)),\phi_j\big\rangle\,ds\Big]\phi_j.\qquad(4.9)$$

    Assume that problem (1.1) has a unique solution $u$ such that

    $$\sum_{j=1}^{\infty}\lambda_j^{2\beta}e^{2t\lambda_j}\langle u(\cdot,t),\phi_j\rangle^2<A,\qquad\forall t\in[0,T].\qquad(4.10)$$

    Choose αN(δ) such that

    $$\lim_{\delta\to0}\alpha_{N(\delta)}=+\infty,\qquad\lim_{\delta\to0}\frac{e^{KT\alpha_{N(\delta)}}}{\lambda^{\gamma}_{N(\delta)}}=0,\qquad\lim_{\delta\to0}e^{KT\alpha_{N(\delta)}}\sqrt{N(\delta)}\,\delta=0.\qquad(4.11)$$

    Then the following estimate holds

    $$\mathbb{E}\|u(\cdot,t)-u^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le2e^{2K^2(T-t)}e^{-2t\alpha_{N(\delta)}}\Big[\delta^2N(\delta)e^{2T\alpha_{N(\delta)}}+\frac{e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}+\alpha^{-2\beta}_{N(\delta)}\Big].\qquad(4.12)$$

    Remark 4.2. 1. From the theorem above, it is easy to see that $\mathbb{E}\|u^{\delta}_{N(\delta)}(x,t)-u(x,t)\|^2_{L^2(\Omega)}$ is of the order

    $$e^{-2t\alpha_{N(\delta)}}\max\Big(\delta^2N(\delta)e^{2T\alpha_{N(\delta)}},\ \frac{e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}},\ \alpha^{-2\beta}_{N(\delta)}\Big).\qquad(4.13)$$

    2. Now, we give one example of a choice of $N(\delta)$ which satisfies condition (4.11). Since $\lambda_N\asymp N^{2/d}$ (see [25]), we choose $\alpha_N$ such that $e^{KT\alpha_{N(\delta)}}=|N(\delta)|^{a}$ for any $0<a<\frac{2\gamma}{d}$. Then we have $\alpha_{N(\delta)}=\frac{a}{KT}\log(N(\delta))$. The number $N(\delta)$ is chosen as

    $$N(\delta)=\Big(\frac1\delta\Big)^{\frac{b}{a+\frac{b}{2}}}$$

    for $0<b<1$. With $N(\delta)$ chosen as above, $\mathbb{E}\|u^{\delta}_{N(\delta)}(x,t)-u(x,t)\|^2_{L^2(\Omega)}$ is of the order $\big(\frac1\delta\big)^{-\big(\frac{b}{a+\frac{b}{2}}\big)\frac{at}{KT}}$.

    3. The existence and uniqueness of the solution of Eq (1.1) is an open problem, and we do not investigate this problem here. The case considered in Theorem 3.1 gives the existence of the solution of Problem (1.1) in a special case. The uniqueness of the backward parabolic problem has attracted the attention of many authors (see, for example, [26,27,28]) and this is also a challenging open problem.

    Proof of Theorem 4.1. We divide the proof into a number of parts.

    Part 1. Problem (4.7) has a unique solution $u^{\delta}_{N(\delta)}\in C([0,T];L^2(\Omega))$. The proof is similar to that in [29] (see Theorem 3.1, page 2975 of [29]). Hence, we omit it here.

    Part 2. Estimate the expectation of the error between the exact solution u and the regularized solution uδN(δ).

    Let us consider the following integral equation

    $$v^{\delta}_{N(\delta)}(x,t)=\sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[e^{(T-t)\lambda_j}\langle g,\phi_j\rangle-\int_t^T e^{(s-t)\lambda_j}\big\langle F(v^{\delta}_{N(\delta)}(s)),\phi_j\big\rangle\,ds\Big]\phi_j.\qquad(4.14)$$

    We have

    $$\begin{aligned}\|u^{\delta}_{N(\delta)}(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}&\le2\sum_{\lambda_j\le\alpha_N}e^{2(T-t)\lambda_j}\langle\overline{G}^{\delta,N(\delta)}-g,\phi_j\rangle^2+2\sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[\int_t^T e^{(s-t)\lambda_j}\big(F_j(u^{\delta}_{N(\delta)})(s)-F_j(v^{\delta}_{N(\delta)})(s)\big)\,ds\Big]^2\\&\le2e^{2(T-t)\alpha_N}\sum_{\lambda_j\le\alpha_{N(\delta)}}\langle\overline{G}^{\delta,N(\delta)}-g,\phi_j\rangle^2+2(T-t)\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\sum_{\lambda_j\le\alpha_{N(\delta)}}\big(F_j(u^{\delta}_{N(\delta)})(s)-F_j(v^{\delta}_{N(\delta)})(s)\big)^2\,ds\\&\le2e^{2(T-t)\alpha_N}\|\overline{G}^{\delta,N(\delta)}-g\|^2_{L^2(\Omega)}+2K^2T\int_t^T e^{2(s-t)\alpha_N}\|u^{\delta}_{N(\delta)}(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds,\end{aligned}\qquad(4.15)$$

    where $F_j(w)(s):=\langle F(w(s)),\phi_j\rangle$.

    Taking the expectation of both sides of the last inequality, we get

    $$\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le2e^{2(T-t)\alpha_{N(\delta)}}\,\mathbb{E}\|\overline{G}^{\delta,N(\delta)}-g\|^2_{L^2(\Omega)}+2K^2T\int_t^T e^{2(s-t)\alpha_N}\,\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds.\qquad(4.16)$$

    Multiplying both sides by $e^{2t\alpha_N}$, we obtain

    $$e^{2t\alpha_{N(\delta)}}\,\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le2e^{2T\alpha_{N(\delta)}}\,\mathbb{E}\|\overline{G}^{\delta,N(\delta)}-g\|^2_{L^2(\Omega)}+2K^2T\int_t^T e^{2s\alpha_{N(\delta)}}\,\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds.\qquad(4.17)$$

    Applying Gronwall's inequality, we get

    $$e^{2t\alpha_{N(\delta)}}\,\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le2e^{2T\alpha_{N(\delta)}}e^{2K^2T(T-t)}\,\mathbb{E}\|\overline{G}^{\delta,N(\delta)}-g\|^2_{L^2(\Omega)}.\qquad(4.18)$$

    Hence, using Lemma 4.1, we deduce that

    $$\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le2e^{2K^2T(T-t)}e^{2(T-t)\alpha_{N(\delta)}}\,\mathbb{E}\|\overline{G}^{\delta,N(\delta)}-g\|^2_{L^2(\Omega)}\le2e^{2K^2T(T-t)}e^{2(T-t)\alpha_{N(\delta)}}\Big(\delta^2N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}\Big).\qquad(4.19)$$

    Now, we continue by estimating $\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|_{L^2(\Omega)}$. Indeed, using Hölder's inequality and the globally Lipschitz property of $F$, we get

    $$\begin{aligned}\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}&\le2\sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[\int_t^T e^{(s-t)\lambda_j}\big(F_j(u)(s)-F_j(v^{\delta}_{N(\delta)})(s)\big)\,ds\Big]^2+2\sum_{\lambda_j>\alpha_N}\langle u(t),\phi_j\rangle^2\\&\le2\sum_{\lambda_j>\alpha_N}\lambda_j^{-2\beta}e^{-2t\lambda_j}\,\lambda_j^{2\beta}e^{2t\lambda_j}\langle u(t),\phi_j\rangle^2+2K^2\int_t^T e^{2(s-t)\alpha_N}\|u(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds\\&\le\alpha^{-2\beta}_{N}e^{-2t\alpha_N}\sum_{j=1}^{\infty}\lambda_j^{2\beta}e^{2t\lambda_j}\langle u(t),\phi_j\rangle^2+2K^2\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\|u(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds;\end{aligned}$$

    above, we have used the mild solution representation of $u$, as follows:

    $$u(x,t)=\sum_{j=1}^{\infty}\Big[e^{(T-t)\lambda_j}\langle g,\phi_j\rangle-\int_t^T e^{(s-t)\lambda_j}\big\langle F(u(s)),\phi_j\big\rangle\,ds\Big]\phi_j.$$

    Multiplying both sides by $e^{2t\alpha_{N(\delta)}}$, we obtain

    $$e^{2t\alpha_{N(\delta)}}\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le\alpha^{-2\beta}_{N(\delta)}\sum_{j=1}^{\infty}\lambda_j^{2\beta}e^{2t\lambda_j}\langle u(\cdot,t),\phi_j\rangle^2+2K^2\int_t^T e^{2s\alpha_N}\|u(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds.\qquad(4.20)$$

    Gronwall's inequality implies that

    $$e^{2t\alpha_N}\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le e^{2K^2(T-t)}\alpha^{-2\beta}_{N(\delta)}A.\qquad(4.21)$$

    This, together with the estimate (4.19), leads to

    $$\begin{aligned}\mathbb{E}\|u(\cdot,t)-u^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}&\le2\,\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}+2\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\\&\le4e^{2K^2T(T-t)}e^{2(T-t)\alpha_{N(\delta)}}\Big(\delta^2N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}\Big)+2\alpha^{-2\beta}_{N(\delta)}e^{-2t\alpha_N}e^{2K^2(T-t)}A,\end{aligned}\qquad(4.22)$$

    where A is given in Eq (4.10). This completes our proof.

    The next theorem provides an error estimate in the Sobolev space $H^p(\Omega)$, which is equipped with the norm defined by

    $$\|g\|^2_{H^p(\Omega)}=\sum_{j=1}^{\infty}\lambda_j^{p}\langle g,\phi_j\rangle^2.\qquad(4.23)$$

    To estimate the error in the $H^p$ norm, we need a stronger assumption on the solution $u$.

    Theorem 4.2. Assume that problem (1.1) has a unique solution $u$ such that

    $$\sum_{j=1}^{\infty}e^{2(t+r)\lambda_j}\langle u(\cdot,t),\phi_j\rangle^2<A'',\qquad t\in[0,T],\qquad(4.24)$$

    for any r>0. Choose αN(δ) such that

    $$\lim_{\delta\to0}\alpha_{N(\delta)}=+\infty,\qquad\lim_{\delta\to0}\frac{e^{KT\alpha_{N(\delta)}}}{\lambda^{\gamma}_{N(\delta)}}=0,\qquad\lim_{\delta\to0}e^{KT\alpha_{N(\delta)}}\sqrt{N(\delta)}\,\delta=0.\qquad(4.25)$$

    Then the following estimate holds

    $$\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-u(\cdot,t)\|^2_{H^p(\Omega)}\le2e^{2K^2T(T-t)}e^{-2t\alpha_N}|\alpha_{N(\delta)}|^{p}\Big[2\delta^2N(\delta)e^{2T\alpha_{N(\delta)}}+\frac{2e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}+A''e^{-2r\alpha_{N(\delta)}}\Big]+A''|\alpha_{N(\delta)}|^{p}\exp\big(-2(t+r)\alpha_{N(\delta)}\big).\qquad(4.26)$$

    Proof. First, we have

    $$\begin{aligned}\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-J_{\alpha_{N(\delta)}}u(\cdot,t)\|^2_{H^p(\Omega)}&=\mathbb{E}\Big(\sum_{\lambda_j\le\alpha_{N(\delta)}}\lambda_j^{p}\big\langle u^{\delta}_{N(\delta)}(x,t)-u(x,t),\phi_j\big\rangle^2\Big)\\&\le|\alpha_{N(\delta)}|^{p}\,\mathbb{E}\Big(\sum_{\lambda_j\le\alpha_{N(\delta)}}\big\langle u^{\delta}_{N(\delta)}(x,t)-u(x,t),\phi_j\big\rangle^2\Big)\le|\alpha_{N(\delta)}|^{p}\,\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-u(\cdot,t)\|^2_{L^2(\Omega)}.\end{aligned}\qquad(4.28)$$

    Next, we continue by estimating $\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-u(\cdot,t)\|^2_{L^2(\Omega)}$ under assumption (4.24). Recall $v^{\delta}_{N(\delta)}$ from (4.14). The expectation of the error between $u^{\delta}_{N(\delta)}$ and $v^{\delta}_{N(\delta)}$ is given in the estimate (4.19) as

    $$\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le2e^{2K^2T(T-t)}e^{2(T-t)\alpha_{N(\delta)}}\Big(\delta^2N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}\Big).\qquad(4.29)$$

    We only need to estimate $\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|_{L^2(\Omega)}$. Indeed, using Hölder's inequality and the globally Lipschitz property of $F$, we get

    $$\begin{aligned}\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}&\le2\sum_{\lambda_j>\alpha_N}\langle u(t),\phi_j\rangle^2+2\sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[\int_t^T e^{(s-t)\lambda_j}\big(F_j(u)(s)-F_j(v^{\delta}_{N(\delta)})(s)\big)\,ds\Big]^2\\&\le2\sum_{\lambda_j>\alpha_N}e^{-2(t+r)\lambda_j}\,e^{2(t+r)\lambda_j}\langle u(t),\phi_j\rangle^2+2K^2T\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\|u(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds\\&\le e^{-2(t+r)\alpha_{N(\delta)}}\sum_{j=1}^{\infty}e^{2(t+r)\lambda_j}\langle u(t),\phi_j\rangle^2+2K^2T\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\|u(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds.\end{aligned}$$

    Multiplying both sides by $e^{2t\alpha_{N(\delta)}}$, we obtain

    $$e^{2t\alpha_{N(\delta)}}\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le A''e^{-2r\alpha_{N(\delta)}}+2K^2T\int_t^T e^{2s\alpha_{N(\delta)}}\|u(\cdot,s)-v^{\delta}_{N(\delta)}(\cdot,s)\|^2_{L^2(\Omega)}\,ds.\qquad(4.30)$$

    Gronwall's inequality implies that

    $$e^{2t\alpha_N}\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\le e^{2K^2T(T-t)}A''e^{-2r\alpha_{N(\delta)}}.\qquad(4.31)$$

    This last estimate together with the estimate (4.29) leads to

    $$\begin{aligned}\mathbb{E}\|u(\cdot,t)-u^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}&\le2\,\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}+2\|u(\cdot,t)-v^{\delta}_{N(\delta)}(\cdot,t)\|^2_{L^2(\Omega)}\\&\le4e^{2K^2T(T-t)}e^{2(T-t)\alpha_{N(\delta)}}\Big(\delta^2N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}\Big)+2e^{2K^2T(T-t)}A''e^{-2t\alpha_N}e^{-2r\alpha_{N(\delta)}}\\&=2e^{2K^2T(T-t)}e^{-2t\alpha_N}\Big[2\delta^2N(\delta)e^{2T\alpha_{N(\delta)}}+\frac{2e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}+A''e^{-2r\alpha_{N}}\Big].\end{aligned}\qquad(4.32)$$

    On the other hand, consider the function

    $$G(\xi)=\xi^{p}e^{-D\xi},\qquad D>0.\qquad(4.33)$$

    The derivative of $G$ is $G'(\xi)=\xi^{p-1}e^{-D\xi}(p-D\xi)$. Hence $G$ is strictly decreasing when $D\xi\ge p$. Since $\lim_{\delta\to0}\alpha_{N(\delta)}=+\infty$, we see that if $\delta$ is small enough then $2r\alpha_{N(\delta)}\ge p$. Putting $D=2(t+r)$ and $\xi=\alpha_{N(\delta)}$ in (4.33), we obtain, for $\lambda_j>\alpha_{N(\delta)}$,

    $$G(\lambda_j)=\lambda_j^{p}\exp\big(-2(t+r)\lambda_j\big)\le G(\alpha_{N(\delta)})=|\alpha_{N(\delta)}|^{p}\exp\big(-2(t+r)\alpha_{N(\delta)}\big).$$

    The latter inequality leads to

    $$\begin{aligned}\|u(\cdot,t)-J_{\alpha_{N(\delta)}}u(\cdot,t)\|^2_{H^p(\Omega)}&=\sum_{\lambda_j>\alpha_{N(\delta)}}\lambda_j^{p}\langle u(x,t),\phi_j\rangle^2=\sum_{\lambda_j>\alpha_{N(\delta)}}\lambda_j^{p}\exp\big(-2(t+r)\lambda_j\big)\exp\big(2(t+r)\lambda_j\big)\langle u(x,t),\phi_j\rangle^2\\&\le|\alpha_{N(\delta)}|^{p}\exp\big(-2(t+r)\alpha_{N(\delta)}\big)\sum_{\lambda_j>\alpha_{N(\delta)}}\exp\big(2(t+r)\lambda_j\big)\langle u(x,t),\phi_j\rangle^2\le A''|\alpha_{N(\delta)}|^{p}\exp\big(-2(t+r)\alpha_{N(\delta)}\big),\end{aligned}\qquad(4.34)$$

    where we use assumption (4.24) for the last inequality. Combining (4.28), (4.32) and (4.34), we deduce that

    $$\begin{aligned}\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-u(\cdot,t)\|^2_{H^p(\Omega)}&\le\mathbb{E}\|u^{\delta}_{N(\delta)}(\cdot,t)-J_{\alpha_{N(\delta)}}u(\cdot,t)\|^2_{H^p(\Omega)}+\|u(\cdot,t)-J_{\alpha_{N(\delta)}}u(\cdot,t)\|^2_{H^p(\Omega)}\\&\le2e^{2K^2T(T-t)}e^{-2t\alpha_N}|\alpha_{N(\delta)}|^{p}\Big[2\delta^2N(\delta)e^{2T\alpha_{N(\delta)}}+\frac{2e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}+A''e^{-2r\alpha_{N}}\Big]\\&\qquad+A''|\alpha_{N(\delta)}|^{p}\exp\big(-2(t+r)\alpha_{N(\delta)}\big),\end{aligned}\qquad(4.35)$$

    which completes the proof.

    Remark 4.3. In the above theorem, to obtain the error estimate, we require strong assumptions on $u$. This is a limitation of Theorem 4.1, because only certain types of functions $u$ satisfy these conditions. To remove this limitation, we need to find a new estimator. Obtaining the convergence rate under weak assumptions on $u$ is a difficult problem. Indeed, in the next theorem, we give a regularization result under a weaker assumption on $u$, namely $u\in C([0,T];L^2(\Omega))$. This is one of the first results in this direction.

    To help the reader, we describe our analysis and methods in this subsection. To obtain the approximate solution when the solution $u$ is only in $C([0,T];L^2(\Omega))$, we do not use the regularized solution of Theorem 4.1. Since $\overline{G}^{\delta,N(\delta)}$ is an approximation of $g$ constructed from the observed data, it can be called the "input data". Recall that $K$ is the Lipschitz constant of $F$. We divide our results in Theorem 4.3 into two cases:

    Case 1: $KT<1$. Using the input data $\overline{G}^{\delta,N(\delta)}$, we construct a new regularized solution. Then we obtain the error between the new regularized solution and the sought solution $u$.

    Case 2: $KT>1$. In this case, the construction of the regularized solution is more difficult. To apply the result of Case 1, we need to divide $[0,T]$ into a collection of sub-intervals $[T_{h_1},T_{h_2}]$ with $K(T_{h_2}-T_{h_1})<1$. From a given input function $f$ and an appropriate regularization parameter $\zeta$, we define the output function $Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(x,t)$ as the solution of the nonlinear integral equation (4.37). The existence of $Y^{\zeta}_{T_{h_1},T_{h_2}}(f)$ in $C([T_{h_1},T_{h_2}];L^2(\Omega))$ holds if $K(T_{h_2}-T_{h_1})<1$. From (4.56), we have an important result: if $\zeta$ is suitably chosen and $f$ is an approximation of $u(x,T_{h_2})$, then the function $Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(x,t)$ is an approximate solution of the sought solution $u$ on the whole interval $[T_{h_1},T_{h_2}]$. Let $s$ be a positive integer such that $s>KT$. Define a sequence of points $\{T_l\}$, $l=0,1,\dots,2s$, such that

    $$T_0=0<T_1=\overline{h}T<T_2=2\overline{h}T<\cdots<T_{2s}=2s\overline{h}T=T,\qquad(4.36)$$

    where $\overline{h}=\frac{1}{2s}$. On each interval $[T_i,T_{i+1}]$, $i=\overline{0,2s-1}$, we construct a different regularized solution, and we combine them into a final regularized solution. More details are as follows:

    ● In the first step, to construct an approximate solution on $[T_{2s-1},T]$, we use the input data $\overline{G}^{\delta,N(\delta)}$ and the regularization parameter $\zeta_{2s}$ to establish a function $Y^{\zeta_{2s}}_{T_{2s-2},T_{2s}}(\overline{G}^{\delta,N(\delta)})(x,t)$. Then we define a regularized solution $U^{\delta}(x,t)=Y^{\zeta_{2s}}_{T_{2s-2},T_{2s}}(\overline{G}^{\delta,N(\delta)})(x,t)$ for all $t\in[T_{2s-1},T]$.

    ● In the second step, to construct an approximate solution on $[T_{2s-2},T_{2s-1}]$, we use the input data $U^{\delta}(x,T_{2s-1})$ (which was computed in the first step) and the regularization parameter $\zeta_{2s-1}$ to establish a function $Y^{\zeta_{2s-1}}_{T_{2s-3},T_{2s-1}}(U^{\delta}(x,T_{2s-1}))(x,t)$. Then we define a regularized solution

    $$U^{\delta}(x,t)=Y^{\zeta_{2s-1}}_{T_{2s-3},T_{2s-1}}\big(U^{\delta}(x,T_{2s-1})\big)(x,t)$$

    for all $t\in[T_{2s-2},T_{2s-1}]$.

    ● We continue similarly for the remaining steps. Finally, we obtain the regularized solution in (4.61) and (4.62).
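The subdivision step can be sketched as follows (a minimal illustration of choosing an integer $s>KT$ and building the grid (4.36); the helper name and the concrete values of $K$ and $T$ are assumptions for the example):

```python
import math

# Sketch of the time grid (4.36): pick an integer s > K*T, set h_bar = 1/(2s),
# and T_l = l * h_bar * T. Every sub-interval then has length T/(2s), so
# K * (T_{l+1} - T_l) = K*T/(2s) < 1/2 < 1, as required to apply Lemma 4.2.
def time_grid(K, T):
    s = math.floor(K * T) + 1                # smallest integer with s > K*T
    return [l * T / (2 * s) for l in range(2 * s + 1)]

grid = time_grid(K=2.0, T=1.0)               # K*T = 2 > 1, so s = 3
step = grid[1] - grid[0]                     # = T/(2s) = 1/6
```

Marching backward over these sub-intervals, the output of each step serves as the input data of the next, which is exactly the construction described in the bullets above.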

    Now, we consider the following lemma.

    Lemma 4.2. Let $0\le T_{h_1}<T_{h_2}\le T$. For $f\in L^2(\Omega)$, we consider the following nonlinear integral equation

    $$Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(x,t)=\sum_{\lambda_j\le\zeta}\Big[e^{(T_{h_2}-t)\lambda_j}\langle f,\phi_j\rangle-\int_t^{T_{h_2}}e^{(\tau-t)\lambda_j}\big\langle F\big(Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(\tau)\big),\phi_j\big\rangle\,d\tau\Big]\phi_j(x)+\sum_{\lambda_j>\zeta}\Big[\int_{T_{h_1}}^{t}e^{(\tau-t)\lambda_j}\big\langle F\big(Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(\tau)\big),\phi_j\big\rangle\,d\tau\Big]\phi_j(x)\qquad(4.37)$$

    for $\zeta>0$. Assume that $K(T_{h_2}-T_{h_1})<1$. Then Problem (4.37) has a unique solution $Y^{\zeta}_{T_{h_1},T_{h_2}}(f)\in C([T_{h_1},T_{h_2}];L^2(\Omega))$. Moreover, we have the following estimate

    $$\mathbb{E}\|Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(\cdot,t)-u(\cdot,t)\|^2_{L^2(\Omega)}\le\frac{2\big(1+\frac{1}{q_0}\big)e^{-2(t-T_{h_1})\zeta}}{1-(1+q_0)K^2(T_{h_2}-T_{h_1})^2}\Big(e^{2(T_{h_2}-T_{h_1})\zeta}\,\mathbb{E}\|f-u(\cdot,T_{h_2})\|^2_{L^2(\Omega)}+\|u(\cdot,T_{h_1})\|^2_{L^2(\Omega)}\Big)\qquad(4.38)$$

    for all $t\in[T_{h_1},T_{h_2}]$, where $q_0$ satisfies $0<q_0<\frac{1}{K^2(T_{h_2}-T_{h_1})^2}-1$.

    Proof. Part A. We begin by showing that Eq (4.37) has a unique solution in $C([T_{h_1},T_{h_2}];L^2(\Omega))$. Our analysis here is similar to that in [29]. Define on $C([T_{h_1},T_{h_2}];L^2(\Omega))$ the following Bielecki norm

    $$\|v\|_1=\sup_{T_{h_1}\le t\le T_{h_2}}e^{(t-T_{h_1})\zeta}\|v(t)\|,\qquad(4.39)$$

    for all $v\in C([T_{h_1},T_{h_2}];L^2(\Omega))$. It is easy to check that $\|\cdot\|_1$ is a norm on $C([T_{h_1},T_{h_2}];L^2(\Omega))$. Now, let $f$ be in $L^2(\Omega)$. We want to show that the map given by

    $$I(w(f))(x,t)=\sum_{\lambda_j\le\zeta}\Big[e^{(T_{h_2}-t)\lambda_j}\langle f,\phi_j\rangle-\int_t^{T_{h_2}}e^{(\tau-t)\lambda_j}\big\langle F(w(f)(\tau)),\phi_j\big\rangle\,d\tau\Big]\phi_j(x)+\sum_{\lambda_j>\zeta}\Big[\int_{T_{h_1}}^{t}e^{(\tau-t)\lambda_j}\big\langle F(w(f)(\tau)),\phi_j\big\rangle\,d\tau\Big]\phi_j(x),\qquad(4.40)$$

    for $w(f)\in C([T_{h_1},T_{h_2}];L^2(\Omega))$, is a contraction on $C([T_{h_1},T_{h_2}];L^2(\Omega))$ under the condition $K(T_{h_2}-T_{h_1})<1$. Indeed, we shall prove that, for every $w_1,w_2\in C([T_{h_1},T_{h_2}];L^2(\Omega))$,

    $$\|I(w_1(f))-I(w_2(f))\|_1\le K(T_{h_2}-T_{h_1})\|w_1(f)-w_2(f)\|_1.\qquad(4.41)$$

    First, by using the Hölder inequality and the global Lipschitz property of $F$, we have the following estimates for all $t\in[T_{h_1},T_{h_2}]$, namely

    $$\begin{aligned}\sum_{\lambda_j\le\zeta}\Big(\int_t^{T_{h_2}}e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]\,d\tau\Big)^2&\le(T_{h_2}-t)\sum_{\lambda_j\le\zeta}\int_t^{T_{h_2}}\Big|e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]\Big|^2\,d\tau\\&\le(T_{h_2}-t)\sum_{\lambda_j\le\zeta}\int_t^{T_{h_2}}e^{2(\tau-t)\zeta}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]^2\,d\tau\\&\le K^2(T_{h_2}-t)\int_t^{T_{h_2}}e^{2(\tau-t)\zeta}\|w_1(f)(\tau)-w_2(f)(\tau)\|^2\,d\tau\\&\le e^{-2(t-T_{h_1})\zeta}K^2(T_{h_2}-t)^2\sup_{T_{h_1}\le\tau\le T_{h_2}}e^{2(\tau-T_{h_1})\zeta}\|w_1(f)(\tau)-w_2(f)(\tau)\|^2\\&=e^{-2(t-T_{h_1})\zeta}K^2(T_{h_2}-t)^2\|w_1(f)-w_2(f)\|^2_1.\end{aligned}$$

    Noting that if $\lambda_j>\zeta$ then $e^{(\tau-t)\lambda_j}\le e^{(\tau-t)\zeta}$ for $T_{h_1}\le\tau\le t$, it follows that

    $$\begin{aligned}\sum_{\lambda_j>\zeta}\Big(\int_{T_{h_1}}^{t}e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]\,d\tau\Big)^2&\le(t-T_{h_1})\sum_{\lambda_j>\zeta}\int_{T_{h_1}}^{t}\Big|e^{(\tau-t)\zeta}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]\Big|^2\,d\tau\\&\le K^2(t-T_{h_1})\int_{T_{h_1}}^{t}e^{2(\tau-t)\zeta}\|w_1(f)(\tau)-w_2(f)(\tau)\|^2\,d\tau\\&\le e^{-2(t-T_{h_1})\zeta}K^2(t-T_{h_1})^2\sup_{T_{h_1}\le\tau\le T_{h_2}}e^{2(\tau-T_{h_1})\zeta}\|w_1(f)(\tau)-w_2(f)(\tau)\|^2\\&=e^{-2(t-T_{h_1})\zeta}K^2(t-T_{h_1})^2\|w_1(f)-w_2(f)\|^2_1.\end{aligned}$$

    From the definition of I in (4.40), we have

    $$I(w_1(f))(x,t)-I(w_2(f))(x,t)=-\sum_{\lambda_j\le\zeta}\Big(\int_t^{T_{h_2}}e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]\,d\tau\Big)\phi_j(x)+\sum_{\lambda_j>\zeta}\Big(\int_{T_{h_1}}^{t}e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]\,d\tau\Big)\phi_j(x).$$

    Combining the two estimates above and using the inequality $(a+b)^2\le(1+\theta_0)a^2+(1+\frac{1}{\theta_0})b^2$ for any real numbers $a,b$ and $\theta_0>0$, we get the following estimate for all $t\in(T_{h_1},T_{h_2})$:

    $$\|I(w_1(f))(\cdot,t)-I(w_2(f))(\cdot,t)\|^2\le e^{-2(t-T_{h_1})\zeta}K^2(t-T_{h_1})^2(1+\theta_0)\|w_1(f)-w_2(f)\|^2_1+e^{-2(t-T_{h_1})\zeta}\Big(1+\frac{1}{\theta_0}\Big)K^2(T_{h_2}-t)^2\|w_1(f)-w_2(f)\|^2_1.$$

    By choosing $\theta_0=\frac{T_{h_2}-t}{t-T_{h_1}}$, we obtain, for all $t\in(T_{h_1},T_{h_2})$,

    $$e^{2(t-T_{h_1})\zeta}\|I(w_1(f))(\cdot,t)-I(w_2(f))(\cdot,t)\|^2\le K^2(T_{h_2}-T_{h_1})^2\|w_1(f)-w_2(f)\|^2_1.\qquad(4.42)$$

    On the other hand, letting $t\to T_{h_2}$ in (4.42), we get

    $$e^{2(T_{h_2}-T_{h_1})\zeta}\|I(w_1(f))(\cdot,T_{h_2})-I(w_2(f))(\cdot,T_{h_2})\|^2\le K^2(T_{h_2}-T_{h_1})^2\|w_1(f)-w_2(f)\|^2_1.\qquad(4.43)$$

    By letting $t\to T_{h_1}$ in (4.42), we obtain

    $$\|I(w_1(f))(\cdot,T_{h_1})-I(w_2(f))(\cdot,T_{h_1})\|^2\le K^2(T_{h_2}-T_{h_1})^2\|w_1(f)-w_2(f)\|^2_1.\qquad(4.44)$$

    Combining (4.42), (4.43) and (4.44), we deduce that, for all $T_{h_1}\le t\le T_{h_2}$,

    $$e^{(t-T_{h_1})\zeta}\|I(w_1(f))(t)-I(w_2(f))(t)\|\le K(T_{h_2}-T_{h_1})\|w_1(f)-w_2(f)\|_1,\qquad(4.45)$$

    which leads to (4.41). Since $K(T_{h_2}-T_{h_1})<1$, it follows that $I$ is a well-defined contraction on $C([T_{h_1},T_{h_2}];L^2(\Omega))$. By the Banach fixed point theorem, it therefore has a unique fixed point, i.e., the equation $I(w)=w$ has a unique solution, which we denote by $Y^{\zeta}_{T_{h_1},T_{h_2}}(f)\in C([T_{h_1},T_{h_2}];L^2(\Omega))$.

    Part B. The error estimate for $\mathbb{E}\|Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(\cdot,t)-u(\cdot,t)\|^2_{L^2(\Omega)}$.

    By a similar technique as in the proof of Theorem 4.1, we obtain

    $$u_j(T_{h_1})=e^{(T_{h_2}-T_{h_1})\lambda_j}u_j(T_{h_2})-\int_{T_{h_1}}^{T_{h_2}}e^{(\tau-T_{h_1})\lambda_j}F_j(u)(\tau)\,d\tau,\qquad u_j(t):=\langle u(\cdot,t),\phi_j\rangle.\qquad(4.46)$$

    This leads to

    $$e^{-(t-T_{h_1})\lambda_j}u_j(T_{h_1})=e^{(T_{h_2}-t)\lambda_j}\Big[u_j(T_{h_2})-\int_{T_{h_1}}^{T_{h_2}}e^{(\tau-T_{h_2})\lambda_j}F_j(u)(\tau)\,d\tau\Big].\qquad(4.47)$$

    After some simple transformations, the last equality implies that

    $$\sum_{\lambda_j>\zeta}e^{(T_{h_2}-t)\lambda_j}\Big[u_j(T_{h_2})-\int_t^{T_{h_2}}e^{(\tau-T_{h_2})\lambda_j}F_j(u)(\tau)\,d\tau\Big]\phi_j(x)=\sum_{\lambda_j>\zeta}\Big[\int_{T_{h_1}}^{t}e^{(\tau-t)\lambda_j}F_j(u)(\tau)\,d\tau\Big]\phi_j(x)+\sum_{\lambda_j>\zeta}e^{-(t-T_{h_1})\lambda_j}u_j(T_{h_1})\,\phi_j(x).\qquad(4.48)$$

    Using the last equality and the mild solution representation of $u$, we get

    $$u(x,t)=\sum_{\lambda_j\le\zeta}\Big[e^{(T_{h_2}-t)\lambda_j}u_j(T_{h_2})-\int_t^{T_{h_2}}e^{(\tau-t)\lambda_j}F_j(u)(\tau)\,d\tau\Big]\phi_j(x)+\sum_{\lambda_j>\zeta}\Big[\int_{T_{h_1}}^{t}e^{(\tau-t)\lambda_j}F_j(u)(\tau)\,d\tau\Big]\phi_j(x)+\sum_{\lambda_j>\zeta}e^{-(t-T_{h_1})\lambda_j}u_j(T_{h_1})\,\phi_j(x).\qquad(4.49)$$

    We have

    $$\begin{aligned}Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(x,t)-u(x,t)&=\sum_{\lambda_j\le\zeta}e^{(T_{h_2}-t)\lambda_j}\big(f_j-u_j(T_{h_2})\big)\phi_j(x)-\sum_{\lambda_j\le\zeta}\Big[\int_t^{T_{h_2}}e^{(\tau-t)\lambda_j}\big(F_j(Y^{\zeta}_{T_{h_1},T_{h_2}}(f))(\tau)-F_j(u)(\tau)\big)\,d\tau\Big]\phi_j(x)\\&\quad+\sum_{\lambda_j>\zeta}\Big[\int_{T_{h_1}}^{t}e^{(\tau-t)\lambda_j}\big(F_j(Y^{\zeta}_{T_{h_1},T_{h_2}}(f))(\tau)-F_j(u)(\tau)\big)\,d\tau\Big]\phi_j(x)-\sum_{\lambda_j>\zeta}e^{-(t-T_{h_1})\lambda_j}u_j(T_{h_1})\,\phi_j(x).\end{aligned}\qquad(4.50)$$

    This implies that

    $$\big|\big\langle Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(\cdot,t)-u(\cdot,t),\phi_j\big\rangle\big|\le e^{(T_{h_2}-t)\zeta}\big|f_j-u_j(T_{h_2})\big|+\int_{T_{h_1}}^{T_{h_2}}e^{(\tau-t)\zeta}\big|F_j(Y^{\zeta}_{T_{h_1},T_{h_2}}(f))(\tau)-F_j(u)(\tau)\big|\,d\tau+e^{-(t-T_{h_1})\zeta}\big|u_j(T_{h_1})\big|.\qquad(4.51)$$

    Hence, using Parseval's identity and the inequality

    $$(c_1+c_2+c_3)^2\le2\Big(1+\frac{1}{q_0}\Big)c_1^2+(1+q_0)c_2^2+2\Big(1+\frac{1}{q_0}\Big)c_3^2$$

    for any real numbers $c_1,c_2,c_3$ and $q_0>0$, we have

    $$\begin{aligned}\mathbb{E}\|Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(\cdot,t)-u(\cdot,t)\|^2_{L^2(\Omega)}&=\mathbb{E}\Big(\sum_{j=1}^{\infty}\big|\big\langle Y^{\zeta}_{T_{h_1},T_{h_2}}(f)(\cdot,t)-u(\cdot,t),\phi_j\big\rangle\big|^2\Big)\\&\le2\Big(1+\frac{1}{q_0}\Big)\mathbb{E}\Big(\sum_{j=1}^{\infty}e^{2(T_{h_2}-t)\zeta}\big|f_j-u_j(T_{h_2})\big|^2\Big)+(1+q_0)\,\mathbb{E}\Big((T_{h_2}-T_{h_1})\sum_{j=1}^{\infty}\int_{T_{h_1}}^{T_{h_2}}e^{2(\tau-t)\zeta}\big|F_j(Y^{\zeta}_{T_{h_1},T_{h_2}}(f))(\tau)-F_j(u)(\tau)\big|^2\,d\tau\Big)\\&\qquad+2\Big(1+\frac{1}{q_0}\Big)\sum_{j=1}^{\infty}e^{-2(t-T_{h_1})\zeta}u_j(T_{h_1})^2\\&\le2\Big(1+\frac{1}{q_0}\Big)e^{2(T_{h_2}-t)\zeta}\,\mathbb{E}\|f-u(\cdot,T_{h_2})\|^2_{L^2(\Omega)}+2\Big(1+\frac{1}{q_0}\Big)e^{-2(t-T_{h_1})\zeta}\|u(\cdot,T_{h_1})\|^2_{L^2(\Omega)}\\&\qquad+(1+q_0)(T_{h_2}-T_{h_1})\int_{T_{h_1}}^{T_{h_2}}e^{2(\tau-t)\zeta}\,\mathbb{E}\big\|F(Y^{\zeta}_{T_{h_1},T_{h_2}}(f))(\tau)-F(u)(\tau)\big\|^2_{L^2(\Omega)}\,d\tau.\end{aligned}$$

    Multiplying both sides of the last inequality by e2(tTh)ζ, and using the global Lipschitz property of F, we obtain

    e2(tTh)ζEYζTh,Th(f)(.,t)u(.,t)2L2(Ω)2(1+1q0)e2(ThTh)ζEfu(.,Th)2L2(Ω)+2(1+1q0)u(.,Th)2L2(Ω)+(1+q0)K2(ThTh)ThThe2(τTh)ζEYζTh,Th(f)(.,s)u(.,s)2L2(Ω)ds. (4.52)

    Since YζTh,Th(f),uC([Th,Th];L2(Ω)) we obtain that the function

    e2(tTh)ζEYζTh,Th(f)(.,t)u(.,t)2L2(Ω)

    is continuous on [Th,Th]. Therefore, the following is a finite positive constant

    ˜A=supThtThe2(tTh)ζEYζTh,Th(f)(.,t)u(.,t)2L2(Ω).

    This implies that

    ˜A2(1+1q0)e2(ThTh)ζEfu(.,Th)2L2(Ω)+2(1+1q0)u(.,Th)2L2(Ω)+(1+q0)K2(ThTh)2˜A (4.53)

    Hence

    (1(1+q0)K2(ThTh)2)˜A2(1+1q0)e2(ThTh)M(ζ)Efu(.,Th)2L2(Ω)+2(1+1q0)u(.,Th)2L2(Ω). (4.54)

Since by assumption 0<q0<1K2(ThTh)21, the term in parentheses on the left-hand side is positive. This implies that for all t[Th,Th]

    e2(tTh)ζEYζTh,Th(f)(.,t)u(.,t)2L2(Ω)2(1+1q0)e2(ThTh)ζEfu(.,Th)2L2(Ω)+2(1+1q0)u(.,Th)2L2(Ω)1(1+q0)K2(ThTh)2. (4.55)

    Hence for all t[Th,Th] we conclude that

    EYζTh,Th(f)(.,t)u(.,t)2L2(Ω)2(1+1q0)1(1+q0)K2(ThTh)2(e2(ThTh)ζEfu(.,Th)2L2(Ω)+u(.,Th)2L2(Ω))e2(Tht)ζ. (4.56)
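The derivation above hinges on the elementary three-term inequality stated before (4.52); it follows from Young's inequality (a+b)^2 ≤ (1+1/q0)a^2 + (1+q0)b^2 with a=c1+c2, b=c3, together with (c1+c2)^2 ≤ 2(c1^2+c2^2). A quick Monte-Carlo sanity check:

```python
import random

# Monte-Carlo sanity check of the elementary inequality
# (c1 + c2 + c3)^2 <= 2(1 + 1/q0) c1^2 + 2(1 + 1/q0) c2^2 + (1 + q0) c3^2
# for arbitrary reals c1, c2, c3 and any q0 > 0.

random.seed(0)
violations = 0
for _ in range(100_000):
    c1, c2, c3 = (random.uniform(-10.0, 10.0) for _ in range(3))
    q0 = random.uniform(1e-3, 10.0)
    lhs = (c1 + c2 + c3) ** 2
    rhs = 2 * (1 + 1 / q0) * (c1 ** 2 + c2 ** 2) + (1 + q0) * c3 ** 2
    if lhs > rhs + 1e-9:  # small slack for floating-point round-off
        violations += 1

print(violations)  # 0
```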

    Our main result in this subsection is as follows.

    Theorem 4.3. Let g be as in Theorem 4.1. Assume that u is the unique solution of Problem (1.1).

    (a) Assume that KT<1, where K is the Lipschitz constant of F. A new regularized solution is given as follows

    ˆUδ(x,t)=λjζ(δ)[e(Tt)λj¯Gδ,N(δ)Tte(τt)λjFj(ˆUδ)(τ)dτ]ϕj(x)+λj>ζ(δ)[t0e(τt)λjFj(ˆUδ)(τ)dτ]ϕj(x). (4.57)

    Let us choose ζ(δ) such that

    limδ0ζ(δ)=+,limδ0ekTζ(δ)λγN(δ)=0,limδ0ekTζ(δ)N(δ)δ=0. (4.58)

    If uC([0,T];L2(Ω)) then as |δ|0

EˆUδ(.,t)u(.,t)2L2(Ω) is of order e2tζ(δ). (4.59)

    (b) Suppose that KT>1 and let us assume that uC([0,T];L2(Ω)). Let

ζ1(δ):=sT22s1log(1ξ(δ)), ζk(δ):=sT22sklog(1ξ(δ)), k=¯2,2s. (4.60)

    We construct a regularized solution ˆUδ as follows

    ˆUδ(x,t)=Yζ2si(δ)T2si2,T2si(ˆUδ(x,T2si))(x,t),ifT2si1tT2si,i=¯0,2s2 (4.61)

    and

    ˆUδ(x,t)=Yζ1(δ)T0,T1(ˆUδ(x,T1))(x,t),if0tT1. (4.62)

    where Yζ(δ)Th1,Th2(θ)(x,t) is defined in (4.37). Then we have

    If t[Tk,Tk+1] and k=¯1,2s1 then

EˆUδ(.,t)u(.,t)2L2(Ω) is of order (ξ(δ))122sk. (4.63)

    If t[0,T1] then

EˆUδ(.,t)u(.,t)2L2(Ω) is of order (ξ(δ))st22s1. (4.64)

    Remark 4.4. In [29], we only need the regularization result for 0<KT<1. Our Theorem 4.3 extends this result for any K>0.

Proof of part (a) of Theorem 4.3. Setting Th=0, Th=T and f=¯Gδ,N(δ), the function YζTh,Th(f) given by (4.37) in Lemma 4.2 coincides with ˆUδ given by (4.57). Since KT<1, applying Lemma 4.2 together with (4.38), we obtain

    EˆUδ(.,t)u(.,t)2L2(Ω)2(1+1q0)1(1+q0)K2T2(e2Tζ(δ)E¯Gδ,N(δ)g2L2(Ω)+g2L2(Ω))e2tζ(δ)2(1+1q0)1(1+q0)K2T2e2(Tt)ζ(δ)δ2N(δ)+42(1+1q0)1(1+q0)K2T2e2(Tt)ζ(δ)1λ2γN(δ)gH2γ+2(1+1q0)1(1+q0)K2T2g2L2(Ω)e2tζ(δ).

    This completes the proof of part (a).

    Proof of part (b) of Theorem 4.3

By Theorem 3.1, we have E¯Gδ,N(δ)g2L2(Ω)˜Cξ2(δ), where ˜C=1+g2H2γ. We will estimate the error on the time interval [Tl,Tl+1] for l=¯0,2s1.

    Case 1. Let t[T2s1,T]. Since ζ2s(δ)=sTlog(1ξ(δ)), by Lemma 4.2 we get

    EˆUδ(.,t)u(.,t)2L2(Ω)=EYζ2s(δ)T2s2,T2s(¯Gδ,N(δ))(.,t)u(.,t)2L2(Ω)2s2(1+1q0)s2T2K2(1+q0)[e2(T2sT2s2)ζ2s(δ)E¯Gδ,N(δ)g2L2(Ω)]e2(T2s2t)ζ2s(δ)+2s2(1+1q0)s2T2K2(1+q0)[u(.,T2s2)2L2(Ω)]e2(T2s2t)ζ2s(δ)2s2(1+1q0)s2T2K2(1+q0)(˜C+u2L(0,T;L2(Ω)))ξ(δ)=χ(s,K,q0)(˜C+u2L(0,T;L2(Ω)))ξ(δ), (4.65)

where we note that e2(T2s2t)ζ2s(δ)e2(T2s2T2s1)ζ2s(δ)=ξ(δ) and

χ(s,K,q0)=max{1,2s2(1+1q0)s2T2K2(1+q0)}, so that χ(s,K,q0)1.

    Case 2. Let t[T2s2,T2s1]. Since ζ2s1(δ)=s2Tlog(1ξ(δ)), by Lemma 4.2 we get

    EˆUδ(.,t)u(.,t)2L2(Ω)=EYζ2s1(δ)T2s3,T2s1(ˆUδ(.,T2s1))(.,t)u(.,t)2L2(Ω)χ(s,K,q0)exp(2(T2s3t)ζ2s1(δ))exp(2(T2s1T2s3)ζ2s1(δ))EˆUδ(.,T2s1)u(.,T2s12L2(Ω)+χ(s,K,q0)exp(2(T2s3t)ζ2s1(δ))u(.,T2s3)2L2(Ω)χ(s,K,q0)( χ(s,K,q0)(˜C+uL(0,T;L2(Ω)))+uL(0,T;L2(Ω)))(ξ(δ))122χ2(s,K,q0)(˜C+uL(0,T;L2(Ω)))(ξ(δ))12,

    where we used the following result from (4.65):

    EˆUδ(.,T2s1)u(.,T2s12L2(Ω)χ(s,K,q0)(˜C+u2L(0,T;L2(Ω)))ξ(δ).

Therefore, repeating the argument of the above cases and using induction, we can prove the estimate

    EˆUδ(.,t)u(.,t)2L2(Ω)(2sk)|χ2(s,K,q0)|2sk(˜C+u2L(0,T;L2(Ω)))(ξ(δ))122sk1,

    for all t[Tk,Tk+1] and k=¯1,2s1.

    If t[0,T1], then by a similar technique as above, we obtain the error estimate

    EˆUδ(.,t)u(.,t)2L2(Ω)s|χ2(s,K,q0)|2s(˜C+u2L(0,T;L2(Ω)))(ξ(δ))st22s2.

Section 4 addressed a problem in which F is a globally Lipschitz function. In this section we extend the analysis to a locally Lipschitz F; this case is more delicate, and we have to find a different regularization method to treat a locally Lipschitz source.

Assume that a is perturbed by noise, with observed data aobsδ:Ω×[0,T]R given by

    aobsδ(x,t)=a(x,t)+δψ(t) (5.1)

    where δ>0 and ψL(0,T) such that

    ψL(0,T)=sup0tT|ψ(t)|¯M, (5.2)

where ¯M>0. When a is not perturbed, we can use the method of the previous sections (that case is simpler than the noisy one). When a is perturbed by random data, the old method is difficult to apply and we need a new approach, outlined below.

    Assume that for each R>0, there exists KR>0 such that

    |F(x,t;u)F(x,t;v)|KR|uv|,ifmax{|u|,|v|}R, (5.3)

    where (x,t)Ω×[0,T] and

    KR:=sup{|F(x,t;u)F(x,t;v)uv|:max{|u|,|v|}R,uv,(x,t)Ω×[0,T]}<+.

    We note that KR is increasing and limR+KR=+. Now, we outline our idea to construct a regularization for problem (1.1). For all R>0, we approximate F by FR defined by

FR(x,t;w):={F(x,t;−R), w∈(−∞,−R); F(x,t;w), w∈[−R,R]; F(x,t;R), w∈(R,+∞). (5.4)
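The truncation (5.4) simply clamps the argument of F to [−R,R], which is why F_R inherits the local Lipschitz constant of F as a global one (Lemma 5.1 below). A small sketch; the cubic F here is a hypothetical stand-in chosen only to illustrate the clamping:

```python
import numpy as np

# Sketch of the cut-off F_R in (5.4): outside [-R, R] the argument is
# clamped, so F_R agrees with F on [-R, R] and is constant in w beyond.
# The cubic source below is an illustrative choice, not the paper's F.

def truncate(F, R):
    """Return F_R(x, t; w) = F(x, t; clamp(w, -R, R))."""
    return lambda x, t, w: F(x, t, np.clip(w, -R, R))

F = lambda x, t, w: w - w ** 3      # hypothetical locally Lipschitz source
F_R = truncate(F, R=2.0)

print(F_R(0.0, 0.0, 5.0))   # clamped: equals F(0, 0, 2.0) = 2 - 8 = -6.0
print(F_R(0.0, 0.0, 1.0))   # inside [-R, R]: equals F(0, 0, 1.0) = 0.0
```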

For each δ>0, we consider a parameter R(δ) tending to +∞ as δ tends to 0+. Let us denote the operator P=MΔ, where M is a positive number such that M>aobsδ(x,t) for all (x,t)Ω×(0,T). Define the following operator

    PδβN(δ)=P+QδβN(δ),

    where

    QδβN(δ)v(x)=1Tj=1ln(1+βN(δ)eMTλj)v(x),ϕj(x)L2(Ω)ϕj(x), (5.5)

for any function vL2(Ω). Here N(δ) is defined in Lemma 4.1.

    We introduce the main idea to solve problem (1.1) with a generalized case of source term defined by (5.4), and we consider the problem:

    {uδN(δ)t(aobsδ(x,t)uδN(δ))QδβN(δ)(uδN(δ))(x,t)=FRδ(x,t,uδN(δ)(x,t)),(x,t)Ω×(0,T),uδN(δ)|Ω=0,t(0,T),uδN(x,T)=¯Gδ,N(δ)(x),(x,t)Ω×(0,T), (5.6)

    Here ¯Gδ,N(δ)(x) is defined in Eq (4.2). Now, we introduce some Lemmas which will be useful for our main results. First, we recall the abstract Gevrey class of functions of index σ>0, see, e.g., [24], defined by

    Wσ={vL2(Ω):n=1e2σλn|v,ϕn(x)L2(Ω)|2<},

    which is a Hilbert space equipped with the inner product

    v1,v2Wσ:=eσΔv1,eσΔv2L2(Ω),for allv1,v2Wσ;

and the corresponding norm is vWσ=(n=1e2σλn|v,ϕnL2(Ω)|2)1/2.

    Lemma 5.1. For FRL(Ω×[0,T]×R), we have

    |FR(x,t;u)FR(x,t;v)|KR|uv|,(x,t)Ω×[0,T],u,vR.

    Proof. See the proof of Lemma 2.4 in [35].

    Lemma 5.2. 1. Let M,T>0. For any vWMT(Ω), we have

    QδβN(δ)(v)L2(Ω)βN(δ)TvWMT(Ω). (5.7)

    2. Let βN(δ)<1eMTλ1. For any vL2(Ω), we have

    PδβN(δ)vL2(Ω)1Tln(1βN(δ))vL2(Ω). (5.8)

    Proof. Using the inequality ln(1+a)a,a>0, we have

    QδβN(δ)(v)2L2(Ω)=1T2j=1ln2(1+βN(δ)eMTλj)|v,ϕjL2(Ω)|2β2N(δ)T2j=1e2MTλj|v,ϕjL2(Ω)|2β2N(δ)T2v2WMT. (5.9)

    Since βN(δ)<1eMTλ1, we know that βN(δ)+eMTλj<1. Using Parseval's equality, we easily get

    PδβN(δ)(v)2L2(Ω)=1T2j=1ln2(1βN(δ)+eMTλj)|v,ϕjL2(Ω)|21T2ln2(1βN(δ))j=1|v,ϕjL2(Ω)|21T2ln2(1βN(δ))v2L2(Ω).
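Both bounds of Lemma 5.2 act diagonally on Fourier coefficients, so they can be checked numerically in coefficient space. In the sketch below, the eigenvalues λj=j^2 (Dirichlet Laplacian on an interval) and the Gevrey-type decay of the coefficients of v are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Coefficient-space check of Lemma 5.2. On mode j, Q multiplies the
# coefficient by (1/T) * ln(1 + beta * exp(M*T*lambda_j)) and P^delta by
# (1/T) * ln(1 / (beta + exp(-M*T*lambda_j))). We take lambda_j = j^2
# purely for illustration and verify both bounds via Parseval.

T, M = 1.0, 0.1
beta = 1e-6
lam = np.arange(1, 40, dtype=float) ** 2          # lambda_j = j^2 (assumed)
v_hat = np.exp(-2 * M * T * lam)                  # a Gevrey-class v (assumed)

q_mult = np.log1p(beta * np.exp(M * T * lam)) / T
p_mult = np.log(1.0 / (beta + np.exp(-M * T * lam))) / T

Qv_norm = np.linalg.norm(q_mult * v_hat)
gevrey_norm = np.linalg.norm(np.exp(M * T * lam) * v_hat)
l2_bound = np.log(1.0 / beta) / T

print(Qv_norm <= beta / T * gevrey_norm)  # True, since ln(1+a) <= a
print(p_mult.max() <= l2_bound)           # True, since beta + e^{-MT*lam} > beta
```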

    Theorem 5.1. Problem (5.6) has a unique solution uδN(δ)C([0,T];L2(Ω)). Assume that the problem (1.1) has a unique solution u satisfying u(,t)WMT. Choose βN(δ) such that

    limδ0δN(δ)β1N(δ)=limδ0β1N(δ)λγN(δ)=limδ0βN(δ)=0. (5.10)

    Choose Rδ such that

limδ0β2tTN(δ)e2KRδT=0, for all t>0. (5.11)

    Then we have the following estimate

EuδN(δ)(x,t)u(x,t)2L2(Ω)β2tTN(δ)e(2K(Rδ)+1)T˜C(δ). (5.12)

    Here ˜C(δ) is

    ˜C(δ)=δ2N(δ)β2Nδ+1λ2γN(δ)β2NδgH2γ(Ω)+u2C([0,T];WMT(Ω))+δ2T3b0β2Nδu2L(0,T;H10(Ω)).

Here we assume that Ω is a one-dimensional domain.

Remark 5.1. 1. Under assumption (5.11), the right-hand side of (5.12) converges to zero for every t>0.

    2. Choose βN(δ)=N(δ)c for any 0<c<min(12,2γd), and N(δ) is chosen as

    N(δ)=(1δ)m(12c),0<m<1. (5.13)

    Choose Rδ such that

    K(Rδ)1kTln(ln(N(δ)))=1kTln(m(12c)ln(1δ)).

    Then EuδN(δ)(x,t)u(x,t)2L2(Ω) is of the order δmc(12c)tTln(1δ).

Proof of Theorem 5.1. The proof is divided into two steps.

    Step 1. The existence and uniqueness of the solution to the regularized problem (5.6).

    Let b(x,t) be defined by b(x,t)=Ma(x,t). It is clear that 0<b(x,t)<M. Then from (5.6), we obtain

    uδN(δ)t+(b(x,t)uδN(δ))=F(x,t,uδN(δ)(x,t))1Tj=1ln(1βN(δ)+eMTλj)uδN(δ)(,t),ϕjϕj(x), (5.14)

    for (x,t)Ω×(0,T).

    Let vδN(δ) be the function defined by vδN(δ)(x,t)=uδN(δ)(x,Tt). Then we have

    vδN(δ)t(x,t)=uδN(δ)t(x,Tt),(b(x,t)vδN(δ))(x,t)=(b(x,t)uδN(δ))(x,Tt)

    and

    1Tj=1ln(βN(δ)+eMTλj)vδN(δ)(x,t),ϕj(x)ϕj(x)=1Tj=1ln(βN(δ)+eMTλj)uδN(δ)(x,Tt),ϕj(x)ϕj(x).

    This implies that vδN(δ) satisfies the problem

    {vδN(δ)t(b(x,t)vδN(δ))=G(x,t,vδN(x,t)),(x,t)Ω×(0,T),vδN(δ)|Ω=0,t(0,T),vδN(δ)(x,0)=¯Gδ,N(δ)(x),(x,t)Ω×(0,T), (5.15)

    where G is defined by

    G(x,t,v(x,t))=F(x,t,v(x,t))+1Tj=1ln(1βN(δ)+eMTλj)v(,t),ϕjL2(Ω)ϕj(x), (5.16)

    for any vC([0,T];L2(Ω)).

    Since

    βN(δ)(0,1eMTλ1),0<ln(1βN(δ)+eMTλn)<ln(1βN(δ))

    and using Parseval's identity, we obtain for any v1,v2L2(Ω),

    G(,t,v1(,t))G(,t,v2(,t))L2(Ω)F(,t,v1(,t))F(,t,v2(,t))L2(Ω)+1Tj=1ln(1βN(δ)+eMTλj)v1(x,t)v2(x,t),ϕj(x)L2(Ω)ϕj(x)L2(Ω)Kv1(,t)v2(,t)L2(Ω)+1Tj=1ln2(1βN(δ)+eMTλj)|v1(,t)v2(,t),ϕnL2(Ω)|2[K+1Tln(1βN(δ))]v1(,t)v2(,t)L2(Ω). (5.17)

    Thus G is a Lipschitz function. Using the results of Theorem 12.2 in [32], we complete the proof of Step 1.

    Step 2. Error estimate

    We consider the error estimate between the regularized solution of problem (5.6) and the exact solution of problem (1.1).

    For (x,t)Ω×(0,T), we begin by establishing that the functions b(x,t),bobsδ(x,t) satisfy

    0<b(x,t)M,0<b0bobsδ(x,t)M

    and

    (a(x,t)aobsδ(x,t))=(MM)(b(x,t)bobsδ(x,t)),(x,t)Ω×(0,T). (5.18)

    The functions uδN(δ)(x,t) and u(x,t) solve the following equations

    ut+(bobsδ(x,t)u)=F(x,t;u(x,t))+((bobsδ(x,t)b(x,t))u)+Pu (5.19)

    and

    uδN(δ)t+(bobsδ(x,t)uδN(δ))=FRδ(x,t,uδN(δ)(x,t))+PδβN(δ)uδN(δ). (5.20)

    For ρδ>0, we put VδN(δ)(x,t)=eρδ(tT)[uδN(δ)(x,t)u(x,t)]. Then for (x,t)Ω×(0,T)

    VδN(δ)t+(bobsδ(x,t)VδN(δ))ρδVδN(δ)=PδβN(δ)VδN(δ)+eρδ(tT)QδβN(δ)ueρδ(tT)((bobsδ(x,t)b(x,t))u)+eρδ(tT)[FRδ(x,t,uδN(δ)(x,t))F(x,t;u(x,t))], (5.21)

    and

    VδN(δ)|Ω=0,VδN(δ)(x,T)=¯Gδ,N(δ)(x)g(x).

    By taking the inner product on both sides of Eq (5.21) with VδN(δ) and noting the equality

    Ω(bobsδ(x,t)VδN(δ))VδN(δ)dx=Ωbobsδ(x,t)|VδN(δ)|2dx,

    we obtain

    VδN(δ)(,T)2L2(Ω)VδN(δ)(,t)2L2(Ω)2TtΩbobsδ(x,s)|VδN(δ)|2dxds2ρδTtVδN(δ)(,s)2L2(Ω)ds=2TtPδβN(δ)VδN(δ),VδN(δ)L2(Ω)ds=:~A4+2Tteρδ(tT)QδβN(δ)u,VδN(δ)L2(Ω)ds=:~A5+2Tteρδ(tT)((bobsδ(x,t)b(x,t))u),VδN(δ)L2(Ω)ds=:~A6+2Tteρδ(tT)[FRδ(x,t,uδN(δ)(x,t))F(x,t;u(x,t))],VδN(δ)L2(Ω)ds=:~A7. (5.22)

    First, thanks to inequality (5.8), the expectation of ~A4 is estimated as follows:

    E|~A4|2Tln(1βNδ)TtEVδN(δ)(,s)2L2(Ω)ds, (5.23)

    Next, using the inequality (5.7) and the Hölder inequality, we have

    E|~A5|Tte2ρβ(sT)βNδTu2C([0,T];WMT)ds+TtEVδN(δ)(,s)2L2(Ω)dsβNδTu2C([0,T];WMT)+TtEVδN(δ)(,s)2L2(Ω)ds. (5.24)

To estimate the expectation of |~A6|, we use Green's formula to get the equality

    ((bobsδ(x,t)b(x,t))u),VδN(δ)L2(Ω)=((bobsδ(x,t)b(x,t))u,VδN(δ)L2(Ω)

    then using Hölder's inequality and noting the fact that

    Ω|u(.,s)|2dxu2L(0,T;H10(Ω))=sup0sTΩ|u(.,s)|2dx,

    we obtain

    E|~A6|=2E|Tteρδ(sT)((bobsδ(x,t)b(x,t))u,VδN(δ)L2(Ω)ds|ETte2ρδ(sT)b0Ω((bobsδ(x,t)b(x,t))2|u(x,t)|2dxds+ETtΩb0|VδN(δ)|2dxds=δ2Tt|ψ(s)|2dsΩ|u(.,s)|2dxb0+ETtΩb0|VδN(δ)|2dxds¯M2δ2T22b0u2L(0,T;H10(Ω))+ETtΩb0|VδN(δ)|2dxds; (5.25)

here, in the last inequality, we have used the fact that E|ψ(s)|2=s since ψ is a Brownian motion. Finally, since limδ0+Rδ=+∞, for sufficiently small δ>0 there is an Rδ>0 such that RδuL([0,T];L2(Ω)). For this value of Rδ we have

    FRδ(x,t;u(x,t))=F(x,t;u(x,t)).

Using the global Lipschitz property of FR (see Lemma 5.1), one similarly obtains the estimate

    E|~A7|=2E|Tteρδ(tT)[FRδ(x,t,uδN(δ)(x,t))F(x,t;u(x,t))],VδN(δ)L2(Ω)ds|2ETteρδ(tT)[FRδ(x,s,uδN(δ)(x,s))F(x,s;u(x,s))]L2(Ω)VδN(δ)(,s)L2(Ω)ds2K(Rδ)TtEVδN(δ)(,s)2L2(Ω)ds. (5.26)
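The estimate of ~A6 in (5.25) used the identity E|ψ(s)|^2 = s for a standard Brownian motion ψ. A quick, purely illustrative Monte-Carlo check of this identity at a fixed time:

```python
import numpy as np

# Monte-Carlo check of E|psi(s)|^2 = s for a standard Brownian motion:
# psi(s) ~ N(0, s), so the empirical mean of psi(s)^2 should be close
# to s (here s = 0.7 is an arbitrary illustrative time).

rng = np.random.default_rng(0)
s = 0.7
samples = rng.normal(0.0, np.sqrt(s), size=200_000)   # psi(s) ~ N(0, s)
empirical = np.mean(samples ** 2)
print(abs(empirical - s) < 0.01)   # True up to sampling error
```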

Combining (5.22), (5.23), (5.24), (5.25) and (5.26), we obtain

    EVδN(δ)(,T)2L2(Ω)EVδN(δ)(,t)2L2(Ω)+Tt(βNδTu2C([0,T];WMT)+δ2T22b0u2L(0,T;H10(Ω)))ds2ETtΩ(bobsδ(x,s)b0)|VδN(δ)|2dxds+ETt(2ρδ2Tln(1βNδ)2K(Rδ)1)VδN(δ)(,s)2L2(Ω)dsETt(2ρδ2Tln(1βNδ)2K(Rδ)1)VδN(δ)(,s)2L2(Ω)ds. (5.27)

    Thus,

    EVδN(δ)(,t)2L2(Ω)E¯Gδ,N(δ)g2L2(Ω)+βNδu2C([0,T];WMT(Ω))+δ2T3b0u2L(0,T;H10(Ω))+ETt(2ρδ+2Tln(1βNδ)+2K(Rδ)+1)VδN(δ)(,s)2L2(Ω)ds. (5.28)

    Since VδN(δ)(x,t)=eρδ(tT)(uδN(δ)(x,t)u(x,t)) and applying Lemma 4.1, we observe that

    e2ρδ(tT)EuδN(δ)(,t)u(,t)2L2(Ω)δ2N(δ)+1λ2γN(δ)gH2γ(Ω)+βNδu2C([0,T];WMT(Ω))+¯M2δ2T3b0u2L(0,T;H10(Ω))+(2K(Rδ)+1)Tte2ρδ(sT)EuδN(δ)(,s)u(,s)2L2(Ω)ds. (5.29)

    Gronwall's lemma allows us to obtain

    e2ρδ(tT)EuδN(δ)(x,t)u(x,t)2L2(Ω)[δ2N(δ)+1λ2γN(δ)gH2γ(Ω)+βNδu2C([0,T];WMT(Ω))+¯M2δ2T3b0u2L(0,T;H10(Ω))]e(2K(Rδ))+1)(Tt). (5.30)

    By choosing ρδ=1Tln(1βNδ)>0 we have

EuδN(δ)(,t)u(,t)2L2(Ω)β2tTN(δ)e(2K(Rδ)+1)T˜C(δ). (5.31)

    The proof of Theorem 5.1 is complete.

In most previous works on backward nonlinear problems, the source is assumed to be globally or locally Lipschitz. To the best of our knowledge, this section contains the first result in which the source term F is not necessarily locally Lipschitz. We will solve problem (1.1) with a special generalized case of the source term defined by (5.4). Our regularized problem is different from the one in Section 4 because we do not approximate the source function F. Indeed, we have the following regularized problem

    {uδN(δ)t(aobsδ(x,t)uδN(δ))QδβN(δ)(uδN(δ))(x,t)=F(x,t,uδN(δ)(x,t)),(x,t)Ω×(0,T),uδN(δ)|Ω=0,t(0,T),uδN(x,T)=¯Gδ,N(δ)(x),(x,t)Ω×(0,T), (6.1)

We make the following assumptions on FC0(R): there exist constants C1,C′1,C2>0, p>1 and ¯γ such that

zF(x,t,z)≤−C1|z|p+C′1 (6.2)
|F(x,t,z)|≤C2(1+|z|p−1) (6.3)
(z1−z2)(F(x,t,z1)−F(x,t,z2))≤¯γ|z1−z2|2. (6.4)

It is easy to check that the function F(x,t,z)=−z1/3 satisfies conditions (6.2), (6.3) and (6.4) (with p=4/3). Note that this function is not locally Lipschitz.
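The failure of the local Lipschitz property for the cube-root nonlinearity can be seen numerically: the difference quotient at the origin behaves like |z|^(−2/3) and blows up as z→0, so no Lipschitz constant works on any neighbourhood of 0:

```python
# The cube-root nonlinearity is not locally Lipschitz at 0: the
# difference quotient |cbrt(z) - cbrt(0)| / |z| = |z|**(-2/3) blows up
# as z -> 0, so no finite Lipschitz constant works near the origin.

def cbrt(z: float) -> float:
    """Real cube root (odd extension to negative arguments)."""
    return z ** (1.0 / 3.0) if z >= 0 else -((-z) ** (1.0 / 3.0))

quotients = [abs(cbrt(z)) / z for z in (1e-3, 1e-6, 1e-9)]
print(quotients)   # grows like z**(-2/3)
```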

    Now we have the following result

    Theorem 6.1. Let us assume that F satisfies (6.2), (6.3) and (6.4). Then, there exists a unique weak solution uδN(δ) of problem (6.1) such that

    uδN(δ)L2(0,T;H1)L(0,T;L2).

    Assume that the problem (1.1) has a unique solution u satisfying u(,t)WMT. Choose βNδ as in Theorem 5.1. Then we have the following estimate

    EuδN(δ)(x,t)u(x,t)2L2(Ω)β2tTN(δ)e(2¯γ+1)T˜C(δ). (6.5)

    where ˜C(δ) is defined in (6.51).

Remark 6.1. The method in this theorem gives the convergence rate (6.5), which is better than the error rate in (5.12). Indeed, since limδ0K(Rδ)=+∞, we have

The right-hand side of (5.12)The right-hand side of (6.5) = β2tTN(δ)e(2K(Rδ)+1)T˜C(δ)β2tTN(δ)e(2¯γ+1)T˜C(δ) = e2(K(Rδ)¯γ)T → +∞ (6.6)

as δ→0.

First, by the change of variable vδN(δ)(x,t)=uδN(δ)(x,Tt), we transform Problem (6.1) into the initial value problem

    {vδN(δ)t(bobsδ(x,t)vδN(δ))=F(x,t,vδN(x,t))+PδβN(δ)(vδN(δ)(x,t)),(x,t)Ω×(0,T),vδN(δ)|Ω=0,t(0,T),vδN(δ)(x,0)=¯Gδ,N(δ)(x),(x,t)Ω×(0,T). (6.7)

    where bobsδ(x,t)=Maobsδ(x,t).

    The weak formulation of the initial boundary value problem (6.7) can then be given in the following manner: Find vδN(δ)(t) defined in the open set (0,T) such that vδN(δ) satisfies the following variational problem

    ΩddtvδN(δ),mφdx+Ωbobsδ(x,t)vδN(δ),mφdx+ΩF(vδN(δ),m(t))φdx=ΩPδβN(δ)(vδN(δ),m(t))φdx (6.8)

    for all φH1, and the initial condition

    vδN(δ)(0)=¯Gδ,N(δ). (6.9)

Proof of the existence of a solution of Problem (6.1). The main technique of this proof is adapted from [34]. The proof consists of several steps.

Step 1: The Faedo–Galerkin approximation (introduced by Lions [22]).

    In the space H1(Ω), we take a basis {ej}j=1 and define the finite dimensional subspace

    Vm=span{e1,e2,...em}.

    Let ¯Gδ,N(δ),m be an element of Vm such that

    ¯Gδ,N(δ),m=mj=1dδmjej¯Gδ,N(δ)stronglyinL2 (6.10)

    as m+. We can express the approximate solution of the problem (6.7) in the form

    vδN(δ),m(t)=mj=1cδmj(t)ej, (6.11)

    where the coefficients cδmj satisfy the system of linear differential equations

    ΩddtvδN(δ),meidx+Ωbobsδ(x,t)vδN(δ),meidx+ΩF(vδN(δ),m(t))eidx=ΩPδβN(δ)(vδN(δ),m(t))eidx (6.12)

    with i=¯1,m and the initial conditions

    cδmj(0)=dδmj,j=¯1,m. (6.13)

The existence of a local solution of system (6.12)–(6.13) is guaranteed by Peano's existence theorem. For each m there exists a solution vδN(δ),m(t) of the form (6.11) which satisfies (6.12) and (6.13) almost everywhere on 0tTm for some Tm, 0<TmT. The following estimates allow one to take Tm=T for all m.
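A minimal Faedo–Galerkin computation in the spirit of Step 1: expand v in the first m Dirichlet eigenfunctions sin(jx) on (0,π) and evolve the coefficient system (6.12) by explicit Euler. The coefficient b ≡ 1 and the source F(v) = −v^3 below are illustrative stand-ins, not the paper's data:

```python
import numpy as np

# Faedo-Galerkin sketch for a forward problem of the type (6.7): project
# onto the first m Dirichlet modes sin(j x) on (0, pi) (eigenvalues j^2)
# and step the coefficient ODEs dc_j/dt = -j^2 c_j + <F(v_m), e_j>-type
# terms with explicit Euler. b = 1 and F(v) = -v**3 are assumptions.

m, n_x = 8, 400
x = np.linspace(0.0, np.pi, n_x, endpoint=False) + np.pi / (2 * n_x)
dx = np.pi / n_x
basis = np.array([np.sin((j + 1) * x) for j in range(m)])
lam = np.arange(1, m + 1, dtype=float) ** 2       # eigenvalues j^2

def project(f_vals):
    # coefficients (2/pi) * integral f(x) sin(j x) dx, midpoint rule
    return (2.0 / np.pi) * (basis * f_vals).sum(axis=1) * dx

c = project(np.sin(x))          # initial datum sin(x): c ~ (1, 0, ..., 0)
c0_norm = np.linalg.norm(c)
dt, n_steps = 1e-4, 1000
for _ in range(n_steps):
    v_m = c @ basis                                 # Galerkin approximant
    c = c + dt * (-lam * c + project(-v_m ** 3))    # coefficient ODEs

print(np.linalg.norm(c) < c0_norm)   # True: diffusion dissipates energy
```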

    Step 2. A priori estimates.

a) The first estimate. Multiplying the i-th equation of (6.12) by cδmi(t), summing over i, and then integrating by parts in time from 0 to t, we get after some rearrangement

    vδN(δ),m(t)2L2(Ω)+2t0Ωbobsδ(x,t)|vδN(δ),m(s)|2dxds+2t0ΩF(vδN(δ),m(s))vδN(δ),m(s)dxds=¯Gδ,N(δ),m2+2t0ΩPδβN(δ)(vδN(δ),m(s))vδN(δ),m(s)dxds (6.14)

    From (6.10), we have

¯Gδ,N(δ),m2B0(δ) for all m,

    where B0(δ) depends on ¯Gδ,N(δ) and is independent of m.

    Using the lower bound of bobsδ(x,t), we have the following estimate

    2t0Ωbobsδ(x,t)|vδN(δ),m(s)|2dxds2b0t0vδN(δ),m(s)H1(Ω)ds. (6.15)

    Using the assumption on F, we have

    2t0ΩF(vδN(δ),m(s))vδN(δ),m(s)dxds2C1t0vδN(δ),m(s)pLp(Ω)ds2TC1 (6.16)

    and

    2t0ΩPδβN(δ)(vδN(δ),m(s))vδN(δ),m(s)dxds2Tln(1βN(δ))t0vδN(δ),m(s)2L2(Ω)ds. (6.17)

    Hence, it follows from (6.15)–(6.17) that

    vδN(δ),m(t)2L2(Ω)+2b0t0vδN(δ),m(s)H1(Ω)ds+2C1t0vδN(δ),m(s)pLp(Ω)dsB0(δ)+2TC1+1Tln(1βN(δ))t0vδN(δ),m(s)2L2(Ω)ds. (6.18)

    Let

    Sδm(t)=vδN(δ),m(t)2L2(Ω)+2b0t0vδN(δ),m(s)H1(Ω)ds+2C1t0vδN(δ),m(s)pLp(Ω)ds. (6.19)

    Using the fact that t0vδN(δ),m(s)2L2(Ω)dst0Sδm(s)ds, we know from (6.18) that

    Sδm(t)B0(δ)+2TC1+1Tln(1βN(δ))t0Sδm(s)ds (6.20)

Applying Gronwall's lemma, we obtain

    Sδm(t)[B0(δ)+2TC1]exp(tTln(1βN(δ)))[B0(δ)+2TC1]exp(ln(1βN(δ)))=B1(δ,T), (6.21)

for all mN and all t, 0tTmT; hence we may take Tm=T. Here B1(δ,T) indicates a bound depending only on δ and T.
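The step from (6.20) to (6.21) uses the integral form of Gronwall's lemma with constant coefficients; for convenience we record the standard statement:

```latex
% Integral form of Gronwall's lemma, as applied to S^{\delta}_m above.
\textbf{Lemma (Gronwall, integral form).}
Let $u \colon [0,T] \to [0,\infty)$ be continuous and let
$\alpha, \beta \ge 0$ be constants such that
\[
  u(t) \;\le\; \alpha + \beta \int_0^t u(s)\,\mathrm{d}s,
  \qquad 0 \le t \le T .
\]
Then
\[
  u(t) \;\le\; \alpha\, e^{\beta t}, \qquad 0 \le t \le T .
\]
% Applied with u = S^{\delta}_m, alpha = B_0(delta) + 2TC_1 and
% beta = (1/T) ln(1/beta_{N(delta)}) as in (6.20), this gives (6.21).
```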

    b) The second estimate. Multiplying the ith equation of (6.12) by t2ddtcδmi(t) and summing up with respect to i, we have

    tddtvδN(δ),m(t)2L2(Ω)+t2Ωbobsδ(x,t)vδN(δ),m(t)(ddtvδN(δ),m(t))dx+Ωt2F(vδN(δ),m(t))ddtvδN(δ),m(t)dx=Ωt2PδβN(δ)(vδN(δ),m(t))ddtvδN(δ),m(t)dx. (6.22)

    It is easy to check that for any uH1(Ω)

    ddt[Ωbobsδ(x,t)|u(t)|2dx]=2Ωbobsδ(x,t)u(t)u(t)dx+Ωtbobsδ(x,t)|u(t)|2dx. (6.23)

    The equality (6.22) is equivalent to

    2tddtvδN(δ),m(t)2L2(Ω)+ddt[t2Ωbobsδ(x,t)|vδN(δ),m(t)|2dx]+2Ωt2F(vδN(δ),m(t))ddtvδN(δ),m(t)dx=2tΩbobsδ(x,t)|vδN(δ),m(t)|2dx+t2Ωtbobsδ(x,t)|vδN(δ),m(s)|2dx+Ωt2PδβN(δ)(vδN(δ),m(t))ddtvδN(δ),m(t)dx. (6.24)

    By integrating the last equality from 0 to t, we get

    2t0sddsvδN(δ),m(s)2L2(Ω)ds+t2Ωbobsδ(x,t)|vδN(δ),m(t)|2dxI1+2t0Ωs2F(vδN(δ),m(s))ddsvδN(δ),m(s)dxdsI2=2t0Ωsbobsδ(x,s)|vδN(δ),m(s)|2dxdsI3+t0Ωs2sbobsδ(x,s)|vδN(δ),m(s)|2dxdsI4+t0Ωs2PδβN(δ)(vδN(δ),m(s))ddsvδN(δ),m(s)dxdsI5. (6.25)

    Estimate I1. Since the assumption bobsδ(x,t)b0, we know that

    I1=t2Ωbobsδ(x,t)|vδN(δ),m(t)|2dxb0tvδN(δ),m(t)2H1. (6.26)

    Estimate I2. To estimate I2, we need the following Lemma

    Lemma 6.1. Let μ0=(C1C1)1/p,¯m=+μ0μ0|F(ξ)|dξ,˜F(z)=z0F(y)dy,zR. Then we get

    ¯m˜F(z)C2(|z|+1p|z|p),zR. (6.27)

    The proof of Lemma 6.1 is easy and we omit it here. Now we return to estimate I2. By a simple computation and then using Lemma 6.1, we have

I2=2t0s2dsdds[ΩdxvδN(δ),m(x,s)0F(y)dy]=2t0[dds(s2Ω˜F(vδN(δ),m(x,s))dx)2sΩ˜F(vδN(δ),m(x,s))dx]=2t2Ω˜F(vδN(δ),m(x,t))dx4t0sdsΩ˜F(vδN(δ),m(x,s))dx2T2¯m|Ω|4C2t0s[vδN(δ),m(s)L1+1pvδN(δ),m(s)pLp]ds2T2¯m|Ω|4TC2[TvδN(δ),mL(0,T;L2)+1p12C1Sδm(t)]B2(δ,T). (6.28)

    Estimate I3. Using (6.19), we have the following estimate

I32Tb1t0vδN(δ),m(s)2H1ds2Tb12b0Sδm(t). (6.29)

    Estimate I4. Let us set

    ˜aT=sup(x,t)[0,1]×[0,T]tbobsδ(x,t),

    and then I4 is bounded by

    I4˜aTt0svδN(δ),m(s)2H1dsT2˜aTt0vδN(δ),m(s)2H1dsT2˜aTa0Sδm(t). (6.30)

    Estimate I5. Using Lemma 5.2, we obtain the following estimate for I5:

I52t0sPδβN(δ)(vδN(δ),m(s))sddsvδN(δ),m(s)dst0sPδβN(δ)(vδN(δ),m(s))2ds+t0sddsvδN(δ),m(s)2dsln2(1βN(δ))t0vδN(δ),m(s)2ds+t0sddsvδN(δ),m(s)2dsln2(1βN(δ))Sδm(t)a0+t0sddsvδN(δ),m(s)2ds (6.31)

Combining (6.26), (6.28), (6.29), (6.30) and (6.31), we obtain

    (6.32)

    Let

    and then since

    together with (6.32), we deduce that

    (6.33)

    where

Applying Gronwall's inequality, we obtain that

    (6.34)

    where depends only on and does not depend on .

    Step 3. The limiting process.

Combining (6.19), (6.21) and (6.34), we deduce that there exists a subsequence of (vδN(δ),m), still denoted by (vδN(δ),m), such that (see [22])

    (6.35)

here . Using a compactness lemma ([22], Lions, p. 57) applied to (vδN(δ),m), we can extract a further subsequence, still denoted by (vδN(δ),m), such that

    (6.36)

By the Riesz–Fischer theorem, we can extract a further subsequence, still denoted by (vδN(δ),m), such that

    (6.37)

Because F is continuous, we then have

    (6.38)

    On the other hand, using (6.3), (6.19), (6.21), we obtain

    (6.39)

where the constant on the right-hand side is independent of m. We shall now require the following lemma, the proof of which can be found in [22] (Lemma 1.3).

Lemma 6.2. Let Q be a bounded open subset of RN and let gm, g∈Lq(Q), 1<q<∞, be such that

‖gm‖Lq(Q)≤C (6.40)

and gm→g almost everywhere in Q. Then gm→g weakly in Lq(Q).

Applying Lemma 6.2, we deduce from (6.38) and (6.39) that

    (6.41)

Passing to the limit in (6.12) and (6.10) using (6.35) and (6.41), we have established a solution of Problem (6.1).

Assume that Problem (6.1) has two solutions; we have to show that they coincide. We recall that

    (6.42)

For ρδ>0, we put

Then for (x,t)Ω×(0,T), we get

    (6.43)

    and

By taking the inner product of both sides of (6.43) with the difference of the two solutions, then integrating from t to T and noting the equality

    we deduce

    (6.44)

By assumption (6.4), we have

    (6.45)

    Using the inequality (5.8), we get the following estimate

    (6.46)

Combining Eqs (6.44), (6.45) and (6.46), and choosing

    to obtain

This implies that the two solutions coincide for all t; the proof is complete.

We now establish the error estimate; the analysis is short and similar to the proof of Theorem 5.1. Indeed, let us also set

    By using some of the above steps we obtain

    (6.47)

The terms are similar to those in (5.22). Now we consider the remaining term. By assumption (6.4), we have

    (6.48)

    After using the results of the proof of Theorem 5.1, we get

    (6.49)

    Since

    and applying Lemma 4.1, we observe that

    (6.50)

    Gronwall's lemma allows us to obtain

    (6.51)

    By choosing we have

    (6.52)

Here we consider a special source function for Problem (1.1); the resulting equation is called the Ginzburg–Landau equation. This source satisfies the conditions of Section 5 but does not satisfy the condition of Section 4. For all R>0, we approximate F by FR defined by

    (7.1)

    We consider the problem

    (7.2)

It is easy to see that . Choose βN(δ)=N(δ)c for any 0<c<min(12,2γd), and N(δ) is chosen as

    (7.3)

Choose Rδ such that

    Then applying Theorem 5.1, the error is of the order

    In this subsection, we are concerned with the backward problem for a nonlinear parabolic equation of Fisher–Kolmogorov–Petrovsky–Piskunov type

    (7.4)

    with the following condition

    (7.5)

By Skellam [33], Eq (7.4) has many applications in population dynamics and periodic environments. In these references, the unknown generally stands for a population density, and the coefficients correspond, respectively, to the diffusion coefficient, the intrinsic growth rate and a coefficient measuring the effect of competition on the birth and death rates. Our method applies to this model as in Example 7.1; since the ideas of Examples 7.1 and 7.2 are the same, we only state the model without deriving the error estimates.

Taking the function above, it is easy to see that it satisfies (6.2), (6.3) and (6.4). Moreover, one can show that it is not a locally Lipschitz function, so we cannot regularize the problem in this case via Problem (5.6). We consider the problem

    (7.6)

Choose βN(δ) and N(δ) as in subsection 7.1. Applying Theorem 6.1, the error between the solution of Problem (7.6) and u(,t), 0<t<T, is of the order

Remark 7.1. In the following, we compare the method and results of this paper with those in [30,31]. All methods are truncation methods, but our problem is more complicated because the data are perturbed by random noise. We need Lemma 4.1 to determine the correct setup according to the measured data. The parameters should be chosen appropriately so that the error between the regularized solution and the exact solution converges to zero. There are two advantages of this article that were not explored in [30,31]:

In Theorem 4.3, we give a regularization result under a weaker assumption on the exact solution, namely uC([0,T];L2(Ω)). This is one of the first results obtained in this setting and was not considered in [30,31]. In those papers, to investigate the error, the exact solution is assumed to lie in a Gevrey space, which admits far fewer functions than the space C([0,T];L2(Ω)).

In [30,31], the source functions must satisfy a global Lipschitz condition. In our article, however, we deal with a fairly broad class of functions, consisting of locally Lipschitz functions and some functions that are not locally Lipschitz (see Section 6).

    Nguyen Huy Tuan is thankful to the Van Lang University. This research is funded by Thu Dau Mot University, Binh Duong Province, Vietnam under grant number DT.21.1-011.

The authors declare there is no conflict of interest.



    [1] N. H. Tuan, E. Nane, Approximate solutions of inverse problems for nonlinear space fractional diffusion equations with randomly perturbed data, SIAM/ASA J. Uncertain., 6 (2018), 302–338. https://doi.org/10.1137/17M1111139 doi: 10.1137/17M1111139
    [2] H. Amann, Time-delayed Perona–Malik type problems, Acta Math. Univ. Comenian., 76 (2007), 15–38.
    [3] J. Hadamard, Lectures on the Cauchy Problems in Linear Partial Differential Equations, Yale University Press, New Haven, CT, 1923.
    [4] M. Denche, K. Bessila, A modified quasi-boundary value method for ill-posed problems, J. Math. Anal. Appl., 301 (2005), 419–426. https://doi.org/10.1016/j.jmaa.2004.08.001 doi: 10.1016/j.jmaa.2004.08.001
    [5] N. V. Duc, An a posteriori mollification method for the heat equation backward in time, J. Inverse Ill-Posed Probl., 25 (2017), 403–422. https://doi.org/10.1515/jiip-2016-0026 doi: 10.1515/jiip-2016-0026
    [6] B. T. Johansson, D. Lesnic, T. Reeve, A method of fundamental solutions for radially symmetric and axisymmetric backward heat conduction problems, Int. J. Comput. Math., 89 (2012), 1555–1568. https://doi.org/10.1080/00207160.2012.680448 doi: 10.1080/00207160.2012.680448
    [7] A. B. Mair, H. F. Ruymgaart, Statistical inverse estimation in Hilbert scales, SIAM J. Appl. Math., 56 (1996), 1424–1444. https://doi.org/10.1137/S0036139994264476 doi: 10.1137/S0036139994264476
    [8] H. Kekkonen, M. Lassas, S. Siltanen, Analysis of regularized inversion of data corrupted by white Gaussian noise, Inverse Probl., 30 (2014), 045009. https://doi.org/10.1088/0266-5611/30/4/045009 doi: 10.1088/0266-5611/30/4/045009
    [9] C. König, F. Werner, T. Hohage, Convergence rates for exponentially ill-posed inverse problems with impulsive noise, SIAM J. Numer. Anal., 54 (2016), 341–360. https://doi.org/10.1137/15M1022252 doi: 10.1137/15M1022252
    [10] T. Hohage, F. Weidling, Characterizations of variational source conditions, converse results, and maxisets of spectral regularization methods, SIAM J. Numer. Anal., 55 (2017), 598–620. https://doi.org/10.1137/16M1067445 doi: 10.1137/16M1067445
    [11] A. P. N. T. Mai, A statistical minimax approach to the Hausdorff moment problem, Inverse Probl., 24 (2008), 045018. https://doi.org/10.1088/0266-5611/24/4/045018 doi: 10.1088/0266-5611/24/4/045018
    [12] L. Cavalier, Nonparametric statistical inverse problems, Inverse Probl., 24 (2008), 034004. https://doi.org/10.1088/0266-5611/24/3/034004 doi: 10.1088/0266-5611/24/3/034004
    [13] N. Bissantz, H. Holzmann, Asymptotics for spectral regularization estimators in statistical inverse problems, Comput. Statist., 28 (2013), 435–453. https://doi.org/10.1007/s00180-012-0309-1 doi: 10.1007/s00180-012-0309-1
    [14] D. D. Cox, Approximation of method of regularization estimators, Ann. Stat., 16 (1988), 694–712. https://doi.org/10.1214/aos/1176350829 doi: 10.1214/aos/1176350829
    [15] H. W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, Kluwer Academic, Dordrecht, Boston, London, 1996. https://doi.org/10.1007/978-94-009-1740-8
    [16] B. T. Knapik, A. W. van der Vaart, J. H. van Zanten, Bayesian recovery of the initial condition for the heat equation, Comm. Statist. Theory Methods, 42 (2013), 1294–1313.
    [17] N. Bochkina, Consistency of the posterior distribution in generalized linear inverse problems, Inverse Probl., 29 (2013), 095010. https://doi.org/10.1088/0266-5611/29/9/095010 doi: 10.1088/0266-5611/29/9/095010
    [18] R. Plato, Converse results, saturation and quasi-optimality for Lavrentiev regularization of accretive problems, SIAM J. Numer. Anal., 55 (2017), 1315–1329. https://doi.org/10.1137/16M1089125 doi: 10.1137/16M1089125
    [19] L. Cavalier, Inverse problems in statistics. Inverse problems and high-dimensional estimation, In: Alquier P., Gautier E., Stoltz G. (eds) Inverse Problems and High-Dimensional Estimation. Lecture Notes in Statistics, vol 203. Springer, Berlin, Heidelberg, 3–96. https://doi.org/10.1007/978-3-642-19989-9
    [20] M. Kirane, E. Nane, N. H. Tuan, On a backward problem for multidimensional Ginzburg-Landau equation with random data, Inverse Probl., 34 (2018), 015008. https://doi.org/10.1088/1361-6420/aa9c2a doi: 10.1088/1361-6420/aa9c2a
    [21] R. Lattes, J. L. Lions, Methode de Quasi-reversibility et Applications, Dunod, Paris, 1967
    [22] J. L. Lions, Quelques méthodes de résolution des problèmes aux limites nonlinéaires, Dunod; Gauthier – Villars, Paris, 1969.
    [23] L. C. Evans, Partial Differential Equations, American Mathematical Society, Providence, Rhode Island, Volume 19, 1997.
    [24] C. Cao, M. A. Rammaha, E. S. Titi, The Navier-Stokes equations on the rotating 2-D sphere: Gevrey regularity and asymptotic degrees of freedom, Z. Angew. Math. Phys., 50 (1999), 341–360. https://doi.org/10.1007/PL00001493 doi: 10.1007/PL00001493
    [25] R. Courant, D. Hilbert, Methods of mathematical physics, New York (NY): Interscience; 1953.
    [26] J. Wu, W. Wang, On backward uniqueness for the heat operator in cones, J. Differ. Equ., 258 (2015), 224–241. https://doi.org/10.1016/j.jde.2014.09.011
    [27] A. Rüland, On the backward uniqueness property for the heat equation in two-dimensional conical domains, Manuscr. Math., 147 (2015), 415–436. https://doi.org/10.1007/s00229-015-0764-4
    [28] L. Li, V. Šverák, Backward uniqueness for the heat equation in cones, Commun. Partial Differ. Equ., 37 (2012), 1414–1429. https://doi.org/10.1080/03605302.2011.635323
    [29] N. H. Tuan, P. H. Quan, Some extended results on a nonlinear ill-posed heat equation and remarks on a general case of nonlinear terms, Nonlinear Anal. Real World Appl., 12 (2011), 2973–2984. https://doi.org/10.1016/j.nonrwa.2011.04.018
    [30] D. D. Trong, N. H. Tuan, Regularization and error estimate for the nonlinear backward heat problem using a method of integral equation, Nonlinear Anal., 71 (2009), 4167–4176. https://doi.org/10.1016/j.na.2009.02.092
    [31] P. T. Nam, An approximate solution for nonlinear backward parabolic equations, J. Math. Anal. Appl., 367 (2010), 337–349. https://doi.org/10.1016/j.jmaa.2010.01.020
    [32] M. Chipot, Elements of Nonlinear Analysis, Birkhäuser Advanced Texts: Basler Lehrbücher, Birkhäuser Verlag, Basel, 2000. https://doi.org/10.1007/978-3-0348-8428-0
    [33] J. G. Skellam, Random dispersal in theoretical populations, Biometrika, 38 (1951), 196–218. https://doi.org/10.1093/biomet/38.1-2.196
    [34] L. T. P. Ngoc, A. P. N. Dinh, N. T. Long, On a nonlinear heat equation associated with Dirichlet-Robin conditions, Numer. Funct. Anal. Optim., 33 (2012), 166–189. https://doi.org/10.1080/01630563.2011.594198
    [35] N. H. Tuan, L. D. Thang, V. A. Khoa, T. Tran, On an inverse boundary value problem of a nonlinear elliptic equation in three dimensions, J. Math. Anal. Appl., 426 (2015), 1232–1261. https://doi.org/10.1016/j.jmaa.2014.12.047
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
