In this paper, we are interested in the backward-in-time problem for nonlinear parabolic equations with time- and space-dependent coefficients. The main purpose of this paper is to study the problem of determining the initial condition of nonlinear parabolic equations from noisy observations of the final condition, where the final data are perturbed by Gaussian white noise. We introduce a regularized method to construct an approximate solution, and we establish an upper bound on the rate of convergence of the mean integrated squared error. This article is inspired by the work of Tuan and Nane [
Citation: Vinh Quang Mai, Erkan Nane, Donal O'Regan, Nguyen Huy Tuan. Terminal value problem for nonlinear parabolic equation with Gaussian white noise[J]. Electronic Research Archive, 2022, 30(4): 1374-1413. doi: 10.3934/era.2022072
The forward problem for parabolic equations is to find the distribution at a later time from a known initial distribution. In geophysical exploration, one is often faced with the problem of determining the temperature distribution in an object, or in some part of the Earth, at a time $t_0>0$ from temperature measurements at a later time $t_1>t_0$. This is the backward-in-time parabolic problem. Backward parabolic problems arise in several practical areas such as image processing, mathematical finance, and physics (see [2,3]). Let $T$ be a positive number and let $\Omega$ be an open, bounded and connected domain in $\mathbb{R}^d$, $d\ge1$, with a smooth boundary $\partial\Omega$. In this paper, we consider the question of finding the function $u(x,t)$, $(x,t)\in\Omega\times[0,T]$, satisfying the nonlinear problem
$$\begin{cases} u_t - \nabla\cdot\big(a(x,t)\nabla u\big) = F(x,t,u(x,t)), & (x,t)\in\Omega\times(0,T),\\ u|_{\partial\Omega} = 0, & t\in(0,T),\\ u(x,T) = g(x), & x\in\Omega, \end{cases} \qquad (1.1)$$
where the functions $a(x,t)$ and $g(x)$ are given, and the source function $F$ will be specified later. Here the coefficient $a(x,t)$ is a $C^1$ smooth function with $0<\bar m\le a(x,t)\le M$ for all $(x,t)\in\Omega\times(0,T)$, for some finite constants $\bar m$, $M$. The problem is well known to be ill-posed in the sense of Hadamard: a solution corresponding to the data does not always exist, and when it does exist, it does not depend continuously on the given data. In fact, small noise in the physical measurements can produce large errors in the corresponding solutions, so one has to resort to regularization. In the simple case of deterministic noise, Problem (1.1) with $a=1$ and $F=0$ was studied by many authors [4,5,6]. However, in the case of random noise, the analysis of regularization methods is still limited. The problem is to determine the initial temperature $u(x,0)$ given a noisy version of the temperature distribution $g$ at time $T$:
$$g^{obs}_\delta(x) = g(x) + \delta\,\xi(x), \qquad (1.2)$$
where $\delta>0$ is the amplitude of the noise and $\xi$ is Gaussian white noise. In practice, we only observe finitely many noisy coefficients, as follows:
$$\big\langle g^{obs}_\delta,\phi_j\big\rangle = \langle g,\phi_j\rangle + \delta\,\langle\xi,\phi_j\rangle, \qquad j=\overline{1,N}=1,2,3,\cdots,N, \qquad (1.3)$$
where the natural number $N$ is the number of discrete observations and $\phi_j$ is defined in section 2. The main goal is to find an approximate solution $\hat u_N(0)$ for $u(0)$ and then to investigate the rate of convergence of $\mathbb E\|\hat u_N(0)-u(0)\|^2$, which is called the mean integrated squared error (MISE). Here $\mathbb E$ denotes the expectation with respect to the distribution of the data in the model (1.2).
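To make the observation model concrete, here is a minimal sketch of how the discrete observations (1.3) can be simulated; the Fourier coefficients of $g$ and the noise level are invented for illustration:

```python
import random

def noisy_coefficients(g_coeffs, delta, rng):
    """Simulate the discrete observations (1.3):
    <g_obs, phi_j> = <g, phi_j> + delta * xi_j, with xi_j ~ N(0, 1) i.i.d.
    (the pairing of white noise with an orthonormal basis)."""
    return [gj + delta * rng.gauss(0.0, 1.0) for gj in g_coeffs]

rng = random.Random(0)
g = [1.0 / (j * j) for j in range(1, 51)]        # toy coefficients <g, phi_j>
obs = noisy_coefficients(g, delta=1e-3, rng=rng)
worst = max(abs(o - t) for o, t in zip(obs, g))  # per-mode deviation is O(delta)
```

Each observed coefficient is an independent Gaussian perturbation of the true one, which is exactly the statistical model studied below.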
There are two main approaches to modeling noise in inverse problems. The first approach is based on a deterministic technique, where one assumes that the noise is definite and small. The second approach is based on a statistical point of view, and in this approach one does not need to assume small levels of noise. In this paper we take the statistical point of view for the backward parabolic equation. Our aim is to reconstruct the initial function from disturbed measurements of the final values in a statistical inverse problem framework. There are many different types of random noise, but we are interested in Gaussian noise here. The models (1.2) and (1.3) were considered in some recent papers; see [7,8,9,10,11]. In signal processing, Gaussian white noise is a random signal of equal intensity at all frequencies, giving it a constant power spectral density; the term is also used in physics, acoustic engineering, telecommunications, and statistical forecasting.
The inverse problem with random noise has a long history. The simple case of (1.1) is the homogeneous linear parabolic equation of finding the initial data $u_0 := u(x,0)$ that satisfies
$$\begin{cases} u_t - \Delta u = 0, & (x,t)\in\Omega\times(0,T),\\ u|_{\partial\Omega}=0, & t\in(0,T),\\ u(x,T)=g(x), & x\in\Omega. \end{cases} \qquad (1.4)$$
This equation is a special case of a statistical inverse problem, and it can be written as a linear operator equation with random noise
$$g = K u_0 + \text{"noise"}, \qquad (1.5)$$
where $K$ is a bounded linear operator that does not have a continuous inverse. Formula (1.5) is interpreted as saying that $Ku_0$ deviates from the function $g$ by a random error.
Problem (1.4) was studied by well-known methods, including the spectral cut-off (also called truncation) method [7,9,12,13], the Tikhonov method [14], iterative regularization methods [15], the Bayes estimation method [16,17], and the Lavrentiev regularization method [18]. In some of these works, the authors show that the error $\mathbb E\|\hat u_N(0)-u(0)\|^2$ tends to zero when $N$ is suitably chosen according to the value of $\delta$ and $\delta\to0$. For more details, we refer the reader to [19].
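As a toy illustration of the spectral cut-off idea for the linear problem (1.4) on $\Omega=(0,\pi)$, where $\lambda_j=j^2$: the initial coefficients are recovered by multiplying the final-data coefficients by $e^{T\lambda_j}$ and truncating at level $N$. The coefficients below are invented for illustration:

```python
import math

def spectral_cutoff_u0(g_coeffs, T, N):
    """Truncation estimator for the linear problem (1.4) with lambda_j = j^2:
    <u0, phi_j> = exp(T * lambda_j) * <g, phi_j> for j <= N, discarding j > N."""
    return [math.exp(T * j * j) * g_coeffs[j - 1] for j in range(1, N + 1)]

# exact initial coefficients c_j give final data g_j = exp(-T j^2) c_j
T = 1.0
c = [1.0, 0.5, 0.25]
g = [math.exp(-T * j * j) * c[j - 1] for j in range(1, 4)]
u0_hat = spectral_cutoff_u0(g, T, N=3)  # noise-free data: c is recovered
```

With noisy data the factor $e^{T\lambda_j}$ amplifies the noise by $e^{TN^2}$ at mode $N$, which is exactly why the cut-off level must be tied to $\delta$.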
To the best of our knowledge, there are no results for the backward problem for nonlinear parabolic equations with Gaussian white noise. There are two types of difficulty in solving our problem. The first occurs because the problem is nonlinear, and nonlinear problems with random noise are more difficult since one cannot apply the well-known linear methods. The second is the random noise in the data, which makes the problem computationally complex. Computation with random data requires some knowledge of stochastic processes, so one has to consider the expectation.
Very recently, in [20], the authors studied a discrete random model for backward nonlinear parabolic problems. However, the problem considered in [20] is posed on a rectangular domain, which is restrictive in practice. The present paper uses another random model and also gives an approximation of the solution in the case of more general bounded, smooth domains $\Omega$. Our task in this paper is to show that the expected distance between the solution and the approximate solution converges to zero as $N$ tends to infinity.
This paper is organized as follows. In section 2, we give some preliminary results. In section 3, we explain the ill-posedness of the problem. To help the reader, we divide the problem into three cases under various assumptions on the coefficient $a$ and the source function $F$. Case 1: $a:=a(x,t)$ is a constant and $F$ is a globally Lipschitz function. In section 4, we study this case and give convergence rates in the $L^2$ and $H^p$ ($p>0$) norms. The method here is the well-known spectral method. The main idea is to approximate the final data $g$ by approximate data and to use this function to establish a regularized problem by the truncation method.
Case 2: $a:=a(x,t)$ depends on $x$ and $t$, and $F$ is a locally Lipschitz function. This problem is more difficult. In most practical problems, the function $F$ is only locally Lipschitz. The difficulty lies in the fact that the solution cannot be expanded in a Fourier series, and therefore we cannot apply the well-known methods to find an approximate solution. In section 5, we study a new form of the quasi-reversibility method to construct a regularized solution and obtain a convergence rate. Our method is new and very different from the method of Lattes and Lions [21]. We approximate the locally Lipschitz function by a sequence of globally Lipschitz functions and use some new techniques to obtain the convergence rate.
Case 3: Various assumptions on $F$. In practice there are many functions that are not locally Lipschitz, so our analysis in section 5 cannot be applied in section 6. Our method in section 6 is also a quasi-reversibility method and is very similar to the method in section 5. However, in section 6 we do not approximate $F$ as we do in section 5, which leads to a convergence rate that is better than the one in section 5. One difficulty in this section is showing the existence and uniqueness of the regularized solution. To prove the existence of the regularized solution, we do not follow the previously mentioned methods. Instead, we use the Faedo-Galerkin method and the compactness method introduced by Lions [22]. To the best of our knowledge, this is the first result where $F$ is not necessarily a locally Lipschitz function. Finally, in section 7, we give some specific equations to which our method can be applied.
To give some details on the random model (1.2), we give the following definitions (see [12,19]):
Definition 2.1. Let $H$ be a Hilbert space, and let $g, g_\delta\in H$ satisfy (1.2). We interpret the equality in the formula
$$g^{obs}_\delta(x) = g(x) + \delta\,\xi(x)$$
as follows:
$$\langle g_\delta,\chi\rangle = \langle g,\chi\rangle + \delta\,\langle\xi,\chi\rangle, \qquad \forall\chi\in H, \qquad (2.1)$$
where $\delta$ is the amplitude of the noise. We also assume that $\xi$ is a zero-mean Gaussian random process indexed by $H$ on a probability space, with $\langle\xi,\chi\rangle\sim N(0,\|\chi\|^2_H)$. Moreover, given $\chi_1,\chi_2\in H$, then
$$\mathbb E\big(\langle\xi,\chi_1\rangle\,\langle\xi,\chi_2\rangle\big) = \langle\chi_1,\chi_2\rangle_H. \qquad (2.2)$$
Definition 2.2. The stochastic error is a Hilbert-space process, i.e., a bounded linear operator $\xi: H\to L^2(\Omega,\mathcal A,\mathbb P)$, where $(\Omega,\mathcal A,\mathbb P)$ is the underlying probability space and $L^2(\Omega,\mathcal A,\mathbb P)$ is the space of all square-integrable measurable functions.
Let us recall that the eigenvalue problem
$$\begin{cases} -\Delta\phi_j(x) = \lambda_j\,\phi_j(x), & x\in\Omega,\\ \phi_j(x)=0, & x\in\partial\Omega, \end{cases} \qquad (2.3)$$
admits a family of eigenvalues $0<\lambda_1\le\lambda_2\le\lambda_3\le\cdots\le\lambda_j\le\cdots$ with $\lambda_j\to\infty$ as $j\to\infty$, and corresponding eigenfunctions $\{\phi_j\}$; see page 335 in [23].
Next, we introduce the abstract Gevrey class of functions of index σ>0, see, e.g., [24], defined by
$$W_\sigma = \Big\{v\in L^2(\Omega):\ \sum_{j=1}^\infty e^{2\sigma\lambda_j}\big|\langle v,\phi_j\rangle_{L^2(\Omega)}\big|^2 < \infty\Big\},$$
which is a Hilbert space equipped with the inner product
$$\langle v_1,v_2\rangle_{W_\sigma} := \big\langle e^{-\sigma\Delta}v_1,\,e^{-\sigma\Delta}v_2\big\rangle_{L^2(\Omega)}, \qquad \text{for all } v_1,v_2\in W_\sigma;$$
its corresponding norm is $\|v\|_{W_\sigma} = \Big(\sum_{j=1}^\infty e^{2\sigma\lambda_j}\big|\langle v,\phi_j\rangle_{L^2(\Omega)}\big|^2\Big)^{1/2}<\infty$.
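A small numerical sketch of the Gevrey norm, using the toy eigenvalues $\lambda_j=j^2$ and invented, rapidly decaying coefficients:

```python
import math

def gevrey_norm_sq(coeffs, eigvals, sigma):
    """Squared Gevrey norm: sum_j exp(2*sigma*lambda_j) * |<v, phi_j>|^2."""
    return sum(math.exp(2.0 * sigma * lam) * vj * vj
               for vj, lam in zip(coeffs, eigvals))

eig = [j * j for j in range(1, 6)]            # lambda_j = j^2 on (0, pi)
v = [math.exp(-j * j) for j in range(1, 6)]   # rapidly decaying coefficients
l2_sq = gevrey_norm_sq(v, eig, sigma=0.0)     # sigma = 0 is the L2 norm squared
w_sq = gevrey_norm_sq(v, eig, sigma=0.5)      # finite because v decays fast enough
```

The weight $e^{2\sigma\lambda_j}$ makes $W_\sigma$ a strictly smaller space than $L^2(\Omega)$: membership requires the Fourier coefficients to decay exponentially in $\lambda_j$.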
The ill-posedness of the backward heat equation is well known and has appeared in many previous articles. In the random case, however, we need a specific example to illustrate the ill-posedness. Because of the expectation involved, assessing the instability of the random model is considerably more complicated than for the deterministic model, so we choose a simple setting for a suitable example. In this section, for a special case of Eq (1.1), we show that the nonlinear parabolic equation with random noise is ill-posed in the sense of Hadamard.
Theorem 3.1. Problem (1.1) is ill-posed in the special case when a=1,Ω=(0,π).
Proof. Let $\Omega=(0,\pi)$ and $a(x,t)=1$; then $\lambda_N=N^2$. Let us consider the following parabolic equation
$$\begin{cases} \dfrac{\partial V_{\delta,N(\delta)}}{\partial t} - \Delta V_{\delta,N(\delta)} = F_0\big(V_{\delta,N(\delta)}(x,t)\big), & 0<t<T,\ x\in(0,\pi),\\ V_{\delta,N(\delta)}(0,t) = V_{\delta,N(\delta)}(\pi,t) = 0, & t\in(0,T),\\ V_{\delta,N(\delta)}(x,T) = G_{\delta,N(\delta)}(x), & x\in(0,\pi), \end{cases} \qquad (3.1)$$
where F0 is
$$F_0(v(x)) = \sum_{j=1}^\infty \frac{e^{-Tj^2}}{2T}\,\langle v,\phi_j\rangle\,\phi_j(x) \qquad (3.2)$$
for any $v\in L^2(\Omega)$, where $\phi_j(x)=\sqrt{2/\pi}\,\sin(jx)$. Let us choose $G_{\delta,N(\delta)}\in L^2(\Omega)$ such that
$$G_{\delta,N(\delta)}(x) = \sum_{j=1}^{N(\delta)}\langle g_\delta,\phi_j\rangle\,\phi_j(x), \qquad (3.3)$$
where gδ is defined by
$$\langle g_\delta,\phi_j\rangle = \delta\,\langle\xi,\phi_j\rangle, \qquad j\in\overline{1,N}=\{j\in\mathbb N:\ 1\le j\le N\}. \qquad (3.4)$$
By Parseval's identity and $\mathbb E\xi_j^2=1$, we get
$$\mathbb E\big\|G_{\delta,N(\delta)}\big\|^2_{L^2(\Omega)} = \mathbb E\Big(\sum_{j=1}^{N(\delta)}\langle G_{\delta,N(\delta)},\phi_j\rangle^2\Big) = \delta^2\,\mathbb E\Big(\sum_{j=1}^{N(\delta)}\xi_j^2\Big) = \delta^2\,N(\delta). \qquad (3.5)$$
The solution of Problem (3.1) is given by the Fourier series (see [29])
$$V_{\delta,N(\delta)}(x,t) = \sum_{j=1}^\infty\Big[e^{(T-t)\lambda_j}\langle G_{\delta,N(\delta)},\phi_j\rangle - \int_t^T e^{(s-t)\lambda_j}\big\langle F_0(V_{\delta,N(\delta)}(s)),\phi_j\big\rangle\,ds\Big]\phi_j. \qquad (3.6)$$
We show that Problem (3.6) has a unique solution $V_{\delta,N(\delta)}\in C([0,T];L^2(\Omega))$. Let us consider
$$\Phi v := \sum_{j=1}^\infty\Big[e^{(T-t)\lambda_j}\langle G_{\delta,N(\delta)},\phi_j\rangle - \int_t^T e^{(s-t)\lambda_j}\big\langle F_0(v(s)),\phi_j\big\rangle\,ds\Big]\phi_j. \qquad (3.7)$$
For any $v_1,v_2\in C([0,T];L^2(\Omega))$, using the Hölder inequality, we have for all $t\in[0,T]$
$$\begin{aligned} \|\Phi v_1(t)-\Phi v_2(t)\|^2_{L^2(\Omega)} &= \sum_{j=1}^\infty\Big[\int_t^T e^{(s-t)\lambda_j}\big\langle F_0(v_1(s))-F_0(v_2(s)),\phi_j\big\rangle\,ds\Big]^2\\ &\le T\sum_{j=1}^\infty\int_t^T e^{2(s-t)\lambda_j}\big\langle F_0(v_1(s))-F_0(v_2(s)),\phi_j\big\rangle^2\,ds\\ &= \frac{T}{4T^2}\sum_{j=1}^\infty\int_t^T e^{2(s-t-T)\lambda_j}\big\langle v_1(s)-v_2(s),\phi_j\big\rangle^2\,ds\\ &\le \frac{1}{4T}\sum_{j=1}^\infty\int_t^T\big\langle v_1(s)-v_2(s),\phi_j\big\rangle^2\,ds \le \frac14\,\|v_1-v_2\|^2_{C([0,T];L^2(\Omega))}. \end{aligned} \qquad (3.8)$$
Hence, we obtain that
$$\|\Phi v_1-\Phi v_2\|_{C([0,T];L^2(\Omega))} \le \frac12\,\|v_1-v_2\|_{C([0,T];L^2(\Omega))}. \qquad (3.9)$$
Thus $\Phi$ is a contraction. By the contraction principle, we conclude that the equation $\Phi(w)=w$ has a unique solution $V_{\delta,N(\delta)}\in C([0,T];L^2(\Omega))$. Using the inequality $a^2+b^2\ge\frac12(a-b)^2$, $a,b\in\mathbb R$, we have the following estimate
$$\big\|V_{\delta,N(\delta)}\big\|^2_{L^2(\Omega)} \ge \underbrace{\frac12\Big\|\sum_{j=1}^\infty e^{(T-t)\lambda_j}\langle G_{\delta,N(\delta)},\phi_j\rangle\,\phi_j\Big\|^2_{L^2(\Omega)}}_{I_1} - \underbrace{\Big\|\sum_{j=1}^\infty\Big(\int_t^T e^{(s-t)\lambda_j}\big\langle F_0(V_{\delta,N(\delta)}(s)),\phi_j\big\rangle\,ds\Big)\phi_j\Big\|^2_{L^2(\Omega)}}_{I_2}. \qquad (3.10)$$
First, using Hölder's inequality, we get
$$\begin{aligned} I_2 &\le \sum_{j=1}^\infty\Big(\int_t^T e^{(s-t)\lambda_j}\big\langle F_0(V_{\delta,N(\delta)}(s)),\phi_j\big\rangle\,ds\Big)^2 \le T\sum_{j=1}^\infty\int_t^T e^{2(s-t)\lambda_j}\big\langle F_0(V_{\delta,N(\delta)}(s)),\phi_j\big\rangle^2\,ds\\ &\le \frac{T}{4T^2}\int_t^T\sum_{j=1}^\infty e^{2(s-t-T)\lambda_j}\big\langle V_{\delta,N(\delta)}(s),\phi_j\big\rangle^2\,ds \le \frac14\,\big\|V_{\delta,N(\delta)}\big\|^2_{C([0,T];L^2(\Omega))}. \end{aligned} \qquad (3.11)$$
We have the lower bound for $I_1$:
$$\mathbb E I_1 = \frac12\sum_{j=1}^\infty e^{2(T-t)\lambda_j}\,\mathbb E\langle G_{\delta,N(\delta)},\phi_j\rangle^2 = \frac12\sum_{j=1}^{N(\delta)}\delta^2 e^{2(T-t)\lambda_j} \ge \frac12\,\delta^2 e^{2(T-t)\lambda_{N(\delta)}}. \qquad (3.12)$$
Combining (3.10)-(3.12), we obtain
$$\mathbb E\big\|V_{\delta,N(\delta)}\big\|^2_{L^2(\Omega)} + \frac14\,\mathbb E\big\|V_{\delta,N(\delta)}\big\|^2_{C([0,T];L^2(\Omega))} \ge \frac12\,\delta^2 e^{2(T-t)\lambda_{N(\delta)}}. \qquad (3.13)$$
By taking the supremum of both sides over $[0,T]$, we get
$$\mathbb E\big\|V_{\delta,N(\delta)}\big\|^2_{C([0,T];L^2(\Omega))} \ge \frac25\,\sup_{0\le t\le T}\delta^2 e^{2(T-t)\lambda_{N(\delta)}} = \frac25\,\delta^2 e^{2T\lambda_{N(\delta)}} = \frac25\,\delta^2 e^{2TN^2(\delta)}. \qquad (3.14)$$
Choosing $N:=N(\delta)=\sqrt{\frac{3}{2T}\ln\big(\frac1\delta\big)}$, so that $e^{2TN^2(\delta)}=\delta^{-3}$, we obtain
$$\mathbb E\big\|G_{\delta,N(\delta)}\big\|^2_{L^2(\Omega)} = \delta^2 N(\delta) = \delta^2\sqrt{\frac{3}{2T}\ln\Big(\frac1\delta\Big)} \to 0, \qquad \text{when } \delta\to0, \qquad (3.15)$$
and
$$\mathbb E\big\|V_{\delta,N(\delta)}\big\|^2_{C([0,T];L^2(\Omega))} \ge \frac25\,\delta^2 e^{2TN^2(\delta)} = \frac25\,\frac1\delta \to +\infty, \qquad \text{when } \delta\to0. \qquad (3.16)$$
From (3.15) and (3.16), we can conclude that Problem (1.1) is ill-posed.
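The mechanism behind the ill-posedness can be checked numerically: for a cut-off growing like $N(\delta)=\sqrt{\frac{3}{2T}\ln\frac{1}{\delta}}$ (one admissible choice), the data energy $\delta^2 N(\delta)$ tends to zero while the lower bound $\frac{2}{5}\delta^2 e^{2TN^2(\delta)}$ tends to infinity. A minimal sketch with $T=1$:

```python
import math

def data_energy_and_bound(delta, T=1.0):
    """For N(delta) = sqrt((3/(2T)) * ln(1/delta)):
    the data energy delta^2 * N(delta) -> 0, while the lower bound
    (2/5) * delta^2 * exp(2*T*N^2) = (2/5) / delta -> infinity."""
    n_sq = (3.0 / (2.0 * T)) * math.log(1.0 / delta)
    data = delta ** 2 * math.sqrt(n_sq)
    bound = 0.4 * delta ** 2 * math.exp(2.0 * T * n_sq)
    return data, bound

pairs = [data_energy_and_bound(d) for d in (1e-2, 1e-4, 1e-6)]
```

Arbitrarily small data perturbations thus produce arbitrarily large solution perturbations, which is exactly Hadamard ill-posedness.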
In this section, we consider the question of finding the function u(x,t), (x,t)∈Ω×[0,T], that satisfies the problem
$$\begin{cases} u_t - \Delta u = F(x,t,u(x,t)), & (x,t)\in\Omega\times(0,T),\\ u|_{\partial\Omega}=0, & t\in(0,T),\\ u(x,T)=g(x), & x\in\Omega. \end{cases} \qquad (4.1)$$
In this section, we assume there exists a constant $K>0$ such that
$$|F(x,t;u) - F(x,t;v)| \le K\,|u-v|$$
for all $(x,t)\in\Omega\times[0,T]$ and $u,v\in\mathbb R$.
Lemma 4.1. Let $\overline G_{\delta,N(\delta)}\in L^2(\Omega)$ be such that
$$\overline G_{\delta,N(\delta)} = \sum_{j=1}^{N(\delta)}\big\langle g^{obs}_\delta,\phi_j\big\rangle\,\phi_j. \qquad (4.2)$$
Assume that $g\in H^{2\gamma}(\Omega)$. Then we have the estimate
$$\mathbb E\big\|\overline G_{\delta,N(\delta)} - g\big\|^2_{L^2(\Omega)} \le \delta^2 N(\delta) + \frac{1}{\lambda^{2\gamma}_{N(\delta)}}\,\|g\|^2_{H^{2\gamma}(\Omega)} \qquad (4.3)$$
for any $\gamma\ge0$. Here $N$ depends on $\delta$ and satisfies $\lim_{\delta\to0}N(\delta)=+\infty$ and $\lim_{\delta\to0}\delta^2 N(\delta)=0$.
Remark 4.1. Consider the right-hand side of (4.3). In order for it to converge to zero, we require $\lim_{\delta\to0}\delta^2N(\delta)=0$ and the condition
$$\frac{1}{\lambda^{2\gamma}_{N(\delta)}} \to 0, \qquad \delta\to0. \qquad (4.4)$$
Since $\lambda_k\sim k^{2/d}$, we see that
$$\lambda^{2\gamma}_{N(\delta)} \sim \big(N(\delta)\big)^{4\gamma/d},$$
so condition (4.4) holds provided $\lim_{\delta\to0}N(\delta)=+\infty$.
Proof. In the following proof, we work with the discrete model (1.3). By the usual MISE decomposition, which involves a variance term and a bias term, we get
$$\mathbb E\big\|\overline G_{\delta,N(\delta)}-g\big\|^2_{L^2(\Omega)} = \mathbb E\Big(\sum_{j=1}^{N(\delta)}\big\langle g^{obs}_\delta-g,\phi_j\big\rangle^2\Big) + \sum_{j\ge N(\delta)+1}\langle g,\phi_j\rangle^2 = \delta^2\,\mathbb E\Big(\sum_{j=1}^{N(\delta)}\xi_j^2\Big) + \sum_{j\ge N(\delta)+1}\lambda_j^{-2\gamma}\,\lambda_j^{2\gamma}\,\langle g,\phi_j\rangle^2. \qquad (4.5)$$
Since $\xi_j=\langle\xi,\phi_j\rangle\overset{iid}{\sim}N(0,1)$, it follows that $\mathbb E\xi_j^2=1$, so
$$\mathbb E\big\|\overline G_{\delta,N(\delta)}-g\big\|^2_{L^2(\Omega)} \le \delta^2 N(\delta) + \frac{1}{\lambda^{2\gamma}_{N(\delta)}}\,\|g\|^2_{H^{2\gamma}}. \qquad (4.6)$$
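The bias-variance decomposition (4.5) can be verified by a quick Monte Carlo experiment; the coefficients $g_j=j^{-2}$ and all parameters below are illustrative assumptions:

```python
import random

def empirical_mise(g, delta, N, trials, rng):
    """Monte Carlo estimate of E||G_bar - g||^2 for the truncated noisy data (4.2):
    per trial, the variance part sum_{j<=N} (delta*xi_j)^2 plus the
    deterministic bias (tail) part sum_{j>N} g_j^2."""
    total = 0.0
    for _ in range(trials):
        var_part = sum((delta * rng.gauss(0.0, 1.0)) ** 2 for _ in range(N))
        bias_part = sum(gj * gj for gj in g[N:])
        total += var_part + bias_part
    return total / trials

rng = random.Random(1)
g = [1.0 / (j * j) for j in range(1, 101)]
delta, N = 0.05, 20
est = empirical_mise(g, delta, N, trials=2000, rng=rng)
exact = delta ** 2 * N + sum(gj * gj for gj in g[N:])  # right-hand side of (4.5)
```

The empirical average matches $\delta^2 N + \sum_{j>N}\langle g,\phi_j\rangle^2$, illustrating the trade-off: increasing $N$ shrinks the bias but inflates the variance.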
Using the truncation method, we introduce a regularized problem for Problem (1.1) as follows:
$$\begin{cases} \dfrac{\partial}{\partial t}u^\delta_{N(\delta)} - \Delta u^\delta_{N(\delta)} = J_{\alpha_{N(\delta)}}F\big(x,t,u^\delta_{N(\delta)}(x,t)\big), & (x,t)\in\Omega\times(0,T),\\ u^\delta_{N(\delta)}\big|_{\partial\Omega} = 0, & t\in(0,T),\\ u^\delta_{N(\delta)}(x,T) = J_{\alpha_{N(\delta)}}\overline G_{\delta,N(\delta)}(x), & x\in\Omega, \end{cases} \qquad (4.7)$$
where $\alpha_{N(\delta)}$ is a regularization parameter and $J_{\alpha_{N(\delta)}}$ is the operator
$$J_{\alpha_{N(\delta)}}v := \sum_{\lambda_j\le\alpha_{N(\delta)}}\langle v,\phi_j\rangle\,\phi_j, \qquad \text{for all } v\in L^2(\Omega). \qquad (4.8)$$
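The truncation operator (4.8) acts on Fourier coefficients by zeroing the high modes; a minimal sketch with the toy spectrum $\lambda_j=j^2$:

```python
def J_alpha(coeffs, eigvals, alpha):
    """Truncation operator (4.8): keep modes with lambda_j <= alpha, drop the rest."""
    return [vj if lam <= alpha else 0.0 for vj, lam in zip(coeffs, eigvals)]

eig = [j * j for j in range(1, 11)]   # lambda_j = j^2
v = [1.0] * 10
out = J_alpha(v, eig, alpha=20.0)     # lambda_4 = 16 <= 20 < 25 = lambda_5
```

Applied to the final data, $J_{\alpha}$ is what keeps the unstable factor $e^{(T-t)\lambda_j}$ in (4.9) bounded, at the price of a truncation bias.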
Our main result in this section is as follows.
Theorem 4.1. Problem (4.7) has a unique solution $u^\delta_{N(\delta)}\in C([0,T];L^2(\Omega))$ which satisfies
$$u^\delta_{N(\delta)}(x,t) = \sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[e^{(T-t)\lambda_j}\big\langle\overline G_{\delta,N(\delta)},\phi_j\big\rangle - \int_t^T e^{(s-t)\lambda_j}\big\langle F(u^\delta_{N(\delta)}(s)),\phi_j\big\rangle\,ds\Big]\phi_j. \qquad (4.9)$$
Assume that Problem (1.1) has a unique solution $u$ such that
$$\sum_{j=1}^\infty \lambda_j^{2\beta}\,e^{2t\lambda_j}\,\langle u(\cdot,t),\phi_j\rangle^2 < A', \qquad t\in[0,T]. \qquad (4.10)$$
Choose $\alpha_{N(\delta)}$ such that
$$\lim_{\delta\to0}\alpha_{N(\delta)} = +\infty, \qquad \lim_{\delta\to0}\frac{e^{KT\alpha_{N(\delta)}}}{\lambda^\gamma_{N(\delta)}} = 0, \qquad \lim_{\delta\to0} e^{KT\alpha_{N(\delta)}}\sqrt{N(\delta)}\,\delta = 0. \qquad (4.11)$$
Then the following estimate holds:
$$\mathbb E\big\|u(\cdot,t)-u^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le 2e^{2K^2(T-t)}\,e^{-2t\alpha_{N(\delta)}}\Big[\delta^2 N(\delta)\,e^{2T\alpha_{N(\delta)}} + \frac{e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}}\,\|g\|^2_{H^{2\gamma}} + \alpha^{-2\beta}_{N(\delta)}\Big]. \qquad (4.12)$$
Remark 4.2. 1. From the theorem above, it is easy to see that $\mathbb E\|u^\delta_{N(\delta)}(x,t)-u(x,t)\|^2_{L^2(\Omega)}$ is of the order
$$e^{-2t\alpha_{N(\delta)}}\,\max\Big(\delta^2N(\delta)\,e^{2T\alpha_{N(\delta)}},\ \frac{e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}},\ \alpha^{-2\beta}_{N(\delta)}\Big). \qquad (4.13)$$
2. Now we give one example of a choice of $N(\delta)$ which satisfies condition (4.11). Since $\lambda_N\sim N^{2/d}$, see [25], we choose $\alpha_N$ such that $e^{KT\alpha_{N(\delta)}}=|N(\delta)|^a$ for some $0<a<\frac{2\gamma}{d}$. Then $\alpha_{N(\delta)}=\frac{a}{KT}\log(N(\delta))$. The number $N(\delta)$ is chosen as
$$N(\delta) = \Big(\frac1\delta\Big)^{\frac{b}{a+\frac b2}}$$
for $0<b<1$. With $N(\delta)$ chosen as above, $\mathbb E\|u^\delta_{N(\delta)}(x,t)-u(x,t)\|^2_{L^2(\Omega)}$ is of the order $\big(\frac1\delta\big)^{-\frac{b}{a+\frac b2}\cdot\frac{at}{KT}}$.
3. The existence and uniqueness of the solution of Eq (1.1) is an open problem, and we do not investigate it here. The case considered in Theorem 3.1 gives the existence of the solution of Problem (1.1) in a special case. The uniqueness of the backward parabolic problem has attracted the attention of many authors (see, for example, [26,27,28]), and this is also a challenging open problem.
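A numerical sanity check for a parameter choice of the kind sketched in Remark 4.2 (the exponents `a`, `b` and the constants `K`, `T` below are illustrative assumptions): the third condition in (4.11), $e^{KT\alpha_{N(\delta)}}\sqrt{N(\delta)}\,\delta\to0$, indeed holds as $\delta\to0$.

```python
import math

def reg_params(delta, a, b, K, T):
    """Hypothetical choice: N(delta) = (1/delta)^(b/(a + b/2)) and
    alpha = (a/(K*T)) * log(N), so that exp(K*T*alpha) = N^a."""
    N = (1.0 / delta) ** (b / (a + 0.5 * b))
    alpha = (a / (K * T)) * math.log(N)
    return N, alpha

K, T, a, b = 1.0, 1.0, 1.0, 0.5
third_condition = []
for delta in (1e-2, 1e-4, 1e-8):
    N, alpha = reg_params(delta, a, b, K, T)
    third_condition.append(math.exp(K * T * alpha) * math.sqrt(N) * delta)
```

Here $e^{KT\alpha}\sqrt N\,\delta = N^{a+1/2}\delta = \delta^{1-\frac{b(a+1/2)}{a+b/2}}$, and the exponent is positive whenever $0<b<1$, so the quantity decreases along $\delta\to0$.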
Proof of Theorem 4.1. We divide the proof into several parts.
Part 1. Problem (4.7) has a unique solution $u^\delta_{N(\delta)}\in C([0,T];L^2(\Omega))$. The proof is similar to that in [29] (see Theorem 3.1, page 2975, [29]), hence we omit it here.
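Existence proofs of this type rest on the Banach fixed-point theorem: iterating a contraction converges geometrically to the unique fixed point. A toy sketch (the scalar map here stands in for the integral operator on $C([0,T];L^2(\Omega))$ and is purely illustrative):

```python
def picard(phi, w0, iters):
    """Picard iteration w_{k+1} = phi(w_k); for a contraction with constant q < 1
    the error after k steps is O(q^k)."""
    w = w0
    for _ in range(iters):
        w = phi(w)
    return w

# scalar contraction with Lipschitz constant 1/2; its fixed point is w = 2
phi = lambda w: 0.5 * w + 1.0
w = picard(phi, 0.0, 40)
```

The same geometric convergence is what makes the fixed-point formulation (4.9) usable in practice: a few dozen iterations already reach machine precision.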
Part 2. We estimate the expectation of the error between the exact solution $u$ and the regularized solution $u^\delta_{N(\delta)}$.
Let us consider the following integral equation
$$v^\delta_{N(\delta)}(x,t) = \sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[e^{(T-t)\lambda_j}\langle g,\phi_j\rangle - \int_t^T e^{(s-t)\lambda_j}\big\langle F(v^\delta_{N(\delta)}(s)),\phi_j\big\rangle\,ds\Big]\phi_j. \qquad (4.14)$$
We have
$$\begin{aligned} \big\|u^\delta_{N(\delta)}(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} &\le 2\sum_{\lambda_j\le\alpha_{N(\delta)}} e^{2(T-t)\lambda_j}\big\langle\overline G_{\delta,N(\delta)}-g,\phi_j\big\rangle^2 + 2\sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[\int_t^T e^{(s-t)\lambda_j}\big(F_j(u^\delta_{N(\delta)})(s)-F_j(v^\delta_{N(\delta)})(s)\big)\,ds\Big]^2\\ &\le 2e^{2(T-t)\alpha_{N(\delta)}}\sum_{\lambda_j\le\alpha_{N(\delta)}}\big\langle\overline G_{\delta,N(\delta)}-g,\phi_j\big\rangle^2 + 2(T-t)\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\sum_{\lambda_j\le\alpha_{N(\delta)}}\big(F_j(u^\delta_{N(\delta)})(s)-F_j(v^\delta_{N(\delta)})(s)\big)^2\,ds\\ &\le 2e^{2(T-t)\alpha_{N(\delta)}}\big\|\overline G_{\delta,N(\delta)}-g\big\|^2_{L^2(\Omega)} + 2K^2T\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\big\|u^\delta_{N(\delta)}(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds. \end{aligned} \qquad (4.15)$$
Taking the expectation of both sides of the last inequality, we get
$$\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le 2e^{2(T-t)\alpha_{N(\delta)}}\,\mathbb E\big\|\overline G_{\delta,N(\delta)}-g\big\|^2_{L^2(\Omega)} + 2K^2T\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\,\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds. \qquad (4.16)$$
Multiplying both sides by $e^{2t\alpha_{N(\delta)}}$, we obtain
$$e^{2t\alpha_{N(\delta)}}\,\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le 2e^{2T\alpha_{N(\delta)}}\,\mathbb E\big\|\overline G_{\delta,N(\delta)}-g\big\|^2_{L^2(\Omega)} + 2K^2T\int_t^T e^{2s\alpha_{N(\delta)}}\,\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds. \qquad (4.17)$$
Applying Gronwall's inequality, we get
$$e^{2t\alpha_{N(\delta)}}\,\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le 2e^{2T\alpha_{N(\delta)}}\,e^{2K^2T(T-t)}\,\mathbb E\big\|\overline G_{\delta,N(\delta)}-g\big\|^2_{L^2(\Omega)}. \qquad (4.18)$$
Hence, using Lemma 4.1, we deduce that
$$\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le 2e^{2K^2T(T-t)}\,e^{2(T-t)\alpha_{N(\delta)}}\,\mathbb E\big\|\overline G_{\delta,N(\delta)}-g\big\|^2_{L^2(\Omega)} \le 2e^{2K^2T(T-t)}\,e^{2(T-t)\alpha_{N(\delta)}}\Big(\delta^2N(\delta) + \frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}\Big). \qquad (4.19)$$
Now we estimate $\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\|_{L^2(\Omega)}$. Indeed, using Hölder's inequality and the globally Lipschitz property of $F$, we get
$$\begin{aligned} \big\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} &\le 2\sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[\int_t^T e^{(s-t)\lambda_j}\big(F_j(u)(s)-F_j(v^\delta_{N(\delta)})(s)\big)ds\Big]^2 + 2\sum_{\lambda_j>\alpha_{N(\delta)}}\langle u(t),\phi_j\rangle^2\\ &\le 2\sum_{\lambda_j>\alpha_{N(\delta)}}\lambda_j^{-2\beta}e^{-2t\lambda_j}\,\lambda_j^{2\beta}e^{2t\lambda_j}\langle u(t),\phi_j\rangle^2 + 2K^2\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\big\|u(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds\\ &\le \alpha^{-2\beta}_{N(\delta)}e^{-2t\alpha_{N(\delta)}}\sum_{j=1}^\infty\lambda_j^{2\beta}e^{2t\lambda_j}\langle u(t),\phi_j\rangle^2 + 2K^2\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\big\|u(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds; \end{aligned}$$
above, we have used the mild solution formula for $u$:
$$u(x,t) = \sum_{j=1}^\infty\Big[e^{(T-t)\lambda_j}\langle g,\phi_j\rangle - \int_t^T e^{(s-t)\lambda_j}\big\langle F(u(s)),\phi_j\big\rangle\,ds\Big]\phi_j.$$
Multiplying both sides by $e^{2t\alpha_{N(\delta)}}$, we obtain
$$e^{2t\alpha_{N(\delta)}}\big\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le \alpha^{-2\beta}_{N(\delta)}\sum_{j=1}^\infty\lambda_j^{2\beta}e^{2t\lambda_j}\langle u(\cdot,t),\phi_j\rangle^2 + 2K^2\int_t^T e^{2s\alpha_{N(\delta)}}\big\|u(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds. \qquad (4.20)$$
Gronwall's inequality implies that
$$e^{2t\alpha_{N(\delta)}}\big\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le e^{2K^2(T-t)}\,\alpha^{-2\beta}_{N(\delta)}\,A'. \qquad (4.21)$$
This together with the estimate (4.19) leads to
$$\begin{aligned} \mathbb E\big\|u(\cdot,t)-u^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} &\le 2\,\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} + 2\big\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)}\\ &\le 2e^{2K^2T(T-t)}e^{2(T-t)\alpha_{N(\delta)}}\Big(\delta^2N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}\Big) + 2\alpha^{-2\beta}_{N(\delta)}\,e^{-2t\alpha_{N(\delta)}}\,e^{2K^2(T-t)}A', \end{aligned} \qquad (4.22)$$
where $A'$ is given in Eq (4.10). This completes the proof.
The next theorem provides an error estimate in the Sobolev space $H^p(\Omega)$, which is equipped with the norm defined by
$$\|g\|^2_{H^p(\Omega)} = \sum_{j=1}^\infty \lambda_j^p\,\langle g,\phi_j\rangle^2. \qquad (4.23)$$
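The $H^p$ norm (4.23) is computed directly from the Fourier coefficients; a small sketch with toy eigenvalues and coefficients:

```python
import math

def hp_norm_sq(coeffs, eigvals, p):
    """Squared H^p norm (4.23): sum_j lambda_j^p * <g, phi_j>^2."""
    return sum((lam ** p) * gj * gj for gj, lam in zip(coeffs, eigvals))

eig = [j * j for j in range(1, 6)]          # lambda_j = j^2
g = [1.0 / j ** 3 for j in range(1, 6)]     # toy coefficients
l2_sq = hp_norm_sq(g, eig, p=0)             # p = 0 recovers the L2 norm squared
h_sq = hp_norm_sq(g, eig, p=1)              # p = 1 weights high modes by lambda_j
```

Because the weights $\lambda_j^p$ grow with $j$, the $H^p$ norm penalizes high-frequency content more strongly than the $L^2$ norm, which is why a stronger assumption on $u$ is needed for an $H^p$ error estimate.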
To estimate the error in the $H^p$ norm, we need a stronger assumption on the solution $u$.
Theorem 4.2. Assume that Problem (1.1) has a unique solution $u$ such that
$$\sum_{j=1}^\infty e^{2(t+r)\lambda_j}\,\langle u(\cdot,t),\phi_j\rangle^2 < A'', \qquad t\in[0,T], \qquad (4.24)$$
for any $r>0$. Choose $\alpha_{N(\delta)}$ such that
$$\lim_{\delta\to0}\alpha_{N(\delta)} = +\infty, \qquad \lim_{\delta\to0}\frac{e^{KT\alpha_{N(\delta)}}}{\lambda^\gamma_{N(\delta)}} = 0, \qquad \lim_{\delta\to0} e^{KT\alpha_{N(\delta)}}\sqrt{N(\delta)}\,\delta = 0. \qquad (4.25)$$
Then the following estimate holds:
$$\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-u(\cdot,t)\big\|^2_{H^p(\Omega)} \le 2e^{2K^2T(T-t)}\,e^{-2t\alpha_{N(\delta)}}\,|\alpha_{N(\delta)}|^p\Big[2\delta^2N(\delta)e^{2T\alpha_{N(\delta)}} + \frac{2e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}} + A''e^{-2r\alpha_{N(\delta)}}\Big] + A''\,|\alpha_{N(\delta)}|^p\,e^{-2(t+r)\alpha_{N(\delta)}}. \qquad (4.26)$$
Proof. First, we have
$$\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-J_{\alpha_{N(\delta)}}u(\cdot,t)\big\|^2_{H^p(\Omega)} = \mathbb E\Big(\sum_{\lambda_j\le\alpha_{N(\delta)}}\lambda_j^p\,\big\langle u^\delta_{N(\delta)}(x,t)-u(x,t),\phi_j(x)\big\rangle^2\Big) \le |\alpha_{N(\delta)}|^p\,\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-u(\cdot,t)\big\|^2_{L^2(\Omega)}. \qquad (4.28)$$
Next, we estimate $\mathbb E\|u^\delta_{N(\delta)}(\cdot,t)-u(\cdot,t)\|^2_{L^2(\Omega)}$ under assumption (4.24). Recall $v^\delta_{N(\delta)}$ from (4.14). The expectation of the error between $u^\delta_{N(\delta)}$ and $v^\delta_{N(\delta)}$ is given in the estimate (4.19) as
$$\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le 2e^{2K^2T(T-t)}\,e^{2(T-t)\alpha_{N(\delta)}}\Big(\delta^2N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}\Big). \qquad (4.29)$$
We only need to estimate $\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\|_{L^2(\Omega)}$. Indeed, using Hölder's inequality and the globally Lipschitz property of $F$, we get
$$\begin{aligned} \big\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} &\le 2\sum_{\lambda_j>\alpha_{N(\delta)}}\langle u(t),\phi_j\rangle^2 + 2\sum_{\lambda_j\le\alpha_{N(\delta)}}\Big[\int_t^T e^{(s-t)\lambda_j}\big(F_j(u)(s)-F_j(v^\delta_{N(\delta)})(s)\big)ds\Big]^2\\ &\le 2\sum_{\lambda_j>\alpha_{N(\delta)}} e^{-2(t+r)\lambda_j}\,e^{2(t+r)\lambda_j}\langle u(t),\phi_j\rangle^2 + 2K^2T\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\big\|u(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds\\ &\le e^{-2(t+r)\alpha_{N(\delta)}}\sum_{j=1}^\infty e^{2(t+r)\lambda_j}\langle u(t),\phi_j\rangle^2 + 2K^2T\int_t^T e^{2(s-t)\alpha_{N(\delta)}}\big\|u(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds. \end{aligned}$$
Multiplying both sides by $e^{2t\alpha_{N(\delta)}}$, we obtain
$$e^{2t\alpha_{N(\delta)}}\big\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le A''\,e^{-2r\alpha_{N(\delta)}} + 2K^2T\int_t^T e^{2s\alpha_{N(\delta)}}\big\|u(\cdot,s)-v^\delta_{N(\delta)}(\cdot,s)\big\|^2_{L^2(\Omega)}\,ds. \qquad (4.30)$$
Gronwall's inequality implies that
$$e^{2t\alpha_{N(\delta)}}\big\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} \le e^{2K^2T(T-t)}\,A''\,e^{-2r\alpha_{N(\delta)}}. \qquad (4.31)$$
This last estimate together with the estimate (4.29) leads to
$$\begin{aligned} \mathbb E\big\|u(\cdot,t)-u^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} &\le 2\,\mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)} + 2\big\|u(\cdot,t)-v^\delta_{N(\delta)}(\cdot,t)\big\|^2_{L^2(\Omega)}\\ &\le 4e^{2K^2T(T-t)}e^{2(T-t)\alpha_{N(\delta)}}\Big(\delta^2N(\delta)+\frac{1}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}}\Big) + 2e^{2K^2T(T-t)}A''\,e^{-2t\alpha_{N(\delta)}}\,e^{-2r\alpha_{N(\delta)}}\\ &= 2e^{2K^2T(T-t)}e^{-2t\alpha_{N(\delta)}}\Big[2\delta^2N(\delta)e^{2T\alpha_{N(\delta)}} + \frac{2e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}} + A''e^{-2r\alpha_{N(\delta)}}\Big]. \end{aligned} \qquad (4.32)$$
On the other hand, consider the function
$$G(\xi) = \xi^p\,e^{-D\xi}, \qquad D>0. \qquad (4.33)$$
Its derivative is $G'(\xi) = \xi^{p-1}e^{-D\xi}(p-D\xi)$, so $G$ is strictly decreasing when $D\xi\ge p$. Since $\lim_{\delta\to0}\alpha_{N(\delta)}=+\infty$, we see that if $\delta$ is small enough then $2r\alpha_{N(\delta)}\ge p$. Putting $D=2(t+r)$ and $\xi=\lambda_j$ into (4.33), we obtain for $\lambda_j>\alpha_{N(\delta)}$
$$G(\lambda_j) = \lambda_j^p\,e^{-2(t+r)\lambda_j} \le G(\alpha_{N(\delta)}) = |\alpha_{N(\delta)}|^p\,e^{-2(t+r)\alpha_{N(\delta)}}.$$
The latter inequality leads to
$$\begin{aligned} \big\|u(\cdot,t)-J_{\alpha_{N(\delta)}}u(\cdot,t)\big\|^2_{H^p(\Omega)} &= \sum_{\lambda_j>\alpha_{N(\delta)}}\lambda_j^p\,\langle u(x,t),\phi_j(x)\rangle^2 = \sum_{\lambda_j>\alpha_{N(\delta)}}\lambda_j^p\,e^{-2(t+r)\lambda_j}\,e^{2(t+r)\lambda_j}\langle u(x,t),\phi_j(x)\rangle^2\\ &\le |\alpha_{N(\delta)}|^p\,e^{-2(t+r)\alpha_{N(\delta)}}\sum_{\lambda_j>\alpha_{N(\delta)}} e^{2(t+r)\lambda_j}\langle u(x,t),\phi_j(x)\rangle^2 \le A''\,|\alpha_{N(\delta)}|^p\,e^{-2(t+r)\alpha_{N(\delta)}}, \end{aligned} \qquad (4.34)$$
where we used assumption (4.24) for the last inequality. Combining (4.28), (4.32) and (4.34), we deduce that
$$\begin{aligned} \mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-u(\cdot,t)\big\|^2_{H^p(\Omega)} &\le \mathbb E\big\|u^\delta_{N(\delta)}(\cdot,t)-J_{\alpha_{N(\delta)}}u(\cdot,t)\big\|^2_{H^p(\Omega)} + \big\|u(\cdot,t)-J_{\alpha_{N(\delta)}}u(\cdot,t)\big\|^2_{H^p(\Omega)}\\ &\le 2e^{2K^2T(T-t)}e^{-2t\alpha_{N(\delta)}}|\alpha_{N(\delta)}|^p\Big[2\delta^2N(\delta)e^{2T\alpha_{N(\delta)}} + \frac{2e^{2T\alpha_{N(\delta)}}}{\lambda^{2\gamma}_{N(\delta)}}\|g\|^2_{H^{2\gamma}} + A''e^{-2r\alpha_{N(\delta)}}\Big] + A''|\alpha_{N(\delta)}|^p\,e^{-2(t+r)\alpha_{N(\delta)}}, \end{aligned} \qquad (4.35)$$
which completes the proof.
Remark 4.3. In the theorem above, we required strong assumptions on $u$ to obtain the error estimate. This is a limitation of Theorems 4.1 and 4.2, because only certain types of functions $u$ satisfy these conditions. To remove this limitation, we need to find a new estimator. Obtaining a convergence rate under weak assumptions on $u$ is a difficult problem. Indeed, in the next theorem, we give a regularization result under the weaker assumption $u\in C([0,T];L^2(\Omega))$. This is one of the first results in this setting.
To help the reader, we describe our analysis and methods in this subsection. To obtain the approximate solution when the solution $u$ is merely in $C([0,T];L^2(\Omega))$, we do not use the regularized solution of Theorem 4.1. Since $\overline G_{\delta,N(\delta)}$ is an approximation of $g$ built from the observations, it can be called the "input data". Recall that $K$ is the Lipschitz constant of $F$. We divide our results in Theorem 4.3 into two cases:
Case 1: $KT<1$. Using the input data $\overline G_{\delta,N(\delta)}$, we construct a new regularized solution, and we obtain the error between this regularized solution and the sought solution $u$.
Case 2: $KT>1$. In this case, the construction of the regularized solution is more difficult. To apply the result of Case 1, we need to divide $[0,T]$ into a collection of subintervals $[T_h,T_{h'}]$ with $K(T_{h'}-T_h)<1$. From given input data $f$ and an appropriate regularization parameter $\zeta$, we define the output function $Y^\zeta_{T_h,T_{h'}}(f)(x,t)$ as the solution of the nonlinear integral equation (4.37). The existence of $Y^\zeta_{T_h,T_{h'}}(f)$ in $C([T_h,T_{h'}];L^2(\Omega))$ holds if $K(T_{h'}-T_h)<1$. From (4.56), we have an important result: if $\zeta$ is suitably chosen and $f$ is an approximation of $u(x,T_{h'})$, then the function $Y^\zeta_{T_h,T_{h'}}(f)(x,t)$ is an approximation of the sought solution $u$ on the whole interval $[T_h,T_{h'}]$. Let $s$ be a positive integer such that $s>KT$. Define a sequence of points $\{T_l\}$, $l=0,1,\dots,2s$, by
$$T_0 = 0 < T_1 = \bar hT < T_2 = 2\bar hT < \cdots < T_{2s} = 2s\,\bar hT = T, \qquad (4.36)$$
where $\bar h=\frac{1}{2s}$. On each of the intervals $[T_i,T_{i+1}]$, $i=\overline{0,2s-1}$, we construct a different regularized solution, and we combine them into a final regularized solution. More details are as follows:
● In the first step, to construct an approximate solution on $[T_{2s-1},T]$, we use the input data $\overline G_{\delta,N(\delta)}$ and the regularization parameter $\zeta_{2s}$ to construct the function $Y^{\zeta_{2s}}_{T_{2s-2},T_{2s}}(\overline G_{\delta,N(\delta)})(x,t)$. Then we define the regularized solution $U^\delta(x,t)=Y^{\zeta_{2s}}_{T_{2s-2},T_{2s}}(\overline G_{\delta,N(\delta)})(x,t)$ for all $t\in[T_{2s-1},T]$.
● In the second step, to construct an approximate solution on $[T_{2s-2},T_{2s-1}]$, we use the input data $U^\delta(x,T_{2s-1})$ (computed in the first step) and the regularization parameter $\zeta_{2s-1}$ to construct the function $Y^{\zeta_{2s-1}}_{T_{2s-3},T_{2s-1}}(U^\delta(\cdot,T_{2s-1}))(x,t)$. Then we define the regularized solution
$$U^\delta(x,t) = Y^{\zeta_{2s-1}}_{T_{2s-3},T_{2s-1}}\big(U^\delta(\cdot,T_{2s-1})\big)(x,t)$$
for all $t\in[T_{2s-2},T_{2s-1}]$.
● We continue similarly for the remaining steps. Finally, we obtain the regularized solution in (4.61) and (4.62).
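The subdivision (4.36) can be sketched as follows; it guarantees $K(T_{l+1}-T_l)=\frac{KT}{2s}<\frac12<1$ on every subinterval, which is the contraction condition Lemma 4.2 needs:

```python
def time_grid(T, K):
    """Uniform grid (4.36): choose an integer s with s > K*T,
    then T_l = l * T/(2*s) for l = 0, ..., 2s."""
    s = int(K * T) + 1           # an integer strictly exceeding K*T
    step = T / (2 * s)
    return [l * step for l in range(2 * s + 1)], s

grid, s = time_grid(T=3.0, K=1.2)
step_ok = 1.2 * (grid[1] - grid[0]) < 1.0   # contraction condition on each piece
```

The backward-in-time marching then proceeds interval by interval, each step feeding its output value at the left endpoint to the next step as input data.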
Now, we consider the following lemma.
Lemma 4.2. Let $0\le T_h<T_{h'}\le T$. For $f\in C([T_h,T_{h'}];L^2(\Omega))$ and $\zeta>0$, consider the following nonlinear integral equation
$$Y^\zeta_{T_h,T_{h'}}(f)(x,t) = \sum_{\lambda_j\le\zeta}\Big[e^{(T_{h'}-t)\lambda_j}\langle f,\phi_j\rangle - \int_t^{T_{h'}} e^{(\tau-t)\lambda_j}\big\langle F\big(Y^\zeta_{T_h,T_{h'}}(f)(\tau)\big),\phi_j\big\rangle\,d\tau\Big]\phi_j(x) + \sum_{\lambda_j>\zeta}\Big[\int_{T_h}^t e^{(\tau-t)\lambda_j}\big\langle F\big(Y^\zeta_{T_h,T_{h'}}(f)(\tau)\big),\phi_j\big\rangle\,d\tau\Big]\phi_j(x). \qquad (4.37)$$
Assume that $K(T_{h'}-T_h)<1$. Then Problem (4.37) has a unique solution $Y^\zeta_{T_h,T_{h'}}(f)\in C([T_h,T_{h'}];L^2(\Omega))$. Moreover, we have the estimate
$$\mathbb E\big\|Y^\zeta_{T_h,T_{h'}}(f)(\cdot,t)-u(\cdot,t)\big\|^2_{L^2(\Omega)} \le \frac{2\big(1+\frac{1}{q_0}\big)\,e^{2(T_h-t)\zeta}}{1-(1+q_0)K^2(T_{h'}-T_h)^2}\Big(e^{2(T_{h'}-T_h)\zeta}\,\mathbb E\big\|f-u(\cdot,T_{h'})\big\|^2_{L^2(\Omega)} + \big\|u(\cdot,T_h)\big\|^2_{L^2(\Omega)}\Big) \qquad (4.38)$$
for all $t\in[T_h,T_{h'}]$, where $q_0$ satisfies $0<q_0<\frac{1}{K^2(T_{h'}-T_h)^2}-1$.
Proof. Part A. We begin by showing that Eq (4.37) has a unique solution in $C([T_h,T_{h'}];L^2(\Omega))$. Our analysis here is similar to the one in [29]. Define on $C([T_h,T_{h'}];L^2(\Omega))$ the following Bielecki norm
$$\|v\|_1 = \sup_{T_h\le t\le T_{h'}} e^{(t-T_h)\zeta}\,\|v(t)\|, \qquad (4.39)$$
for all $v\in C([T_h,T_{h'}];L^2(\Omega))$. It is easy to check that $\|\cdot\|_1$ is a norm on $C([T_h,T_{h'}];L^2(\Omega))$. Now, let $f$ be in $L^2(\Omega)$. We want to show that the map given by
$$I(w(f))(x,t) = \sum_{\lambda_j\le\zeta}\Big[e^{(T_{h'}-t)\lambda_j}\langle f,\phi_j\rangle - \int_t^{T_{h'}} e^{(\tau-t)\lambda_j}\big\langle F\big(w(f)(\tau)\big),\phi_j\big\rangle\,d\tau\Big]\phi_j(x) + \sum_{\lambda_j>\zeta}\Big[\int_{T_h}^t e^{(\tau-t)\lambda_j}\big\langle F\big(w(f)(\tau)\big),\phi_j\big\rangle\,d\tau\Big]\phi_j(x), \qquad (4.40)$$
for $w(f)\in C([T_h,T_{h'}];L^2(\Omega))$, is a contraction on $C([T_h,T_{h'}];L^2(\Omega))$ under the condition $K(T_{h'}-T_h)<1$. Indeed, we shall prove that, for every $w_1,w_2\in C([T_h,T_{h'}];L^2(\Omega))$,
$$\big\|I(w_1(f))-I(w_2(f))\big\|_1 \le K(T_{h'}-T_h)\,\big\|w_1(f)-w_2(f)\big\|_1. \qquad (4.41)$$
First, by using the Hölder inequality and the global Lipschitz property of $F$, we have the following estimate for all $t\in[T_h,T_{h'}]$:
$$\begin{aligned} \sum_{\lambda_j\le\zeta}\Big(\int_t^{T_{h'}} e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]d\tau\Big)^2 &\le (T_{h'}-t)\sum_{\lambda_j\le\zeta}\int_t^{T_{h'}}\Big|e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]\Big|^2 d\tau\\ &\le (T_{h'}-t)\sum_{\lambda_j\le\zeta}\int_t^{T_{h'}} e^{2(\tau-t)\zeta}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]^2 d\tau\\ &\le K^2(T_{h'}-t)\int_t^{T_{h'}} e^{2(\tau-t)\zeta}\big\|w_1(f)(\tau)-w_2(f)(\tau)\big\|^2 d\tau\\ &\le e^{-2(t-T_h)\zeta}K^2(T_{h'}-t)^2\sup_{T_h\le\tau\le T_{h'}}e^{2(\tau-T_h)\zeta}\big\|w_1(f)(\tau)-w_2(f)(\tau)\big\|^2\\ &= e^{-2(t-T_h)\zeta}K^2(T_{h'}-t)^2\,\big\|w_1(f)-w_2(f)\big\|_1^2. \end{aligned}$$
Noting that if $\lambda_j>\zeta$ then $e^{(\tau-t)\lambda_j}\le e^{(\tau-t)\zeta}$ for $T_h\le\tau\le t$, it follows that
$$\begin{aligned} \sum_{\lambda_j>\zeta}\Big(\int_{T_h}^t e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]d\tau\Big)^2 &\le (t-T_h)\sum_{\lambda_j>\zeta}\int_{T_h}^t e^{2(\tau-t)\zeta}\big|F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big|^2 d\tau\\ &\le K^2(t-T_h)\int_{T_h}^t e^{2(\tau-t)\zeta}\big\|w_1(f)(\tau)-w_2(f)(\tau)\big\|^2 d\tau\\ &\le e^{-2(t-T_h)\zeta}K^2(t-T_h)^2\sup_{T_h\le\tau\le T_{h'}}e^{2(\tau-T_h)\zeta}\big\|w_1(f)(\tau)-w_2(f)(\tau)\big\|^2\\ &= e^{-2(t-T_h)\zeta}K^2(t-T_h)^2\,\big\|w_1(f)-w_2(f)\big\|_1^2. \end{aligned}$$
From the definition of $I$ in (4.40), we have
$$I(w_1(f))(x,t)-I(w_2(f))(x,t) = \sum_{\lambda_j\le\zeta}\Big(\int_t^{T_{h'}} e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]d\tau\Big)\phi_j(x) + \sum_{\lambda_j>\zeta}\Big(\int_{T_h}^t e^{(\tau-t)\lambda_j}\big[F_j(w_1(f))(\tau)-F_j(w_2(f))(\tau)\big]d\tau\Big)\phi_j(x).$$
Combining the two estimates above with the decomposition of $I(w_1(f))-I(w_2(f))$, and using the inequality $(a+b)^2\le(1+\theta_0)a^2+\big(1+\frac{1}{\theta_0}\big)b^2$ for any real numbers $a,b$ and $\theta_0>0$, we get the following estimate for all $t\in(T_h,T_{h'})$:
$$\big\|I(w_1(f))(\cdot,t)-I(w_2(f))(\cdot,t)\big\|^2 \le e^{-2(t-T_h)\zeta}K^2(t-T_h)^2(1+\theta_0)\big\|w_1(f)-w_2(f)\big\|_1^2 + e^{-2(t-T_h)\zeta}\Big(1+\frac{1}{\theta_0}\Big)K^2(T_{h'}-t)^2\big\|w_1(f)-w_2(f)\big\|_1^2.$$
By choosing $\theta_0=\frac{T_{h'}-t}{t-T_h}$, we obtain, for all $t\in(T_h,T_{h'})$,
$$e^{2(t-T_h)\zeta}\,\big\|I(w_1(f))(\cdot,t)-I(w_2(f))(\cdot,t)\big\|^2 \le K^2(T_{h'}-T_h)^2\,\big\|w_1(f)-w_2(f)\big\|_1^2. \qquad (4.42)$$
On the other hand, letting $t\to T_{h'}$ in (4.42), we get
$$e^{2(T_{h'}-T_h)\zeta}\,\big\|I(w_1(f))(\cdot,T_{h'})-I(w_2(f))(\cdot,T_{h'})\big\|^2 \le K^2(T_{h'}-T_h)^2\,\big\|w_1(f)-w_2(f)\big\|_1^2. \qquad (4.43)$$
By letting $t\to T_h$ in (4.42), we obtain
$$\big\|I(w_1(f))(\cdot,T_h)-I(w_2(f))(\cdot,T_h)\big\|^2 \le K^2(T_{h'}-T_h)^2\,\big\|w_1(f)-w_2(f)\big\|_1^2. \qquad (4.44)$$
Combining (4.42), (4.43) and (4.44), we deduce that, for all $T_h\le t\le T_{h'}$,
$$e^{(t-T_h)\zeta}\,\big\|I(w_1(f))(t)-I(w_2(f))(t)\big\| \le K(T_{h'}-T_h)\,\big\|w_1(f)-w_2(f)\big\|_1, \qquad (4.45)$$
which leads to (4.41). Since K(Th′−Th)<1, it follows that I is a well-defined contraction on C([Th,Th′];L2(Ω)). By the Banach fixed point theorem, it therefore has a unique fixed point, i.e., the equation I(w)=w has a unique solution which we denote by YζTh,Th′(f)∈C([Th,Th′];L2(Ω)).
Part B. The error estimate for $\mathbb E\big\|Y^\zeta_{T_h,T_{h'}}(f)(\cdot,t)-u(\cdot,t)\big\|^2_{L^2(\Omega)}$.
By a similar technique as in the proof of Theorem 4.1, we obtain
$$u_j(T_h) = e^{(T_{h'}-T_h)\lambda_j}\,u_j(T_{h'}) - \int_{T_h}^{T_{h'}} e^{(\tau-T_h)\lambda_j}\,F_j(u)(\tau)\,d\tau. \qquad (4.46)$$
This leads to
$$e^{-(t-T_h)\lambda_j}\,u_j(T_h) = e^{(T_{h'}-t)\lambda_j}\Big[u_j(T_{h'}) - \int_{T_h}^{T_{h'}} e^{(\tau-T_{h'})\lambda_j}\,F_j(u)(\tau)\,d\tau\Big]. \qquad (4.47)$$
After some simple transformations, the last equality implies that
$$\sum_{\lambda_j>\zeta} e^{(T_{h'}-t)\lambda_j}\Big[u_j(T_{h'}) - \int_t^{T_{h'}} e^{(\tau-T_{h'})\lambda_j}F_j(u)(\tau)\,d\tau\Big]\phi_j(x) = \sum_{\lambda_j>\zeta}\Big[\int_{T_h}^t e^{(\tau-t)\lambda_j}F_j(u)(\tau)\,d\tau\Big]\phi_j(x) + \sum_{\lambda_j>\zeta} e^{-(t-T_h)\lambda_j}\,u_j(T_h)\,\phi_j(x). \qquad (4.48)$$
Using the last equality and (4.47), we get
$$u(x,t) = \sum_{\lambda_j\le\zeta}\Big[e^{(T_{h'}-t)\lambda_j}u_j(T_{h'}) - \int_t^{T_{h'}} e^{(\tau-t)\lambda_j}F_j(u)(\tau)\,d\tau\Big]\phi_j(x) + \sum_{\lambda_j>\zeta}\Big[\int_{T_h}^t e^{(\tau-t)\lambda_j}F_j(u)(\tau)\,d\tau\Big]\phi_j(x) + \sum_{\lambda_j>\zeta} e^{-(t-T_h)\lambda_j}\,u_j(T_h)\,\phi_j(x). \qquad (4.49)$$
We have
$$\begin{aligned} Y^\zeta_{T_h,T_{h'}}(f)(x,t)-u(x,t) &= \sum_{\lambda_j\le\zeta}\Big[e^{(T_{h'}-t)\lambda_j}\big(f_j-u_j(T_{h'})\big)\Big]\phi_j(x) - \sum_{\lambda_j\le\zeta}\Big[\int_t^{T_{h'}} e^{(\tau-t)\lambda_j}\big(F_j(Y^\zeta_{T_h,T_{h'}}(f))(\tau)-F_j(u)(\tau)\big)d\tau\Big]\phi_j(x)\\ &\quad + \sum_{\lambda_j>\zeta}\Big[\int_{T_h}^t e^{(\tau-t)\lambda_j}\big(F_j(Y^\zeta_{T_h,T_{h'}}(f))(\tau)-F_j(u)(\tau)\big)d\tau\Big]\phi_j(x) - \sum_{\lambda_j>\zeta} e^{-(t-T_h)\lambda_j}\,u_j(T_h)\,\phi_j(x). \end{aligned} \qquad (4.50)$$
This implies that
$$\big|\big\langle Y^\zeta_{T_h,T_{h'}}(f)(\cdot,t)-u(\cdot,t),\phi_j\big\rangle\big| \le e^{(T_{h'}-t)\zeta}\big|f_j-u_j(T_{h'})\big| + \int_{T_h}^{T_{h'}} e^{(\tau-t)\zeta}\big|F_j(Y^\zeta_{T_h,T_{h'}}(f))(\tau)-F_j(u)(\tau)\big|\,d\tau + e^{-(t-T_h)\zeta}\big|u_j(T_h)\big|. \qquad (4.51)$$
Hence, using Parseval's identity and the inequality
$$(c_1+c_2+c_3)^2 \le 2\Big(1+\frac{1}{q_0}\Big)c_1^2 + (1+q_0)c_2^2 + 2\Big(1+\frac{1}{q_0}\Big)c_3^2$$
for any real numbers $c_1,c_2,c_3$ and $q_0>0$, together with Hölder's inequality for the integral term, we have
$$\begin{aligned} \mathbb E\big\|Y^\zeta_{T_h,T_{h'}}(f)(\cdot,t)-u(\cdot,t)\big\|^2_{L^2(\Omega)} &= \mathbb E\Big(\sum_{j=1}^\infty\big|\big\langle Y^\zeta_{T_h,T_{h'}}(f)(\cdot,t)-u(\cdot,t),\phi_j\big\rangle\big|^2\Big)\\ &\le 2\Big(1+\frac{1}{q_0}\Big)e^{2(T_{h'}-t)\zeta}\,\mathbb E\big\|f-u(\cdot,T_{h'})\big\|^2_{L^2(\Omega)} + 2\Big(1+\frac{1}{q_0}\Big)e^{-2(t-T_h)\zeta}\big\|u(\cdot,T_h)\big\|^2_{L^2(\Omega)}\\ &\quad + (1+q_0)(T_{h'}-T_h)\int_{T_h}^{T_{h'}} e^{2(\tau-t)\zeta}\,\mathbb E\Big(\big\|F\big(Y^\zeta_{T_h,T_{h'}}(f)\big)(\tau)-F(u)(\tau)\big\|^2_{L^2(\Omega)}\Big)d\tau. \end{aligned}$$
Multiplying both sides of the last inequality by e2(t−Th)ζ, and using the global Lipschitz property of F, we obtain
e2(t−Th)ζE‖YζTh,Th′(f)(.,t)−u(.,t)‖2L2(Ω)≤2(1+1q0)e2(Th′−Th)ζE‖f−u(.,Th′)‖2L2(Ω)+2(1+1q0)‖u(.,Th)‖2L2(Ω)+(1+q0)K2(Th′−Th)∫Th′The2(τ−Th)ζE‖YζTh,Th′(f)(.,s)−u(.,s)‖2L2(Ω)ds. | (4.52) |
Since YζTh,Th′(f),u∈C([Th,Th′];L2(Ω)) we obtain that the function
e2(t−Th)ζE‖YζTh,Th′(f)(.,t)−u(.,t)‖2L2(Ω) |
is continuous on [Th,Th′]. Therefore, the following is a finite positive constant
˜A=supTh≤t≤Th′e2(t−Th)ζE‖YζTh,Th′(f)(.,t)−u(.,t)‖2L2(Ω). |
This implies that
˜A≤2(1+1q0)e2(Th′−Th)ζE‖f−u(.,Th′)‖2L2(Ω)+2(1+1q0)‖u(.,Th)‖2L2(Ω)+(1+q0)K2(Th′−Th)2˜A | (4.53) |
Hence
(1−(1+q0)K2(Th′−Th)2)˜A≤2(1+1q0)e2(Th′−Th)ζE‖f−u(.,Th′)‖2L2(Ω)+2(1+1q0)‖u(.,Th)‖2L2(Ω). | (4.54) |
Since by assumption 0<q0<1K2(Th′−Th)2−1, the term in parentheses on the left-hand side is positive. This implies that for all t∈[Th,Th′]
e2(t−Th)ζE‖YζTh,Th′(f)(.,t)−u(.,t)‖2L2(Ω)≤2(1+1q0)e2(Th′−Th)ζE‖f−u(.,Th′)‖2L2(Ω)+2(1+1q0)‖u(.,Th)‖2L2(Ω)1−(1+q0)K2(Th′−Th)2. | (4.55) |
Hence for all t∈[Th,Th′] we conclude that
E‖YζTh,Th′(f)(.,t)−u(.,t)‖2L2(Ω)≤2(1+1q0)1−(1+q0)K2(Th′−Th)2(e2(Th′−Th)ζE‖f−u(.,Th′)‖2L2(Ω)+‖u(.,Th)‖2L2(Ω))e2(Th−t)ζ. | (4.56) |
Our main result in this subsection is as follows.
Theorem 4.3. Let g be as in Theorem 4.1. Assume that u is the unique solution of Problem (1.1).
(a) Assume that KT<1, where K is the Lipschitz constant of F. A new regularized solution is given as follows
ˆUδ(x,t)=∑λj≤ζ(δ)[e(T−t)λj¯Gδ,N(δ)−∫Tte(τ−t)λjFj(ˆUδ)(τ)dτ]ϕj(x)+∑λj>ζ(δ)[∫t0e(τ−t)λjFj(ˆUδ)(τ)dτ]ϕj(x). | (4.57) |
Let us choose ζ(δ) such that
limδ→0ζ(δ)=+∞,limδ→0ekTζ(δ)λγN(δ)=0,limδ→0ekTζ(δ)√N(δ)δ=0. | (4.58) |
If u∈C([0,T];L2(Ω)) then as δ→0
E‖ˆUδ(.,t)−u(.,t)‖2L2(Ω) is of order e−2tζ(δ). | (4.59) |
(b) Suppose that KT>1 and let us assume that u∈C([0,T];L2(Ω)). Let
ζ1(δ):=sT22s−1log(1ξ(δ)), ζk(δ):=sT22s−klog(1ξ(δ)), k=¯2,2s. | (4.60) |
We construct a regularized solution ˆUδ as follows
ˆUδ(x,t)=Yζ2s−i(δ)T2s−i−2,T2s−i(ˆUδ(x,T2s−i))(x,t), if T2s−i−1≤t≤T2s−i, i=¯0,2s−2 | (4.61) |
and
ˆUδ(x,t)=Yζ1(δ)T0,T1(ˆUδ(x,T1))(x,t), if 0≤t≤T1, | (4.62) |
where Yζ(δ)Th1,Th2(θ)(x,t) is defined in (4.37). Then we have
● If t∈[Tk,Tk+1] and k=¯1,2s−1 then
E‖ˆUδ(.,t)−u(.,t)‖2L2(Ω) is of order (ξ(δ))122s−k. | (4.63) |
● If t∈[0,T1] then
E‖ˆUδ(.,t)−u(.,t)‖2L2(Ω) is of order (ξ(δ))st22s−1. | (4.64) |
Remark 4.4. The regularization result of [29] covers only the case 0<KT<1. Our Theorem 4.3 extends this result to any K>0.
Proof of part (a) of Theorem 4.3. Setting Th=0, Th′=T and f=¯Gδ,N(δ), the function YζTh,Th′(f) given by (4.37) in Lemma 4.2 equals ˆUδ given by (4.57). Since KT<1, applying Lemma 4.2 (in particular (4.38)), we obtain
E‖ˆUδ(.,t)−u(.,t)‖2L2(Ω)≤2(1+1q0)1−(1+q0)K2T2(e2Tζ(δ)E‖¯Gδ,N(δ)−g‖2L2(Ω)+‖g‖2L2(Ω))e−2tζ(δ)≤2(1+1q0)1−(1+q0)K2T2e2(T−t)ζ(δ)δ2N(δ)+42(1+1q0)1−(1+q0)K2T2e2(T−t)ζ(δ)1λ2γN(δ)‖g‖H2γ+2(1+1q0)1−(1+q0)K2T2‖g‖2L2(Ω)e−2tζ(δ). |
This completes the proof of part (a).
Proof of part (b) of Theorem 4.3
By Theorem 3.1, we have E‖¯Gδ,N(δ)−g‖2L2(Ω)≤˜Cξ2(δ), where ˜C=1+‖g‖2H2γ. We estimate the error in the time variable on each interval [Tl,Tl+1] for l=¯0,2s−1.
Case 1. Let t∈[T2s−1,T]. Since ζ2s(δ)=sTlog(1ξ(δ)), by Lemma 4.2 we get
E‖ˆUδ(.,t)−u(.,t)‖2L2(Ω)=E‖Yζ2s(δ)T2s−2,T2s(¯Gδ,N(δ))(.,t)−u(.,t)‖2L2(Ω)≤2s2(1+1q0)s2−T2K2(1+q0)[e2(T2s−T2s−2)ζ2s(δ)E‖¯Gδ,N(δ)−g‖2L2(Ω)]e2(T2s−2−t)ζ2s(δ)+2s2(1+1q0)s2−T2K2(1+q0)[‖u(.,T2s−2)‖2L2(Ω)]e2(T2s−2−t)ζ2s(δ)≤2s2(1+1q0)s2−T2K2(1+q0)(˜C+‖u‖2L∞(0,T;L2(Ω)))ξ(δ)=χ(s,K,q0)(˜C+‖u‖2L∞(0,T;L2(Ω)))ξ(δ), | (4.65) |
where we note that e2(T2s−2−t)ζ2s(δ)≤e2(T2s−2−T2s−1)ζ2s(δ)=ξ(δ) and
χ(s,K,q0)=max{1,2s2(1+1q0)s2−T2K2(1+q0)}, so that χ(s,K,q0)≥1.
Case 2. Let t∈[T2s−2,T2s−1]. Since ζ2s−1(δ)=s2Tlog(1ξ(δ)), by Lemma 4.2 we get
E‖ˆUδ(.,t)−u(.,t)‖2L2(Ω)=E‖Yζ2s−1(δ)T2s−3,T2s−1(ˆUδ(.,T2s−1))(.,t)−u(.,t)‖2L2(Ω)≤χ(s,K,q0)exp(2(T2s−3−t)ζ2s−1(δ))exp(2(T2s−1−T2s−3)ζ2s−1(δ))E‖ˆUδ(.,T2s−1)−u(.,T2s−1)‖2L2(Ω)+χ(s,K,q0)exp(2(T2s−3−t)ζ2s−1(δ))‖u(.,T2s−3)‖2L2(Ω)≤χ(s,K,q0)( χ(s,K,q0)(˜C+‖u‖L∞(0,T;L2(Ω)))+‖u‖L∞(0,T;L2(Ω)))(ξ(δ))12≤2χ2(s,K,q0)(˜C+‖u‖L∞(0,T;L2(Ω)))(ξ(δ))12,
where we used the following result from (4.65):
E‖ˆUδ(.,T2s−1)−u(.,T2s−1)‖2L2(Ω)≤χ(s,K,q0)(˜C+‖u‖2L∞(0,T;L2(Ω)))ξ(δ).
Therefore, repeating the argument as in the above cases and using the induction method, we can prove the following estimate
E‖ˆUδ(.,t)−u(.,t)‖2L2(Ω)≤(2s−k)|χ2(s,K,q0)|2s−k(˜C+‖u‖2L∞(0,T;L2(Ω)))(ξ(δ))122s−k−1, |
for all t∈[Tk,Tk+1] and k=¯1,2s−1.
If t∈[0,T1], then by a similar technique as above, we obtain the error estimate
E‖ˆUδ(.,t)−u(.,t)‖2L2(Ω)≤s|χ2(s,K,q0)|2s(˜C+‖u‖2L∞(0,T;L2(Ω)))(ξ(δ))st22s−2. |
Section 4 addressed the problem in which F is a globally Lipschitz function. In this section we extend the analysis to a locally Lipschitz function F. The locally Lipschitz case is considerably more delicate, and we must devise a different regularization method to treat such a source.
Assume that a is observed with noise; the observation aobsδ:Ω×[0,T]→R is given by
aobsδ(x,t)=a(x,t)+δψ(t) | (5.1) |
where δ>0 and ψ∈L∞(0,T) such that
‖ψ‖L∞(0,T)=sup0≤t≤T|ψ(t)|≤¯M, | (5.2) |
where ¯M>0. When a is not disturbed, we can use the method of the previous sections (this case is simpler than that of a noisy a). When a is disturbed by random data, the earlier method is difficult to apply and we need a new approach, which we now outline.
Assume that for each R>0, there exists KR>0 such that
|F(x,t;u)−F(x,t;v)|≤KR|u−v| if max{|u|,|v|}≤R, | (5.3) |
where (x,t)∈Ω×[0,T] and
KR:=sup{|F(x,t;u)−F(x,t;v)|/|u−v|:max{|u|,|v|}≤R,u≠v,(x,t)∈Ω×[0,T]}<+∞.
We note that KR is increasing and limR→+∞KR=+∞. Now, we outline our idea to construct a regularization for problem (1.1). For all R>0, we approximate F by FR defined by
FR(x,t;w):={F(x,t;−R),w∈(−∞,−R)F(x,t;w),w∈[−R,R]F(x,t;R),w∈(R,+∞). | (5.4) |
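The cut-off FR in (5.4) is simply F evaluated at the argument clamped to [−R,R]. A minimal Python sketch, using the scalar nonlinearity F(w)=w3 as an illustrative stand-in, not the paper's source term:

```python
def F(w):
    # illustrative locally Lipschitz nonlinearity (not the paper's F)
    return w ** 3

def F_R(w, R):
    # (5.4): evaluate F at w clamped to [-R, R]
    return F(max(-R, min(R, w)))

R = 2.0
assert F_R(1.5, R) == F(1.5)    # F_R agrees with F on [-R, R]
assert F_R(10.0, R) == F(R)     # saturates above R
assert F_R(-10.0, R) == F(-R)   # saturates below -R
```

Clamping is what makes FR globally Lipschitz with the local constant KR (Lemma 5.1), while leaving F unchanged wherever |w|≤R.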
For each δ>0, we consider a parameter R(δ)→+∞ as δ→0+. Let us denote the operator P=MΔ, where M is a positive number such that M>aobsδ(x,t) for all (x,t)∈Ω×(0,T). Define the following operator
PδβN(δ)=P+QδβN(δ), |
where
QδβN(δ)v(x)=1T∞∑j=1ln(1+βN(δ)eMTλj)⟨v(x),ϕj(x)⟩L2(Ω)ϕj(x), | (5.5) |
for any function v∈L2(Ω). Here N(δ) is defined in Lemma 4.1.
We introduce the main idea to solve problem (1.1) with a generalized case of source term defined by (5.4), and we consider the problem:
{∂uδN(δ)∂t−∇(aobsδ(x,t)∇uδN(δ))−QδβN(δ)(uδN(δ))(x,t)=FRδ(x,t,uδN(δ)(x,t)),(x,t)∈Ω×(0,T),uδN(δ)|∂Ω=0,t∈(0,T),uδN(x,T)=¯Gδ,N(δ)(x),(x,t)∈Ω×(0,T), | (5.6) |
Here ¯Gδ,N(δ)(x) is defined in Eq (4.2). Now, we introduce some Lemmas which will be useful for our main results. First, we recall the abstract Gevrey class of functions of index σ>0, see, e.g., [24], defined by
Wσ={v∈L2(Ω):∞∑n=1e2σλn|⟨v,ϕn(x)⟩L2(Ω)|2<∞}, |
which is a Hilbert space equipped with the inner product
⟨v1,v2⟩Wσ:=⟨eσ√−Δv1,eσ√−Δv2⟩L2(Ω),for allv1,v2∈Wσ; |
and the corresponding norm is ‖v‖Wσ=√∑∞n=1e2σλn|⟨v,ϕn⟩L2(Ω)|2<∞.
Lemma 5.1. For FR∈L∞(Ω×[0,T]×R), we have
|FR(x,t;u)−FR(x,t;v)|≤KR|u−v|,∀(x,t)∈Ω×[0,T],u,v∈R. |
Proof. See the proof of Lemma 2.4 in [35].
Lemma 5.2. 1. Let M,T>0. For any v∈WMT(Ω), we have
‖QδβN(δ)(v)‖L2(Ω)≤βN(δ)T‖v‖WMT(Ω). | (5.7) |
2. Let βN(δ)<1−e−MTλ1. For any v∈L2(Ω), we have
‖PδβN(δ)v‖L2(Ω)≤1Tln(1βN(δ))‖v‖L2(Ω). | (5.8) |
Proof. Using the inequality ln(1+a)≤a,∀a>0, we have
‖QδβN(δ)(v)‖2L2(Ω)=1T2∞∑j=1ln2(1+βN(δ)eMTλj)|⟨v,ϕj⟩L2(Ω)|2≤β2N(δ)T2∞∑j=1e2MTλj|⟨v,ϕj⟩L2(Ω)|2≤β2N(δ)T2‖v‖2WMT. | (5.9) |
Since βN(δ)<1−e−MTλ1, we know that βN(δ)+e−MTλj<1. Using Parseval's equality, we easily get
‖PδβN(δ)(v)‖2L2(Ω)=1T2∞∑j=1ln2(1βN(δ)+e−MTλj)|⟨v,ϕj⟩L2(Ω)|2≤1T2ln2(1βN(δ))∞∑j=1|⟨v,ϕj⟩L2(Ω)|2≤1T2ln2(1βN(δ))‖v‖2L2(Ω). |
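The two elementary facts behind Lemma 5.2, namely ln(1+a)≤a and βN(δ)+e−MTλj>βN(δ), can be checked numerically. A small Python sketch, where the eigenvalues λj=j2 (Dirichlet Laplacian on (0,π)) are an illustrative choice, not fixed by the text:

```python
import math

M, T, beta = 1.0, 0.1, 1e-4
lams = [j * j for j in range(1, 30)]          # lambda_j = j^2, illustrative spectrum
assert beta < 1 - math.exp(-M * T * lams[0])  # hypothesis of Lemma 5.2, part 2
for lam in lams:
    x = beta * math.exp(min(M * T * lam, 700.0))  # cap the exponent to avoid overflow
    assert math.log1p(x) <= x                     # ln(1 + a) <= a, behind (5.7)
    # beta + e^{-MT lam} > beta, hence the logarithmic bound behind (5.8)
    assert math.log(1 / (beta + math.exp(-M * T * lam))) <= math.log(1 / beta)
```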
Theorem 5.1. Problem (5.6) has a unique solution uδN(δ)∈C([0,T];L2(Ω)). Assume that the problem (1.1) has a unique solution u satisfying u(⋅,t)∈WMT. Choose βN(δ) such that
limδ→0δ√N(δ)β−1N(δ)=limδ→0β−1N(δ)λ−γN(δ)=limδ→0βN(δ)=0. | (5.10) |
Choose Rδ such that
limδ→0β2tTN(δ)e2KRδT=0,t>0. | (5.11) |
Then we have the following estimate
E‖uδN(δ)(x,t)−u(x,t)‖2L2(Ω)≤β2tTN(δ)e(2K(Rδ)+1)T˜C(δ). | (5.12) |
Here ˜C(δ) is
˜C(δ)=δ2N(δ)β−2Nδ+1λ2γN(δ)β2Nδ‖g‖H2γ(Ω)+‖u‖2C([0,T];WMT(Ω))+δ2T3b0β2Nδ‖u‖2L∞(0,T;H10(Ω)). |
Here we also assume that Ω is a one-dimensional domain.
Remark 5.1. 1. Under assumption (5.11), the right-hand side of Eq (5.12) converges to zero when t>0.
2. Choose βN(δ)=N(δ)−c for any 0<c<min(12,2γd), and N(δ) is chosen as
N(δ)=(1δ)m(12−c),0<m<1. | (5.13) |
Choose Rδ such that
K(Rδ)≤1kTln(ln(N(δ)))=1kTln(m(12−c)ln(1δ)). |
Then E‖uδN(δ)(x,t)−u(x,t)‖2L2(Ω) is of the order δmc(12−c)tTln(1δ).
Proof of Theorem 5.1. The proof is divided into two Steps.
Step 1. The existence and uniqueness of the solution to the regularized problem (5.6).
Let b(x,t) be defined by b(x,t)=M−a(x,t). It is clear that 0<b(x,t)<M. Then from (5.6), we obtain
∂uδN(δ)∂t+∇(b(x,t)∇uδN(δ))=F(x,t,uδN(δ)(x,t))−1T∞∑j=1ln(1βN(δ)+e−MTλj)⟨uδN(δ)(⋅,t),ϕj⟩ϕj(x), | (5.14) |
for (x,t)∈Ω×(0,T).
Let vδN(δ) be the function defined by vδN(δ)(x,t)=uδN(δ)(x,T−t). Then we have
∂vδN(δ)∂t(x,t)=−∂uδN(δ)∂t(x,T−t),∇(b(x,t)∇vδN(δ))(x,t)=∇(b(x,t)∇uδN(δ))(x,T−t) |
and
1T∞∑j=1ln(βN(δ)+e−MTλj)⟨vδN(δ)(x,t),ϕj(x)⟩ϕj(x)=1T∞∑j=1ln(βN(δ)+e−MTλj)⟨uδN(δ)(x,T−t),ϕj(x)⟩ϕj(x). |
This implies that vδN(δ) satisfies the problem
{∂vδN(δ)∂t−∇(b(x,t)∇vδN(δ))=G(x,t,vδN(x,t)),(x,t)∈Ω×(0,T),vδN(δ)|∂Ω=0,t∈(0,T),vδN(δ)(x,0)=¯Gδ,N(δ)(x),(x,t)∈Ω×(0,T), | (5.15) |
where G is defined by
G(x,t,v(x,t))=−F(x,t,v(x,t))+1T∞∑j=1ln(1βN(δ)+e−MTλj)⟨v(⋅,t),ϕj⟩L2(Ω)ϕj(x), | (5.16) |
for any v∈C([0,T];L2(Ω)).
Since
βN(δ)∈(0,1−e−MTλ1),0<ln(1βN(δ)+e−MTλn)<ln(1βN(δ)) |
and using Parseval's identity, we obtain for any v1,v2∈L2(Ω),
‖G(⋅,t,v1(⋅,t))−G(⋅,t,v2(⋅,t))‖L2(Ω)≤‖F(⋅,t,v1(⋅,t))−F(⋅,t,v2(⋅,t))‖L2(Ω)+‖1T∞∑j=1ln(1βN(δ)+e−MTλj)⟨v1(x,t)−v2(x,t),ϕj(x)⟩L2(Ω)ϕj(x)‖L2(Ω)≤K‖v1(⋅,t)−v2(⋅,t)‖L2(Ω)+1T√∞∑j=1ln2(1βN(δ)+e−MTλj)|⟨v1(⋅,t)−v2(⋅,t),ϕn⟩L2(Ω)|2≤[K+1Tln(1βN(δ))]‖v1(⋅,t)−v2(⋅,t)‖L2(Ω). | (5.17) |
Thus G is a Lipschitz function. Using the results of Theorem 12.2 in [32], we complete the proof of Step 1.
Step 2. Error estimate
We consider the error estimate between the regularized solution of problem (5.6) and the exact solution of problem (1.1).
For (x,t)∈Ω×(0,T), we begin by noting that the functions b(x,t)=M−a(x,t) and bobsδ(x,t)=M−aobsδ(x,t) satisfy
0<b(x,t)≤M,0<b0≤bobsδ(x,t)≤M |
and
(a(x,t)aobsδ(x,t))=(MM)−(b(x,t)bobsδ(x,t)),∀(x,t)∈Ω×(0,T). | (5.18) |
The functions uδN(δ)(x,t) and u(x,t) solve the following equations
∂u∂t+∇(bobsδ(x,t)∇u)=F(x,t;u(x,t))+∇((bobsδ(x,t)−b(x,t))∇u)+Pu | (5.19) |
and
∂uδN(δ)∂t+∇(bobsδ(x,t)∇uδN(δ))=FRδ(x,t,uδN(δ)(x,t))+PδβN(δ)uδN(δ). | (5.20) |
For ρδ>0, we put VδN(δ)(x,t)=eρδ(t−T)[uδN(δ)(x,t)−u(x,t)]. Then for (x,t)∈Ω×(0,T)
∂VδN(δ)∂t+∇(bobsδ(x,t)∇VδN(δ))−ρδVδN(δ)=PδβN(δ)VδN(δ)+eρδ(t−T)QδβN(δ)u−eρδ(t−T)∇((bobsδ(x,t)−b(x,t))∇u)+eρδ(t−T)[FRδ(x,t,uδN(δ)(x,t))−F(x,t;u(x,t))], | (5.21) |
and
VδN(δ)|∂Ω=0,VδN(δ)(x,T)=¯Gδ,N(δ)(x)−g(x). |
By taking the inner product on both sides of Eq (5.21) with VδN(δ) and noting the equality
∫Ω∇(bobsδ(x,t)∇VδN(δ))VδN(δ)dx=−∫Ωbobsδ(x,t)|∇VδN(δ)|2dx, |
we obtain
‖VδN(δ)(⋅,T)‖2L2(Ω)−‖VδN(δ)(⋅,t)‖2L2(Ω)−2∫Tt∫Ωbobsδ(x,s)|∇VδN(δ)|2dxds−2ρδ∫Tt‖VδN(δ)(⋅,s)‖2L2(Ω)ds=2∫Tt⟨PδβN(δ)VδN(δ),VδN(δ)⟩L2(Ω)ds⏟=:~A4+2∫Tt⟨eρδ(t−T)QδβN(δ)u,VδN(δ)⟩L2(Ω)ds⏟=:~A5+2∫Tt⟨−eρδ(t−T)∇((bobsδ(x,t)−b(x,t))∇u),VδN(δ)⟩L2(Ω)ds⏟=:~A6+2∫Tt⟨eρδ(t−T)[FRδ(x,t,uδN(δ)(x,t))−F(x,t;u(x,t))],VδN(δ)⟩L2(Ω)ds⏟=:~A7. | (5.22) |
First, thanks to inequality (5.8), the expectation of ~A4 is estimated as follows:
E|~A4|≤2Tln(1βNδ)∫TtE‖VδN(δ)(⋅,s)‖2L2(Ω)ds, | (5.23) |
Next, using the inequality (5.7) and the Hölder inequality, we have
E|~A5|≤∫Tte2ρβ(s−T)βNδT‖u‖2C([0,T];WMT)ds+∫TtE‖VδN(δ)(⋅,s)‖2L2(Ω)ds≤βNδT‖u‖2C([0,T];WMT)+∫TtE‖VδN(δ)(⋅,s)‖2L2(Ω)ds. | (5.24) |
For estimating the expectation of |~A6|, we use the Green's formula to get the equality
⟨∇((bobsδ(x,t)−b(x,t))∇u),VδN(δ)⟩L2(Ω)=⟨((bobsδ(x,t)−b(x,t))∇u,∇VδN(δ)⟩L2(Ω) |
then using Hölder's inequality and noting the fact that
∫Ω|∇u(.,s)|2dx≤‖u‖2L∞(0,T;H10(Ω))=sup0≤s≤T∫Ω|∇u(.,s)|2dx, |
we obtain
E|~A6|=2E|∫Tt⟨eρδ(s−T)((bobsδ(x,t)−b(x,t))∇u,∇VδN(δ)⟩L2(Ω)ds|≤E∫Tte2ρδ(s−T)b0∫Ω((bobsδ(x,t)−b(x,t))2|∇u(x,t)|2dxds+E∫Tt∫Ωb0|∇VδN(δ)|2dxds=δ2∫Tt|ψ(s)|2ds∫Ω|∇u(.,s)|2dxb0+E∫Tt∫Ωb0|∇VδN(δ)|2dxds≤¯M2δ2T22b0‖u‖2L∞(0,T;H10(Ω))+E∫Tt∫Ωb0|∇VδN(δ)|2dxds; | (5.25) |
here, in the last inequality, we have used the bound |ψ(s)|≤¯M from (5.2). Finally, since limδ→0+Rδ=+∞, for a sufficiently small δ>0, there is an Rδ>0 such that Rδ≥‖u‖L∞([0,T];L2(Ω)). For this value of Rδ we have
FRδ(x,t;u(x,t))=F(x,t;u(x,t)). |
Using the global Lipschitz property of FR (see Lemma 5.1), one similarly obtains the estimate
E|~A7|=2E|∫Tt⟨eρδ(t−T)[FRδ(x,t,uδN(δ)(x,t))−F(x,t;u(x,t))],VδN(δ)⟩L2(Ω)ds|≤2E∫Tt‖eρδ(t−T)[FRδ(x,s,uδN(δ)(x,s))−F(x,s;u(x,s))]‖L2(Ω)‖VδN(δ)(⋅,s)‖L2(Ω)ds≤2K(Rδ)∫TtE‖VδN(δ)(⋅,s)‖2L2(Ω)ds. | (5.26) |
Combining (5.22), (5.23), (5.24), (5.25) and (5.26), we obtain
E‖VδN(δ)(⋅,T)‖2L2(Ω)−E‖VδN(δ)(⋅,t)‖2L2(Ω)+∫Tt(βNδT‖u‖2C([0,T];WMT)+δ2T22b0‖u‖2L∞(0,T;H10(Ω)))ds≥2E∫Tt∫Ω(bobsδ(x,s)−b0)|∇VδN(δ)|2dxds+E∫Tt(2ρδ−2Tln(1βNδ)−2K(Rδ)−1)‖VδN(δ)(⋅,s)‖2L2(Ω)ds≥E∫Tt(2ρδ−2Tln(1βNδ)−2K(Rδ)−1)‖VδN(δ)(⋅,s)‖2L2(Ω)ds. | (5.27) |
Thus,
E‖VδN(δ)(⋅,t)‖2L2(Ω)≤E‖¯Gδ,N(δ)−g‖2L2(Ω)+βNδ‖u‖2C([0,T];WMT(Ω))+δ2T3b0‖u‖2L∞(0,T;H10(Ω))+E∫Tt(−2ρδ+2Tln(1βNδ)+2K(Rδ)+1)‖VδN(δ)(⋅,s)‖2L2(Ω)ds. | (5.28) |
Since VδN(δ)(x,t)=eρδ(t−T)(uδN(δ)(x,t)−u(x,t)), applying Lemma 4.1 we observe that
e2ρδ(t−T)E‖uδN(δ)(⋅,t)−u(⋅,t)‖2L2(Ω)≤δ2N(δ)+1λ2γN(δ)‖g‖H2γ(Ω)+βNδ‖u‖2C([0,T];WMT(Ω))+¯M2δ2T3b0‖u‖2L∞(0,T;H10(Ω))+(2K(Rδ)+1)∫Tte2ρδ(s−T)E‖uδN(δ)(⋅,s)−u(⋅,s)‖2L2(Ω)ds. | (5.29) |
Gronwall's lemma allows us to obtain
e2ρδ(t−T)E‖uδN(δ)(x,t)−u(x,t)‖2L2(Ω)≤[δ2N(δ)+1λ2γN(δ)‖g‖H2γ(Ω)+βNδ‖u‖2C([0,T];WMT(Ω))+¯M2δ2T3b0‖u‖2L∞(0,T;H10(Ω))]e(2K(Rδ)+1)(T−t). | (5.30) |
By choosing ρδ=1Tln(1βNδ)>0 we have
E‖uδN(δ)(⋅,t)−u(⋅,t)‖2L2(Ω)≤β2tTN(δ)e(2K(Rδ)+1)T˜C(δ). | (5.31) |
The proof of Theorem 5.1 is complete.
In most previous works on backward nonlinear problems, the source is assumed to be globally or locally Lipschitz. To the best of our knowledge, this section gives the first result in which the source term F is not necessarily locally Lipschitz. We solve problem (1.1) with a special generalized case of the source term defined by (5.4). Our regularized problem differs from the one in Section 5 because we do not approximate the source function F. Indeed, we have the following regularized problem
{∂uδN(δ)∂t−∇(aobsδ(x,t)∇uδN(δ))−QδβN(δ)(uδN(δ))(x,t)=F(x,t,uδN(δ)(x,t)),(x,t)∈Ω×(0,T),uδN(δ)|∂Ω=0,t∈(0,T),uδN(x,T)=¯Gδ,N(δ)(x),(x,t)∈Ω×(0,T), | (6.1) |
We make the following assumptions on F∈C0(R): there exist constants C1,C′1,C2>0, p>1 and ¯γ>0 such that
zF(x,t,z)≥C1|z|p−C′1 | (6.2) |
|F(x,t,z)|≤C2(1+|z|p−1) | (6.3) |
(z1−z2)(F(x,t,z1)−F(x,t,z2))≥−¯γ|z1−z2|2. | (6.4) |
It is easy to check that the function F(x,t,z)=z1/3 satisfies conditions (6.2), (6.3) and (6.4). Note that this function is not locally Lipschitz.
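These properties of F(z)=z1/3 can be verified numerically: monotonicity gives (6.4) with ¯γ=0, while the difference quotient at 0 blows up, so F is not locally Lipschitz. A short Python check:

```python
import math

def F(z):
    # real cube root F(z) = z^{1/3}
    return math.copysign(abs(z) ** (1.0 / 3.0), z)

# (6.4) with gamma_bar = 0: F is nondecreasing, so (z1-z2)(F(z1)-F(z2)) >= 0
assert (2.0 - 1.0) * (F(2.0) - F(1.0)) >= 0
assert (-1.0 - 3.0) * (F(-1.0) - F(3.0)) >= 0

# not locally Lipschitz: |F(h) - F(0)| / h = h^{-2/3} blows up as h -> 0
quotients = [abs(F(h)) / h for h in (1e-3, 1e-6, 1e-9)]
assert quotients[0] < quotients[1] < quotients[2]
```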
Now we have the following result
Theorem 6.1. Let us assume that F satisfies (6.2), (6.3) and (6.4). Then, there exists a unique weak solution uδN(δ) of problem (6.1) such that
uδN(δ)∈L2(0,T;H1)∩L∞(0,T;L2). |
Assume that the problem (1.1) has a unique solution u satisfying u(⋅,t)∈WMT. Choose βNδ as in Theorem 5.1. Then we have the following estimate
E‖uδN(δ)(x,t)−u(x,t)‖2L2(Ω)≤β2tTN(δ)e(2¯γ+1)T˜C(δ). | (6.5) |
where ˜C(δ) is defined in (6.51).
Remark 6.1. Our method in this theorem gives the convergence rate (6.5), which is better than the error rate in (5.12). Indeed, since limδ→0K(Rδ)=+∞, we have
(right-hand side of (5.12))/(right-hand side of (6.5)) = β2tTN(δ)e(2K(Rδ)+1)T˜C(δ)/(β2tTN(δ)e(2¯γ+1)T˜C(δ)) = e2(K(Rδ)−¯γ)T → +∞ | (6.6) |
when δ→0.
First, by changing variable vδN(δ)(x,t)=uδN(δ)(x,T−t), we transform Problem (6.1) into the initial value problem
{∂vδN(δ)∂t−∇(bobsδ(x,t)∇vδN(δ))=−F(x,t,vδN(x,t))+PδβN(δ)(vδN(δ)(x,t)),(x,t)∈Ω×(0,T),vδN(δ)|∂Ω=0,t∈(0,T),vδN(δ)(x,0)=¯Gδ,N(δ)(x),(x,t)∈Ω×(0,T). | (6.7) |
where bobsδ(x,t)=M−aobsδ(x,t).
The weak formulation of the initial boundary value problem (6.7) can then be given in the following manner: Find vδN(δ)(t) defined in the open set (0,T) such that vδN(δ) satisfies the following variational problem
∫ΩddtvδN(δ)φdx+∫Ωbobsδ(x,t)∇vδN(δ)∇φdx+∫ΩF(vδN(δ)(t))φdx=∫ΩPδβN(δ)(vδN(δ)(t))φdx | (6.8) |
for all φ∈H1, and the initial condition
vδN(δ)(0)=¯Gδ,N(δ). | (6.9) |
Proof of the existence of solution of Problem (6.1). The main technique of this proof is learned from the article [34]. The proof consists of several steps.
Step 1: The Faedo – Galerkin approximation (introduced by Lions [22]).
In the space H1(Ω), we take a basis {ej}∞j=1 and define the finite dimensional subspace
Vm=span{e1,e2,...em}. |
Let ¯Gδ,N(δ),m be an element of Vm such that
¯Gδ,N(δ),m=m∑j=1dδmjej→¯Gδ,N(δ)stronglyinL2 | (6.10) |
as m→+∞. We can express the approximate solution of the problem (6.7) in the form
vδN(δ),m(t)=m∑j=1cδmj(t)ej, | (6.11) |
where the coefficients cδmj satisfy the system of linear differential equations
∫ΩddtvδN(δ),meidx+∫Ωbobsδ(x,t)∇vδN(δ),m∇eidx+∫ΩF(vδN(δ),m(t))eidx=∫ΩPδβN(δ)(vδN(δ),m(t))eidx | (6.12) |
with i=¯1,m and the initial conditions
cδmj(0)=dδmj,j=¯1,m. | (6.13) |
The existence of a local solution of system (6.12)–(6.13) is guaranteed by Peano's theorem on the existence of solutions. For each m there exists a solution vδN(δ),m(t) in the form (6.11) which satisfies (6.12) and (6.13) almost everywhere on 0≤t≤Tm for some Tm, 0<Tm≤T. The following estimates allow one to take Tm=T for all m.
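The Faedo–Galerkin construction in Step 1 can be sketched numerically: expand the approximate solution in finitely many eigenfunctions and integrate the resulting ODE system for the coefficients. A minimal Python illustration for a forward semilinear heat equation on (0,π) with eigenfunctions sin(jx), nonlinearity F(v)=v−v3 and explicit Euler time stepping (all of these choices are illustrative, not the paper's exact system):

```python
import math

m, n_t, T_final = 5, 500, 0.5
dt = T_final / n_t
lam = [j * j for j in range(1, m + 1)]      # eigenvalues of -d^2/dx^2 on (0, pi)
c = [1.0 / j for j in range(1, m + 1)]      # initial Fourier-sine coefficients
xs = [math.pi * (k + 0.5) / 64 for k in range(64)]  # midpoint quadrature nodes

def v_at(c, x):
    # Galerkin expansion v(x) = sum_j c_j sin(j x)
    return sum(cj * math.sin((j + 1) * x) for j, cj in enumerate(c))

for _ in range(n_t):
    vx = [v_at(c, x) for x in xs]
    fx = [v - v ** 3 for v in vx]           # nonlinearity F(v) = v - v^3
    # project F(v) onto each mode: (2/pi) * integral_0^pi F(v) sin(jx) dx
    proj = [(2 / math.pi) * (math.pi / 64) *
            sum(f * math.sin((j + 1) * x) for f, x in zip(fx, xs))
            for j in range(m)]
    # explicit Euler step for the ODE system c_j' = -lam_j c_j + (F(v))_j
    c = [cj + dt * (-lam[j] * cj + proj[j]) for j, cj in enumerate(c)]

assert all(math.isfinite(cj) for cj in c)
assert max(abs(cj) for cj in c) <= 2.0      # coefficients stay bounded
```

The a priori estimates of Step 2 play exactly the role of the final boundedness check here: they guarantee the coefficient system does not blow up before time T.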
Step 2. A priori estimates.
a) The first estimate. Multiplying the ith equation of (6.12) by cδmi(t), summing over i, and then integrating in time from 0 to t, after some rearrangement we get
‖vδN(δ),m(t)‖2L2(Ω)+2∫t0∫Ωbobsδ(x,t)|∇vδN(δ),m(s)|2dxds+2∫t0∫ΩF(vδN(δ),m(s))vδN(δ),m(s)dxds=‖¯Gδ,N(δ),m‖2+2∫t0∫ΩPδβN(δ)(vδN(δ),m(s))vδN(δ),m(s)dxds | (6.14) |
From (6.10), we have
‖¯Gδ,N(δ),m‖2≤B0(δ) for all m,
where B0(δ) depends on ¯Gδ,N(δ) and is independent of m.
Using the lower bound of bobsδ(x,t), we have the following estimate
2∫t0∫Ωbobsδ(x,t)|∇vδN(δ),m(s)|2dxds≥2b0∫t0‖vδN(δ),m(s)‖2H1(Ω)ds. | (6.15) |
Using the assumption on F, we have
2∫t0∫ΩF(vδN(δ),m(s))vδN(δ),m(s)dxds≥2C1∫t0‖vδN(δ),m(s)‖pLp(Ω)ds−2TC′1 | (6.16) |
and
2∫t0∫ΩPδβN(δ)(vδN(δ),m(s))vδN(δ),m(s)dxds≤2Tln(1βN(δ))∫t0‖vδN(δ),m(s)‖2L2(Ω)ds. | (6.17) |
Hence, it follows from (6.15)–(6.17) that
‖vδN(δ),m(t)‖2L2(Ω)+2b0∫t0‖vδN(δ),m(s)‖2H1(Ω)ds+2C1∫t0‖vδN(δ),m(s)‖pLp(Ω)ds≤B0(δ)+2TC′1+1Tln(1βN(δ))∫t0‖vδN(δ),m(s)‖2L2(Ω)ds. | (6.18) |
Let
Sδm(t)=‖vδN(δ),m(t)‖2L2(Ω)+2b0∫t0‖vδN(δ),m(s)‖2H1(Ω)ds+2C1∫t0‖vδN(δ),m(s)‖pLp(Ω)ds. | (6.19) |
Using the fact that ∫t0‖vδN(δ),m(s)‖2L2(Ω)ds≤∫t0Sδm(s)ds, we know from (6.18) that
Sδm(t)≤B0(δ)+2TC′1+1Tln(1βN(δ))∫t0Sδm(s)ds | (6.20) |
Applying Gronwall's lemma, we obtain
Sδm(t)≤[B0(δ)+2TC′1]exp(tTln(1βN(δ)))≤[B0(δ)+2TC′1]exp(ln(1βN(δ)))=B1(δ,T), | (6.21) |
for all m∈N and all t with 0≤t≤Tm≤T, i.e., Tm=T; here B1(δ,T) is a bound depending only on δ and T.
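The Gronwall step from (6.20) to (6.21) can be illustrated numerically: saturating an integral inequality of the form S(t)≤A+B∫0tS(s)ds produces at most A e^{Bt}. A short Python check (A, B, T are arbitrary illustrative constants):

```python
import math

A, B, T_final, n = 2.0, 1.5, 1.0, 10000
dt = T_final / n
S, integral = A, 0.0
for _ in range(n):
    integral += S * dt        # left-endpoint quadrature of integral_0^t S(s) ds
    S = A + B * integral      # saturate the inequality S <= A + B * integral
assert S <= A * math.exp(B * T_final) + 1e-6   # Gronwall bound A * e^{B t}
```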
b) The second estimate. Multiplying the ith equation of (6.12) by t2ddtcδmi(t) and summing up with respect to i, we have
‖tddtvδN(δ),m(t)‖2L2(Ω)+t2∫Ωbobsδ(x,t)∇vδN(δ),m(t)∇(ddt∇vδN(δ),m(t))dx+∫Ωt2F(vδN(δ),m(t))ddtvδN(δ),m(t)dx=∫Ωt2PδβN(δ)(vδN(δ),m(t))ddtvδN(δ),m(t)dx. | (6.22) |
It is easy to check that for any u∈H1(Ω)
ddt[∫Ωbobsδ(x,t)|∇u(t)|2dx]=2∫Ωbobsδ(x,t)∇u(t)∇u′(t)dx+∫Ω∂∂tbobsδ(x,t)|∇u(t)|2dx. | (6.23) |
The equality (6.22) is equivalent to
2‖tddtvδN(δ),m(t)‖2L2(Ω)+ddt[t2∫Ωbobsδ(x,t)|vδN(δ),m(t)|2dx]+2∫Ωt2F(vδN(δ),m(t))ddtvδN(δ),m(t)dx=2t∫Ωbobsδ(x,t)|∇vδN(δ),m(t)|2dx+t2∫Ω∂∂tbobsδ(x,t)|∇vδN(δ),m(s)|2dx+∫Ωt2PδβN(δ)(vδN(δ),m(t))ddtvδN(δ),m(t)dx. | (6.24) |
By integrating the last equality from 0 to t, we get
2∫t0‖sddsvδN(δ),m(s)‖2L2(Ω)ds+t2∫Ωbobsδ(x,t)|vδN(δ),m(t)|2dx⏟I1+2∫t0∫Ωs2F(vδN(δ),m(s))ddsvδN(δ),m(s)dxds⏟I2=2∫t0∫Ωsbobsδ(x,s)|∇vδN(δ),m(s)|2dxds⏟I3+∫t0∫Ωs2∂∂sbobsδ(x,s)|∇vδN(δ),m(s)|2dxds⏟I4+∫t0∫Ωs2PδβN(δ)(vδN(δ),m(s))ddsvδN(δ),m(s)dxds⏟I5. | (6.25) |
Estimate I1. Since, by assumption, bobsδ(x,t)≥b0, we know that
I1=t2∫Ωbobsδ(x,t)|∇vδN(δ),m(t)|2dx≥b0‖tvδN(δ),m(t)‖2H1. | (6.26) |
Estimate I2. To estimate I2, we need the following lemma.
Lemma 6.1. Let μ0=(C′1/C1)1/p, ¯m=∫+μ0−μ0|F(ξ)|dξ, ˜F(z)=∫z0F(y)dy, z∈R. Then we get
−¯m≤˜F(z)≤C2(|z|+1p|z|p),z∈R. | (6.27) |
The proof of Lemma 6.1 is easy and we omit it here. Now we return to estimate I2. By a simple computation and then using Lemma 6.1, we have
I2=2∫t0s2dds[∫Ωdx∫vδN(δ),m(x,s)0F(y)dy]ds=2∫t0[dds(s2∫Ω˜F(vδN(δ),m(x,s))dx)−2s∫Ω˜F(vδN(δ),m(x,s))dx]ds=2t2∫Ω˜F(vδN(δ),m(x,t))dx−4∫t0s∫Ω˜F(vδN(δ),m(x,s))dxds≥−2T2¯m|Ω|−4C2∫t0s[‖vδN(δ),m(s)‖L1+1p‖vδN(δ),m(s)‖pLp]ds≥−2T2¯m|Ω|−4TC2[T‖vδN(δ),m‖L∞(0,T;L2)+1p12C1Sδm(t)]≥−B2(δ,T). | (6.28) |
Estimate I3. Using (6.19), we have the following estimate
I3≤2Tb1∫t0‖vδN(δ),m(s)‖2H1ds≤2Tb12b0Sδm(t). | (6.29) |
Estimate I4. Let us set
˜aT=sup(x,t)∈[0,1]×[0,T]∂∂tbobsδ(x,t), |
and then I4 is bounded by
I4≤˜aT∫t0‖svδN(δ),m(s)‖2H1ds≤T2˜aT∫t0‖vδN(δ),m(s)‖2H1ds≤T2˜aTa0Sδm(t). | (6.30) |
Estimate I5. Using Lemma 5.2, we obtain the following estimate for I5:
I5≤2∫t0‖sPδβN(δ)(vδN(δ),m(s))‖‖sddsvδN(δ),m(s)‖ds≤∫t0‖sPδβN(δ)(vδN(δ),m(s))‖2ds+∫t0‖sddsvδN(δ),m(s)‖2ds≤ln2(1βN(δ))∫t0‖vδN(δ),m(s)‖2ds+∫t0‖sddsvδN(δ),m(s)‖2ds≤ln2(1βN(δ))Sδm(t)a0+∫t0‖sddsvδN(δ),m(s)‖2ds. | (6.31) |
Combining the estimates of I1–I5 above, we obtain
(6.32) |
Let
and then since
together with (6.32), we deduce that
(6.33) |
where
Applying Gronwall's inequality, we obtain that
(6.34) |
where depends only on and does not depend on .
Step 3. The limiting process.
Combining (6.19), (6.21) and (6.34), we deduce that, there exists a subsequence of still denoted by such that (see [22]), say,
(6.35) |
here . Using a compactness lemma ([22], Lions, p. 57) applied to , we can extract from the sequence a subsequence still denoted by such that
(6.36) |
By the Riesz-Fischer theorem, we can extract from a subsequence still denoted by such that
(6.37) |
Because is continuous, then
(6.38) |
On the other hand, using (6.3), (6.19), (6.21), we obtain
(6.39) |
where is a constant independent of . We shall now require the following lemma, whose proof can be found in [22] (Lemma 1.3).
Lemma 6.2. Let be a bounded open subset of and such that
(6.40) |
and
Then
Applying Lemma 6.2 with we deduce from (6.38) and (6.39) that
(6.41) |
Passing to the limit in (6.12) and (6.10) by (6.35) and (6.41), we have established a solution of Problem (6.1).
Assume that Problem (6.1) has two solutions and . We have to show that . We recall that
(6.42) |
For , we put
Then for , we get
(6.43) |
and
By taking the inner product of both sides of (6.43) with then taking the integral from to and noting the equality
we deduce
(6.44) |
By the assumption we have
(6.45) |
Using the inequality (5.8), we get the following estimate
(6.46) |
Combining (6.44), (6.45) and (6.46), and choosing
to obtain
This implies that for all then since . The proof is completed.
Our analysis and proof are short and similar to the proof of Theorem 5.1. Indeed, let us also set
By using some of the above steps we obtain
(6.47) |
The terms are similar to (5.22). Now, we consider . By assumption (6.4), we have
(6.48) |
After using the results of the proof of Theorem 5.1, we get
(6.49) |
Since
and applying Lemma 4.1, we observe that
(6.50) |
Gronwall's lemma allows us to obtain
(6.51) |
By choosing we have
(6.52) |
Here we consider a special source function for Problem (1.1), which yields the Ginzburg-Landau equation. This function satisfies the conditions of Section 5 but does not satisfy the condition of Section 4. For all , we approximate by defined by
(7.1) |
We consider the problem
(7.2) |
It is easy to see that . Choose for any , and is chosen as
(7.3) |
Choose such that
Then applying Theorem 5.1, the error is of the order
In this subsection, we are concerned with the backward problem for a nonlinear parabolic equation of Fisher–Kolmogorov–Petrovsky–Piskunov type
(7.4) |
with the following condition
(7.5) |
As shown by Skellam [33], Eq (7.4) has many applications in population dynamics and periodic environments. In these references, the quantity generally stands for a population density, and the coefficients , respectively, correspond to the diffusion coefficient, the intrinsic growth rate and a coefficient measuring the effects of competition on the birth and death rates. Our method applies to this model as in Example 7.1. However, since the ideas of Examples 7.1 and 7.2 are the same, we only state the model without giving the error estimates.
Taking the function , it is easy to see that it satisfies (6.2), (6.3) and (6.4). Moreover, we can show that it is not a locally Lipschitz function, so we cannot regularize the problem in this case via Problem (5.6). We consider the problem
(7.6) |
Choose and as in subsection 6.1. Applying Theorem 6.1, the error between the solution of Problem (7.6) and , , is of the order
Remark 7.1. In the following, we compare the method and results of this paper with those in [30,31]. All are truncation methods, but our problem is complicated by the data being perturbed by random noise. We need Lemma 4.1 to determine the correct setup according to the measured data. The coefficients must be chosen appropriately so that the error between the regularized solution and the exact solution converges. There are two advantages of this article that were not explored in [30,31]:
● In Theorem 4.3, we give a regularization result under a weaker assumption on , i.e., . This is one of the first results obtained in this case and was not considered in [30,31]. In those papers, to investigate the error, the exact solution is assumed to lie in a Gevrey space, which is more restrictive than working in the space .
● In [30,31], the source functions must satisfy a global Lipschitz condition. In our article, however, we deal with a fairly broad function class, consisting of locally Lipschitz functions and some functions that are not locally Lipschitz (see Section 6).
Nguyen Huy Tuan is thankful to Van Lang University. This research is funded by Thu Dau Mot University, Binh Duong Province, Vietnam under grant number DT.21.1-011.
The authors declare there are no conflicts of interest.
[1] N. H. Tuan, E. Nane, Approximate solutions of inverse problems for nonlinear space fractional diffusion equations with randomly perturbed data, SIAM/ASA J. Uncertain., 6 (2018), 302–338. https://doi.org/10.1137/17M1111139
[2] H. Amann, Time-delayed Perona–Malik type problems, Acta Math. Univ. Comenian., 76 (2007), 15–38.
[3] J. Hadamard, Lectures on the Cauchy Problems in Linear Partial Differential Equations, Yale University Press, New Haven, CT, 1923.
[4] M. Denche, K. Bessila, A modified quasi-boundary value method for ill-posed problems, J. Math. Anal. Appl., 301 (2005), 419–426. https://doi.org/10.1016/j.jmaa.2004.08.001
[5] N. V. Duc, An a posteriori mollification method for the heat equation backward in time, J. Inverse Ill-Posed Probl., 25 (2017), 403–422. https://doi.org/10.1515/jiip-2016-0026
[6] B. T. Johansson, D. Lesnic, T. Reeve, A method of fundamental solutions for radially symmetric and axisymmetric backward heat conduction problems, Int. J. Comput. Math., 89 (2012), 1555–1568. https://doi.org/10.1080/00207160.2012.680448
[7] A. B. Mair, H. F. Ruymgaart, Statistical inverse estimation in Hilbert scales, SIAM J. Appl. Math., 56 (1996), 1424–1444. https://doi.org/10.1137/S0036139994264476
[8] H. Kekkonen, M. Lassas, S. Siltanen, Analysis of regularized inversion of data corrupted by white Gaussian noise, Inverse Probl., 30 (2014), 045009. https://doi.org/10.1088/0266-5611/30/4/045009
[9] C. König, F. Werner, T. Hohage, Convergence rates for exponentially ill-posed inverse problems with impulsive noise, SIAM J. Numer. Anal., 54 (2016), 341–360. https://doi.org/10.1137/15M1022252
[10] T. Hohage, F. Weidling, Characterizations of variational source conditions, converse results, and maxisets of spectral regularization methods, SIAM J. Numer. Anal., 55 (2017), 598–620. https://doi.org/10.1137/16M1067445
[11] A. P. N. T. Mai, A statistical minimax approach to the Hausdorff moment problem, Inverse Probl., 24 (2008), 045018. https://doi.org/10.1088/0266-5611/24/4/045018
[12] L. Cavalier, Nonparametric statistical inverse problems, Inverse Probl., 24 (2008), 034004. https://doi.org/10.1088/0266-5611/24/3/034004
[13] N. Bissantz, H. Holzmann, Asymptotics for spectral regularization estimators in statistical inverse problems, Comput. Statist., 28 (2013), 435–453. https://doi.org/10.1007/s00180-012-0309-1
[14] D. D. Cox, Approximation of method of regularization estimators, Ann. Stat., 16 (1988), 694–712. https://doi.org/10.1214/aos/1176350829
[15] H. W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, Kluwer Academic, Dordrecht, Boston, London, 1996. https://doi.org/10.1007/978-94-009-1740-8
[16] B. T. Knapik, A. W. van der Vaart, J. H. van Zanten, Bayesian recovery of the initial condition for the heat equation, Comm. Statist. Theory Methods, 42 (2013), 1294–1313.
[17] N. Bochkina, Consistency of the posterior distribution in generalized linear inverse problems, Inverse Probl., 29 (2013), 095010. https://doi.org/10.1088/0266-5611/29/9/095010
[18] R. Plato, Converse results, saturation and quasi-optimality for Lavrentiev regularization of accretive problems, SIAM J. Numer. Anal., 55 (2017), 1315–1329. https://doi.org/10.1137/16M1089125
[19] L. Cavalier, Inverse problems in statistics, in: P. Alquier, E. Gautier, G. Stoltz (eds), Inverse Problems and High-Dimensional Estimation, Lecture Notes in Statistics, vol. 203, Springer, Berlin, Heidelberg, 3–96. https://doi.org/10.1007/978-3-642-19989-9
[20] M. Kirane, E. Nane, N. H. Tuan, On a backward problem for multidimensional Ginzburg-Landau equation with random data, Inverse Probl., 34 (2018), 015008. https://doi.org/10.1088/1361-6420/aa9c2a
[21] R. Lattes, J. L. Lions, Methode de Quasi-reversibility et Applications, Dunod, Paris, 1967.
[22] J. L. Lions, Quelques méthodes de résolution des problèmes aux limites non linéaires, Dunod; Gauthier–Villars, Paris, 1969.
[23] L. C. Evans, Partial Differential Equations, American Mathematical Society, Providence, RI, Volume 19, 1997.
[24] C. Cao, M. A. Rammaha, E. S. Titi, The Navier-Stokes equations on the rotating 2-D sphere: Gevrey regularity and asymptotic degrees of freedom, Z. Angew. Math. Phys., 50 (1999), 341–360. https://doi.org/10.1007/PL00001493
[25] R. Courant, D. Hilbert, Methods of Mathematical Physics, Interscience, New York, 1953.
[26] J. Wu, W. Wang, On backward uniqueness for the heat operator in cones, J. Differ. Equ., 258 (2015), 224–241. https://doi.org/10.1016/j.jde.2014.09.011
[27] A. Ruland, On the backward uniqueness property for the heat equation in two-dimensional conical domains, Manuscr. Math., 147 (2015), 415–436. https://doi.org/10.1007/s00229-015-0764-4
[28] L. Li, V. Sverak, Backward uniqueness for the heat equation in cones, Commun. Partial Differ. Equ., 37 (2012), 1414–1429. https://doi.org/10.1080/03605302.2011.635323
[29] N. H. Tuan, P. H. Quan, Some extended results on a nonlinear ill-posed heat equation and remarks on a general case of nonlinear terms, Nonlinear Anal. Real World Appl., 12 (2011), 2973–2984. https://doi.org/10.1016/j.nonrwa.2011.04.018
[30] D. D. Trong, N. H. Tuan, Regularization and error estimate for the nonlinear backward heat problem using a method of integral equation, Nonlinear Anal., 71 (2009), 4167–4176. https://doi.org/10.1016/j.na.2009.02.092
[31] P. T. Nam, An approximate solution for nonlinear backward parabolic equations, J. Math. Anal. Appl., 367 (2010), 337–349. https://doi.org/10.1016/j.jmaa.2010.01.020
[32] M. Chipot, Elements of Nonlinear Analysis, Birkhäuser Advanced Texts, Birkhäuser Verlag, Basel, 2000. https://doi.org/10.1007/978-3-0348-8428-0
[33] J. G. Skellam, Random dispersal in theoretical populations, Biometrika, 38 (1951), 196–218.
[34] L. T. P. Ngoc, A. P. N. Dinh, N. T. Long, On a nonlinear heat equation associated with Dirichlet-Robin conditions, Numer. Funct. Anal. Optim., 33 (2012), 166–189. https://doi.org/10.1080/01630563.2011.594198
[35] N. H. Tuan, L. D. Thang, V. A. Khoa, T. Tran, On an inverse boundary value problem of a nonlinear elliptic equation in three dimensions, J. Math. Anal. Appl., 426 (2015), 1232–1261. https://doi.org/10.1016/j.jmaa.2014.12.047