Citation: Marco Bramanti, Sergio Polidoro. Fundamental solutions for Kolmogorov-Fokker-Planck operators with time-depending measurable coefficients[J]. Mathematics in Engineering, 2020, 2(4): 734-771. doi: 10.3934/mine.2020035
We wish to dedicate this paper to Sandro Salsa on the occasion of his 70th birthday.
We consider a Kolmogorov-Fokker-Planck (from now on KFP) operator of the kind:
Lu=\sum_{i,j=1}^{q}a_{ij}(t)\,\partial^{2}_{x_{i}x_{j}}u+\sum_{k,j=1}^{N}b_{jk}x_{k}\,\partial_{x_{j}}u-\partial_{t}u,\qquad (x,t)\in\mathbb{R}^{N+1} \tag{1.1}
where:
(H1) A_{0}(t)=\{a_{ij}(t)\}_{i,j=1}^{q} is a symmetric, uniformly positive matrix on \mathbb{R}^{q}, q\leq N, with bounded measurable entries defined for t\in\mathbb{R}, so that
\nu|\xi|^{2}\leq\sum_{i,j=1}^{q}a_{ij}(t)\xi_{i}\xi_{j}\leq\nu^{-1}|\xi|^{2} \tag{1.2}
for some constant \nu>0, every \xi\in\mathbb{R}^{q}, and a.e. t\in\mathbb{R}.
Lanconelli and Polidoro in [13] studied the operators (1.1) with constant a_{ij}, proving that they are hypoelliptic if and only if the matrix B=\{b_{ij}\}_{i,j=1}^{N} satisfies the following condition. There exists a basis of \mathbb{R}^{N} such that B assumes the following form:
(H2) For m_{0}=q and suitable positive integers m_{1},\dots,m_{\kappa} such that
m_{0}\geq m_{1}\geq\dots\geq m_{\kappa}\geq1,\qquad m_{0}+m_{1}+\dots+m_{\kappa}=N, \tag{1.3}
we have
B=\begin{bmatrix} * & * & \cdots & * & * \\ B_{1} & * & \cdots & * & * \\ O & B_{2} & \cdots & * & * \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ O & O & \cdots & B_{\kappa} & * \end{bmatrix} \tag{1.4}
where every block B_{j} is an m_{j}\times m_{j-1} matrix of rank m_{j}, for j=1,2,\dots,\kappa, while the entries of the blocks denoted by * are arbitrary.
It is also proved in [13] that the operator L (corresponding to constant a_{ij}) is left invariant with respect to a suitable (noncommutative) Lie group of translations on \mathbb{R}^{N}. If, in addition, all the blocks * in (1.4) vanish, then L is also 2-homogeneous with respect to a family of dilations. In this very special case, the operator L fits into the rich theory of left invariant, 2-homogeneous Hörmander operators on homogeneous groups.
Coming back to the family of hypoelliptic and left invariant operators with constant aij (and possibly nonzero blocks ∗ in (1.4)), an explicit fundamental solution is known, after [11] and [13].
A first result of this paper consists in showing that if, under the same structural assumptions considered in [13], the coefficients a_{ij} are allowed to depend on t, even just in an L^{\infty} way, then an explicit fundamental solution \Gamma can still be constructed. It is worth noting that, under our assumptions (H1)–(H2), L is hypoelliptic if and only if the coefficients a_{ij} are C^{\infty} functions, which also means that \Gamma is smooth outside the pole. In our more general context, \Gamma will be smooth in x and only locally Lipschitz continuous in t, outside the pole. Our fundamental solution also allows us to solve a Cauchy problem for L under various assumptions on the initial datum, and to prove its uniqueness. Moreover, we show that the fundamental solution of L satisfies two-sided bounds in terms of the fundamental solutions of model operators of the kind:
L_{\alpha}u=\alpha\sum_{i=1}^{q}\partial^{2}_{x_{i}x_{i}}u+\sum_{k,j=1}^{N}b_{jk}x_{k}\,\partial_{x_{j}}u-\partial_{t}u, \tag{1.5}
whose explicit expression is more easily handled. This fact has other interesting consequences when combined with the results of [13], which allow one to compare the fundamental solution of (1.5) with that of the corresponding "principal part operator", obtained from (1.5) by annihilating all the blocks * in (1.4). The fundamental solution of the latter operator has an even simpler explicit form, since it possesses both translation invariance and homogeneity.
To put our results into context, let us now make some historical remarks. Already in 1934, Kolmogorov in [10] exhibited an explicit fundamental solution, smooth outside the pole, for the ultraparabolic operator
\partial^{2}_{xx}+x\,\partial_{y}-\partial_{t}\quad\text{in }\mathbb{R}^{3}.
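Kolmogorov's kernel can be checked symbolically. The sketch below (assuming Python with sympy) verifies that the explicit expression \frac{\sqrt{3}}{2\pi t^{2}}\exp(-(x^{2}/t+3xy/t^{2}+3y^{2}/t^{3})), which is the kernel produced by Example 1.8 below with \alpha=1, solves the equation for t>0:

```python
# Symbolic sketch: Kolmogorov's fundamental solution (Example 1.8 with alpha = 1)
# solves  u_xx + x u_y - u_t = 0  for t > 0.
import sympy as sp

x, y = sp.symbols('x y', real=True)
t = sp.symbols('t', positive=True)
Gamma = sp.sqrt(3) / (2 * sp.pi * t**2) * sp.exp(-(x**2/t + 3*x*y/t**2 + 3*y**2/t**3))

residual = sp.diff(Gamma, x, 2) + x * sp.diff(Gamma, y) - sp.diff(Gamma, t)
assert residual.equals(0)  # the PDE is satisfied identically
```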
For more general classes of ultraparabolic KFP operators, Weber [20] (1951), Il'in [9] (1964), and Sonin [19] (1967) proved the existence of a fundamental solution smooth outside the pole by the Levi method, starting with an approximate fundamental solution inspired by the one found by Kolmogorov. Hörmander, in the introduction of [8] (1967), sketches a procedure to compute explicitly (by Fourier transform and the method of characteristics) a fundamental solution for a class of KFP operators of type (1.1) (with constant a_{ij}). In all the aforementioned papers the focus is to prove that the operator, despite its degenerate character, is hypoelliptic. This is accomplished by showing the existence of a fundamental solution smooth outside the pole, without computing it explicitly.
Kupcov in [11] (1972) computes the fundamental solution for a class of KFP operators of the kind (1.1) (with constant a_{ij}). This procedure is generalized by the same author in [12] (1982) to a class of operators (1.1) with time-dependent coefficients a_{ij}, which, however, are assumed to be of class C^{\kappa} for some positive integer \kappa related to the structure of the matrix B. Our procedure to compute the fundamental solution follows the technique by Hörmander (different from that of Kupcov) and works also for nonsmooth a_{ij}(t).
Based on the explicit expression of the fundamental solution, existence, uniqueness and regularity issues for the Cauchy problem have been studied in the semigroup setting. We refer here to the article by Lunardi [14], and to Farkas and Lorenzi [7]. The parametrix method introduced in [9,19,20] was used by Polidoro in [18] and by Di Francesco and Pascucci in [5] for more general families of Kolmogorov equations with Hölder continuous coefficients. We also refer to the article [4] by Delarue and Menozzi, where a Lipschitz continuous drift term is considered in the framework of the stochastic theory. For a recent survey on the theory of KFP operators we refer to the paper [1] by Anceschi and Polidoro, while a discussion of several motivations to study this class of operators can be found for instance in the survey book [2, §2.1].
The interest in studying KFP operators with a possibly rough time-dependence of the coefficients comes from the theory of stochastic processes. Indeed, let \sigma=\sigma(t) be an N\times q matrix with zero entries below the q-th row, let B be as in (1.4), and let (W_{t})_{t\geq t_{0}} be a q-dimensional Wiener process. Denote by (X_{t})_{t\geq t_{0}} the solution to the following N-dimensional stochastic differential equation
\begin{cases} dX_{t}=-BX_{t}\,dt+\sigma(t)\,dW_{t}\\ X_{t_{0}}=x_{0}.\end{cases} \tag{1.6}
Then the forward Kolmogorov operator Kf of (Xt)t≥t0 agrees with L up to a constant zero order term:
K_{f}v(x,t)=Lv(x,t)+\mathrm{tr}(B)\,v(x,t),
where
a_{ij}(t)=\frac{1}{2}\sum_{k=1}^{q}\sigma_{ik}(t)\sigma_{jk}(t),\qquad i,j=1,\dots,q. \tag{1.7}
Moreover, the backward Kolmogorov operator Kb of (Xt)t≥t0 acts as follows
K_{b}u(y,s)=\partial_{s}u(y,s)+\sum_{i,j=1}^{q}a_{ij}(s)\,\partial^{2}_{y_{i}y_{j}}u(y,s)-\sum_{i,j=1}^{N}b_{ij}y_{j}\,\partial_{y_{i}}u(y,s).
Note that Kf is the transposed operator of Kb. In general, given a differential operator K, its transposed operator K∗ is the one which satisfies the relation
\int_{\mathbb{R}^{N+1}}\phi(x,t)\,K^{*}\psi(x,t)\,dx\,dt=\int_{\mathbb{R}^{N+1}}K\phi(x,t)\,\psi(x,t)\,dx\,dt
for every \phi,\psi\in C_{0}^{\infty}(\mathbb{R}^{N+1}).
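The relation (1.7) can be illustrated by simulation: the solution of (1.6) is Gaussian with mean E(t-t_{0})x_{0} and covariance \int_{t_{0}}^{t}E(t-s)\sigma(s)\sigma(s)^{T}E(t-s)^{T}ds, which equals 2C(t,t_{0}) in the notation (1.11) below, since \sigma\sigma^{T}=2A by (1.7). A minimal Euler–Maruyama sketch (assuming Python with numpy; the matrices below come from Example 1.8 with a\equiv1 and are only illustrative):

```python
# Euler-Maruyama sketch for dX = -B X dt + sigma dW (Eq. (1.6)).
# Since sigma sigma^T = 2A by (1.7), Cov(X_t) = 2 C(t, t0).
# Illustrative data: B of Example 1.8, constant sigma (so a(t) = 1).
import numpy as np

rng = np.random.default_rng(0)
B = np.array([[0.0, 0.0], [1.0, 0.0]])      # drift of L u = u_{x1 x1} + x1 u_{x2} - u_t
sigma = np.array([[np.sqrt(2.0)], [0.0]])   # N x q with q = 1, A0 = (1/2) sigma sigma^T = 1

T, n_steps, n_paths = 1.0, 200, 100_000
dt = T / n_steps
X = np.zeros((n_paths, 2))                  # X_{t0} = 0, t0 = 0
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, 1))
    X += -X @ B.T * dt + dW @ sigma.T

# 2 C(T,0) for a = 1: here C(T,0) = [[T, -T^2/2], [-T^2/2, T^3/3]] (see Example 1.8)
cov_exact = np.array([[2*T, -T**2], [-T**2, 2*T**3/3]])
cov_mc = np.cov(X.T)
assert np.allclose(cov_mc, cov_exact, rtol=0.05, atol=0.02)
```

With x_{0}=0 the mean E(t-t_{0})x_{0} vanishes; repeating the experiment with x_{0}\neq0 would also exercise the drift part of the transition law.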
A further motivation for our study is the following. A regularity theory for the operator L with Hölder continuous coefficients has been developed by several authors (see e.g., [6,14,15]). However, as Pascucci and Pesce show in Example 1.3 of [16], the requirement of Hölder continuity in (x,t) with respect to the control distance may be very restrictive, due to the interaction of the time and space variables in the drift term of L. In view of this, a regularity requirement with respect to the x-variables alone, for fixed t, with possibly rough dependence on t, seems a more natural assumption. This paper can be seen as a first step towards studying KFP operators with coefficients measurable in time and Hölder continuous or VMO in space, overcoming the objection pointed out in [16]. For these operators the fundamental solution of (1.1) could be used as a parametrix, as done in [17], to build a fundamental solution.
Notation 1.1. Throughout the paper we will regard vectors x\in\mathbb{R}^{N} as columns, and we will write x^{T}, M^{T} to denote the transpose of a vector x or a matrix M. We also define the (symmetric, nonnegative) N\times N matrix
A(t)=\begin{bmatrix} A_{0}(t) & O\\ O & O\end{bmatrix}. \tag{1.8}
Before stating our results, let us fix precise definitions of solution to the equation Lu=0 and to a Cauchy problem for L.
Definition 1.2. We say that u(x,t) is a solution to the equation Lu=0 in RN×I, for some open interval I, if:
u is jointly continuous in RN×I;
for every t\in I, u(\cdot,t)\in C^{2}(\mathbb{R}^{N});
for every x\in\mathbb{R}^{N}, u(x,\cdot) is absolutely continuous on I, and \partial u/\partial t (defined for a.e. t) is essentially bounded for t ranging in every compact subinterval of I;
for a.e. t\in I and every x\in\mathbb{R}^{N}, Lu(x,t)=0.
Definition 1.3. We say that u(x,t) is a solution to the Cauchy problem
\begin{cases} Lu=0 & \text{in }\mathbb{R}^{N}\times(t_{0},T)\\ u(\cdot,t_{0})=f \end{cases} \tag{1.9}
for some T∈(−∞,+∞], t0∈(−∞,T), where f is continuous in RN or belongs to Lp(RN) for some p∈[1,∞) if:
(a) u is a solution to the equation Lu=0 in RN×(t0,T) (in the sense of the above definition);
(b1) if f\in C^{0}(\mathbb{R}^{N}) then u(x,t)\to f(x_{0}) as (x,t)\to(x_{0},t_{0}^{+}), for every x_{0}\in\mathbb{R}^{N};
(b2) if f\in L^{p}(\mathbb{R}^{N}) for some p\in[1,\infty) then u(\cdot,t)\in L^{p}(\mathbb{R}^{N}) for every t\in(t_{0},T), and \|u(\cdot,t)-f\|_{L^{p}(\mathbb{R}^{N})}\to0 as t\to t_{0}^{+}.
In the following, we will also need the transposed operator of L, defined by
L^{*}u=\sum_{i,j=1}^{q}a_{ij}(s)\,\partial^{2}_{y_{i}y_{j}}u-\sum_{k,j=1}^{N}b_{jk}y_{k}\,\partial_{y_{j}}u-u\,\mathrm{Tr}\,B+\partial_{s}u. \tag{1.10}
The definition of solution to the equation L∗u=0 is perfectly analogous to Definition 1.2.
We can now state precisely the main results of the paper.
Theorem 1.4. Under the assumptions (H1)–(H2) above, denote by E(s) and C(t,t0) the following N×N matrices
E(s)=\exp(-sB),\qquad C(t,t_{0})=\int_{t_{0}}^{t}E(t-\sigma)A(\sigma)E(t-\sigma)^{T}\,d\sigma \tag{1.11}
for s,t,t0∈R and t>t0. Then the matrix C(t,t0) is symmetric and positive for every t>t0. Let
\Gamma(x,t;x_{0},t_{0})=\frac{1}{(4\pi)^{N/2}\sqrt{\det C(t,t_{0})}}\,e^{-\left(\frac{1}{4}(x-E(t-t_{0})x_{0})^{T}C(t,t_{0})^{-1}(x-E(t-t_{0})x_{0})+(t-t_{0})\,\mathrm{Tr}\,B\right)} \tag{1.12}
for t>t0, Γ=0 for t≤t0. Then Γ has the following properties (so that Γ is a fundamental solution for L with pole (x0,t0)).
(i) In the region
\mathbb{R}^{2N+2}_{*}=\{(x,t,x_{0},t_{0})\in\mathbb{R}^{2N+2}:(x,t)\neq(x_{0},t_{0})\} \tag{1.13}
the function \Gamma is jointly continuous in (x,t,x_{0},t_{0}) and is C^{\infty} with respect to x,x_{0}. The functions \partial^{\alpha+\beta}\Gamma/\partial x^{\alpha}\partial x_{0}^{\beta} (for all multiindices \alpha,\beta) are jointly continuous in (x,t,x_{0},t_{0})\in\mathbb{R}^{2N+2}_{*}. Moreover, \Gamma and \partial^{\alpha+\beta}\Gamma/\partial x^{\alpha}\partial x_{0}^{\beta} are Lipschitz continuous with respect to t and with respect to t_{0} in any region H\leq t_{0}+\delta\leq t\leq K, for fixed H,K\in\mathbb{R} and \delta>0.
\lim_{|x|\to+\infty}\Gamma(x,t;x_{0},t_{0})=0 for every t>t_{0} and every x_{0}\in\mathbb{R}^{N}.
\lim_{|x_{0}|\to+\infty}\Gamma(x,t;x_{0},t_{0})=0 for every t>t_{0} and every x\in\mathbb{R}^{N}.
(ii) For every fixed (x0,t0)∈RN+1, the function Γ(⋅,⋅;x0,t0) is a solution to Lu=0 in RN×(t0,+∞) (in the sense of Definition 1.2);
(iii) For every fixed (x,t)∈RN+1, the function Γ(x,t;⋅,⋅) is a solution to L∗u=0 in RN×(−∞,t);
(iv) Let f\in C_{b}^{0}(\mathbb{R}^{N}) (bounded continuous), or f\in L^{p}(\mathbb{R}^{N}) for some p\in[1,\infty). Then there exists one and only one solution to the Cauchy problem (1.9) (in the sense of Definition 1.3, with T=\infty) such that u\in C_{b}^{0}(\mathbb{R}^{N}\times[t_{0},\infty)) or u(\cdot,t)\in L^{p}(\mathbb{R}^{N}) for every t>t_{0}, respectively. The solution is given by
u(x,t)=\int_{\mathbb{R}^{N}}\Gamma(x,t;y,t_{0})f(y)\,dy \tag{1.14}
and is C^{\infty}(\mathbb{R}^{N}) with respect to x for every fixed t>t_{0}. If moreover f is continuous and vanishes at infinity, then u(\cdot,t)\to f uniformly in \mathbb{R}^{N} as t\to t_{0}^{+}.
(v) Let f be a (possibly unbounded) continuous function on RN satisfying the condition
\int_{\mathbb{R}^{N}}|f(x)|\,e^{-\alpha|x|^{2}}\,dx<\infty, \tag{1.15}
for some α>0. Then there exists T>0 such that there exists one and only one solution u to the Cauchy problem (1.9) satisfying condition
\int_{t_{0}}^{T}\int_{\mathbb{R}^{N}}|u(x,t)|\,e^{-C|x|^{2}}\,dx\,dt<+\infty \tag{1.16}
for some C>0. The solution u(x,t) is given by (1.14) for t∈(t0,T). It is C∞(RN) with respect to x for every fixed t∈(t0,T).
(vi) Γ satisfies for every x0∈RN, t0<t the integral identities
\int_{\mathbb{R}^{N}}\Gamma(x_{0},t;y,t_{0})\,dy=1,\qquad \int_{\mathbb{R}^{N}}\Gamma(x,t;x_{0},t_{0})\,dx=e^{-(t-t_{0})\,\mathrm{Tr}\,B}.
(vii) Γ satisfies the reproduction formula
\Gamma(x,t;y,s)=\int_{\mathbb{R}^{N}}\Gamma(x,t;z,\tau)\,\Gamma(z,\tau;y,s)\,dz
for every x,y∈RN and s<τ<t.
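For the constant-coefficient operator of Example 1.8 below (a\equiv1, so \mathrm{Tr}\,B=0 and both integrals in (vi) equal 1), the identities (vi) and (vii) can be checked by quadrature; a sketch assuming Python with numpy:

```python
# Quadrature sketch of (vi) and (vii) for L u = u_{x1 x1} + x1 u_{x2} - u_t
# (Example 1.8 with a = 1, so Tr B = 0).
import numpy as np

def E(s):                        # E(s) = exp(-sB), B = [[0,0],[1,0]] nilpotent
    return np.array([[1.0, 0.0], [-s, 1.0]])

def C(tau):                      # C(t,t0) depends only on tau = t - t0 here
    return np.array([[tau, -tau**2/2], [-tau**2/2, tau**3/3]])

def gamma(x, t, y, s):           # formula (1.12)
    tau = t - s
    w = x - y @ E(tau).T
    q = np.einsum('...i,ij,...j->...', w, np.linalg.inv(C(tau)), w)
    return np.exp(-q/4) / (4*np.pi*np.sqrt(np.linalg.det(C(tau))))

g = np.linspace(-8.0, 8.0, 601)
z1, z2 = np.meshgrid(g, g, indexing='ij')
z = np.stack([z1, z2], axis=-1)
dz = (g[1] - g[0])**2

x, y = np.array([0.3, -0.2]), np.array([0.0, 0.0])
t, tau_mid, s = 1.0, 0.5, 0.0

mass = np.sum(gamma(x, t, z, s)) * dz                      # identity (vi): = 1
lhs = gamma(x, t, y, s)                                    # reproduction (vii)
rhs = np.sum(gamma(x, t, z, tau_mid) * gamma(z, tau_mid, y, s)) * dz
assert abs(mass - 1.0) < 1e-3
assert abs(rhs - lhs) < 1e-3 * lhs
```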
Remark 1.5. Our uniqueness results only require the condition (1.16). Indeed, as we will prove in Proposition 4.14 all the solutions to the Cauchy problem (1.9), in the sense of Definition 1.3, with f∈Lp(RN) for some p∈[1,∞), f ∈C0b(RN) or f∈C0(RN) with f satisfying (1.15), do satisfy the condition (1.16).
Remark 1.6. All the statements in the above theorem still hold if the coefficients aij(t) are defined only for t belonging to some interval I. In this case the above formulas need to be considered only for t,t0∈I. In order to simplify notation, throughout the paper we will only consider the case I=R.
The above theorem will be proved in section 4.
The second main result of this paper is a comparison between \Gamma and the fundamental solutions \Gamma_{\alpha} of the model operators (1.5) corresponding to \alpha=\nu and \alpha=\nu^{-1} (with \nu as in (1.2)). Specializing (1.12) to the operators (1.5) we have
\Gamma_{\alpha}(x,t;x_{0},t_{0})=\Gamma_{\alpha}(x-E(t-t_{0})x_{0},\,t-t_{0};0,0)
with
\Gamma_{\alpha}(x,t;0,0)=\frac{1}{(4\pi\alpha)^{N/2}\sqrt{\det C_{0}(t)}}\,e^{-\left(\frac{1}{4\alpha}x^{T}C_{0}(t)^{-1}x+t\,\mathrm{Tr}\,B\right)} \tag{1.17}
where, here and in the following, C_{0}(t)=C(t,0) with A_{0}(t)=I_{q} (the q\times q identity matrix). Explicitly:
C_{0}(t)=\int_{0}^{t}E(t-\sigma)\,I_{q,N}\,E(t-\sigma)^{T}\,d\sigma, \tag{1.18}
where I_{q,N} is the N\times N matrix given by
I_{q,N}=\begin{bmatrix} I_{q} & 0\\ 0 & 0\end{bmatrix}.
Then:
Theorem 1.7. For every t>t0 and x,x0∈RN we have
\nu^{N}\,\Gamma_{\nu}(x,t;x_{0},t_{0})\leq\Gamma(x,t;x_{0},t_{0})\leq\frac{1}{\nu^{N}}\,\Gamma_{\nu^{-1}}(x,t;x_{0},t_{0}). \tag{1.19}
The above theorem will be proved in section 3. The following example illustrates the reason why our comparison result is useful.
Example 1.8. Let us consider the operator
Lu=a(t)u_{x_{1}x_{1}}+x_{1}u_{x_{2}}-u_{t}
with x∈R2, a(t) measurable and satisfying
0<\nu\leq a(t)\leq\nu^{-1}\quad\text{for every }t\in\mathbb{R}.
Let us compute Γ(x,t;0,0) in this case. We have:
A=\begin{bmatrix} a(t) & 0\\ 0 & 0\end{bmatrix};\qquad B=\begin{bmatrix} 0 & 0\\ 1 & 0\end{bmatrix};\qquad E(s)=\begin{bmatrix} 1 & 0\\ -s & 1\end{bmatrix};
C(t)\equiv C(t,0)=\int_{0}^{t}\begin{bmatrix} 1 & 0\\ -s & 1\end{bmatrix}\begin{bmatrix} a(t-s) & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix} 1 & -s\\ 0 & 1\end{bmatrix}ds=\int_{0}^{t}a(t-s)\begin{bmatrix} 1 & -s\\ -s & s^{2}\end{bmatrix}ds=\text{(after two integrations by parts)}=\begin{bmatrix} a^{*}(t) & -a^{**}(t)\\ -a^{**}(t) & 2a^{***}(t)\end{bmatrix}
where we have set:
a^{*}(t)=\int_{0}^{t}a(s)\,ds;\qquad a^{**}(t)=\int_{0}^{t}a^{*}(s)\,ds;\qquad a^{***}(t)=\int_{0}^{t}a^{**}(s)\,ds.
Therefore we find, for t>0:
\Gamma(x,t;0,0)=\frac{1}{4\pi\sqrt{\det C(t)}}\,e^{-\frac{1}{4}x^{T}C(t)^{-1}x}
with
C(t)^{-1}=\frac{1}{\det C(t)}\begin{bmatrix} 2a^{***}(t) & a^{**}(t)\\ a^{**}(t) & a^{*}(t)\end{bmatrix}
so that, explicitly, we have
\Gamma(x,t;0,0)=\frac{1}{4\pi\sqrt{\det C(t)}}\exp\left(-\frac{2a^{***}(t)x_{1}^{2}+2a^{**}(t)x_{1}x_{2}+a^{*}(t)x_{2}^{2}}{4\det C(t)}\right)\quad\text{with }\det C(t)=2a^{*}(t)a^{***}(t)-a^{**}(t)^{2}.
On the other hand, when considering the model operator
L_{\alpha}u=\alpha u_{x_{1}x_{1}}+x_{1}u_{x_{2}}-u_{t}
with constant α>0, we have
\Gamma_{\alpha}(x,t;0,0)=\frac{\sqrt{3}}{2\pi\alpha t^{2}}\exp\left(-\frac{1}{\alpha}\left(\frac{x_{1}^{2}}{t}+\frac{3x_{1}x_{2}}{t^{2}}+\frac{3x_{2}^{2}}{t^{3}}\right)\right).
The comparison result of Theorem 1.7 then reads as follows:
\nu^{2}\,\Gamma_{\nu}(x,t;0,0)\leq\Gamma(x,t;0,0)\leq\frac{1}{\nu^{2}}\,\Gamma_{\nu^{-1}}(x,t;0,0)
or, explicitly,
\frac{\nu\sqrt{3}}{2\pi t^{2}}\exp\left(-\frac{1}{\nu}\left(\frac{x_{1}^{2}}{t}+\frac{3x_{1}x_{2}}{t^{2}}+\frac{3x_{2}^{2}}{t^{3}}\right)\right)\leq\Gamma(x,t;0,0)\leq\frac{\sqrt{3}}{2\pi\nu t^{2}}\exp\left(-\nu\left(\frac{x_{1}^{2}}{t}+\frac{3x_{1}x_{2}}{t^{2}}+\frac{3x_{2}^{2}}{t^{3}}\right)\right).
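The two-sided bound can be tested numerically for a genuinely rough coefficient. The sketch below (assuming Python with numpy; the step function a(t) is an illustrative choice) computes a^{*}, a^{**}, a^{***} by cumulative quadrature and checks the bound on a grid of points:

```python
# Numerical test of the two-sided bound with a rough a(t) in {nu, 1/nu}.
import numpy as np

nu = 0.5
tt = np.linspace(0.0, 1.0, 20_001)
dt = tt[1] - tt[0]
a = np.where(np.floor(10*tt) % 2 == 0, nu, 1/nu)     # step coefficient, 10 jumps

def cumint(f):                                       # cumulative trapezoid rule
    return np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2) * dt])

a1 = cumint(a); a2 = cumint(a1); a3 = cumint(a2)     # a*, a**, a***
t, A1, A2, A3 = tt[-1], a1[-1], a2[-1], a3[-1]
detC = 2*A1*A3 - A2**2
assert detC > 0

def Gamma(x1, x2):                                   # explicit formula above
    return np.exp(-(2*A3*x1**2 + 2*A2*x1*x2 + A1*x2**2)/(4*detC)) / (4*np.pi*np.sqrt(detC))

def Gamma_alpha(al, x1, x2):                         # model kernel, constant alpha
    return np.sqrt(3)/(2*np.pi*al*t**2) * np.exp(-(x1**2/t + 3*x1*x2/t**2 + 3*x2**2/t**3)/al)

X1, X2 = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
G = Gamma(X1, X2)
assert np.all(nu**2 * Gamma_alpha(nu, X1, X2) <= G * (1 + 1e-6))
assert np.all(G <= Gamma_alpha(1/nu, X1, X2) / nu**2 * (1 + 1e-6))
```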
Plan of the paper. In §2 we compute the explicit expression of the fundamental solution \Gamma of L by using the Fourier transform and the method of characteristics, showing how one arrives at the explicit formula (1.12). This procedure is somewhat formal since, due to the nonsmoothness of the coefficients a_{ij}(t), we cannot plainly assume that the functional setting where the construction is done is the usual distributional one. Since all the properties of \Gamma which qualify it as a fundamental solution will be proved in the subsequent sections, on a purely logical basis one could say that §2 is superfluous. Nevertheless, we prefer to present this complete computation to show how the formula has been built. A further reason for doing so is the following: the only article where the analogous computation in the constant coefficient case is written in detail seems to be [11], and it is written in Russian.
In §3 we prove Theorem 1.7, comparing \Gamma with the fundamental solutions of two model operators, which are easier to write explicitly and to study. In §4 we will prove Theorem 1.4, namely: point (i) in §4.1; points (ii), (iii), (vi) in §4.2; points (iv), (v), (vii) in §4.3.
As explained at the end of the introduction, this section contains a formal computation of the fundamental solution Γ. To this aim, we choose any (x0,t0)∈RN+1, and we look for a solution to the Cauchy Problem
\begin{cases} Lu=0 & \text{for }x\in\mathbb{R}^{N},\ t>t_{0}\\ u(\cdot,t_{0})=\delta_{x_{0}} & \text{in }\mathcal{D}'(\mathbb{R}^{N}) \end{cases} \tag{2.1}
by applying the Fourier transform with respect to x, and using the notation
\hat{u}(\xi,t)=\mathcal{F}(u(\cdot,t))(\xi):=\int_{\mathbb{R}^{N}}e^{-2\pi i\,x^{T}\xi}\,u(x,t)\,dx.
We have:
\sum_{i,j=1}^{q}a_{ij}(t)(-4\pi^{2}\xi_{i}\xi_{j})\hat{u}+\sum_{k,j=1}^{N}b_{jk}\,\mathcal{F}(x_{k}\partial_{x_{j}}u)-\partial_{t}\hat{u}=0.
By the standard properties of the Fourier transform, it follows that
\mathcal{F}(x_{k}\partial_{x_{j}}u)=\frac{1}{-2\pi i}\,\partial_{\xi_{k}}\left(\mathcal{F}(\partial_{x_{j}}u)\right)=\frac{1}{-2\pi i}\,\partial_{\xi_{k}}\left(2\pi i\,\xi_{j}\hat{u}\right)=-\left(\delta_{jk}\hat{u}+\xi_{j}\partial_{\xi_{k}}\hat{u}\right).
Then the problem (2.1) is equivalent to the following Cauchy problem, which we write in compact form (recalling the definition of A(t) given in (1.8)) as
\begin{cases} (\nabla_{\xi}\hat{u}(\xi,t))^{T}B^{T}\xi+\partial_{t}\hat{u}(\xi,t)=-\left(4\pi^{2}\xi^{T}A(t)\xi+\mathrm{Tr}\,B\right)\hat{u}(\xi,t),\\ \hat{u}(\xi,t_{0})=e^{-2\pi i\,\xi^{T}x_{0}}. \end{cases} \tag{2.2}
Now we solve the problem (2.2) by the method of characteristics. Fix any initial condition η∈RN, and consider the system of ODEs:
\begin{cases} \dfrac{d\xi}{ds}(s)=B^{T}\xi(s), & \xi(0)=\eta,\\ \dfrac{dt}{ds}(s)=1, & t(0)=t_{0},\\ \dfrac{dz}{ds}(s)=-\left(4\pi^{2}\xi^{T}(s)A(t(s))\xi(s)+\mathrm{Tr}\,B\right)z(s), & z(0)=e^{-2\pi i\,\eta^{T}x_{0}}. \end{cases} \tag{2.3}
We plainly find t(s)=t_{0}+s and \xi(s)=\exp(sB^{T})\eta, so that the last equation becomes
\frac{dz}{ds}(s)=-\left(4\pi^{2}\left(\exp(sB^{T})\eta\right)^{T}A(t_{0}+s)\exp(sB^{T})\eta+\mathrm{Tr}\,B\right)z(s),
whose solution, with initial condition z(0)=e−2πiηTx0, is
z(s)=\exp\left(-4\pi^{2}\int_{0}^{s}\eta^{T}\left[\exp(\sigma B)A(t_{0}+\sigma)\exp(\sigma B^{T})\right]\eta\,d\sigma-s\,\mathrm{Tr}\,B-2\pi i\,\eta^{T}x_{0}\right).
Hence, substituting s=t-t_{0}, \eta=\exp((t_{0}-t)B^{T})\xi, and recalling the notation introduced in (1.11), we find
\hat{u}(\xi,t)=z(t-t_{0})=\exp\left(-4\pi^{2}\int_{0}^{t-t_{0}}\xi^{T}\exp((t_{0}-t+\sigma)B)\,A(t_{0}+\sigma)\exp((t_{0}-t+\sigma)B^{T})\,\xi\,d\sigma-(t-t_{0})\,\mathrm{Tr}\,B-2\pi i\,\xi^{T}\exp((t_{0}-t)B)x_{0}\right)
=\exp\left(-4\pi^{2}\xi^{T}\left(\int_{t_{0}}^{t}E(t-\sigma)A(\sigma)E(t-\sigma)^{T}\,d\sigma\right)\xi-(t-t_{0})\,\mathrm{Tr}\,B-2\pi i\,\xi^{T}E(t-t_{0})x_{0}\right)=\exp\left(-4\pi^{2}\xi^{T}C(t,t_{0})\xi-(t-t_{0})\,\mathrm{Tr}\,B-2\pi i\,\xi^{T}E(t-t_{0})x_{0}\right). \tag{2.4}
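Formula (2.4) can be checked numerically. The sketch below (assuming Python with numpy; a(t)=1+\frac{1}{2}\sin t and the matrices of Example 1.8 are illustrative choices, and the smoothness of a only serves the finite-difference test) evaluates \hat{u} from (2.4), with C(t,t_{0}) computed by Gauss–Legendre quadrature, and verifies by central differences that it solves the transport equation (2.2):

```python
# Finite-difference check that u_hat in (2.4) solves (2.2).
# Illustrative data: B of Example 1.8 and a smooth a(t) = 1 + sin(t)/2.
import numpy as np

B = np.array([[0.0, 0.0], [1.0, 0.0]])
t0, x0 = 0.0, np.array([0.4, -0.1])                 # pole; Tr B = 0 here
a = lambda s: 1.0 + 0.5*np.sin(s)

def E(s):                                           # exp(-sB), B nilpotent
    return np.array([[1.0, 0.0], [-s, 1.0]])

nodes, weights = np.polynomial.legendre.leggauss(60)
def C(t):                                           # C(t,t0) from (1.11) by quadrature
    s = t0 + (nodes + 1.0) * (t - t0) / 2.0
    w = weights * (t - t0) / 2.0
    out = np.zeros((2, 2))
    for si, wi in zip(s, w):
        Ei = E(t - si)
        A = np.array([[a(si), 0.0], [0.0, 0.0]])
        out += wi * Ei @ A @ Ei.T
    return out

def u_hat(xi, t):                                   # formula (2.4)
    return np.exp(-4*np.pi**2 * xi @ C(t) @ xi - 2j*np.pi * xi @ E(t - t0) @ x0)

xi, t, h = np.array([0.2, -0.3]), 1.0, 1e-5
du_dt = (u_hat(xi, t + h) - u_hat(xi, t - h)) / (2*h)
grad = np.array([(u_hat(xi + h*e, t) - u_hat(xi - h*e, t)) / (2*h) for e in np.eye(2)])
A_t = np.array([[a(t), 0.0], [0.0, 0.0]])
residual = grad @ (B.T @ xi) + du_dt + 4*np.pi**2 * (xi @ A_t @ xi) * u_hat(xi, t)
assert abs(residual) < 1e-6
```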
Let
G(\xi,t;x_{0},t_{0})=\exp\left(-4\pi^{2}\xi^{T}C(t,t_{0})\xi-(t-t_{0})\,\mathrm{Tr}\,B-2\pi i\,\xi^{T}E(t-t_{0})x_{0}\right),\qquad G_{0}(\xi,t,t_{0})=\exp\left(-4\pi^{2}\xi^{T}C(t,t_{0})\xi\right) \tag{2.5}
and note that if
\mathcal{F}(k(\cdot,t,t_{0}))(\xi)=G_{0}(\xi,t,t_{0})
then
\mathcal{F}\left(k(\cdot-E(t-t_{0})x_{0},t,t_{0})\exp(-(t-t_{0})\,\mathrm{Tr}\,B)\right)(\xi)=G(\xi,t;x_{0},t_{0}), \tag{2.6}
hence it is enough to compute the antitransform of G0(ξ,t,t0). In order to do that, the following will be useful:
Proposition 2.1. Let A be an N×N real symmetric positive constant matrix. Then:
\mathcal{F}\left(e^{-x^{T}Ax}\right)(\xi)=\left(\frac{\pi^{N}}{\det A}\right)^{1/2}e^{-\pi^{2}\xi^{T}A^{-1}\xi}.
The above formula is a standard known result in probability theory, being the characteristic function of a multivariate normal distribution (see for instance [3,Prop. 1.1.2]).
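A quadrature sketch (assuming Python with numpy) of Proposition 2.1 in dimension N=2, with the Fourier transform normalized as in the definition of \hat{u} above; the matrix A and frequency \xi below are arbitrary sample values:

```python
# Quadrature check of Proposition 2.1 with N = 2 and the normalization
# F f(xi) = int e^{-2 pi i x.xi} f(x) dx used above.
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])               # symmetric positive definite
xi = np.array([0.3, -0.4])

g = np.linspace(-10.0, 10.0, 801)
dx = (g[1] - g[0])**2
X1, X2 = np.meshgrid(g, g, indexing='ij')
quad_form = A[0,0]*X1**2 + 2*A[0,1]*X1*X2 + A[1,1]*X2**2
lhs = np.sum(np.exp(-quad_form - 2j*np.pi*(X1*xi[0] + X2*xi[1]))) * dx

rhs = np.sqrt(np.pi**2 / np.linalg.det(A)) * np.exp(-np.pi**2 * xi @ np.linalg.inv(A) @ xi)
assert abs(lhs - rhs) < 1e-8 * rhs
```

The rectangle rule is spectrally accurate here because the integrand is smooth and Gaussian-decaying, so the agreement is essentially at machine precision.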
To apply the previous proposition, and antitransform the function G0(ξ,t,t0), we still need to know that the matrix C(t,t0) is strictly positive. By [13] we know that the matrix C0(t) (see (1.18)) is positive, under the structure conditions on B expressed in (1.4). Exploiting this fact, let us show that the same is true for our C(t,t0):
Proposition 2.2. For every ξ∈RN and every t>t0 we have
\nu^{-1}\xi^{T}C_{0}(t-t_{0})\xi\geq\xi^{T}C(t,t_{0})\xi\geq\nu\,\xi^{T}C_{0}(t-t_{0})\xi. \tag{2.7}
In particular, the matrix C(t,t0) is positive for t>t0.
Proof.
\xi^{T}C(t,t_{0})\xi=\int_{t_{0}}^{t}\xi^{T}E(t-s)A(s)E(t-s)^{T}\xi\,ds.
Next, letting E(s)=(e_{ij}(s))_{i,j=1}^{N} and \eta_{h}(s)=\sum_{k=1}^{N}\xi_{k}e_{kh}(s), we have
\xi^{T}E(t-s)A(s)E(t-s)^{T}\xi=\sum_{i,j,h,k=1}^{N}\xi_{i}e_{ij}(t-s)\,a_{jh}(s)\,e_{kh}(t-s)\,\xi_{k}
=\sum_{j,h=1}^{q}a_{jh}(s)\,\eta_{j}(t-s)\,\eta_{h}(t-s)\geq\nu\sum_{j=1}^{q}\eta_{j}(t-s)^{2}=\nu\,\xi^{T}E(t-s)I_{q,N}E(t-s)^{T}\xi
where
I_{q,N}=\begin{bmatrix} I_{q} & 0\\ 0 & 0\end{bmatrix}.
Integrating for s∈(t0,t) the previous inequality we get
\xi^{T}C(t,t_{0})\xi\geq\nu\,\xi^{T}\left(\int_{t_{0}}^{t}E(t-s)I_{q,N}E(t-s)^{T}\,ds\right)\xi=\nu\,\xi^{T}C_{0}(t-t_{0})\xi.
Analogously we get the other bound.
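Proposition 2.2 can be illustrated numerically for the operator of Example 1.8 with a discontinuous a(t)\in\{\nu,\nu^{-1}\}; a sketch assuming Python with numpy, where for this operator E(t-s)I_{q,N}E(t-s)^{T} reduces to the 2\times2 matrix with rows (1,-u) and (-u,u^{2}), u=t-s:

```python
# Check of (2.7) for Example 1.8 with a(t) jumping between nu and 1/nu.
# Here E(t-s) I_{q,N} E(t-s)^T = [[1, -u], [-u, u^2]] with u = t - s.
import numpy as np

nu, t0, t = 0.5, 0.0, 1.0
ss = np.linspace(t0, t, 20_001)
ds = ss[1] - ss[0]
a = np.where(np.floor(7*ss) % 2 == 0, nu, 1/nu)      # rough coefficient
u = t - ss

def cmat(w):                                         # int w(s) E(t-s) I E(t-s)^T ds
    return np.array([[np.sum(w), -np.sum(w*u)],
                     [-np.sum(w*u), np.sum(w*u**2)]]) * ds

C, C0 = cmat(a), cmat(np.ones_like(a))
# nu*C0 <= C <= (1/nu)*C0 as quadratic forms:
assert np.linalg.eigvalsh(C - nu*C0).min() > -1e-6
assert np.linalg.eigvalsh(C0/nu - C).min() > -1e-6
```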
By the previous proposition, the matrix C(t,t0) is positive definite for every t>t0, since, under our assumptions, this is true for C0(t−t0). Therefore we can invert C(t,t0) and antitransform the function G0(ξ,t,t0) in (2.5). Namely, applying Proposition 2.1 to C(t,t0)−1 we get:
\mathcal{F}\left(e^{-x^{T}C(t,t_{0})^{-1}x}\right)(\xi)=\pi^{N/2}\sqrt{\det C(t,t_{0})}\,e^{-\pi^{2}\xi^{T}C(t,t_{0})\xi},\qquad \mathcal{F}\left(\frac{1}{(4\pi)^{N/2}\sqrt{\det C(t,t_{0})}}\,e^{-\frac{1}{4}x^{T}C(t,t_{0})^{-1}x}\right)(\xi)=e^{-4\pi^{2}\xi^{T}C(t,t_{0})\xi}.
Hence we have computed the antitransform of G0(ξ,t,t0), and by (2.6) this also implies
\mathcal{F}\left(\frac{1}{(4\pi)^{N/2}\sqrt{\det C(t,t_{0})}}\,e^{-\left(\frac{1}{4}(x-E(t-t_{0})x_{0})^{T}C(t,t_{0})^{-1}(x-E(t-t_{0})x_{0})+(t-t_{0})\,\mathrm{Tr}\,B\right)}\right)(\xi)=\exp\left(-4\pi^{2}\xi^{T}C(t,t_{0})\xi-(t-t_{0})\,\mathrm{Tr}\,B-2\pi i\,\xi^{T}E(t-t_{0})x_{0}\right).
Hence the (so far, "formal") fundamental solution of L is
\Gamma(x,t;x_{0},t_{0})=\frac{1}{(4\pi)^{N/2}\sqrt{\det C(t,t_{0})}}\,e^{-\left(\frac{1}{4}(x-E(t-t_{0})x_{0})^{T}C(t,t_{0})^{-1}(x-E(t-t_{0})x_{0})+(t-t_{0})\,\mathrm{Tr}\,B\right)},
which is the expression given in Theorem 1.4.
In this section we will prove Theorem 1.7. The first step is to derive from Proposition 2.2 an analogous control between the quadratic forms associated with the inverse matrices C_{0}(t-t_{0})^{-1}, C(t,t_{0})^{-1}. The following algebraic fact will help:
Proposition 3.1. Let C1,C2 be two real symmetric positive N×N matrices. If
\xi^{T}C_{1}\xi\leq\xi^{T}C_{2}\xi\quad\text{for every }\xi\in\mathbb{R}^{N} \tag{3.1}
then
\xi^{T}C_{2}^{-1}\xi\leq\xi^{T}C_{1}^{-1}\xi\quad\text{for every }\xi\in\mathbb{R}^{N}
and
\det C_{1}\leq\det C_{2}.
The first implication is already proved in [18, Remark 2.1]. For the convenience of the reader, we write a proof of both.
Proof. Let us fix some shorthand notation. Whenever (3.1) holds for two symmetric positive matrices, we will write C1≤C2. Note that for every symmetric N×N matrix G,
C_{1}\leq C_{2}\implies GC_{1}G\leq GC_{2}G. \tag{3.2}
For any symmetric positive matrix C, we can write C=M^{T}\Delta M with M orthogonal and \Delta=\mathrm{diag}(\lambda_{1},\dots,\lambda_{N}). Letting C^{1/2}=M^{T}\Delta^{1/2}M, one can check that C^{1/2} is still symmetric positive, and C^{1/2}C^{1/2}=C. Moreover, writing C^{-1/2}=(C^{-1})^{1/2}, we have
C^{-1/2}=M^{T}\Delta^{-1/2}M,\qquad C^{-1/2}CC^{-1/2}=I.
Then, applying (3.2) with G=C−1/21 we get
I=C_{1}^{-1/2}C_{1}C_{1}^{-1/2}\leq C_{1}^{-1/2}C_{2}C_{1}^{-1/2}.
Next, applying (3.2) to the last inequality with G=(C−1/21C2C−1/21)−1/2 we get
C_{1}^{1/2}C_{2}^{-1}C_{1}^{1/2}=\left(C_{1}^{-1/2}C_{2}C_{1}^{-1/2}\right)^{-1}=\left(C_{1}^{-1/2}C_{2}C_{1}^{-1/2}\right)^{-1/2}\left(C_{1}^{-1/2}C_{2}C_{1}^{-1/2}\right)^{-1/2}\leq\left(C_{1}^{-1/2}C_{2}C_{1}^{-1/2}\right)^{-1/2}\left(C_{1}^{-1/2}C_{2}C_{1}^{-1/2}\right)\left(C_{1}^{-1/2}C_{2}C_{1}^{-1/2}\right)^{-1/2}=I.
Finally, applying (3.2) to the last inequality with G=C−1/21 we get
C_{2}^{-1}=C_{1}^{-1/2}\left(C_{1}^{1/2}C_{2}^{-1}C_{1}^{1/2}\right)C_{1}^{-1/2}\leq C_{1}^{-1/2}C_{1}^{-1/2}=C_{1}^{-1}
so the first statement is proved. To show the inequality on determinants, we can write, since C1≤C2,
C_{2}^{-1/2}C_{1}C_{2}^{-1/2}\leq I.
Letting M be an orthogonal matrix that diagonalizes C−1/22C1C−1/22 we get
\mathrm{diag}(\lambda_{1},\dots,\lambda_{N})=M^{T}C_{2}^{-1/2}C_{1}C_{2}^{-1/2}M\leq I
which implies 0<\lambda_{i}\leq1 for i=1,2,\dots,N, hence also
1\geq\prod_{i=1}^{N}\lambda_{i}=\det\left(M^{T}C_{2}^{-1/2}C_{1}C_{2}^{-1/2}M\right)=\frac{\det C_{1}}{\det C_{2}},
so we are done.
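A random-matrix sketch (assuming Python with numpy) of both statements of Proposition 3.1, where C_{1}\leq C_{2} is enforced by construction:

```python
# Random-matrix check of Proposition 3.1: C1 <= C2 (quadratic forms) implies
# C2^{-1} <= C1^{-1} and det C1 <= det C2.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    n = int(rng.integers(2, 6))
    M1 = rng.normal(size=(n, n))
    M2 = rng.normal(size=(n, n))
    C1 = M1 @ M1.T + 0.1*np.eye(n)       # symmetric positive definite
    C2 = C1 + M2 @ M2.T                  # C1 <= C2 by construction
    assert np.linalg.eigvalsh(np.linalg.inv(C1) - np.linalg.inv(C2)).min() > -1e-7
    assert np.linalg.det(C1) <= np.linalg.det(C2) * (1 + 1e-9)
```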
Applying Propositions 3.1 and 2.2 we immediately get the following:
Proposition 3.2. For every ξ∈RN and every t>t0 we have
\nu^{-1}\xi^{T}C_{0}(t-t_{0})^{-1}\xi\geq\xi^{T}C(t,t_{0})^{-1}\xi\geq\nu\,\xi^{T}C_{0}(t-t_{0})^{-1}\xi \tag{3.3}
\nu^{-N}\det C_{0}(t-t_{0})\geq\det C(t,t_{0})\geq\nu^{N}\det C_{0}(t-t_{0}) \tag{3.4}
for every t>t0.
We are now in a position to give the
Proof of Thm. 1.7. Recall that C0(t) is defined in (1.18). From the definition of the matrix C(t,t0) one immediately reads that, letting Cν(t,t0) be the matrix corresponding to the operator Lν, one has
C_{\nu}(t,t_{0})=\nu\,C_{0}(t-t_{0}) \tag{3.5}
hence also
\det(C_{\nu}(t,t_{0}))=\nu^{N}\det C_{0}(t-t_{0}). \tag{3.6}
From the explicit form of Γ given in (1.12) we read that whenever the matrix A(t) is constant one has
\Gamma(x,t;x_{0},t_{0})=\Gamma(x-E(t-t_{0})x_{0},\,t-t_{0};0,0),
in particular this relation holds for Γν. Then (1.12), (3.5), (3.6) imply (1.17). Therefore (3.3) and (3.4) give:
\Gamma(x,t;x_{0},t_{0})=\frac{e^{-\left(\frac{1}{4}(x-E(t-t_{0})x_{0})^{T}C(t,t_{0})^{-1}(x-E(t-t_{0})x_{0})+(t-t_{0})\,\mathrm{Tr}\,B\right)}}{(4\pi)^{N/2}\sqrt{\det C(t,t_{0})}}\leq\frac{e^{-\left(\frac{\nu}{4}(x-E(t-t_{0})x_{0})^{T}C_{0}(t-t_{0})^{-1}(x-E(t-t_{0})x_{0})+(t-t_{0})\,\mathrm{Tr}\,B\right)}}{(4\pi)^{N/2}\sqrt{\nu^{N}\det C_{0}(t-t_{0})}}=\frac{1}{\nu^{N}}\,\Gamma_{\nu^{-1}}(x,t;x_{0},t_{0}).
Analogously,
\Gamma(x,t;x_{0},t_{0})\geq\frac{\nu^{N/2}\,e^{-\left(\frac{1}{4\nu}(x-E(t-t_{0})x_{0})^{T}C_{0}(t-t_{0})^{-1}(x-E(t-t_{0})x_{0})+(t-t_{0})\,\mathrm{Tr}\,B\right)}}{(4\pi)^{N/2}\sqrt{\det C_{0}(t-t_{0})}}=\nu^{N}\,\Gamma_{\nu}(x,t;x_{0},t_{0})
so we have (1.19).
As anticipated in the introduction, the above comparison result has further useful consequences when combined with some results of [13], where \Gamma_{\alpha} is compared with the fundamental solution of the "principal part operator" \tilde{L}_{\alpha} having the same matrix A=\alpha I_{q,N} and a simpler matrix B, namely the matrix obtained from (1.4) by annihilating all the * blocks. This operator \tilde{L}_{\alpha} is also 2-homogeneous with respect to dilations, and its matrix C_{0}(t) (which in the next statement is called C_{0}^{*}(t)) has a simpler form, which gives a useful asymptotic estimate for the matrix of L_{\alpha}. Namely, the following holds:
Proposition 3.3 (Short-time asymptotics of the matrix C_{0}(t)). (See [13, (3.14), (3.9), (2.17)]) There exist integers 1=\sigma_{1}\leq\sigma_{2}\leq\dots\leq\sigma_{N}=2\kappa+1 (with \kappa as in (1.4)), a constant invertible N\times N matrix C_{0}^{*}(1) and an N\times N diagonal matrix
D_{0}(\lambda)=\mathrm{diag}(\lambda^{\sigma_{1}},\lambda^{\sigma_{2}},\dots,\lambda^{\sigma_{N}})
such that the following holds. If we let
C_{0}^{*}(t)=D_{0}(t^{1/2})\,C_{0}^{*}(1)\,D_{0}(t^{1/2}),
so that
\det C_{0}^{*}(t)=c_{N}\,t^{Q}
where Q=\sum_{i=1}^{N}\sigma_{i}, then:
\det C_{0}(t)=\det C_{0}^{*}(t)\,(1+t\,O(1))\ \text{as }t\to0^{+},\qquad x^{T}C_{0}(t)^{-1}x=x^{T}C_{0}^{*}(t)^{-1}x\,(1+t\,O(1))\ \text{as }t\to0^{+}
where in the second equality O(1) stands for a bounded function on \mathbb{R}^{N}\times(0,1].
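For the operator of Example 1.8 all the * blocks vanish, so C_{0}(t)=C_{0}^{*}(t) exactly, with \sigma_{1}=1, \sigma_{2}=3, Q=4 and D_{0}(\lambda)=\mathrm{diag}(\lambda,\lambda^{3}); a sketch assuming Python with numpy:

```python
# Proposition 3.3 for Example 1.8: the * blocks vanish, so C0(t) = C0*(t) exactly,
# with sigma_1 = 1, sigma_2 = 3, Q = 4 and D0(l) = diag(l, l^3).
import numpy as np

def C0(t):
    return np.array([[t, -t**2/2], [-t**2/2, t**3/3]])

def D0(lam):
    return np.diag([lam, lam**3])

for t in [0.1, 0.5, 2.0]:
    assert np.allclose(C0(t), D0(np.sqrt(t)) @ C0(1.0) @ D0(np.sqrt(t)))
    assert np.isclose(np.linalg.det(C0(t)), np.linalg.det(C0(1.0)) * t**4)   # det = c_N t^Q
```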
The above result allows us to prove the following more explicit upper bound on \Gamma for short times:
Proposition 3.4. There exist constants c,\delta\in(0,1) such that for 0<t-t_{0}\leq\delta and every x,x_{0}\in\mathbb{R}^{N} we have:
\Gamma(x,t;x_{0},t_{0})\leq\frac{1}{c\,(t-t_{0})^{Q/2}}\,e^{-c\frac{|x-E(t-t_{0})x_{0}|^{2}}{t-t_{0}}}. \tag{3.7}
Proof. By (1.19) and the properties of the fundamental solution when the matrix A(t) is constant, we can write:
\Gamma(x,t;x_{0},t_{0})\leq\nu^{-N}\,\Gamma_{\nu^{-1}}(x-E(t-t_{0})x_{0},\,t-t_{0};0,0). \tag{3.8}
On the other hand,
\Gamma_{\alpha}(y,t;0,0)=\frac{1}{(4\pi\alpha)^{N/2}\sqrt{\det C_{0}(t)}}\,e^{-\left(\frac{1}{4\alpha}y^{T}C_{0}(t)^{-1}y+t\,\mathrm{Tr}\,B\right)}
and by Proposition 3.3 there exist c,δ∈(0,1) such that for 0<t≤δ and every y∈RN
\det C_{0}(t)=\det C_{0}^{*}(t)(1+tO(1))\geq c\det C_{0}^{*}(t)=c_{1}t^{Q},\qquad y^{T}C_{0}(t)^{-1}y=y^{T}C_{0}^{*}(t)^{-1}y\,(1+tO(1))\geq c\,y^{T}C_{0}^{*}(t)^{-1}y\geq c\,|D_{0}(t^{-1/2})y|^{2}=c\sum_{i=1}^{N}\frac{y_{i}^{2}}{t^{\sigma_{i}}}\geq c\,\frac{|y|^{2}}{t}.
Hence
\Gamma(x,t;x_{0},t_{0})\leq\frac{1}{(4\pi\nu)^{N/2}(t-t_{0})^{Q/2}}\,e^{\frac{\nu}{4}|\mathrm{Tr}\,B|}\,e^{-\nu c\frac{|x-E(t-t_{0})x_{0}|^{2}}{t-t_{0}}}=\frac{1}{c_{2}(t-t_{0})^{Q/2}}\,e^{-c_{2}\frac{|x-E(t-t_{0})x_{0}|^{2}}{t-t_{0}}}.
In this section we will prove point (i) of Theorem 1.4.
With reference to the explicit form of Γ in (1.12), we start noting that the elements of the matrix
E(t-\sigma)A(\sigma)E(t-\sigma)^{T}
are measurable and uniformly essentially bounded for (t,σ,t0) varying in any region H≤t0≤σ≤t≤K for fixed H,K∈R. This implies that the matrix
C(t,t_{0})=\int_{t_{0}}^{t}E(t-\sigma)A(\sigma)E(t-\sigma)^{T}\,d\sigma
is Lipschitz continuous with respect to t and with respect to t0 in any region H≤t0≤t≤K for fixed H,K∈R. Moreover, C(t,t0) and detC(t,t0) are jointly continuous in (t,t0). Recalling that, by Proposition 2.2, the matrix C(t,t0) is positive definite for any t>t0, we also have that C(t,t0)−1 is Lipschitz continuous with respect to t and with respect to t0 in any region H≤t0+δ≤t≤K for fixed H,K∈R and δ>0, and is jointly continuous in (t,t0) for t>t0.
From the explicit form of Γ and the previous remarks we conclude that Γ(x,t;x0,t0) is jointly continuous in (x,t;x0,t0) for t>t0, smooth w.r.t. x and x0 for t>t0 and Lipschitz continuous with respect to t and with respect to t0 in any region H≤t0+δ≤t≤K for fixed H,K∈R and δ>0.
Moreover, every derivative \partial^{\alpha+\beta}\Gamma/\partial x^{\alpha}\partial x_{0}^{\beta} is given by \Gamma times a polynomial in (x,x_{0}) with coefficients Lipschitz continuous with respect to t and with respect to t_{0} in any region H\leq t_{0}+\varepsilon\leq t\leq K, for fixed H,K\in\mathbb{R} and \varepsilon>0, and jointly continuous in (t,t_{0}) for t>t_{0}.
In order to show that \Gamma and \partial^{\alpha+\beta}\Gamma/\partial x^{\alpha}\partial x_{0}^{\beta} are jointly continuous in the region \mathbb{R}^{2N+2}_{*} (see (1.13)), we also need to show that these functions tend to zero as (x,t)\to(y,t_{0}^{+}) with y\neq x_{0}. For \Gamma, this assertion follows by Proposition 3.4: for y\neq x_{0} and (x,t)\to(y,t_{0}^{+}) we have
|x-E(t-t_{0})x_{0}|^{2}\to|y-x_{0}|^{2}\neq0,
hence
\frac{1}{(t-t_{0})^{Q/2}}\,e^{-c_{2}\frac{|x-E(t-t_{0})x_{0}|^{2}}{t-t_{0}}}\to0
and the same is true for Γ(x,t;x0,t0).
To prove the analogous assertion for \partial^{\alpha+\beta}\Gamma/\partial x^{\alpha}\partial x_{0}^{\beta} we first need to establish some upper bounds for these derivatives, which will be useful several times in the following.
Proposition 4.1. For t>s, let C(t,s)^{-1}=\{\gamma_{ij}(t,s)\}_{i,j=1}^{N}, let
C'(t,s)=E(t-s)^{T}C(t,s)^{-1}E(t-s)
and let C'(t,s)=\{\gamma'_{ij}(t,s)\}_{i,j=1}^{N}. Then:
(i) For every x,y∈RN, every t>s, k,h=1,2,...,N,
\partial_{x_{k}}\Gamma(x,t;y,s)=-\frac{1}{2}\,\Gamma(x,t;y,s)\sum_{i=1}^{N}\gamma_{ik}(t,s)\,(x-E(t-s)y)_{i} \tag{4.1}
\partial^{2}_{x_{h}x_{k}}\Gamma(x,t;y,s)=\Gamma(x,t;y,s)\left(\frac{1}{4}\left(\sum_{i}\gamma_{ik}(t,s)(x-E(t-s)y)_{i}\right)\left(\sum_{j}\gamma_{jh}(t,s)(x-E(t-s)y)_{j}\right)-\frac{1}{2}\gamma_{hk}(t,s)\right) \tag{4.2}
\partial_{y_{k}}\Gamma(x,t;y,s)=-\frac{1}{2}\,\Gamma(x,t;y,s)\sum_{i=1}^{N}\gamma'_{ik}(t,s)\,(y-E(s-t)x)_{i} \tag{4.3}
\partial^{2}_{y_{h}y_{k}}\Gamma(x,t;y,s)=\Gamma(x,t;y,s)\left(\frac{1}{4}\left(\sum_{i}\gamma'_{ik}(t,s)(y-E(s-t)x)_{i}\right)\left(\sum_{j}\gamma'_{jh}(t,s)(y-E(s-t)x)_{j}\right)-\frac{1}{2}\gamma'_{hk}(t,s)\right). \tag{4.4}
(ii) For every n,m=0,1,2,\dots there exists c>0 such that for every x,y\in\mathbb{R}^{N}, every t>s,
\sum_{|\alpha|\leq n,\,|\beta|\leq m}\left|\partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma(x,t;y,s)\right|\leq c\,\Gamma(x,t;y,s)\cdot\left\{1+\|C(t,s)^{-1}\|+\|C(t,s)^{-1}\|^{n}|x-E(t-s)y|^{n}\right\}\cdot\left\{1+\|C'(t,s)\|+\|C'(t,s)\|^{m}|y-E(s-t)x|^{m}\right\} \tag{4.5}
where ‖⋅‖ stands for a matrix norm.
Proof. A straightforward computation gives (4.1) and (4.2). Iterating this computation we can also bound
\sum_{|\alpha|\leq n}\left|\partial_{x}^{\alpha}\Gamma(x,t;y,s)\right|\leq c\,\Gamma(x,t;y,s)\cdot\left\{1+\|C(t,s)^{-1}\|+\|C(t,s)^{-1}\|^{n}|x-E(t-s)y|^{n}\right\}.
To compute y-derivatives of Γ, it is convenient to write
(x-E(t-s)y)^{T}C(t,s)^{-1}(x-E(t-s)y)=(y-E(s-t)x)^{T}C'(t,s)\,(y-E(s-t)x)
with
C'(t,s)=E(t-s)^{T}C(t,s)^{-1}E(t-s).
With this notation, we have
\Gamma(x,t;y,s)=\frac{1}{(4\pi)^{N/2}\sqrt{\det C(t,s)}}\,e^{-\left(\frac{1}{4}(y-E(s-t)x)^{T}C'(t,s)(y-E(s-t)x)+(t-s)\,\mathrm{Tr}\,B\right)}
and an analogous computation gives (4.3), (4.4) and, by iteration
\sum_{|\alpha|\leq m}\left|\partial_{y}^{\alpha}\Gamma(x,t;y,s)\right|\leq c\,\Gamma(x,t;y,s)\cdot\left\{1+\|C'(t,s)\|+\|C'(t,s)\|^{m}|y-E(s-t)x|^{m}\right\}
and finally also (4.5).
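Formula (4.1) can be checked against finite differences; a sketch (assuming Python with numpy) for the operator of Example 1.8 with a\equiv1, using \gamma_{ik}=(C^{-1})_{ik} and sample points chosen arbitrarily:

```python
# Finite-difference check of (4.1) for Example 1.8 with a = 1.
import numpy as np

def E(s):
    return np.array([[1.0, 0.0], [-s, 1.0]])

def C(tau):
    return np.array([[tau, -tau**2/2], [-tau**2/2, tau**3/3]])

def Gamma(x, t, y, s):                       # formula (1.12), Tr B = 0 here
    tau = t - s
    w = x - E(tau) @ y
    Ci = np.linalg.inv(C(tau))
    return np.exp(-(w @ Ci @ w)/4) / (4*np.pi*np.sqrt(np.linalg.det(C(tau))))

x, y = np.array([0.3, -0.2]), np.array([0.1, 0.4])
t, s, h = 1.2, 0.0, 1e-6
Ci = np.linalg.inv(C(t - s))
w = x - E(t - s) @ y
for k in range(2):
    e = np.eye(2)[k]
    fd = (Gamma(x + h*e, t, y, s) - Gamma(x - h*e, t, y, s)) / (2*h)
    exact = -0.5 * Gamma(x, t, y, s) * (Ci[:, k] @ w)   # gamma_{ik} = (C^{-1})_{ik}
    assert abs(fd - exact) < 1e-7
```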
With the previous bounds in hands we can now prove the following:
Theorem 4.2 (Upper bounds on the derivatives of \Gamma). (i) For every n,m=0,1,2,\dots and t,s ranging in a compact subset of \{(t,s):t\geq s+\varepsilon\} for some \varepsilon>0, we have
\sum_{|\alpha|\leq n,\,|\beta|\leq m}\left|\partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma(x,t;y,s)\right|\leq Ce^{-C'|x-E(t-s)y|^{2}}\cdot\left\{1+|x-E(t-s)y|^{n}+|y-E(s-t)x|^{m}\right\} \tag{4.6}
for every x,y∈RN, for constants C,C′ depending on n,m and the compact set.
In particular, for fixed t>s we have
\lim_{|x|\to+\infty}\partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma(x,t;y,s)=0\ \text{for every }y\in\mathbb{R}^{N},\qquad \lim_{|y|\to+\infty}\partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma(x,t;y,s)=0\ \text{for every }x\in\mathbb{R}^{N}
for every multiindices α,β.
(ii) For every n,m=0,1,2,\dots there exist \delta\in(0,1), C,c>0 such that for 0<t-s<\delta and every x,y\in\mathbb{R}^{N} we have
\sum_{|\alpha|\leq n,\,|\beta|\leq m}\left|\partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma(x,t;y,s)\right|\leq\frac{C}{(t-s)^{Q/2}}\,e^{-c\frac{|x-E(t-s)y|^{2}}{t-s}}\cdot\left\{(t-s)^{-\sigma_{N}}+(t-s)^{-n\sigma_{N}}|x-E(t-s)y|^{n}\right\}\cdot\left\{(t-s)^{-\sigma_{N}}+(t-s)^{-m\sigma_{N}}|y-E(s-t)x|^{m}\right\}. \tag{4.7}
In particular, for every fixed x0,y∈RN, x0≠y,s∈R,
\lim_{(x,t)\to(x_{0},s^{+})}\sum_{|\alpha|+|\beta|\leq k}\left|\partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma(x,t;y,s)\right|=0
so that \Gamma and \partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma(x,t;y,s) are jointly continuous in the region \mathbb{R}^{2N+2}_{*}.
Proof. (i) The matrix C(t,s) is jointly continuous in (t,s) and, by Proposition 2.2 is positive definite for any t>s. Hence for t,s ranging in a compact subset of {(t,s):t≥s+ε} we have
\|C(t,s)^{-1}\|^{n}+\|C'(t,s)\|^{m}\leq c\quad\text{and}\quad e^{-\left(\frac{1}{4}(x-E(t-s)y)^{T}C(t,s)^{-1}(x-E(t-s)y)+(t-s)\,\mathrm{Tr}\,B\right)}\leq c_{1}\,e^{-c|x-E(t-s)y|^{2}}
for some c,c1>0 only depending on n,m and the compact set. Hence by (4.5) and (1.12) we get (4.6).
Now let t,s be fixed. If y is fixed and \left\vert x\right\vert \rightarrow\infty, then (4.6) gives
\sum\limits_{\left\vert \alpha\right\vert \leq n, \left\vert \beta\right\vert \leq m}\left\vert \partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma\left( x, t;y, s\right) \right\vert \leq Ce^{-C^{\prime}\left\vert x\right\vert ^{2} }\left\{ 1+\left\vert x\right\vert ^{n}+\left\vert x\right\vert ^{m}\right\} \rightarrow0. |
If x is fixed and \left\vert y\right\vert \rightarrow\infty,
\begin{align*} \sum\limits_{\left\vert \alpha\right\vert \leq n, \left\vert \beta\right\vert \leq m}\left\vert \partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma\left( x, t;y, s\right) \right\vert \leq Ce^{-C^{\prime}\left\vert E\left( t-s\right) y\right\vert ^{2} }\left\{ 1+\left\vert E\left( t-s\right) y\right\vert ^{n}+\left\vert E\left( s-t\right) x\right\vert ^{m}\right\} \rightarrow0, \end{align*} |
because when \left\vert y\right\vert \rightarrow\infty also \left\vert E\left(t-s\right) y\right\vert \rightarrow\infty , since E\left(t-s\right) is invertible.
(ii) Applying (4.5) together with Proposition 3.4 we get that for some \delta\in\left(0, 1\right) , whenever 0 < t-s < \delta we have
\begin{align*} & \sum\limits_{\left\vert \alpha\right\vert \leq n, \left\vert \beta\right\vert \leq m}\left\vert \partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma\left( x, t;y, s\right) \right\vert \\ & \leq\frac{1}{c\left( t-s\right) ^{Q/2}}e^{-c\frac{\left\vert x-E\left( t-s\right) y\right\vert ^{2}}{t-s}}\cdot\left\{ 1+\left\Vert C\left( t, s\right) ^{-1}\right\Vert +\left\Vert C\left( t, s\right) ^{-1}\right\Vert ^{n}\left\vert x-E\left( t-s\right) y\right\vert ^{n}\right\} \\ & \cdot\left\{ 1+\left\Vert C^{\prime}\left( t, s\right) \right\Vert +\left\Vert C^{\prime}\left( t, s\right) \right\Vert ^{m}\left\vert y-E\left( s-t\right) x\right\vert ^{m}\right\} . \end{align*} |
Next, we recall that by Proposition 3.2 we have
\left\Vert C\left( t, s\right) ^{-1}\right\Vert \leq c\left\Vert C_{0}\left( t-s\right) ^{-1}\right\Vert |
and, by Proposition 3.3, for 0 < t-s\leq\delta ,
\leq c^{\prime}\left\Vert C_{0}^{\ast}\left( t-s\right) ^{-1}\right\Vert \leq c^{\prime\prime}\left( t-s\right) ^{-\sigma_{N}} |
and an analogous bound holds for C^{\prime}\left(t, s\right) , for small \left(t-s\right) . Hence we get (4.7).
If now x_{0}\neq y are fixed, from (4.7) we deduce
\sum\limits_{\left\vert \alpha\right\vert \leq n, \left\vert \beta\right\vert \leq m}\left\vert \partial_{x}^{\alpha}\partial_{y}^{\beta}\Gamma\left( x, t;y, s\right) \right\vert \leq\frac{C}{\left( t-s\right) ^{\frac{Q} {2}+\left( n+m\right) \sigma_{N}}}\exp\left( -\frac{c}{t-s}\right) \rightarrow0 |
as \left(x, t\right) \rightarrow\left(x_{0}, s^{+}\right).
With the above theorem, the proof of point (i) in Theorem 1.4 is complete.
Remark 4.3 (Long time behavior of \Gamma ). We have shown that the fundamental solution \Gamma\left(x, t; y, s\right) and its spatial derivatives of every order tend to zero as x or y goes to infinity, and tend to zero for t\rightarrow s^{+} when x\neq y . It is natural to ask what happens for t\rightarrow+\infty . However, nothing can be said in general about this limit, even when the coefficients a_{ij} are constant, and even in nondegenerate cases. Compare, for N = 1 , the heat operator
Hu = u_{xx}-u_{t}, |
for which
\Gamma\left( x, t;0, 0\right) = \frac{1}{\sqrt{4\pi t}}e^{-\frac{x^{2}}{4t} }\rightarrow0\ for \ t\rightarrow+\infty, every\ x\in\mathbb{R} |
and the Ornstein-Uhlenbeck type operator
Lu = u_{xx}-xu_{x}-u_{t} |
for which (1.12) gives
\Gamma\left( 0, t;x, 0\right) = \frac{1}{\sqrt{2\pi\left( 1-e^{-2t}\right) } }e^{-\frac{x^{2}}{2\left( 1-e^{-2t}\right) }}\rightarrow\frac{1}{\sqrt{2\pi }}e^{-\frac{x^{2}}{2}}\ as\ t\rightarrow+\infty. |
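Both limits are easy to confirm numerically. The following sketch is ours (plain NumPy; the function names are our own) and simply evaluates the two closed-form kernels above at a fixed spatial point for large t .

```python
import numpy as np

def heat_kernel(x, t):
    # Fundamental solution of u_xx - u_t with pole at (0,0): (4*pi*t)^(-1/2) exp(-x^2/(4t))
    return np.exp(-x**2 / (4*t)) / np.sqrt(4*np.pi*t)

def ou_kernel(x, t):
    # The Ornstein-Uhlenbeck-type kernel of the remark, with variance 1 - exp(-2t)
    s2 = 1.0 - np.exp(-2*t)
    return np.exp(-x**2 / (2*s2)) / np.sqrt(2*np.pi*s2)

x = 1.0
stationary = np.exp(-x**2/2) / np.sqrt(2*np.pi)   # the Gaussian limit claimed above
assert heat_kernel(x, 1e6) < 1e-3                 # flattens out and vanishes pointwise
assert abs(ou_kernel(x, 50.0) - stationary) < 1e-12
```

The heat kernel spreads and vanishes pointwise, while the second kernel stabilizes at the Gaussian \frac{1}{\sqrt{2\pi}}e^{-x^{2}/2} .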
In this section we will prove points (ii), (iii), (vi) of Theorem 1.4.
We want to check that our "candidate fundamental solution" with pole at \left(x_{0}, t_{0}\right) , given by (1.12), actually solves the equation outside the pole, with respect to \left(x, t\right) . Note that, by the results in § 4.1 we already know that \Gamma is infinitely differentiable w.r.t. x, x_{0} , and a.e. differentiable w.r.t. t, t_{0} .
Theorem 4.4. For every fixed \left(x_{0}, t_{0}\right) \in \mathbb{R}^{N+1} ,
\mathcal{L}\left( \Gamma\left( \cdot, \cdot;x_{0}, t_{0}\right) \right) \left( x, t\right) = 0\ \mathit{\text{for a.e.}}\ t \gt t_{0}\ \mathit{\text{and every}}\ x\in \mathbb{R}^{N}\mathit{\text{.}} |
Before proving the theorem, let us establish the following easy fact, which will be useful in the subsequent computation and is also interesting in its own right:
Proposition 4.5. For every t > t_{0} and x_{0}\in\mathbb{R}^{N} we have
\begin{align} \int_{\mathbb{R}^{N}}\Gamma\left( x, t;x_{0}, t_{0}\right) dx & = e^{-\left( t-t_{0}\right) \operatorname*{Tr}B} \\ \int_{\mathbb{R}^{N}}\Gamma\left( x_{0}, t;y, t_{0}\right) dy & = 1. \end{align} | (4.8) |
Proof. Let us compute, for t > t_{0} :
\begin{align*} & \int_{\mathbb{R}^{N}}\Gamma\left( x, t;x_{0}, t_{0}\right) dx\\ & = \frac{e^{-\left( t-t_{0}\right) \operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t, t_{0}\right) }}\int_{\mathbb{R}^{N} }e^{-\frac{1}{4}\left( x-E\left( t-t_{0}\right) x_{0}\right) ^{T} C^{-1}\left( t, t_{0}\right) \left( x-E\left( t-t_{0}\right) x_{0}\right) }dx\\ \text{letting }x & = E\left( t-t_{0}\right) x_{0}+2C\left( t, t_{0}\right) ^{1/2}y;dx = 2^{N}\det C\left( t, t_{0}\right) ^{1/2}dy\\ & = \frac{e^{-\left( t-t_{0}\right) \operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t, t_{0}\right) }}2^{N}\sqrt{\det C\left( t, t_{0}\right) }\int_{\mathbb{R}^{N}}e^{-\left\vert y\right\vert ^{2}}dy = e^{-\left( t-t_{0}\right) \operatorname*{Tr}B}. \end{align*} |
Next,
\begin{align*} & \int_{\mathbb{R}^{N}}\Gamma\left( x_{0}, t;y, t_{0}\right) dy\\ & = \frac{e^{-\left( t-t_{0}\right) \operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t, t_{0}\right) }}\int_{\mathbb{R}^{N} }e^{-\frac{1}{4}\left( x_{0}-E\left( t-t_{0}\right) y\right) ^{T} C^{-1}\left( t, t_{0}\right) \left( x_{0}-E\left( t-t_{0}\right) y\right) }dy\\ \text{letting }y & = E\left( t_{0}-t\right) \left( x_{0}-2C\left( t, t_{0}\right) ^{1/2}z\right) ;\\ dy & = 2^{N}\det C\left( t, t_{0}\right) ^{1/2}\det E\left( t_{0}-t\right) dz = 2^{N}\det C\left( t, t_{0}\right) ^{1/2}e^{\left( t-t_{0}\right) \operatorname{Tr}B}dz \end{align*} |
= \frac{e^{-\left( t-t_{0}\right) \operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t, t_{0}\right) }}2^{N}\det C\left( t, t_{0}\right) ^{1/2}e^{\left( t-t_{0}\right) \operatorname{Tr}B}\int_{\mathbb{R}^{N} }e^{-\left\vert y\right\vert ^{2}}dy = 1. |
Here in the change of variables we used the relation \det\left(\exp B\right) = e^{\operatorname{Tr}B} , holding for every square matrix B .
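The identity \det\left(\exp B\right) = e^{\operatorname{Tr}B} used in this change of variables can be checked numerically; the sketch below (ours) sums the power series of the matrix exponential for a small random matrix.

```python
import numpy as np

def expm_series(B, terms=60):
    # Matrix exponential exp(B) summed from its power series; adequate for small matrices
    E = np.eye(B.shape[0])
    term = np.eye(B.shape[0])
    for k in range(1, terms):
        term = term @ B / k
        E = E + term
    return E

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
lhs = np.linalg.det(expm_series(B))
rhs = np.exp(np.trace(B))
assert abs(lhs - rhs) < 1e-8 * abs(rhs)   # det(exp B) = e^{Tr B}
```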
Proof of Theorem 4.4. Keeping the notation of Proposition 4.1, and exploiting (4.1)–(4.2) we have
\begin{align} & \sum\limits_{k, j = 1}^{N}b_{jk}x_{k}\partial_{x_{j}}\Gamma\left( x, t;x_{0} , t_{0}\right) = \left( \nabla_{x}\Gamma\left( x, t;x_{0}, t_{0}\right) \right) ^{T}Bx\\ & = -\frac{1}{2}\Gamma\left( x, t;x_{0}, t_{0}\right) \left( x-E\left( t-t_{0}\right) x_{0}\right) ^{T}C\left( t, t_{0}\right) ^{-1} Bx. \end{align} | (4.9) |
\begin{align} & \sum\limits_{h, k = 1}^{q}a_{hk}\left( t\right) \partial_{x_{h}x_{k}}^{2} \Gamma\left( x, t;x_{0}, t_{0}\right) \\ & = \Gamma\left\{ \frac{1}{4}\sum\limits_{i, j = 1}^{N}\left( \sum\limits_{h, k = 1}^{q} a_{hk}\left( t\right) \gamma_{ik}\left( t, t_{0}\right) \gamma_{jh}\left( t\right) \right) \cdot\right. \\ & \left. \cdot\left( x-E\left( t-t_{0}\right) x_{0}\right) _{i}\left( x-E\left( t-t_{0}\right) x_{0}\right) _{j}-\frac{1}{2}\sum\limits_{h, k = 1} ^{q}a_{hk}\left( t\right) \gamma_{hk}\left( t, t_{0}\right) \right\} \\ & = \Gamma\left( x, t;x_{0}, t_{0}\right) \cdot\left\{ \frac{1}{4}\left( x-E\left( t-t_{0}\right) x_{0}\right) ^{T}C\left( t, t_{0}\right) ^{-1}A\left( t\right) C\left( t, t_{0}\right) ^{-1}\left( x-E\left( t-t_{0}\right) x_{0}\right) \right. \\ & \left. -\frac{1}{2}\operatorname{Tr}A\left( t\right) C\left( t, t_{0}\right) ^{-1}\right\} . \end{align} | (4.10) |
\begin{align} & \partial_{t}\Gamma\left( x, t;x_{0}, t_{0}\right) \\ & = -\frac{\partial_{t}\left( \det C\left( t, t_{0}\right) \right) }{\left( 4\pi\right) ^{N/2}2\det^{3/2}C\left( t, t_{0}\right) }e^{-\left( \frac{1}{4}\left( x-E\left( t-t_{0}\right) x_{0}\right) ^{T}C\left( t, t_{0}\right) ^{-1}\left( x-E\left( t-t_{0}\right) x_{0}\right) +\left( t-t_{0}\right) \operatorname*{Tr}B\right) }\\ & -\Gamma\left( x, t;x_{0}, t_{0}\right) \cdot\\ & \cdot\partial_{t}\left( \frac{1}{4}\left( x-E\left( t-t_{0}\right) x_{0}\right) ^{T}C\left( t, t_{0}\right) ^{-1}\left( x-E\left( t-t_{0}\right) x_{0}\right) +\left( t-t_{0}\right) \operatorname*{Tr} B\right) \\ = & -\Gamma\left( x, t;x_{0}, t_{0}\right) \left\{ \frac{\partial_{t}\left( \det C\left( t, t_{0}\right) \right) }{2\det C\left( t, t_{0}\right) }\right. \\ & +\left. \frac{1}{4}\partial_{t}\left( \left( x-E\left( t-t_{0}\right) x_{0}\right) ^{T}C\left( t, t_{0}\right) ^{-1}\left( x-E\left( t-t_{0}\right) x_{0}\right) \right) +\operatorname*{Tr}B\right\} . \end{align} | (4.11) |
To shorten notation, from now on, throughout this proof, we will write
\begin{align*} & C\text{ for }C\left( t, t_{0}\right) \text{, and}\\ & E\text{ for }E\left( t-t_{0}\right) . \end{align*} |
To compute the t -derivative appearing in (4.11) we start by writing
\begin{align} & \partial_{t}\left( \left( x-Ex_{0}\right) ^{T}C^{-1}\left( x-Ex_{0}\right) \right) \\ & = 2\left( -\partial_{t}Ex_{0}\right) ^{T}C^{-1}\left( x-Ex_{0}\right) \\ & +\left( x-Ex_{0}\right) ^{T}\partial_{t}\left( C^{-1}\right) \left( x-Ex_{0}\right) . \end{align} | (4.12) |
First, we note that
\begin{equation} \partial_{t}E = -B\exp\left( -\left( t-t_{0}\right) B\right) = -BE. \end{equation} | (4.13) |
Also, note that B commutes with E\left(t\right) and B^{T} commutes with E\left(t\right) ^{T} . Second, differentiating the identity C^{-1}C = I we get
\begin{equation} \partial_{t}\left( C^{-1}\right) = -C^{-1}\partial_{t}\left( C\right) C^{-1}. \end{equation} | (4.14) |
In turn, at least for a.e. t , we have
\begin{align*} \partial_{t}\left( C\left( t, t_{0}\right) \right) & = E\left( 0\right) A\left( t\right) E\left( 0\right) ^{T}+\int_{t_{0}}^{t}\partial _{t}E\left( t-\sigma\right) A\left( \sigma\right) E\left( t-\sigma \right) ^{T}d\sigma\\ & +\int_{t_{0}}^{t}E\left( t-\sigma\right) A\left( \sigma\right) \partial_{t}E\left( t-\sigma\right) ^{T}d\sigma\\ & = A\left( t\right) -BC-CB^{T}. \end{align*} |
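In passing, this formula for \partial_{t}C can be checked on a closed-form example. For the classical constant-coefficient Kolmogorov operator (our choice) with A = \operatorname{diag}\left(1, 0\right) and B the nilpotent matrix with single nonzero entry b_{21} = 1 , one has E\left(s\right) = \exp\left(-sB\right) = I-sB and a direct integration gives C\left(t, 0\right) with entries t, -t^{2}/2, t^{3}/3 ; the sketch below (ours) compares a finite-difference derivative with A-BC-CB^{T} .

```python
import numpy as np

# Classical Kolmogorov example: A = diag(1,0), B = [[0,0],[1,0]] (drift x_1 d/dx_2),
# for which E(s) = I - s*B and C(t,0) integrates in closed form.
A = np.diag([1.0, 0.0])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

def C(t):
    return np.array([[t, -t**2/2], [-t**2/2, t**3/3]])

t, h = 1.7, 1e-5
dC = (C(t + h) - C(t - h)) / (2*h)                 # finite-difference d/dt C(t,0)
assert np.allclose(dC, A - B @ C(t) - C(t) @ B.T, atol=1e-8)
```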
By (4.14) this gives
\begin{equation} \partial_{t}\left( C^{-1}\right) = -C^{-1}A\left( t\right) C^{-1} +C^{-1}B+B^{T}C^{-1}. \end{equation} | (4.15) |
Inserting (4.13) and (4.15) in (4.12) and then in (4.11) we have
\begin{align*} & \partial_{t}\left( \left( x-Ex_{0}\right) ^{T}C^{-1}\left( x-Ex_{0}\right) \right) \\ & = 2\left( BEx_{0}\right) ^{T}C^{-1}\left( x-Ex_{0}\right) \\ & +\left( x-Ex_{0}\right) ^{T}\left[ -C^{-1}A\left( t\right) C^{-1}+2B^{T}C^{-1}\right] \left( x-Ex_{0}\right) . \end{align*} |
\begin{align} \partial_{t}\Gamma & = -\Gamma\left\{ \frac{\partial_{t}\left( \det C\right) }{2\det C}+\operatorname*{Tr}B\right. +\frac{1}{4}\left[ 2\left( BEx_{0}\right) ^{T}C^{-1}\left( x-Ex_{0}\right) \right. \\ & +\left. \left. \left( x-Ex_{0}\right) ^{T}\left[ -C^{-1}A\left( t\right) C^{-1}+2B^{T}C^{-1}\right] \left( x-Ex_{0}\right) \right] \right\} \end{align} |
\begin{align} & = -\Gamma\left\{ \frac{\partial_{t}\left( \det C\right) }{2\det C}+\operatorname*{Tr}B\right. -\frac{1}{4}\left( x-Ex_{0}\right) ^{T} C^{-1}A\left( t\right) C^{-1}\left( x-Ex_{0}\right) \\ & +\left. \frac{1}{2}x^{T}B^{T}C^{-1}\left( x-Ex_{0}\right) \right\} . \end{align} | (4.16) |
Exploiting (4.10), (4.9) and (4.16) we can now compute \mathcal{L}\Gamma :
\begin{align*} & \sum\limits_{h, k = 1}^{q}a_{hk}\left( t\right) \partial_{x_{h}x_{k}}^{2} \Gamma+\left( \nabla\Gamma\right) ^{T}Bx-\partial_{t}\Gamma\\ & = \Gamma\left\{ \frac{1}{4}\left( x-Ex_{0}\right) ^{T}C^{-1}A\left( t\right) C^{-1}\left( x-Ex_{0}\right) -\frac{1}{2}\operatorname{Tr}A\left( t\right) C^{-1}\right. \\ & -\frac{1}{2}\left( x-Ex_{0}\right) ^{T}C^{-1}Bx+\frac{\partial _{t}\left( \det C\right) }{2\det C}+\operatorname*{Tr}B\\ & -\frac{1}{4}\left( x-Ex_{0}\right) ^{T}C^{-1}A\left( t\right) C^{-1}\left( x-Ex_{0}\right) +\left. \frac{1}{2}x^{T}B^{T}C^{-1}\left( x-Ex_{0}\right) \right\} \\ & = \Gamma\left\{ -\frac{1}{2}\operatorname{Tr}A\left( t\right) C^{-1}+\frac{\partial_{t}\left( \det C\right) }{2\det C}+\operatorname*{Tr} B\right\} . \end{align*} |
To conclude our proof we are left to check that, in the last expression, the quantity in braces identically vanishes for t > t_{0} . This, however, is not a straightforward computation, since the term \partial_{t}\left(\det C\right) is not easily explicitly computed. Let us state this fact as a separate ancillary result.
Proposition 4.6. For a.e. t > t_{0} we have
\frac{\partial_{t}\left( \det C\left( t, t_{0}\right) \right) }{2\det C\left( t, t_{0}\right) } = \frac{1}{2}\operatorname*{Tr}A\left( t\right) C\left( t, t_{0}\right) ^{-1}-\operatorname*{Tr}B. |
To prove this proposition we also need the following
Lemma 4.7. For every N\times N matrix A , and every x_{0} \in\mathbb{R}^{N} we have:
\begin{align} \int_{\mathbb{R}^{N}}e^{-\left\vert x\right\vert ^{2}}\left( x^{T}Ax\right) dx & = \frac{\pi^{N/2}}{2}\operatorname*{Tr}A\\ \int_{\mathbb{R}^{N}}e^{-\left\vert x\right\vert ^{2}}\left( x_{0} ^{T}Ax\right) dx & = 0. \end{align} | (4.17) |
Proof of Lemma 4.7. The second identity is obvious for symmetry reasons. As to the first one, letting A = \left(a_{ij}\right) _{i, j = 1}^{N},
\begin{align*} & \int_{\mathbb{R}^{N}}e^{-\left\vert x\right\vert ^{2}}\left( x^{T}Ax\right) dx\\ & = \sum\limits_{i = 1}^{N}\left\{ \sum\limits_{j = 1, ..., N, j\neq i}a_{ij}\int_{\mathbb{R}^{N} }e^{-\left\vert x\right\vert ^{2}}x_{i}x_{j}dx+a_{ii}\int_{\mathbb{R}^{N} }e^{-\left\vert x\right\vert ^{2}}x_{i}^{2}dx\right\} \\ & = \sum\limits_{i = 1}^{N}\left\{ 0+a_{ii}\left( \int_{\mathbb{R}^{N-1} }e^{-\left\vert w\right\vert ^{2}}dw\right) \left( \int_{\mathbb{R} }e^{-x_{i}^{2}}x_{i}^{2}dx_{i}\right) \right\} \\ & = \sum\limits_{i = 1}^{N}a_{ii}\pi^{\frac{N-1}{2}}\left( \int_{\mathbb{R}}e^{-t^{2} }t^{2}dt\right) = \pi^{\frac{N-1}{2}}\cdot\frac{\sqrt{\pi}}{2}\sum\limits_{i = 1} ^{N}a_{ii} = \frac{\pi^{N/2}}{2}\operatorname*{Tr}A \end{align*} |
where the integrals corresponding to the terms with i\neq j vanish for symmetry reasons.
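A quick quadrature check of (4.17) in dimension N = 2 , with a non-symmetric A to exercise the off-diagonal cancellation (the code is ours):

```python
import numpy as np

A = np.array([[1.3, 0.4], [-0.2, 2.0]])           # need not be symmetric
s = np.linspace(-6.0, 6.0, 1201)
d = s[1] - s[0]
w = np.full_like(s, d); w[0] = w[-1] = d/2        # trapezoid weights
X, Y = np.meshgrid(s, s, indexing="ij")
quad_form = X*(A[0, 0]*X + A[0, 1]*Y) + Y*(A[1, 0]*X + A[1, 1]*Y)
val = np.sum(np.exp(-(X**2 + Y**2)) * quad_form * np.outer(w, w))
assert abs(val - (np.pi/2)*np.trace(A)) < 1e-8    # pi^{N/2}/2 * Tr A with N = 2
```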
Proof of Proposition 4.6. Taking \frac{\partial}{\partial t} in the identity (4.8) we have, by (4.16), for almost every t > t_{0},
\begin{align*} & -e^{-\left( t-t_{0}\right) \operatorname*{Tr}B}\operatorname*{Tr} B = \int_{\mathbb{R}^{N}}\frac{\partial\Gamma}{\partial t}\left( x, t;x_{0} , t_{0}\right) dx\\ & = -\int_{\mathbb{R}^{N}}\Gamma\left( x, t;x_{0}, t_{0}\right) \left\{ \frac{\partial_{t}\left( \det C\right) }{2\det C}+\operatorname*{Tr} B+\frac{1}{2}x^{T}B^{T}C^{-1}\left( x-Ex_{0}\right) \right. \\ & \left. -\frac{1}{4}\left( x-Ex_{0}\right) ^{T}C^{-1}A\left( t\right) C^{-1}\left( x-Ex_{0}\right) \right\} dx \end{align*} |
\begin{align*} & = -\left\{ \frac{\partial_{t}\left( \det C\right) }{2\det C} +\operatorname*{Tr}B\right\} e^{-\left( t-t_{0}\right) \operatorname*{Tr} B}-\int_{\mathbb{R}^{N}}\Gamma\left( x, t;x_{0}, t_{0}\right) \left\{ \frac{1}{2}x^{T}B^{T}C^{-1}\left( x-Ex_{0}\right) \right. \\ & \left. -\frac{1}{4}\left( x-Ex_{0}\right) ^{T}C^{-1}A\left( t\right) C^{-1}\left( x-Ex_{0}\right) \right\} dx \end{align*} |
hence
\begin{align*} & \frac{\partial_{t}\left( \det C\right) }{2\det C}\cdot e^{-\left( t-t_{0}\right) \operatorname*{Tr}B} = -\frac{e^{-\left( t-t_{0}\right) \operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C}}\cdot\\ & \cdot\int_{\mathbb{R}^{N}}e^{-\frac{1}{4}\left( x-Ex_{0}\right) ^{T}C^{-1}\left( x-Ex_{0}\right) }\left\{ \frac{1}{2}x^{T}B^{T} C^{-1}\left( x-Ex_{0}\right) \right. \\ & \left. -\frac{1}{4}\left( x-Ex_{0}\right) ^{T}C^{-1}A\left( t\right) C^{-1}\left( x-Ex_{0}\right) \right\} dx \end{align*} |
and letting again x = Ex_{0}+2C^{1/2}y inside the integral
\begin{align*} & \frac{\partial_{t}\left( \det C\right) }{2\det C} = -\frac{1}{\pi^{N/2} }\int_{\mathbb{R}^{N}}e^{-\left\vert y\right\vert ^{2}}\cdot\\ & \cdot\left\{ \left( x_{0}^{T}E^{T}+2y^{T}C^{1/2}\right) B^{T} C^{-1/2}y-y^{T}C^{-1/2}A\left( t\right) C^{-1/2}y\right\} dy\\ & = -\frac{1}{\pi^{N/2}}\frac{\pi^{N/2}}{2}\left( 0+2\operatorname{Tr} C^{1/2}B^{T}C^{-1/2}-\operatorname{Tr}C^{-1/2}A\left( t\right) C^{-1/2}\right) \\ & = -\operatorname{Tr}C^{1/2}B^{T}C^{-1/2}+\frac{1}{2}\operatorname{Tr} C^{-1/2}A\left( t\right) C^{-1/2}, \end{align*} |
where we used Lemma 4.7. Finally, since similar matrices have the same trace,
\begin{align*} & -\operatorname{Tr}C^{1/2}B^{T}C^{-1/2}+\frac{1}{2}\operatorname{Tr} C^{-1/2}A\left( t\right) C^{-1/2}\\ & = -\operatorname{Tr}B+\frac{1}{2}\operatorname{Tr}A\left( t\right) C^{-1}, \end{align*} |
so we are done.
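Proposition 4.6 can also be verified directly on the constant-coefficient Kolmogorov example A = \operatorname{diag}\left(1, 0\right) , B with b_{21} = 1 (our choice), where C\left(t, 0\right) is explicit and \det C\left(t, 0\right) = t^{4}/12 ; both sides of the identity then equal 2/t .

```python
import numpy as np

A = np.diag([1.0, 0.0])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

def C(t):
    # C(t,0) for this example; det C = t^4/12
    return np.array([[t, -t**2/2], [-t**2/2, t**3/3]])

t, h = 1.5, 1e-5
lhs = (np.linalg.det(C(t + h)) - np.linalg.det(C(t - h))) / (2*h) / (2*np.linalg.det(C(t)))
rhs = 0.5*np.trace(A @ np.linalg.inv(C(t))) - np.trace(B)
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - 2.0/t) < 1e-6    # both sides equal 2/t here
```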
The proof of Proposition 4.6 also completes the proof of Theorem 4.4.
Remark 4.8. Since, by Theorem 4.4, we can write
\partial_{t}\Gamma\left( x, t, x_{0}, t_{0}\right) = \sum\limits_{i, j = 1}^{q} a_{ij}\left( t\right) \partial_{x_{i}x_{j}}^{2}\Gamma\left( x, t, x_{0} , t_{0}\right) +\sum\limits_{k, j = 1}^{N}b_{jk}x_{k}\partial_{x_{j}}\Gamma\left( x, t, x_{0}, t_{0}\right) , |
the function \partial_{t}\Gamma satisfies upper bounds analogous to those proved in Theorem 4.2 for \partial_{x_{i}x_{j}} ^{2}\Gamma .
Let us now show that \Gamma satisfies, with respect to the other variables, the transposed equation, that is:
Theorem 4.9. Letting
\mathcal{L}^{\ast}u = \sum\limits_{i, j = 1}^{q}a_{ij}\left( s\right) \partial _{y_{i}y_{j}}^{2}u-\sum\limits_{k, j = 1}^{N}b_{jk}y_{k}\partial_{y_{j}} u-u\operatorname*{Tr}B+\partial_{s}u |
we have, for every fixed \left(x, t\right)
\mathcal{L}^{\ast}\left( \Gamma\left( x, t;\cdot, \cdot\right) \right) \left( y, s\right) = 0 |
for a.e. s < t and every y .
Proof. We keep the notation used in the proof of Proposition 4.1:
\begin{align*} C^{\prime}\left( t, s\right) & = E\left( t-s\right) ^{T}C\left( t, s\right) ^{-1}E\left( t-s\right) \\ \Gamma\left( x, t;y, s\right) & = \frac{1}{\left( 4\pi\right) ^{N/2} \sqrt{\det C\left( t, s\right) }}e^{-\left( \frac{1}{4}\left( y-E\left( s-t\right) x\right) ^{T}C^{\prime}\left( t, s\right) \left( y-E\left( s-t\right) x\right) +\left( t-s\right) \operatorname*{Tr}B\right) }. \end{align*} |
Exploiting (4.3) and (4.4) we have, by a tedious computation which is analogous to that in the proof of Theorem 4.4,
\begin{align*} \mathcal{L}^{\ast}\Gamma\left( x, t;y, s\right) & = \frac{1}{2}\Gamma\left( x, t;y, s\right) \left\{ -\operatorname{Tr}A\left( s\right) C^{\prime }\left( t, s\right) -\frac{\partial_{s}\left( \det C\left( t, s\right) \right) }{\det C\left( t, s\right) }\right. \\ & +y^{T}B^{T}C^{\prime}\left( t, s\right) y-y^{T}B^{T}E\left( t-s\right) ^{T}C\left( t, s\right) ^{-1}x\\ & \left. +\left( BE\left( t-s\right) y\right) ^{T}C\left( t, s\right) ^{-1}\left( x-E\left( t-s\right) y\right) \right\} \\ & = \frac{1}{2}\Gamma\left( x, t;y, s\right) \left\{ -\operatorname{Tr} A\left( s\right) C^{\prime}\left( t, s\right) -\frac{\partial_{s}\left( \det C\left( t, s\right) \right) }{\det C\left( t, s\right) }\right\} . \end{align*} |
So we are done provided that:
Proposition 4.10. For a.e. s < t we have
\frac{\partial_{s}\left( \det C\left( t, s\right) \right) }{\det C\left( t, s\right) } = -\operatorname*{Tr}A\left( s\right) C^{\prime}\left( t, s\right) . |
Proof. Taking \frac{\partial}{\partial s} in the identity (4.8) we have, by (4.16), for almost every s < t,
\begin{align*} & e^{-\left( t-s\right) \operatorname*{Tr}B}\operatorname*{Tr} B = \int_{\mathbb{R}^{N}}\frac{\partial\Gamma}{\partial s}\left( x, t;x_{0} , s\right) dx\\ & = -\int_{\mathbb{R}^{N}}\Gamma\left( x, t;x_{0}, s\right) \cdot\\ & \cdot\left\{ \frac{\partial_{s}\left( \det C\right) }{2\det C}-\operatorname*{Tr}B-\frac{1}{2}\left( BE\left( t-s\right) x_{0}\right) ^{T}C\left( t, s\right) ^{-1}\left( x-E\left( t-s\right) x_{0}\right) \right. \\ & \left. +\frac{1}{4}\left( E\left( s-t\right) x-x_{0}\right) ^{T}C^{\prime}\left( t, s\right) A\left( s\right) C^{\prime}\left( t, s\right) \left( E\left( s-t\right) x-x_{0}\right) \right\} dx \end{align*} |
\begin{align*} & = -\left\{ \frac{\partial_{s}\left( \det C\right) }{2\det C} -\operatorname*{Tr}B\right\} e^{-\left( t-s\right) \operatorname*{Tr}B}\\ & -\int_{\mathbb{R}^{N}}\Gamma\left( x, t;x_{0}, s\right) \left\{ -\frac {1}{2}\left( BE\left( t-s\right) x_{0}\right) ^{T}C\left( t, s\right) ^{-1}\left( x-E\left( t-s\right) x_{0}\right) \right. \\ & \left. +\frac{1}{4}\left( E\left( s-t\right) x-x_{0}\right) ^{T}C^{\prime}\left( t, s\right) A\left( s\right) C^{\prime}\left( t, s\right) \left( E\left( s-t\right) x-x_{0}\right) \right\} dx \end{align*} |
hence
\begin{align*} & \frac{\partial_{s}\left( \det C\right) }{2\det C} = -\frac{1}{\left( 4\pi\right) ^{N/2}\sqrt{\det C}}\int_{\mathbb{R}^{N}}e^{-\frac{1}{4}\left( x-E\left( t-s\right) x_{0}\right) ^{T}C\left( t, s\right) ^{-1}\left( x-E\left( t-s\right) x_{0}\right) }\cdot\\ & \cdot\left\{ -\frac{1}{2}\left( BE\left( t-s\right) x_{0}\right) ^{T}C\left( t, s\right) ^{-1}\left( x-E\left( t-s\right) x_{0}\right) \right. \\ & \left. +\frac{1}{4}\left( E\left( s-t\right) x-x_{0}\right) ^{T}C^{\prime}\left( t, s\right) A\left( s\right) C^{\prime}\left( t, s\right) \left( E\left( s-t\right) x-x_{0}\right) \right\} dx \end{align*} |
and letting again x = E\left(t-s\right) x_{0}+2C^{1/2}\left(t, s\right) y inside the integral, applying Lemma 4.7 and (4.17), with some computation we get
\frac{\partial_{s}\left( \det C\right) }{\det C} = -\operatorname{Tr} C^{-1/2}\left( t, s\right) E\left( t-s\right) A\left( s\right) E\left( t-s\right) ^{T}C\left( t, s\right) ^{-1/2}. |
Since C^{-1/2}\left(t, s\right) E\left(t-s\right) A\left(s\right) E\left(t-s\right) ^{T}C\left(t, s\right) ^{-1/2} and A\left(s\right) C^{\prime}\left(t, s\right) are similar, they have the same trace, so the proof is concluded.
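The identity, in the form \frac{\partial_{s}\left(\det C\left(t, s\right)\right)}{\det C\left(t, s\right)} = -\operatorname{Tr}A\left(s\right)C^{\prime}\left(t, s\right) obtained at the end of the proof, can be tested with a genuinely time-dependent coefficient. The sketch below is ours ( a_{11}\left(\sigma\right) = 1+\tfrac{1}{2}\sin\sigma is an arbitrary choice); it approximates C\left(t, s\right) by a trapezoid rule and the s -derivative by central differences.

```python
import numpy as np

B = np.array([[0.0, 0.0], [1.0, 0.0]])            # nilpotent, so E(s) = exp(-sB) = I - sB

def E(s):
    return np.eye(2) - s*B

def A(sig):
    # time-dependent, bounded, uniformly positive a_11; entries beyond q = 1 vanish
    return np.diag([1.0 + 0.5*np.sin(sig), 0.0])

def C(t, s, n=2001):
    # trapezoid rule for C(t,s) = int_s^t E(t-sig) A(sig) E(t-sig)^T dsig
    grid = np.linspace(s, t, n)
    vals = np.array([E(t - x) @ A(x) @ E(t - x).T for x in grid])
    w = np.full(n, grid[1] - grid[0]); w[0] = w[-1] = w[0]/2
    return np.tensordot(w, vals, axes=1)

t, s, h = 2.0, 0.5, 1e-4
lhs = (np.linalg.det(C(t, s + h)) - np.linalg.det(C(t, s - h))) / (2*h) / np.linalg.det(C(t, s))
Cprime = E(t - s).T @ np.linalg.inv(C(t, s)) @ E(t - s)
rhs = -np.trace(A(s) @ Cprime)
assert abs(lhs - rhs) < 1e-3 * abs(rhs)
```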
In this section we will prove points (iv), (v), (vii) of Theorem 1.4.
We are going to show that the Cauchy problem can be solved, by means of our fundamental solution \Gamma . Just to simplify notation, let us now take t_{0} = 0 and let C\left(t\right) = C\left(t, 0\right) . We have the following:
Theorem 4.11. Let
\begin{align} u\left( x, t\right) & = \int_{\mathbb{R}^{N}}\Gamma\left( x, t;y, 0\right) f\left( y\right) dy\\ & = \frac{e^{-t\operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }}\int_{\mathbb{R}^{N}}e^{-\frac{1}{4}\left( x-E\left( t\right) y\right) ^{T}C\left( t\right) ^{-1}\left( x-E\left( t\right) y\right) }f\left( y\right) dy. \end{align} | (4.18) |
Then:
(a) if f\in L^{p}\left(\mathbb{R}^{N}\right) for some p\in\left[1, \infty\right] or f\in C_{b}^{0}\left(\mathbb{R}^{N}\right) (bounded continuous) then u solves the equation \mathcal{L}u = 0 in \mathbb{R} ^{N}\times\left(0, \infty\right) and u\left(\cdot, t\right) \in C^{\infty}\left(\mathbb{R}^{N}\right) for every fixed t > 0 .
(b) if f\in C^{0}\left(\mathbb{R}^{N}\right) and there exists C > 0 such that (1.15) holds, then there exists T > 0 such that u solves the equation \mathcal{L}u = 0 in \mathbb{R}^{N}\times\left(0, T\right) and u\left(\cdot, t\right) \in C^{\infty}\left(\mathbb{R}^{N}\right) for every fixed t\in\left(0, T\right) .
The initial condition f is attained in the following senses:
(i) For every p\in\lbrack1, +\infty), if f\in L^{p}\left(\mathbb{R} ^{N}\right) we have u\left(\cdot, t\right) \in L^{p}\left(\mathbb{R}^{N}\right) for every t > 0 , and
\left\Vert u\left( \cdot, t\right) -f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\rightarrow0 \ \mathit{\text{as}} \ t\rightarrow0^{+}. |
(ii) If f\in L^{\infty}\left(\mathbb{R}^{N}\right) and f is continuous at some point x_{0}\in\mathbb{R}^{N} then
u\left( x, t\right) \rightarrow f\left( x_{0}\right) \ \mathit{\text{as}} \ \left( x, t\right) \rightarrow\left( x_{0}, 0\right) . |
(iii) If f\in C_{\ast}^{0}\left(\mathbb{R}^{N}\right) (i.e., vanishing at infinity) then
\sup\limits_{x\in\mathbb{R}^{N}}\left\vert u\left( x, t\right) -f\left( x\right) \right\vert \rightarrow0 \ \mathit{\text{as}} \ t\rightarrow0^{+}. |
(iv) If f\in C^{0}\left(\mathbb{R}^{N}\right) and satisfies (1.15), then
u\left( x, t\right) \rightarrow f\left( x_{0}\right) \ \mathit{\text{as}} \ \left( x, t\right) \rightarrow\left( x_{0}, 0\right) . |
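Before the proof, the simplest instance of (4.18) gives a quick numerical sanity check. For the heat operator ( N = 1 , B = 0 , a_{11} = 1 ) one has C\left(t\right) = t and \Gamma\left(x, t;y, 0\right) = \left(4\pi t\right)^{-1/2}e^{-\left(x-y\right)^{2}/\left(4t\right)} , and for f\left(y\right) = \cos y the solution of u_{xx}-u_{t} = 0 is e^{-t}\cos x . The sketch below (ours) evaluates (4.18) by a trapezoid rule and checks both the formula and the attainment of the initial datum.

```python
import numpy as np

def u(x, t, n=4001, L=30.0):
    # u(x,t) = int Gamma(x,t;y,0) f(y) dy for the heat operator, with f(y) = cos(y)
    y = np.linspace(x - L, x + L, n)
    w = np.full(n, y[1] - y[0]); w[0] = w[-1] = w[0]/2   # trapezoid weights
    G = np.exp(-(x - y)**2 / (4*t)) / np.sqrt(4*np.pi*t)
    return np.sum(w * G * np.cos(y))

x = 0.7
assert abs(u(x, 0.3) - np.exp(-0.3)*np.cos(x)) < 1e-8   # matches exp(-t) cos(x)
assert abs(u(x, 1e-3) - np.cos(x)) < 1e-2               # initial datum attained as t -> 0+
```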
Proof. From Theorem 4.2, (i), we read that for \left(x, t\right) ranging in a compact subset of \mathbb{R}^{N}\times\left(0, +\infty\right) , and every y\in\mathbb{R}^{N} ,
\sum\limits_{\left\vert \alpha\right\vert \leq n}\left\vert \partial_{x}^{\alpha }\Gamma\left( x, t;y, 0\right) \right\vert \leq ce^{-c_{1}\left\vert y\right\vert ^{2}}\cdot\left\{ 1+\left\vert y\right\vert ^{n}\right\} |
for suitable constants c, c_{1} > 0 . Moreover, by Remark 4.8, \left\vert \partial_{t}\Gamma\right\vert also satisfies this bound (with n = 2 ). This implies that for every f\in L^{p}\left(\mathbb{R}^{N}\right) for some p\in\left[1, \infty\right] , (in particular for f\in C_{b}^{0}\left(\mathbb{R}^{N}\right) ) the integral defining u converges and \mathcal{L}u can be computed taking the derivatives inside the integral. Moreover, all the derivatives u_{x_{i} }, u_{x_{i}x_{j}} are continuous, while u_{t} is defined only almost everywhere, and locally essentially bounded. Then by Theorem 4.4 we have \mathcal{L}u\left(x, t\right) = 0 for a.e. t > 0 and every x\in\mathbb{R}^{N} . Also, the x -derivatives of every order can be actually taken under the integral sign, so that u\left(\cdot, t\right) \in C^{\infty}\left(\mathbb{R}^{N}\right) . This proves (a). Postponing for a moment the proof of (b), to show that u attains the initial condition (points (ⅰ)–(ⅲ)) let us perform, inside the integral in (4.18), the change of variables
\begin{align*} C\left( t\right) ^{-1/2}\left( x-E\left( t\right) y\right) & = 2z\\ y & = E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) \\ dy & = 2^{N}e^{t\operatorname*{Tr}B}\det C\left( t\right) ^{1/2}dz \end{align*} |
so that
u\left( x, t\right) = \frac{1}{\pi^{N/2}}\int_{\mathbb{R}^{N}}e^{-\left\vert z\right\vert ^{2}}f\left( E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) \right) dz |
and, since \int_{\mathbb{R}^{N}}\frac{e^{-\left\vert z\right\vert ^{2}}} {\pi^{N/2}}dz = 1,
\left\vert u\left( x, t\right) -f\left( x\right) \right\vert \leq \int_{\mathbb{R}^{N}}\frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2} }\left\vert f\left( E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) \right) -f\left( x\right) \right\vert dz. |
Let us now proceed separately in the three cases.
(i) By Minkowski's inequality for integrals we have
\left\Vert u\left( \cdot, t\right) -f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\leq\int_{\mathbb{R}^{N}}\frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( \cdot\right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }dz. |
Next,
\begin{align*} & \left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( \cdot\right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\\ & \leq\left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) \right\Vert _{L^{p}\left( \mathbb{R} ^{N}\right) }+\left\Vert f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\\ & = \left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }+\left\Vert f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\leq c\left\Vert f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) } \end{align*} |
for 0 < t < 1 , since
\left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }^{p} = \int_{\mathbb{R}^{N} }\left\vert f\left( E\left( -t\right) \left( x\right) \right) \right\vert ^{p}dx |
letting E\left(-t\right) x = y; x = E\left(t\right) y; dx = e^{-t\operatorname*{Tr}B}dy,
= e^{-t\operatorname*{Tr}B}\left\Vert f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }^{p}\leq e^{\left\vert \operatorname*{Tr}B\right\vert }\left\Vert f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }^{p}\text{ for }0 \lt t \lt 1. |
This means that for every t\in\left(0, 1\right) we have
\frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( \cdot\right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\leq c\left\Vert f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) } \frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\in L^{1}\left( \mathbb{R}^{N}\right) . |
Let us show that for a.e. fixed z\in\mathbb{R}^{N} we also have
\frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( \cdot\right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\rightarrow0\text{ as }t\rightarrow0^{+}, |
since this will imply the desired result by Lebesgue's theorem.
\begin{align*} & \left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( \cdot\right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\\ & \leq\left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( E\left( -t\right) \cdot\right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }+\left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) -f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }. \end{align*} |
Now:
\begin{align*} & \left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( E\left( -t\right) \cdot\right) \right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }^{p}\\ & = \int_{\mathbb{R}^{N}}\left\vert f\left( E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) \right) -f\left( E\left( -t\right) x\right) \right\vert ^{p}dx\\ & = e^{-t\operatorname*{Tr}B}\int_{\mathbb{R}^{N}}\left\vert f\left( y-2E\left( -t\right) C\left( t\right) ^{1/2}z\right) -f\left( y\right) \right\vert ^{p}dy\rightarrow0 \end{align*} |
for z fixed and t\rightarrow0^{+} , because 2E\left(-t\right) C\left(t\right) ^{1/2}z\rightarrow0 and the translation operator is continuous on L^{p}\left(\mathbb{R}^{N}\right).
It remains to show that
\left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) -f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\rightarrow0\text{ as }t\rightarrow0^{+}\text{, } |
which is not straightforward. For every fixed \varepsilon > 0 , let \phi be a compactly supported continuous function such that \left\Vert f-\phi\right\Vert _{p} < \varepsilon , then
\begin{align*} \left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) -f\right\Vert _{p} & \leq\left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) -\phi\left( E\left( -t\right) \left( \cdot\right) \right) \right\Vert _{p}\\ & +\left\Vert \phi\left( E\left( -t\right) \left( \cdot\right) \right) -\phi\right\Vert _{p}+\left\Vert f-\phi\right\Vert _{p} \end{align*} |
and
\left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) -\phi\left( E\left( -t\right) \left( \cdot\right) \right) \right\Vert _{p} = \left( e^{t\operatorname*{Tr}B}\right) ^{1/p}\left\Vert f-\phi \right\Vert _{p}\leq\left( e^{\left\vert \operatorname*{Tr}B\right\vert }\right) ^{1/p}\varepsilon |
for t\in\left(0, 1\right) . Let \operatorname*{supp}\phi\subset B_{R}\left(0\right) , then for every t\in\left(0, 1\right) we have \left\vert E\left(-t\right) \left(x\right) \right\vert \leq c\left\vert x\right\vert so that
\left\Vert \phi\left( E\left( -t\right) \left( \cdot\right) \right) -\phi\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }^{p} = \int_{\left\vert x\right\vert \lt CR}\left\vert \phi\left( E\left( -t\right) \left( x\right) \right) -\phi\left( x\right) \right\vert ^{p}dx. |
Since for every x\in\mathbb{R}^{N} , \phi\left(E\left(-t\right) \left(x\right) \right) \rightarrow\phi\left(x\right) as t\rightarrow0^{+} and
\left\vert \phi\left( E\left( -t\right) \left( x\right) \right) -\phi\left( x\right) \right\vert ^{p}\leq2\max\left\vert \phi\right\vert ^{p} |
which is integrable on B_{CR}\left(0\right) , by uniform continuity of \phi ,
\left\Vert \phi\left( E\left( -t\right) \left( \cdot\right) \right) -\phi\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\rightarrow0\text{ as }t\rightarrow0^{+}, |
hence for t small enough
\left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) -f\right\Vert _{p}\leq c\varepsilon, |
and we are done.
(ii) Let f\in L^{\infty}\left(\mathbb{R}^{N}\right) , and let f be continuous at some point x_{0}\in\mathbb{R}^{N} . Then
\left\vert u\left( x, t\right) -f\left( x_{0}\right) \right\vert \leq \int_{\mathbb{R}^{N}}\frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2} }\left\vert f\left( E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) \right) -f\left( x_{0}\right) \right\vert dz. |
Now, for fixed z\in\mathbb{R}^{N} and \left(x, t\right) \rightarrow \left(x_{0}, 0\right) we have
\begin{align*} E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) & \rightarrow x_{0}\\ f\left( E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) \right) & \rightarrow f\left( x_{0}\right) \end{align*} |
while
\frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\left\vert f\left( E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) \right) -f\left( x_{0}\right) \right\vert \leq2\left\Vert f\right\Vert _{L^{\infty }\left( \mathbb{R}^{N}\right) }\frac{e^{-\left\vert z\right\vert ^{2}}} {\pi^{N/2}}\in L^{1}\left( \mathbb{R}^{N}\right) |
hence by Lebesgue's theorem
\left\vert u\left( x, t\right) -f\left( x_{0}\right) \right\vert \rightarrow0. |
(iii) As in point (i) we have
\left\Vert u\left( \cdot, t\right) -f\right\Vert _{L^{\infty}\left( \mathbb{R}^{N}\right) }\leq\int_{\mathbb{R}^{N}}\frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( \cdot\right) \right\Vert _{L^{\infty}\left( \mathbb{R}^{N}\right) }dz |
and as in point (ii)
\begin{align*} & \frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( \cdot\right) \right\Vert _{L^{\infty}\left( \mathbb{R}^{N}\right) } \leq2\left\Vert f\right\Vert _{L^{\infty}\left( \mathbb{R}^{N}\right) }\frac{e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\in L^{1}\left( \mathbb{R}^{N}\right) . \end{align*} |
Let us show that for every fixed z we have
\left\Vert f\left( E\left( -t\right) \left( \cdot-2C\left( t\right) ^{1/2}z\right) \right) -f\left( \cdot\right) \right\Vert _{L^{\infty }\left( \mathbb{R}^{N}\right) }\rightarrow0\text{ as }t\rightarrow0^{+}, |
hence by Lebesgue's theorem we will conclude the desired assertion.
For every \varepsilon > 0 we can pick \phi\in C_{c}^{0}\left(\mathbb{R} ^{N}\right) such that \left\Vert f-\phi\right\Vert _{\infty} < \varepsilon , then
\begin{align*} \left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) -f\right\Vert _{\infty} & \leq\left\Vert f\left( E\left( -t\right) \left( \cdot\right) \right) -\phi\left( E\left( -t\right) \left( \cdot\right) \right) \right\Vert _{\infty}+\left\Vert \phi\left( E\left( -t\right) \left( \cdot\right) \right) -\phi\right\Vert _{\infty}+\left\Vert f-\phi\right\Vert _{\infty}\\ & \lt 2\varepsilon+\left\Vert \phi\left( E\left( -t\right) \left( \cdot\right) \right) -\phi\right\Vert _{\infty}. \end{align*} |
Since \phi\ is compactly supported, there exists R > 0 such that for every t\in\left(0, 1\right) we have \phi\left(E\left(-t\right) \left(x\right) \right) -\phi\left(x\right) \neq0 only if \left\vert x\right\vert < R . Moreover, for \left\vert x\right\vert < R ,
\left\vert E\left( -t\right) \left( x\right) -x\right\vert \leq\left\vert E\left( -t\right) -I\right\vert R.
Since \phi is uniformly continuous, for every \varepsilon > 0 there exists \delta > 0 such that for 0 < t < \delta we have
\left\vert \phi\left( E\left( -t\right) \left( x\right) \right) -\phi\left( x\right) \right\vert \lt \varepsilon |
whenever \left\vert x\right\vert < R. So we are done.
Let us now prove (b). To show that u is well defined, smooth in x , and satisfies the equation, for \left\vert x\right\vert \leq R let us write
\begin{align*} u\left( x, t\right) & = \int_{\left\vert y\right\vert \lt 2R}\Gamma\left( x, t;y, 0\right) f\left( y\right) dy+\int_{\left\vert y\right\vert \gt 2R} \Gamma\left( x, t;y, 0\right) f\left( y\right) dy\\ & \equiv I\left( x, t\right) +II\left( x, t\right) . \end{align*} |
Since f is bounded for \left\vert y\right\vert < 2R , reasoning as in the proof of point (a) we see that \mathcal{L}I\left(x, t\right) can be computed taking the derivatives under the integral sign, so that \mathcal{L}I\left(x, t\right) = 0. Moreover, the function x\mapsto I\left(x, t\right) is C^{\infty}\left(\mathbb{R}^{N}\right) .
To prove the analogous properties for II\left(x, t\right) we apply Theorem 4.2, (ii): there exist \delta\in\left(0, 1\right) and C, c > 0 such that for 0 < t < \delta and every x, y\in\mathbb{R}^{N} we have, for n = 0, 1, 2, ...
\begin{equation} \sum\limits_{\left\vert \alpha\right\vert \leq n}\left\vert \partial_{x}^{\alpha }\Gamma\left( x, t;y, 0\right) \right\vert \leq\frac{C}{t^{Q/2}} e^{-c\frac{\left\vert x-E\left( t\right) y\right\vert ^{2}}{t}}\cdot\left\{ t^{-\sigma_{N}}+t^{-n\sigma_{N}}\left\vert x-E\left( t\right) y\right\vert ^{n}\right\} .\nonumber \end{equation} |
Recall that \left\vert x\right\vert < R and \left\vert y\right\vert > 2R . For \delta small enough and t\in\left(\frac{\delta}{2}, \delta\right) we have
\begin{equation} \sum\limits_{\left\vert \alpha\right\vert \leq n}\left\vert \partial_{x}^{\alpha }\Gamma\left( x, t;y, 0\right) \right\vert \leq Ce^{-c\frac{\left\vert y\right\vert ^{2}}{t}}\cdot\left\{ 1+\left\vert y\right\vert ^{n}\right\} \nonumber \end{equation} |
with constants depending on \delta, n . Therefore, if \alpha is the constant appearing in the assumption (1.15),
\begin{align*} & \int_{\left\vert y\right\vert \gt 2R}\sum\limits_{\left\vert \alpha\right\vert \leq n}\left\vert \partial_{x}^{\alpha}\Gamma\left( x, t;y, 0\right) \right\vert \left\vert f\left( y\right) \right\vert dy\\ & \leq C\int_{\left\vert y\right\vert \gt 2R}e^{-c\frac{\left\vert y\right\vert ^{2}}{\delta}}\cdot\left\{ 1+\left\vert y\right\vert ^{n}\right\} e^{\alpha\left\vert y\right\vert ^{2}}\left\vert f\left( y\right) \right\vert e^{-\alpha\left\vert y\right\vert ^{2}}dy\\ & \leq C\sup\limits_{y\in\mathbb{R}^{N}}\left( e^{\left( -\frac{c}{\delta} +\alpha\right) \left\vert y\right\vert ^{2}}\left\{ 1+\left\vert y\right\vert ^{n}\right\} \right) \cdot\int_{\mathbb{R}^{N}}\left\vert f\left( y\right) \right\vert e^{-\alpha\left\vert y\right\vert ^{2}}dy \end{align*} |
which shows that for \delta small enough \mathcal{L}II\left(x, t\right) can be computed taking the derivatives under the integral sign, so that \mathcal{L}II\left(x, t\right) = 0. Moreover, the function x\mapsto II\left(x, t\right) is C^{\infty}\left(\mathbb{R}^{N}\right) . This proves (b).
(iv) For \left\vert x_{0}\right\vert \leq R let us write
u\left( x, t\right) = \int_{\left\vert y\right\vert \lt 2R}\Gamma\left( x, t;y, 0\right) f\left( y\right) dy+\int_{\left\vert y\right\vert \gt 2R} \Gamma\left( x, t;y, 0\right) f\left( y\right) dy\equiv I+II. |
Applying point (ii) to f\left(y\right) \chi_{B_{2R}\left(0\right) } we have
I = \int_{\left\vert y\right\vert \lt 2R}\Gamma\left( x, t;y, 0\right) f\left( y\right) dy\rightarrow f\left( x_{0}\right) |
as \left(x, t\right) \rightarrow\left(x_{0}, 0\right) . Let us show that II\rightarrow0. By (3.7) we have
\left\vert II\right\vert \leq\int_{\left\vert y\right\vert \gt 2R}\frac {1}{ct^{Q/2}}e^{-c\frac{\left\vert x-E\left( t\right) y\right\vert ^{2}}{t} }\left\vert f\left( y\right) \right\vert dy. |
For y fixed with \left\vert y\right\vert > 2R , hence \left\vert x_{0}-y\right\vert \neq0 , we have
\lim\limits_{\left( x, t\right) \rightarrow\left( x_{0}, 0\right) }\frac{1} {t^{Q/2}}e^{-c\frac{\left\vert x-E\left( t\right) y\right\vert ^{2}}{t} } = \lim\limits_{\left( x, t\right) \rightarrow\left( x_{0}, 0\right) }\frac {1}{t^{Q/2}}e^{-c\frac{\left\vert x_{0}-y\right\vert ^{2}}{t}} = 0. |
Since \left\vert y\right\vert > 2R and \left\vert x_{0}\right\vert < R , for x\rightarrow x_{0} we can assume \left\vert x\right\vert < \frac{3}{2}R , and for t small enough we have \left\vert x-E\left(t\right) y\right\vert \geq c\left\vert y\right\vert for some c > 0 , hence
\begin{align*} \frac{1}{ct^{Q/2}}e^{-c\frac{\left\vert x-E\left( t\right) y\right\vert ^{2}}{t}}\left\vert f\left( y\right) \right\vert \chi_{\left\{ \left\vert y\right\vert \gt 2R\right\} } & \leq\frac{1}{ct^{Q/2}}e^{-c_{1}\frac {\left\vert y\right\vert ^{2}}{t}}e^{\alpha\left\vert y\right\vert ^{2}} \chi_{\left\{ \left\vert y\right\vert \gt 2R\right\} }\left\vert f\left( y\right) \right\vert e^{-\alpha\left\vert y\right\vert ^{2}}\\ & \leq\frac{1}{ct^{Q/2}}e^{\left( \alpha-\frac{c_{1}}{t}\right) \left\vert y\right\vert ^{2}}\chi_{\left\{ \left\vert y\right\vert \gt 2R\right\} }\left\{ \left\vert f\left( y\right) \right\vert e^{-\alpha\left\vert y\right\vert ^{2}}\right\} \end{align*} |
and, for t small enough, this is
\begin{align*} & \leq\frac{1}{ct^{Q/2}}e^{-\frac{c_{1}}{2t}\left\vert y\right\vert ^{2}} \chi_{\left\{ \left\vert y\right\vert \gt 2R\right\} }\left\{ \left\vert f\left( y\right) \right\vert e^{-\alpha\left\vert y\right\vert ^{2}}\right\} \\ & \leq\frac{1}{ct^{Q/2}}e^{-\frac{2c_{1}}{t}R^{2}}\left\{ \left\vert f\left( y\right) \right\vert e^{-\alpha\left\vert y\right\vert ^{2} }\right\} \leq c\left\vert f\left( y\right) \right\vert e^{-\alpha \left\vert y\right\vert ^{2}}\in L^{1}\left( \mathbb{R}^{N}\right) . \end{align*} |
Hence by Lebesgue's theorem II\rightarrow0 as \left(x, t\right) \rightarrow\left(x_{0}, 0\right), and we are done.
Remark 4.12. If f is an unbounded continuous function satisfying (1.15), the solution of the Cauchy problem can blow up in finite time, even for the heat operator: the solution of
\left\{ \begin{array} [c]{l} u_{t}-u_{xx} = 0\text{ in }\mathbb{R}\times\left( 0, +\infty\right) \\ u\left( x, 0\right) = e^{x^{2}} \end{array} \right. |
is given by
u\left( x, t\right) = \frac{1}{\sqrt{4\pi t}}\int_{\mathbb{R}}e^{-\frac {\left( x-y\right) ^{2}}{4t}}e^{y^{2}}dy = \frac{e^{\frac{x^{2}}{1-4t}}} {\sqrt{1-4t}}\text{ for }0 \lt t \lt \frac{1}{4}, |
with u\left(x, t\right) \rightarrow+\infty for t\rightarrow\left(\frac{1}{4}\right) ^{-}.
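The closed form in the remark follows by completing the square in the exponent. A quick numerical check of the identity at one sample point \left(x, t\right) with 0 < t < \frac{1}{4} (values chosen arbitrarily for illustration):

```python
import numpy as np

# Check  u(x,t) = (4*pi*t)^(-1/2) * Integral_R exp(-(x-y)^2/(4t)) * exp(y^2) dy
#               = exp(x^2/(1-4t)) / sqrt(1-4t)   for 0 < t < 1/4,
# by quadrature at a sample point.  The wide range is needed because the
# integrand decays only like exp(-(1/(4t) - 1) * y^2).
x, t = 0.3, 0.2
y = np.linspace(-30.0, 30.0, 600001)
integrand = np.exp(-(x - y)**2 / (4 * t) + y**2) / np.sqrt(4 * np.pi * t)
u_quad = np.sum(integrand) * (y[1] - y[0])
u_exact = np.exp(x**2 / (1 - 4 * t)) / np.sqrt(1 - 4 * t)
```

The two values agree to high accuracy, and u_exact visibly blows up as t approaches 1/4.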
We next prove a uniqueness result for the Cauchy problem (1.9). In the following we consider solutions defined on a possibly bounded time interval [0, T).
Theorem 4.13 (Uniqueness). Let \mathcal{L} be an operator of the form (1.1) satisfying the assumptions (H1)–(H2), let T\in(0, +\infty] , and let either f\in C(\mathbb{R}^{N}) , or f\in L^{p}(\mathbb{R}^{N}) with 1\leq p < +\infty .
If u_{1} and u_{2} are two solutions to the same Cauchy problem
\begin{equation} \left\{ \begin{array} [c]{l} \mathcal{L}u = 0\ \mathit{\text{in}}\ \mathbb{R}^{N}\times\left( 0, T\right) , \\ u\left( \cdot, 0\right) = f, \end{array} \right. \end{equation} | (4.19) |
satisfying (1.16) for some C > 0 , then u_{1}\equiv u_{2} in \mathbb{R}^{N}\times\left(0, T\right) .
Proof. Because of the linearity of \mathcal{L} , it is enough to prove that if the function u: = u_{1}-u_{2} satisfies (4.19) with f = 0 and (1.16), then u(x, t) = 0 for every (x, t)\in\mathbb{R}^{N}\times\left(0, T\right) . We will prove that u = 0 in a suitably thin strip \mathbb{R}^{N}\times\left(0, t_{1}\right) , where t_{1} only depends on \mathcal{L} and C ; the assertion will then follow by iterating this argument.
Let t_{1}\in(0, T] be a fixed number that will be specified later. For every positive R we consider a function h_{R}\in C^{\infty}(\mathbb{R}^{N}) such that h_{R}\left(\xi\right) = 1 whenever \left\vert \xi\right\vert \leq R , h_{R}\left(\xi\right) = 0 for every \left\vert \xi\right\vert \geq R+1/2 , and 0\leq h_{R}\left(\xi\right) \leq1 . We also assume that all the first and second order derivatives of h_{R} are bounded by a constant that does not depend on R . We fix a point (y, s)\in\mathbb{R}^{N}\times\left(0, t_{1}\right) , and we let v denote the function
v\left( \xi, \tau\right) : = h_{R}\left( \xi\right) \Gamma\left( y, s;\xi, \tau\right) . |
For \varepsilon\in\left(0, t_{1}/2\right) we define the domain
Q_{R, \varepsilon}: = \left\{ (\xi, \tau)\in\mathbb{R}^{N}\times\left( 0, t_{1}\right) :\left\vert \xi\right\vert \lt R+1, \varepsilon \lt \tau \lt s-\varepsilon\right\} |
and we also let Q_{R} = Q_{R, 0} . Note that in Q_{R, \varepsilon} the function v\left(\xi, \tau\right) is smooth in \xi and Lipschitz continuous in \tau .
By (1.1) and (1.10) we can compute the following Green identity, with u and v as above.
\begin{align*} & v\mathcal{L}u-u\mathcal{L}^{\ast}v\\ & = \sum\limits_{i, j = 1}^{q}a_{ij}\left( t\right) \left( v\partial_{x_{i}x_{j}} ^{2}u-u\partial_{x_{i}x_{j}}^{2}v\right) +\sum\limits_{k, j = 1}^{N}b_{jk}x_{k}\left( v\partial_{x_{j}}u+u\partial_{x_{j}}v\right) \\ & -\left( v\partial_{t}u+u\partial_{t}v\right) +uv\operatorname*{Tr}B\\ & = \sum\limits_{i, j = 1}^{q}\partial_{x_{i}}\left( a_{ij}\left( t\right) \left( v\partial_{x_{j}}u-u\partial_{x_{j}}v\right) \right) +\sum\limits_{k, j = 1} ^{N}\partial_{x_{j}}\left( b_{jk}x_{k}uv\right) -\partial_{t}\left( uv\right) . \end{align*} |
We now integrate the above identity on Q_{R, \varepsilon} and apply the divergence theorem, noting that v, \partial_{x_{1}}v, \dots, \partial_{x_{N}}v vanish on the lateral part of the boundary of Q_{R, \varepsilon} , by the properties of h_{R} . Hence:
\begin{equation} \begin{split} & \int_{Q_{R, \varepsilon}}v(\xi, \tau)\mathcal{L}u(\xi, \tau)-u(\xi , \tau)\mathcal{L}^{\ast}v(\xi, \tau)d\xi\, d\tau\\ & = \int_{\mathbb{R}^{N}}u(\xi, \varepsilon)v(\xi, \varepsilon)d\xi -\int_{\mathbb{R}^{N}}u(\xi, s-\varepsilon)v(\xi, s-\varepsilon)d\xi. \end{split} \end{equation} | (4.20) |
Concerning the last integral, since the function y\mapsto h_{R}(y)u(y, s) is continuous and compactly supported, by Theorem 4.11, (iii) we have that
\int_{\mathbb{R}^{N}}u(\xi, s-\varepsilon)v(\xi, s-\varepsilon)d\xi = \int_{\mathbb{R}^{N}}u(\xi, s-\varepsilon)h_{R}(\xi)\Gamma(y, s;\xi , s-\varepsilon)d\xi\rightarrow h_{R}(y)u(y, s) |
as \varepsilon\rightarrow0^{+} . Moreover
\int_{\mathbb{R}^{N}}u(\xi, \varepsilon)v(\xi, \varepsilon)d\xi = \int _{\mathbb{R}^{N}}u(\xi, \varepsilon)h_{R}(\xi)\Gamma(y, s;\xi, \varepsilon )d\xi\rightarrow0, |
as \varepsilon\rightarrow0^{+} , since \Gamma is a bounded function whenever (\xi, \varepsilon)\in\mathbb{R}^{N}\times\left(0, s/2\right) , and u(\cdot, \varepsilon)h_{R}\rightarrow0 either uniformly, if the initial datum is assumed by continuity, or in the L^{p} norm. Using the fact that \mathcal{L}u = 0 and u(\cdot, 0) = 0 , we conclude that, for \left\vert y\right\vert < R , (4.20) gives
\begin{equation} u(y, s) = \int_{Q_{R}}u(\xi, \tau)\mathcal{L}^{\ast}v(\xi, \tau)d\xi\, d\tau. \end{equation} | (4.21) |
Since \mathcal{L}^{\ast}\Gamma(y, s; \xi, \tau) = 0 whenever \tau < s , we have
\begin{align*} & \mathcal{L}^{\ast}\left( h_{R}\Gamma\right) = \sum\limits_{i, j = 1}^{q} a_{ij}\left( \tau\right) \partial_{\xi_{i}\xi_{j}}^{2}\left( h_{R} \Gamma\right) -\sum\limits_{k, j = 1}^{N}b_{jk}\xi_{k}\partial_{\xi_{j}}\left( h_{R}\Gamma\right) -h_{R}\left( \Gamma\operatorname*{Tr}B+\partial_{\tau }\Gamma\right) \\ & = \Gamma\sum\limits_{i, j = 1}^{q}a_{ij}\left( \tau\right) \partial_{\xi_{i}\xi_{j} }^{2}h_{R}+2\sum\limits_{i, j = 1}^{q}a_{ij}\left( \tau\right) \left( \partial _{\xi_{i}}h_{R}\right) \left( \partial_{\xi_{j}}\Gamma\right) -\Gamma \sum\limits_{k, j = 1}^{N}b_{jk}\xi_{k}\partial_{\xi_{j}}h_{R} \end{align*} |
therefore the identity (4.21) yields, since \partial _{\xi_{i}}h_{R} = 0 for \left\vert \xi\right\vert \leq R ,
\begin{equation} \begin{split} u(y, s) = & \int_{Q_{R}\backslash Q_{R-1}}u(\xi, \tau)\left\{ \sum\limits_{i, j = 1} ^{q}a_{i, j}(\tau)\cdot\right. \\ \cdot & \left[ 2\partial_{\xi_{i}}h_{R}(\xi)\partial_{\xi_{j}}\Gamma\left( y, s;\xi, \tau\right) +\Gamma(y, s;\xi, \tau)\partial_{\xi_{i}\xi_{j}}h_{R} (\xi)\right] \\ & \left. -\sum\limits_{k, j = 1}^{N}b_{jk}\xi_{k}\partial_{\xi_{j}}h_{R}(\xi )\Gamma(y, s;\xi, \tau)\right\} d\xi\, d\tau. \end{split} \end{equation} | (4.22) |
We claim that (4.22) implies
\begin{equation} \left\vert u(y, s)\right\vert \leq\int_{Q_{R}\backslash Q_{R-1}}C_{1} |u(\xi, \tau)|e^{-C|\xi|^{2}}d\xi\, d\tau, \end{equation} | (4.23) |
for some positive constant C_{1} only depending on the operator \mathcal{L} and on the uniform bound of the derivatives of h_{R} , provided that t_{1} is sufficiently small. Our assertion then follows by letting R\rightarrow+\infty .
So we are left to prove (4.23). By Proposition 3.4 we know that, for suitable constants \delta\in\left(0, 1\right) , c_{1}, c_{2} > 0 , for 0 < s-\tau\leq\delta and every y, \xi\in\mathbb{R}^{N} we have:
\begin{equation} \Gamma\left( y, s;\xi, \tau\right) \leq\frac{c_{1}}{\left( s-\tau\right) ^{Q/2}}e^{-c_{2}\frac{\left\vert y-E\left( s-\tau\right) \xi\right\vert ^{2}}{s-\tau}}. \end{equation} | (4.24) |
Moreover, from the computation in the proof of Theorem 4.9 we read that
\nabla_{\xi}\Gamma\left( y, s;\xi, \tau\right) = -\frac{1}{2}\Gamma\left( y, s;\xi, \tau\right) C^{\prime}\left( s, \tau\right) \left( \xi-E\left( \tau-s\right) y\right) |
where
C^{\prime}\left( s, \tau\right) = E\left( s-\tau\right) ^{T}C\left( s, \tau\right) ^{-1}E\left( s-\tau\right) . |
Hence
\nabla_{\xi}\Gamma\left( y, s;\xi, \tau\right) = \frac{1}{2}\Gamma\left( y, s;\xi, \tau\right) E\left( s-\tau\right) ^{T}C\left( s, \tau\right) ^{-1}\left( y-E\left( s-\tau\right) \xi\right) . |
By (3.3) we have the following inequality between matrix norms
\left\Vert C\left( s, \tau\right) ^{-1}\right\Vert \leq c\left\Vert C_{0}\left( s-\tau\right) ^{-1}\right\Vert |
and, for 0 < s-\tau\leq\delta
\leq c\left\Vert C_{0}^{\ast}\left( s-\tau\right) ^{-1}\right\Vert \leq c\left\Vert D_{0}\left( s-\tau\right) \right\Vert ^{-1}
hence
\begin{align} \left\vert \nabla_{\xi}\Gamma\left( y, s;\xi, \tau\right) \right\vert & \leq c\Gamma\left( y, s;\xi, \tau\right) \left\Vert D_{0}\left( s-\tau\right) \right\Vert ^{-1}\left\vert y-E\left( s-\tau\right) \xi\right\vert \\ & \leq\frac{c_{1}}{\left( s-\tau\right) ^{\frac{Q}{2}+\sigma_{N}}} e^{-c_{2}\frac{\left\vert y-E\left( s-\tau\right) \xi\right\vert ^{2} }{s-\tau}}\left\vert y-E\left( s-\tau\right) \xi\right\vert . \end{align} | (4.25) |
Now, in the integral in (4.22) we have R < \left\vert \xi\right\vert < R+1 . Then for \left\vert y\right\vert < R/2 and 0 < s-\tau\leq\delta < 1 we have
\begin{align*} \frac{\left\vert \xi\right\vert }{2} & \leq\left\vert \xi\right\vert -\left\vert y\right\vert \leq\left\vert y-\xi\right\vert \leq\left\vert y-E\left( s-\tau\right) \xi\right\vert +\left\vert E\left( s-\tau\right) \xi-\xi\right\vert \\ & \leq\left\vert y-E\left( s-\tau\right) \xi\right\vert +\left\Vert E\left( s-\tau\right) -I\right\Vert \left\vert \xi\right\vert \leq\left\vert y-E\left( s-\tau\right) \xi\right\vert +\frac{\left\vert \xi\right\vert } {4}. \end{align*} |
Hence
\left\vert y-E\left( s-\tau\right) \xi\right\vert \geq\frac{\left\vert \xi\right\vert }{4}. |
Moreover
\left\vert y-E\left( s-\tau\right) \xi\right\vert \leq\left\vert y\right\vert +c\left\vert \xi\right\vert \leq c_{1}\left\vert \xi\right\vert . |
Hence (4.24)–(4.25) give
\begin{align*} \Gamma\left( y, s, \xi, \tau\right) & \leq\frac{c_{1}}{\left( s-\tau\right) ^{Q/2}}e^{-c_{3}\frac{\left\vert \xi\right\vert ^{2}}{s-\tau}}\\ \left\vert \partial_{\xi_{j}}\Gamma\left( y, s;\xi, \tau\right) \right\vert & \leq\frac{c_{1}}{\left( s-\tau\right) ^{\frac{Q}{2}+\sigma_{N}} }\left\vert \xi\right\vert e^{-c_{3}\frac{\left\vert \xi\right\vert ^{2} }{s-\tau}}. \end{align*} |
Therefore (4.22) gives
\left\vert u(y, s)\right\vert \leq\int_{Q_{R}\backslash Q_{R-1}}|u(\xi , \tau)|\left\{ \frac{c_{1}}{\left( s-\tau\right) ^{\frac{Q}{2}+\sigma_{N}} }\left\vert \xi\right\vert e^{-c_{3}\frac{\left\vert \xi\right\vert ^{2} }{s-\tau}}\right\} d\xi\, d\tau. |
We can assume R > 1 . Then, for 0 < s-\tau < 1 and every \xi\in\mathbb{R}^{N} with \left\vert \xi\right\vert > 1 , we can write
\begin{align*} & \frac{c_{1}}{\left( s-\tau\right) ^{\frac{Q}{2}+\sigma_{N}}}\left\vert \xi\right\vert e^{-c_{3}\frac{\left\vert \xi\right\vert ^{2}}{s-\tau}} = \frac{c_{1}}{\left( s-\tau\right) ^{\frac{Q}{2}+\sigma_{N}}}\left\vert \xi\right\vert e^{-c_{3}\frac{1}{s-\tau}}e^{-c_{3}\frac{\left\vert \xi\right\vert ^{2}-1}{s-\tau}}\\ & \leq c\left\vert \xi\right\vert e^{-c_{3}\frac{\left\vert \xi\right\vert ^{2}-1}{s-\tau}}\leq c\left\vert \xi\right\vert e^{-c_{3}\left( \left\vert \xi\right\vert ^{2}-1\right) } = c_{4}\left\vert \xi\right\vert e^{-c_{3} \left\vert \xi\right\vert ^{2}}\leq c_{5}e^{-c_{6}\left\vert \xi\right\vert ^{2}}. \end{align*} |
This proves the claim (4.23), so we are done.
The link between the existence result of Theorem 4.11 and the uniqueness result of Theorem 4.13 is completed by the following
Proposition 4.14. (a) Let f be a bounded continuous function on \mathbb{R}^{N} , or a function belonging to L^{p}\left(\mathbb{R} ^{N}\right) for some p\in\lbrack1, \infty) . Then the function
u\left( x, t\right) = \int_{\mathbb{R}^{N}}\Gamma\left( x, t;y, 0\right) f\left( y\right) dy |
satisfies the condition (1.16) for all fixed constants T, C > 0.
(b) If f\in C^{0}\left(\mathbb{R}^{N}\right) satisfies the condition (1.15) for some constant \alpha > 0 then the function u satisfies (1.16) for some T, C > 0.
This means that in the class of functions satisfying (1.16) there exists one and only one solution to the Cauchy problem, under any of the above assumptions on the initial datum f .
Proof. (a) If f is bounded continuous we simply have
\left\vert u\left( x, t\right) \right\vert \leq\left\Vert f\right\Vert _{C_{b}^{0}\left( \mathbb{R}^{N}\right) }\int_{\mathbb{R}^{N}}\Gamma\left( x, t;y, 0\right) dy = \left\Vert f\right\Vert _{C_{b}^{0}\left( \mathbb{R} ^{N}\right) } |
by Proposition 4.5. Hence (1.16) holds for every fixed C, T > 0.
Let now f\in L^{p}\left(\mathbb{R}^{N}\right) for some p\in \lbrack1, \infty) . Let us write
\begin{align*} u\left( x, t\right) & = \frac{e^{-t\operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }}\int_{\mathbb{R}^{N} }e^{-\frac{1}{4}\left( x-E\left( t\right) y\right) ^{T}C\left( t\right) ^{-1}\left( x-E\left( t\right) y\right) }f\left( y\right) dy\\ & = \frac{e^{-t\operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }}\int_{\mathbb{R}^{N}}e^{-\frac{1}{4}\left( E\left( -t\right) x-y\right) ^{T}C^{\prime}\left( t\right) \left( E\left( -t\right) x-y\right) }f\left( y\right) dy\\ & = \frac{e^{-t\operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }}\left( k_{t}\ast f\right) \left( E\left( -t\right) x\right) \end{align*} |
having set
k_{t}\left( x\right) = e^{-\frac{1}{4}x^{T}C^{\prime}\left( t\right) x}. |
Then
\begin{align} & \int_{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert u\left( x, t\right) \right\vert e^{-C\left\vert x\right\vert ^{2}}dx\right) dt\\ & \leq\int_{0}^{T}\frac{e^{-t\operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }}\left( \int_{\mathbb{R}^{N}}\left\vert \left( k_{t}\ast f\right) \left( E\left( -t\right) x\right) \right\vert e^{-C\left\vert x\right\vert ^{2}}dx\right) dt. \end{align} | (4.26) |
Applying Hölder's inequality with q^{-1}+p^{-1} = 1 and Young's inequality we get:
\begin{align} & \int_{\mathbb{R}^{N}}\left\vert \left( k_{t}\ast f\right) \left( E\left( -t\right) x\right) \right\vert e^{-C\left\vert x\right\vert ^{2} }dx\\ E\left( -t\right) x & = y;x = E\left( t\right) y;dx = e^{-t\operatorname{Tr} B}dy\\ & = e^{-t\operatorname{Tr}B}\int_{\mathbb{R}^{N}}\left\vert \left( k_{t}\ast f\right) \left( y\right) \right\vert e^{-C\left\vert E\left( t\right) y\right\vert ^{2}}dy\\ & \leq e^{-t\operatorname{Tr}B}\left\Vert k_{t}\ast f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\left\Vert e^{-C\left\vert E\left( t\right) y\right\vert ^{2}}\right\Vert _{L^{q}\left( \mathbb{R}^{N}\right) }\\ & \leq c\left( q, T\right) e^{-t\operatorname{Tr}B}\left\Vert f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }\left\Vert k_{t}\right\Vert _{L^{1}\left( \mathbb{R}^{N}\right) } \end{align} | (4.27) |
and inserting (4.27) into (4.26) we have
\begin{align*} & \int_{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert u\left( x, t\right) \right\vert e^{-C\left\vert x\right\vert ^{2}}dx\right) dt\\ & \leq\int_{0}^{T}\frac{e^{-t\operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }}c\left( q, T\right) e^{-t\operatorname{Tr}B}\left\Vert f\right\Vert _{L^{p}\left( \mathbb{R} ^{N}\right) }\int_{\mathbb{R}^{N}}e^{-\frac{1}{4}x^{T}C^{\prime}\left( t\right) x}dxdt\\ & = c\left( q, T\right) \left\Vert f\right\Vert _{L^{p}\left( \mathbb{R} ^{N}\right) }\int_{0}^{T}\int_{\mathbb{R}^{N}}\frac{e^{-t\operatorname*{Tr} B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }} e^{-t\operatorname{Tr}B}e^{-\frac{1}{4}x^{T}C^{\prime}\left( t\right) x}dxdt\\ x & = E\left( -t\right) w;dx = e^{t\operatorname{Tr}B}dw\\ & = c\left( q, T\right) \left\Vert f\right\Vert _{L^{p}\left( \mathbb{R} ^{N}\right) }\int_{0}^{T}\int_{\mathbb{R}^{N}}\frac{e^{-t\operatorname*{Tr} B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }}e^{-\frac{1} {4}\left( E\left( -t\right) w\right) ^{T}C^{\prime}\left( t\right) E\left( -t\right) w}dwdt\\ & = c\left( q, T\right) \left\Vert f\right\Vert _{L^{p}\left( \mathbb{R} ^{N}\right) }\int_{0}^{T}\int_{\mathbb{R}^{N}}\Gamma\left( w, t;0, 0\right) dwdt\\ & = c\left( q, T\right) \left\Vert f\right\Vert _{L^{p}\left( \mathbb{R} ^{N}\right) }\int_{0}^{T}e^{-t\operatorname*{Tr}B}dt\leq c\left( q, T\right) \left\Vert f\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) } \end{align*} |
by (4.8). Hence (1.16) still holds for every fixed C, T > 0.
(b) Assume that
\int_{\mathbb{R}^{N}}\left\vert f\left( y\right) \right\vert e^{-\alpha \left\vert y\right\vert ^{2}}dy \lt \infty |
for some \alpha > 0 and, for T\in\left(0, 1\right), \beta > 0 to be chosen later, let us bound:
\begin{align*} & \int_{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert u\left( x, t\right) \right\vert e^{-\beta\left\vert x\right\vert ^{2}}dx\right) dt\\ & \leq\int_{0}^{T}\left( \int_{\mathbb{R}^{N}}\left( \frac {e^{-t\operatorname*{Tr}B}}{\left( 4\pi\right) ^{N/2}\sqrt{\det C\left( t\right) }}\int_{\mathbb{R}^{N}}e^{-\frac{1}{4}\left( x-E\left( t\right) y\right) ^{T}C\left( t\right) ^{-1}\left( x-E\left( t\right) y\right) }\left\vert f\left( y\right) \right\vert dy\right) e^{-\beta\left\vert x\right\vert ^{2}}dx\right) dt \end{align*} |
y = E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) ;dy = e^{t\operatorname*{Tr}B}2^{N}\det C\left( t\right) ^{1/2}dz |
= \int_{0}^{T}\int_{\mathbb{R}^{N}}\frac{e^{-\left\vert z\right\vert ^{2}}} {\pi^{N/2}}\left( \int_{\mathbb{R}^{N}}\left\vert f\left( E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) \right) \right\vert e^{-\beta\left\vert x\right\vert ^{2}}dx\right) dzdt |
E\left( -t\right) \left( x-2C\left( t\right) ^{1/2}z\right) = w;e^{t\operatorname{Tr}B}dx = dw |
\begin{align*} & = \int_{0}^{T}\int_{\mathbb{R}^{N}}\frac{e^{-\left\vert z\right\vert ^{2}} }{\pi^{N/2}}\left( \int_{\mathbb{R}^{N}}e^{-t\operatorname{Tr}B}\left\vert f\left( w\right) \right\vert e^{-\beta\left\vert E\left( t\right) w+2C\left( t\right) ^{1/2}z\right\vert ^{2}}dw\right) dzdt\\ & = \int_{0}^{T}e^{-t\operatorname{Tr}B}\int_{\mathbb{R}^{N}}\frac {e^{-\left\vert z\right\vert ^{2}}}{\pi^{N/2}}\cdot\\ & \cdot\left( \int_{\mathbb{R}^{N}}\left\vert f\left( w\right) \right\vert e^{-\beta\left( \left\vert E\left( t\right) w\right\vert ^{2}+4\left\vert C\left( t\right) ^{1/2}z\right\vert ^{2}+2\left( E\left( t\right) w\right) ^{T}C\left( t\right) ^{1/2}z\right) }dw\right) dzdt\\ & = \int_{0}^{T}\frac{e^{-t\operatorname{Tr}B}}{\pi^{N/2}}\left( \int_{\mathbb{R}^{N}}\left\vert f\left( w\right) \right\vert e^{-\beta \left\vert E\left( t\right) w\right\vert ^{2}}\right. \cdot\\ & \left. \cdot\left( \int_{\mathbb{R}^{N}}e^{-\left\vert z\right\vert ^{2} }e^{-4\beta\left\vert C\left( t\right) ^{1/2}z\right\vert ^{2}} e^{-2\beta\left( E\left( t\right) w\right) ^{T}C\left( t\right) ^{1/2} z}dz\right) dw\right) dt. \end{align*} |
Next, for 0 < t < 1 we have, since \left\Vert C\left(t\right) \right\Vert \leq ct,
\left\vert -2\beta\left( E\left( t\right) w\right) ^{T}C\left( t\right) ^{1/2}z\right\vert \leq c_{1}\beta\left\vert w\right\vert \sqrt{t}\left\vert z\right\vert |
so that
\begin{align*} & \int_{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert u\left( x, t\right) \right\vert e^{-\beta\left\vert x\right\vert ^{2}}dx\right) dt\\ & \leq\frac{e^{\left\vert \operatorname{Tr}B\right\vert }}{\pi^{N/2}}\int _{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert f\left( w\right) \right\vert e^{-\beta\left\vert E\left( t\right) w\right\vert ^{2}}\left( \int_{\mathbb{R}^{N}}e^{-\left\vert z\right\vert ^{2}}e^{c_{1}\beta\left\vert w\right\vert \sqrt{t}\left\vert z\right\vert }dz\right) dw\right) dt. \end{align*} |
Next,
\begin{align*} \int_{\mathbb{R}^{N}}e^{-\left\vert z\right\vert ^{2}}e^{c_{1}\beta\left\vert w\right\vert \sqrt{t}\left\vert z\right\vert }dz & = c_{N}\int_{0}^{+\infty }e^{-\rho^{2}+c_{1}\beta\left\vert w\right\vert \sqrt{t}\rho}\rho^{N-1}d\rho\\ & \leq c\int_{0}^{+\infty}e^{-\frac{\rho^{2}}{2}+c_{1}\beta\left\vert w\right\vert \sqrt{t}\rho} d\rho = ce^{c_{2}\beta^{2}t\left\vert w\right\vert ^{2}} \end{align*}
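The last Gaussian estimate follows by completing the square: with b = c_{1}\beta\left\vert w\right\vert \sqrt{t} , -\frac{\rho^{2}}{2}+b\rho = -\frac{\left(\rho-b\right)^{2}}{2}+\frac{b^{2}}{2} , hence \int_{0}^{+\infty}e^{-\rho^{2}/2+b\rho}d\rho\leq\sqrt{2\pi}\,e^{b^{2}/2} . A numerical spot-check (values of b chosen arbitrarily for illustration):

```python
import numpy as np

# Check  Integral_0^inf exp(-rho^2/2 + b*rho) d rho <= sqrt(2*pi) * exp(b^2/2)
# for several b, via a Riemann sum; the ratio to exp(b^2/2) should stay below
# sqrt(2*pi) and increase toward it as b grows.
ratios = []
for b in (0.0, 1.0, 2.0, 3.0):
    rho = np.linspace(0.0, b + 30.0, 400001)
    val = np.sum(np.exp(-rho**2 / 2 + b * rho)) * (rho[1] - rho[0])
    ratios.append(val / np.exp(b**2 / 2))
```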
and
\int_{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert u\left( x, t\right) \right\vert e^{-\beta\left\vert x\right\vert ^{2}}dx\right) dt\leq c\int _{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert f\left( w\right) \right\vert e^{-\beta\left\vert E\left( t\right) w\right\vert ^{2}}e^{c_{2}\beta ^{2}t\left\vert w\right\vert ^{2}}dw\right) dt. |
Since E\left(t\right) is invertible and E\left(0\right) = I , for T small enough and t\in(0, T) we have \left\vert E\left(t\right) w\right\vert \geq\frac{1}{2}\left\vert w\right\vert , so that
e^{-\beta\left\vert E\left( t\right) w\right\vert ^{2}}e^{c_{2}\beta ^{2}t\left\vert w\right\vert ^{2}}\leq e^{-\left\vert w\right\vert ^{2} \beta\left( \frac{1}{2}-c_{2}t\beta\right) }. |
We now fix \beta = 4\alpha and then fix T small enough such that \frac {1}{2}-c_{2}T\beta\geq\frac{1}{4}, so that for t\in\left(0, T\right) we have
e^{-\left\vert w\right\vert ^{2}\beta\left( \frac{1}{2}-c_{2}t\beta\right) }\leq e^{-\left\vert w\right\vert ^{2}\beta\left( \frac{1}{2}-c_{2} T\beta\right) }\leq e^{-\alpha\left\vert w\right\vert ^{2}} |
and
\int_{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert u\left( x, t\right) \right\vert e^{-\beta\left\vert x\right\vert ^{2}}dx\right) dt\leq c\int _{0}^{T}\left( \int_{\mathbb{R}^{N}}\left\vert f\left( w\right) \right\vert e^{-\alpha\left\vert w\right\vert ^{2}}dw\right) dt \lt \infty. |
So we are done.
The previous uniqueness property for the Cauchy problem also implies the following replication property (a Chapman-Kolmogorov identity) for the heat kernel:
Corollary 4.15. For every x, y\in\mathbb{R}^{N} and s < \tau < t we have
\Gamma\left( x, t;y, s\right) = \int_{\mathbb{R}^{N}}\Gamma\left( x, t;z, \tau\right) \Gamma\left( z, \tau;y, s\right) dz. |
Proof. Let
\begin{align*} u\left( x, t\right) & = \int_{\mathbb{R}^{N}}\Gamma\left( x, t;z, \tau \right) \Gamma\left( z, \tau;y, s\right) dz\\ f\left( z\right) & = \Gamma\left( z, \tau;y, s\right) \end{align*} |
for y\in\mathbb{R}^{N} fixed, \tau > s fixed. By Theorem 1.4, (i), f\in C_{\ast}^{0}\left(\mathbb{R}^{N}\right) . Hence by Theorem 4.11, point (iii), u solves the Cauchy problem
\left\{ \begin{array} [c]{l} \mathcal{L}u\left( x, t\right) = 0\text{ for }t \gt \tau\\ u\left( x, \tau\right) = \Gamma\left( x, \tau;y, s\right) \end{array} \right. |
where the initial datum is assumed continuously, uniformly as t\rightarrow \tau . Since v\left(x, t\right) = \Gamma\left(x, t; y, s\right) solves the same Cauchy problem, by Theorem 4.13 the assertion follows.
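For the one-dimensional heat operator ( N = q = 1 , B = 0 ) the kernel reduces to \Gamma\left(x, t;y, s\right) = \left(4\pi\left(t-s\right)\right)^{-1/2}e^{-\left(x-y\right)^{2}/4\left(t-s\right)} , and the replication identity of Corollary 4.15 can be verified numerically. The sketch below (an illustration only, with arbitrarily chosen sample points) checks it by quadrature.

```python
import numpy as np

# Chapman-Kolmogorov / replication identity for the 1-D heat kernel:
# Gamma(x,t;y,s) = Integral_R Gamma(x,t;z,tau) * Gamma(z,tau;y,s) dz,  s < tau < t.
def heat_kernel(x, t, y, s):
    return np.exp(-(x - y)**2 / (4 * (t - s))) / np.sqrt(4 * np.pi * (t - s))

x, y = 1.0, -0.5
s, tau, t = 0.0, 0.3, 1.0
z = np.linspace(-20.0, 20.0, 400001)
lhs = heat_kernel(x, t, y, s)
rhs = np.sum(heat_kernel(x, t, z, tau) * heat_kernel(z, tau, y, s)) * (z[1] - z[0])
```

Both sides agree to quadrature accuracy, mirroring the semigroup structure established in the corollary.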
The authors thank the anonymous referee for her/his suggestions that improved this manuscript. This research was partially supported by the grant of Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). The second author acknowledges financial support from the FAR2019 project: "Risk assessment in the EU: new indices based on machine learning methods" funded by UNIMORE.
The authors declare no conflict of interest.
[1] | Anceschi F, Polidoro S (2020) A survey on the classical theory for Kolmogorov equation. Le Matematiche 75: 221-258. |
[2] | Bramanti M (2014) An Invitation to Hypoelliptic Operators and Hörmander's Vector Fields, Cham: Springer. |
[3] | Da Prato G (1998) Introduction to Stochastic Differential Equations, 2Eds., Scuola Normale Superiore, Pisa. |
[4] |
Delarue F, Menozzi S (2010) Density estimates for a random noise propagating through a chain of differential equations. J Funct Anal 259: 1577-1630. doi: 10.1016/j.jfa.2010.05.002
![]() |
[5] |
Di Francesco M, Pascucci S (2005) On a class of degenerate parabolic equations of Kolmogorov type. Appl Math Res Express 2005: 77-116. doi: 10.1155/AMRX.2005.77
![]() |
[6] | Di Francesco M, Polidoro S (2006) Schauder estimates, Harnack inequality and Gaussian lower bound for Kolmogorov-type operators in non-divergence form. Adv Differential Equ 11: 1261-1320. |
[7] |
Farkas B, Lorenzi L (2009) On a class of hypoelliptic operators with unbounded coefficients in \mathbb{R}^N. Commun Pure Appl Anal 8: 1159-1201. doi: 10.3934/cpaa.2009.8.1159
![]() |
[8] | Hörmander L (1967) Hypoelliptic second order differential equations. Acta Math 119: 147-171. doi: 10.1007/BF02392081 |
[9] | Il'in AM (1964) On a class of ultraparabolic equations. Dokl Akad Nauk SSSR 159: 1214-1217. |
[10] | Kolmogorov AN (1934) Zur Theorie der Brownschen Bewegung. Ann Math 35: 116-117. doi: 10.2307/1968123 |
[11] | Kupcov LP (1972) The fundamental solutions of a certain class of elliptic-parabolic second order equations. Differencial'nye Uravnenija 8: 1649-1660. |
[12] | Kuptsov LP (1982) Fundamental solutions of some second-order degenerate parabolic equations. Mat Zametki 31: 559-570. |
[13] | Lanconelli E, Polidoro S (1994) On a class of hypoelliptic evolution operators. Rend Sem Mat 52: 29-63. |
[14] | Lunardi A (1997) Schauder estimates for a class of degenerate elliptic and parabolic operators with unbounded coefficients in \mathbb{R}^n. Ann Scuola Norm Sup Pisa 24: 133-164. |
[15] | Manfredini M (1997) The Dirichlet problem for a class of ultraparabolic equations. Adv Differential Equ 2: 831-866. |
[16] | Pascucci A, Pesce A (2019) On stochastic Langevin and Fokker-Planck equations: the two-dimensional case. arXiv:1910.05301. |
[17] | Pascucci A, Pesce A (2020) The parametrix method for parabolic SPDEs. Stochastic Process Appl, To appear. |
[18] | Polidoro S (1994) On a class of ultraparabolic operators of Kolmogorov-Fokker-Planck type. Matematiche 49: 53-105. |
[19] | Sonin IM (1967) A class of degenerate diffusion processes. Theory Probab Appl 12: 490-496. doi: 10.1137/1112059 |
[20] | Weber M (1951) The fundamental solution of a degenerate partial differential equation of parabolic type. Trans Amer Math Soc 71: 24-37. doi: 10.1090/S0002-9947-1951-0042035-0 |