We prove the well-posedness of a Cauchy problem of the kind:
\begin{equation*} \left\{\begin{array}{@{}l@{}c} \mathcal{L}u = f, & \text{ in } D'(\mathbb{R}^N\times(0,+\infty)),\\ u(x,0) = g(x), & \forall x\in\mathbb{R}^N, \end{array}\right. \end{equation*}
where f is Dini continuous in space and measurable in time and g satisfies suitable regularity properties. The operator \mathcal{L} is the degenerate Kolmogorov-Fokker-Planck operator
\begin{equation*} \mathcal{L} = \sum\limits_{i,j = 1}^{q}a_{ij}(t)\partial^2_{x_ix_j}+\sum\limits_{k,j = 1}^{N}b_{jk}x_k\partial_{x_j}-\partial_t \end{equation*}
where \{a_{ij}\}_{i,j = 1}^{q} is measurable in time, uniformly positive definite and bounded, while \{b_{ij}\}_{i,j = 1}^{N} has the block structure:
\begin{equation*} \{b_{ij}\}_{i,j = 1}^{N} = \left(\begin{matrix} \mathbb{O} & \dots & \mathbb{O} & \mathbb{O}\\ B_1 & \dots & \mathbb{O} & \mathbb{O}\\ \vdots & \ddots & \vdots & \vdots\\ \mathbb{O} & \dots & B_\kappa & \mathbb{O} \end{matrix}\right) \end{equation*}
which makes the operator with constant coefficients hypoelliptic, 2-homogeneous with respect to a family of dilations and translation invariant with respect to a Lie group.
Citation: Tommaso Barbieri. On Kolmogorov Fokker Planck operators with linear drift and time dependent measurable coefficients[J]. Mathematics in Engineering, 2024, 6(2): 238-260. doi: 10.3934/mine.2024011
Starting from the theory presented in [2,3,4], we shall prove the well-posedness of a global Cauchy problem for a class of Kolmogorov-Fokker-Planck operators (briefly KFP) with time dependent measurable coefficients and linear drift (Theorem 1.1). The main point is to prove the existence of a solution, because uniqueness and the stability estimates follow from [2,3]. Moreover, since we have at our disposal an explicit fundamental solution (see [4]), we shall look for a solution in the form prescribed by the Duhamel method. To this aim, we first employ some techniques from [2] to obtain existence for a smooth datum and then, thanks to the estimates from [3], we achieve existence under the minimal regularity assumptions on the datum.
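Concretely, the Duhamel ansatz we shall work with is the representation formula (anticipating (1.8) below)
\begin{equation*} u(x,t) = -\int_0^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f(y,s)dyds+\int_{\mathbb{R}^N}\Gamma(x,t;y,0)g(y)dy, \end{equation*} |
where \Gamma is the explicit fundamental solution constructed in [4] and recalled in Theorem 2.2 below.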
The KFP operator we consider is defined as follows:
\begin{equation} \mathcal{L} = \sum\limits_{i,j = 1}^{q}a_{ij}(t)\partial^2_{x_ix_j}+\sum\limits_{k,j = 1}^{N}b_{jk}x_k\partial_{x_j}-\partial_t,\qquad x\in\mathbb{R}^N,\ t\in\mathbb{R}, \end{equation} | (1.1) |
for some q≤N. Moreover we assume the two following hypotheses:
h1) The coefficients a_{ij} are measurable functions and there exists \nu > 0 such that the matrix \mathbb{A}(t) = \{a_{ij}(t)\}_{i,j = 1}^q satisfies the following condition:
\begin{equation} \nu|\xi|^2\leq\sum\limits_{i,j = 1}^{q}a_{ij}(t)\xi_i\xi_j\leq\frac{1}{\nu}|\xi|^2,\qquad \text{a.e. } t\in\mathbb{R},\ \forall \xi\in\mathbb{R}^q; \end{equation} | (1.2) |
h2) There exist positive integers \{m_j\}_{j = 0}^\kappa satisfying q = m_0\geq m_1\geq\dots\geq m_\kappa and \sum_{j = 0}^\kappa m_j = N , such that the matrix \mathbb{B} = \{b_{ik}\}_{i,k = 1}^N assumes the following block structure:
\begin{equation} \mathbb{B} = \left(\begin{matrix} \mathbb{O} & \mathbb{O} & \dots & \mathbb{O} & \mathbb{O}\\ B_1 & \mathbb{O} & \dots & \mathbb{O} & \mathbb{O}\\ \mathbb{O} & B_2 & \dots & \mathbb{O} & \mathbb{O}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ \mathbb{O} & \mathbb{O} & \dots & B_\kappa & \mathbb{O} \end{matrix}\right), \end{equation} | (1.3) |
where for every j∈{1,…,κ} the block Bj has dimension mj×mj−1 and rank equal to mj.
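To fix ideas, the simplest case allowed by (h1) and (h2), which we shall use below as a running illustration ("prototype example"), is N = 2, q = 1, \kappa = 1, m_0 = m_1 = 1, namely
\begin{equation*} \mathcal{L} = a_{11}(t)\partial^2_{x_1x_1}+x_1\partial_{x_2}-\partial_t, \qquad \mathbb{B} = \left(\begin{matrix} 0 & 0\\ 1 & 0 \end{matrix}\right), \end{equation*} |
with a_{11} measurable and \nu\leq a_{11}(t)\leq\frac{1}{\nu} ; here B_1 = (1) is a 1\times1 block of rank 1.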
Remark 1.1. It is convenient to introduce the first order operator
\begin{equation*} Y: = \sum\limits_{j,k = 1}^N b_{jk}x_k\partial_{x_j}-\partial_t, \end{equation*} |
so that (1.1) becomes: \mathcal{L} = \sum_{i,j = 1}^q a_{ij}(t)\partial^2_{x_ix_j}+Y .
As pointed out by Bramanti and Polidoro in [4] this class of operators is naturally linked to stochastic systems of the kind:
\begin{equation*} \left\{\begin{array}{@{}l@{}c} dX = -\mathbb{B}X\,dt+\sigma(t)\,dW,\\ X(0) = x_0,\quad \text{a.s.} \end{array}\right. \end{equation*} |
Indeed, taking a_{ij}(t) = \frac{1}{2}\sum_{k = 1}^q\sigma_{ik}(t)\sigma_{jk}(t) , the forward Kolmogorov operator of this system corresponds to \mathcal{L} while the backward Kolmogorov operator corresponds to the adjoint of \mathcal{L} . In accordance with this possible application we remark that the simple case
\begin{equation} \mathcal{L} = \sum\limits_{i = 1}^N\partial^2_{x_ix_i}+\sum\limits_{i = 1}^N x_i\partial_{x_{N+i}}-\partial_t \end{equation} | (1.4) |
has been studied, already in 1934, by Kolmogorov in relation to a system with 2N degrees of freedom [6] (see [5, Section 2]). It is interesting to notice that this operator is hypoelliptic and admits an explicit fundamental solution, found by Kolmogorov himself, which is smooth outside the pole. The operator (1.4) is a particular case of the one studied by Lanconelli and Polidoro in the fundamental paper [8], which contains a characterization of the hypoellipticity for the more general operator
\begin{equation} \mathcal{L} = \sum\limits_{i,j = 1}^N\tilde{a}_{ij}\partial^2_{x_ix_j}+\sum\limits_{i,j = 1}^N\tilde{b}_{ji}x_i\partial_{x_j}-\partial_t \end{equation} | (1.5) |
where \{\tilde{a}_{ij}\}_{i,j = 1}^N and \{\tilde{b}_{ij}\}_{i,j = 1}^N are constant matrices. Actually (see [8]), the operator (1.5) is hypoelliptic if and only if there exists a change of variables which leads to an operator of the kind (1.1) whose matrix \mathbb{A} is constant and positive definite, while \mathbb{B} has a structure similar to (1.3) but with the blocks above the B_j ones arbitrary:
\begin{equation} \mathbb{B} = \left(\begin{matrix} * & * & \dots & * & *\\ B_1 & * & \dots & * & *\\ \mathbb{O} & B_2 & \dots & * & *\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ \mathbb{O} & \mathbb{O} & \dots & B_\kappa & * \end{matrix}\right). \end{equation} | (1.6) |
Moreover, the mentioned paper points out the following homogeneous group structure which is fundamental for the study of this kind of operators.
Definition 1.1. Let E(s): = e^{-s\mathbb{B}} and let
\begin{equation*} (q_1,\dots,q_N): = (\underbrace{1,\dots,1}_{m_0},\dots,\underbrace{2i+1,\dots,2i+1}_{m_i},\dots,\underbrace{2\kappa+1,\dots,2\kappa+1}_{m_\kappa}). \end{equation*} |
The homogeneous group structure we consider is given by the group law ∘
\begin{equation*} (x,t)\circ(y,s): = (y+E(s)x,\,t+s),\qquad (x,t),(y,s)\in\mathbb{R}^{N+1} \end{equation*} |
and the family of automorphisms \{D(\lambda)\}_{\lambda > 0}
\begin{equation*} D(\lambda) = \mathrm{diag}(D_0(\lambda),\lambda^2): = \mathrm{diag}(\lambda^{q_1},\dots,\lambda^{q_N},\lambda^2). \end{equation*} |
From this definition it easily follows that (RN+1,∘) is a Lie group:
\begin{equation*} (y,s)^{-1} = (-E(-s)y,-s) \quad\text{ and }\quad (y,s)^{-1}\circ(x,t) = (x-E(t-s)y,\,t-s), \end{equation*} |
and under the assumption (h2) (see [8, Remark 2.1]) for any (x,t)∈RN+1 and (y,s)∈RN+1 we have
\begin{equation*} D(\lambda)\big((x,t)\circ(y,s)\big) = \big(D(\lambda)(x,t)\big)\circ\big(D(\lambda)(y,s)\big). \end{equation*} |
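In the prototype example these objects can be written explicitly: since \mathbb{B}^2 = \mathbb{O} ,
\begin{equation*} E(s) = e^{-s\mathbb{B}} = \left(\begin{matrix} 1 & 0\\ -s & 1 \end{matrix}\right), \qquad (x,t)\circ(y,s) = (y_1+x_1,\;y_2+x_2-sx_1,\;t+s), \end{equation*} |
while (q_1,q_2) = (1,3) and D(\lambda)(x_1,x_2,t) = (\lambda x_1,\lambda^3x_2,\lambda^2 t) .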
Moreover, the homogeneous group defined above admits a metric structure.
Definition 1.2. We define the homogeneous norm ρ:RN+1→[0,+∞) as
\begin{equation*} \rho(x,t) = \Vert x\Vert+\sqrt{|t|} = \sum\limits_{i = 1}^N|x_i|^{\frac{1}{q_i}}+\sqrt{|t|}. \end{equation*} |
Thanks to ρ we can define a quasi distance d on RN+1 as follows:
\begin{equation*} d((x,t),(y,s)) = \rho\big((y,s)^{-1}\circ(x,t)\big),\qquad (x,t),(y,s)\in\mathbb{R}^{N+1}. \end{equation*} |
From this definition it immediately follows that for any (x,t)∈RN+1 and λ∈(0,+∞) we have:
\begin{equation*} \rho(D(\lambda)(x,t)) = \lambda\,\rho(x,t) \end{equation*} |
while for ξ, ζ and η in RN+1 we have:
\begin{equation*} d(\xi,\zeta) = d(\eta^{-1}\circ\xi,\,\eta^{-1}\circ\zeta). \end{equation*} |
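For instance, in the prototype example \rho(x,t) = |x_1|+|x_2|^{1/3}+\sqrt{|t|} and, using the explicit form of (y,s)^{-1}\circ(x,t) computed above,
\begin{equation*} d((x,t),(y,s)) = |x_1-y_1|+|x_2-y_2+(t-s)y_1|^{1/3}+\sqrt{|t-s|}. \end{equation*} |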
Now we recall some definitions that can be found in [3].
Definition 1.3. Let I be an interval and let Ω=RN×I.
For any measurable f:Ω→R we define:
\begin{equation*} \omega_{f,\Omega}(r) = \mathop{\mathrm{ess\,sup}}\limits_{t\in I}\;\sup\limits_{\Vert x-y\Vert\leq r}|f(x,t)-f(y,t)|. \end{equation*} |
Then the function f is said to be partially Dini continuous if
\begin{equation*} [\omega_{f,\Omega}]: = \int_0^1\frac{\omega_{f,\Omega}(s)}{s}ds < +\infty \text{.} \end{equation*} |
Finally, the space of functions f\in L^\infty(\Omega) which are partially Dini continuous is denoted with \mathcal{D}(\Omega) .
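For instance, any f\in L^\infty(\Omega) which is uniformly Hölder continuous in space, i.e., \omega_{f,\Omega}(r)\leq Cr^\alpha with \alpha\in(0,1] , is partially Dini continuous, since \int_0^1 Cs^{\alpha-1}ds = C/\alpha < +\infty . A non-Hölder example of admissible modulus is
\begin{equation*} \omega_{f,\Omega}(r)\leq\frac{C}{(1+|\log r|)^{2}}, \qquad\text{since}\qquad \int_0^1\frac{ds}{s(1+|\log s|)^{2}} = \int_0^{+\infty}\frac{du}{(1+u)^{2}} = 1 < +\infty \end{equation*} |
(substituting u = |\log s| ), while a bound of the kind \omega_{f,\Omega}(r)\leq C/(1+|\log r|) is in general not enough.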
Definition 1.4. Let I and \Omega be as in the previous definition and let \mu > 0 . For f\in \mathcal{D}(\Omega) let \mathcal{M}_{f, \Omega} and \mathcal{U}^\mu_{f, \Omega} be defined as follows:
\begin{align*} \mathcal{M}_{f,\Omega}(r)& = \,\omega_{f,\Omega}(r)+\int_{0}^{r}\frac{\omega_{f,\Omega}(s)}{s}ds+ r\int_{r}^{\infty}\frac{\omega_{f,\Omega}(s)}{s^{2}}ds, \\ \mathcal{U}_{f,\Omega}^{\mu}(r)& = \int_{\mathbb{R}^N}e^{-\mu|z|^{2}} \Big(\int_{0}^{r\Vert z\Vert}\frac{\omega_{f,\Omega}(s)}{s}ds\Big)dz. \end{align*} |
Remark 1.2. Concerning \mathcal{M}_{f, \Omega}(r) and \mathcal{U}_{f, \Omega}^\mu(r) we remark only that for f\in \mathcal{D}(\Omega) they are two moduli of continuity (i.e., monotone nondecreasing functions vanishing as r\to 0^+ ). Moreover, whenever f is log-Dini continuous, meaning that its modulus of continuity satisfies:
\begin{equation*} \int_0^1\frac{\omega_{f,\Omega}(s)}{s}|\log(s)|ds < +\infty\text{,} \end{equation*} |
the function \mathcal{M}_{f, \Omega} is a Dini continuity modulus. See [3, Section 2] for details concerning these definitions and in particular for the assertions above, see Proposition 2.8 and Lemma 2.12.
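To give an idea of the size of \mathcal{M}_{f,\Omega} , assume for instance that f\in L^\infty(\Omega) is Hölder continuous in space, say \omega_{f,\Omega}(r)\leq\min(r^\alpha,2\Vert f\Vert_{L^\infty(\Omega)}) with \alpha\in(0,1) ; then, for r\in(0,1] ,
\begin{equation*} \mathcal{M}_{f,\Omega}(r)\leq r^{\alpha}+\frac{r^{\alpha}}{\alpha}+ r\int_{r}^{1}s^{\alpha-2}ds+ r\int_{1}^{\infty}\frac{2\Vert f\Vert_{L^\infty(\Omega)}}{s^{2}}ds \leq \Big(1+\frac{1}{\alpha}+\frac{1}{1-\alpha}\Big)r^{\alpha}+2\Vert f\Vert_{L^\infty(\Omega)}\,r, \end{equation*} |
so on the Hölder scale \mathcal{M}_{f,\Omega}(r) is of order r^{\alpha} , which is the scale of the Schauder estimates of [2].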
Let us introduce the following definitions concerning the spaces in which we look for a solution.
Definition 1.5. Let I be an open interval, then we define the spaces
\begin{equation*} \begin{split} S^0(\mathbb{R}^N\times I)&: = \{u\in C(\overline{\mathbb{R}^N\times I}) \cap L^\infty(\mathbb{R}^N\times I): \partial_{x_i}u, \partial_{x_ix_j}^2u\in L^\infty(\mathbb{R}^N\times I)\\ & \text{ for } i,j\leq q \text{, } Yu\in L^\infty(\mathbb{R}^N\times I)\} \end{split} \end{equation*} |
and
\begin{align*} \mathbb{S}(I): = \{u\in C&(\overline{\mathbb{R}^N\times I})\cap L^\infty (\mathbb{R}^N\times I): u\in S^0(\mathbb{R}^N\times J), \quad\forall J\subset\subset I\:\text{open interval}\:\}\text{.} \end{align*} |
The partial derivatives in the definition of S^0(\mathbb{R}^N\times I) are distributional derivatives.
For simplicity we adopt the notation
S_{+\infty}: = \mathbb{R}^N\times(-\infty,+\infty),\; \; S_T: = \mathbb{R}^N\times(-\infty,T) |
and
S_{\tau,T}: = \mathbb{R}^N\times(\tau,T) |
for \tau < T real numbers. Now we can state the main result of this paper.
Theorem 1.1 (Well-posedness of the Cauchy problem). Let f\in \mathcal{D}(S_{+\infty}) with {\rm{supp}}(f)\subset \mathbb{R}^N\times [0, +\infty) and let g\in \mathcal{D}(\mathbb{R}^N) satisfy \partial_{x_i x_j}g\in \mathcal{D}(\mathbb{R}^N) for i, j\in\{1, \dots, q\} and Yg\in\mathcal{D}(\mathbb{R}^N) (the derivatives are taken in the distributional sense). Then, there exists a unique u\in \mathbb{S}(0, +\infty) solution of the Cauchy problem:
\begin{equation} \left\{\begin{array}{@{}l@{}c} \mathcal{L}u = f, & {{in}}\;D'(\mathbb{R}^N\times(0,+\infty)),\\ u(x,0) = g(x),&\forall x\in\mathbb{R}^N. \end{array}\right. \end{equation} | (1.7) |
Moreover, the solution u is in the form:
\begin{equation} u(x,t) = -\int_0^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f(y,s)dyds+\int_{\mathbb{R}^N}\Gamma(x,t;y,0)g(y)dy \end{equation} | (1.8) |
and in addition there exist two constants c and \mu depending only on \nu and \mathbb{B} such that for any T > 0 the following estimates hold:
\begin{equation} \begin{split} \sum\limits_{i,j = 1}^{q}&\Vert\partial_{x_{i}x_{j}}^{2}u\Vert_{L^\infty(S_{0,T})}+ \Vert Yu\Vert_{L^\infty(S_{0,T})}\\ \leq & c\Big(\Vert f\Vert_{L^\infty(S_{0,T})} + \mathcal{U}^\mu_{f,S_{0,T}}(\sqrt{T})+\Vert g\Vert_{L^\infty(\mathbb{R}^N)}+ \sum\limits_{i,j = 1}^q\Vert\partial_{x_ix_j}^2 g\Vert_{L^\infty(\mathbb{R}^N)}+ \Vert Y g\Vert_{L^\infty(\mathbb{R}^N)}\\ +&\mathcal{U}^\mu_{g,\mathbb{R}^N}(\sqrt{T+1})+ \sum\limits_{i,j = 1}^q\mathcal{U}^\mu_{\partial_{x_ix_j}^2 g,\mathbb{R}^N}(\sqrt{T+1})+ \mathcal{U}^\mu_{Y g,\mathbb{R}^N}(\sqrt{T+1})\Big), \end{split} \end{equation} | (1.9) |
\begin{align} & \omega_{\partial^2_{x_ix_j}u,S_{0,T}}(r)+\omega_{Y u,S_{0,T}}(r) \\ &\leq c \Big(\mathcal{M}_{f,S_{0,T}}(cr)+ \mathcal{M}_{g,\mathbb{R}^N}(cr)+ \sum\limits_{i,j = 1}^q\mathcal{M}_{\partial_{x_ix_j}^2g,\mathbb{R}^N} (cr)+ \mathcal{M}_{Yg,\mathbb{R}^N}(cr)\Big) \mathit{\text{.}} \end{align} | (1.10) |
Remark 1.3. Concerning the existence of a solution the assumptions on the datum g could be significantly weakened. Indeed, the existence for the homogeneous problem has been studied under very weak assumptions by Bramanti and Polidoro in [4].
This section recalls some known results about the operator (1.1) first assuming the matrix \mathbb{A} = \{a_{ij}\}_{i, j = 1}^q constant and then for the more general case. The main references are the papers [2,3,4,7,8].
If \mathbb{A} is constant, the operator (1.1) enjoys nice properties related to the homogeneous group, which are reminiscent of those of the heat operator (see [7,8]). We shall list some of them in the next theorem. We also define the homogeneous dimension:
\begin{equation*} Q: = \sum\limits_{i = 0}^\kappa m_i (2i+1) \end{equation*} |
where the integers \{m_i\}_{i = 0}^\kappa are those appearing in (h2) and (1.3).
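For instance, in the prototype example Q = m_0\cdot1+m_1\cdot3 = 4 , while for the Kolmogorov operator (1.4), where m_0 = m_1 = N , we get Q = N\cdot1+N\cdot3 = 4N .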
Theorem 2.1 ([8]). Assume (h1) and (h2) . If the matrix \mathbb{A} = \{a_{ij}\}_{i, j = 1}^q is constant, the operator (1.1) satisfies the following properties:
i) \mathcal{L} is hypoelliptic;
ii) \mathcal{L} is invariant with respect to left translations in (\mathbb{R}^{N+1}, \circ) : Let (y, s)\in \mathbb{R}^{N+1} and u\in C_0^\infty(\mathbb{R}^{N+1}) , then, for any (x, t)\in\mathbb{R}^{N+1}
\begin{equation*} \mathcal{L}_{(x,t)}u((y,s)\circ(x,t)) = (\mathcal{L}u)((y,s)\circ(x,t)) \mathit{\text{;}} \end{equation*} |
iii) \mathcal{L} is D(\lambda) -homogeneous of degree 2: for every \lambda > 0 and u\in C_0^\infty(\mathbb{R}^{N+1})
\begin{equation*} \mathcal{L}\big(u(D(\lambda)(x,t))\big) = \lambda^2(\mathcal{L}u)(D(\lambda)(x,t)) \mathit{\text{;}} \end{equation*} |
iv) Let C(1) and c(1) be defined as:
\begin{equation*} C(1): = \int_0^1 E(\sigma) \left(\begin{matrix} \mathbb{A} & \mathbb{O}\\ \mathbb{O} & \mathbb{O} \end{matrix}\right) E(\sigma)^Td\sigma, \quad c(1) = \det(C(1)) \end{equation*} |
and let
\Gamma:\{(x,t;y,s)\in\mathbb{R}^{2N+2}:(x,t)\not = (y,s)\}\to\mathbb{R} |
be defined by
\begin{equation*} \begin{split} \Gamma(x,t;y,s)& = \frac{(t-s)^{-\frac{Q}{2}}}{\sqrt{c(1)}(4\pi)^{\frac{N}{2}} } \:\chi_{(0,+\infty)}(t-s) \\ &\times\exp\Big(-\frac{1}{4}(x-E(t-s)y)^T D_0\big(\frac{1}{\sqrt{t-s}}\big) C(1)^{-1} D_0\big(\frac{1}{\sqrt{t-s}}\big)(x-E(t-s)y)\Big) \mathit{\text{.}} \end{split} \end{equation*} |
Then, for any fixed (y, s)\in \mathbb{R}^{N+1} the function \Gamma(\cdot; y, s) belongs to C^\infty(\mathbb{R}^{N+1}\setminus\{(y, s)\}) and is the fundamental solution with pole (y, s) :
\begin{equation*} \mathcal{L}[\Gamma(\cdot;y,s)](x,t) = 0, \quad \; {{for\; any}} \;(x,t)\in \mathbb{R}^{N+1}, \;{{such \;that}}\; t > s \end{equation*} |
and for any g\in C_0(\mathbb{R}^N) :
\begin{equation*} \int_{\mathbb{R}^N}\Gamma(\cdot,t;y,s)g(y)dy\xrightarrow[t\to s^+]{} g(\cdot) \quad \;{{uniformly \;on}} \; \mathbb{R}^{N} \mathit{\text{;}} \end{equation*} |
v) For any (y, s), (x, t)\in \mathbb{R}^{N+1} if t > s we have:
\begin{equation*} \int_{\mathbb{R}^N}\Gamma(z,t;y,s)dz = \int_{\mathbb{R}^N}\Gamma(x,t;z,s)dz = 1 \mathit{\text{;}} \end{equation*} |
vi) For any (x, t), (y, s)\in \mathbb{R}^{N+1} such that (x, t)\not = (y, s)
\begin{equation*} \Gamma(x,t;y,s) = \Gamma((y,s)^{-1}\circ(x,t);0,0) \mathit{\text{;}} \end{equation*} |
vii) For any \lambda > 0 and (x, t)\in \mathbb{R}^{N+1}\setminus\{(0, 0)\}
\begin{equation*} \Gamma(D(\lambda)(x,t);0,0) = \lambda^{-Q}\Gamma(x,t;0,0) \mathit{\text{.}} \end{equation*} |
When \mathbb{A} is a multiple of the identity the fundamental solution just introduced enters many computations, hence, we introduce the following notation.
Notation 2.1. For a > 0 , \Gamma_a shall denote the fundamental solution of
\begin{equation*} a \sum\limits_{i = 1}^q\partial_{x_i x_i}^2+\sum\limits_{j,k = 1}^N b_{jk}x_k\partial_{x_j}-\partial_t\mathit{\text{.}} \end{equation*} |
Moreover, we define
\begin{equation*} C_0: = \int_0^1 E(\sigma) \left(\begin{matrix} \mathbb{I}_q & \mathbb{O}\\ \mathbb{O} & \mathbb{O} \end{matrix}\right) E(\sigma)^Td\sigma, \quad c_0 = \det(C_0) \end{equation*} |
and
\begin{equation*} |x|_0: = \sqrt{x^T C_0^{-1} x}\quad {{for \;any}}\;x\in \mathbb{R}^N\mathit{\text{.}} \end{equation*} |
With this notation, for (x, t), (y, s)\in \mathbb{R}^{N+1} with t > s , we have:
\begin{equation*} \Gamma_a(x,t;y,s) = \frac{(t-s)^{-\frac{Q}{2}}}{\sqrt{c_0}(4\pi a)^{\frac{N}{2}}} \exp\Big(-\frac{1}{4a}| D_0(\frac{1}{\sqrt{t-s}})(x-E(t-s)y)|_0^2\Big) \mathit{\text{.}} \end{equation*} |
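For illustration, in the prototype example all these quantities can be computed by hand: since E(\sigma) = \left(\begin{matrix} 1 & 0\\ -\sigma & 1 \end{matrix}\right) , a direct computation gives
\begin{equation*} C_0 = \int_0^1\left(\begin{matrix} 1 & -\sigma\\ -\sigma & \sigma^2 \end{matrix}\right)d\sigma = \left(\begin{matrix} 1 & -\frac{1}{2}\\ -\frac{1}{2} & \frac{1}{3} \end{matrix}\right),\qquad c_0 = \frac{1}{12},\qquad |x|_0^2 = 4x_1^2+12x_1x_2+12x_2^2, \end{equation*} |
so that, for t > s ,
\begin{equation*} \Gamma_1(x,t;y,s) = \frac{\sqrt{3}}{2\pi(t-s)^{2}} \exp\Big(-\frac{1}{4}\big|D_0\big(\tfrac{1}{\sqrt{t-s}}\big)(x-E(t-s)y)\big|_0^2\Big), \end{equation*} |
which is, up to notation, the explicit fundamental solution found by Kolmogorov for the operator (1.4) with N = 1 .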
Now we turn our attention to the case of varying coefficients. The problem of finding a fundamental solution for operators with nonconstant matrix \mathbb{A} = \{a_{ij}\}_{i, j = 1}^q has been studied by various authors, see [4,5,7,9,10,11]. However, among the mentioned papers only the last two assume coefficients depending on time in a nonsmooth way; here we recall some results from [4].
Theorem 2.2. [4, Theorem 1.4] Let C(t, s) and c(t, s) be defined for t > s as follows
\begin{equation*} C(t,s): = \int_s^t E(t-\sigma) \left(\begin{matrix} \mathbb{A}(\sigma) & \mathbb{O}\\ \mathbb{O} & \mathbb{O} \end{matrix}\right) E(t-\sigma)^Td\sigma, \quad c(t,s) = \det(C(t,s)) \end{equation*} |
and let \Gamma:\{(x, t;y, s)\in\mathbb{R}^{2N+2}:(x, t)\not = (y, s)\}\to\mathbb{R} be defined by:
\begin{align*} \Gamma(x,t;y,s) = &\frac{1}{\sqrt{c(t,s)}(4\pi)^{\frac{N}{2}}}\:\chi_{(0,+\infty)}(t-s) \\ \times&\exp\Big(-\frac{1}{4}(x-E(t-s)y)^T C(t,s)^{-1}(x-E(t-s)y)\Big) \mathit{\text{.}} \end{align*} |
Then, the following properties hold:
i) \Gamma is continuous and it is of class C^\infty with respect to the x and y variables, moreover \partial_x^\alpha\partial_y^\beta\Gamma is continuous for any multiindices \alpha and \beta ;
ii) \Gamma and \partial_x^\alpha\partial_y^\beta\Gamma are Lipschitz continuous with respect to t and with respect to s in \{(x, t;y, s)\in \mathbb{R}^{2N+2}: a\leq s, t\leq b\quad\mathit{\text{and}}\quad t-s > \delta\} for any fixed a, b\in \mathbb{R} and \delta > 0 ;
iii) For any (y, s)\in \mathbb{R}^{N+1} we have:
\begin{equation*} \mathcal{L}[\Gamma(\cdot;y,s)](x,t) = 0 \quad{{for \;a.e.}}\;t > s \;{{and \;any}}\;x\in \mathbb{R}^N \end{equation*} |
and for any g\in C_0(\mathbb{R}^N) we also have:
\begin{equation*} \int_{\mathbb{R}^N}\Gamma(\cdot,t;y,s)g(y)dy\xrightarrow[t\to s^+]{} g(\cdot) \quad {{uniformly \;on}} \; \mathbb{R}^{N} \mathit{\text{;}} \end{equation*} |
iv) For any (y, s), (x, t)\in \mathbb{R}^{N+1} if t > s we have:
\begin{equation*} \int_{\mathbb{R}^N}\Gamma(z,t;y,s)dz = \int_{\mathbb{R}^N}\Gamma(x,t;z,s)dz = 1 \mathit{\text{;}} \end{equation*} |
v) For every (x, t;y, s)\in \mathbb{R}^{2N+2} if t > s then:
\begin{equation*} \nu^N\Gamma_\nu(x,t;y,s)\leq\Gamma(x,t;y,s)\leq\frac{1}{\nu^N}\Gamma_{\frac{1}{\nu}}(x,t;y,s) \end{equation*} |
where \nu is defined in (1.2).
Remark 2.1. It is interesting to notice that this fundamental solution reduces to the previous one when the matrix is constant, indeed in that case we have (see [7,8]):
\begin{equation} C(t,s) = C(t-s,0) = D_0(\sqrt{t-s})C(1,0)D_0(\sqrt{t-s}) \quad {for \;any }\;t > s \text{.} \end{equation} | (2.1) |
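For instance, in the prototype example with a_{11}\equiv1 a direct computation gives, for t > s ,
\begin{equation*} C(t,s) = \int_s^t\left(\begin{matrix} 1 & -(t-\sigma)\\ -(t-\sigma) & (t-\sigma)^2 \end{matrix}\right)d\sigma = \left(\begin{matrix} t-s & -\frac{(t-s)^2}{2}\\ -\frac{(t-s)^2}{2} & \frac{(t-s)^3}{3} \end{matrix}\right) = D_0(\sqrt{t-s})\,C_0\,D_0(\sqrt{t-s}), \end{equation*} |
in agreement with (2.1).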
Actually, the assumptions of [4] are more general than those we are considering: in particular, with few changes in the statement of Theorem 2.2, we could assume that the structure of the matrix \mathbb{B} is (1.6) instead of (1.3); however, in this more general case (2.1) is no longer valid. We remark also that [4] contains existence and uniqueness for the following Cauchy problem:
\begin{equation*} \left\{\begin{array}{@{}l@{}c} \mathcal{L}u(x,t) = 0, & \text{ a.e. } t > 0 ,\quad \forall x\in\mathbb{R}^N,\\ u(x,0) = g(x),&\forall x\in\mathbb{R}^N. \end{array}\right. \end{equation*} |
As previously mentioned, the theory of [2,3] is fundamental in order to prove the main result of this article. Hence we end this section with some known results from the mentioned articles. Consistently with [2,3], for any multi-index \boldsymbol{\alpha}\in \mathbb{N}^N we define:
\begin{equation*} \omega(\alpha) = \sum\limits_{i = 1}^N \alpha_i q_i \text{.} \end{equation*} |
where the coefficients \{q_i\}_{i = 1}^N are the exponents in the dilations D_0(\lambda) (see Definition 1.1).
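For instance, in the prototype example (q_1,q_2) = (1,3) , so that \omega(e_1) = 1 , \omega(2e_1) = 2 and \omega(e_2) = 3 : in the estimates of point i) below, one derivative in the degenerate direction x_2 weighs as much as three derivatives in the nondegenerate direction x_1 .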
Theorem 2.3. [3, Theorem 1.4] Under the assumptions stated above the following properties hold:
i) Estimates on \Gamma : Let \boldsymbol{\alpha}\in \mathbb{N}^{N} be a multi-index. Then, there exist c = c(\nu, \boldsymbol{\alpha}) > 0 and a constant c_1 > 0 , independent of \nu and \boldsymbol{\alpha} , such that:
\begin{equation*} \big|D^{\boldsymbol{\alpha}}_x\Gamma(x,t;y,s) \big|\leq \frac{c}{(t-s)^{\omega(\alpha)/2}}\Gamma_{c_1\nu^{-1}}(x,t;y,s) \end{equation*} |
for every (x, t), (y, s)\in \mathbb{R}^{N+1} with t\not = s .
ii) Given \boldsymbol{\alpha}\in \mathbb{N}^N a nonzero multi-index, if t > s we have:
\begin{equation*} \int_{\mathbb{R}^N}D^\alpha_x\Gamma(x,t;y,s)dy = 0. \end{equation*} |
iii) There exist absolute constants c , \mu > 0 such that, for every fixed T\in\mathbb{R} and every f\in\mathcal{D}(S_{T}) , x\in\mathbb{R}^{N} and \tau < t < T , we have:
\begin{equation*} \int_\tau^t\int_{\mathbb{R}^{N}} |\partial_{x_i x_j}^2\Gamma(x,t;y,s)|\cdot \omega_{f,S_T}(\Vert E(s-t)x-y\Vert )\,dy\,ds \leq c\,\mathcal{U}_{f,S_T}^\mu(\sqrt{t-\tau}). \end{equation*} |
iv) Representation formulas for solutions: Let T > 0 be fixed and let u\in S^0(S_T) . If u\equiv0 in \{(x, t):t < 0\} , then the following representation formula holds:
\begin{equation*} u(x,t) = -\int_0^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)\mathcal{L}u(y,s)dyds \quad \forall (x,t)\in S_T \mathit{\text{.}} \end{equation*} |
Moreover, for i\in \{1, \dots, q\} , we have the estimate \Vert \partial_{x_i}u\Vert_{L^\infty(S_T)}\leq c \Vert \mathcal{L}u\Vert_{L^\infty(S_T)} and
\begin{equation*} \partial_{x_i}u(x,t) = -\int_0^t\int_{\mathbb{R}^N}\partial_{x_i}\Gamma(x,t;y,s)\mathcal{L}u(y,s)dyds, \quad \forall (x,t)\in S_T\mathit{\text{.}} \end{equation*} |
v) Estimates: Let T > \tau > -\infty . Then, there exist c, \mu > 0 , only depending on \nu and \mathbb{B} , such that, for every u\in S^{0}(S_T) satisfying u\equiv0 in \{(x, t):t < \tau\} and \mathcal{L}u\in\mathcal{D}(S_{T}) we have:
\begin{equation} \begin{split} \sum\limits_{i,j = 1}^{q}\Vert\partial_{x_{i}x_{j}}^{2}u\Vert_{L^\infty(S_T)}+ &\Vert Yu\Vert_{L^\infty(S_T)} \leq c\Big(\Vert \mathcal{L}u\Vert_{L^\infty(S_T)}+ \mathcal{U}^\mu_{\mathcal{L} u,S_T}(\sqrt{T-\tau})\Big) \mathit{\text{,}} \end{split} \end{equation} | (2.2) |
and for any r > 0 the following inequality holds:
\begin{equation} \omega_{\partial^2_{x_ix_j}u,S_T}(r)+\omega_{Y u,S_T}(r)\leq c\,\mathcal{M}_{\mathcal{L} u,S_T}(cr)\mathit{\text{.}} \end{equation} | (2.3) |
Remark 2.2. We point out that we are exploiting only a part of the results contained in [3]. For instance, similar estimates hold also for a more general class of operators, with coefficients a_{ij} which are partially log-Dini continuous; moreover, local estimates involving time and space are available for the second order derivatives with respect to the first q space variables (hence these derivatives are actually continuous).
From the representation formula above (point ⅳ)) we can easily obtain the following representation formula which entails uniqueness of the solution to the Cauchy problem.
Corollary 2.1. Let T > 0 and let u\in \mathbb{S}(0, T) (see Definition 1.5) be such that \mathcal{L}u\in L^\infty(\mathbb{R}^N\times(0, T)) . Then, for any (x, t)\in \mathbb{R}^N\times(0, T] we have:
\begin{equation*} u(x,t) = -\int_0^T\int_{\mathbb{R}^N}\Gamma(x,t;y,s)\mathcal{L}u(y,s)dyds+ \int_{\mathbb{R}^N}\Gamma(x,t;y,0)u(y,0)dy \mathit{\text{.}} \end{equation*} |
Proof. For \varepsilon > 0 let \psi_\varepsilon:\mathbb{R}\to\mathbb{R} be a mollifier with support contained in (0, \varepsilon) and let \psi^\tau_\varepsilon(t): = \psi_\varepsilon(t-\tau) , \phi_\varepsilon^\tau(t): = \int_{-\infty} ^t\psi_\varepsilon^\tau(s)ds where \tau > 0 . The function \phi_\varepsilon^\tau u belongs to S^0(S_T) hence:
\begin{equation*} \phi_\varepsilon^\tau(t)u(x,t) = -\int_0^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)\mathcal{L}(\phi_\varepsilon^\tau u)(y,s)dyds \quad \forall (x,t)\in S_T \text{.} \end{equation*} |
Therefore, from \mathcal{L}(\phi_\varepsilon^\tau u) = -(\partial_t \phi_\varepsilon^\tau)u+ \phi_\varepsilon^\tau \mathcal{L}u = -\psi_\varepsilon^\tau u+\phi_\varepsilon^\tau \mathcal{L}u , we obtain:
\begin{equation*} \begin{split} \phi_\varepsilon^\tau(t)u(x,t)& = -\int_0^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)\mathcal{L}u(y,s)\phi_\varepsilon^\tau(s)dyds \\ &+\int_0^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)u(y,s)dy\psi_\varepsilon^\tau(s)ds = A^\tau_\varepsilon+B_\varepsilon^\tau \text{.} \end{split} \end{equation*} |
We claim that for t > \tau ,
\begin{align} \lim\limits_{\varepsilon\to0} A_\varepsilon^\tau = &-\int_\tau^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s) \mathcal{L}u(y,s)dyds, \end{align} | (2.4) |
\begin{align} \lim\limits_{\varepsilon\to0}B^\tau_\varepsilon = &\int_{\mathbb{R}^N}\Gamma(x,t;y,\tau)u(y,\tau)dy \text{.} \end{align} | (2.5) |
Indeed (2.4) follows by dominated convergence while (2.5) is immediate if we observe that s\mapsto\int_{\mathbb{R}^N}\Gamma(x, t;y, s)u(y, s)dy is continuous in (0, t) for any fixed (x, t) . We have proved that for any (x, t)\in \mathbb{R}^N\times (\tau, T)
\begin{equation*} u(x,t) = -\int_\tau^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)\mathcal{L}u(y,s)dyds+ \int_{\mathbb{R}^N}\Gamma(x,t;y,\tau)u(y,\tau)dy \text{.} \end{equation*} |
The last step is to take the limit for \tau\to0 . It is apparent that the first term in the right-hand side converges to \int_0^t\int_{\mathbb{R}^N}\Gamma(x, t;y, s)\mathcal{L}u(y, s)dyds while we claim that:
\begin{equation} \lim\limits_{\tau\to 0}\int_{\mathbb{R}^N}\Gamma(x,t;y,\tau)u(y,\tau)dy = \int_{\mathbb{R}^N}\Gamma(x,t;y,0)u(y,0)dy \text{.} \end{equation} | (2.6) |
To prove our claim we first notice that for any fixed (x, t;y)\in \mathbb{R}^N\times(0, +\infty)\times\mathbb{R}^N the function
\begin{equation*} w:[0,t)\to\mathbb{R}:\tau\mapsto\Gamma(x,t;y,\tau)u(y,\tau) \end{equation*} |
is continuous. Moreover, for any fixed (x, t)\in \mathbb{R}^N\times(0, +\infty) also the function
\begin{equation*} c:\mathbb{R}^N\times[0,t/2]\to\mathbb{R}:(y,\tau)\mapsto\Gamma(x,t;y,\tau)(1+\vert y\vert)^{N+1} \end{equation*} |
is continuous and since it tends to zero as \vert y\vert \to +\infty (uniformly in \tau ) it is also bounded. Therefore for any fixed (x, t)\in \mathbb{R}^N\times(0, +\infty) we have:
\begin{align*} \Gamma(x,t;y,\tau)u(y,\tau)\xrightarrow[\tau\to0^+]{}\Gamma(x,t;y,0)u(y,0),\qquad&\forall y\in\mathbb{R}^N,\\ \Gamma(x,t;y,\tau)\leq \Vert c\Vert_{L^\infty(\mathbb{R}^N\times[0,t/2])}(1+\vert y\vert)^{-N-1},\quad &\forall y\in \mathbb{R}^N\:\forall\tau\in[0,t/2]\text{.} \end{align*} |
Applying the dominated convergence theorem we obtain (2.6).
In this section we shall prove the following theorem:
Theorem 3.1. If f\in C_0^\infty(S_T) then, u:S_T\to\mathbb{R} defined by:
\begin{equation*} u(x,t) = -\int_{-\infty}^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f(y,s)dyds \end{equation*} |
is such that u\in S^0(S_T) and satisfies \mathcal{L} u = f .
To prove the above result we need some preliminary lemmas.
Lemma 3.1. Let \Omega be an open set of \mathbb{R}^N and let w\in C(\Omega) . If w satisfies the following conditions for some i\in\{1, \dots, N\} :
i) For any K\subset\subset\Omega
\begin{equation*} \limsup\limits_{h\to0}\Big\Vert \frac{w(\cdot+h e_i)-w(\cdot)}{h}\Big\Vert_{L^\infty(K)} < +\infty\mathit{\text{;}} \end{equation*} |
ii) For almost every x\in \Omega
\begin{equation*} \frac{w(x+h e_i)-w(x)}{h}\to\partial_{x_i} w(x)\mathit{\text{.}} \end{equation*} |
Then, for every K\subset\subset\Omega
\begin{equation*} \frac{w(\cdot+h e_i)-w(\cdot)}{h}\mathop {\rightharpoonup}\limits_h^* \partial_{x_i} w(\cdot)\quad \mathit{\text{in}}\quad L^\infty(K) \end{equation*} |
and therefore
\begin{equation*} \partial_{x_i}w = D_{x_i} w\quad\mathit{\text{in}}\quad D'(\Omega), \end{equation*} |
where the symbols D_{x_i} w and \partial_{x_i}w denote the weak and the classical derivative of w with respect to x_i .
Proof. Let K be an open subset of \Omega with compact closure and let h_0 > 0 be such that:
\begin{equation*} \sup\limits_{|h|\in(0,h_0)}\Big\Vert \frac{w(\cdot+h e_i)-w(\cdot)}{h}\Big\Vert_{L^\infty(K)} < +\infty \text{.} \end{equation*} |
Thanks to the dominated convergence theorem, we obtain:
\begin{equation*} \frac{w(\cdot+h e_i)-w(\cdot)}{h}\mathop {\rightharpoonup}\limits_h^* \partial_{x_i} w(\cdot)\qquad \text{in}\quad L^\infty(K) \end{equation*} |
and since
\begin{equation*} \frac{w(\cdot+h e_i)-w(\cdot)}{h}\xrightarrow[h]{}D_{x_i} w(\cdot)\qquad \text{in}\quad D'(\Omega), \end{equation*} |
we have \partial_{x_i} w = D_{x_i} w .
Lemma 3.2. Let f\in C_0^\infty(S_T) and let \varepsilon > 0 . Moreover, let u_\varepsilon:S_T\to\mathbb{R} be defined by:
\begin{equation*} u_\varepsilon(x,t) = -\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f(y,s)dyds \mathit{\text{.}} \end{equation*} |
Then, we have
\begin{equation*} u_\varepsilon\in S^0(S_T),\qquad \mathcal{L}u_\varepsilon(x,t) = \int_{\mathbb{R}^N} \Gamma(x,t;y,t-\varepsilon)f(y,t-\varepsilon)dy \mathit{\text{.}} \end{equation*} |
Proof. First we observe that \Gamma , \partial_{x_i} \Gamma and \partial_{x_ix_j}^2 \Gamma are uniformly bounded on
\{(x,t,y,s)\in S_T\times S_T:t-s > \varepsilon/2\}. |
Indeed, from point ⅰ) of Theorem 2.3, for any (x, t;y, s)\in S_T\times S_T with t-s > \varepsilon/2 we have:
\begin{align*} \Gamma(x,t;y,s)&\leq\frac{c(4\pi \nu c_1 )^{-\frac{N}{2}}}{\sqrt{c_0}(t-s)^{\frac{Q}{2}}} e^{- \frac{\nu c_1}{4} |D(\frac{1}{\sqrt{t-s}})(x-E(t-s)y)|_0^2}\leq\frac{c}{\varepsilon^{Q/2}}, \\ |\partial_{x_i} \Gamma(x,t;y,s)|&\leq \frac{c(4\pi \nu c_1 )^{-\frac{N}{2}}}{\sqrt{c_0}(t-s)^{\frac{\omega(e_i)+Q}{2}}} e^{- \frac{\nu c_1}{4}|D(\frac{1}{\sqrt{t-s}})(x-E(t-s)y)|_0^2}\leq \frac{c}{\varepsilon^a}, \\ |\partial_{x_ix_j} \Gamma(x,t;y,s)|&\leq \frac{c(4\pi \nu c_1 )^{-\frac{N}{2}}}{\sqrt{c_0}(t-s)^{\frac{\omega(e_i+e_j)+Q}{2}}} e^{- \frac{\nu c_1}{4} |D(\frac{1}{\sqrt{t-s}})(x-E(t-s)y)|_0^2}\leq \frac{c}{\varepsilon^{a'}}, \end{align*} |
for some fixed constants a, a' > 0 . Now we claim that a similar bound on \partial_t \Gamma(x, t;y, s) holds in every set of the kind:
\begin{equation*} \{(x,t;y,s)\in K\times(-\infty,T)\times\mathbb{R}^N\times(-\infty,T):\:t-s > \varepsilon\} \end{equation*} |
with K\subset\subset\mathbb{R}^N . Actually, from point ⅲ) of Theorem 2.2, for almost any
(x,t;y,s)\in K\times(-\infty,T)\times\mathbb{R}^N\times(-\infty,T) |
such that t-s > \varepsilon we have \mathcal{L}\Gamma(x, t;y, s) = 0 , hence, taking a and a' as before:
\begin{equation*} \begin{split} |\partial_t \Gamma(x,t;y,s)| = &\Big|\sum\limits_{i,j = 1}^qa_{ij}(t)\partial_{x_ix_j}^2\Gamma(x,t;y,s)+ \sum\limits_{i,j = 1}^Nx_i b_{ij} \partial_{x_j}\Gamma(x,t;y,s)\Big| \\ \leq&c(\nu)\sum\limits_{i,j = 1}^q|\partial_{x_ix_j}^2\Gamma(x,t;y,s)|+ \sum\limits_{i,j = 1}^N|x_i b_{ij} \partial_{x_j}\Gamma(x,t;y,s)| \\ \leq& N^2 \frac{c}{\varepsilon^{a'}}+ \sup\limits_{x\in K}\Big(\sum\limits_{i,j = 1}^N|x_i b_{ij}|\Big) \frac{c}{\varepsilon^a} \text{.} \end{split} \end{equation*} |
With these preliminary observations, we can proceed with the computation of the classical derivative of u_\varepsilon with respect to the variable t . For |h|\in(0, \varepsilon/2) we compute the incremental ratio:
\begin{equation*} \begin{split} -\frac{1}{h}[u_\varepsilon(x,t+h)- u_\varepsilon(x,t)] = +&\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N} \left[ \frac{1}{h}\int_0^h\partial_t\Gamma(x,t+\theta;y,s)d\theta\right]f(y,s)dyds \\ +&\int_{t-\varepsilon}^{t+h-\varepsilon}\int_{\mathbb{R}^N}\left[ \frac{1}{h}\int_0^h\partial_t\Gamma(x,t+\theta;y,s)d\theta\right]f(y,s)dyds \\ +&\frac{1}{h}\int_{t-\varepsilon}^{t+h-\varepsilon}\int_{\mathbb{R}^N}\Gamma(x,t;y,s) f(y,s)dyds\equiv A_h+B_h+C_h \text{.} \end{split} \end{equation*} |
We want to prove that for a.e. (x, t)\in S_T
\begin{equation*} \begin{split} &A_h\xrightarrow[h\to0]{}\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N} \partial_t\Gamma(x,t;y,s)f(y,s)dyds,\\ \\ &B_h\xrightarrow[h\to0]{}0,\\ \\ &C_h\xrightarrow[h\to0]{} \int_{\mathbb{R}^N}\Gamma(x,t;y,t-\varepsilon)f(y,t-\varepsilon)dy \text{.} \end{split} \end{equation*} |
First we consider B_h . Thanks to the estimates on \partial_t\Gamma , we have
\begin{align*} |B_h|&\leq\Big|\int_{t-\varepsilon}^{t+h-\varepsilon}\int_{\mathbb{R}^N}\left[ \frac{1}{h}\int_0^h|\partial_t\Gamma(x,t+\theta;y,s)|d\theta\right]|f(y,s)|dyds\Big| \\ &\leq\Big(N^2 \frac{c}{\varepsilon^{a'}} + \sup\limits_{x\in K}\Big(\sum\limits_{i,j = 1}^N|x_i b_{ij}|\Big) \frac{c}{\varepsilon^a}\Big) \Big|\int_{t-\varepsilon}^{t+h-\varepsilon}\int_{\mathbb{R}^N}|f(y,s)|dyds\Big|, \end{align*} |
which tends to zero as h\to0 . The convergence of C_h is easily obtained. Indeed, owing to the mean value theorem, it follows that for any h there exists \delta = \delta(h)\in(0, 1) such that
\begin{equation*} C_h = \int_{\mathbb{R}^N}\Gamma(x,t;y,t-\varepsilon+\delta h)f(y,t-\varepsilon+\delta h)dy \text{.} \end{equation*} |
Hence, taking the limit as h\to 0 , by dominated convergence, we obtain
\begin{equation*} C_h \to \int_{\mathbb{R}^N}\Gamma(x,t;y,t-\varepsilon) f(y,t-\varepsilon)dy, \quad \text{for any }(x,t)\in S_T \text{.} \end{equation*} |
It is left to prove
\begin{equation*} A_h\to \int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\partial_t\Gamma(x,t;y,s)f(y,s)dyds, \qquad \text{a.e.} \quad (x,t)\in S_T \text{.} \end{equation*} |
Since the derivative \partial_t\Gamma exists a.e. then, for a.e. (x, t)\in S_T and a.e. (y, s)\in S_T , we have
\begin{equation*} \chi_{(\varepsilon,+\infty)}(t-s)\frac{1}{h}\int_0^h\partial_t\Gamma(x,t+\theta;y,s)d\theta \xrightarrow[h]{}\chi_{(\varepsilon,+\infty)}(t-s)\partial_t\Gamma(x,t;y,s) \end{equation*} |
and moreover, thanks to the estimate on \partial_t\Gamma , for a.e. (x, t)\in S_T , we also have
\begin{align*} |\chi_{(\varepsilon,+\infty)}(t-s)f(y,s)\frac{1}{h}\int_0^h\partial_t\Gamma(x,t+\theta;y,s)d\theta| \leq \Big(N^2 \frac{c}{\varepsilon^{a'}}+ \sum\limits_{i,j = 1}^N|x_i b_{ij}|\frac{c}{\varepsilon^a}\Big) |f(y,s)|\in L^1(S_T) \text{.} \end{align*} |
Therefore, applying the dominated convergence, we finally obtain that for a.e. (x, t)\in S_T :
\begin{equation*} A_h\to\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\partial_t\Gamma(x,t;y,s)f(y,s)dyds \text{.} \end{equation*} |
This proves that for a.e. (x, t)\in S_T
\begin{equation*} \begin{split} \lim\limits_{h\to0}&\frac{u_\varepsilon(x,t+h)-u_\varepsilon(x,t)}{h} \\ = &-\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\partial_t\Gamma(x,t;y,s)f(y,s)dyds- \int_{\mathbb{R}^N}\Gamma(x,t;y,t-\varepsilon)f(y,t-\varepsilon)dy \text{.} \end{split} \end{equation*} |
Now we shall obtain that the classical derivative (which is defined almost everywhere) is also a weak derivative by observing that the incremental ratio is uniformly locally bounded. Actually, let K be a fixed compact subset of \mathbb{R}^N , then from the estimates obtained at the beginning of the proof, for any (x, t)\in K\times (-\infty, T) we derive the following estimates:
\begin{align*} |A_h|&\leq\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\left[ \frac{1}{h}\int_0^h|\partial_t\Gamma(x,t+\theta;y,s)|d\theta\right]|f(y,s)|dyds \\ &\leq\Big(N^2 \frac{c}{\varepsilon^{a'}} + \sup\limits_{x\in K}\Big(\sum\limits_{i,j = 1}^N|x_i b_{ij}|\Big)\frac{c}{\varepsilon^{a}}\Big) \int_{S_T}|f|, \\ |B_h|&\leq\Big(N^2 \frac{c}{\varepsilon^{a'}}+ \sup\limits_{x\in K}\Big(\sum\limits_{i,j = 1}^N|x_i b_{ij}|\Big)\frac{c}{\varepsilon^{a}}\Big)\int_{S_T}|f| \end{align*} |
and
\begin{equation*} |C_h|\leq\frac{1}{h}\int_{t-\varepsilon}^{t+h-\varepsilon}\int_{\mathbb{R}^N}\Gamma(x,t;y,s)|f(y,s)|dyds\leq \Vert f\Vert_{L^\infty(S_T)} \text{.} \end{equation*} |
The last inequality follows from point ⅳ) of Theorem 2.2. By Lemma 3.1 we can conclude that for almost any (x, t) the partial derivative with respect to t exists:
\begin{equation} \begin{split} \partial_t u_\varepsilon(x,t) = &-\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\partial_t\Gamma(x,t;y,s)f(y,s)dyds \\ &-\int_{\mathbb{R}^N}\Gamma(x,t;y,t-\varepsilon)f(y,t-\varepsilon)dy \end{split} \end{equation} | (3.1) |
and moreover it is also a weak derivative.
For the derivatives with respect to the variables x_i for i\in\{1, \dots, N\} we can apply the standard theorem of differentiation under the integral. Indeed for any fixed t\in (-\infty, T) the function
\begin{equation*} h_t:\mathbb{R}^N\times\mathbb{R}^N\times(-\infty,t-\varepsilon)\to\mathbb{R}: (x,y,s)\mapsto\Gamma(x,t;y,s)f(y,s) \end{equation*} |
is of class C^2 with respect to x and moreover it and its x -derivatives are uniformly bounded by an L^1 function since for any (x, y, s)\in\mathbb{R}^N\times\mathbb{R}^N\times(-\infty, t-\varepsilon) , we have
\begin{align*} |h_t(x,y,s)|&\leq \frac{c}{\varepsilon^{Q/2}}|f(y,s)|\in L^1(\mathbb{R}^{N}\times(-\infty,t-\varepsilon)), \\ |\partial_{x_i} h_t(x,y,s)|&\leq \frac{c}{\varepsilon^a}|f(y,s)|\in L^1(\mathbb{R}^N\times(-\infty,t-\varepsilon)),\\ |\partial_{x_ix_j}^2 h_t(x,y,s)|&\leq \frac{c}{\varepsilon^{a'}}|f(y,s)|\in L^1(\mathbb{R}^N\times(-\infty,t-\varepsilon)) \text{.} \end{align*} |
Therefore, applying the standard theorem of differentiation under the integral sign we get that the classical derivatives of u_\varepsilon exist for all (x, t)\in S_T and moreover
\begin{equation} \partial_{x_i} u_\varepsilon(x,t) = -\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\partial_{x_i}\Gamma(x,t;y,s)f(y,s)dyds, \end{equation} | (3.2) |
\begin{equation} \partial_{x_ix_j}^2 u_\varepsilon(x,t) = -\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\partial_{x_ix_j}^2\Gamma(x,t;y,s)f(y,s)dyds \text{.} \end{equation} | (3.3) |
Since the integrands are continuous and uniformly bounded by an L^1 function, by the dominated convergence theorem it follows that \partial_{x_i} u_\varepsilon and \partial_{x_ix_j}^2u_\varepsilon are continuous; hence these derivatives are also weak derivatives. Finally, exploiting (3.1)–(3.3), we get that for almost every (x, t)\in S_T
\begin{equation*} \mathcal{L}u_\varepsilon(x,t) = -\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^N}\mathcal{L}\Gamma(x,t;y,s)f(y,s)dyds+ \int_{\mathbb{R}^N}\Gamma(x,t;y,t-\varepsilon)f(y,t-\varepsilon)dy \text{.} \end{equation*} |
Hence, applying point ⅲ) of Theorem 2.2, it follows:
\begin{equation*} \mathcal{L}u_\varepsilon(x,t) = \int_{\mathbb{R}^N} \Gamma(x,t;y,t-\varepsilon) f(y,t-\varepsilon)dy, \quad \text{a.e. } (x,t)\in S_T \text{.} \end{equation*} |
Lemma 3.3. If f\in C_0^\infty(S_T) then, for every K\subset\subset\mathbb{R}^N
\begin{equation*} \int_{\mathbb{R}^N}\Gamma(\cdot;y,t-\varepsilon)f(y,t-\varepsilon)dy\xrightarrow[\varepsilon\to0]{} f(\cdot)\:{{uniformly\; on}}\; K\times(-\infty,T) \mathit{\text{.}} \end{equation*} |
Proof. We proceed as in the first part of the proof of Proposition 3.10 in [2]. Owing to points ⅳ) and ⅴ) of Theorem 2.2:
\begin{equation*} \begin{split} \Big|\int_{\mathbb{R}^{N}}&\Gamma(x,t;y,t-\varepsilon)f(y,t-\varepsilon)dy-f(x,t)\Big| \\ \leq& \int_{\mathbb{R}^{N}}\Gamma(x,t;y,t-\varepsilon)|f(y,t-\varepsilon)-f(x,t)|dy \\ \leq& \frac{1}{\nu^{N}} \int_{\mathbb{R}^{N}}\Gamma_{\frac{1}{\nu}}(x,t;y,t-\varepsilon)|f(y,t-\varepsilon)-f(x,t)|dy \\ = & \int_{\mathbb{R}^{N}} \frac{ \exp{(-\frac{\nu}{4}|D_{0}(\frac{1}{\sqrt{\varepsilon}})(x-E(\varepsilon)y)|_0^2)}} { \sqrt{(\nu\:4\pi)^N\varepsilon^{Q}c_0(1)} } |f(y,t-\varepsilon)-f(x,t)|dy\\ = &\dots . \end{split} \end{equation*} |
Now we make the change of variable \{z = D_{0}(\frac{1}{\sqrt{\varepsilon}})(x-E(\varepsilon)y), dz = \frac{1}{\varepsilon^\frac{Q}{2}}dy\}
\begin{align*} \dots = &\int_{\mathbb{R}^{N}} \frac{ \exp{(-\frac{\nu}{4}|z|_0^{2})}} { \sqrt{(\nu\:4\pi)^Nc_0(1)} } |f(E(\varepsilon)(x-D_{0}(\sqrt{\varepsilon})z),t-\varepsilon)-f(x,t)|dz \\ \leq& \int_{\mathbb{R}^{N}} \frac{ \exp{(-\frac{\nu}{4}|z|_0^{2})}} { \sqrt{(\nu\:4\pi)^Nc_0(1)} } \Vert \nabla f\Vert_{L^\infty(S_T)} |(E(\varepsilon)x-x -E(\varepsilon)D_{0}(\sqrt{\varepsilon})z,-\varepsilon)|dz \\ \leq& \int_{\mathbb{R}^{N}} \frac{ \exp{(-\frac{\nu}{4}|z|_0^{2})}} { \sqrt{(\nu\:4\pi)^Nc_0(1)} } \Vert \nabla f\Vert_{L^\infty(S_T)} \{|E(\varepsilon)x-x|+ |E(\varepsilon)D_{0}(\sqrt{\varepsilon})z|+ \varepsilon\}dz \\ \leq& \int_{\mathbb{R}^{N}} \frac{ \exp{(-\frac{\nu}{4}|z|_0^{2})}} { \sqrt{(\nu\:4\pi)^Nc_0(1)} } \Vert \nabla f\Vert_{L^\infty(S_T)} \{|E(\varepsilon)x-x|+ \Vert E(\varepsilon)D_{0}(\sqrt{\varepsilon})\Vert |z|+ \varepsilon\}dz \\ \leq& C\Vert \nabla f\Vert_\infty \{\Vert E(\varepsilon)-I\Vert |x|+ \Vert E(\varepsilon)D_{0}(\sqrt{\varepsilon})\Vert + \varepsilon\}, \end{align*} |
which, for x varying in a compact set, vanishes uniformly as \varepsilon\to0 .
Proof of Theorem 3.1. In order to prove the existence theorem we exploit the uniform convergence of u_\varepsilon and its derivatives.
We begin with the convergence of u_\varepsilon .
By point ⅳ) of Theorem 2.2 we easily get:
\begin{equation*} \begin{split} \Big|-\int_{-\infty}^{t}\int_{\mathbb{R}^{N}}&\Gamma(x,t;y,s)f(y,s)dyds-u_{\varepsilon}(x,t)\Big| \\ &\leq\int_{t-\varepsilon}^{t}\int_{\mathbb{R}^{N}} \Gamma(x,t;y,s)|f(y,s)|dyds\\ &\leq \varepsilon \Vert f\Vert_{L^\infty(S_T)} \text{.} \end{split} \end{equation*} |
Then for the first derivatives we proceed in the same way as in Corollary 3.12 in [2]. For any i\in \{1, \dots, q\} we have:
\begin{equation*} \begin{split} \Big|-\int_{-\infty}^{t}&\int_{\mathbb{R}^{N}}\partial_{x_{i}}\Gamma(x,t;y,s)f(y,s)dyds -\partial_{x_{i}}u_{\varepsilon}(x,t)\Big| \\ = &\Big|\int_{t-\varepsilon}^{t}\int_{\mathbb{R}^{N}}\partial_{x_{i}}\Gamma(x,t;y,s)f(y,s)dyds\Big| \\ \leq&\int_{t-\varepsilon}^{t}\int_{\mathbb{R}^{N}} |\partial_{x_{i}}\Gamma(x,t;y,s)|dyds\Vert f\Vert_{\infty} \\ \leq& c\int_{t-\varepsilon}^{t} \frac{1}{\sqrt{t-s}}\left(\int_{\mathbb{R}^{N}} \Gamma_{c_{1}\nu^{-1}}(x,t,y,s)dy\right)ds \Vert f\Vert_{L^\infty(S_T)} \\ = &c\int_{t-\varepsilon}^{t}\frac{1}{\sqrt{t-s}}ds\Vert f\Vert_{L^\infty(S_T)}\\ = & 2c\sqrt{\varepsilon}\Vert f\Vert_{L^\infty(S_T)} \text{.} \end{split} \end{equation*} |
Now we claim that the integral
\begin{equation*} -\int_{-\infty}^{t}\int_{\mathbb{R}^{N}}\partial_{x_{j}x_{i}}^{2}\Gamma(x,t;y,s)[f(E(s-t)x,s)-f(y,s)]dyds \end{equation*} |
is absolutely convergent and that \partial_{x_{i}x_{j}}u_{\varepsilon} converges uniformly to it. Indeed, since f is in C_0^\infty(\mathbb{R}^{N+1}) (therefore also in \mathcal{D}(\mathbb{R}^{N+1}) ), for any (x, t)\in S_T there exists \tau < t such that {\rm{supp}}(f)\subset\mathbb{R}^N\times(\tau, +\infty) , hence, owing to point ⅲ) of Theorem 2.3, we have:
\begin{equation*} \begin{split} \int_{-\infty}^{t}&\int_{\mathbb{R}^{N}} |\partial_{x_{j}x_{i}}^{2}\Gamma(x,t;y,s)[f(E(s-t)x,s)-f(y,s)]|dyds \\ \leq&\mathbf{c}\int_\tau^t\int_{\mathbb{R}^{N}}|\partial_{x_{i}x_{j}}^{2}\Gamma(x,t;y,s)| \cdot\omega_{f,S_{T}}(\Vert E(s-t)x-y\Vert)\,dy\,ds \\ \leq&\mathbf{c}\:\mathcal{U}_{f,S_{T}}^{\mu}(\sqrt{t-\tau})\\ < &+\infty\text{.} \end{split} \end{equation*} |
Moreover, thanks to Theorem 2.3 point ⅱ) we have:
\begin{equation*} \partial_{x_{i}x_{j}}^{2}u_{\varepsilon}(x,t) = -\int_{-\infty}^{t-\varepsilon}\int_{\mathbb{R}^{N}} \partial_{x_{j}x_{i}}^{2}\Gamma(x,t;y,s)[f(E(s-t)x,s)-f(y,s)]dyds \text{.} \end{equation*} |
Therefore, from point ⅲ) of Theorem 2.3, we obtain:
\begin{equation*} \begin{split} \Big|-\int_{-\infty}^{t}\int_{\mathbb{R}^{N}} &\partial_{x_{j}x_{i}}^{2}\Gamma(x,t;y,s)[f(E(s-t)x,s)-f(y,s)]dyds -\partial_{x_ix_j}^2u_{\varepsilon}(x,t)\Big| \\ \leq&\mathbf{c}\int_{\mathbb{R}^{N}\times(t-\varepsilon,t)} |\partial_{x_{i}x_{j}}^{2} \Gamma(x,t;y,s)|\cdot\omega_{f,S_{T}}(\Vert E(s-t)x-y\Vert)\,dy\,ds \\ \leq&\mathbf{c}\:\mathcal{U}_{f,S_{T}}^{\mu}(\sqrt{\varepsilon})\text{.} \end{split} \end{equation*} |
We have proved that u has continuous derivatives up to the second order with respect to the variables x_i for i\in\{1, \dots, q\} . Applying Lemma 3.3 we obtain that \mathcal{L}u_\varepsilon\to f in L^\infty_{loc} , hence, thanks to the previously proved limits, we get:
\begin{equation*} Yu_\varepsilon\xrightarrow[\varepsilon]{L^\infty_{loc}}f-\sum\limits_{i,j = 1}^q a_{ij} \partial_{x_ix_j}^2u \text{.} \end{equation*} |
The convergence is also in D'(S_T) , so
Yu = f-\sum\limits_{i,j = 1}^q a_{ij} \partial_{x_ix_j}^2u |
in D'(S_T). Therefore Yu\in L^\infty(S_T) and \mathcal{L}u = f . Thus u\in S^0(S_T) .
This section is devoted to the proof of Theorem 1.1. As remarked in Section 1 the main point in the proof of this theorem is the existence of a solution, since uniqueness and the stability estimates follow from the results of [2,3].
Theorem 4.1. If f\in \mathcal{D}(S_T) and is compactly supported, then, the function u:S_T\to\mathbb{R} defined by:
\begin{equation*} u(x,t) = -\int_{-\infty}^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f(y,s)dyds \end{equation*} |
is such that u\in S^0(S_T) , \mathcal{L}u = f and moreover we have the stability estimates (2.2) and (2.3).
Proof. In this proof the symbol * denotes the standard convolution.
Let f_\varepsilon = f*\varphi_\varepsilon be the convolution of f with a compactly supported mollifier \varphi_\varepsilon:\mathbb{R}^{N+1}\to \mathbb{R} and let u_\varepsilon\in S^0(S_T) be the solution given by the existence Theorem 3.1 with datum f_\varepsilon , namely:
\begin{equation*} u_\varepsilon(x,t) = -\int_{-\infty}^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f_\varepsilon(y,s)dyds \text{.} \end{equation*} |
We want to prove that u_\varepsilon \mathop {\rightharpoonup}\limits_\varepsilon ^* u in L^\infty(S_T) . First we prove that
\begin{equation} f_\varepsilon \mathop {\rightharpoonup}\limits_{\varepsilon \to {0^ + }}^* f \text{ in } L^\infty \text{.} \end{equation} | (4.1) |
Indeed, since for any \phi\in L^1(\mathbb{R}^{N+1}) , \phi*\bar\varphi_\varepsilon\xrightarrow[\varepsilon]{L^1(S_T)}\phi where \bar\varphi_{\varepsilon}(x, t): = \varphi_{\varepsilon}(-x, -t) , then we easily have
\begin{equation*} _{L^\infty}\langle f_\varepsilon,\phi\rangle_{L^1} = _{L^\infty}\langle f,\phi*\bar\varphi_\varepsilon\rangle_{L^1} \to\:_{L^\infty}\langle f,\phi\rangle_{L^1},\quad \forall \phi\in L^1(S_T) \text{.} \end{equation*} |
Now, since f has compact support, there exists T_1 < T such that supp(f), supp(f_\varepsilon)\subset\mathbb{R}^N\times(T_1, T) for any \varepsilon sufficiently small. Then, observing that \Gamma(x, t;\cdot)\in L^1(\mathbb{R}^N\times (T_1, T)) for any (x, t) , we obtain
\begin{equation*} \int_{S_T}\Gamma(x,t;y,s)f_\varepsilon(y,s)dyds \to\int_{S_T}\Gamma(x,t;y,s)f(y,s)dyds,\qquad \forall (x,t)\in S_T, \end{equation*} |
which means that u_\varepsilon(x, t)\to u(x, t) for any (x, t)\in S_T .
Now, let \phi\in L^1(S_T) , since u_\varepsilon\to u pointwise and
|\phi(u_\varepsilon-u)|\leq 2 (T-T_1) |\phi|\Vert f\Vert_{L^\infty(S_T)} ,
applying again the dominated convergence theorem we get:
\begin{equation*} \int_{S_T} \phi(x,t)u_\varepsilon(x,t)dxdt \to \int_{S_T}\phi(x,t)u(x,t)dxdt, \end{equation*} |
hence
\begin{equation} \:_{L^\infty}\langle u_\varepsilon,\phi\rangle_{L^1}\to _{L^\infty}\langle u,\phi\rangle_{L^1},\quad \forall \phi\in L^1(S_T) \text{.} \end{equation} | (4.2) |
Then, notice that \omega_{f_\varepsilon, S_T}\leq\omega_{f, S_T} and \Vert u_\varepsilon\Vert_{L^\infty(S_T)} \leq (T-T_1)\Vert f\Vert_{L^\infty(S_T)} , hence, by points ⅳ), ⅴ) of Theorem 2.3, it follows that, for some fixed constant c > 0 , for any i, j\in\{1, \dots, q\} and any \varepsilon > 0 sufficiently small we have
\begin{equation*} \sum\limits_{i,j = 1}^{q}\Vert \partial_{x_{i}x_{j}}^{2}u_\varepsilon\Vert_{L^\infty(S_T)}+ \Vert Yu_\varepsilon\Vert_{L^\infty(S_T)}\leq c\Big(\Vert f\Vert_{L^\infty(S_T)}+ \mathcal{U}^\mu_{f,S_T}(\sqrt{T-T_1})\Big) \end{equation*} |
and
\begin{equation*} \Vert \partial_{x_i}u_\varepsilon\Vert_{L^\infty(S_T)}\leq c \Vert f\Vert_{L^\infty(S_T)}\text{.} \end{equation*} |
Therefore the L^\infty(S_T) norms of u_\varepsilon , \partial_{x_i}u_\varepsilon ( i\in\{1, \dots, q\} ), \partial_{x_ix_j}^2u_\varepsilon ( i, j\in\{1, \dots q\} ) and Yu_\varepsilon are uniformly bounded (w.r.t. \varepsilon ). Hence we apply the Banach-Alaoglu theorem (in L^\infty(S_T) ) to u_\varepsilon , \partial_{x_i}u_\varepsilon , \partial_{x_ix_j}^2u_\varepsilon and Yu_\varepsilon . In this way we obtain a sequence of real numbers \varepsilon_k\downarrow0 such that u_{\varepsilon_k} and \partial_{x_i} u_{\varepsilon_k} , \partial_{x_ix_j}^2u_{\varepsilon_k} and Yu_{\varepsilon_k} converge in the weak topology \sigma(L^\infty(S_T), L^1(S_T)) to functions in L^\infty(S_T) . Notice that u_{\varepsilon_k} , \partial_{x_i} u_{\varepsilon_k} , Yu_{\varepsilon_k} and \partial_{x_ix_j}^2u_{\varepsilon_k} converge also in the sense of distributions hence, thanks to the uniqueness of the limit in the sense of distributions, we obtain that u\in S^0(S_T) and for i, j\in\{1, \dots, q\}
\begin{align*} &u_{\varepsilon_k}\mathop {\rightharpoonup}\limits_k^* u ,\quad \partial_{x_i} u_{\varepsilon_k} \mathop {\rightharpoonup}\limits_k^* \partial_{x_i} u, \\ \partial_{x_ix_j}^2& u_{\varepsilon_k} \mathop {\rightharpoonup}\limits_k^* \partial_{x_ix_j}^2 u ,\quad Y u_{\varepsilon_k}\mathop {\rightharpoonup}\limits_k^* Y u, \end{align*} |
in L^\infty(S_T) . Notice that actually it is not necessary to take a subsequence since the limit is unique.
It is left to show that \mathcal{L}u = f ; thanks to (4.1), we only need to prove that \mathcal{L}u_\varepsilon \mathop {\rightharpoonup}\limits_\varepsilon^* \mathcal{L}u . Let \phi\in L^1(S_T) and i, j\in\{1, \dots, q\} , then:
\begin{equation*} \begin{split} _{L^\infty}&\langle a_{i,j}\partial_{x_ix_j}u_\varepsilon,\phi\rangle_{L^1} = \:_{L^\infty}\langle\partial_{x_ix_j}u_\varepsilon,a_{i,j}\phi\rangle_{L^1}\to \:_{L^\infty}\langle \partial_{x_ix_j}u,a_{i,j}\phi\rangle_{L^1} = \:_{L^\infty}\langle a_{i,j}\partial_{x_ix_j}u,\phi\rangle_{L^1} \text{.} \end{split} \end{equation*} |
Notice that the estimates are valid since point ⅴ) of Theorem 2.3 only requires u\in S^0(S_T) ; therefore the proof is complete.
Theorem 4.2. Let f\in \mathcal{D}(S_T) be such that supp(f)\subset \mathbb{R}^N\times(\tau, T) for some \tau\in(-\infty, T) and let u:S_T\to\mathbb{R} be defined by:
\begin{equation*} u(x,t) = -\int_{-\infty}^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f(y,s)dyds \mathit{\text{.}} \end{equation*} |
Then, u\in S^0(S_T) , \mathcal{L}u = f and we have the estimates (2.2) and (2.3).
Proof. The proof of this theorem is similar to the previous one but this time f is approximated in a different way. Let \{\phi_i\}_i\subset C_0^\infty(\mathbb{R}^N) be such that 0\leq\phi_i\uparrow1 (as i\to+\infty ), define f_i = f \phi_i and u_i to be the solution of \mathcal{L}u_i = f_i given in Theorem 4.1. Notice that u_i admits the representation formula:
\begin{equation*} u_i(x,t) = -\int_{-\infty}^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f_i(y,s)dyds\text{.} \end{equation*} |
Since \Gamma(x, t;\cdot) is integrable for any fixed (x, t) , thanks to dominated convergence, u_i\to u pointwise. Moreover \Vert u_i-u\Vert_{L^\infty(S_T)} \leq (T-\tau)\Vert f-f_i\Vert_{L^\infty(S_T)} \leq (T-\tau)\Vert f\Vert_{L^\infty(S_T)} , hence, taking any \psi\in L^1(S_T) , by dominated convergence we obtain:
\begin{equation*} _{L^\infty}\langle u-u_i,\psi\rangle_{L^1} = \int_{S_T}(u-u_i)\psi dxdt\xrightarrow[i\to+\infty]{}0 \end{equation*} |
thus u_i\mathop {\rightharpoonup}\limits_{}^*u in L^\infty(S_T) .
Now we observe that f_i \mathop {\rightharpoonup}\limits_i^* f in L^\infty(S_T) . Indeed \Vert f_i-f\Vert_{L^\infty(S_T)} \leq\Vert f\Vert_{L^\infty(S_T)} hence, for fixed \psi\in L^1(S_T) , we can apply again the dominated convergence obtaining:
\begin{equation*} _{L^\infty}\langle f-f_i,\psi\rangle_{L^1} = \int_{S_T}f(1-\phi_i)\psi dxdt\xrightarrow[ i\to+\infty]{}0\text{.} \end{equation*} |
Finally, thanks to the estimates of Theorem 2.3 point ⅴ), employing the same argument as in the proof of Theorem 4.1 we find that u\in S^0(S_T) satisfies the estimates (2.2), (2.3) and, for i, j\in\{1, \dots, q\} , along a subsequence i_k\uparrow+\infty ,
\begin{align*} &u_{i_k}\mathop {\rightharpoonup}\limits_k^* u ,\quad \partial_{x_i} u_{i_k} \mathop {\rightharpoonup}\limits_k^* \partial_{x_i} u, \\ \partial_{x_ix_j}^2& u_{i_k} \mathop {\rightharpoonup}\limits_k^* \partial_{x_ix_j}^2 u ,\quad Y u_{i_k}\mathop {\rightharpoonup}\limits_k^* Y u, \end{align*} |
in L^\infty(S_T) . This entails also \mathcal{L}u = f .
Remark 4.1. As already mentioned, the paper [4] contains the solution to the problem:
\begin{equation*} \left\{\begin{matrix} \mathcal{L}u(x,t) = 0, & \text{ a.e. } t > 0, \quad \forall x\in\mathbb{R}^N,\\ u(x,0) = g(x), &\forall x\in\mathbb{R}^N. \end{matrix}\right. \end{equation*} |
Moreover, the explicit form of the solution is the following:
\begin{equation} u(x,t) = \int_{\mathbb{R}^N}\Gamma(x,t;y,0)g(y)dy\text{.} \end{equation} | (4.3) |
It is easily seen that u has continuous derivatives of any order with respect to x while it is locally Lipschitz with respect to t in (0, +\infty)\times \mathbb{R}^N . Moreover u satisfies \mathcal{L}u(x, t) = 0 for all x\in\mathbb{R}^N and a.e. t\in (0, +\infty) therefore u\in \mathbb{S}(0, +\infty) and \mathcal{L}u = 0 in the sense of distributions.
Now we can conclude the proof of Theorem 1.1.
Proof of Theorem 1.1. Let u_f be the solution of the problem (1.7) with datum g\equiv0 and let u_g be the solution to (1.7) with f\equiv0 . From what we have seen until now, the function u: = u_f+u_g satisfies u\in \mathbb{S}(0, +\infty) , \mathcal{L}u = f and u(\cdot, 0) = g(\cdot) ; moreover it is the unique solution of (1.7) and is given by
\begin{equation*} u(x,t) = -\int_0^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)f(y,s)dyds+\int_{\mathbb{R}^N}\Gamma(x,t;y,0)g(y)dy. \end{equation*} |
We are left to prove the estimates (1.9) and (1.10); to this aim we proceed on u_f and u_g separately. Concerning u_f , applying (2.2) and (2.3) we easily obtain:
\begin{equation} \sum\limits_{i,j = 1}^{q}\Vert\partial_{x_{i}x_{j}}^{2}u_f\Vert_{L^\infty(S_T)}+ \Vert Yu_f\Vert_{L^\infty(S_T)}\leq c\Big(\Vert f\Vert_{L^\infty(S_T)}+\mathcal{U}^\mu_{f,S_T}(\sqrt{T})\Big), \end{equation} | (4.4) |
\begin{equation} \omega_{\partial^2_{x_ix_j}u_f,S_T}(r)+\omega_{Y u_f,S_T}(r)\leq c\mathcal{M}_{f,S_T}(cr)\text{.} \end{equation} | (4.5) |
To prove the estimates for u_g we shall represent u_g as the solution of a nonhomogeneous problem (with null initial datum at t = -1 ) so that we can exploit again the estimates (2.2) and (2.3). Let \varphi\in C^\infty(\mathbb{R}) be a cut-off function satisfying \varphi(t) = 0 for t\leq-1 and \varphi(t) = 1 for t\geq0 . From the regularity properties of g it is easily verified that g\otimes\varphi belongs to \mathbb{S}(-1, +\infty) and therefore, thanks to Corollary 2.1 (applied after a translation in time), we can represent g\otimes\varphi as:
\begin{equation*} g\otimes\varphi(x,t) = -\int_{-1}^t\int_{\mathbb{R}^N}\Gamma(x,t;y,s)\mathcal{L}(g\otimes\varphi)(y,s)dyds\text{.} \end{equation*} |
In particular for t = 0 we have:
\begin{equation} g(x) = -\int_{-1}^0\int_{\mathbb{R}^N}\Gamma(x,0;y,s)\mathcal{L}(g\otimes\varphi)(y,s)dyds\text{.} \end{equation} | (4.6) |
Moreover, it is easily seen that the function v defined by:
\begin{equation*} v(x,t) = -\int_{-1}^0\int_{\mathbb{R}^N}\Gamma(x,t;y,s)\mathcal{L}(g\otimes\varphi)(y,s)dyds \end{equation*} |
is a solution of:
\begin{equation} \mathcal{L}v(x,t) = \mathcal{L}(g\otimes\varphi)(x,t)\:\chi_{(-\infty,0)}(t)\text{.} \end{equation} | (4.7) |
Thanks to the uniqueness of the solution, from (4.7) and (4.6) we obtain
\begin{equation*} u_g(x,t) = v(x,t),\qquad\forall x\in \mathbb{R}^N,\quad\forall t > 0\text{.} \end{equation*} |
Hence we can apply the estimates (2.2) and (2.3) to v (with \tau = -1 ) in order to obtain some estimates for u_g . In particular we have:
\begin{equation} \begin{split} \sum\limits_{i,j = 1}^{q}\Vert\partial_{x_{i}x_{j}}^{2}u_g&\Vert_{L^\infty(S_{0,T})}+ \Vert Yu_g\Vert_{L^\infty(S_{0,T})} \leq c\big(\Vert \mathcal{L}(g\otimes\varphi)\Vert_{L^\infty(\mathbb{R}^{N+1})}+ \mathcal{U}^\mu_{\mathcal{L}(g\otimes\varphi),\mathbb{R}^{N+1}}(\sqrt{T+1})\big), \end{split} \end{equation} | (4.8) |
\begin{equation} \omega_{\partial^2_{x_ix_j}u_g,S_{0,T}}(r)+ \omega_{Y u_g,S_{0,T}}(r)\leq c\mathcal{M}_{\mathcal{L}(g\otimes\varphi),S_T}(cr)\text{.} \end{equation} | (4.9) |
In order to conclude the proof we only need to bound the quantities on the right-hand side of (4.8) and (4.9) in terms of g . Computing explicitly \mathcal{L}(g\otimes \varphi) we obtain:
\begin{equation*} \mathcal{L}(g\otimes \varphi)(x,t) = \sum\limits_{i,j = 1}^qa_{ij}(t)\partial_{x_ix_j}^2g(x)\varphi(t)+ (Yg)(x)\varphi(t)-g(x)\varphi'(t)\text{.} \end{equation*} |
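For the reader's convenience we sketch the computation: since g does not depend on t ,
\begin{equation*} \partial_t(g\otimes\varphi) = g\,\varphi',\qquad \partial^2_{x_ix_j}(g\otimes\varphi) = \partial^2_{x_ix_j}g\;\varphi,\qquad Y(g\otimes\varphi) = (Yg)\,\varphi-g\,\varphi', \end{equation*} |
where, as in Theorem 1.1, Yg reduces to the purely spatial part \sum_{j,k = 1}^N b_{jk}x_k\partial_{x_j}g .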
From these computations we easily infer the following estimates:
\begin{equation} \begin{split} \Vert\mathcal{L}(g\otimes\varphi)\Vert_{L^\infty(\mathbb{R}^{N+1})}& \leq k\Big(\Vert g \Vert_{L^\infty(\mathbb{R}^N)}+ \sum\limits_{i,j = 1}^q\Vert\partial_{x_ix_j}^2 g\Vert_{L^\infty(\mathbb{R}^N)}+ \Vert Y g\Vert_{L^\infty(\mathbb{R}^N)}\Big), \end{split} \end{equation} | (4.10) |
\begin{equation} \omega_{\mathcal{L}(g\otimes\varphi),\mathbb{R}^{N+1}}(r)\leq k\Big(\omega_{g,\mathbb{R}^N}(r)+ \sum\limits_{i,j = 1}^q\omega_{\partial_{x_ix_j}^2g,\mathbb{R}^N}(r)+ \omega_{Yg,\mathbb{R}^N}(r)\Big)\text{.} \end{equation} | (4.11) |
where the constant k depends only on \Vert a_{ij}\Vert_{L^\infty(\mathbb{R}^{N+1})} , \Vert \varphi\Vert_{L^\infty(\mathbb{R})} and \Vert \varphi'\Vert_{L^\infty(\mathbb{R})} . Thanks to (4.8)–(4.11) we obtain:
\begin{equation} \begin{split} &\sum\limits_{i,j = 1}^{q}\Vert\partial_{x_{i}x_{j}}^{2}u_g\Vert_{L^\infty(S_{0,T})}+ \Vert Yu_g\Vert_{L^\infty(S_{0,T})} \\ &\leq c\Big(\Vert g\Vert_{L^\infty(\mathbb{R}^N)}+ \sum\limits_{i,j = 1}^q\Vert\partial_{x_ix_j}^2 g\Vert_{L^\infty(\mathbb{R}^N)}+ \Vert Y g\Vert_{L^\infty(\mathbb{R}^N)} \\ &+\mathcal{U}^\mu_{g,\mathbb{R}^N}(\sqrt{T+1})+ \sum\limits_{i,j = 1}^q\mathcal{U}^\mu_{\partial_{x_ix_j}^2 g,\mathbb{R}^N}(\sqrt{T+1})+ \mathcal{U}^\mu_{Y g,\mathbb{R}^N}(\sqrt{T+1})\Big), \end{split} \end{equation} | (4.12) |
\begin{equation} \begin{split} \omega_{\partial^2_{x_ix_j}u_g,S_{0,T}}(r)+\omega_{Y u_g,S_{0,T}} (r) \leq c\Big(\mathcal{M}_{g,\mathbb{R}^N}&(cr)+ \sum\limits_{i,j = 1}^q\mathcal{M}_{\partial_{x_ix_j}^2g,\mathbb{R}^N}(cr)+ \mathcal{M}_{Yg,\mathbb{R}^N}(cr)\Big)\text{.} \end{split} \end{equation} | (4.13) |
Finally, combining (4.4), (4.5), (4.12) and (4.13) we deduce (1.9) and (1.10).
Starting from the theory of [2,3,4] we were able to prove the well-posedness of a Cauchy problem for a degenerate Kolmogorov-Fokker-Planck operator under weak regularity assumptions on the coefficients and on the data. The main point was to prove the existence of a solution for the problem with null initial data, since the existence of a solution to the homogeneous problem with nonnull initial data, the stability estimates and the uniqueness of the solution follow from known results. Since an explicit fundamental solution is known, in order to find a solution to the Cauchy problem we employed the Duhamel method. However, due to the low regularity of the coefficients, we first considered the case of a smooth and compactly supported datum; then, by approximation, we have shown the existence of a solution for a compactly supported datum satisfying the minimal regularity assumptions, and finally we weakened the assumptions on the support of the datum.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This paper originates from the master thesis [1], written at Politecnico di Milano under the guidance of Prof. Marco Bramanti and Dr. Stefano Biagi, to whom my gratitude goes. The mentioned thesis focuses on the same operator but is based on the Schauder theory; after suitable modifications, Sections 3 and 4 of this paper are taken from Sections 2.2 and 2.3 of [1]. I wish to thank Prof. Marco Bramanti and Dr. Stefano Biagi for the help and the many suggestions they gave me during the writing of this article.
The author declares no conflict of interest.
[1] T. Barbieri, On Kolmogorov Fokker Planck equations with linear drift and time dependent measurable coefficients, MS. Thesis, Politecnico di Milano, 2022. Available from: http://hdl.handle.net/10589/196258.
[2] S. Biagi, M. Bramanti, Schauder estimates for Kolmogorov-Fokker-Planck operators with coefficients measurable in time and Hölder continuous in space, J. Math. Anal. Appl., 533 (2024), 127996. https://doi.org/10.1016/j.jmaa.2023.127996
[3] S. Biagi, M. Bramanti, B. Stroffolini, KFP operators with coefficients measurable in time and Dini continuous in space, J. Evol. Equ., unpublished work, 2023.
[4] M. Bramanti, S. Polidoro, Fundamental solutions for Kolmogorov-Fokker-Planck operators with time-depending measurable coefficients, Math. Eng., 2 (2020), 734–771. https://doi.org/10.3934/mine.2020035
[5] M. Di Francesco, A. Pascucci, On a class of degenerate parabolic equations of Kolmogorov type, Appl. Math. Res. eXpress, 2005 (2005), 77–116. https://doi.org/10.1155/AMRX.2005.77
[6] A. Kolmogorov, Zufällige Bewegungen (zur Theorie der Brownschen Bewegung), Ann. Math., 35 (1934), 116–117.
[7] L. P. Kuptsov, Fundamental solutions of certain degenerate second-order parabolic equations, Math. Notes Acad. Sci. USSR, 31 (1982), 283–289. https://doi.org/10.1007/BF01138938
[8] E. Lanconelli, S. Polidoro, On a class of hypoelliptic evolution operators, Rend. Sem. Mat. Univ. Pol. Torino, 52 (1994), 29–63.
[9] G. Lucertini, S. Pagliarani, A. Pascucci, Optimal regularity for degenerate Kolmogorov equations with rough coefficients, arXiv, 2022. https://doi.org/10.48550/arXiv.2204.14158
[10] I. Sonin, On a class of degenerate diffusion processes, Theory Prob. Appl., 12 (1967), 490–496. https://doi.org/10.1137/1112059
[11] M. Weber, The fundamental solution of a degenerate partial differential equation of parabolic type, Trans. Amer. Math. Soc., 71 (1951), 24–37.