Research article

On the convergence rates of discrete solutions to the Wave Kinetic Equation

  • In this paper, we consider the long-term behavior of some special solutions to the Wave Kinetic Equation. This equation provides a mesoscopic description of wave systems interacting nonlinearly via the cubic NLS equation. Escobedo and Velázquez showed that, starting with initial data given by countably many Dirac masses, solutions remain a linear combination of countably many Dirac masses at all times. Moreover, there is convergence to a single Dirac mass at long times. The first goal of this paper is to give quantitative rates for the speed of said convergence. In order to study the optimality of the bounds we obtain, we introduce and analyze a toy model accounting only for the leading order quadratic interactions.

    Citation: Michele Dolce, Ricardo Grande. On the convergence rates of discrete solutions to the Wave Kinetic Equation[J]. Mathematics in Engineering, 2024, 6(4): 536-558. doi: 10.3934/mine.2024022




    In recent years, there has been an increasing interest in understanding the average behavior of out-of-equilibrium systems of many waves undergoing weakly nonlinear interactions. A fundamental example of such a system is given by the cubic Schrödinger equation.

    The kinetic formalism for such wave systems, known as wave kinetic theory, consists in studying the evolution of the variance of the Fourier coefficients of such wave systems in the kinetic limit (i.e., as their size grows and the strength of the interactions diminishes), see [15] for details. This variance, upon rescaling in time, has been shown to satisfy the Wave Kinetic Equation (WKE):

    \begin{equation} \partial_t n(t,\xi) = K(n(t,\cdot)), \qquad \xi\in\mathbb{R}^3, \end{equation} (1.1)

    where

    \begin{equation*} K(n)(\xi) = \iint_{(\mathbb{R}^3)^2\cap\{\xi = \xi_1-\xi_2+\xi_3\}} \delta_{\mathbb{R}}\left(|\xi_1|^2-|\xi_2|^2+|\xi_3|^2-|\xi|^2\right) n_1n_2n_3n\left(\frac{1}{n}-\frac{1}{n_1}+\frac{1}{n_2}-\frac{1}{n_3}\right) d\xi_1\, d\xi_3, \end{equation*}

    with n_j = n(\xi_j).

    Kinetic equations for wave systems first appeared in the work of Peierls [17] and Nordheim [16], and in the work of Hasselmann in the context of water waves [11,12]. A rigorous mathematical derivation was only recently achieved, starting with the work of Buckmaster, Germain, Hani, Shatah [1], that of Collot and Germain [2], and culminating with the recent works of Deng and Hani [3,4,5,6,7], where a full derivation is obtained. Other wave systems have also recently been considered, see for instance [10,18].

    Despite the rigorous justification of the WKE, many questions remain unanswered regarding the behavior of solutions to (1.1). The study of the well-posedness and long-term behavior of certain solutions to the WKE (1.1) was initiated by Escobedo and Velázquez in [8], as well as in [9]. In their work, they consider radial initial data in 3D, where one can explicitly integrate the delta function in (1.1) in the angular variables, leading to the isotropic WKE:

    \begin{equation} \begin{split} &\partial_t g_1 = \int_{D(\omega_1)}\Phi\left[\left(\frac{g_1}{\sqrt{\omega_1}}+\frac{g_2}{\sqrt{\omega_2}}\right)\frac{g_3g_4}{\sqrt{\omega_3\omega_4}} - \left(\frac{g_3}{\sqrt{\omega_3}}+\frac{g_4}{\sqrt{\omega_4}}\right)\frac{g_1g_2}{\sqrt{\omega_1\omega_2}}\right] d\omega_3\, d\omega_4, \qquad g_1\mid_{t=0} = g_{in}(\omega_1),\\ &g_i\, d\omega_i = g(t,d\omega_i), \qquad \omega_2 = \omega_3+\omega_4-\omega_1, \qquad \Phi = \min\left\{\sqrt{\omega_1},\sqrt{\omega_2},\sqrt{\omega_3},\sqrt{\omega_4}\right\},\\ &D(\omega_1) = \left\{\omega_3\geq 0,\ \omega_4\geq 0;\ \omega_3+\omega_4\geq\omega_1\right\}, \qquad \omega_1\geq 0. \end{split} \end{equation} (1.2)

    Here g(t,\omega) = |\xi|\, n(t,|\xi|^2) and \omega = |\xi|^2, where n is the solution to (1.1). It is convenient to work with g, since it can be interpreted as a density of particles in the space \{\omega\geq 0\} [8]. One can thus define

    \begin{equation*} \text{Mass:}\quad M = \int_{\mathbb{R}_+} g(t,d\omega), \qquad\qquad \text{Energy:}\quad E = \int_{\mathbb{R}_+} \omega\, g(t,d\omega). \end{equation*}

    Both quantities above are conserved as long as they are both initially finite.

    Escobedo and Velázquez then consider the weak formulation of the Eq (1.2), namely

    \begin{equation} \begin{split} \frac{d}{dt}\left(\int_{\mathbb{R}_+}\varphi(t,\omega)\, g(t,d\omega)\right) =&\ \int_{\mathbb{R}_+}\partial_t\varphi(t,\omega)\, g(t,d\omega) + \int_{\mathbb{R}_+^3}\Phi\,\frac{g_1g_2g_3}{\sqrt{\omega_1\omega_2\omega_3}}\left[\varphi_4+\varphi_3-\varphi_2-\varphi_1\right] d\omega_1\, d\omega_2\, d\omega_3,\\ &\ \omega_4 = \omega_1+\omega_2-\omega_3\geq 0, \qquad \varphi_i = \varphi(\omega_i), \end{split} \end{equation} (1.3)

    almost everywhere in time, for any test function \varphi\in C^2_c([0,T)\times\mathbb{R}_+).

    One may show that (1.3) is globally well-posed in a space of Radon measures \mathcal{M}_{\rho} defined in (1.13) below. Moreover, it is possible to study the long-term behavior of such measure solutions and, among other results, Escobedo and Velázquez prove that [8]:

    ● If the (conserved) mass M = \int_{\mathbb{R}_+} g_{in}\, d\omega is finite, then

    \begin{equation} g(t,\cdot) \rightharpoonup M\,\delta_{R_*} \quad \text{as } t\to\infty, \end{equation} (1.4)

    with R_* = \inf A_{\infty}, where the set A_{\infty} is defined in (2.6)-(2.7).

    ● If \mathrm{supp}(g_{in})\subseteq\mathbb{N}, then

    \begin{equation} \mathrm{supp}(g(t,\cdot))\subseteq\mathbb{N} \quad \text{for any } t>0. \end{equation} (1.5)

    The goal of this paper is to quantify the speed of the convergence in (1.4) in a special case where we start with energy concentrated on a set of discrete frequencies away from the origin. In this setting, we first observe an instantaneous spreading of energy towards all (discrete) frequencies, before the solution converges to a single Dirac mass concentrated at R_*. These complicated dynamics were first studied qualitatively in [8] in a more general scenario. The purpose of this article is to present some quantitative results in the special case of initial data displaying discrete dynamics.

    We consider the case \mathrm{supp}(g_{in})\subseteq\mathbb{N}. It is therefore useful to introduce a set of test functions \varphi_n\in C_c^{\infty}(\mathbb{R}_+) such that \mathrm{supp}(\varphi_n)\subset B(n,1/2) and \varphi_n(n) = 1, and define

    \begin{equation} F_n(t) := \int_{\mathbb{R}_+}\varphi_n(\omega)\, g(t,d\omega). \end{equation} (1.6)

    Since \mathrm{supp}(g(t,\cdot))\subseteq\mathbb{N} by (1.5), the functions F_n fully describe the dynamics of g solving (1.3). By a direct inspection of the weak formulation (1.3), we will derive the (infinite) system of ODEs describing the evolution of the F_n. The particular structure of the equations is not relevant for the statement of the main result, but the equations of interest can be found in Lemma 2.4. Our first result, proved in Section 3, is the following:

    Theorem 1.1. Let g_{in} = M_1\delta_1 + \sum_{j=2}^{\infty} m_j\delta_j be the initial data of (1.2), with M_1>0 and (m_j)_{j=2}^{\infty}\in\ell^{1,r}(\mathbb{R}_+) with r>1 (see (1.12)). Define

    \begin{equation*} M_2 := \sum\limits_{j=2}^{\infty} m_j, \qquad E = \sum\limits_{j=1}^{\infty} j\, m_j =: M_1+E_2, \qquad \text{with } m_1 := M_1, \end{equation*}

    and, without loss of generality, assume M_1+M_2 = 1. Then, there exists t_0>0 such that for any t>t_0 the following inequalities hold true:

    \begin{equation} \frac{c_2}{t-t_0+3E/b_1(t_0)+C_2} \leq F_2(t) \leq 1-F_1(t) \leq \frac{c_1(t_0)}{\sqrt{b_1(t_0)(t-t_0)+3E}}, \end{equation} (1.7)

    where c_1(t_0), b_1(t_0) are given in (3.2), whereas c_2, C_2 are explicitly computable. Moreover, if M_1\geq 3E_2/19 then t_0 = 0.

    Notice that, since the mass is conserved, we have

    \begin{equation*} F_k(t) \leq \sum\limits_{j=2}^{\infty} F_j(t) = 1-F_1(t) \end{equation*}

    for all k\geq 2. Therefore, the upper bound in Theorem 1.1 is true for all F_k with k\geq 2, whereas we are only able to prove the lower bound for F_2.

    Thanks to the result above, we know that, if we wait long enough or we start with M_1 large enough, the convergence towards the Dirac mass at \{1\} is at least of order O(t^{-1/2}) but cannot be faster than O(t^{-1}).

    Remark 1.2. If we set t_0 = 0 but M_1 < 3E_2/19, then we need to change the lower bound in (1.7) to O(t^{-\alpha}) with a time-rate \alpha>1, explicitly given in (3.4). This suggests that the rate of convergence could be faster for small times and slow down as mass concentrates at \{1\}. Note that the convergence is at most polynomial for all times.

    Remark 1.3. If the lower bound for 1-F_1 in (1.7) were sharp, then one could derive lower and upper bounds for the speed of convergence of all the functions F_n for n large enough. In such case, they would decay as O_n(t^{-1}). This is the content of Proposition 3.4 in Section 3.

    In order to understand whether there exist solutions exhibiting a decay that saturates the lower bound in (1.7), we propose a toy model where we only keep the terms in (1.3) involving at least one interaction with the leading term F_1. Moreover, we replace each factor F_1 by its limit 1. In Section 4, we show that these reductions give rise to the following quadratic toy model:

    \begin{equation} \frac{d}{dt}F_n = \frac{4F_n}{\sqrt{n}}\,\frac{F_{2n-1}}{\sqrt{2n-1}} - \frac{4F_n}{\sqrt{n}}\sum\limits_{k=2}^{n-1}\frac{F_k}{\sqrt{k}} - 2\left(\frac{F_n}{\sqrt{n}}\right)^2 + 2\sum\limits_{k=n+1}^{\infty}\frac{F_kF_{k+1-n}}{\sqrt{k(k+1-n)}}. \end{equation} (1.8)

    We then look for self-similar solutions of the form

    \begin{equation} F_n(t) = \frac{\beta_n\sqrt{n}}{t}, \qquad \text{for } \beta_n\in[0,\infty),\ n\in\mathbb{N},\ n\geq 2. \end{equation} (1.9)

    Analogous ansätze in the continuous setting are common in the literature, see for instance the related works of Kierkels and Velázquez [13,14] in the WKE context.
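    To see why the ansatz (1.9) is consistent, note that with F_n(t) = \beta_n\sqrt{n}/t one has \frac{d}{dt}F_n = -\sqrt{n}\,\beta_n/t^2 and F_n/\sqrt{n} = \beta_n/t, so that every term in (1.8) scales like t^{-2}. Matching the coefficients of t^{-2} yields the time-independent system

    \begin{equation*} -\sqrt{n}\,\beta_n = 4\beta_n\beta_{2n-1} - 4\beta_n\sum\limits_{k=2}^{n-1}\beta_k - 2\beta_n^2 + 2\sum\limits_{k=n+1}^{\infty}\beta_k\beta_{k+1-n}, \end{equation*}

    which becomes (1.10) once the last sum is truncated at 3N-2.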

    In such a toy model, positivity of solutions cannot be expected to hold anymore, unlike in (1.3). In fact, the question of the existence of solutions (1.9) with strictly positive \beta_n remains an interesting open question, which we address through a further reduction. Indeed, our next result consists of showing the existence of strictly positive solutions for a truncated version of (1.8), since positivity is a key and physical feature of the solutions to the full WKE (1.2). In Section 4, we show that simple truncations of (1.8) do not admit positive solutions. We thus consider:

    \begin{equation} -\sqrt{n}\,\beta_n = 4\beta_n\beta_{2n-1} - 4\beta_n\sum\limits_{k=2}^{n-1}\beta_k - 2\beta_n^2 + 2\sum\limits_{k=n+1}^{3N-2}\beta_k\beta_{k+1-n}, \qquad \text{for } n = 2,\ldots,N, \end{equation} (1.10)

    where N\in\mathbb{N}, N\geq 2, may be as large as desired. We then have the following:

    Theorem 1.4. Fix any N\in\mathbb{N} with N\geq 4. Fix \lambda_1,\lambda_2>0 such that \lambda_1\lambda_2>N/4. Then there exists some \delta_0(N,\lambda_1,\lambda_2)>0 such that for all \delta<\delta_0, there exists a solution to (1.10) (which solves (1.8) for n\leq N with the ansatz (1.9)) such that

    \begin{equation} \begin{split} \beta_2 & = \sqrt{\lambda_1\lambda_2+1/8}+\sqrt{2}/4+\mathcal{O}(\sqrt{N}\,\delta\,\max\{\lambda_1,\lambda_2\}),\\ \beta_{2N} & = \lambda_1+\mathcal{O}(\delta),\\ \beta_{2N+1} & = \lambda_2+\mathcal{O}(\delta),\\ 0<\beta_j & = \mathcal{O}(\sqrt{N}\,\delta\,\max\{\lambda_1,\lambda_2\}), \qquad j = 3,\ldots,N,\\ 0<\beta_j & = \mathcal{O}(\delta), \qquad j = 2N+2,\ldots,3N-2,\\ \beta_j & = 0, \qquad \text{otherwise}. \end{split} \end{equation} (1.11)

    The result in the theorem above is an example of a solution whose leading order terms are \beta_2, \beta_{2N}, \beta_{2N+1}. This suggests that there could be self-similar solutions to (1.8) of the form (1.9). In fact, our long-term goal is not only to construct such solutions of the toy model (1.8), but to carry out a nonlinear perturbative argument for the full (1.3) around this "approximate" solution found in Theorem 1.4.

    The article is organized as follows: In Section 2, we discuss some background results and present the discrete WKE associated to (1.3) with initial data given by a linear combination of Dirac masses. In Section 3, we prove Theorem 1.1. In Section 4, we introduce our toy model, discuss possible truncations and prove Theorem 1.4. Finally, in Section 5, we give the derivation of the discrete WKE from (1.3).

    The set of natural numbers \mathbb{N} is taken without 0, namely \mathbb{N} = \{1,2,\ldots\}, and we define \mathbb{R}_+ = [0,\infty). When the index set of a sum is empty, we consider that sum to be zero, e.g., \sum_{k=1}^{n-1}a_k = 0 whenever n\leq 1.

    We let \ell^{1,r}(\mathbb{R}_+), r\geq 0, be the Banach space of sequences (m_j)_{j=1}^{\infty}, m_j\geq 0, with the norm:

    \begin{equation} \left\|(m_j)_j\right\|_{\ell^{1,r}} = \sum\limits_{j=1}^{\infty} j^r m_j < \infty. \end{equation} (1.12)

    Finally, we consider the space \mathcal{M}_{\rho} of non-negative Radon measures \mu such that

    \begin{equation} \left\|\mu\right\|_{\rho} = \sup\limits_{R>1}\frac{1}{(1+R)^{\rho-1}}\,\frac{1}{R}\int_{R/2}^{R}\mu(d\omega) + \int_{0}^{1}\mu(d\omega) < \infty. \end{equation} (1.13)

    In this note we will consider initial data \mu_0\in\mathcal{M}_{\rho} with some \rho<2, which guarantees a finite and conserved energy. This is equivalent to requiring (m_j)_{j\in\mathbb{N}}\in\ell^{1,r}(\mathbb{R}_+) with r>1, as stated in Theorem 1.1.

    We will often omit the differential in some integrals when it is clear from the context, e.g.,

    \begin{equation*} \int_0^{\infty}\mu(t,d\omega) = \int_0^{\infty}\mu(t). \end{equation*}

    In this section, we summarize a few results in [8] which will be useful in the rest of the paper. Furthermore, we present the equations satisfied by Fn defined in (1.6). A full proof of the derivation, which is technically simple yet computationally tedious, is given in Section 5.

    First of all, by [8, Proposition 2.28] we know that if g_{in}\in\mathcal{M}_{\rho} with \rho<2, then the weak solution to (1.2) has conserved and finite energy and mass for all times. As mentioned after (1.13), the initial data in Theorem 1.1 belongs to \mathcal{M}_{\rho} with \rho<2, and therefore we always have finite and conserved mass and energy.

    Then we need two key results that will allow us to prove bounds on F_1(t); they are the foundation of our analysis of F_n(t) for n>1. The first one is a combination of [8, Proposition 2.22] and [8, Lemma 2.25].

    Lemma 2.1. Suppose that g\in\mathcal{M}_{\rho}. Then, for any \varphi\in C^2_b(\mathbb{R}_+),

    \begin{equation} \int_{\mathbb{R}_+^3}\Phi\left[\left(\frac{g_1}{\sqrt{\omega_1}}+\frac{g_2}{\sqrt{\omega_2}}\right)\frac{g_3g_4}{\sqrt{\omega_3\omega_4}} - \left(\frac{g_3}{\sqrt{\omega_3}}+\frac{g_4}{\sqrt{\omega_4}}\right)\frac{g_1g_2}{\sqrt{\omega_1\omega_2}}\right]\varphi_1 = \int_{\mathbb{R}_+^3}\frac{g_1g_2g_3}{\sqrt{\omega_1\omega_2\omega_3}}\, G_{\varphi} \end{equation} (2.1)

    where both integrals are in dω1dω2dω3 and the following notation is used:

    \begin{equation} G_{\varphi}(\omega_1,\omega_2,\omega_3) = \frac{1}{3}\left[\sqrt{\omega_-}\, H^1_{\varphi}(\omega_1,\omega_2,\omega_3) + \sqrt{(\omega_0+\omega_--\omega_+)_+}\, H^2_{\varphi}(\omega_1,\omega_2,\omega_3)\right], \end{equation} (2.2)
    \begin{equation} H^1_{\varphi}(\omega_1,\omega_2,\omega_3) = \varphi(\omega_++\omega_0-\omega_-) + \varphi(\omega_+-\omega_0+\omega_-) - 2\varphi(\omega_+), \end{equation} (2.3)
    \begin{equation} H^2_{\varphi}(\omega_1,\omega_2,\omega_3) = \varphi(\omega_+) + \varphi(\omega_-+\omega_0-\omega_+) - \varphi(\omega_0) - \varphi(\omega_-), \end{equation} (2.4)

    and

    \begin{equation*} \omega_+ = \max\{\omega_1,\omega_2,\omega_3\}, \qquad \omega_- = \min\{\omega_1,\omega_2,\omega_3\}, \qquad \omega_0 = \{\omega_1,\omega_2,\omega_3\}\setminus\{\omega_+,\omega_-\}. \end{equation*}

    Moreover, if \varphi is convex we have that G_{\varphi}\geq 0.

    As a consequence, if g\in\mathcal{M}_{\rho} is a weak solution to (1.3) and \varphi\in C^2(\mathbb{R}_+) is a convex function, then

    \begin{equation} \frac{d}{dt}\left(\int_0^{\infty}\varphi(\omega)\, g(t,d\omega)\right) \geq 0, \qquad \text{for a.e. } t\geq 0. \end{equation} (2.5)

    The monotonicity formula (2.1) follows from a direct computation using the symmetries of the Eq (1.3). The proof can be found in [8, Proposition 2.22].

    The next result guarantees that if we consider initial data g_{in} with discrete support, the dynamics will be discrete and nontrivial for later times. Before we state it, we define some auxiliary sets to identify \mathrm{supp}(g(t)), with g being the solution to (1.3). Let A_1 = \mathrm{supp}(g_{in}) and define A_n inductively as:

    \begin{equation} A_{n+1} = \{x+y-z\ :\ x,y,z\in A_n\}\cap(0,\infty). \end{equation} (2.6)

    The idea behind these sets is the following: The wave interactions captured by the WKE (1.3) are those between waves of different frequencies satisfying \omega_4 = \omega_1+\omega_2-\omega_3. As a result, waves with frequencies in the set A_n will produce waves with frequencies in the set A_{n+1}. In order to consider a set that includes all possible frequencies, we cannot stop this process at any finite n, and it is therefore natural to define:

    \begin{equation} A_{\infty} = \bigcup\limits_{n=1}^{\infty} A_n. \end{equation} (2.7)

    Notice that, if we start with a finite number of Dirac masses, namely

    \begin{equation*} g_{in} = M_1\delta_1 + \sum\limits_{j=2}^{N_0} m_j\delta_j \end{equation*}

    for some finite N_0\geq 2, it is easy to show that

    \begin{equation} A_1 = \{1,2,\ldots,N_0\} \quad\Longrightarrow\quad A_{\infty} = \mathbb{N}. \end{equation} (2.8)
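    This frequency-generation mechanism can be observed numerically. The following sketch (an illustrative computation; the initial size N_0 = 3 and the cutoff K = 20 are arbitrary choices, not taken from the paper) iterates the map (2.6), discarding frequencies above K to keep the sets finite:

```python
# Iterate A_{n+1} = {x + y - z : x, y, z in A_n} ∩ (0, ∞) from A_1 = {1, ..., N0}.
# Frequencies above an (artificial) cutoff K are discarded to keep the sets finite;
# this does not affect which small frequencies get produced.
def next_generation(A, K):
    return {x + y - z for x in A for y in A for z in A if 0 < x + y - z <= K}

def limit_set(N0, K, max_iter=100):
    A = set(range(1, N0 + 1))
    for _ in range(max_iter):
        B = A | next_generation(A, K)
        if B == A:           # stabilized below the cutoff
            return A
        A = B
    return A

print(sorted(limit_set(3, 20)))
```

    In accordance with (2.8), the iteration saturates to all integer frequencies below the cutoff after a few steps.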

    We are now ready to state the following:

    Lemma 2.2 (Lemma 3.5 in [8]). Let \rho<1, g_{in}\in\mathcal{M}_{\rho} and let g be the weak solution to the WKE (1.3). Suppose that M = \int_{\mathbb{R}_+} g_{in}(d\omega) > 0. Then for any x\in A_{\infty}, any t>0 and any r>0 we have that

    \begin{equation*} \int_{B(x,r)} g(t,d\omega) > 0. \end{equation*}

    The lemma above guarantees that \mathbb{N} = A_{\infty}\subseteq\mathrm{supp}(g(t)) for all times t>0. In order to show the opposite inclusion, thus proving that \mathrm{supp}(g(t)) = \mathbb{N}, we need the following result.

    Lemma 2.3 (Lemma 3.8 in [8]). Let \rho<1, g_{in}\in\mathcal{M}_{\rho} and let g be the weak solution to the WKE (1.3). Suppose that M = \int_{\mathbb{R}_+} g_{in}(d\omega) > 0 and that \inf A_{\infty} > 0. Then \mathrm{supp}(g(t))\subseteq\overline{A_{\infty}} for any t\geq 0.

    The dynamics of our problem are therefore discrete. Thanks to Lemmas 2.2 and 2.3, we know that the functions

    \begin{equation} F_n(t) = \int_{\mathbb{R}_+}\varphi_n(\omega)\, g(t,d\omega) = \int_{\{n\}} g(t,d\omega), \qquad n\geq 1, \end{equation} (2.9)

    as introduced in (1.6), fully capture the dynamics of the problem. Thanks to the conservation of mass, we may assume that our solutions have unit mass. Then the conserved energy yields:

    \begin{equation} \sum\limits_{n=1}^{+\infty} F_n(t) = M_1+M_2 = 1, \qquad E = \sum\limits_{n=1}^{\infty} n\, F_n(t) = M_1+E_2, \end{equation} (2.10)

    where M2,E2 are as defined in Theorem 1.1. In order to prove lower bounds on F2, it is convenient to define

    \begin{equation} H_n(t) := \left(F_n(t)\right)^{-1}, \qquad t>0. \end{equation} (2.11)

    We know these functions are well defined, since Lemma 2.2 guarantees that F_n(t)>0 for all t>0 and n\in\mathbb{N}. Next we state the (infinite) system of ODEs for these quantities.

    Lemma 2.4. Let Fn,Hn be respectively given in (1.6) and (2.11). Then

    \begin{equation} \frac{d}{dt}F_n = F_n\left(Q_n-U_n\right) - F_n^2\, L_n + C_n, \end{equation} (2.12)
    \begin{equation} \frac{d}{dt}H_n = H_n\left(U_n-Q_n\right) + L_n - H_n^2\, C_n, \end{equation} (2.13)

    where the terms Ln,Qn,Un,Cn are defined as follows:

    \begin{equation} L_n = \frac{2}{n}F_1 + \frac{2}{n}\left(\sum\limits_{k=2}^{n-1}F_k + \sum\limits_{k=n+1}^{2n-1}F_k\,\frac{\sqrt{2n-k}}{\sqrt{k}}\right); \end{equation} (2.14)
    \begin{equation} \begin{split} Q_n = &\ \frac{4}{\sqrt{n}}\,F_1\,\frac{F_{2n-1}}{\sqrt{2n-1}} + \sum\limits_{k=n+1}^{\infty}\frac{F_k^2}{k} + \frac{1}{\sqrt{n}}\sum\limits_{k=\lceil n/2\rceil}^{n-1}\frac{F_k^2}{k}\sqrt{2k-n}\\ &\ + \frac{2}{\sqrt{n}}\left(\sum\limits_{\substack{m=2\\ m\neq n}}^{2n-2}\frac{\min\{\sqrt{m},\sqrt{2n-m}\}}{\sqrt{m(2n-m)}}\,F_mF_{2n-m} + \sum\limits_{k=\lceil\frac{n+1}{2}\rceil}^{n-1}\ \sum\limits_{m=n+1-k}^{k-1}\frac{F_kF_m}{\sqrt{km}}\,\sqrt{k+m-n}\right); \end{split} \end{equation} (2.15)
    \begin{equation} \begin{split} U_n = &\ \frac{4}{\sqrt{n}}\,F_1\sum\limits_{k=2}^{n-1}\frac{F_k}{\sqrt{k}} + \frac{2}{\sqrt{n}}\left(\sum\limits_{k=2}^{n-1}\ \sum\limits_{m=n+1}^{n+k}\frac{F_kF_m}{\sqrt{km}}\,\sqrt{n+k-m} + 2\sum\limits_{k=2}^{n-1}\ \sum\limits_{m=2}^{k-1}\frac{F_kF_m}{\sqrt{k}}\right)\\ &\ + \frac{2}{\sqrt{n}}\sum\limits_{k=n+1}^{\infty}\ \sum\limits_{m=k+1}^{n+k-1}\frac{F_kF_m}{\sqrt{km}}\,\sqrt{n+k-m}; \end{split} \end{equation} (2.16)
    \begin{equation} \begin{split} C_n = &\ 2F_1\sum\limits_{k=n+1}^{\infty}\frac{F_kF_{k+1-n}}{\sqrt{k(k+1-n)}} + \sum\limits_{k=\lceil\frac{n+1}{2}\rceil}^{n-1}\frac{F_k^2\,F_{2k-n}}{k}\\ &\ + 2\left(\sum\limits_{k=2}^{n-1}\ \sum\limits_{\substack{m=2\\ k+m>n}}^{k-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{km}} + \sum\limits_{k=n+1}^{\infty}\ \sum\limits_{m=2}^{n-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{k(k+m-n)}}\right)\\ &\ + \sqrt{n}\left(2\sum\limits_{k=n+2}^{\infty}\ \sum\limits_{m=n+1}^{k-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{km(k+m-n)}} + \sum\limits_{k=n+1}^{\infty}\frac{F_k^2\,F_{2k-n}}{k\sqrt{2k-n}}\right). \end{split} \end{equation} (2.17)

    The proof of Lemma 2.4 is a long albeit elementary computation, and so we postpone it to Section 5.

    Remark 2.5. In the definitions of Ln,Qn,Un,Cn, we have isolated the terms containing F1 that appear as first terms on the right-hand side. For the long-term behavior, one should have in mind that F1=1 up to small errors. Therefore when deriving a toy model or when doing a perturbative argument, it is natural to replace F1 by 1 and consider all the other terms as lower order terms to be neglected (more precisely, one would hope to bootstrap a suitable smallness condition).

    In this section, we prove Theorem 1.1 and Proposition 3.4. We start by studying the function F1. In particular, we want to be as quantitative as possible since, in view of the conservation of the mass (2.10), bounds on F1 will yield a priori bounds for the rest of the Fn, n2.

    Proposition 3.1. Let M_1, M_2\geq 0 be such that M_1+M_2 = 1. Then for t>t_0 we have

    \begin{equation} 1-F_1(t) \leq \frac{c_1(t_0)}{\sqrt{b_1(t_0)(t-t_0)+3E}}, \end{equation} (3.1)
    \begin{equation} c_1(t_0) = \sqrt{3E}\left(1-F_1(t_0)\right), \qquad b_1(t_0) = 2F_1(t_0)\left(1-F_1(t_0)\right)^2. \end{equation} (3.2)

    When t_0 = 0, notice that c_1(0) =: c_1 = \sqrt{3E}(1-M_1) and b_1(0) =: b_1 = 2M_1(1-M_1)^2. Using (3.1), we readily obtain the following:

    Corollary 3.2. For t>t0 we have that

    \begin{equation} 1-F_1(t) \geq F_2(t) \geq \frac{c_2}{(t-t_0+3E/b_1(t_0))^{\alpha}+C_2}, \end{equation} (3.3)

    where c_2, C_2 can be explicitly computed and

    \begin{equation} \alpha = \max\left\{1, \frac{3E}{16\,F_1(t_0)}\right\}. \end{equation} (3.4)

    Remark 3.3. Given that F_1(t)\to 1 monotonically as t\to\infty, one can always choose t_0 large enough so that

    \begin{equation} \frac{3E}{16\,F_1(t_0)} \leq 1 \quad\Longleftrightarrow\quad F_1(t_0) \geq \frac{3E}{16}. \end{equation} (3.5)

    This is telling us that a lower bound with a rate O(t^{-\alpha}) with \alpha>1 cannot be sustained for all times. Moreover, since E = M_1+E_2, if we start with M_1\geq 3E_2/19 then we can take t_0 = 0 and \alpha = 1.

    The proof of Theorem 1.1 follows directly by combining Proposition 3.1 with Corollary 3.2, so we only prove these latter results. The start of the proof of Proposition 3.1 is similar to that of Theorem 3.2 in [8]. The convergence result presented in Theorem 3.2 in [8] is correct. However, the rate of convergence one could derive from the last differential inequality in its proof (see the second equality at page 53 in [8]) does not hold, due to an error in said inequality. We fix this error as part of the proof of Proposition 3.1. In fact, if the convergence rate that can be deduced from [8] were true, we would be able to prove that 1-F_1(t) = O((t-t_0)^{-1}), and Proposition 3.4 would therefore not be a conditional result.

    Proof of Proposition 3.1. To obtain the upper bound in (1.7), namely (3.1), we follow the argument in the proof of [8, Theorem 3.2]. In particular, choose the test function:

    \begin{equation} \varphi_1(\omega) := \left(\frac{3}{2}-\omega\right)_+. \end{equation} (3.6)

    Notice that \mathrm{supp}(\varphi_1)\subset[0,3/2], which implies F_1(t) = \int\varphi_1(\omega)\, g(t).

    Given that φ1 in (3.6) is convex and that g solves (1.3), we apply Lemma 2.1 to get

    \begin{equation} F_1'(t) = \frac{d}{dt}\left(\int_{\{1\}} g(t)\right) \geq \int_{\mathbb{R}_+^3}\frac{g_1g_2g_3}{\sqrt{\omega_1\omega_2\omega_3}}\, G_{\varphi_1}, \end{equation} (3.7)

    where we omit the explicit dependencies on t, d\omega_j to ease the notation. Recalling the definitions of G, H^1, H^2 in (2.2)-(2.4), we claim that

    \begin{equation} H^2_{\varphi_1} \geq 0. \end{equation} (3.8)

    Indeed, the coefficient of H^2_{\varphi_1} in the formula for G_{\varphi_1} is nonzero only if \omega_0+\omega_--\omega_+>0. Note also that \mathrm{supp}(\varphi_1)\cap\mathrm{supp}(g(t)) = \{1\}, so we only need to consider \omega_+, \omega_0, \omega_-\in\mathbb{N}, where the following happens:

    ● If \omega_+<1, then \omega_0<1 and \omega_-<1, thus H^2_{\varphi_1} = 0.

    ● If \omega_+ = 1, we must have \omega_0 = \omega_- = 1 since \omega_0+\omega_--\omega_+>0. In this case H^2_{\varphi_1} = 0.

    ● If \omega_+>1, there are two options:

    (1) Suppose at least one of \omega_0 or \omega_- is 1. Given that \omega_0+\omega_--\omega_+>0, the only option is \omega_- = 1 and \omega_0 = \omega_+. In this case H^2_{\varphi_1} = 0.

    (2) If \omega_0 and \omega_- are not 1, then only \varphi_1(\omega_0+\omega_--\omega_+) may be nonzero (if \omega_0+\omega_--\omega_+ = 1), and thus H^2_{\varphi_1}\geq 0.

    Therefore, combining the inequality (3.7) with (3.8) and the definition of G in (2.2), we get

    \begin{equation} F_1'(t) \geq \frac{1}{3}\int_{\mathbb{R}_+^3}\frac{g_1g_2g_3}{\sqrt{\omega_0\,\omega_+}}\, H^1_{\varphi_1}. \end{equation} (3.9)

    We can also restrict the integration to the set \{\omega_+>1\}, since otherwise H^1_{\varphi_1} = 0. From the definition of \varphi_1 we see that \varphi_1(\omega_+) = 0, as well as \varphi_1(\omega_++\omega_0-\omega_-) = 0. Consequently,

    \begin{equation*} F_1'(t) \geq \frac{1}{3}\int_{\{\omega_+>1,\ \omega_j\in\mathbb{N}\}}\frac{g_1g_2g_3}{\sqrt{\omega_0\,\omega_+}}\,\varphi_1(\omega_++\omega_--\omega_0). \end{equation*}

    For \varphi_1(\omega_++\omega_--\omega_0) to be nonzero we must also have that \omega_0 = \omega_+>1 and \omega_- = 1. Indeed, if \omega_+-\omega_0>0, since all frequencies are concentrated in \mathbb{N}, we must have \omega_+-\omega_0\geq 1. Having also \omega_-\geq 1, we conclude \omega_++\omega_--\omega_0\geq 2, which is outside the support of \varphi_1. Similarly, since \omega_0\leq\omega_+, if \omega_-\geq 2 we have \omega_++\omega_--\omega_0\geq 2. Therefore

    \begin{equation} F_1'(t) \geq \frac{1}{3}\int_{\{\omega_- = 1,\ \omega_0 = \omega_+>1\}}\frac{g_1g_2g_3}{\sqrt{\omega_0\,\omega_+}} \geq \frac{1}{3}F_1(t)\left(\int_{\mathbb{N}\setminus\{1\}}\frac{1}{\sqrt{\omega}}\, g(t)\right)^2 \geq \frac{1}{3}F_1(t_0)\left(\int_{\mathbb{N}\setminus\{1\}}\frac{1}{\sqrt{\omega}}\, g(t)\right)^2. \end{equation} (3.10)

    In the last inequality we used the fact that F_1(t) is monotone nondecreasing. We may choose t_0 = 0 if F_1(0) = M_1\neq 0, whereas if M_1 = 0 any t_0>0 would do, in view of Lemma 2.2.

    At this stage, we note that the factor \omega^{-1/2} on the right-hand side of (3.10) was missing in the proof of Theorem 3.2 in [8], and it is not clear why it can be removed. In fact, without it one can simply exploit the conservation of mass to conclude, see [8]. Here we have to be more careful: we exploit the conservation of both the mass and the energy and use an interpolation inequality. Namely, by the Hölder inequality,

    \begin{equation} \int_{\mathbb{N}\setminus\{1\}} g(t) \leq \left(\int_{\mathbb{N}\setminus\{1\}}\frac{1}{\sqrt{\omega}}\, g(t)\right)^{\frac{2}{3}}\left(\int_{\mathbb{N}\setminus\{1\}}\omega\, g(t)\right)^{\frac{1}{3}} \leq \left(\int_{\mathbb{N}\setminus\{1\}}\frac{1}{\sqrt{\omega}}\, g(t)\right)^{\frac{2}{3}} E^{\frac{1}{3}}, \end{equation} (3.11)

    where we used the conservation of the energy in the last inequality. Combining the bound above with (3.10), and using the conserved, normalized mass, we find that

    \begin{equation*} F_1'(t) \geq \frac{1}{3}\,\frac{F_1(t_0)}{E}\left(\int_{\mathbb{N}\setminus\{1\}} g(t)\right)^3 = \frac{F_1(t_0)}{3E}\left(1-F_1(t)\right)^3. \end{equation*}

    Solving this differential inequality we obtain that

    \begin{equation} 1-F_1(t) \leq \frac{c_1(t_0)}{\sqrt{b_1(t_0)(t-t_0)+3E}}, \end{equation} (3.12)

    where c1,b1 are defined in (3.2).
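    For completeness, here is the elementary integration step behind (3.12): setting y(t) = 1-F_1(t) and \kappa = F_1(t_0)/(3E), the differential inequality reads y'\leq -\kappa\, y^3, hence \frac{d}{dt}(y^{-2}) = -2y^{-3}y'\geq 2\kappa. Integrating from t_0 to t,

    \begin{equation*} y(t)^{-2} \geq y(t_0)^{-2} + 2\kappa\,(t-t_0) \quad\Longrightarrow\quad y(t) \leq \frac{y(t_0)}{\sqrt{1+2\kappa\, y(t_0)^2\,(t-t_0)}} = \frac{\sqrt{3E}\left(1-F_1(t_0)\right)}{\sqrt{3E+2F_1(t_0)\left(1-F_1(t_0)\right)^2(t-t_0)}}, \end{equation*}

    which is exactly (3.12) with c_1(t_0), b_1(t_0) as in (3.2).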

    Proof of Corollary 3.2. To prove the lower bound on F_2, it is convenient to make use of H_2 = F_2^{-1}, which is well defined thanks to Lemma 2.2. On account of (2.13) we have that

    \begin{equation} \frac{d}{dt}H_2 \leq H_2\left(U_2-Q_2\right) + L_2, \end{equation} (3.13)

    where

    \begin{equation} U_2 = \frac{2}{\sqrt{2}}\sum\limits_{k=3}^{\infty}\frac{F_kF_{k+1}}{\sqrt{k(k+1)}}, \qquad Q_2 = \frac{4}{\sqrt{6}}\,F_1F_3 + \sum\limits_{k=3}^{\infty}\frac{F_k^2}{k}, \end{equation} (3.14)
    \begin{equation} L_2 = F_1 + \frac{F_3}{\sqrt{3}} \leq \frac{3}{2}. \end{equation} (3.15)

    Using the Cauchy-Schwarz inequality, we bound U_2 as

    \begin{equation} U_2 \leq \sum\limits_{k=3}^{\infty}\frac{F_k^2}{k} + \frac{1}{2}\sum\limits_{k=4}^{\infty}\frac{F_k^2}{k}. \end{equation} (3.16)

    For k\geq 2 we know that

    \begin{equation} F_k \leq \sum\limits_{j=2}^{\infty} F_j = 1-F_1. \end{equation} (3.17)

    Since F_k>0 for all k (see Lemma 2.2), combining (3.16) and (3.17) with (3.1) yields

    \begin{equation} U_2-Q_2 \leq \frac{1}{2}\sum\limits_{k=4}^{\infty}\frac{F_k^2}{k} \leq \frac{1}{8}\left(1-F_1\right)^2 \leq \frac{1}{8}\,\frac{c_1(t_0)^2}{b_1(t_0)(t-t_0)+3E}. \end{equation} (3.18)

    Without loss of generality, let us assume that t_0 = 0 and M_1>0 from now on. We denote c_1(t_0) = c_1 and b_1(t_0) = b_1 for this choice. Then

    \begin{equation} \int_s^t\left(U_2(\tau)-Q_2(\tau)\right) d\tau \leq \int_s^t\frac{1}{8}\,\frac{c_1^2}{b_1\tau+3E}\, d\tau \leq \frac{c_1^2}{8b_1}\log\left(\frac{b_1t+3E}{b_1s+3E}\right). \end{equation} (3.19)

    We define \alpha = c_1^2/(8b_1) = 3E/(16M_1), as announced in (3.4). Then

    \begin{equation} \exp\left(\int_s^t\left(U_2(\tau)-Q_2(\tau)\right) d\tau\right) \leq \left(\frac{b_1t+3E}{b_1s+3E}\right)^{\alpha}. \end{equation} (3.20)

    Integrating (3.13) yields

    \begin{equation*} H_2(t) \leq e^{\int_0^t\left(U_2(s)-Q_2(s)\right) ds}\, H_2(0) + \int_0^t e^{\int_s^t\left(U_2(\tau)-Q_2(\tau)\right) d\tau}\, L_2(s)\, ds. \end{equation*}

    If \alpha\neq 1, combining (3.20) with (3.15), we estimate the second term as follows:

    \begin{equation*} \int_0^t e^{\int_s^t\left(U_2(\tau)-Q_2(\tau)\right) d\tau}\, L_2(s)\, ds \leq 2\int_0^t\left(\frac{t+3E/b_1}{s+3E/b_1}\right)^{\alpha} ds = \frac{2}{\alpha-1}\left[\frac{3E}{b_1}\left(\frac{t+3E/b_1}{3E/b_1}\right)^{\alpha} - \left(t+\frac{3E}{b_1}\right)\right]. \end{equation*}
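    The closed-form evaluation of this integral is easy to sanity-check numerically. A minimal sketch, with a = 3E/b_1 treated as a free parameter and with illustrative values of t and \alpha (not taken from the paper):

```python
# Check 2∫_0^t ((t+a)/(s+a))^α ds = (2/(α-1)) [ a ((t+a)/a)^α - (t+a) ] for α > 1.
def lhs(t, a, alpha, steps=200_000):
    h = t / steps
    midpoints = (h * (i + 0.5) for i in range(steps))   # midpoint rule
    return 2 * h * sum(((t + a) / (s + a)) ** alpha for s in midpoints)

def rhs(t, a, alpha):
    return (2 / (alpha - 1)) * (a * ((t + a) / a) ** alpha - (t + a))

t, a, alpha = 3.0, 2.0, 1.5
print(abs(lhs(t, a, alpha) - rhs(t, a, alpha)))   # numerically zero
```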

    Putting these estimates together, we obtain

    \begin{equation*} H_2(t) \leq \frac{2}{\alpha-1}\left[\frac{3E}{b_1}\left(\frac{t+3E/b_1}{3E/b_1}\right)^{\alpha} - \left(t+\frac{3E}{b_1}\right)\right] + \left(\frac{t+3E/b_1}{3E/b_1}\right)^{\alpha} H_2(0). \end{equation*}

    When \alpha = 1 one would have a logarithmic correction instead of a power law in the bound above. Since H_2(t) = F_2^{-1}(t), we deduce the following:

    ● When \alpha>1, we conclude that

    \begin{equation*} F_2(t) \geq c_2\left((t+3E/b_1)^{\alpha}+C_2\right)^{-1}. \end{equation*}

    ● If \alpha = 1, we have

    \begin{equation*} F_2(t) \geq c_2\left((t+3E/b_1)\log(t+3E/b_1)+C_2\right)^{-1}. \end{equation*}

    ● When \alpha<1, the term in t is dominant and therefore we have that

    \begin{equation*} F_2(t) \geq c_2\left((t+3E/b_1)+C_2\right)^{-1}. \end{equation*}

    This concludes the proof of the corollary.

    Given the result in Theorem 1.1, it is natural to ask what the maximal speed of convergence of F_n with n>1 could be. If the lower bound for 1-F_1 in (1.7) were sharp, then we could derive sharp lower and upper bounds for the speed of convergence of all the functions F_n for n large enough. This is the content of the following conditional result.

    Proposition 3.4. Under the same assumptions as in Theorem 1.1, suppose that the following inequality were true:

    \begin{equation} 1-F_1(t) \leq \frac{c}{t-t_0+C}, \end{equation} (3.21)

    for some constants c, C>0 and t>t_0. Then for any n large enough such that

    \begin{equation} \gamma_n := \frac{2c}{\sqrt{n}}\left(\frac{1}{\sqrt{n+2}} + \mathbb{1}_{\{n>2\}}\frac{1}{\sqrt{n+1}} + \mathbb{1}_{\{n>2\}}\sqrt{2}\right) < 1, \end{equation} (3.22)

    the following inequality holds for all t>t_0:

    \begin{equation} \frac{c_n}{t-t_0+C} \leq F_n(t) \leq \frac{c}{t-t_0+C}, \end{equation} (3.23)

    where cn can be explicitly computed.
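    Since \tilde{\gamma}_n\to 0 as n\to\infty, condition (3.22) is satisfied for every n large enough. The sketch below evaluates \gamma_n with the illustrative choice c = 1 (in general c depends on the initial data) and locates the admissible range:

```python
import math

def gamma_n(n, c=1.0):
    # γ_n from (3.22); the two indicator terms are active only when n > 2
    chi = 1.0 if n > 2 else 0.0
    return (2 * c / math.sqrt(n)) * (1.0 / math.sqrt(n + 2)
                                     + chi / math.sqrt(n + 1)
                                     + chi * math.sqrt(2))

# first n > 2 with γ_n < 1 (the sequence is decreasing for n ≥ 3)
n0 = next(n for n in range(3, 1000) if gamma_n(n) < 1)
print(n0, gamma_n(2))
```

    Note that \gamma_2<1 holds as well for this choice of c, since the two indicator terms are absent when n = 2.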

    Proof. Recall the definitions of L_n, U_n in (2.14)-(2.16). Thanks to the bound on the total mass, we deduce that L_n\leq 4/n. Therefore, from (2.12) we get

    \begin{equation} \frac{d}{dt}F_n \geq -U_n\, F_n - \frac{4}{n}\, F_n^2. \end{equation} (3.24)

    Regarding U_n, exploiting the conservation of the mass, we have

    \begin{equation} \sum\limits_{k=n+1}^{\infty}\ \sum\limits_{m=k+1}^{n+k-1}\frac{F_kF_m}{\sqrt{km}}\,\sqrt{n+k-m} \leq \frac{1}{\sqrt{n+2}}\sum\limits_{k=n+1}^{\infty}F_k\sum\limits_{m=n+2}^{\infty}F_m \leq \frac{1}{\sqrt{n+2}}\left(1-F_1\right)^2. \end{equation} (3.25)

    If n>2, we have additional terms in U_n, which we bound as follows:

    \begin{equation} \sum\limits_{k=2}^{n-1}\ \sum\limits_{m=n+1}^{n+k}\frac{F_kF_m}{\sqrt{km}}\,\sqrt{n+k-m} \leq \frac{1}{\sqrt{n+1}}\sum\limits_{k=2}^{n-1}F_k\sum\limits_{m=n+1}^{n+k}F_m \leq \frac{1}{\sqrt{n+1}}\left(1-F_1\right)^2, \end{equation} (3.26)

    and

    \begin{equation} 2\sum\limits_{k=2}^{n-1}\ \sum\limits_{m=1}^{k-1}\frac{F_kF_m}{\sqrt{k}} \leq \sqrt{2}\left(F_1\sum\limits_{k=2}^{n-1}F_k + \sum\limits_{k=2}^{n-1}F_k\sum\limits_{m=2}^{k-1}F_m\right) \leq \sqrt{2}\left(1-F_1\right)\left(F_1+(1-F_1)\right). \end{equation} (3.27)

    The upper bound above is the worst in terms of decay in time, since it contains a factor F_1. Therefore, since 1-F_1\leq 1, combining (3.25)-(3.27) we obtain

    \begin{equation} U_n \leq \tilde{\gamma}_n\left(1-F_1\right), \qquad \tilde{\gamma}_n = \frac{2}{\sqrt{n}}\left(\frac{1}{\sqrt{n+2}} + \chi_{n>2}\frac{1}{\sqrt{n+1}} + \chi_{n>2}\sqrt{2}\right). \end{equation} (3.28)

    For simplicity of notation, consider now t_0 = 0 in (3.21). Combining (3.24) with (3.28) we obtain

    \begin{equation} \frac{d}{dt}F_n \geq -\frac{\gamma_n}{t+C}\, F_n - \frac{4}{n}\, F_n^2, \end{equation} (3.29)

    where \gamma_n = \tilde{\gamma}_n\, c was given in (3.22). Defining G_n := (t+C)^{\gamma_n} F_n, we find

    \begin{equation} \frac{d}{dt}G_n \geq -\frac{4}{n}\,(t+C)^{-\gamma_n}\, G_n^2. \end{equation} (3.30)

    Since \gamma_n<1 by hypothesis, by a comparison principle we get

    \begin{equation} \frac{1}{G_n(t)} - \frac{1}{G_n(0)} \leq \frac{4}{n(1-\gamma_n)}\left((t+C)^{1-\gamma_n} - C^{1-\gamma_n}\right), \end{equation} (3.31)

    which immediately implies

    \begin{equation} F_n(t) \geq \left[\frac{4}{n(1-\gamma_n)}\left((t+C) - C^{1-\gamma_n}(t+C)^{\gamma_n}\right) + \left(C^{\gamma_n} F_n(0)\right)^{-1}(t+C)^{\gamma_n}\right]^{-1}, \end{equation} (3.32)

    which proves Proposition 3.4, with c_n computable from the inequality above.

    In this section we discuss the toy model announced in the introduction, (1.8). Our main goal is to understand whether there could be solutions exhibiting a decay of order O(t^{-1}), which could be possible for particular initial data and would justify the optimality of the lower bound in Theorem 1.1. Investigating this question directly on the full system (2.12) seems hard. For example, a naive ansatz imposing a polynomial decay for F_n is not consistent with the equations; this is because F_1 cannot decay, and it behaves differently from all the other F_n. Therefore, we first aim at reducing the complexity of the system by performing the following reductions:

    ● By (2.12), for n\geq 2, dF_n/dt is a weighted sum of products of the form F_iF_jF_k. We drop all terms where \{i,j,k\}\cap\{1\} = \emptyset.

    ● All the remaining terms have at most one factor of F_1. In view of Proposition 3.1, we substitute each such factor by 1.

    The resulting toy model may be written as

    \begin{equation} \frac{d}{dt}F_n = \frac{4F_n}{\sqrt{n}}\,\frac{F_{2n-1}}{\sqrt{2n-1}} - \frac{4F_n}{\sqrt{n}}\sum\limits_{k=2}^{n-1}\frac{F_k}{\sqrt{k}} - 2\left(\frac{F_n}{\sqrt{n}}\right)^2 + 2\sum\limits_{k=n+1}^{\infty}\frac{F_kF_{k+1-n}}{\sqrt{k(k+1-n)}}. \end{equation} (4.1)

    The idea behind our approximating toy model is that the leading order terms are dictated by interactions with F_1. Indeed, we know that all the mass is converging towards F_1, and therefore interactions between the F_j with j\neq 1 are lower order. In this toy model, though, many nice properties, such as positivity of the solution, cannot be expected to hold anymore. On the other hand, the advantage of Eq (4.1) is that the right-hand side is quadratic. We thus propose the natural self-similar ansatz of the form
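    As a concrete illustration, (4.1) is straightforward to simulate after truncating all sums at a finite number of modes. The sketch below uses an explicit Euler scheme; the truncation level N, the step size and the initial data are arbitrary illustrative choices, not taken from the paper:

```python
import math

def rhs(F, N):
    # right-hand side of (4.1), with all sums truncated at the mode N
    d = {}
    for n in range(2, N + 1):
        lead = F[2 * n - 1] / math.sqrt(2 * n - 1) if 2 * n - 1 <= N else 0.0
        low = sum(F[k] / math.sqrt(k) for k in range(2, n))
        high = sum(F[k] * F[k + 1 - n] / math.sqrt(k * (k + 1 - n))
                   for k in range(n + 1, N + 1))
        d[n] = (4 * F[n] / math.sqrt(n) * lead
                - 4 * F[n] / math.sqrt(n) * low
                - 2 * (F[n] / math.sqrt(n)) ** 2
                + 2 * high)
    return d

def evolve(F, N, dt, steps):
    for _ in range(steps):
        d = rhs(F, N)
        F = {n: F[n] + dt * d[n] for n in F}
    return F

N = 8
F = evolve({n: 0.1 / n for n in range(2, N + 1)}, N, dt=1e-3, steps=1000)
print({n: round(v, 4) for n, v in F.items()})
```

    Note that, unlike for (1.3), nothing in (4.1) enforces positivity of the F_n along the flow, which is the issue addressed by the truncations below.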

    \begin{equation*} F_n(t) = \frac{\beta_n\sqrt{n}}{t}, \qquad \text{for } n\in\mathbb{N},\ n\geq 2. \end{equation*}

    We plug this ansatz into (4.1) and derive equations for the coefficients (\beta_n)_{n\geq 2}:

    \begin{equation} -\sqrt{n}\,\beta_n = 4\beta_n\beta_{2n-1} - 4\beta_n\sum\limits_{k=2}^{n-1}\beta_k - 2\beta_n^2 + 2\sum\limits_{k=n+1}^{\infty}\beta_k\beta_{k+1-n}. \end{equation} (4.2)

    Our goal is to investigate whether or not there are positive solutions to this toy model, always with the idea in mind of having something consistent with the behavior observed in the full WKE, especially regarding Lemmas 2.2 and 2.3 about the positivity of the coefficients \beta_n. We do not focus too much on the mass and energy properties, since one can suitably rescale the time in the self-similar ansatz to adjust the parameters.

    For the sake of understanding this toy model, we would like to introduce a suitable truncation and exhibit positive solutions to the truncated system. The aim would be to use such solutions as the starting point of a perturbative argument in the full WKE. First of all, we observe the following:

    Remark 4.1. Consider a solution of the form \beta_n = 0 for all n\geq N. In the case N = 4, it is straightforward to check that \beta_2, \beta_3 must be given by

    \begin{equation*} \beta_2 = \frac{\sqrt{2}+3\sqrt{3}}{14} > 0, \qquad \beta_3 = \frac{\sqrt{3}-2\sqrt{2}}{14} < 0, \end{equation*}

    after imposing \beta_2, \beta_3\neq 0. Similarly, if N = 5 the only real-valued solutions have \beta_4<0. For N = 6, one can numerically compute the four exact (nonzero) real-valued solutions to the system, but none of them lies in (0,\infty)^4.

    These examples suggest that if we brutally truncate all the modes n\geq N, there is no guarantee of finding strictly positive solutions to our system, which is clearly not consistent with Lemma 2.2.
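    The N = 4 computation can be verified directly: with \beta_n = 0 for n\geq 4, the system (4.2) reduces to -\sqrt{2}\,\beta_2 = 6\beta_2\beta_3-2\beta_2^2 and -\sqrt{3}\,\beta_3 = -4\beta_2\beta_3-2\beta_3^2, and dividing by \beta_2 and \beta_3 respectively leaves a linear system for (\beta_2,\beta_3). A quick numerical check of the displayed pair:

```python
import math

b2 = (math.sqrt(2) + 3 * math.sqrt(3)) / 14    # positive root
b3 = (math.sqrt(3) - 2 * math.sqrt(2)) / 14    # negative

# residuals of the n = 2 and n = 3 equations of (4.2) with β_n = 0 for n ≥ 4
eq2 = -math.sqrt(2) * b2 - (6 * b2 * b3 - 2 * b2 ** 2)
eq3 = -math.sqrt(3) * b3 - (-4 * b2 * b3 - 2 * b3 ** 2)

print(eq2, eq3, b2 > 0, b3 < 0)
```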

    To overcome the issues related to a standard truncation, as explained in Remark 4.1, we allow the presence of a large gap between high and low frequencies. This is helpful because we are able to "force" a positive solution in the low frequencies by using very high frequencies as given parameters of the chosen cut-off. Essentially, we are exploiting the highly nonlocal nature of the system (4.2), and this can be heuristically motivated by the presence of a high-to-low frequency cascade. More specifically, we consider the truncated system for \mathit{\boldsymbol{\beta}}: = (\beta_2, \ldots, \beta_{N}) by setting

    \begin{equation} \beta_k = 0 \quad \mbox{for all}\ N+1 \leq k < 2N,\ \mbox{and}\ k\geq 3N-1. \end{equation} (4.3)

    We keep N-1 functions \mathit{\boldsymbol{\lambda}}: = (\beta_{2N}, \beta_{2N+1}, \ldots, \beta_{3N-2}) as parameters which are a priori fixed. With this in mind, (4.2) reads

    \begin{equation} -\sqrt{n}\, \beta_n = 4\, \beta_n \, \beta_{2n-1} - 4\, \beta_n \, \sum\limits_{k = 2}^{n-1}\beta_k - 2 \beta_n^2 + 2\sum\limits_{k = n+1}^{3N-2} \beta_k \beta_{k+1-n}, \qquad \mbox{for}\ n = 2,\ldots, N. \end{equation} (4.4)

    Exploiting (4.3), we may rewrite this as:

    \begin{equation} \begin{split} -\sqrt{n}\, \beta_n = &\ 4\, \beta_n \, \beta_{2n-1} \mathbb{1}_{\{2n-1\leq N\}} - 4\, \beta_n \, \sum\limits_{k = 2}^{n-1} \beta_k- 2 \beta_n^2\\ & + 2\sum\limits_{k = n+1}^{N} \beta_k \beta_{k+1-n} + 2\sum\limits_{k = 2N+1}^{3N-2} \beta_k \beta_{k+1-n} , \qquad \mbox{for}\ n = 2,\ldots, N. \end{split} \end{equation} (4.5)

    For this system, we have a special solution given by

    \begin{align} \mathit{\boldsymbol{\beta}}^0 = ( \gamma, 0, \ldots ,0), \qquad \mathit{\boldsymbol{\lambda}}^0 = (\lambda_1^0 , \lambda_2^0, 0, \ldots,0), \end{align} (4.6)

    where

    \begin{equation} \gamma = \frac{\sqrt{2} + \sqrt{2+ 16\lambda_1^{0}\lambda_2^{0} }}{4}. \end{equation} (4.7)
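    Indeed, at the special point (4.6) the only nontrivial equation in (4.5) is the one for n = 2, which reduces to the quadratic 2\gamma^2-\sqrt{2}\,\gamma-2\lambda_1^0\lambda_2^0 = 0; the expression (4.7) is its positive root. A numerical check with the illustrative values \lambda_1^0 = \lambda_2^0 = 2:

```python
import math

l1, l2 = 2.0, 2.0      # illustrative; condition (4.8) then requires N < 4*l1*l2 = 16
gamma = (math.sqrt(2) + math.sqrt(2 + 16 * l1 * l2)) / 4

# n = 2 equation of (4.5) at (β^0, λ^0):  -√2 γ = -2γ² + 2 λ1 λ2
residual = -math.sqrt(2) * gamma - (-2 * gamma ** 2 + 2 * l1 * l2)
print(gamma, residual)
```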

    Around this particular solution, we are able to show the existence of many solutions \mathit{\boldsymbol{\beta}} to (4.5) with strictly positive components. Therefore, we have many solutions of the system (4.2) truncated at N, except for the additional coefficients \beta_{2N},\ldots,\beta_{3N-2}. This is the content of Theorem 1.4, which we restate more precisely as follows:

    Theorem 4.2. Fix any N\in{\mathbb N} with N\geq 4 . Fix \lambda_1^0, \lambda_2^0 > 0 such that

    \begin{equation} \lambda^0_1\lambda^0_2 > N/4. \end{equation} (4.8)

    Then there exists some \delta_0 (N, \lambda_1^0, \lambda_2^0) > 0 such that for all \delta < \delta_0 , there exists a solution to (4.5) (which solves (4.2) for n\leq N ) such that \beta_2, \ldots, \beta_N, \beta_{2N}, \ldots, \beta_{3N-2} > 0 , and \beta_k = 0 for all other k\in{\mathbb N}\cap [2, \infty) . Moreover, this solution satisfies

    \begin{equation} \begin{split} \beta_2 & = \gamma + {\mathcal O} ( \sqrt{N} \,\delta\, \max\{ \lambda^0_1,\lambda^0_2\}),\\ \beta_j & = {\mathcal O} ( \sqrt{N} \,\delta\, \max\{ \lambda^0_1,\lambda^0_2\}), \qquad j = 3,\ldots,N,\\ \beta_{2N} & = \lambda^0_1 + {\mathcal O} (\delta),\\ \beta_{2N+1} & = \lambda^0_2 + {\mathcal O} (\delta),\\ \beta_{j} & = {\mathcal O}(\delta), \qquad j = 2N+2,\ldots, 3N-2. \end{split} \end{equation} (4.9)

    Proof. The idea of the proof is to construct the positive solutions by using the implicit function theorem for a map whose zeros are solutions to (4.4). We have to carefully set the parameters in the special solution (4.6) in order to guarantee the positivity of the new solution. The proof is divided into four steps.

    Step 1. Consider the map

    \begin{split} f: {\mathbb R}^{N-1}\times & {\mathbb R}^{N-1} \longrightarrow {\mathbb R}^{N-1}, \\ f(\mathit{\boldsymbol{\beta}},\mathit{\boldsymbol{\lambda}}) & = (f_n (\mathit{\boldsymbol{\beta}},\mathit{\boldsymbol{\lambda}}) )_{n = 2}^{N} \end{split}

    where

    \begin{equation} \begin{split} f_n (\mathit{\boldsymbol{\beta}},\mathit{\boldsymbol{\lambda}}) = & \ 2 \beta_n^2 + 4\, \beta_n \, \sum\limits_{k = 2}^{n-1} \beta_k -4\, \beta_n \, \beta_{2n-1} \mathbb{1}_{\{2n-1\leq N\}}\\ & - 2\sum\limits_{k = n+1}^{N} \beta_k \beta_{k+1-n} -2 \sum\limits_{k = 2N+1}^{3N-2} \beta_k \beta_{k+1-n} - \sqrt{n} \beta_n \end{split} \end{equation} (4.10)

    and

    \mathit{\boldsymbol{\beta}} = ( \beta_2,\ldots,\beta_N), \qquad \mathit{\boldsymbol{\lambda}} = (\lambda_1 , \ldots, \lambda_{N-1}) = ( \beta_{2N},\ldots,\beta_{3N-2}).

    Notice that f_n(\mathit{\boldsymbol{\beta}}, \mathit{\boldsymbol{\lambda}}) = 0 for all 2\leq n\leq N corresponds to a solution of (4.4). We thus consider the special point:

    \mathit{\boldsymbol{\beta}}^{0} = ( \gamma, 0, \ldots ,0), \qquad \mathit{\boldsymbol{\lambda}}^{0} = (\lambda_1^{0} , \lambda_2^{0}, 0, \ldots,0),

    where \gamma is defined in (4.7). We know that f (\mathit{\boldsymbol{\beta}}^{0}, \mathit{\boldsymbol{\lambda}}^{0}) = 0 . Moreover, the Jacobian matrix at this point is non-singular. More precisely,

    \begin{equation} J_{\beta} f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0} ) = \mbox{diag}\left( 4\gamma - \sqrt{n}\right)_{2\leq n\leq N} + \left( -2 \gamma\, \delta_{(i,j) = (i,i+1)}\right)_{1\leq i,j\leq N-1}. \end{equation} (4.11)

    Notice that this is an upper triangular matrix, meaning that it is invertible provided

    4\gamma - \sqrt{n} \neq 0, \qquad \mbox{for all}\ n = 2,\ldots, N.

    For reasons that will be clear later, we impose that

    \begin{equation} \gamma > \frac14\sqrt{N}, \end{equation} (4.12)

    which implies that every diagonal entry in (4.11) is strictly positive.

    By the implicit function theorem, there exist \varepsilon, \delta > 0 such that \mathit{\boldsymbol{\beta}} can be written as a smooth function of \mathit{\boldsymbol{\lambda}} in small neighborhoods of our zero, i.e.,

    \begin{split} \mathit{\boldsymbol{\beta}}: B(\mathit{\boldsymbol{\lambda}}^{0}, \delta) & \longrightarrow B(\mathit{\boldsymbol{\beta}}^{0},\varepsilon),\\ \mathit{\boldsymbol{\lambda}} & \longmapsto \mathit{\boldsymbol{\beta}}(\mathit{\boldsymbol{\lambda}}) \end{split}

    and such that f(\mathit{\boldsymbol{\beta}}(\mathit{\boldsymbol{\lambda}}), \mathit{\boldsymbol{\lambda}}) = 0 for all \lambda \in B(\mathit{\boldsymbol{\lambda}}^{0}, \delta) . Moreover, we know that

    \begin{equation} J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) = - J_{\mathit{\boldsymbol{\beta}}}f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0})^{-1} \, \cdot\, J_{\mathit{\boldsymbol{\lambda}}}f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0}). \end{equation} (4.13)

    Step 2. We now compute the Jacobian matrix J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) . Set

    \begin{equation} c_n = \frac{2\gamma}{4\gamma-\sqrt{n}}, \qquad n = 2,\ldots, N-1, \end{equation} (4.14)

    and let

    A: = \begin{pmatrix} 1 & c_2 & c_2 c_3 & c_2 c_3 c_4 & \ldots & \prod\limits_{j = 2}^{N-1} c_j \\ 0 & 1 & c_3 & c_3 c_4 & \ldots & \prod\limits_{j = 3}^{N-1} c_j\\ 0 & 0 & 1 & c_4 & \ldots & \prod\limits_{j = 4}^{N-1} c_j\\ \vdots & & \ddots & \ddots & \ddots & \vdots\\ \vdots & & & \ddots & \ddots & c_{N-1}\\ 0 & & \ldots & & 0 & 1 \end{pmatrix}.

    Then, it is easy to check that

    \begin{equation} J_{\mathit{\boldsymbol{\beta}}}f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0})^{-1} = A \cdot \mbox{diag}\left( \frac{1}{4\gamma - \sqrt{n}}\right)_{2\leq n \leq N}, \end{equation} (4.15)

    which is an upper triangular matrix with strictly positive entries in the upper triangle, thanks to (4.12) and (4.14). Similarly, we may compute

    \begin{equation} J_{\mathit{\boldsymbol{\lambda}}}f (\mathit{\boldsymbol{\beta}}^{0},\mathit{\boldsymbol{\lambda}}^{0}) = - 2\begin{pmatrix} \lambda_1^{0} & \lambda_2^{0} & 0 & 0 &0 & \ldots & 0 \\ 0 & \lambda_1^{0} & \lambda_2^{0} & 0 & 0 &\ldots & 0 \\ 0 & 0 & \lambda_1^{0} & \lambda_2^{0} & 0 & \ldots & 0 \\ \vdots & & \ddots & \ddots & \ddots & & \vdots \\ \vdots & & & \ddots & \ddots & \ddots & \vdots \\ \vdots & & & & \ddots & \ddots & \\ 0 & \cdots & & \cdots & & & \lambda_1^{0} \end{pmatrix}. \end{equation} (4.16)

    Therefore, using (4.13) it is not hard to deduce that J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) is an upper triangular matrix with strictly positive entries on the upper triangle.
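As a quick numerical consistency check of the displayed formulas (the values of N and \gamma below are arbitrary choices obeying (4.12)), one can verify that the matrix in (4.15), built from A and the c_n of (4.14), indeed inverts the bidiagonal Jacobian (4.11):

```python
import math
import numpy as np

# Consistency check (arbitrary N and gamma obeying (4.12)): the matrix in
# (4.15), built from A and c_n as in (4.14), inverts the bidiagonal
# Jacobian displayed in (4.11).
N = 8
gamma = 1.1 * math.sqrt(N) / 4                 # any gamma with gamma > sqrt(N)/4
n = np.arange(2, N + 1)                        # indices n = 2, ..., N

# Jacobian (4.11): diagonal 4*gamma - sqrt(n), superdiagonal -2*gamma
J = np.diag(4 * gamma - np.sqrt(n)) + np.diag([-2.0 * gamma] * (N - 2), k=1)

c = 2 * gamma / (4 * gamma - np.sqrt(n))       # c_n of (4.14) for n = 2, ..., N
A = np.eye(N - 1)
for i in range(N - 1):
    for j in range(i + 1, N - 1):
        A[i, j] = np.prod(c[i:j])              # = c_{i+2} * ... * c_{j+1} in the indexing of (4.14)

J_inv = A @ np.diag(1.0 / (4 * gamma - np.sqrt(n)))   # Eq. (4.15)
print(np.allclose(J_inv @ J, np.eye(N - 1)))   # True
```

The back substitution for an upper bidiagonal matrix reproduces exactly the products of the c_n appearing in A , which is why the inverse has the displayed triangular structure.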

    Step 3. We are finally in a position to construct the solution \mathit{\boldsymbol{\beta}} with all positive entries. By the Fundamental Theorem of Calculus, we have that

    \begin{equation} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}) = \mathit{\boldsymbol{\beta}}^{0} + \int_0^1 J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}^{0} + t\, (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0}))\cdot (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0})\, dt \end{equation} (4.17)

    where J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} is the Jacobian matrix of \mathit{\boldsymbol{\beta}} with respect to \mathit{\boldsymbol{\lambda}} .

    Let us choose \mathit{\boldsymbol{\lambda}}\in B(\mathit{\boldsymbol{\lambda}}^{0}, \delta) such that \mathit{\boldsymbol{\lambda}}- \mathit{\boldsymbol{\lambda}}^{0}\in (0, \infty)^{N-1} . Since the entries of the matrix J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) are strictly positive in the upper triangle, we have that

    [J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}^{0} + t\, (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0}))\cdot (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0})]_j \Big |_{t = 0} > 0 , \qquad \mbox{for all} \quad j = 1,\ldots, N-1.

    By continuity of the Jacobian matrix, we can extend this positivity to any t\in [0, 1] as long as \mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0} is small enough (i.e., by potentially making \delta > 0 smaller). By (4.17), this implies that

    [\mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}) ]_j > 0, \qquad \mbox{for all} \quad j = 1,\ldots, N-1.

    Step 4. Let us further impose

    \begin{equation} \lambda_1^0 \lambda_2^0 > \frac14 N \end{equation} (4.18)

    in order to guarantee that \gamma > \sqrt{N}/2 . This implies that c_n in (4.14) satisfies c_n\leq 1 , and therefore the entries of the matrix in (4.15) are of size at most 1/\sqrt{N} . Hence, the entries of the matrix J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}}^{0}) , see (4.13), are of size at most \|{\boldsymbol{\lambda}^{0}}\|_{\infty}/\sqrt{N} . Therefore, summing at most N terms of size \|{\boldsymbol{\lambda}^{0}}\|_{\infty}/\sqrt{N} , we infer

    \left\lVert {J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}^{0} )\cdot (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0})} \right\rVert_{\infty}\lesssim \sqrt{N} \, \max\{\lambda_1^{0},\lambda_2^{0}\}\, \delta.

    By the continuity of the Jacobian matrix, one can arrange:

    \left\lVert {J_{\mathit{\boldsymbol{\lambda}}} \mathit{\boldsymbol{\beta}} ( \mathit{\boldsymbol{\lambda}}^{0} + t\, (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0}))\cdot (\mathit{\boldsymbol{\lambda}}-\mathit{\boldsymbol{\lambda}}^{0})} \right\rVert_{\infty}\lesssim \sqrt{N} \, \max\{\lambda_1^{0},\lambda_2^{0}\}\, \delta, \qquad \forall t\in [0,1],

    by further reducing \delta if necessary. Therefore, in view of (4.17), we choose \delta small enough so that

    \begin{equation} \begin{split} \mathit{\boldsymbol{\beta}}(\mathit{\boldsymbol{\lambda}})_1 & = \beta_2 (\mathit{\boldsymbol{\lambda}}) = \gamma + {\mathcal O} ( \sqrt{N}\left\lVert {\mathit{\boldsymbol{\lambda}}^0} \right\rVert_{\infty}\, \delta),\\ \mathit{\boldsymbol{\beta}} (\mathit{\boldsymbol{\lambda}})_j & = {\mathcal O} ( \sqrt{N}\left\lVert {\mathit{\boldsymbol{\lambda}}^0} \right\rVert_{\infty}\, \delta),\qquad j = 2,\ldots,N-1,\\ \mathit{\boldsymbol{\lambda}}_1 & = \beta_{2N} = \lambda_1^0 + {\mathcal O} (\delta),\\ \mathit{\boldsymbol{\lambda}}_2 & = \beta_{2N+1} = \lambda_2^0 + {\mathcal O} (\delta),\\ \mathit{\boldsymbol{\lambda}}_j & = \beta_{2N+j-1} = {\mathcal O} (\delta), \qquad j = 3,\ldots, N-1, \end{split} \end{equation} (4.19)

    where \gamma is defined in (4.7) and we impose (4.18). This concludes the proof.

    We now derive the equations for F_n from the weak formulation (1.3). As test functions, we choose

    \begin{equation} \varphi_n( \omega) = \chi_{(n-1/2,n+1/2)}, \qquad \text{for } n\in \mathbb{N}\setminus \{0\}, \end{equation} (5.1)

    where \chi_{(n-1/2,n+1/2)} denotes a C_c^{\infty}({\mathbb R}) function supported inside the interval (n-1/2, n+1/2) and such that \varphi_n (n) = 1 .

    From the definition of F_n , see (1.6), and (1.3) we have

    \begin{equation} \begin{split} \partial_t F_n& = \int_{{\mathbb R}^3_+}\Phi \frac{g_1g_2g_3}{\sqrt{\omega_1\omega_2\omega_3}}[\varphi_{n,4}+\varphi_{n,3}-\varphi_{n,1}-\varphi_{n,2}]d \omega_1d \omega_2d \omega_3 = :\mathcal{I}[g,\varphi_n],\\ \omega_4& = \omega_1+ \omega_2- \omega_3. \end{split} \end{equation} (5.2)

    Notice that we always have \omega_3\neq \omega_2 since otherwise also \omega_4 = \omega_1 and the integrand above vanishes. Analogously, we may assume \omega_3\neq \omega_1 .

    We have to distinguish several cases depending on the values of \varphi_{n, i} , i = 1,\ldots,4 .

    \bullet Case \varphi_{n, 1} = \varphi_{n, 2} = 1 .

    First notice that we have \varphi_{n, 3} = \varphi_{n, 4} = 0 . Indeed, if \varphi_{n, 3} = 1 , this implies \omega_4 = \omega_1+\omega_2- \omega_3 = n , meaning that \varphi_{n, 4} = 1 . But if \varphi_{n, i} = 1 for i = 1, \dots, 4 the integrand is zero. Therefore we have \varphi_{n, 3} = \varphi_{n, 4} = 0 .

    When \omega_3 < n then \Phi = \sqrt{ \omega_3} , hence

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = \varphi_{n,2} = 1\}\cap \{ \omega_3 < n\}}] = -\frac{2}{n}F_n^2\sum\limits_{k = 1}^{n-1}F_k. \end{equation} (5.3)

    For \omega_3 > n we have \Phi = \sqrt{ \omega_4} = \sqrt{2n- \omega_3} , therefore

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = \varphi_{n,2} = 1\}\cap \{ \omega_3 > n\}}] = -\frac{2}{n}F_n^2\sum\limits_{k = n+1}^{2n-1}\frac{F_k\sqrt{2n-k}}{\sqrt{k}}. \end{equation} (5.4)

    \bullet Cases \varphi_{n, 1} = 1, \ \varphi_{n, 2} = 0 or \varphi_{n, 1} = 0, \ \varphi_{n, 2} = 1 .

    In view of the symmetry of the integrals, the two cases under consideration are equal. If \varphi_{n, 3} = 1 , then \omega_4 = \omega_2 meaning that \varphi_{n, 4} = 0 . But if \varphi_{n, 1} = \varphi_{n, 3} = 1 and \varphi_{n, 2} = \varphi_{n, 4} = 0 then the integrand is zero. Analogously when \varphi_{n, 4} = 1 . Hence we only have to consider \varphi_{n, 3} = \varphi_{n, 4} = 0 .

    We use the following convention for the indices in this case

    k = \omega_2 \text{ and } m = \omega_3.

    We start with the case \omega_2 < n . When \omega_3 > n then \Phi = \sqrt{n+k-m} and

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{ \omega_2 < n, \omega_3 > n\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{k = 1}^{n-1}\sum\limits_{m = n+1}^{n+k}\frac{F_kF_m}{\sqrt{km}}\sqrt{n+k-m}. \end{equation} (5.5)

    When \omega_3 < \omega_2 one has \Phi = \sqrt{m} meaning that

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{ \omega_3 < \omega_2 < n\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{k = 2}^{n-1}\sum\limits_{m = 1}^{k-1}\frac{F_kF_m}{\sqrt{k}}. \end{equation} (5.6)

    If \omega_2 < \omega_3 < n then \Phi = \sqrt{k} so that

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{ \omega_2 < \omega_3 < n\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{m = 2}^{n-1}\sum\limits_{k = 1}^{m-1}\frac{F_kF_m}{\sqrt{m}}. \end{equation} (5.7)

    When \omega_2 > n , first consider \omega_3 < \omega_2 . If \omega_3 < n then \Phi = \sqrt{m} and

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{ \omega_3 < n < \omega_2\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{k = n+1}^{\infty}\sum\limits_{m = 1}^{n-1}\frac{F_kF_m}{\sqrt{k}}. \end{equation} (5.8)

    For n < \omega_3 < \omega_2 one has \Phi = \sqrt{n} hence we get

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{n < \omega_3 < \omega_2\}}] = -F_n\sum\limits_{k = n+2}^{\infty}\sum\limits_{m = n+1}^{k-1}\frac{F_kF_m}{\sqrt{km}}. \end{equation} (5.9)

    For \omega_3 > \omega_2 > n then \Phi = \sqrt{n+k-m} and

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,1} = 1, \varphi_{n,2} = 0\}\cap \{n < \omega_2 < \omega_3\}}] = -\frac{1}{\sqrt{n}}F_n\sum\limits_{k = n+1}^{\infty}\sum\limits_{m = k+1}^{n+k-1}\frac{F_kF_m}{\sqrt{km}}\sqrt{n+k-m}. \end{equation} (5.10)

    This concludes all the possible cases for \varphi_{n, 1} = 1, \varphi_{n, 2} = 0 . On account of the symmetry \omega_1\leftrightarrow \omega_2 , we remark again that all the terms appearing here are multiplied by a factor of 2 in (2.12).

    \bullet Case \varphi_{n, 3} = \varphi_{n, 4} = 1.

    In this case we also have \varphi_{n, 1} = \varphi_{n, 2} = 0 since otherwise the integrand is zero. The index convention in this case is

    k = \omega_2 \text{ and } m = \omega_1.

    We also have \omega_1+ \omega_2 = 2n , hence, if \omega_1 < \omega_2 then \omega_2 > n and \Phi = \sqrt{m} . Since we can always exchange \omega_1 and \omega_2 we conclude that

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = \varphi_{n,4} = 1\}}] = \frac{4}{\sqrt{n}}F_n\sum\limits_{m = 1}^{n-1}\frac{F_mF_{2n-m}}{\sqrt{2n-m}}. \end{equation} (5.11)

    \bullet Case \varphi_{n, 3} = 1, \varphi_{n, 4} = 0.

    In this case we know that \varphi_{n, 1} = \varphi_{n, 2} = 0 since otherwise the integrand is zero. We again denote

    k = \omega_2 \text{ and } m = \omega_1.

    First we consider \omega_1 < \omega_2 . On account of the symmetries, the case \omega_2 < \omega_1 is the same. If \omega_2 < n then \Phi = \sqrt{k+m-n} and

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{ \omega_1 < \omega_2 < n\}}] = \frac{1}{\sqrt{n}}F_n\sum\limits_{k = \lceil \frac{n+1}{2}\rceil}^{n-1}\sum\limits_{m = n+1-k}^{k-1}\frac{F_kF_m}{\sqrt{km}}\sqrt{k+m-n}. \end{equation} (5.12)

    For \omega_1 < n < \omega_2 one has \Phi = \sqrt{m} hence

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{ \omega_1 < n < \omega_2\}}] = \frac{1}{\sqrt{n}}F_n\sum\limits_{k = n+1}^{\infty}\sum\limits_{m = 1}^{n-1}\frac{F_kF_m}{\sqrt{k}}. \end{equation} (5.13)

    When n < \omega_1 < \omega_2 then \Phi = \sqrt{n} , from which we get

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{n < \omega_1 < \omega_2\}}] = F_n\sum\limits_{k = n+2}^{\infty}\sum\limits_{m = n+1}^{k-1}\frac{F_kF_m}{\sqrt{km}}. \end{equation} (5.14)

    The three terms above are multiplied by a factor 2 in (2.12) in view of the symmetry \omega_1 \leftrightarrow \omega_2 .

    We are left only with the case \omega_1 = \omega_2 . If \omega_2 > n then

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{ \omega_1 = \omega_2 > n\}}] = F_n\sum\limits_{k = n+1}^{\infty}\frac{F_k^2}{k}. \end{equation} (5.15)

    When \omega_2 < n one has

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,3} = 1, \varphi_{n,4} = 0\}\cap \{ \omega_1 = \omega_2 < n\}}] = \frac{F_n}{\sqrt{n}}\sum\limits_{k = \lceil \frac{n}{2}\rceil }^{n-1}\frac{F_k^2}{k}\sqrt{2k-n}. \end{equation} (5.16)

    \bullet Case \varphi_{n, 4} = 1, \varphi_{n, 3} = 0.

    In this case one has \varphi_{n, 1} = \varphi_{n, 2} = 0 . We assume \omega_1 < \omega_2 as done previously. We again denote k = \omega_2 \text{ and } m = \omega_1.

    For \omega_1 < \omega_2 < n then \Phi = \sqrt{k+m-n} so that

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{ \omega_1 < \omega_2 < n\}}] = \sum\limits_{k = 2}^{n-1}\sum\limits_{m = 2}^{k-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{kn}}. \end{equation} (5.17)

    For \omega_1 < n < \omega_2 one has \Phi = \sqrt{m} hence

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{ \omega_1 < n < \omega_2\}}] = \sum\limits_{k = n+1}^{\infty}\sum\limits_{m = 1}^{n-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{k(k+m-n)}}. \end{equation} (5.18)

    When n < \omega_1 < \omega_2 then \Phi = \sqrt{n} , from which we get

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{n < \omega_1 < \omega_2\}}] = \sqrt{n}\sum\limits_{k = n+2}^{\infty}\sum\limits_{m = n+1}^{k-1}\frac{F_kF_mF_{k+m-n}}{\sqrt{km(k+m-n)}}. \end{equation} (5.19)

    The three terms above are multiplied by a factor 2 in (2.12) in view of the symmetry \omega_1 \leftrightarrow \omega_2 .

    We are left only with the case \omega_1 = \omega_2 . If \omega_2 > n then

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{ \omega_1 = \omega_2 > n\}}] = \sqrt{n}\sum\limits_{k = n+1}^{\infty}\frac{F_k^2F_{2k-n}}{k\sqrt{2k-n}}. \end{equation} (5.20)

    When \omega_2 < n we get

    \begin{equation} \mathcal{I}[g,\varphi_n\mathbb{1}_{\{\varphi_{n,4} = 1, \varphi_{n,3} = 0\}\cap \{ \omega_1 = \omega_2 < n\}}] = \sum\limits_{k = \lceil \frac{n+1}{2}\rceil}^{n-1}\frac{F_k^2F_{2k-n}}{k}. \end{equation} (5.21)

    Therefore, from (5.3)–(5.21) the identity (2.12) follows; notice the crucial cancellations of (5.9) with (5.14), and of (5.8) with (5.13). The proof of (2.13) follows immediately from the fact that \partial_t H_n = -F_n^{-2} \partial_tF_n .
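These cancellations can be confirmed numerically: after truncating the infinite sums at a finite cutoff, the pairs (5.8), (5.13) and (5.9), (5.14) are exact negatives of each other for any test sequence. The cutoff K , the mode n , and the random decaying sequence below are arbitrary illustrative choices:

```python
import math
import random

# Numerical confirmation of the cancellations (5.8)+(5.13) = 0 and
# (5.9)+(5.14) = 0, with the infinite sums truncated at an arbitrary
# cutoff K and a random decaying test sequence F_k (illustrative choices).
random.seed(0)
K, n = 40, 5
F = {k: random.random() / k**2 for k in range(1, K + 1)}

S1 = sum(F[k] * F[m] / math.sqrt(k)
         for k in range(n + 1, K + 1) for m in range(1, n))          # sums in (5.8)/(5.13)
S2 = sum(F[k] * F[m] / math.sqrt(k * m)
         for k in range(n + 2, K + 1) for m in range(n + 1, k))      # sums in (5.9)/(5.14)

I_58, I_513 = -F[n] / math.sqrt(n) * S1, F[n] / math.sqrt(n) * S1    # (5.8), (5.13)
I_59, I_514 = -F[n] * S2, F[n] * S2                                  # (5.9), (5.14)

print(I_58 + I_513 == 0.0 and I_59 + I_514 == 0.0)  # True
```

The cancellation is exact (not merely approximate) because the two members of each pair are identical sums with opposite signs.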

    In this paper, we consider solutions to the Wave Kinetic Equation with initial data given by a countable sum of delta functions, whose dynamics are discrete for all times. We derive a system of equations that describe this dynamics and carry out a quantitative study of their convergence to a single delta function. In particular, we prove upper and lower bounds for the rate of convergence. In order to study the optimality of these bounds, we introduce and analyze a toy model which captures the leading order quadratic interactions. Finally, we show the existence of a family of non-negative solutions to a truncation of this toy model.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are thankful to Juan J. L. Velázquez for several helpful discussions and comments. The research of MD was supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00034 through the project TENSE. The research of both authors was supported by GNAMPA-INdAM.

    The authors declare no conflicts of interest.



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
