Processing math: 60%
Research article Special Issues

Nonlinear autoregressive sieve bootstrap based on extreme learning machines

  • Received: 29 May 2019 Accepted: 09 October 2019 Published: 22 October 2019
  • The aim of the paper is to propose and discuss a sieve bootstrap scheme based on Extreme Learning Machines for non linear time series. The procedure is fully nonparametric in its spirit and retains the conceptual simplicity of the residual bootstrap. Using Extreme Learning Machines in the resampling scheme can dramatically reduce the computational burden of the bootstrap procedure, with performances comparable to the NN-Sieve bootstrap and computing time similar to the ARSieve bootstrap. A Monte Carlo simulation experiment has been implemented, in order to evaluate the performance of the proposed procedure and to compare it with the NN-Sieve bootstrap. The distributions of the bootstrap variance estimators appear to be consistent, delivering good results both in terms of accuracy and bias, for either linear and nonlinear statistics (such as the mean and the median) and smooth functions of means (such as the variance and the covariance).

    Citation: Michele La Rocca, Cira Perna. Nonlinear autoregressive sieve bootstrap based on extreme learning machines[J]. Mathematical Biosciences and Engineering, 2020, 17(1): 636-653. doi: 10.3934/mbe.2020033

    Related Papers:

    [1] Shiqiang Feng, Dapeng Gao . Existence of traveling wave solutions for a delayed nonlocal dispersal SIR epidemic model with the critical wave speed. Mathematical Biosciences and Engineering, 2021, 18(6): 9357-9380. doi: 10.3934/mbe.2021460
    [2] Zhihong Zhao, Yan Li, Zhaosheng Feng . Traveling wave phenomena in a nonlocal dispersal predator-prey system with the Beddington-DeAngelis functional response and harvesting. Mathematical Biosciences and Engineering, 2021, 18(2): 1629-1652. doi: 10.3934/mbe.2021084
    [3] Jian Fang, Na Li, Chenhe Xu . A nonlocal population model for the invasion of Canada goldenrod. Mathematical Biosciences and Engineering, 2022, 19(10): 9915-9937. doi: 10.3934/mbe.2022462
    [4] Guo Lin, Shuxia Pan, Xiang-Ping Yan . Spreading speeds of epidemic models with nonlocal delays. Mathematical Biosciences and Engineering, 2019, 16(6): 7562-7588. doi: 10.3934/mbe.2019380
    [5] Xixia Ma, Rongsong Liu, Liming Cai . Stability of traveling wave solutions for a nonlocal Lotka-Volterra model. Mathematical Biosciences and Engineering, 2024, 21(1): 444-473. doi: 10.3934/mbe.2024020
    [6] Xiao-Min Huang, Xiang-ShengWang . Traveling waves of di usive disease models with time delay and degeneracy. Mathematical Biosciences and Engineering, 2019, 16(4): 2391-2410. doi: 10.3934/mbe.2019120
    [7] Qiaoling Chen, Fengquan Li, Sanyi Tang, Feng Wang . Free boundary problem for a nonlocal time-periodic diffusive competition model. Mathematical Biosciences and Engineering, 2023, 20(9): 16471-16505. doi: 10.3934/mbe.2023735
    [8] Wenhao Chen, Guo Lin, Shuxia Pan . Propagation dynamics in an SIRS model with general incidence functions. Mathematical Biosciences and Engineering, 2023, 20(4): 6751-6775. doi: 10.3934/mbe.2023291
    [9] Max-Olivier Hongler, Roger Filliger, Olivier Gallay . Local versus nonlocal barycentric interactions in 1D agent dynamics. Mathematical Biosciences and Engineering, 2014, 11(2): 303-315. doi: 10.3934/mbe.2014.11.303
    [10] Tong Li, Zhi-An Wang . Traveling wave solutions of a singular Keller-Segel system with logistic source. Mathematical Biosciences and Engineering, 2022, 19(8): 8107-8131. doi: 10.3934/mbe.2022379
  • The aim of the paper is to propose and discuss a sieve bootstrap scheme based on Extreme Learning Machines for non linear time series. The procedure is fully nonparametric in its spirit and retains the conceptual simplicity of the residual bootstrap. Using Extreme Learning Machines in the resampling scheme can dramatically reduce the computational burden of the bootstrap procedure, with performances comparable to the NN-Sieve bootstrap and computing time similar to the ARSieve bootstrap. A Monte Carlo simulation experiment has been implemented, in order to evaluate the performance of the proposed procedure and to compare it with the NN-Sieve bootstrap. The distributions of the bootstrap variance estimators appear to be consistent, delivering good results both in terms of accuracy and bias, for either linear and nonlinear statistics (such as the mean and the median) and smooth functions of means (such as the variance and the covariance).


    As one of the most basic models in modeling infectious diseases, the SIR epidemiological model was introduced by Kermack and McKendrick [1] in 1927. Since then, a lot of differential equations have been studied as models for the spread of infectious diseases. Considering a continuous vaccination strategy, let V be a new group of vaccinated individuals, Liu et al. [2] formulated the following system of ordinary differential equations:

    {dS(t)dt=Λβ1S(t)I(t)αS(t)μ1S(t),dV(t)dt=αS(t)β2V(t)I(t)(γ1+μ1)V(x,t),dI(t)dt=β1S(t)I(t)+β2V(t)I(t)γI(x,t)μ3I(x,t),dR(t)dt=γ1V(t)+γI(t)μ1R(t), (1.1)

    where S(t), V(t), I(t) and R(t) denote the densities of susceptible, vaccinated, infective and removed individuals at time t, respectively. Λ denote the recruitment rate of susceptible individuals, μ1 denote the natural death rate. β1 is the rate of disease transmission between susceptible and infectious individuals, and β2 is the rate of disease transmission between vaccinated and infected individuals. γ denote the recovery rate, α is the vaccination rate and γ1 is the rate at which a vaccinated individual obtains immunity. In [2], the authors shown that the global dynamics of model (1.1) is completely determined by the basic reproduction number: that is, if the number is less than unity, then the disease-free equilibrium is globally asymptotically stable, while if the number is greater than unity, then a positive endemic equilibrium exists and it is globally asymptotically stable. Moreover, it was observed in Liu et al. [2] that vaccination has an effect of decreasing the basic reproduction number. By using the classical method of Lyapunov and graph-theoretic approach, Kuniya [3] studied the global stability of a multi-group SVIR epidemic model. Xu et.al [4] formulated a multi-group epidemic model with distributed delay and vaccination age, the authors established the global stability of the model, furthermore, the stochastic perturbation of the model is studied and it is proved that the endemic equilibrium of the stochastic model is stochastically asymptotically stable in the large under certain conditions. In [5,6,7], the global stability of different SVIR models with age structure are investigated.

    On the other hand, in order to understand the geographic spread of infectious disease, the spatial effect would give insights into disease spread and control. Due to this fact, many literatures have studied the spatial effects on epidemics by using reaction-diffusion equations (see, for instance, [8,9,10,11,12,13,14,15,16,17] and the references therein). In the study of reaction-diffusion models, the Laplacian operator describes the random diffusion of each individual, but it can not describe the long range diffusion. Therefore, a nonlocal dispersal term has been established, which is by a convolution operator:

    Jϕ(x)ϕ(x)=IRJ(xy)ϕ(y)dyϕ(x), (1.2)

    where ϕ(x) denote the densities of individuals at position x, J(xy) is interpreted as the probability of jumping from position y to position x, the convolution IRJ(xy)ϕ(y)dy is the rate at which individuals arrive at position x from all other positions, while IRJ(xy)ϕ(x)dy=ϕ(x) is the rate at which they leave position x to reach any other position. Problems involving such operators are called nonlocal diffusion problems and have appeared in various references [18,19,20,21,22,23,24,25,26,27,28,29,30,31].

    Recently, Li et al. [23] proposed a nonlocal dispersal SIR model with delay:

    {S(x,t)t=d1(JS(x,t)S(x,t))+Λβ1S(x,t)I(x,tτ)1+θI(x,tτ)μ1S(x,t),I(x,t)t=d2(JI(x,t)I(x,t))+β1S(x,t)I(x,tτ)1+θI(x,tτ)γI(x,t)μ3I(x,t),R(x,t)t=d3(JR(x,t)R(x,t))+γI(x,t)μ1R(x,t), (1.3)

    where S(x,t), I(x,t) and R(x,t) denote the densities of susceptible, infective and removed individuals at position x and time t, respectively. θ measures the saturation level. di(i=1,2,3) describes the spatial motility of each compartments. The biological meaning of other parameters are the same as in model (1.1). The authors find that there exists traveling wave solution if the basic reproduction number 0>1 and the wave speed cc, where c is the minimal wave speed. They also obtain the nonexistence of traveling wave solution for 0>1 and any 0<c<c or 0<1.

    Motivated by [2] and [23], in this paper, we consider a nonlocal dispersal epidemic model with vaccination and delay. Precisely, we study the following model.

    {S(x,t)t=d1(JS(x,t)S(x,t))+Λβ1S(x,t)I(x,tτ)αS(x,t)μ1S(x,t),V(x,t)t=d2(JV(x,t)V(x,t))β2V(x,t)I(x,tτ)+αS(x,t)(γ1+μ1)V(x,t),I(x,t)t=d3(JI(x,t)I(x,t))+β1S(x,t)I(x,tτ)+β2V(x,t)I(x,tτ)γI(x,t)μ3I(x,t),R(x,t)t=d4(JR(x,t)R(x,t))+β1S(x,t)I(x,tτ)+γ1V(x,t)+γI(x,t)μ4R(x,t), (1.4)

    where S(x,t), V(x,t), I(x,t) and R(x,t) denote the densities of susceptible, vaccinated, infective and removed individuals at position x and time t, respectively. di(i=1,2,3,4) describes the spatial motility of each compartments. The biological meaning of other parameters are the same as in model (1.1). J is the standard convolution operator satisfying the following assumptions.

    Assumption 1.1. [23,24] The kernel function J satisfies

    (J1) JC1(IR), J(x)0,  J(x)=J(x),  IRJ(x)dx=1 and J is compactly supported.

    (J2) There exists a constant λM(0,+) such that

    IRJ(x)eλxdx<+,  for  any  λ[0,λM)

    and

    limλλM0IRJ(x)eλxdx+.

    The organization of this paper is as follows. In section 2, we proved the existence of traveling wave solutions of (1.4) for c>c by applying Schauder's fixed point theorem and Lyapunov method. In section 3, we show that the existence of traveling wave solutions of (1.4) for c=c. Furthermore, we investigate the nonexistence of traveling wave solutions under some conditions in section 4. At last, there is a brief discussion.

    In this section, we study the existence of traveling wave solutions of system (1.4). Since we have assumed that the recovered have gained permanent immunity and R(x,t) is decoupled from other equations, we indeed need to study the following subsystem of (1.4)

    {S(x,t)t=d1(JS(x,t)S(x,t))+Λβ1S(x,t)I(x,tτ)αS(x,t)μ1S(x,t),V(x,t)t=d2(JV(x,t)V(x,t))β2V(x,t)I(x,tτ)+αS(x,t)μ2V(x,t),I(x,t)t=d3(JI(x,t)I(x,t))+(β1S(x,t)+β2V(x,t))I(x,tτ)γI(x,t)μ3I(x,t). (2.1)

    where μ2=γ1+μ1. Obviously, system (2.1) always has a disease-free equilibrium E0=(S0,V0,0)=(Λμ1+α,Λαμ2(μ1+α),0). Denote the basic reproduction number as following:

    0=β1S0+β2V0μ3+γ. (2.2)

    Furthermore, there exists another equilibrium E=(S,V,I) satisfying

    {Λβ1SIαSμ1S=0,β2VI+αSμ2V=0,(β1S+β2V)IγIμ3I=0. (2.3)

    From [2,Theorem 2.1], system (2.1) has a unique positive equilibrium E if 0>1.

    Let ξ=x+ct and substituting ξ into system (2.1), then we obtain the wave form equations as

    {cS(ξ)=d1(JS(ξ)S(ξ))+Λβ1S(ξ)I(ξcτ)αS(ξ)μ1S(ξ),cV(ξ)=d2(JV(ξ)V(ξ))+αS(ξ)β2V(ξ)I(ξcτ)μ2V(ξ),cI(ξ)=d3(JI(ξ)I(ξ))+β1S(ξ)I(ξcτ)+β2V(ξ)I(ξcτ)γI(ξ)μ3I(ξ). (2.4)

    We want to find traveling wave solutions with the following asymptotic boundary conditions:

    limξ(S(ξ),V(ξ),I(ξ))=(S0,V0,0) (2.5)

    and

    limξ+(S(ξ),V(ξ),I(ξ))=(S,V,I). (2.6)

    Consider the following linear system of system (2.4) at infection-free equilibrium (S0,V0,0),

    cI(ξ)=d3(JI(ξ)I(ξ))+β1S0I(ξcτ)+β2V0I(ξcτ)(γ+μ3)I(ξ). (2.7)

    Let I(ξ)=eλξ, we have

    Δ(λ,c)d3IRJ(x)eλxdx(d3+γ+μ3)cλ+β1S0ecτλ+β2V0ecτλ=0. (2.8)

    By some calculations, we obtain

    Δ(0,c)=β1S0+β2V0γμ3,   limc+Δ(λ,c)=  for  λ>0,Δ(λ,c)λ|(0,c)=ccτ(β1S0+β2V0)<0  for  c>0,Δ(λ,c)c=λτλecτλ(β1S0+β2V0)<0  for  λ>0,2Δ(λ,c)λ2=d3IRJ(x)x2eλxdx+(cτ)2ecτλ(β1S0+β2V0)>0.

    For any cIR, Δ(0,c)=01, gives us Δ(0,c)>0 if 0>1. Then there exist c>0 and λ>0 such that Δ(λ,c)λ|(λ,c)=0, we have the following lemma.

    Lemma 2.1. Let 0>1, we have

    (ⅰ) If c=c, then Δ(λ,c)=0 has two same positive real roots λ;

    (ⅱ) If 0<c<c, then Δ(λ,c)>0 for all λ(0,λc,τ), where λc,τ(0,+];

    (ⅲ) If c>c, then Δ(λ,c)=0 has two positive real roots λ1(c),λ2(c).

    Denote λc=λ1(c), from Lemma 2.1, we have

    0<λc<λ<λ2(c)<λc,τ.

    For the followings in this section, we always fix c>c and 0>1. Define the following functions:

    {¯S(ξ)=S0,¯V(ξ)=V0,¯I(ξ)=eλcξ,  {S_(ξ)=max{S0M1eε1ξ,0},V_(ξ)=max{V0M2eε2ξ,0},I_(ξ)=max{eλcξ(1M3eε3ξ),0},

    where Mi and εi(i=1,2,3) are some positive constants to be determined in the following lemmas.

    Lemma 2.2. The function ¯I(ξ)=eλcξ satisfies

    cI(ξ)d3(JI(ξ)I(ξ))+β1S0I(ξcτ)+β2V0I(ξcτ)γI(ξ)μ3I(ξ). (2.9)

    Lemma 2.3. The functions ¯S(ξ)=S0 and ¯V(ξ)=V0 satisfy

    {cS(ξ)d1(JS(ξ)S(ξ))+Λβ1S(ξ)I_(ξcτ)αS(ξ)μ1S(ξ),cV(ξ)d2(JV(ξ)V(ξ))+αS(ξ)β2V(ξ)I_(ξcτ)μ2V(ξ). (2.10)

    The proof is trivial, so we omitted the above two lemmas.

    Lemma 2.4. For each 0<ε1<λc sufficiently small and M1 large enough, the function S_(ξ)=max{S0M1eε1ξ,0} satisfies

    cS(ξ)d1(JS(ξ)S(ξ))+Λβ1S(ξ)¯I(ξcτ)(μ1+α)S(ξ), (2.11)

    with ξX11ε1lnS0M1.

    Proof. See Appendix A.

    Lemma 2.5. For each 0<ε2<λc sufficiently small and M2 large enough, the function V_(ξ)=max{V0M2eε2ξ,0} satisfies

    cV(ξ)d2(JV(ξ)V(ξ))+αS_(ξ)β2V(ξ)¯I(ξcτ)μ2V(ξ), (2.12)

    with ξX21ε2lnV0M2.

    The proof is similar with Lemma 2.4.

    Lemma 2.6. Let 0<ε3<min{ε1/2,ε2/2} and M3>max{S0,V0} is large enough, then the function I_(ξ)=max{eλcξ(1M3eε3ξ),0} satisfies

    cI(ξ)d3(JI(ξ)I(ξ))+β1S_(ξ)I(ξcτ)+β2V_(ξ)I(ξcτ)γI(ξ)μ3I(ξ), (2.13)

    with ξX31ε3ln1M3.

    Proof. See Appendix B.

    Let X>max{X1,X2,X3}, define

    ΓX={(ϕφψ)C([X,X],IR3)|S_(ξ)ϕ(ξ)S0,  ϕ(X)=S_(X),  for ξ[X,X];V_(ξ)φ(ξ)V0,  φ(X)=V_(X),  for ξ[X,X];I_(ξ)ψ(ξ)¯I(ξ),  ψ(X)=I_(X),  for ξ[X,X].}.

    For given (ϕ(ξ),φ(ξ),ψ(ξ))ΓX, define

    ˆϕ(ξ)={ϕ(X),  for ξ>X,ϕ(ξ),   for ξ[Xcτ,X],S_(ξ),   for ξXcτ,   ˆφ(ξ)={φ(X),  for ξ>X,φ(ξ),   for ξ[Xcτ,X],V_(ξ),   for ξXcτ,

    and

    ˆψ(ξ)={ψ(X),  for ξ>X,ψ(ξ),   for ξ[Xcτ,X],I_(ξ),    for ξXcτ.

    We have

    {S_(ξ)ˆϕ(ξ)S0,V_(ξ)ˆφ(ξ)V0,I_(ξ)ˆψ(ξ)¯I(ξ).

    For any ξ[X,X], consider the following initial value problem

    {cS(ξ)=d1IRJ(y)ˆϕ(ξy)dy+Λβ1S(ξ)ψ(ξcτ)(d1+μ1+α)S(ξ),cV(ξ)=d2IRJ(y)ˆφ(ξy)dy+αϕ(ξ)β2V(ξ)ψ(ξcτ)(d2+μ2)V(ξ),cI(ξ)=d3IRJ(y)ˆψ(ξy)dy+β1ϕ(ξ)ψ(ξcτ)+β2φ(ξ)ψ(ξcτ)(d3+γ+μ3)I(ξ),S(X)=S_(X),   V(X)=V_(X),   I(X)=I_(X). (2.14)

    From the standard theory of functional differential equations (see [32]), the initial value problem (2.14) admits a unique solution (SX(ξ),VX(ξ),IX(ξ)) satisfying

    (SX,VX,IX)C1([X,X]),

    this defines an operator A=(A1,A2,A3):ΓXC1([X,X]) as

    SX=A1(ϕ,φ,ψ), VX=A2(ϕ,φ,ψ), IX=A3(ϕ,φ,ψ).

    Next we show the operator A=(A1,A2,A3) has a fixed point in ΓX.

    Lemma 2.7. The operator A=(A1,A2,A3) maps ΓX into itself.

    Proof. Firstly, we show that S_(ξ)SX(ξ) for any ξ[X,X]. If ξ(X1,X), S_(ξ)=0 is a lower solution of the first equation of (2.14). If ξ(X,X1), S_(ξ)=S0M1eε1ξ, by Lemma 2.4, we have

    cS_(ξ)d1IRJ(y)ˆϕ(ξy)dyΛ+β1S_(ξ)ψ(ξcτ)(d1+μ1+α)S_(ξ)cS_(ξ)d1IRJ(y)S_(ξy)dyΛ+β1S_(ξ)¯I(ξcτ)(d1+μ1+α)S_(ξ))0,

    which implies that S_(ξ)=S0M1eε1ξ is a lower solution of the first equation of (2.14). Thus S_(ξ)SX(ξ) for any ξ[X,X].

    Secondly, we show that SX(ξ)¯S(ξ)=S0 for any ξ[X,X]. In fact,

    d1IRJ(y)ˆϕ(ξy)dy+Λβ1S0ψ(ξcτ)(d1+μ1+α)S0d1IRJ(y)S0dy+Λβ1S0I_(ξcτ)(d1+μ1+α)S00,

    thus ¯S(ξ)=S0 is an upper solution to the first equation of (2.14), which gives us SX(ξ)S0 for any ξ[X,X].

    Similarly, V_(ξ)VX(ξ)¯V(ξ) and I_(ξ)IX(ξ)¯I(ξ) for any ξ[X,X].

    Lemma 2.8. The operator A is completely continuous.

    Proof. Suppose (ϕi(ξ),φi(ξ),ψi(ξ))ΓX, i=1,2.

    SX,i(ξ)=A1(ϕi(ξ),φi(ξ),ψi(ξ)),VX,i(ξ)=A2(ϕi(ξ),φi(ξ),ψi(ξ)),IX,i(ξ)=A3(ϕi(ξ),φi(ξ),ψi(ξ)),

    We show the operator A is continuous. By direct calculation, we have

    SX(ξ)=S_(X)exp{1cξX(d1+μ1+α+β1ψ(scτ))ds}+1cξXexp{1cξη(d1+μ1+α+β1ψ(scτ))ds}fϕ(η)dη,
    VX(ξ)=V_(X)exp{1cξX(d2+μ2+β2ψ(scτ))ds}+1cξXexp{1cξη(d2+μ2+β2ψ(scτ))ds}fφ(η)dη,

    and

    IX(ξ)=I_(X)exp{(d3+γ+μ3)(ξ+X)c}+1cξXexp{(d3+γ+μ3)(ξη)c}fψ(η)dη.

    where

    fϕ(η)=d1IRJ(ηy)ˆϕ(y)dy+Λ,fφ(η)=d2IRJ(ηy)ˆφ(y)dy+αϕ(η),fψ(η)=d3IRJ(ηy)ˆψ(y)dy+(β1ϕ(η)+β2φ(η))ψ(ηcτ).

    For any (ϕi,φi,ψi)ΓX, i=1,2, we have

    |fϕ1(η)fϕ2(η)|=d1|IRJ(ηy)[ˆϕ1(y)ˆϕ2(y)]dy|d1|XXJ(ξy)(ϕ1(y)ϕ2(y))dy|+d1|XJ(ξy)(ϕ1(X)ϕ2(X))dy|2d1maxy[X,X]|ϕ1(y)ϕ2(y)|,
    |fφ1(η)fφ2(η)|=d2|IRJ(ηy)[ˆφ1(y)ˆφ2(y)]dy+α(ϕ1(η)ϕ2(η))|2d2maxy[X,X]|φ1(y)φ2(y)|+αmaxy[X,X]|ϕ1(y)ϕ2(y)|,
    |fψ1(η)fψ2(η)|(2d2+β1S0+β2V0)maxy[X,X]|ψ1(y)ψ2(y)|+β1eλcξmaxy[X,X]|ϕ1(y)ϕ2(y)|+β2eλcξmaxy[X,X]|φ1(y)φ2(y)|.

    Here we use

    |β1ϕ2(ξ)ψ2(ξcτ)β1ϕ1(ξ)ψ1(ξcτ)||β1ϕ2(ξ)ψ2(ξcτ)β1ϕ2(ξ)ψ1(ξcτ)|+|β1ϕ2(ξ)ψ1(ξcτ)β1ϕ1(ξ)ψ1(ξcτ)|β1S0maxy[X,X]|ψ1(y)ψ2(y)|+β1eλcξmaxy[X,X]|ϕ1(y)ϕ2(y)|.

    and

    |β2φ2(ξ)ψ2(ξcτ)β2φ1(ξ)ψ1(ξcτ)|β2V0maxy[X,X]|ψ1(y)ψ2(y)|+β2eλcξmaxy[X,X]|φ1(y)φ2(y)|.

    Thus, we obtain that the operator A is continuous. Next, we show A is compact. Indeed, since SX, VX and IX are class of C1([X,X]), note that

    c(SX,1(ξ)SX,2(ξ))+(d1+μ1+α)(SX,1(ξ)SX,2(ξ))=d1IRJ(ξy)(ˆϕ1(y)ˆϕ2(y))dy+β1ϕ2(ξ)ψ2(ξcτ)β1ϕ1(ξ)ψ1(ξcτ)(2d1+β1eλcξ)maxy[X,X]|ϕ1(y)ϕ2(y)|+β1S0maxy[X,X]|ψ1(y)ψ2(y)|.

    Same arguments with VX and IX, give us SX, VX and IX are bounded. Then A is compact and the operator A is completely continuous. This ends the proof.

    Obviously, ΓX is a bounded closed convex set, applying the Schauder's fixed point theorem ([33] Corollary 2.3.10), we have the following theorem.

    Theorem 2.1. There exists (SX,VX,IX)ΓX such that

    (SX(ξ),VX(ξ),IX(ξ))=A(SX,VX,IX)(ξ)

    for ξ[X,X].

    Now we are in position to show the existence of traveling wave solutions, before that we do some estimates for SX(), VX() and IX().

    Define

    C1,1([X,X])={uC1([X,X])|u,uare Lipschitz continuous}

    with norm

    uC1,1([X,X])=maxx[X,X]|u|+maxx[X,X]|u|+supx,y[X,X]xy|u(x)u(y)||xy|.

    Lemma 2.9. There exists a constant C(Y)>0 such that

    SXC1,1([Y,Y])C(Y),  VXC1,1([Y,Y])C(Y),  IXC1,1([Y,Y])C(Y)

    for Y<X and X>max{X1,X2,X3}.

    Proof. Recall that (SX,EX,IX) is the fixed point of the operator A, then

    cSX(ξ)=d1+J(y)ˆSX(ξy)dy+Λβ1SX(ξ)IX(ξcτ)(d1+μ1+α)SX(ξ), (2.15)
    cVX(ξ)=d2+J(y)ˆVX(ξy)dy+αSX(ξ)β2VX(ξ)IX(ξcτ)(d2+μ2)VX(ξ), (2.16)
    cIX(ξ)=d3+J(y)ˆIX(ξy)dy+β1SX(ξ)IX(ξcτ)+β2VX(ξ)IX(ξcτ)(d3+μ3)IX(ξ), (2.17)

    where

    (ˆSX(ξ),ˆVX(ξ),ˆIX(ξ))={(SX(X),VX(X),IX(X)),  for ξ>X,(SX(ξ),VX(ξ),IX(ξ)),      for ξ[Xcτ,X],(S_(ξ),V_(ξ),I_(ξ)),            for ξXcτ,

    following that SX(ξ)S0,  VX(ξ)V0,  IX(ξ)eλcY for any ξ[Y,Y]. Then

    |SX(ξ)|d1c|+J(y)ˆSX(ξy)dy|+Λc+d1+μ1+αc|SX(ξ)|+β1c|SX(ξ)||IX(ξcτ)|2d1+μ1+αcS0+Λc+β1S0ceλcY,|VX(ξ)|2d2+μ2cV0+αS0c+β2V0ceλcY,|IX(ξ)|(d3+μ3c+β1S0c+β2V0c)eλcY.

    Thus, there exists some constant C1(Y)>0 such that

    SXC1([Y,Y])C1(Y),  VXC1([Y,Y])C1(Y),  IXC1([Y,Y])C1(Y).

    Then for any ξ1,ξ2[Y,Y] such that

    |SX(ξ1)SX(ξ2)|C1(Y)|ξ1ξ2|,  |VX(ξ1)VX(ξ2)|C1(Y)|ξ1ξ2|,  |IX(x1)IX(x2)|C1(Y)|ξ1ξ2|.

    From (2.15), we have

    c|SX(ξ1)SX(ξ2)|d1|+J(y)(ˆSX(ξ1y)ˆSX(ξ2y))dy|+(d1+μ1+α)|SX(ξ1)SX(ξ2)|+S0|IX(ξ1)IX(ξ2)|.

    Recall (J1) of Assumption 1.1, we know J is Lipschitz continuous and compactly supported on IR, let L be the Lipschitz constant for J and R be the radius of suppJ. Then

    d1|+J(y)(ˆSX(ξ1y)ˆSX(ξ2y))dy|=d1|RRJ(y)ˆSX(ξ1y)dyRRJ(y)ˆSX(ξ2y)dy|=d1|ξ1+Rξ1RJ(ξ1y)ˆSX(y)dyξ2+Rξ2RJ(y)ˆSX(y)dy|=d1|(ξ2Rξ1R+ξ2+Rξ2R+ξ1+Rξ2+R)J(ξ1y)ˆSX(y)dyξ2+Rξ2RJ(y)ˆSX(y)dy|d1|ξ1+Rξ2+RJ(ξ1y)ˆSX(y)dy|+d1|ξ2Rξ1RJ(ξ1y)ˆSX(y)dy|+d1|ξ2+Rξ2R(J(ξ1y)J(ξ2y))ˆSX(y)dy|d1(2S0JL+2RLS0)|ξ1ξ2|.

    Thus there exists some constant C2(Y)>0 such that

    |SX(ξ1)SX(ξ2)|C2(Y)|ξ1ξ2|.

    Similarly

    |VX(ξ1)VX(ξ2)|C2(Y)|ξ1ξ2|,  |IX(ξ1)IX(ξ2)|C2(Y)|ξ1ξ2|.

    From the above discussion, there exists some constant C(Y)>0 for any Y<X that is independent of X such that

    SXC1,1([Y,Y])C(Y),  VXC1,1([Y,Y])C(Y),  IXC1,1([Y,Y])C(Y).

    Now let {Xn}+n=1 be an increasing sequence such that Xn{X1,X2,X3}, Xn>Y+R for each n and limXn=+, where R is the radius of suppJ. For every c>c, we have (SXn,VXn,IXn)ΓXn satisfying Lemma 2.9 and Equations (2.15)-(2.17).For the sequence (SXn,VXn,IXn), we can extract a subsequence denoted by {SXnk}kIN, {VXnk}kIN and {IXnk}kIN tending to functions (S,V,I)C1(IR) in the following topologies

    SXnkS,  VXnkV  and  IXnkI  in  C1loc(IR)  as  k+.

    Since J is compactly supported, applying the dominated convergence theorem, thus

    limk+IRJ(y)ˆSXnk(ξy)dy=IRJ(y)S(ξy)dy=JS(ξ),
    limk+IRJ(y)ˆVXnk(ξy)dy=IRJ(y)V(ξy)dy=JV(ξ)

    and

    limk+IRJ(y)ˆIXnk(ξy)dy=IRJ(y)I(ξy)dy=JI(ξ).

    Moreover, (S,V,I) satisfies system (2.4) and

    S_(ξ)S(ξ)S0,  V_(ξ)V(ξ)V0,  I_(ξ)I(ξ)eλcξ.

    Next, we show that I(ξ) is bounded in IR by the method in [34] (see also [35,36]).

    Lemma 2.10. There exists some positive constant C such that

    IRJ(y)I(ξy)I(ξ)dy<C,   I(ξcτ)I(ξ)<C   and   |I(ξ)I(ξ)|<C.

    Proof. Let θ(ξ)=I(ξ)I(ξ), from the third equation of (2.4), we have

    θ(ξ)d3c(IRJ(y)I(ξy)I(ξ)dy1)γ+μ3c=d3cIRJ(y)eξyξθ(s)dsdy(d3+γ+μ3c).

    Set ϖ=(d3+γ+μ3c) and W(ξ)=exp{ϖξ+ξ0θ(s)ds}, thus

    W(ξ)=(ϖ+θ(ξ))W(ξ)d3cIRJ(y)eξyξθ(s)dsdyW(ξ),

    that is W(ξ) is non-decreasing. We can take some R0>0 with 2R0<R, where R is the radius of suppJ. Then by the same argument in [34,Lemma 2.2], we have

    W(ξ)d3cR0IRJ(y)eϖyW(ξR0y)dy

    and

    W(ξ+R0)σ0W(ξ)   for  all  ξIR,

    where

    σ0d3cR02R0J(y)eϖydy.

    Thus

    IRJ(y)I(ξy)I(ξ)dy=0J(y)I(ξy)I(ξ)dy++0J(y)I(ξy)I(ξ)dy=0J(y)eϖyW(ξy)W(ξ)dy++0J(y)eϖyW(ξy)W(ξ)dyσ00J(y)eϖyW(ξyR0)W(ξ)dy++0J(y)eϖydycσ0d3R0++0J(y)eϖydy.

    Again with the third equation of (2.4), we have

    I(ξ)+ϖI(ξ)=d3JI(ξ)+β1S(ξ)I(ξcτ)+β2V(ξ)I(ξcτ)>0   for  all  ξIR.

    Let U(ξ)=eϖξI(ξ), then U(ξ)0, it follows that

    I(ξcτ)I(ξ)eϖcτ.

    Furthermore,

    |I(ξ)I(ξ)|d3cIRJ(y)I(ξy)I(ξ)dy+(β1S0+β2V0)I(ξcτ)I(ξ)+ϖ.

    This completes the proof.

    Lemma 2.11. Choose ck(c,c+1) and let {ck,Sk,Vk,Ik} be a sequence of traveling waves of (2.1) with speeds {ck}. If there is a sequence {ξk} such that Ik(ξk)+ as k+, then Sk(ξk)0 and Vk(ξk)0 as k+.

    Proof. Assume that there is a subsequence of {ξk}kIN again denoted by ξk, such that Ik(ξk)+ as k+ and Sk(ξk)ε in IR for all kIN with some positive constant ε. From the first equation of (2.4), we have

    Sk(ξ)2d1S0+Λcdelta0  in  IR.

    It follows that

    Sk(ξ)ε2,   ξ[ξkdelta,ξk],

    for all kIN, where delta=εdelta0. By Lemma 2.10, we have |IkIk|<C0 for some C0>0. Then

    Ik(ξk)Ik(ξcτ)=exp{ξkξcτIk(s)Ik(s)ds}eC0(cτ+delta),  ξ[ξkdelta,ξk]

    for all kIN. Thus

    minξ[ξkdelta,ξk]Ik(ξcτ)eC0(cτ+delta)Ik(ξk),

    which give us

    minξ[ξkdelta,ξk]Ik(ξcτ)+  as  k+

    since Ik(ξk)+ as k+. Recalling the first equation of (2.4), one can have

    maxξ[ξkdelta,ξk]Sk(ξ)delta0β1ε2minξ[ξkdelta,ξk]Ik(ξcτ)  as  k+.

    Moreover, there exists some K>0 such that

    Sk(ξ)2S0delta,  kK  and  ξ[ξkdelta,ξk].

    Note that Sk<S0 in IR for each kIN. Hence Sk(ξk)S0 for all kK, which reduces to a contradiction since Sk(ξk)ε in IR for all kIN with some positive constant ε. Similarly, we can show that Vk(ξk)0 as k+. This completes the proof.

    Lemma 2.12. If lim supξI(ξ)=, then limξI(ξ)=.

    The proof is similar to that of [34,Lemma 2.4], so we omit the details. With the previous lemmas, we can show that I(ξ) is bounded in IR.

    Theorem 2.2.I(ξ) is bounded in IR.

    Proof. Assume that lim supξI(ξ)=, then we have limξS(ξ)=0 and limξV(ξ)=0 from Lemma 2.11 and Lemma 2.12. Set θ(ξ)=I(ξ)I(ξ), from the third equation of (2.4), we have

    cθ(ξ)=d3IRJ(y)eξyξθ(s)ds(d3+γ+μ3)+B(ξ),

    where

    B(ξ)=[β1S(ξ)+β2V(ξ)]I(ξcτ)I(ξ).

    Since I(ξcτ)I(ξ)<C for some positive constant C from Lemma 2.10. By using [34,Lemma 2.5], we can get that limξ+θ(ξ) exists and satisfies the following equation

    f(λ,c)d3(IRJ(y)eλy1)cλ(γ+μ3).

    By some calculations, we obtain

    f(0,c)<0,  f(λ,c)λ|λ=0<0,  2f(λ,c)λ2>0  and  limλ+f(λ,c)=.

    Thus, I(ξ) is bounded by using the same arguments in [34,Theorem 2.6]. This ends the proof.

    Since I(ξ) is bounded in IR, we assume that there exists a positive constant ρ< such that I(ξ)<ρ. Furthermore, it can be verified Λμ1+α+β1ρ is a lower solution of S and αΛ(μ1+α+β1ρ)(μ2+β2ρ) is a lower solution of V. Then we have the following proposition.

    Proposition 2.1.S(ξ),V(ξ) and I(ξ) satisfy

    Λμ1+α+β1ρS(ξ)S0,  αΛ(μ1+α+β1ρ)(μ2+β2ρ)V(ξ)V0,  I_(ξ)I(ξ)ρ

    for ξIR.

    The following lemma is to show that I(ξ) cannot approach 0.

    Lemma 2.13. Assume that 0>1, then for each c>c, we have

    lim infξI(ξ)>0.

    Proof. We only need to show that if I(ξ)ε0 for some small enough constant ε0>0, then I(ξ)>0 for all ξIR. Assume by way of contradiction that there is no such ε0, that is there exist some sequence {ξk}kIN such that I(ξk)0 as k+ and I(ξk)0. Denote

    Sk(ξ)S(ξk+ξ),  Vk(ξ)V(ξk+ξ)  and  Ik(ξ)I(ξk+ξ).

    Thus we have Ik(0)0 as k+ and Ik(ξ)0 locally uniformly in IR as k+. As a consequence, there also holds that Ik(ξ)0 locally uniformly in IR as k+ by the third equation of (2.4). From the argument in [25,Theorem 2.9], we can obtain that S=S0 and V=V0.

    Let ψk(ξ)Ik(ξ)Ik(0). By Lemma 2.10, and in the view of

    ψk(ξ)=Ik(ξ)Ik(0)=Ik(ξ)Ik(ξ)ψk(ξ),

    we have ψk(ξ) and ψk(ξ) are also locally uniformly in IR as k+. Letting k+, thus

    cψ(ξ)=d3IRJ(y)ψ(ξy)dy+(β1S0+β2V0)ψ(ξcτ)(d3+γ+μ3)ψ(ξ).

    One can have ψ(ξ)>0 in IR. In fact, if there exist some ξ0 such that ψ(ξ0)=0 and ψ(ξ)>0 for all ξ<ξ0, then

    0=d3IRJ(y)ψ(ξ0y)dy+(β1S0+β2V0)ψ(ξ0cτ)>0,

    which is a contradiction.

    Denote Z(ξ)ψ(ξ)ψ(ξ), it is easy to verify Z(ξ) satisfies

    cZ(ξ)=d3IRJ(y)eξyξZ(s)dsdy+(β1S0+β2V0)eξcτξZ(s)ds(d3+γ+μ3). (2.18)

    Then by similar discussion in [25,Theorem 2.9], for 0>1 and c>c, we have

    0<ψ(0)=limk+ψn(0)=limk+In(0)In(0).

    Thus, I(ξk)=In(0)>0, which is a contradiction. This completes the proof.

    Remark 2.1. In the proof of Lemma 2.13, we need to show that Z(±) exist in Equation (2.18). In [25], the authors applying [37,Lemma 3.4] to show that Z(±) exist. There is a time delay term in Equation (2.18) which is different from [37,Lemma 3.4], but we can still using the method in [37,Lemma 3.4] to proof Z(±) exist. The proof is trivial, so we omitted it.

    Now, we can give the main result in this section.

    Theorem 2.3. Suppose 0>1, then for every c>c, system (2.1) admits a nontrivial traveling wave solution (S(x+ct),V(x+ct),I(x+ct)) satisfying the asymptotic boundary condition (2.5) and (2.6).

    Proof. First, it is easy to verify that S()=S0,V()=V0,I()=0 by Lemmas 2.4, 2.5 and 2.6.

    Next, we will show (S(ξ),V(ξ),I(ξ))=(S,V,I) as ξ+ by using Lyapunov function. From Proposition 2.1 and Lemma 2.13, we have S(ξ)>0, V(ξ)>0 and I(ξ)>0.

    Let g(x)=x1lnx, α+(y)=+yJ(x)dx, α(y)=yJ(x)dx. Since J is compactly supported, and recall that R is the radius of suppJ, hence

    α+(y)0  and  α(y)0  for   |y|R. (2.19)

    Define the following Lyapunov functional

    L(S,V,I)(ξ)=cSL1(ξ)+cVL2(ξ)+cIL3(ξ)+d1SU1(ξ)+d2VU2(ξ)+d3IU3(ξ)

    where

    L1(ξ)=g(S(ξ)S);   L2(ξ)=g(V(ξ)V);L3(ξ)=g(I(ξ)I)+(μ3+γ)Icτ0g(I(ξθ)I)dθ;U1(ξ)=+0α+(y)g(S(ξy)S)dy0α(y)g(S(ξy)S)dy;U2(ξ)=+0α+(y)g(V(ξy)V)dy0α(y)g(V(ξy)V)dy;U3(ξ)=+0α+(y)g(I(ξy)I)dy0α(y)g(I(ξy)I)dy.

    Thanks to [38,Theorem 1] and S(ξ)>0, V(ξ)>0, I(ξ)>0, we can get that L1(ξ), L2(ξ) and L3(ξ) are bounded from below. Furthermore, by using (2.19), Proposition 2.1 and Lemma 2.13, we can claim that U1(ξ), U2(ξ) and U3(ξ) is bounded from below. Thus L(S,V,I)(ξ) is well defined and bounded from below. Note that α±=12, dα+(y)dy=J(y) and dα(y)dy=J(y), we have

    dU1(ξ)dξ=ddξ+0α+(y)g(S(ξy)S)dyddξ0α(y)g(S(ξy)S)dy=+0α+(y)ddξg(S(ξy)S)dy0α(y)ddξg(S(ξy)S)dy=+0α+(y)ddyg(S(ξy)S)dy+0α(y)ddyg(S(ξy)S)dy=g(S(ξ)S)+J(y)g(S(ξy)S)dy.

    Similarly,

    dU2(ξ)dξ=g(V(ξ)V)+J(y)g(V(ξy)V)dy;dU3(ξ)dξ=g(I(ξ)I)+J(y)g(I(ξy)I)dy.

    By some calculations, it can be shown that

    ddξcτ0g(I(ξθ)I)dθ=cτ0ddξg(I(ξθ)I)dθ=cτ0ddθg(I(ξθ)I)dθ=I(ξ)II(ξcτ)I+lnI(ξcτ)I(ξ).

    Thus

    dL(ξ)dξ=(1SS(ξ))(d1(JS(ξ)S(ξ))+Λβ1S(ξ)I(ξcτ)(α+μ1)S(ξ))+(1VV(ξ))(d2(JV(ξ)V(ξ))+αS(ξ)β2V(ξ)I(ξcτ)μ2E(ξ))+(1II(ξ))(d3(JI(ξ)I(ξ))+β1S(ξ)I(ξcτ)+β2V(ξ)I(ξcτ)(γ+μ3)I(ξ))+(μ3+γ)I(I(ξ)II(ξcτ)I+lnI(ξcτ)I(ξ))+d1Sg(S(ξ)S)d1S+J(y)g(S(ξy)S)dy+d2Vg(V(ξ)V)d2V+J(y)g(V(ξy)V)dy+d3Ig(I(ξ)I)d3I+J(y)g(I(ξy)I)dyB1+B2,

    where

    B1=(1SS(ξ))d1(JS(ξ)S(ξ))+d1Sg(S(ξ)S)d1S+J(y)g(S(ξy)S)dy+(1VV(ξ))d2(JV(ξ)V(ξ))+d2Vg(V(ξ)V)d2V+J(y)g(V(ξy)V)dy+(1II(ξ))d3(JI(ξ)I(ξ))+d3Ig(I(ξ)I)d3I+J(y)g(I(ξy)I)dy,

    and

    B2=(1SS(ξ))(Λβ1S(ξ)I(ξcτ)(α+μ1)S(ξ))+(1VV(ξ))(αS(ξ)β2V(ξ)I(ξcτ)μ2E(ξ))+(1II(ξ))(β1S(ξ)I(ξcτ)+β2V(ξ)I(ξcτ)(γ+μ3)I(ξ))+(μ3+γ)I(I(ξ)II(ξcτ)I+lnI(ξcτ)I(ξ)).

    For B1, using lnS(ξ)S=lnS(ξy)SlnS(ξy)S(ξ), thus

    (1SS(ξ))d1(JS(ξ)S(ξ))+d1Sg(S(ξ)S)d1S+J(y)g(S(ξy)S)dy=d1S+J(y)[S(ξy)SS(ξy)S(ξ)lnS(ξ)S]d1S+J(y)g(S(ξy)S)dy=d1S+J(y)[g(S(ξy)S)g(S(ξy)S(ξ))]d1S+J(y)g(S(ξy)S)dy=d1S+J(y)g(S(ξy)S(ξ))dy.

    Then

    \begin{align} \nonumber B_1 = &-d_1S^*\int_{-\infty}^{+\infty}J(y)g\left(\frac{S(\xi-y)}{S(\xi)}\right)\rm{d} y-d_2V^*\int_{-\infty}^{+\infty}J(y)g\left(\frac{V(\xi-y)}{V(\xi)}\right)\rm{d} y\\ &-d_3I^*\int_{-\infty}^{+\infty}J(y)g\left(\frac{I(\xi-y)}{I(\xi)}\right)\rm{d} y. \end{align} (2.20)

    For B_2, by some calculation yields

    \begin{align*} B_2 = &\mu_1S^*\left(2-\frac{S(\xi)}{S^*}-\frac{S^*}{S(\xi)}\right)-\beta_1S^*I^*g\left(\frac{S(\xi)I(\xi-c\tau)}{S^*I(\xi)}\right)\\ &-\beta_2V^*I^*\left[g\left(\frac{V(\xi)I(\xi-c\tau)}{V^*I(\xi)}\right)+g\left(\frac{S(\xi)V^*}{S^*V(\xi)}\right)\right]\\ &-\mu_2V^*\left[g\left(\frac{V(\xi)}{V^*}\right)+g\left(\frac{S(\xi)V^*}{S^*V(\xi)}\right)\right]\\ &-(\alpha S^*+\beta_1S^*I^*)g\left(\frac{S^*}{S(\xi)}\right), \end{align*}

    here we use (\mu_3+\gamma)I^* = \beta_1S^*I^*+\beta_2V^*I^* and \alpha S(\xi)\frac{V^*}{V(\xi)} = (\beta_2V^*I^*+\mu_2V^*)\frac{S(\xi)V^*}{S^*V(\xi)}. Combining B_1 and B_2, we obtain L(\xi) is decreasing in \xi.

    Consider an increasing sequence \{\xi_n\}_{n\geq 0} with \xi_n>0 such that \xi_n\rightarrow+\infty when n\rightarrow+\infty and denote

    \{S_n(\xi) = S(\xi+\xi_n)\}_{n\geq 0}, \ \ \{V_n(\xi) = V(\xi+\xi_n)\}_{n\geq 0, }\ \ \textrm{and}\ \ \{I_n(\xi) = I(\xi+\xi_n)\}_{n\geq 0}.

    We can assume that S_n, \ V_n and I_n converge to some nonnegative functions S_\infty, \ V_\infty and I_\infty. Furthermore, since L(S, V, I)(\xi) is non-increasing on \xi, then there exists a constant \hat{C} and large n such that

    \hat{C}\leq L(S_n, V_n, I_n)(\xi) = L(S, V, I)(\xi+\xi_n)\leq L(S, V, I)(\xi).

    Therefore there exists some \tilde{\rm{d}elta}\in {\rm IR} such that \lim_{n\rightarrow\infty} L(S_n, V_n, I_n)(\xi) = \tilde{\rm{d}elta} for any \xi\in {\rm IR}. By Lebegue dominated convergence theorem, gives us

    \lim\limits_{n\rightarrow+\infty}L(S_n, V_n, I_n)(\xi) = L(S_\infty, V_\infty, I_\infty)(\xi), \ \xi\in {\rm IR}.

    Thus

    L(S_\infty, V_\infty, I_\infty)(\xi) = \tilde{\rm{d}elta}.

    Note that \frac{\rm{d} L}{\rm{d} \xi} = 0 if and only if S(\xi)\equiv S^*, V(\xi)\equiv V^* and I(\xi)\equiv I^*, it follows that

    (S_\infty, V_\infty, I_\infty)\equiv (S^*, V^*, I^*).

    This completes the proof.

    In this section, we investigate the existence of traveling wave solutions for the case c = c^* by a limiting argument(see [23,39]).

    Theorem 3.1. Suppose \Re_0>1, then for every c = c^*, system (2.1) admits a nontrivial traveling wave solution (S(x+c^*t), V(x+c^*t), I(x+c^*t)) satisfying

    \lim\limits_{\xi\rightarrow+\infty}(S(\xi), V(\xi), I(\xi)) = (S^*, V^*, I^*).

    Furthermore, if we assume that S(-\infty) and V(-\infty) exist, then (S(x+c^*t), V(x+c^*t), I(x+c^*t)) also satisfying

    \lim\limits_{\xi\rightarrow-\infty}(S(\xi), V(\xi), I(\xi)) = (S_0, V_0, 0).

    Proof. Let \{c_n\}\subset(c^*, c^*+1) be a decreasing sequence such that \lim\limits_{n\rightarrow\infty}c_n = c^*. Then for each c_n, there exists a traveling wave solution (S_n(\cdot), V_n(\cdot), I_n(\cdot)) of system (2.4) with asymptotic boundary condition (2.5) and (2.6). Since (S_n(\cdot+a), V_n(\cdot+a), I_n(\cdot+a)) are also solutions of (2.4) for any a\in{\rm IR}, we can assume that

    I_n(0) = \rm{d}elta^*, \ \ I_n(\xi)\leq\rm{d}elta^*, \ \ \xi \lt 0

    with 0<\rm{d}elta<I^* is small enough.

    Similar to [23,39], we can find a subsequence of (S_n, V_n, I_n), again denoted by (S_n, V_n, I_n), such that (S_n, V_n, I_n) and (S'_n, V'_n, I'_n) converge uniformly on every bounded interval to function (S, V, I) and (S', V', I'), respectively. Applying the Lebesgue dominated convergence theorem, it then follows that

    \lim\limits_{n\rightarrow\infty}J*S_n = J*S, \ \ \lim\limits_{n\rightarrow\infty}J*V_n = J*V, \ \ \textrm{and}\ \ \lim\limits_{n\rightarrow\infty}J*I_n = J*I

    on every bounded interval. Then we get that (S, V, I) satisfies system (2.4). From the proof of Theorem 2.3, the Lyapunov functional is independent of c. By the same argument in the proof of Theorem 2.3, we claim that I(\xi)>0 for any \xi\in{\rm IR}. Hence, we can still get that

    \lim\limits_{\xi\rightarrow+\infty}S(\xi) = S^*, \ \ \lim\limits_{\xi\rightarrow+\infty}V(\xi) = V^*, \ \ \lim\limits_{\xi\rightarrow+\infty}I(\xi) = I^*.

    Moreover, we have

    I(0) = \rm{d}elta^*, \ \ I(\xi)\leq\rm{d}elta^*, \ \ \xi \lt 0.

    Let

    S_{sup} = \limsup\limits_{\xi\rightarrow-\infty}S(\xi), \ \ V_{sup} = \limsup\limits_{\xi\rightarrow-\infty}V(\xi), \ \ I_{sup} = \limsup\limits_{\xi\rightarrow-\infty}I(\xi)

    and

    S_{inf} = \liminf\limits_{\xi\rightarrow-\infty}S(\xi), \ \ V_{inf} = \liminf\limits_{\xi\rightarrow-\infty}V(\xi), \ \ I_{inf} = \liminf\limits_{\xi\rightarrow-\infty}I(\xi).

    Next, we show that I(-\infty) exists. By way of contradiction, assume that I_{inf} < I_{sup}. Then there exist sequences \{x_n\} and \{y_n\} satisfying x_n, \ y_n\rightarrow-\infty as n\rightarrow+\infty such that

    \lim\limits_{n\rightarrow+\infty}I(x_n) = I_{inf}\ \ \textrm{}\ \ \lim\limits_{n\rightarrow+\infty}I(y_n) = I_{sup}.

    Since we assumed that S(-\infty) and V(-\infty) exist, then S_{sup} = S_{inf} = S(-\infty) and V_{sup} = V_{inf} = V(-\infty). From [40,Lemma 2.3], we can obtain that S'(-\infty) = 0 and V'(-\infty) = 0. For any sequence \{\xi_n\}, \xi_n\rightarrow -\infty as n\rightarrow+\infty, using Fatou Lemma, one have that

    S(-\infty)\leq\liminf\limits_{n\rightarrow\infty}J*S(\xi_n)\leq\limsup\limits_{n\rightarrow\infty}J*S(\xi_n)\leq S(-\infty).

    and

    V(-\infty)\leq\liminf\limits_{n\rightarrow\infty}J*V(\xi_n)\leq\limsup\limits_{n\rightarrow\infty}J*V(\xi_n)\leq V(-\infty).

    Thus, we have

    \lim\limits_{n\rightarrow\infty}[J*S(\xi_n)-S(\xi_n)] = 0\ \ \textrm{and}\ \ \lim\limits_{n\rightarrow\infty}[J*V(\xi_n)-V(\xi_n)] = 0

    Taking \xi = x_n and \xi = y_n in the first equation of system 2.4, and letting n\rightarrow\infty, we obtain that I_{inf} = I_{sup}, which is a contradiction. Hence, I(-\infty) exists and I(-\infty)<\rm{d}elta^*. From system (2.4) and [40,Lemma 2.3], we obtain

    \begin{equation} \left\{ \begin{array}{l} \displaystyle \Lambda - \beta_1 S(-\infty)I(-\infty) - \alpha S(-\infty) - \mu_1 S(-\infty = 0), \\ \displaystyle \alpha S(-\infty)- \beta_2 V(-\infty)I(-\infty) - \mu_2 V(-\infty) = 0, \\ \displaystyle \beta_1 S(-\infty)I(-\infty) + \beta_2 V(-\infty)I(-\infty)) - \gamma I(-\infty) - \mu_3 I(-\infty) = 0. \end{array}\right. \end{equation} (3.1)

    In the view of \rm{d}elta^*<I^*, it follows that

    \lim\limits_{\xi\rightarrow-\infty}S(\xi) = S_0, \ \ \lim\limits_{\xi\rightarrow-\infty}V(\xi) = V_0, \ \ \lim\limits_{\xi\rightarrow-\infty}I(\xi) = 0.

    This completes the proof.

    Remark 3.1. For the case c = c^*, there is a priori condition assuming S(-\infty) and V(-\infty) exist. This condition is only necessary for the difficulty in mathematics. In [34], the authors have given some results for the case c = c^* in a nonlocal diffusive SIR model without constant recruitment, but some estimates is much more difficult for our model with constant recruitment and time delay as in [34,Section 3]. Thus, how to extend the methods in [34] to our model, it will be an interesting problem for further investigation.

    In this section, we show the nonexistence of traveling waves when \Re_0>1 with 0<c<c^*.

    Theorem 4.1. If \Re_0>1 and 0<c<c^*, then there exists no nontrivial positive solutions of (2.4) with (2.5) and (2.6).

    Proof. Since \Re_0>1 gives us \beta_1S_0+\beta_2V_0>\mu_3+\gamma. Assume there exists nontrivial positive solution (S, V, I) of (2.4) with (2.5) and (2.6). Then there exists a positive constant K>0 large enough such that, for any \xi<-K, we have

    \begin{equation}\label{Equ1} c I'(\xi) \geq d_3(J*I(\xi) - I(\xi)) +\frac{\beta_1S_0 + \beta_2V_0 - (\gamma + \mu_3)}{2}I(\xi-c\tau) +(\gamma+\mu_3)(I(\xi-c\tau) - I(\xi)) \end{equation} (4.1)

    holds. Let K(\xi) = \int_{-\infty}^\xi I(\eta)\rm{d} \eta. By Fubini theorem, thus

    \begin{align}\label{Equ2} d_3\int_{-\infty}^{\xi}J * I(s)\rm{d} s = &d_3\int_{-\infty}^{\xi} \int_{{\rm IR}} J(y) I(s-y)\rm{d} y\rm{d} s\\ \nonumber = &d_3\int_{{\rm IR}} \int_{-\infty}^{\xi} J(y) I(s-y)\rm{d} s\rm{d} y\\ \nonumber = &d_3\int_{{\rm IR}} J(y) \int_{-\infty}^{\xi} I(s-y)\rm{d} s\rm{d} y\\ \nonumber = & d_3 J * K(\xi).\nonumber \end{align} (4.2)

    Integrating the both sides of (4.1) from -\infty to \xi with \xi\leq-K, we have

    \begin{align}\label{Equ3} \nonumber cI(\xi) \geq &d_3(J*K(\xi) - K(\xi)) + (\gamma + \mu_3)[K(\xi-c\tau) - K(\xi)]\\ & + \frac{\beta_1S_0 + \beta_2V_0 - (\gamma + \mu_3)}{2} K(\xi-c\tau). \end{align} (4.3)

    Furthermore, the following two equations hold.

    \begin{align}\label{Equ4} \nonumber\int_{-\infty}^\xi[K(\eta-c\tau) - K(\eta)]\rm{d} \eta = &\int_{-\infty}^\xi (-c\tau) \int_0^1\frac{\partial K(\eta-c\tau s)}{\partial s}\rm{d} s\rm{d} \eta\\ = &- c\tau \int_0^1 K(\xi-c\tau s)\rm{d} s \end{align} (4.4)

    and

    \begin{align}\label{Equ5} d_3\nonumber\int_{-\infty}^\xi[J*K(\eta) - K(\eta)]\rm{d} \eta = &d_3\int_{-\infty}^\xi \int_{-\infty}^{+\infty}(-x)J(x)\int_0^1\frac{\partial K(\eta-x s)}{\partial s}\rm{d} s\rm{d} x\rm{d} \eta\\ = &d_3\int_{-\infty}^{+\infty} (-x) J(x)\int_0^1 K(\xi-x s)\rm{d} s \rm{d} x. \end{align} (4.5)

    Integrating both sides of inequality (4.3) from -\infty to \xi, and combining Equations (4.4) and (4.5) yield

    \begin{align}\label{Equ6} \nonumber&\frac{\beta_1S_0 + \beta_2V_0 - (\gamma + \mu_3)}{2} \int_{-\infty}^\xi K(\eta-c\tau)\rm{d} \eta\\ \nonumber\leq&c K(\xi) + (\gamma+\mu_3)c\tau \int_0^1 K(\xi-c\tau s)\rm{d} s\\ \nonumber& + d_3\int_{-\infty}^{+\infty} x J(x)\int_0^1 K(\xi-x s)\rm{d} s \rm{d} x\\ \leq&\left(c+d_3\int_{{\rm IR}}xJ(x)\rm{d} x+(\gamma+\mu_3)c\tau\right)K(\xi), \end{align} (4.6)

    Here we use xK(\xi-sx) as a non-increasing function with s\in(0, 1). By (J1) of Assumption 1.1, we have \int_{{\rm IR}} x J(x)\rm{d} x = 0. Then for \xi<-K, we have

    \begin{align}\label{Equ7} \nonumber&\frac{\beta_1S_0 + \beta_2V_0 - (\gamma + \mu_3)}{2} \int_0^{+\infty} K(\xi - \eta - c\tau)\rm{d} \eta\\ \leq&(c+(\gamma+\mu_3)c\tau)K(\xi), \end{align} (4.7)

    For the non-decreasing function K(\xi), there exists some \tilde{\eta} with \tilde{\eta} + c\tau > 0 such that

    \begin{align}\label{Equ8} \nonumber&\frac{\beta_1S_0 + \beta_2V_0 - (\gamma + \mu_3)}{2} (\tilde{\eta} + c\tau) K(\xi - \tilde{\eta} - c\tau)\\ \leq&(c+(\gamma+\mu_3)c\tau)K(\xi), \end{align} (4.8)

    Thus there exists a sufficiently large constant \theta>-c\tau and some constant \varepsilon\in (0, 1), such that

    K(\xi-\theta -c\tau)\leq\varepsilon K(\xi), \ \ \xi\leq-M.

    Let

    p(\xi) = K(\xi)e^{-\nu\xi},

    where

    0 \lt \nu\triangleq\frac{1}{\theta+c\tau}\ln \frac{1}{\varepsilon} \lt \lambda_c,

    By some simple calculation, we have

    p(\xi-\theta-c\tau)\leq p(\xi).

    Using L'Hospital's rule yields

    \lim\limits_{\xi\rightarrow+\infty}p(\xi) = \lim\limits_{\xi\rightarrow+\infty}\frac{K(\xi)}{e^{\nu\xi}} = \lim\limits_{\xi\rightarrow+\infty}\frac{I(\xi)}{\nu e^{\nu\xi}} = 0,

    Note that p(\xi)\geq 0, thus there exists a constant p_0 such that

    \begin{align}\label{Equ9} p(\xi) = K(\xi)e^{-\nu\xi}\leq p_0, \ \ \xi\in {\rm IR}. \end{align} (4.9)

    On the other hand, since S(\xi)\leq S_0 and V(\xi)\leq V_0 for \xi\in{\rm IR}, recall the third equation of (2.4), we have

    \begin{align}\label{Equ91} \nonumber cI'(\xi) = &d_3(J*I(\xi)-I(\xi)) + \beta_1 S(\xi)I(\xi-c\tau) + \beta_2 V(\xi)I(\xi-c\tau) - \gamma I(\xi) - \mu_3 I(\xi)\\ \leq&d_3(J*I(\xi)-I(\xi)) + \beta_1 S_0I(\xi-c\tau) + \beta_2 V_0I(\xi-c\tau) - \gamma I(\xi) - \mu_3 I(\xi). \end{align} (4.10)

    Integrating the both sides of (4.10) from -\infty to \xi yields

    \begin{align}\label{Equ10} cI(\xi)\leq d_3J*K(\xi)-(\gamma+\mu_3+d_3)K(\xi) + (\beta_1S_0+\beta_2V_0)K(\xi-c\tau). \end{align} (4.11)

    From (4.9), using J is compactly supported, for \xi\in{\rm IR}, there exists a positive constant M_1 such that

    \begin{align}\label{Equ11} \nonumber (d_3J*K(\xi))e^{-\nu \xi} = &d_3\int_{{\rm IR}}J(y)e^{-\nu \xi}K(\xi-y)\rm{d} y\\ = & d_3\int_{{\rm IR}}J(y)e^{-\nu y}K(\xi-y) e^{-\nu (\xi-y)}\rm{d} y\\ \nonumber\leq& d_3p_0\int_{{\rm IR}} J(y)e^{-\nu y} \rm{d} y\\ \nonumber\leq&M_1. \end{align} (4.12)

    Thus there exists a constant M_2>0 such that

    \begin{align} I(\xi) e^{-\nu \xi}\leq M_2, \ \ \xi\in{\rm IR}, \end{align} (4.13)

    since (4.9), (4.11) and (4.12) hold. Then

    \begin{align} \sup\limits_{\xi\in{\rm IR}}\{I(\xi) e^{-\nu \xi}\} \lt +\infty. \end{align} (4.14)

    By the same procedure in (4.12), there exists a positive constant M_2 such that

    \begin{align}\label{Equ12} (d_3J*I(\xi))e^{-\nu \xi}\leq&M_2. \end{align} (4.15)

    Hence

    \begin{align} \sup\limits_{\xi\in{\rm IR}}\{I'(\xi) e^{-\nu \xi}\} \lt +\infty. \end{align} (4.16)

    For \lambda\in{\rm IC} with 0<\textrm{Re}\lambda<\nu, define the following two-side Laplace transform of I(\xi),

    \begin{align} \nonumber \mathcal{L}_I(\lambda):& = \int_{{\rm IR}}I(\xi)e^{-\lambda \xi}\rm{d} \xi. \end{align}

    From (2.4), we have

    \begin{align} \nonumber &d_3(J*I(\xi)-I(\xi)) - cI'(\xi) + (\beta_1 S_0 + \beta_2V_0) I(\xi-c\tau) - (\gamma+\mu_3) I(\xi)\\ = &\beta_1 (S_0 - S(\xi))I(\xi-c\tau) + \beta_2 (V_0 - V(\xi))I(\xi-c\tau). \end{align} (4.17)

    Take the two-side Laplace transform to the above equation, thus

    \begin{align}\label{Equ13} \Delta(\lambda, c)\mathcal{L}_I(\lambda) = \int_{\rm IR}e^{-\lambda \xi}[\beta_1 (S_0 - S(\xi))I(\xi-c\tau) + \beta_2 (V_0 - V(\xi))I(\xi-c\tau)]\rm{d} \xi \end{align} (4.18)

    for \lambda \in {\rm IC} with 0<\textrm{Re}\lambda<\nu. Let L(\xi) = S_0 -S(\xi), we have 0\leq L(\xi)\leq S_0 and \lim_{\xi\rightarrow -\infty} L(\xi) = 0. Then from the first equation of (2.4), we have

    c L'(\xi) = d_1 (J*L(\xi)-L(\xi)) + \beta_1 S_(\xi)I(\xi-c\tau) + (\alpha+\mu_1)S(\xi).

    Let \eta\in C^{\infty}({\rm IR}, [0, 1]) be a nonnegative nondecreasing function, \eta(x)\equiv0 in (-\infty, -2] and \eta(x)\equiv1 in [-1, +\infty). For N\in{\rm IN}, set \eta_N = \eta\left(\frac{x}{N}\right). Then, taking 0\leq \nu_0\leq\nu, we have

    c \int_{{\rm IR}}L'(\xi)e^{-\nu_0 \xi}\eta_N\rm{d} \xi = d_1\int_{{\rm IR}}(J*L(\xi)-L(\xi))e^{-\nu_0 \xi}\eta_N\rm{d} \xi + \int_{{\rm IR}}S(\xi)[\beta_1I(\xi-c\tau) + \alpha + \mu_1] e^{-\nu_0 \xi}\eta_N\rm{d} \xi.

    By the argument in [22,Theorem 3.1], there exists a constant \Xi>0 dependent on \nu_0 such that

    \int_{{\rm IR}}L(\xi)e^{-\nu_0 \xi}\rm{d} \xi \leq \Xi.

    Thus,

    \int_{{\rm IR}}\beta_1(S_0 - S(\xi))I(\xi-c\tau)e^{-(\nu+\nu_0)\xi}\rm{d} \xi\leq \beta_1\sup\limits_{\xi\in{\rm IR}}\{I(\xi) e^{-\nu \xi}\}\int_{{\rm IR}}L(\xi)e^{-\nu_0 \xi}\rm{d} \xi \lt \infty.

    Similarly,

    \int_{{\rm IR}}\beta_2(V_0 - V(\xi))I(\xi-c\tau)e^{-(\nu+\nu_0)\xi}\rm{d} \xi \lt \infty.

    From the property of Laplace transform [41], \mathcal{L}_I(\lambda) is well defined with \textrm{Re}\lambda>0. Note that Equation (4.18) can be rewritten as

    \begin{align}\label{Equ16} \int_{\rm IR}e^{-\lambda \xi}\left[\Delta(\lambda, c)I(\xi)+\beta_1 (S_0 - S(\xi))I(\xi-c\tau) + \beta_2 (V_0 - V(\xi))I(\xi-c\tau)\right]\rm{d} \xi = 0. \end{align} (4.19)

    Recall (J2) of Assumption 1.1, then \Delta(\lambda, c)\rightarrow+\infty as \xi\rightarrow+\infty for c\in(0, c^*) which is a contradiction of (4.19). This completes the proof.

    As traveling wave solutions describe the transition from disease-free equilibrium to endemic equilibrium when the wave speed is larger than the minimal wave speed. Now, we focus on how the parameters in system (2.1) can affect the wave speed. Suppose (\hat{\lambda}, \hat{c}) be a zero root of \Delta(\lambda, c), recall that V_0 = \frac{\Lambda\alpha}{\mu_2(\mu_1+\alpha)} and \mu_2 = \mu_1 +\gamma_1, we have

    \Delta(\hat{\lambda}, \hat{c}) = d_3\int_{{\rm IR}}J(x)e^{-\hat{\lambda} x}\rm{d} x-(d_3+\gamma+\mu_3)-\hat{c}\hat{\lambda}+\beta_1S_0e^{-\hat{c}\tau\hat{\lambda}}+\frac{\beta_2\Lambda\alpha}{(\mu_1 +\gamma_1)(\mu_1+\alpha)}e^{-\hat{c}\tau\hat{\lambda}} = 0.

    By some calculations, we obtain

    \frac{\rm{d} \hat{c}}{\rm{d} d_3} = \frac{\int_{{\rm IR}}J(x)[e^{-\hat\lambda x}-1]\rm{d} x}{\hat\lambda(1+ [\beta_1 S_0 + \beta_2 V_0] \tau e^{-\hat{c}\tau\hat{\lambda}})} \gt 0, \ \ \frac{\rm{d} \hat{c}}{\rm{d} \tau} = -\frac{\beta_1S_0 + \beta_2V_0}{e^{\hat{c}\tau\hat{\lambda}} + \beta_1S_0\tau + \beta_2V_0\tau} \lt 0,
    \frac{\rm{d} \hat{c}}{\rm{d} \beta_1} = \frac{S_0e^{-\hat{c}\tau\hat{\lambda}}}{\hat\lambda(1+ [\beta_1 S_0 + \beta_2 V_0] \tau e^{-\hat{c}\tau\hat{\lambda}})} \gt 0, \ \ \frac{\rm{d} \hat{c}}{\rm{d} \beta_2} = \frac{V_0e^{-\hat{c}\tau\hat{\lambda}}}{\hat\lambda(1+ [\beta_1 S_0 + \beta_2 V_0] \tau e^{-\hat{c}\tau\hat{\lambda}})} \gt 0,

    and

    \frac{\rm{d} \hat{c}}{\rm{d} \gamma_1} = -\frac{\beta_2V_0 e^{-\hat{c}\tau\hat{\lambda}}}{(\mu_1+\gamma_1)\hat\lambda(1+ [\beta_1 S_0 + \beta_2 V_0] \tau e^{-\hat{c}\tau\hat{\lambda}})} \lt 0,

    that is, \hat{c} is a decreasing function on \gamma_1 and \tau, while \hat{c} is an increasing function on d_3, \beta_1 and \beta_2. From the biological point of view, this indicates the following four scenarios:

    Ⅰ. The more successful the vaccination, the slower the disease spreads;

    Ⅱ. The longer the latent period, the slower the disease spreads;

    Ⅲ. The faster infected individuals move, the faster the disease spreads;

    Ⅳ. The more effective the infections are, the faster the disease spreads.

    Now, we are in a position to make the following summary:

    Mathematically, we investigated a nonlocal dispersal epidemic model with vaccination and delay; The existence of traveling wave solutions is studied by applying Schauder fixed point theorem with upper-lower solutions, that is there exists traveling wave solutions when \Re_0>1 with c>c^*. Furthermore, the boundary asymptotic behaviour of traveling wave solutions at +\infty was established by the methods of constructing suitable Lyapunov like function. We also showed that there exists traveling wave solutions when \Re_0>1 with c = c^*. Finally, we proved the nonexistence of traveling wave solutions under the assumptions \Re_0>1 and 0<c<c^*.

    Biologically, our results imply that the nonlocal dispersal and infection ability of infected individuals can accelerate the spreading of infectious disease, while the latent period and successful rate of vaccination can slow down the disease spreads.

    The authors are very grateful to the editors and three reviewers for their valuable comments and suggestions that have helped us improving the presentation of this paper. We would also very grateful to Prof.Shigui Ruan, Dr. Sanhong Liu and Dr.Wen-Bing Xu for their valuable comments and helpful advice. This work is supported by Natural Science Foundation of China (No.11871179; No.11771374), and the first author was also partially supported by China Scholarship Council (No.201706120216). R. Zhang acknowledges the kind hospitality received from the Department of Mathematics at the University of Miami, where part of the work was completed.

    All authors declare no conflicts of interest in this paper.

    Proof. If \xi>\mathfrak{X}_1, then \underline{S}(\xi) = 0, equation (2.11) holds. If \xi<\mathfrak{X}_1, then \underline{S}(\xi) = S_0-M_1 e^{\varepsilon_1 \xi}, we have

    \begin{align*} &c{\underline{S}}'(\xi)- d_1(J*\underline{S}(\xi)-\underline{S}(\xi)) - \Lambda + \beta_1\underline{S}(\xi)\overline{I}(\xi-c\tau) +(\mu_1+\alpha) \underline{S}(\xi)\\ = &-c\varepsilon_1M_1e^{\varepsilon_1\xi}+d_1M_1e^{\varepsilon_1\xi}\int_{{\rm IR}}J(x)e^{-\varepsilon_1x}\rm{d} x-d_1M_1e^{\varepsilon_1\xi}-\Lambda\\ &+\beta_1(S_0-M_1e^{\varepsilon_1\xi})e^{\lambda_c(\xi-c\tau)}+(\mu_1+\alpha)(S_0-M_1 e^{\varepsilon_1 \xi})\\ \leq&e^{\varepsilon_1\xi}\left[-c\varepsilon_1M_1e^{\varepsilon_1\xi}+d_1M_1e^{\varepsilon_1\xi}\int_{{\rm IR}}J(x)e^{-\varepsilon_1x}\rm{d} x-d_1M_1e^{\varepsilon_1\xi}+\beta_1S_0\left(\frac{S_0}{M_1}\right)^{\frac{\lambda-\varepsilon_1}{\varepsilon_1}}\right]. \end{align*}

    Here we use

    e^{(\lambda_c-\varepsilon_1)\xi} \lt \left(\frac{S_0}{M_1}\right)^{\frac{\lambda_c-\varepsilon_1}{\varepsilon_1}}\ \ \ \textrm{for}\ \ \ \xi \lt \mathfrak{X}_1.

    Keeping \varepsilon_1M_1 = 1, letting M_1\rightarrow+\infty for some M_1>S_0 large enough and \varepsilon_1 small enough, we have

    \begin{equation*} c{\underline{S}}'(\xi)- d_1(J*\underline{S}(\xi)-\underline{S}(\xi)) - \Lambda + \beta_1\underline{S}(\xi)\overline{I}(\xi-c\tau) +(\mu_1+\alpha) \underline{S}(\xi)\leq0. \end{equation*}

    This completes the proof.

    Proof. If \xi > \frac{1}{\varepsilon_3}\ln \frac{1}{M_3}, the Equation (2.13) holds since \underline{I}(\xi) = 0. If \xi < \frac{1}{\varepsilon_3}\ln \frac{1}{M_3}, then \underline{I}(\xi) = e^{\lambda_c\xi}(1-M_3e^{\varepsilon_3 \xi}), we have the following four cases.

    Case Ⅰ: \xi>\max\{\mathfrak{X}_1, \mathfrak{X}_2\}.

    In this case, \underline{S}(\xi) = \underline{V}(\xi) = 0. Thus, Equation (2.13) is equivalent to

    c{\underline{I}}'(\xi) \leq d_3(J*\underline{I}(\xi)-\underline{I}(\xi)) - \gamma \underline{I}(\xi) - \mu_3 \underline{I}(\xi),

    that is

    \begin{align*} &c\lambda_c-d_3\int_{{\rm IR}}J(y)e^{-\lambda_cy}\rm{d} y+d_3+\gamma+\mu_3\\ \leq& M_3e^{\varepsilon_3\xi}\left[c(\lambda+\varepsilon_3)-d_3\int_{{\rm IR}}J(y)e^{-(\lambda_c+\varepsilon_3)y}\rm{d} y+d_3+\gamma+\mu_3\right]. \end{align*}

    From \Delta(\lambda_c, c) = 0 and \Delta(\lambda_c+\varepsilon_3, c)<0, we have

    \beta_1S_0e^{-c\tau\lambda_c}+\beta_2V_0e^{-c\tau\lambda_c}\\ \leq M_3e^{\varepsilon_3\xi}\left[-\Delta(\lambda_c+\varepsilon_3, c)+\beta_1S_0e^{-c\tau(\lambda_c+\varepsilon_3)}+\beta_2V_0e^{-c\tau(\lambda_c+\varepsilon_3)}\right],

    Because \tau>0, \lambda_c>0, it suffices to prove

    \begin{align*} \beta_1S_0+\beta_2V_0\leq M_3e^{\varepsilon_3\xi}\left[-\Delta(\lambda_c+\varepsilon_3, c)+\beta_1S_0e^{-c\tau(\lambda_c+\varepsilon_3)}+\beta_2V_0e^{-c\tau(\lambda_c+\varepsilon_3)}\right]. \end{align*}

    Since \xi>\max\{\mathfrak{X}_1, \mathfrak{X}_2\}, M_3>\max\{S_0, V_0\} and 0<\varepsilon_3<\min\{\varepsilon_1/2, \varepsilon_2/2\}, note that e^{\varepsilon_3\xi}\geq\left(\frac{S_0}{M_1}\right)^{\frac{1}{2}}\left(\frac{V_0}{M_2}\right)^{\frac{1}{2}}, then we only need to ensure

    \beta_1S_0+\beta_2V_0\leq-\Delta(\lambda_c+\varepsilon_3, c)M_3\left(\frac{S_0}{M_1}\right)^{\frac{1}{2}}\left(\frac{V_0}{M_2}\right)^{\frac{1}{2}}.

    Thus, Equation (2.13) holds for sufficiently large M_3>0 with

    M_3\geq \frac{\beta_1S_0+\beta_2V_0}{-\Delta(\lambda_c+\varepsilon_3, c)}\sqrt{\frac{S_0}{M_1}}\sqrt{\frac{V_0}{M_2}}\triangleq\Pi_1.

    Case Ⅱ: \mathfrak{X}_1>\xi>\mathfrak{X}_2.

    In this case, \underline{S}(\xi) = S_0-M_1e^{\varepsilon_1\xi} and \underline{V}(\xi) = 0. Hence, Equation (2.13) is equivalent to

    c{\underline{I}}'(\xi) \leq d_3(J*\underline{I}(\xi)-\underline{I}(\xi)) - \gamma \underline{I}(\xi) - \mu_3 \underline{I}(\xi) + \beta_1\underline{S}(\xi)\underline{I}(\xi-c\tau),

    that is

    \begin{align*} &c\lambda_c-d_3\int_{{\rm IR}}J(y)e^{-\lambda_cy}\rm{d} y+d_3+\gamma+\mu_3-\beta_1S_0e^{-\lambda_cc\tau}+\beta_1M_1e^{\varepsilon_1\xi-\lambda_cc\tau}\\ \leq& M_3e^{\varepsilon_3\xi}\left[c(\lambda+\varepsilon_3)-d_3\int_{{\rm IR}}J(y)e^{-(\lambda_c+\varepsilon_3)y}\rm{d} y+d_3+\gamma+\mu_3-\beta_1S_0e^{-(\varepsilon_3+\lambda_c)c\tau}+\beta_1M_1e^{\varepsilon_1\xi-(\varepsilon_3+\lambda_c)c\tau}\right], \end{align*}

    we need to prove

    \begin{equation*} \beta V_0\leq -\Delta(\lambda_c+\varepsilon_3, c)M_3e^{\varepsilon_3\xi}. \end{equation*}

    Choose M_3 large enough with

    M_3\geq \frac{\beta_2 \sqrt{V_0M_2}}{-\Delta(\lambda_c+\varepsilon_3, c)}\triangleq\Pi_2.

    Case Ⅲ: \mathfrak{X}_2>\xi>\mathfrak{X}_1.

    In this case, \underline{V}(\xi) = V_0-M_2e^{\varepsilon_2\xi} and \underline{S}(\xi) = 0. Similar to Case Ⅱ, Equation (2.13) holds if we choose

    M_3\geq \frac{\beta_1 \sqrt{S_0M_1}}{-\Delta(\lambda_c+\varepsilon_3, c)}\triangleq\Pi_3

    large enough.

    Case Ⅵ: \xi<\min\{\mathfrak{X}_1, \mathfrak{X}_2\}.

    In this case, \underline{S}(\xi) = S_0-M_1e^{\varepsilon_1\xi} and \underline{V}(\xi) = V_0-M_2e^{\varepsilon_2\xi}, Equation (2.13) is equivalent to

    c{\underline{I}}'(\xi) \leq d_3(J*\underline{I}(\xi)-\underline{I}(\xi)) - \gamma \underline{I}(\xi) - \mu_3 \underline{I}(\xi) + \beta_1\underline{S}(\xi)\underline{I}(\xi-c\tau) + \beta_2\underline{V}(\xi)\underline{I}(\xi-c\tau),

    that is

    \begin{align*} c\lambda_c-d_3&\int_{{\rm IR}}J(y)e^{-\lambda_cy}\rm{d} y+d_3+\gamma+\mu_3-\beta_1S_0e^{-\lambda_cc\tau}-\beta_2V_0e^{-\lambda_cc\tau}+\beta_1M_1e^{\varepsilon_1\xi-\lambda_cc\tau}+\beta_2M_2e^{\varepsilon_2\xi-\lambda_cc\tau}\\ \leq M_3e^{\varepsilon_3\xi}&\left(c(\lambda+\varepsilon_3)-d_3\int_{{\rm IR}}J(y)e^{-(\lambda_c+\varepsilon_3)y}\rm{d} y+d_3+\gamma+\mu_3-\beta_1S_0e^{-(\varepsilon_3+\lambda_c)c\tau}\right.\\ &+\left.\beta_1M_1e^{\varepsilon_1\xi-(\varepsilon_3+\lambda_c)c\tau}-\beta_2V_0e^{-(\varepsilon_3+\lambda_c)c\tau}+\beta_2M_2e^{\varepsilon_1\xi-(\varepsilon_3+\lambda_c)c\tau}\right) \end{align*}

    we only need to ensure

    M_3\geq\frac{\beta_1M_1e^{(\varepsilon_1-\varepsilon_3)\xi-\lambda_cc\tau}+\beta_2M_2e^{(\varepsilon_2-\varepsilon_3)\xi-\lambda_cc\tau}}{-\Delta(\lambda_c+\varepsilon_3, c)+\beta_1M_1e^{\varepsilon_1\xi-(\varepsilon_3+\lambda_c)c\tau}+\beta_2M_2e^{\varepsilon_2\xi-(\varepsilon_3+\lambda_c)c\tau}}

    Since \xi<\min\{\mathfrak{X}_1, \mathfrak{X}_2\}, 0<S_0<M_3, 0<V_0<M_3, \varepsilon_3<\min\{\varepsilon_1/2, \varepsilon_2/2\} and \tau>0, we have

    \frac{\beta_1M_1e^{(\varepsilon_1-\varepsilon_3)\xi-\lambda_cc\tau}+\beta_2M_2e^{(\varepsilon_2-\varepsilon_3)\xi-\lambda_cc\tau}}{-\Delta(\lambda_c+\varepsilon_3, c)+\beta_1M_1e^{\varepsilon_1\xi-(\varepsilon_3+\lambda_c)c\tau}+\beta_2M_2e^{\varepsilon_2\xi-(\varepsilon_3+\lambda_c)c\tau}} \lt \frac{\beta_1\sqrt{S_0M_1}+\beta_2\sqrt{V_0M_2}}{-\Delta(\lambda_c+\varepsilon_3, c)}.

    Then Equation (2.13) holds if we choose M_3 large enough with

    M_3\geq\frac{\beta_1\sqrt{S_0M_1}+\beta_2\sqrt{V_0M_2}}{-\Delta(\lambda_c+\varepsilon_3, c)}\triangleq\Pi_4.

    Through the above discussion, Equation (2.13) holds if we choose M_3\geq\max\{\Pi_1, \Pi_2, \Pi_3, \Pi_4\} large enough for all \xi\in{\rm IR}. Here we completes the proof.


