Global analysis on a class of multi-group SEIR model with latency and relapse

  • Received: 01 March 2015 Accepted: 29 June 2018 Published: 01 October 2015
  • MSC : Primary: 92B05, 34D23; Secondary: 34D20.

  • In this paper, we investigate the global dynamics of a multi-group SEIR epidemic model, allowing heterogeneity of the host population, delay in latency and delay due to relapse distribution for the human population. Our results indicate that when certain restrictions on the nonlinear growth rate and incidence are fulfilled, the basic reproduction number R0 plays the key role of a global threshold parameter, in the sense that the long-time behaviors of the model depend only on R0. The proofs of the main results utilize the persistence theory in dynamical systems and Lyapunov functionals guided by a graph-theoretical approach.

    Citation: Jinliang Wang, Hongying Shu. Global analysis on a class of multi-group SEIR model with latency and relapse[J]. Mathematical Biosciences and Engineering, 2016, 13(1): 209-225. doi: 10.3934/mbe.2016.13.209



    1. Introduction

    Cao et al. [4] reviewed techniques for extracting local features for automatic object recognition in images. Multivariate Gaussians can represent the important features and their mutual correlations needed for accurate document retrieval from databases. The natural choice for discrimination between pairs of such distributions is the Fisher information metric on the Riemannian manifold of smooth probability density functions coordinatized by the parameters of the distribution [1,2]. However, it is not known analytically in some important cases of practical interest.

    We have used multivariate Gaussians for face recognition using the neighbourhoods of colour pixel features at landmark points in face images [12], where we found that the spatial covariances among pixel colours were important. Craciunescu, Murari et al. [5,8] used geodesic distance on Gaussian manifolds to interpret time series in very large databases from Tokamak measurements in fusion research. Verdoolaege, Shabbir et al. [10,13] used multivariate generalized Gaussians for colour texture discrimination in the wavelet domain. In these studies the discrimination used approximations to the information distance between pairs of multivariate Gaussian probability density functions. Nielsen et al. [9] suggested an entropic quantization method for approximating distances in the case of mixtures of multivariate Gaussians.

    The k-variate Gaussian distributions have probability density functions:

    f(\mu,\Sigma) = \frac{e^{-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)}}{\sqrt{(2\pi)^k\,|\Sigma|}}    (1)

    where x ∈ ℝ^k is the random variable, μ ∈ ℝ^k is the k-dimensional mean vector, and Σ is the k×k positive definite symmetric covariance matrix, with (k²+k)/2 independent components, for features with a k-dimensional representation.
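As a concrete check of Equation (1), the density can be evaluated numerically; the following is a minimal sketch (the function name `gaussian_pdf` is our own, not from the text):

```python
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    """Evaluate the k-variate Gaussian density of Equation (1)."""
    k = len(mu)
    diff = np.asarray(x) - np.asarray(mu)
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu), via a solve for stability
    quad = diff @ np.linalg.solve(Sigma, diff)
    norm = np.sqrt((2 * np.pi) ** k * np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm
```

For the standard bivariate normal (μ = 0, Σ = I) evaluated at the origin, this gives 1/(2π).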

    The Riemannian manifold M_k of the family of k-variate Gaussians for a given k is well understood through information geometric study using the Fisher metric [1,3,6,11]. For an introduction to information geometry and a range of applications see [1,2,7]. What we have analytically are natural metrics, on the space of means and on the space of covariances, giving the information distance between two multivariate Gaussians f_A(μ_A, Σ_A), f_B(μ_B, Σ_B) with the same number k of variables, in two particular cases:

    1. Σ_A = Σ_B = Σ: f_A(μ_A, Σ), f_B(μ_B, Σ)

    Here the positive definite symmetric matrix Σ provides a quadratic form giving a distance between the two mean vectors:

    D_\mu(f_A,f_B) = \sqrt{(\mu_A-\mu_B)^T\,\Sigma^{-1}\,(\mu_A-\mu_B)}.    (2)

    So we also have a norm on mean vectors for each f_A(μ_A, Σ_A):

    \|\mu_A\| = \sqrt{(\mu_A)^T\,(\Sigma_A)^{-1}\,\mu_A}    (3)

    which is evidently sensitive to the covariance.
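Reading (2) as the square root of the quadratic form (a Mahalanobis-type distance), a short numerical sketch is (the function name `d_mu` is ours):

```python
import numpy as np

def d_mu(mu_a, mu_b, Sigma):
    """Distance (2) between mean vectors under a shared covariance Sigma."""
    d = np.asarray(mu_a) - np.asarray(mu_b)
    # Solve Sigma y = d rather than forming Sigma^{-1} explicitly
    return np.sqrt(d @ np.linalg.solve(Sigma, d))
```

With Σ = I this reduces to the Euclidean distance between the means, while rescaling Σ rescales the distance, showing the sensitivity to covariance noted above.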

    2. μ_A = μ_B = μ: f_A(μ, Σ_A), f_B(μ, Σ_B)

    Here we use a positive definite symmetric matrix constructed from ΣA and ΣB to give distance between two covariance matrices; this information metric was given by Atkinson and Mitchell [3] from a result attributed to S.T. Jensen, using

    S_{AB} = \Sigma_A^{-1/2}\,\Sigma_B\,\Sigma_A^{-1/2}, \quad \text{with } \{\lambda_{AB,j}\} = \mathrm{Eig}(S_{AB}), \quad \text{then}
    D_\Sigma(f_A,f_B) = \sqrt{\frac{1}{2}\sum_{j=1}^{k}\log^2(\lambda_{AB,j})}.    (4)

    We note that (4) is in agreement also with a special case of the geodesic distance given by Shabbir et al [10] for generalized multivariate Gaussians with the same mean.
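Numerically, the eigenvalues of S_AB coincide with those of Σ_A^{-1} Σ_B (the two matrices are similar), which gives a convenient way to evaluate (4); a sketch using that observation (the function name `d_sigma` is ours):

```python
import numpy as np

def d_sigma(Sigma_a, Sigma_b):
    """Distance (4) between covariance matrices, for equal means."""
    # Eigenvalues of Sigma_a^{-1/2} Sigma_b Sigma_a^{-1/2} equal those of
    # Sigma_a^{-1} Sigma_b, so a single solve suffices.
    lam = np.linalg.eigvals(np.linalg.solve(Sigma_a, Sigma_b)).real
    return np.sqrt(0.5 * np.sum(np.log(lam) ** 2))
```

Since the eigenvalues for the pair (B, A) are the reciprocals of those for (A, B), the logarithms only change sign and `d_sigma` is symmetric in its arguments, as a distance should be.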

    In principle, (4) yields all of the geodesic distances since the information metric is invariant under affine transformations of the mean [3, Appendix 1]; see also the article of P.S. Eriksen [6]. The equations for the geodesics were shown by Skovgaard [11] to be

    \ddot{\mu} = \dot{\Sigma}\,\Sigma^{-1}\,\dot{\mu}, \qquad \ddot{\Sigma} = \dot{\Sigma}\,\Sigma^{-1}\,\dot{\Sigma} - \dot{\mu}\,\dot{\mu}^T.    (5)

    Eriksen [6] observed that the family M_k of k-variate Gaussians is isometric to the space GA+(k)/SO(k), where GA+(k) consists of positive affine transformations. Hence, by a translation, it is sufficient to restrict to the geodesic through Σ = I, the identity, in the direction (B, v). Then, through the change of coordinates Δ = Σ^{-1}, δ = Σ^{-1}μ, Equation (5) becomes

    \dot{\Delta} = B\Delta + v\,\delta^T, \qquad \dot{\delta} = B\delta + (1+\delta^T\Delta^{-1}\delta)\,v, \quad \text{with } \Delta(0)=I,\ \delta(0)=0.    (6)

    Then using

    A = \begin{pmatrix} -B & v & 0 \\ v^T & 0 & v^T \\ 0 & v & B \end{pmatrix}    (7)

    Eriksen proved that the geodesic solution curve is given by

    \Lambda: \mathbb{R} \to M_k : t \mapsto e^{At} = \begin{pmatrix} \Delta & \delta & \Phi \\ \delta^T & 1+\delta^T\Delta^{-1}\delta & \gamma^T \\ \Phi^T & \gamma & \Gamma \end{pmatrix}    (8)
    \text{where } \gamma = \Delta^{-1}\delta + \Phi^T\Delta^{-1}\delta \quad \text{and} \quad \delta^T\Delta^{-1}\delta = \gamma^T\Gamma^{-1}\gamma.    (9)
    \text{So } (\Delta(-t), \delta(-t)) = (\Gamma(t), \gamma(t)).    (10)

    Of course, the analytic difficulty is the requirement to find the length of the geodesic between two points in Mk to obtain a distance function, that being the infimum of arc length over all curves joining the points.
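For small k, Eriksen's solution (8) can be evaluated directly with a matrix exponential; the sketch below assumes the block layout of (7), including our reading of the sign of B from the source, and uses SciPy's `expm` (the function name `geodesic_point` is ours):

```python
import numpy as np
from scipy.linalg import expm

def geodesic_point(B, v, t):
    """Evaluate e^{At} of Equation (8) and return the (Delta, delta) blocks.

    This is the geodesic through Sigma = I in direction (B, v), in the
    coordinates Delta = Sigma^{-1}, delta = Sigma^{-1} mu.
    """
    k = len(v)
    v = np.asarray(v, dtype=float).reshape(k, 1)
    A = np.block([
        [-B,               v,                np.zeros((k, k))],
        [v.T,              np.zeros((1, 1)), v.T],
        [np.zeros((k, k)), v,                B],
    ])
    E = expm(A * t)
    return E[:k, :k], E[:k, k]  # Delta(t), delta(t)
```

At t = 0 this returns (I, 0), matching the initial conditions in (6).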


    2. Approximating distances between arbitrary mixtures of multivariate Gaussians

    Here we consider a mixture distribution consisting of a linear combination of k-variate Gaussians {f_k, k = 2, 3, ..., N} with an increasing sequence of k = 2, 3, ..., N variables and with probability density functions:

    f_2(\mu_2,\Sigma_2),\ f_3(\mu_3,\Sigma_3),\ \ldots,\ f_N(\mu_N,\Sigma_N), \quad \text{with } \int_{\mathbb{R}^k} f_k = 1 \text{ for each } k    (11)

    where μ_k ∈ ℝ^k is the k-vector of means and Σ_k is the positive definite symmetric (k×k) covariance matrix, with (k²+k)/2 independent components (σ_ij), i, j = 1, 2, ..., k. The standard basis for the space of covariance matrices is E_ij = 1_ii for i = j, and E_ij = 1_ij + 1_ji for i ≠ j, so

    \Sigma = \sum_{i \le j}^{k} \sigma_{ij} E_{ij}.

    We presume that the parameters and relative weights wk of these component probability density functions (11) have been obtained empirically, giving a mixture density:

    f_A = \sum_{k=2}^{N} w_{Ak} f_{Ak}, \quad \text{with } w_{Ak} \ge 0 \text{ and } \sum_{k=2}^{N} w_{Ak} = 1.    (12)

    Given two such distributions, f_A = f_A(μ_A, Σ_A, w_A) and f_B = f_B(μ_B, Σ_B, w_B), we wish to be able to estimate the information distance D(f_A, f_B) between them.

    There is no general analytic solution for the geodesic distance between two k-variate Gaussians, but for many purposes the absolute information distance is not essential and comparative values may suffice for proximity testing; in that case the sum D = D_μ + D_Σ from (2) and (4) is a natural approximation. Indeed, (4) gives the geodesic distance between f_A and f_B when the means agree, and the information metric is invariant under affine transformations of the mean [3,6].
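The heuristic sum D = D_μ + D_Σ can be sketched directly; note that when the covariances differ, Equation (2) requires a choice of Σ for the mean term, and taking Σ_A below is an arbitrary convention of this sketch, not a prescription from the text:

```python
import numpy as np

def info_distance(mu_a, Sigma_a, mu_b, Sigma_b):
    """Heuristic approximation D = D_mu + D_Sigma from Equations (2) and (4).

    Not the true geodesic distance when both means and covariances differ.
    """
    d = np.asarray(mu_a) - np.asarray(mu_b)
    d_mean = np.sqrt(d @ np.linalg.solve(Sigma_a, d))  # Sigma_a: a convention
    lam = np.linalg.eigvals(np.linalg.solve(Sigma_a, Sigma_b)).real
    d_cov = np.sqrt(0.5 * np.sum(np.log(lam) ** 2))
    return d_mean + d_cov
```

Identical Gaussians give distance 0, and the two exactly-known special cases are recovered when either the means or the covariances coincide.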

    So, a fortiori, we also do not have the distance between two mixtures of multivariate Gaussians f_A(μ_A, Σ_A, w_A) and f_B(μ_B, Σ_B, w_B). For this we must resort to approximations that incorporate the weightings of the component Gaussians. In practice, this may not matter greatly, since the relation between a reasonable approximation and the true geodesic distance is likely to be monotonic, which may be adequate for many applications, and is what we found in our face recognition work [12].


    2.1. Averaging distances over weightings

    Perhaps the most natural method is to combine equations (2) and (4) through the linear combination (12), obtaining an approximation as a corresponding linear combination of distances. Given two mixture distributions f_A(μ_A, Σ_A), f_B(μ_B, Σ_B), we could split the distance estimate D# into D#_μ and D#_Σ as follows, with δμ = μ_A − μ_B:

    D^\#_\mu(f_A,f_B) = \sum_{k=2}^{N} \frac{1}{2}\left( w_{Ak}\sqrt{\delta\mu^T (\Sigma_{Ak})^{-1}\, \delta\mu} + w_{Bk}\sqrt{\delta\mu^T (\Sigma_{Bk})^{-1}\, \delta\mu} \right)    (13)
    D^\#_\Sigma(f_A,f_B) = \sum_{k=2}^{N} \frac{1}{2}\left( w_{Ak}\sqrt{\tfrac{1}{2}\sum_{j=1}^{k}\log^2(\lambda_{ABk,j})} + w_{Bk}\sqrt{\tfrac{1}{2}\sum_{j=1}^{k}\log^2(\lambda_{BAk,j})} \right)    (14)
    \{\lambda_{ABk,j}\} = \mathrm{Eig}(H_{ABk}), \qquad H_{ABk} = (\Sigma_{Ak})^{-1/2}\,\Sigma_{Bk}\,(\Sigma_{Ak})^{-1/2}.
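The component-wise averages (13)-(14) can be sketched as follows; the (weight, mean, covariance) triple format is our own convention, and we read the square roots as in (2) and (4):

```python
import numpy as np

def d_mu_mix(comps_a, comps_b):
    """D#_mu of Equation (13); each argument is a list of
    (weight, mu, Sigma) triples, one per k-variate component."""
    total = 0.0
    for (wa, ma, Sa), (wb, mb, Sb) in zip(comps_a, comps_b):
        d = ma - mb
        total += 0.5 * (wa * np.sqrt(d @ np.linalg.solve(Sa, d))
                        + wb * np.sqrt(d @ np.linalg.solve(Sb, d)))
    return total

def d_sigma_mix(comps_a, comps_b):
    """D#_Sigma of Equation (14); log eigenvalues for (B, A) are the
    negatives of those for (A, B), so one eigen-solve per component."""
    total = 0.0
    for (wa, _, Sa), (wb, _, Sb) in zip(comps_a, comps_b):
        lam = np.linalg.eigvals(np.linalg.solve(Sa, Sb)).real
        term = np.sqrt(0.5 * np.sum(np.log(lam) ** 2))
        total += 0.5 * (wa + wb) * term
    return total
```

Both estimates vanish when the two mixtures coincide component by component.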

    We note that D#_μ(f_A, f_B) in (13) does give a potentially useful distance measure between the two Gaussian mixtures, since it incorporates both means and covariances. Figure 1 shows the effect on D#_μ values between differing averaged weighting sequences for random k-variate Gaussians having k = 2, 3, 4, 5 variables, with increasing weights f_A, uniform weights f_B, and decreasing weights f_C. The g_A, g_B, g_C are for the same mixtures except that Σ_C2 has been replaced by Σ_C2/5, and h_A, h_B, h_C are for the same mixtures except that Σ_C5 has been replaced by Σ_C5/5, to show the effect of a change in one covariance component. For these simulations Mathematica was used to generate random mean vectors with μ_i ∈ [5,10] and random covariance matrix elements σ_ij ∈ [5,10] for i ≠ j, σ_ii ∈ [6,6], choices that ensured positive definite symmetric covariance matrices. This approach was chosen to illustrate the effects of weighting sequences and of isolated covariance changes on the measured distances between mixtures. Table 1 shows the mean values of D#_μ and D#_Σ for the pairs of mixtures (f_A, f_B), (f_B, f_C), (f_A, f_C) for the 20 random Gaussians with weights w_Ak = (0.1, 0.2, 0.3, 0.4), w_Bk = (0.25, 0.25, 0.25, 0.25), w_Ck = (0.4, 0.3, 0.2, 0.1).

    Figure 1. Illustration of the effects of weighting sequences and covariances in mixtures. Upper row: D#_μ(f_A, f_B), D#_μ(f_B, f_C), D#_μ(f_A, f_C); middle row: D#_μ(g_A, g_B), D#_μ(g_B, g_C), D#_μ(g_A, g_C); lowest row: D#_μ(h_A, h_B), D#_μ(h_B, h_C), D#_μ(h_A, h_C), for 20 random k-variate Gaussians with k = 2, 3, 4, 5 and weights w_Ak = (0.1, 0.2, 0.3, 0.4), w_Bk = (0.25, 0.25, 0.25, 0.25), w_Ck = (0.4, 0.3, 0.2, 0.1).
    Table 1. Mean values of D#_μ and D#_Σ for the pairs of mixtures (f_A, f_B), (f_B, f_C), (f_A, f_C) for the 20 random Gaussians with weights w_Ak = (0.1, 0.2, 0.3, 0.4), w_Bk = (0.25, 0.25, 0.25, 0.25), w_Ck = (0.4, 0.3, 0.2, 0.1).
                 Mean D#_μ(A,B)   Mean D#_μ(B,C)   Mean D#_μ(A,C)
    Mixture f        0.6085           0.5635           0.5868
    Mixture g        0.7087           0.7694           0.7522
    Mixture h        0.9002           0.7126           0.7774

                 Mean D#_Σ(A,B)   Mean D#_Σ(B,C)   Mean D#_Σ(A,C)
    Mixture f        0.8607           0.8110           0.8537
    Mixture g        0.8607           0.8110           0.8537
    Mixture h        0.8607           0.8110           0.8537

    As expected from the form of D#_Σ in Equation (14), its values in the table of means are unaffected by the scale changes in covariances through mixtures f, g, h, but are sensitive to the weighting sequences.

    However, in some applications the k-variate components from two mixtures might not come from the same feature space, so there may be no connection between the contributing features they represent. On the other hand, a commonly used feature space is that of pixel colours at different locations, as for example in texture and face recognition, and those feature spaces are the same.


    2.2. Mixtures projected onto the complex plane

    The new implementation described here uses the information geometric norm on the mean vectors and the Frobenius norm on the covariance matrices to project the mixture distributions onto the complex plane. This 2-dimensional representation reveals the influences of the means and covariances in the mixtures, which may itself be valuable. It also allows the direct calculation of a distance between two mixture distributions using moduli, without assuming any connections between the mixtures, though this has the effect of smoothing the component influences of the means and covariances.

    The idea here is simple: for each mixture distribution f_A given by a weighted sum (12) we obtain two numbers, ||μ_A|| and ||Σ_A||, being the weighted sums of norms of means and covariances. The norm on mean vectors is given by (3), and for the covariance matrices we need a matrix norm, which here we choose as the Frobenius norm, given for an n×n matrix M with elements m_αβ by the square root of the sum of squares of its elements:

    \|M\|_2^2 = \sum_{\alpha=1}^{n}\sum_{\beta=1}^{n} (m_{\alpha\beta})^2.

    Note that if M has eigenvalues {λ_α} and is represented on a basis of eigenvectors, then

    \|M\|_2^2 = \sum_{\alpha=1}^{n} (\lambda_\alpha)^2.

    Given a mixture distribution f_A consisting of M different multivariate Gaussians

    G_A = {G_Ai(μ_Ai, Σ_Ai)}, i = 1, ..., M, with weights w_A = {w_Ai}, i = 1, ..., M, we have

    f_A = \sum_{m=1}^{M} w_{Am} G_{Am}
    \|\mu_A\| = \sum_{m=1}^{M} w_{Am} \sqrt{(\mu_{Am})^T (\Sigma_{Am})^{-1}\, \mu_{Am}}    (15)
    \|\Sigma_A\| = \sum_{m=1}^{M} w_{Am} \|\Sigma_{Am}\|_2.    (16)

    Now we can represent f_A by the complex number φ_A = ||μ_A|| + i||Σ_A||, and its difference from another such complex number φ_B for f_B gives us a distance measure in our reduced space of mixtures:

    \Delta(f_A,f_B) = |\phi_B - \phi_A|.    (17)
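The projection (15)-(16) and the resulting distance (17) can be sketched as follows (the function names are ours):

```python
import numpy as np

def project_mixture(mus, Sigmas, weights):
    """phi = ||mu_A|| + i ||Sigma_A|| from Equations (15)-(16): weighted
    information-geometric norms of the means and weighted Frobenius
    norms of the covariances."""
    norm_mu = sum(w * np.sqrt(m @ np.linalg.solve(S, m))
                  for m, S, w in zip(mus, Sigmas, weights))
    norm_sigma = sum(w * np.linalg.norm(S, 'fro')
                     for S, w in zip(Sigmas, weights))
    return complex(norm_mu, norm_sigma)

def mixture_delta(phi_a, phi_b):
    """Distance (17) between two projected mixtures."""
    return abs(phi_b - phi_a)
```

For a single standard bivariate Gaussian with mean (1, 0), the projection is 1 + √2 i, since the Frobenius norm of the identity in two dimensions is √2.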

    Figure 2 shows a plot of the points (||μ||, ||Σ||) ∈ ℂ for the 250 mixtures of random k-variate Gaussians having k = 2, 3, 4, 5 variables, with increasing weights f_A, uniform weights f_B, and decreasing weights f_C. The g_A, g_B, g_C are for the same mixtures except that Σ_C2 has been replaced by Σ_C2/5, and h_A, h_B, h_C are for the same mixtures except that Σ_C5 has been replaced by Σ_C5/5, to show the effect of a change in one covariance component. In each case the mean over the 250 replications is shown as a large point. This was done using Mathematica with random mean vectors μ_i ∈ [5,10] and random covariance matrix elements σ_ij ∈ [5,10] for i ≠ j, σ_ii ∈ [6,6].

    Figure 2. Mixture projection onto ℂ. Mixtures (f_A, f_B, f_C) for 250 random Gaussians with weights w_Ak = (0.1, 0.2, 0.3, 0.4), w_Bk = (0.25, 0.25, 0.25, 0.25), w_Ck = (0.4, 0.3, 0.2, 0.1) are shown plotted in (||μ||, ||Σ||)-space for k-variate Gaussians having k = 2, 3, 4, 5 variables, with increasing weights f_A, uniform weights f_B, and decreasing weights f_C. The g_A, g_B, g_C are for the same mixtures except that Σ_C2 has been replaced by Σ_C2/5, and h_A, h_B, h_C are for the same mixtures except that Σ_C5 has been replaced by Σ_C5/5, to show the effect of a change in one covariance component. The mean over the 250 replications is shown as a larger point in each case.

    3. Discussion

    There are efficient software programs for extracting from large data sets and image sequences certain mixtures of probability distributions, such as multivariate Gaussians, to represent the important features and their mutual correlations needed for accurate document retrieval from databases. The lack of an analytic solution to the geodesic distance equations between points in the Riemannian space of multivariate Gaussian mixtures, Equation (12), with an information metric, means that approximate solutions need to be found for practical applications. We have illustrated a new approximation for the case of 250 mixtures of k-variate Gaussians for k = 2, 3, 4, 5, with weightings w_Ak = (0.1, 0.2, 0.3, 0.4), w_Bk = (0.25, 0.25, 0.25, 0.25), w_Ck = (0.4, 0.3, 0.2, 0.1) of the four component Gaussians, these being increasing, uniform and decreasing. These simulations show the effects of covariance changes and of weighting sequences on each given collection of k-variate Gaussians for k = 2, 3, 4, 5.


    4. Conclusions

    While there are no analytic expressions for the information geometric distance between pairs of mixtures of multivariate Gaussians, we have shown that there are several choices of good, easily computed information geometric approximate distances. The new method yielded evident discrimination between pairs of these mixtures, shown in easily interpretable graphical form in Figure 2, distinguishing the effects of covariance changes and of weighting sequences.


    Acknowledgement

    The methods described in §2.1 were developed with J. Scharcanski and J. Soldera during a visit to UFRGS, Brazil, with a grant from The London Mathematical Society in 2013, and the author is grateful to CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, Brazil) for partially funding this project. An application of the results to face recognition was reported elsewhere in a joint paper [12].


    Conflict of interest

    The author declares that there are no conflicts of interest in this paper.


  • © 2016 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)