Research article

Hopf bifurcation exploration and control technique in a predator-prey system incorporating delay

  • Received: 27 October 2023 Revised: 20 November 2023 Accepted: 22 November 2023 Published: 13 December 2023
  • MSC : 34C23, 34K18, 37G15, 39A11, 92B20

  • Recently, delayed dynamical models have attracted great interest from scholars in the biological and mathematical sciences owing to their potential for describing the interactions of different biological populations. In this article, building on previous studies, we set up two new predator-prey systems incorporating delay. By virtue of fixed point theory, inequality techniques and an appropriate function, we explore the well-posedness (existence and uniqueness, boundedness and non-negativity) of the solutions of the two formulated delayed predator-prey systems. With the aid of the bifurcation theorem and the stability theory of delayed differential equations, we obtain the parameter conditions for the emergence of stability and bifurcation phenomena in the two formulated delayed predator-prey systems. By applying two controllers (a hybrid controller and an extended delayed feedback controller), we can effectively adjust the region of stability and the time of onset of the bifurcation phenomenon for the two delayed predator-prey systems. The effect of delay on stabilizing the system and adjusting the bifurcation is investigated. Computer simulations are provided to support the main results. The conclusions of this article are completely new and can provide some important guidance for controlling and balancing the densities of predator and prey.

    Citation: Wei Ou, Changjin Xu, Qingyi Cui, Yicheng Pang, Zixin Liu, Jianwei Shen, Muhammad Zafarullah Baber, Muhammad Farman, Shabir Ahmad. Hopf bifurcation exploration and control technique in a predator-prey system incorporating delay[J]. AIMS Mathematics, 2024, 9(1): 1622-1651. doi: 10.3934/math.2024080




    Generalized linear models are extensions of the linear regression model that avoid the assumptions of a normal response and linearity imposed by linear regression, which are untenable for binary or count responses. Regression for count data is widely performed with models such as Poisson [1], negative binomial [2] and zero-inflated regressions [3,4,5]. A well-known property of the Poisson distribution is that its mean equals its variance. This is often unrealistic, as the distribution of counts tends to have a variance that differs from its mean. When the data exhibit over-dispersion, a negative binomial distribution is used to model the count variable. Zero-inflated models are used for count data with many zero counts. Moreover, the Conway-Maxwell Poisson distribution is used to deal with both under-dispersion and over-dispersion [2]. The discrete Weibull distribution has likewise been examined for handling under-dispersed and over-dispersed discrete data; this model was first introduced by Nakagawa and Osaki [6]. The motivation for considering the discrete Weibull distribution stems from the vital role that the continuous Weibull distribution plays in survival analysis and failure time studies. Similarly, the continuous Weibull distribution is widely used in probabilistic modeling and in fatigue life prediction models [7,8]. However, several aspects of this distribution remain to be explored.

    The inference for the parameters of the discrete Weibull regression model has been investigated in a few studies, based on linking the parameter that governs zero observations to the explanatory variables through the log-log and logit link functions. Kalktawi [9] and Englehardt and Li [10] showed how a discrete Weibull regression model can be adapted to address over-dispersion and under-dispersion via the log-log link function. Moreover, Klakattawi et al. [11] showed that the model adapts in a simple way to different types of dispersion: over-dispersion, under-dispersion and covariate-specific dispersion. Peluso and Vinciotti [12] conducted a simulation study linking both parameters to the covariates to inspect the level of flexibility of the discrete Weibull regression model.

    Maximum likelihood estimation of the parameters is valid for asymptotically large sample sizes [13]. One of the most common problems in count regression models is that the maximum likelihood estimates become unstable, with larger standard errors that affect statistical inference, when the sample size is insufficiently large. To overcome this problem, various alternatives to maximum likelihood estimation have been proposed, and Bayesian estimation is one of them. However, the Bayes estimators depend on the prior distributions of the parameters in the model. Many researchers working in this area have proposed different prior distributions for count regression models (see, for example, Haselimashhadi et al. [14], Gelman et al. [15], Fu [16] and Chanialidis et al. [17]). Recently, Chaiprasithikul and Duangsaphon [18,19] proposed Bayesian estimation for censored data and for the zero-inflated and hurdle discrete Weibull regression models via the log-log link function for the parameter that affects zero observations. Uniform non-informative and normal prior distributions were used for the regression coefficients. It was demonstrated that this approach performs well when applied to real datasets and in simulation studies. Alternatively, a regression structure for the discrete Weibull model through the median link function was proposed by Kalktawi [9]. In addition, many works have developed Bayesian estimation procedures for reliability and life testing experiments (see, for example, Ahmadini et al. [20] and Okasha et al. [21]).

    In the present article, Bayesian estimation based on the random walk Metropolis algorithm is examined for the median discrete Weibull regression model under three different prior distributions: uniform non-informative, normal and Laplace. A simulation study is conducted to compare the performance of the three prior distributions and maximum likelihood estimation in both the under-dispersion and over-dispersion cases. Moreover, a real dataset is analyzed to see how the model works in practice.

    The rest of the paper is organized as follows. An overview of the median discrete Weibull regression model is presented in section two, along with the maximum likelihood estimation, Bayesian estimation, simulation study and real data analysis. In section three, the results are discussed. Finally, the paper is concluded in section four.

    The discrete Weibull distribution (type one) and some of its properties were introduced by Nakagawa and Osaki [6]. The cumulative distribution function and the probability mass function of a random variable $Y$ are given by

    $$F_Y(y;q,\beta)=\begin{cases}1-q^{(y+1)^{\beta}}, & y=0,1,\ldots,\\ 0, & \text{otherwise},\end{cases}\tag{1}$$

    and

    $$p_Y(y;q,\beta)=\begin{cases}q^{y^{\beta}}-q^{(y+1)^{\beta}}, & y=0,1,\ldots,\\ 0, & \text{otherwise},\end{cases}\tag{2}$$

    respectively, where $0<q<1$ and $\beta>0$ are the shape parameters. When $y=0$, the parameter $q=1-p_Y(0;q,\beta)$, which is the probability that $Y$ is greater than zero. In other words, when $q$ is small, an excessive-zero case occurs. The parameter $\beta$ indicates the skewness and controls the range of values of $Y$. Moreover, Kalktawi [9] showed through numerical analyses that the parameter $\beta$ reflects the dispersion of the data: if $0<\beta\leq 1$, the data are over-dispersed; if $\beta\geq 2$, the data are under-dispersed; and if $1<\beta<2$, the data can be either over-dispersed or under-dispersed depending on the value of $q$. Thus, the discrete Weibull distribution is suitable for both over-dispersion and under-dispersion. Meanwhile, several distributions arise as special cases of the discrete Weibull distribution: the geometric distribution, the discrete exponential distribution (see Sato et al. [22]), the discrete Rayleigh distribution (see Roy [23]) and the Bernoulli distribution.

    The mean and variance of the discrete Weibull distribution have no closed-form expressions, but numerical approximations can be obtained (see Barbiero [24]). Another property of the discrete Weibull distribution is the quantile function $Q(\tau)$. The $\tau$th $(0<\tau<1)$ quantile is the smallest value of $y$ for which $F(y)\geq\tau$. The quantile $Q(\tau)$ has a closed-form expression given by

    $$Q(\tau)=\left\lceil\left(\frac{\ln(1-\tau)}{\ln q}\right)^{1/\beta}-1\right\rceil.\tag{3}$$

    The quantile formula in Eq (3) can be applied to the median. The median of a discrete distribution can be defined as any value of $y$ for which $F(y)\geq 0.5$, so the median is easily obtained in closed form as

    $$M=\left(\frac{\ln 2}{-\ln q}\right)^{1/\beta}-1.\tag{4}$$
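
    To make Eqs (1)-(4) concrete, the following is a minimal R sketch of the distribution functions; the helper names (dw_cdf, dw_pmf, dw_quantile, dw_median) are illustrative only and are not taken from any particular package.

```r
# Minimal sketch of Eqs (1)-(4); helper names are illustrative only.
dw_cdf <- function(y, q, beta) ifelse(y < 0, 0, 1 - q^((floor(y) + 1)^beta))          # Eq (1)
dw_pmf <- function(y, q, beta) q^(y^beta) - q^((y + 1)^beta)                          # Eq (2), y = 0, 1, ...
dw_quantile <- function(tau, q, beta) ceiling((log(1 - tau) / log(q))^(1 / beta) - 1) # Eq (3)
dw_median <- function(q, beta) (log(2) / (-log(q)))^(1 / beta) - 1                    # Eq (4), continuous form

# Quick checks: the pmf sums to (approximately) one and the quantile inverts the cdf.
sum(dw_pmf(0:500, q = 0.7, beta = 0.9))   # ~1
dw_quantile(0.5, q = 0.7, beta = 0.9)     # smallest y with F(y) >= 0.5
```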

    Regression analysis for count data is a statistical procedure for measuring the relationship between a count variable and one or more explanatory variables. Klakattawi et al. [11] showed how a discrete Weibull regression model can be adapted to address over-dispersion and under-dispersion via the log-log link function applied to the parameter $q$. Moreover, Haselimashhadi et al. [14] applied the logit link function to the parameter $q$, which is commonly used in classification problems for probabilities bounded between zero and one. Furthermore, they allowed $\beta$ to depend on the explanatory variables through the log link function. Unlike generalized linear models, in which the conditional mean is central to the interpretation, the discrete Weibull regression model has the advantage that conditional quantiles can be easily extracted from the fitted model. Moreover, the regression coefficients can be easily interpreted in terms of changes in the conditional median. The median has a closed-form expression and is more appropriate than the mean because of the skewness and outliers commonly found in count data. Kalktawi [9] applied the median link function to the discrete Weibull regression model and used the maximum likelihood method for parameter estimation.

    This study considers $Y_i$, $i=1,2,\ldots,n$, as a count response variable taking only non-negative integer values, with $k$ explanatory variables $x_i=(1,x_{i1},\ldots,x_{ik})$ and a vector of regression coefficients $\alpha=(\alpha_0,\alpha_1,\ldots,\alpha_k)$. It is assumed that the parameter $q_i=q(x_i)$ is related to the $k$ explanatory variables $x_i$ via the median link function as follows:

    $$g(M_i)=x_i\alpha=\alpha_0+\alpha_1 x_{i1}+\ldots+\alpha_k x_{ik}.\tag{5}$$

    In this context, it is useful to assume that

    $$g(M_i)=\ln(M_i+1).\tag{6}$$

    Thus,

    $$M_i+1=e^{x_i\alpha}.\tag{7}$$

    Substituting Eq (7) into Eq (4), the parameter $q_i$ can be obtained as

    $$q(x_i)=e^{-(\ln 2)\,e^{-\beta x_i\alpha}}.\tag{8}$$

    The conditional probability mass function of $Y_i$ given $x_i$ can be written as

    $$p_Y(y_i|x_i)=\begin{cases}e^{-(\ln 2)\left(y_i e^{-x_i\alpha}\right)^{\beta}}-e^{-(\ln 2)\left((y_i+1)e^{-x_i\alpha}\right)^{\beta}}, & y_i=0,1,\ldots,\\ 0, & \text{otherwise}.\end{cases}\tag{9}$$

    In this section, the maximum likelihood estimation for the discrete Weibull regression model is performed by linking only the parameter $q$. The likelihood function of the median discrete Weibull regression model is given by

    $$L(\alpha,\beta|y,x)=\prod_{i=1}^{n}\left(e^{-(\ln 2)\left(y_i e^{-x_i\alpha}\right)^{\beta}}-e^{-(\ln 2)\left((y_i+1)e^{-x_i\alpha}\right)^{\beta}}\right).\tag{10}$$

    The log-likelihood function of the discrete Weibull regression model via the median link function is given by

    $$\ell(\alpha,\beta|y,x)=\sum_{i=1}^{n}\ln\left(e^{-(\ln 2)\left(y_i e^{-x_i\alpha}\right)^{\beta}}-e^{-(\ln 2)\left((y_i+1)e^{-x_i\alpha}\right)^{\beta}}\right).\tag{11}$$

    The maximum likelihood estimates of the parameters are obtained by setting the first partial derivatives of the log-likelihood function with respect to each unknown parameter equal to zero:

    $$\frac{\partial \ell}{\partial \alpha_j}=\sum_{i=1}^{n}\frac{(\ln 2)\,\beta\, x_{ij}\,e^{-x_i\alpha}}{w_i(\alpha,\beta)}\left(y_i\left(y_i e^{-x_i\alpha}\right)^{\beta-1}e^{-(\ln 2)\left(y_i e^{-x_i\alpha}\right)^{\beta}}-(y_i+1)\left((y_i+1)e^{-x_i\alpha}\right)^{\beta-1}e^{-(\ln 2)\left((y_i+1)e^{-x_i\alpha}\right)^{\beta}}\right),\tag{12}$$
    $$\frac{\partial \ell}{\partial \beta}=-\sum_{i=1}^{n}\frac{\ln 2}{w_i(\alpha,\beta)}\left(\left(y_i e^{-x_i\alpha}\right)^{\beta}e^{-(\ln 2)\left(y_i e^{-x_i\alpha}\right)^{\beta}}\left[\ln y_i-x_i\alpha\right]-\left((y_i+1)e^{-x_i\alpha}\right)^{\beta}e^{-(\ln 2)\left((y_i+1)e^{-x_i\alpha}\right)^{\beta}}\left[\ln(y_i+1)-x_i\alpha\right]\right),\tag{13}$$

    where $w_i(\alpha,\beta)=e^{-(\ln 2)\left(y_i e^{-x_i\alpha}\right)^{\beta}}-e^{-(\ln 2)\left((y_i+1)e^{-x_i\alpha}\right)^{\beta}}$.

    The maximum likelihood estimators do not have a closed-form solution because of the complex form of the likelihood equations, and it is very difficult to prove that the solution of the normal equations gives a global maximum. Therefore, the maximum likelihood estimates are obtained numerically using the function optim() from the package stats in the R language, which minimizes the negative log-likelihood function of the median discrete Weibull regression model.
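
    As a concrete illustration of this step, the following is a minimal R sketch (not the authors' code) that evaluates the negative of the log-likelihood in Eq (11) and minimizes it with optim(); the toy response y and design matrix X are placeholders generated only so the snippet runs.

```r
# Negative log-likelihood of the median discrete Weibull regression, Eq (11);
# par = (alpha_0, ..., alpha_k, beta), X has a leading column of ones.
negloglik <- function(par, y, X) {
  k <- ncol(X)
  alpha <- par[1:k]
  beta  <- par[k + 1]
  if (beta <= 0) return(Inf)
  eta <- as.vector(X %*% alpha)                      # x_i alpha
  p <- exp(-log(2) * (y * exp(-eta))^beta) -
       exp(-log(2) * ((y + 1) * exp(-eta))^beta)     # Eq (9)
  -sum(log(pmax(p, .Machine$double.xmin)))           # guard against log(0)
}

# Toy data so the sketch runs; any count response with a design matrix will do.
set.seed(1)
n <- 100
X <- cbind(1, rnorm(n))
y <- rpois(n, lambda = 2)

fit <- optim(par = c(rep(0, ncol(X)), 1), fn = negloglik, y = y, X = X,
             method = "Nelder-Mead", hessian = TRUE)
fit$par   # maximum likelihood estimates (alpha_0, alpha_1, beta)
```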

    Let $I(\alpha,\beta)$ be the $(k+2)\times(k+2)$ observed Fisher information matrix for the unknown parameters, containing the negatives of the second derivatives of the log-likelihood function; hence, the variance-covariance matrix $\Sigma$ is the inverse of the observed Fisher information matrix,

    $$\Sigma=I^{-1}(\alpha,\beta).\tag{14}$$

    The maximum likelihood estimators are substituted, thus resulting in an estimator of $\Sigma$ denoted by $\hat{\Sigma}$,

    $$\hat{\Sigma}=\begin{pmatrix}\hat{\sigma}_{\alpha_0\alpha_0} & \cdots & \hat{\sigma}_{\alpha_0\alpha_k} & \hat{\sigma}_{\alpha_0\beta}\\ \vdots & \ddots & \vdots & \vdots\\ \hat{\sigma}_{\alpha_0\alpha_k} & \cdots & \hat{\sigma}_{\alpha_k\alpha_k} & \hat{\sigma}_{\alpha_k\beta}\\ \hat{\sigma}_{\alpha_0\beta} & \cdots & \hat{\sigma}_{\alpha_k\beta} & \hat{\sigma}_{\beta\beta}\end{pmatrix}.\tag{15}$$

    This matrix can be obtained by inverting the Hessian matrix from the function hessian() in the R language. The Hessian matrix contains the second derivatives of the negative log-likelihood; that is, the Hessian matrix is the observed Fisher information matrix.

    According to the parameter inferences performed using the maximum likelihood method, under some regularity conditions [25], these estimators enjoy standard asymptotic properties. Thus, by the asymptotic normality of the maximum likelihood estimators, the $100(1-\alpha)\%$ confidence intervals for the parameters $\alpha_j$, $j=0,1,2,\ldots,k$, and $\beta$, respectively, are

    $$\hat{\alpha}_j\pm z_{\alpha/2}\sqrt{\hat{\sigma}_{\alpha_j\alpha_j}}\quad\text{and}\quad\hat{\beta}\pm z_{\alpha/2}\sqrt{\hat{\sigma}_{\beta\beta}},\tag{16}$$

    where $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution.
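
    Continuing the sketch above (and assuming the fit object from it), the observed information can be taken from the Hessian returned by optim(..., hessian = TRUE), giving the variance-covariance matrix of Eqs (14)-(15) and the Wald intervals of Eq (16):

```r
# Invert the observed Fisher information (Hessian of the negative log-likelihood)
# to obtain the estimated variance-covariance matrix and 95% Wald intervals.
vcov_hat <- solve(fit$hessian)
se_hat   <- sqrt(diag(vcov_hat))
ci <- cbind(estimate = fit$par,
            lower = fit$par - qnorm(0.975) * se_hat,
            upper = fit$par + qnorm(0.975) * se_hat)
ci
```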

    The Metropolis-Hastings (MH) algorithm is the most popular example of a Markov chain Monte Carlo (MCMC) method for simulating a sample from a probability distribution (the target distribution) from which direct sampling is difficult. The algorithm is similar to the acceptance-rejection method: a proposal (candidate) value is generated from the proposal distribution, and then the proposal value is accepted with an acceptance probability. Moreover, the MH algorithm converges to the target distribution. For more details on the MH algorithm, see Hastings [26] and Gilks et al. [27].

    Given that $y=(y_1,y_2,\ldots,y_n)$ is the vector of observed values of a random sample $Y_1,Y_2,\ldots,Y_n$, let $p(\theta|y)$ be the target distribution, where $\theta$ is the vector of current state values (parameters) and $\theta^{*}$ is the proposal value generated from the proposal distribution $q(\theta^{*}|\theta)$. Then, the proposal value $\theta^{*}$ is accepted with probability $p=\min(1,R_{\theta^{*}})$, where

    $$R_{\theta^{*}}=\frac{p(\theta^{*}|y)}{p(\theta|y)}\times\frac{q(\theta|\theta^{*})}{q(\theta^{*}|\theta)}.$$

    The iterative steps of the MH algorithm can be described as follows

    Step 1: Initialize the parameter θ(0) for the algorithm.

    Step 2: For l=1,2,...,L repeat the following steps:

      a. Generate $\theta^{*}\sim q(\theta^{*}|\theta^{(l-1)})$.

      b. Calculate $p=\min(1,R_{\theta^{*}})$.

      c. Generate $u$ from a uniform distribution, $u\sim U(0,1)$.

        If $u\leq p$, accept $\theta^{*}$ and set $\theta^{(l)}=\theta^{*}$.

        If $u>p$, reject $\theta^{*}$ and set $\theta^{(l)}=\theta^{(l-1)}$.

    A random walk Metropolis algorithm is a special case of the MH algorithm. In the random walk Metropolis algorithm, the proposal distribution is symmetric and depends only on the distance between the current state value and the proposal value; the proposal value $\theta^{*}$ is then accepted with probability $p=\min(1,R_{\theta^{*}})$, where

    $$R_{\theta^{*}}=\frac{p(\theta^{*}|y)}{p(\theta|y)}.$$

    The random walk Metropolis algorithm can be summarized by the steps above, with Step 2a adjusted as follows: generate a random error $\epsilon$ from a multivariate normal distribution with a zero-mean vector and variance-covariance matrix $\Sigma$, and set $\theta^{*}=\theta^{(l-1)}+\epsilon$.
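
    As a toy illustration of these steps (not yet tied to the regression model), the following R sketch runs a random walk Metropolis chain for a standard normal target; working on the log scale avoids numerical underflow.

```r
# Random walk Metropolis for a generic (log) target density.
rw_metropolis <- function(log_target, theta0, n_iter, step_sd) {
  chain <- numeric(n_iter)
  theta <- theta0
  for (l in seq_len(n_iter)) {
    theta_star <- theta + rnorm(1, 0, step_sd)          # symmetric proposal
    logR <- log_target(theta_star) - log_target(theta)  # log of R_{theta*}
    if (log(runif(1)) <= logR) theta <- theta_star       # accept with prob min(1, R)
    chain[l] <- theta                                    # otherwise keep current value
  }
  chain
}

draws <- rw_metropolis(function(x) dnorm(x, log = TRUE), theta0 = 0,
                       n_iter = 5000, step_sd = 1)
mean(draws); sd(draws)   # should be close to 0 and 1
```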

    This section presents the Bayes estimators for the median discrete Weibull regression model based on three schemes of prior distributions, as follows:

    ⅰ) Uniform non-informative prior distribution: If no prior information is available, a default flat prior can be used, so it is natural to consider the uniform non-informative prior distribution. The prior distributions are

    $$\pi(\alpha_j)\propto 1,\ j=0,1,\ldots,k,\quad\text{and}\quad\pi(\beta)\propto 1.$$

    ⅱ) Normal prior distribution: As stated earlier, the possible values of $\alpha_j$ are real numbers, which correspond to the support of a normal distribution; this study therefore selects a normal prior distribution for $\alpha_j$ with hyperparameters $(\mu_{\alpha_j},\sigma^2_{\alpha_j})$, $j=0,1,\ldots,k$. For the parameter $\beta$, this study selects a Gamma prior distribution with hyperparameters $(a,b)$. The prior distributions are

    $$\pi(\alpha_j)=\frac{1}{\sqrt{2\pi\sigma^2_{\alpha_j}}}\,e^{-\frac{1}{2\sigma^2_{\alpha_j}}(\alpha_j-\mu_{\alpha_j})^2},\quad \mu_{\alpha_j}\in\mathbb{R},\ \sigma^2_{\alpha_j}>0,\ j=0,1,2,\ldots,k,$$

    and

    $$\pi(\beta)=\frac{1}{b^{a}\Gamma(a)}\,\beta^{a-1}e^{-\beta/b},\quad a,b>0.$$

    ⅲ) Laplace prior distribution: If prior information is available, an informative prior distribution covering all possible values of the parameter can be used. The possible values of $\alpha_j$ are real numbers, which correspond to the support of a Laplace distribution; this study therefore selects a Laplace prior distribution for $\alpha_j$ with hyperparameters $(0,1/\lambda)$. Similarly, it selects a Gamma prior distribution for $\beta$ with hyperparameters $(a,b)$. The prior distributions are

    $$\pi(\alpha_j)=\frac{\lambda}{2}\,e^{-\lambda|\alpha_j|},\quad \lambda>0,\ j=0,1,2,\ldots,k,$$

    and

    $$\pi(\beta)=\frac{1}{b^{a}\Gamma(a)}\,\beta^{a-1}e^{-\beta/b},\quad a,b>0.$$

    The joint prior distribution of the parameters $\alpha$ and $\beta$ under the independence assumption is

    π(θ)=π(α0)...π(αk)π(β), (17)

    where $\theta=(\alpha_0,\alpha_1,\ldots,\alpha_k,\beta)$.
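
    A minimal R sketch of the three prior schemes on the log scale is given below; the default hyperparameter values (variance $100^2$, $\lambda=0.5$, Gamma shape $a=1$ with scale $b$ to be set by the user, e.g. to the MLE of $\beta$) follow the choices described later in the text and are assumptions made for illustration.

```r
# Log prior density, up to additive constants, for the three schemes above.
log_prior <- function(alpha, beta, type = c("uniform", "normal", "laplace"),
                      sigma2 = 100^2, lambda = 0.5, a = 1, b = 1) {
  type <- match.arg(type)
  if (type == "uniform") return(0)                    # pi(alpha_j) and pi(beta) flat
  lp_alpha <- if (type == "normal") {
    sum(dnorm(alpha, mean = 0, sd = sqrt(sigma2), log = TRUE))
  } else {
    sum(log(lambda / 2) - lambda * abs(alpha))        # Laplace(0, 1/lambda)
  }
  lp_beta <- dgamma(beta, shape = a, scale = b, log = TRUE)   # Gamma(a, b) prior on beta
  lp_alpha + lp_beta
}
```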

    The values of the hyperparameters are generally chosen from the available information in the dataset in order to improve the Bayes estimators. For example, the hyperparameters of $\alpha_j$, $j=0,1,2,\ldots,k$, for the normal prior distribution are fixed at mean zero with a large variance. For the Laplace prior distribution, the hyperparameters of $\alpha_j$ are fixed with some $\lambda>0$. In addition, the hyperparameters of $\beta$ are chosen so that the mean of the Gamma distribution equals the maximum likelihood estimator of $\beta$. The joint posterior density function of the parameters $\alpha$ and $\beta$ can be written as:

    $$p(\theta|y,x)=\frac{L(\theta|y,x)\pi(\theta)}{\int\cdots\int L(\theta|y,x)\pi(\theta)\,d\alpha_0\cdots d\alpha_k\,d\beta}\propto L(\theta|y,x)\pi(\theta),\tag{18}$$

    where L(θ|y,x) is the likelihood function of the median discrete Weibull regression model in Eq (10).

    The Bayes estimator of each parameter under the squared error loss function is the expected value of each parameter under the joint posterior density function. Therefore, the Bayes estimators are given by

    $$\hat{\alpha}_j=\int\cdots\int \alpha_j\, p(\theta|y,x)\,d\alpha_0\cdots d\alpha_k\,d\beta\tag{19}$$

    and

    $$\hat{\beta}=\int\cdots\int \beta\, p(\theta|y,x)\,d\alpha_0\cdots d\alpha_k\,d\beta,\tag{20}$$

    where j=0,1,2,...,k.

    A difficulty in implementing the Bayesian procedure is obtaining the posterior distribution. The process often requires integration that is very difficult to calculate, especially when dealing with complex and high-dimensional models. In such situations, MH algorithms are highly helpful for sampling from the posterior density and generating accurate approximations [26,27].

    Since the integrals in Eqs (19) and (20) do not have closed forms, this study uses the random walk MH algorithm to compute the Bayes estimates. The joint posterior density function of the parameters $\alpha$ and $\beta$ in Eq (18) is taken as the target distribution, where $\theta$ is the current state value and $\theta^{*}$ is the proposal value generated from the proposal distribution $q(\theta^{*}|\theta)$. Then, the proposal value $\theta^{*}$ is accepted with probability $p=\min(1,R_{\theta^{*}})$, where

    $$R_{\theta^{*}}=\frac{L(\theta^{*}|y,x)\pi(\theta^{*})}{L(\theta|y,x)\pi(\theta)}\times\frac{q(\theta|\theta^{*})}{q(\theta^{*}|\theta)}.\tag{21}$$

    For the random walk Metropolis algorithm, the proposal distribution is symmetric and depends only on the distance between the current state value and the proposal value. Then, the proposal value $\theta^{*}$ is accepted with probability $p=\min(1,R_{\theta^{*}})$, where

    $$R_{\theta^{*}}=\frac{L(\theta^{*}|y,x)\pi(\theta^{*})}{L(\theta|y,x)\pi(\theta)}.\tag{22}$$

    The iterative steps of the random walk Metropolis algorithm can be described as follows:

    Step 1: Initialize the parameters $\theta^{(0)}=(\alpha^{(0)},\beta^{(0)})$ for the algorithm using the maximum likelihood estimates (MLE) of the parameters $\theta=(\alpha,\beta)$.

    Step 2: For l=1,2,...,L, repeat the following steps:

    a. Generate a random error vector $\epsilon$ from a multivariate normal distribution with a zero-mean vector and a diagonal variance-covariance matrix whose diagonal elements are those of the inverse of the observed Fisher information matrix; $\epsilon\sim N(\mu=0,\ \Sigma=\mathrm{diag}(I^{-1}(\theta)))$.

    Then, set $\theta^{*}=\theta^{(l-1)}+\epsilon$.

    b. Calculate $p=\min(1,R_{\theta^{*}})$, where $R_{\theta^{*}}=\dfrac{L(\theta^{*}|y,x)\pi(\theta^{*})}{L(\theta|y,x)\pi(\theta)}$.

    c. Generate $u$ from a uniform distribution; $u\sim U(0,1)$.

      If $u\leq p$, accept $\theta^{*}$ and set $\theta^{(l)}=\theta^{*}$.

      If $u>p$, reject $\theta^{*}$ and set $\theta^{(l)}=\theta^{(l-1)}$.

    Step 3: Remove the first $B$ values of the chain as burn-in.

    Step 4: Calculate the estimated values of the Bayes estimators of the parameters $\alpha$ and $\beta$ as the average of the generated values, as given by

    ˆθBayes=1LBLl=B+1θ(t), (23)

    where θ is a parameter in vector θ=(α,β).
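
    The following R sketch ties the pieces together; it reuses negloglik(), log_prior() and the MLE fit from the earlier sketches (so it is a sketch under those assumptions, not the authors' code) and implements Steps 1-4 with, for example, the Laplace prior.

```r
# Unnormalized log posterior: log-likelihood plus log prior, Eq (18).
log_post <- function(par, y, X, prior) {
  k <- length(par) - 1
  -negloglik(par, y, X) + log_prior(par[1:k], par[k + 1], prior)
}

L <- 10000; B <- 1000
n_par   <- length(fit$par)
step_sd <- sqrt(diag(solve(fit$hessian)))        # Step 2a: sds from diag of I^{-1}(theta)
chain   <- matrix(NA_real_, nrow = L, ncol = n_par)
theta   <- fit$par                               # Step 1: initialize at the MLE

for (l in 1:L) {                                 # Step 2
  theta_star <- theta + rnorm(n_par, 0, step_sd) #   a. random walk proposal
  logR <- log_post(theta_star, y, X, "laplace") -
          log_post(theta,      y, X, "laplace")  #   b. log of R_{theta*}, Eq (22)
  if (log(runif(1)) <= logR) theta <- theta_star #   c. accept with prob min(1, R)
  chain[l, ] <- theta
}

theta_bayes <- colMeans(chain[(B + 1):L, ])      # Steps 3-4: burn-in and posterior means, Eq (23)
```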

    The construction of the highest posterior density (HPD) credible intervals of the parameters $\alpha_j$, $j=0,1,2,\ldots,k$, and $\beta$ follows the Monte Carlo procedure. Given an MCMC sample $\theta^{(l)}$, $l=B+1,B+2,\ldots,L$, the HPD interval for $\theta$ can be obtained as follows:

    Step 1: Sort $\theta^{(l)}$, $l=B+1,B+2,\ldots,L$, to obtain the ordered values

    $$\theta_{(1)}\leq\theta_{(2)}\leq\ldots\leq\theta_{(L-B)}.$$

    Step 2: Compute the $100(1-\alpha)\%$ HPD credible intervals

    $$R_{i(L-B)}=\left(\theta_{(i)},\ \theta_{(i+[(1-\alpha)(L-B)])}\right),\quad i=1,2,\ldots,(L-B)-[(1-\alpha)(L-B)],\tag{24}$$

    where $[(1-\alpha)(L-B)]$ is the integer part of $(1-\alpha)(L-B)$ and $\theta$ is a parameter in the vector $\theta=(\alpha,\beta)$.
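
    A minimal R sketch of this construction for a single parameter's chain is shown below; following the usual Chen-Shao approach, the shortest of the candidate intervals $R_i$ is returned as the HPD interval.

```r
# HPD credible interval from an MCMC sample (Steps 1-2 above).
hpd_interval <- function(draws, level = 0.95) {
  draws <- sort(draws)                               # Step 1: order the sampled values
  m <- length(draws)
  w <- floor(level * m)                              # [(1 - alpha)(L - B)]
  lower_idx <- 1:(m - w)
  widths <- draws[lower_idx + w] - draws[lower_idx]  # Step 2: candidate intervals R_i
  i <- which.min(widths)                             # shortest candidate = HPD interval
  c(lower = draws[i], upper = draws[i + w])
}

# e.g., 95% HPD interval for beta from the chain of the previous sketch:
hpd_interval(chain[(B + 1):L, ncol(chain)])
```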

    In this section, a Monte Carlo simulation is conducted to assess and compare the performance of the Bayesian estimation via the random walk Metropolis algorithm for the median discrete Weibull regression model under three different prior distributions for the regression parameters: uniform non-informative prior (Bayes(U)), normal prior (Bayes(N)) and Laplace prior (Bayes(L)). Moreover, the MLE is considered. The selected sample sizes ($n$) are 50, 100 and 200. Three explanatory variables are considered: a standard normal distribution ($x_1\sim N(0,1)$), a uniform distribution between $-0.3$ and $0.3$ ($x_2\sim U(-0.3,0.3)$) and a Bernoulli distribution with probability of success 0.4 ($x_3\sim \mathrm{Ber}(0.4)$). In particular, this study sets the regression parameters to $(\alpha_0,\alpha_1,\alpha_2,\alpha_3)=(1.5,0.4,-0.2,0.8)$ and $\beta=0.9$ for over-dispersion, $\beta=2.5$ for under-dispersion and $\beta=1.6$ for either over-dispersion or under-dispersion, depending on the value of $q$. A sketch of one replication of this design is given below.
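
    The following R sketch generates one replicate from this design using the quantile function of Eq (3) applied to uniform draws (inverse-cdf sampling); it illustrates the setup described above and is not the authors' code.

```r
# One replicate of the simulation design: covariates, q(x_i) from Eq (8),
# and counts drawn by inverting the cdf with Eq (3).
simulate_dw_reg <- function(n, alpha, beta) {
  x1 <- rnorm(n)
  x2 <- runif(n, -0.3, 0.3)
  x3 <- rbinom(n, 1, 0.4)
  X  <- cbind(1, x1, x2, x3)
  q  <- exp(-log(2) * exp(-beta * as.vector(X %*% alpha)))      # Eq (8)
  u  <- runif(n)
  y  <- pmax(ceiling((log(1 - u) / log(q))^(1 / beta) - 1), 0)  # Eq (3) at tau = u
  list(y = y, X = X)
}

set.seed(123)
dat <- simulate_dw_reg(200, alpha = c(1.5, 0.4, -0.2, 0.8), beta = 0.9)  # over-dispersion case
```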

    The parameters are estimated numerically. In this paper, the Nelder-Mead method in the function optim() from the package stats in R is applied to estimate the parameters of the median discrete Weibull regression model. For the Bayesian estimation, the hyperparameters of $\alpha_j$, $j=0,1,2,3$, for the normal prior distribution are fixed at mean zero and variance $100^2$, and the hyperparameters of $\beta$ are fixed at one and the maximum likelihood estimator. In addition, the hyperparameter of $\alpha_j$, $j=0,1,2,3$, for the Laplace prior distribution is fixed at $\lambda=0.5$, and the hyperparameters of $\beta$ are fixed at one and the maximum likelihood estimator. Additionally, this study considers 10,000 iterations of the sampler and uses the first 10% of the chain as burn-in. The simulation study is repeated 1,000 times. The measures of accuracy for the estimators are

    (ⅰ) the estimates of the parameters (Est.)

    $$\mathrm{Est.}=\sum_{l=1}^{1{,}000}\hat{\alpha}_j^{(l)}\Big/1{,}000,\tag{25}$$

    (ⅱ) the mean square error (MSE)

    $$\mathrm{MSE}=\sum_{l=1}^{1{,}000}\left(\hat{\alpha}_j^{(l)}-\alpha_j\right)^2\Big/1{,}000,\tag{26}$$

    (ⅲ) the coverage probability (CP)

    $$\mathrm{CP}=\#\left(\mathrm{LCL}_{\alpha_j}<\alpha_j<\mathrm{UCL}_{\alpha_j}\right)\Big/1{,}000,\tag{27}$$

    (ⅳ) the average length (AL)

    $$\mathrm{AL}=\sum_{l=1}^{1{,}000}\left(\mathrm{UCL}^{(l)}_{\alpha_j}-\mathrm{LCL}^{(l)}_{\alpha_j}\right)\Big/1{,}000,\tag{28}$$

    where $\hat{\alpha}_j^{(l)}$ is the estimate of $\alpha_j$ from the $l$-th replication, $\mathrm{LCL}^{(l)}_{\alpha_j}$ and $\mathrm{UCL}^{(l)}_{\alpha_j}$ are the lower and upper bounds of the 95% confidence interval for $\alpha_j$ from the $l$-th replication, and $\#(\mathrm{LCL}_{\alpha_j}<\alpha_j<\mathrm{UCL}_{\alpha_j})$ is the number of times that $\alpha_j$ falls inside the confidence interval. The same measures of accuracy are applied to the estimators of the parameter $\beta$. Tables 1–3 report the estimates of the parameters (Est.) together with the MSE. In addition, Tables 4–6 report the 95% CP and the AL. A small sketch of these accuracy measures is given below.
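
    A small R helper computing the four measures of Eqs (25)-(28) for one parameter across the replications is sketched below; the arguments est, lcl and ucl are assumed to hold the replicate estimates and the 95% interval bounds.

```r
# Accuracy measures of Eqs (25)-(28) for one parameter across the replications.
accuracy_measures <- function(est, lcl, ucl, true_value) {
  c(Est = mean(est),                                    # Eq (25)
    MSE = mean((est - true_value)^2),                   # Eq (26)
    CP  = mean(lcl < true_value & true_value < ucl),    # Eq (27)
    AL  = mean(ucl - lcl))                              # Eq (28)
}
```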

    Table 1.  Est. and MSE when θ = (1.5, 0.4, -0.2, 0.8, 0.9).
    n parameter MLE Bayes(U) Bayes(N) Bayes(L)
    Est. MSE Est. MSE Est. MSE Est. MSE
    α0 1.5070 0.0578 1.5006 0.0579 1.4884 0.0585 1.4803 0.0578
    α1 0.4035 0.0316 0.4096 0.0308 0.4098 0.0309 0.3961 0.0296
    50 α2 -0.3635 1.0364 -0.1902 0.9833 -0.1870 0.9819 -0.1390 0.5564
    α3 0.7746 0.1228 0.8001 0.1195 0.8005 0.1196 0.7704 0.1110
    β 0.9622 0.0191 0.9274 0.0150 0.9139 0.0142 0.9109 0.0140
    α0 1.4906 0.0273 1.4898 0.0263 1.4836 0.0265 1.4813 0.0268
    α1 0.3995 0.0155 0.4027 0.0149 0.4031 0.0150 0.3964 0.0148
    100 α2 -0.3319 0.5356 -0.1948 0.4429 -0.1953 0.4420 -0.1637 0.3085
    α3 0.8031 0.0583 0.8164 0.0566 0.8169 0.0569 0.7998 0.0559
    β 0.9286 0.0077 0.9125 0.0065 0.9060 0.0063 0.9042 0.0063
    α0 1.49830 0.0149 1.5013 0.0132 1.4983 0.0132 1.4970 0.0132
    α1 0.3990 0.0070 0.4018 0.0065 0.4018 0.0064 0.3986 0.0064
    200 α2 -0.3440 0.3627 -0.1686 0.2088 -0.1687 0.2088 -0.1483 0.1623
    α3 0.7938 0.0284 0.8031 0.0253 0.8032 0.0252 0.7948 0.0251
    β 0.9095 0.0037 0.9058 0.0031 0.9026 0.0030 0.9015 0.0030
    Note: the boldface identifies the smallest MSE for each case.

    Table 2.  Est. and MSE when θ=(1.5,0.4,-0.2,0.8,1.6).
    n parameter MLE Bayes(U) Bayes(N) Bayes(L)
    Est. MSE Est. MSE Est. MSE Est. MSE
    α0 1.5009 0.0187 1.5026 0.0175 1.4962 0.0177 1.4951 0.0176
    α1 0.4054 0.0102 0.4052 0.0097 0.4050 0.0097 0.4003 0.0096
    50 α2 -0.2270 0.3470 -0.1931 0.3101 -0.1933 0.3108 -0.1647 0.2268
    α3 0.7936 0.0398 0.8000 0.0378 0.8006 0.0377 0.7886 0.0373
    β 1.7116 0.0584 1.6525 0.0451 1.6303 0.0420 1.6281 0.0418
    α0 1.4952 0.0086 1.4955 0.0081 1.4922 0.0082 1.4916 0.0082
    α1 0.4012 0.0051 0.4017 0.0048 0.4016 0.0048 0.3995 0.0048
    100 α2 -0.2314 0.1710 -0.2003 0.1391 -0.1982 0.1372 -0.1798 0.1147
    α3 0.8051 0.0193 0.8091 0.0180 0.8089 0.0180 0.8039 0.0179
    β 1.6521 0.0248 1.6248 0.0196 1.6140 0.0189 1.6126 0.0190
    α0 1.5006 0.0047 1.5007 0.0041 1.4991 0.0041 1.4987 0.0041
    α1 0.4012 0.0022 0.4010 0.0021 0.4010 0.0021 0.4001 0.0021
    200 α2 -0.2090 0.0871 -0.1821 0.0661 -0.1823 0.0661 -0.1693 0.0582
    α3 0.7979 0.0093 0.8016 0.0081 0.8017 0.0081 0.7990 0.0080
    β 1.6195 0.0111 1.6090 0.0088 1.6040 0.0086 1.6027 0.0087
    Note: the boldface identifies the smallest MSE for each case.

    Table 3.  Est. and MSE when θ=(1.5,0.4,-0.2,0.8,2.5).
    n parameter MLE Bayes(U) Bayes(N) Bayes(L)
    Est. MSE Est. MSE Est. MSE Est. MSE
    α0 1.4971 0.0089 1.5011 0.0074 1.4972 0.0074 1.4977 0.0075
    α1 0.4025 0.0043 0.4034 0.0041 0.4014 0.0041 0.4035 0.0041
    50 α2 -0.2189 0.1713 -0.1965 0.1302 -0.1782 0.1076 -0.1971 0.1300
    α3 0.7958 0.0188 0.8004 0.0156 0.7952 0.0156 0.8002 0.0157
    β 2.6337 0.1496 2.5842 0.1114 2.5483 0.1030 2.5507 0.1033
    α0 1.4957 0.0045 1.4975 0.0034 1.4975 0.0034 1.4954 0.0034
    α1 0.4016 0.0024 0.4018 0.0020 0.4017 0.0020 0.4008 0.0020
    100 α2 -0.2216 0.1024 -0.2018 0.0599 -0.2015 0.0597 -0.1889 0.0540
    α3 0.8022 0.0105 0.8056 0.0075 0.8057 0.0075 0.8034 0.0075
    β 2.5486 0.0729 2.5403 0.0471 2.5239 0.0454 2.5225 0.0454
    α0 1.4987 0.0024 1.5007 0.0018 1.4998 0.0018 1.4996 0.0018
    α1 0.4002 0.0012 0.4006 0.0009 0.4005 0.0009 0.4001 0.0009
    200 α2 -0.2118 0.0703 -0.1947 0.0410 -0.1949 0.0409 -0.1868 0.0393
    α3 0.7987 0.0052 0.8016 0.0037 0.8014 0.0037 0.8004 0.0036
    β 2.5099 0.0401 2.5144 0.0239 2.5064 0.0236 2.5054 0.0236
    Note: the boldface identifies the smallest MSE for each case.

    Table 4.  CP and AL when θ = (1.5, 0.4, -0.2, 0.8, 0.9).
    n parameter MLE Bayes(U) Bayes(N) Bayes(L)
    CP AL CP AL CP AL CP AL
    α0 0.918 0.8825 0.939 0.9398 0.943 0.9523 0.948 0.9497
    α1 0.925 0.6384 0.952 0.6802 0.956 0.6906 0.958 0.6776
    50 α2 0.932 3.6636 0.943 3.8821 0.953 3.9383 0.975 3.3774
    α3 0.923 1.2656 0.939 1.3614 0.941 1.3795 0.947 1.3446
    β 0.933 0.4552 0.932 0.4476 0.938 0.4428 0.938 0.4412
    α0 0.940 0.6229 0.952 0.6465 0.950 0.6499 0.952 0.6510
    α1 0.918 0.4443 0.931 0.4573 0.928 0.4603 0.925 0.4584
    100 α2 0.921 2.5582 0.944 2.6243 0.951 2.6481 0.964 2.4005
    α3 0.935 0.8935 0.948 0.9242 0.945 0.9306 0.949 0.9235
    β 0.938 0.3066 0.948 0.3026 0.947 0.3013 0.945 0.3010
    α0 0.931 0.4491 0.949 0.4521 0.951 0.4538 0.946 0.4542
    α1 0.947 0.3138 0.956 0.3166 0.951 0.3179 0.954 0.3170
    200 α2 0.871 1.8096 0.948 1.8175 0.946 1.8231 0.959 1.7103
    α3 0.939 0.6332 0.958 0.6405 0.958 0.6438 0.958 0.6430
    β 0.920 0.2115 0.937 0.2098 0.936 0.2094 0.936 0.2092

    Table 5.  CP and AL when θ=(1.5,0.4,-0.2,0.8,1.6).
    n parameter MLE Bayes(U) Bayes(N) Bayes(L)
    CP AL CP AL CP AL CP AL
    α0 0.915 0.4946 0.938 0.5214 0.944 0.5286 0.946 0.5278
    α1 0.926 0.3605 0.944 0.3820 0.953 0.3863 0.950 0.3836
    50 α2 0.917 2.0497 0.950 2.1733 0.951 2.2057 0.971 2.0250
    α3 0.926 0.7132 0.939 0.7624 0.946 0.7728 0.945 0.7663
    β 0.926 0.7826 0.940 0.7683 0.940 0.7569 0.941 0.7562
    α0 0.935 0.3518 0.949 0.3601 0.946 0.3618 0.948 0.3624
    α1 0.912 0.2509 0.923 0.2575 0.932 0.2588 0.931 0.2585
    100 α2 0.921 1.4423 0.941 1.4748 0.946 1.4822 0.958 1.4122
    α3 0.936 0.5041 0.945 0.5194 0.942 0.5211 0.948 0.5199
    β 0.930 0.5268 0.947 0.5210 0.953 0.5179 0.951 0.5171
    α0 0.936 0.2504 0.955 0.2532 0.950 0.2534 0.952 0.2536
    α1 0.945 0.1769 0.955 0.1790 0.954 0.1794 0.953 0.1792
    200 α2 0.929 1.0147 0.950 1.0240 0.945 1.0292 0.959 0.9931
    α3 0.944 0.3567 0.958 0.3610 0.960 0.3619 0.962 0.3623
    β 0.917 0.3627 0.943 0.3617 0.941 0.3596 0.939 0.3594

    Table 6.  CP and AL when θ=(1.5,0.4,-0.2,0.8,2.5).
    n parameter MLE Bayes(U) Bayes(N) Bayes(L)
    CP AL CP AL CP AL CP AL
    α0 0.919 0.3255 0.948 0.3348 0.944 0.3375 0.947 0.3381
    α1 0.928 0.2383 0.937 0.2466 0.947 0.2478 0.944 0.2496
    50 α2 0.911 1.3709 0.944 1.4014 0.961 1.3439 0.944 1.4162
    α3 0.920 0.4746 0.943 0.4896 0.936 0.4926 0.939 0.4948
    β 0.924 1.2153 0.941 1.1989 0.943 1.1721 0.941 1.1712
    α0 0.926 0.2311 0.946 0.2317 0.950 0.2327 0.946 0.2331
    α1 0.915 0.1657 0.929 0.1665 0.929 0.1672 0.932 0.1673
    100 α2 0.899 0.9562 0.943 0.9509 0.943 0.9554 0.946 0.9264
    α3 0.924 0.3337 0.941 0.3343 0.944 0.3360 0.946 0.3356
    β 0.904 0.8210 0.946 0.8140 0.949 0.8042 0.947 0.8039
    α0 0.929 0.1630 0.955 0.1623 0.958 0.1623 0.957 0.1625
    α1 0.921 0.1158 0.946 0.1153 0.948 0.1156 0.948 0.1157
    200 α2 0.907 0.6652 0.946 0.6583 0.948 0.6591 0.949 0.6471
    α3 0.934 0.2324 0.951 0.2318 0.952 0.2323 0.952 0.2324
    β 0.896 0.5652 0.942 0.5617 0.942 0.5584 0.940 0.5588


    In this section, the median discrete Weibull regression is applied to a real dataset that exhibits over-dispersion (see Kalktawi [9]). The data are available in the "COUNT" package in R from Hosmer and Lemeshow [28] and represent the number of visits to a doctor by pregnant women in the first three months of their pregnancies, with 189 observations. The response variable is the number of physician visits in the first trimester, and the three explanatory variables are history of mother smoking (1 = history of mother smoking; 0 = mother nonsmoker) ($x_1$), weight (lbs) at last menstrual period ($x_2$) and age of mother ($x_3$). For fitting the discrete Weibull distribution to the response variable, the Kolmogorov-Smirnov statistic is 0.0985, which is less than the critical value of 0.0989. Thus, the data can be modeled by the discrete Weibull distribution. Moreover, the data are modeled by the Poisson, negative binomial and discrete Weibull distributions. The Akaike information criterion (AIC) values from the Poisson, negative binomial and discrete Weibull distributions are 476.59, 466.85 and 466.84, respectively. Additionally, the mean and variance of the data are 0.7937 and 1.1221, respectively, which indicates an over-dispersion case.

    We estimate the parameters and construct the 95% confidence intervals via the maximum likelihood estimation method. To demonstrate how the proposed Bayesian method under the three prior distributions can be used in practice, this study calculates the parameter estimates and the 95% HPD intervals of the parameters with $L=10{,}000$ iterations and 10% of the chain as burn-in; $B=1{,}000$. In addition, three information criteria, namely the AIC, the Bayesian information criterion (BIC) and the deviance information criterion (DIC) (see Kalktawi [9] and Haselimashhadi et al. [14]), are applied to compare models with different sets of parameters, namely the model with the three explanatory variables and the model with only the significant parameters. All results are reported in Table 7. In addition, the traceplot, autocorrelation of the sampled values and posterior densities of the significant independent variables are shown in Figure 1.

    Table 7.  Parameter estimates, the 95% confidence intervals, and the three information criteria.
    Models Parameters MLE Bayes(U) Bayes(N) Bayes(L)
    x1,x2,x3 α0 -1.0983* -1.1566* -1.667* -1.0553*
    (-1.8269, -0.3697) (-2.0877, -0.4473) (-1.8724, -0.4639) (-1.7395, -0.3875)
    α1 -0.6240 -0.6690 -0.6980 -0.0478
    (-0.3173, 0.1924) (-0.3491, 0.1849) (-0.3169, 0.1738) (-0.3182, 0.1795)
    α2 0.0029 0.0031 0.0030 0.0029
    (-0.0009, 0.0069) (-0.0008, 0.0072) (-0.0007, 0.0070) (-0.0009, 0.0074)
    α3 0.0295* 0.0306* 0.0314* 0.0273*
    (0.0058, 0.0532) (0.0055, 0.0585) (0.0083, 0.0552) (0.0057, 0.0490)
    β 1.19312* 1.1676* 1.1564* 1.1717*
    (0.9986, 1.3875) (0.9897, 1.3639) (0.9684, 1.3795) (0.9848, 1.3579)
    AIC 463.1398 463.2289 463.3156 463.2471
    BIC 479.3486 479.4376 479.5244 479.4558
    DIC - 463.3949 463.0142 462.3886
    x3 α0 -0.7974* -0.8387* -0.7974* -0.7791*
    (-1.3929, -0.2019) (-1.5015, -0.2737) (-1.3929, -0.2019) (-1.4164, -0.1910)
    α3 0.0321* 0.0333* 0.0321* 0.0310*
    (0.0083, 0.0558) (0.0111, 0.0574) (0.0083, 0.0558) (0.0083, 0.0555)
    β 1.1733* 1.1587* 1.1733* 1.1545*
    (0.9850, 1.3617) (0.9676, 1.3525) (0.9850, 1.3617) (0.9532, 1.3618)
    AIC 459.5681** 461.6021** 461.7598** 461.6235**
    BIC 466.0515** 471.3273** 471.3521** 471.3487**
    DIC - 461.5384** 461.7598** 461.7922**
    Note: (*) denotes that the 95% confidence interval does not contain zero (statistically significant) and (**) denotes the minimum value of each information criterion between the two models (the model with $x_1,x_2,x_3$ and the model with $x_3$ only).

    Figure 1.  Traceplot, autocorrelation and posterior densities for regression coefficient of Bayes(L).

    An inspection of Tables 1–3 shows that the estimates of the parameters (Est.) obtained by all methods in the simulation study are close to the true parameter values. Moreover, all of the estimators behave monotonically with respect to the MSE; namely, as $n$ increases, the estimated MSE values decrease. The Bayes estimators have smaller MSE than the MLE. The MSE of the Bayes(L) outperforms the other methods in almost all situations. Additionally, the MSEs of all three Bayesian methods behave very similarly when $n=200$. Conversely, the MLE presents the highest MSE but performs satisfactorily as the sample size increases. In addition, note that the MSE of the estimators of $\alpha_2$ is very high, which may be attributed to the effect of $x_2$ or the high variance of the estimators of $\alpha_2$; thus, the explanatory variables affect the MSE of the estimators. However, when $n$ is large enough and $\beta$ increases, the MSE of the estimators decreases and is no longer very high.

    Tables 4–6 show that, as the sample size increases, the CP of all methods is generally close to the nominal confidence level. The CPs obtained using the three Bayesian methods are closer to the nominal level than those of the MLE method. Additionally, the CPs for the three prior distributions of the Bayesian method behave remarkably similarly. Regarding the AL, as the sample size increases, the AL of the 95% confidence intervals decreases for all methods. For the cases $\beta=0.9$ and $\beta=1.6$, the AL based on the Bayes(L) is the shortest in almost all situations once the CP is taken into account, while the Bayes(U) is the shortest in the case of $\beta=2.5$. Although the AL for the MLE method is the shortest in some situations, its CP is farthest from the nominal confidence level. Additionally, the AL for the estimator of $\alpha_2$ is quite wide, which explains the extremely high MSE values of the estimator of $\alpha_2$.

    Table 7 shows the significant explanatory variables selected from the three explanatory variables. Only the explanatory variable $x_3$ (age of mother) is significant under all methods. Comparing models with different sets of parameters suggests that the model with only $x_3$ provides a better fit than the model with all three explanatory variables, according to the AIC, BIC and DIC. The DIC values of the three Bayesian methods for each of the two models are remarkably similar, as are the AIC and BIC. This finding corresponds to the results of the simulation study in the case $n=200$. Figure 1 shows the traceplot, autocorrelation of the sampled values and posterior density for the regression coefficient $\alpha_3$ based on the Bayes(L). It can be seen that the trace plot shows adequate convergence. Moreover, it is clear that the sampled values are well mixed and exhibit adequate stability in the autocorrelation.

    This paper introduced Bayesian estimation for the discrete Weibull regression model via the median link function. The Bayesian approach was considered under three different prior distributions: uniform non-informative prior, normal prior and Laplace prior. A random walk Metropolis procedure was also proposed to compute the Bayes estimates of the unknown parameters, and maximum likelihood estimation was considered for comparison. The performance of these methods was compared in a Monte Carlo simulation based on the MSE and CP criteria. These criteria were calculated for different sample sizes under both under-dispersion and over-dispersion, and the methods were further illustrated on a real dataset available in the literature, comparing models with different sets of parameters via the Akaike information criterion, the Bayesian information criterion and the deviance information criterion. Based on the MSE criterion, Bayesian estimation using the Laplace prior distribution performs better than the other approaches. Additionally, the three Bayesian methods behaved very similarly for large sample sizes. The estimated coverage probabilities of the three Bayesian approaches were considered as the criterion of a good confidence interval. In addition, the results of the real data analysis coincided with those of the simulation study. Overall, Bayesian estimation using the Laplace prior distribution outperforms the other methods for parameter estimation. However, Bayesian estimation using any of the three prior distributions can be an effective alternative for this model.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are grateful to the Editor-in-Chief and the anonymous referee for the insightful and constructive comments that improved this paper.

    All authors declare no conflicts of interest in this paper.



    [1] E. Balc, Predation fear and its carry-over effect in a fractional order prey-predator model with prey refuge, Chaos Soliton. Fract., 175 (2023), 114016. https://doi.org/10.1016/j.chaos.2023.114016 doi: 10.1016/j.chaos.2023.114016
    [2] S. Pandey, U. Ghosh, D. Das, S. Chakraborty, A. Sarkar, Rich dynamics of a delay-induced stage-structure prey-predator model with cooperative behaviour in both species and the impact of prey refuge, Math. Comput. Simulat., 216 (2024), 49–76. https://doi.org/10.1016/j.matcom.2023.09.002 doi: 10.1016/j.matcom.2023.09.002
    [3] F. Rao, Y. Kang, Dynamics of a stochastic prey-predator system with prey refuge, predation fear and its carry-over effects, Chaos Soliton. Fract., 175 (2023), 113935. https://doi.org/10.1016/j.chaos.2023.113935 doi: 10.1016/j.chaos.2023.113935
    [4] K. Sarkar, S. Khajanchi, Spatiotemporal dynamics of a predator-prey system with fear effect, J. Franklin Inst., 360 (2023), 7380–7414. https://doi.org/10.1016/j.jfranklin.2023.05.034 doi: 10.1016/j.jfranklin.2023.05.034
    [5] J. L. Xiao, Y. H. Xia, Spatiotemporal dynamics in a diffusive predator-prey model with multiple Allee effect and herd behavior, J. Math. Anal. Appl., 529 (2024), 127569. https://doi.org/10.1016/j.jmaa.2023.127569 doi: 10.1016/j.jmaa.2023.127569
    [6] P. Mishra, D. Wrzosek, Pursuit-evasion dynamics for Bazykin-type predator-prey model with indirect predator taxis, J. Diff. Equat., 361 (2023), 391–416. https://doi.org/10.1016/j.jde.2023.02.063 doi: 10.1016/j.jde.2023.02.063
    [7] W. Choi, K. Kim, I. Ahn, Predator-prey models with prey-dependent diffusion on predators in spatially heterogeneous habitat, J. Math. Anal. Appl., 525 (2023), 127130. https://doi.org/10.1016/j.jmaa.2023.127130 doi: 10.1016/j.jmaa.2023.127130
    [8] Q. Li, Y. Y Zhang, Y. N. Xiao, Canard phenomena for a slow-fast predator-prey system with group defense of the prey, J. Math. Anal. Appl., 527 (2023), 127418. https://doi.org/10.1016/j.jmaa.2023.127418 doi: 10.1016/j.jmaa.2023.127418
    [9] D. Sen, S. Petrovskii, S. Ghorai, M. Banerjee, Rich bifurcation structure of prey-predator model induced by the Allee effect in the growth of generalist predator, Int. J. Bifurcat. Chaos, 30 (2020), 2050084. https://doi.org/10.1142/S0218127420500844 doi: 10.1142/S0218127420500844
    [10] S. Dey, M. Banerjee, S. Ghorai, Analytical detection of stationary turing pattern in a predator-prey system with generalist predator, Math. Model. Nat. Phenom., 17 (2022), 33. https://doi.org/10.1051/mmnp/2022032 doi: 10.1051/mmnp/2022032
    [11] J. Roy, M. Banerjee, Global stability of a predator-prey model with generalist predator, Appl. Math. Lett., 142 (2023), 108659. https://doi.org/10.1016/j.aml.2023.108659 doi: 10.1016/j.aml.2023.108659
    [12] R. Xu, Global stability and Hopf bifurcation of a predator-prey model with stage structure and delayed predator response, Nonlinear Dynam., 67 (2012), 1683–1693. https://doi.org/10.1007/s11071-011-0096-1 doi: 10.1007/s11071-011-0096-1
    [13] C. J. Xu, D. Mu, Z. X. Liu, Y. C. Pang, C. Aouiti, O. Tunc, et al., Bifurcation dynamics and control mechanism of a fractional-order delayed Brusselator chemical reaction model, MATCH-Commun. Math. Co., 89 (2023), 73–106. https://doi.org/10.46793/match.89-1.073X doi: 10.46793/match.89-1.073X
    [14] C. J. Xu, C. Aouiti, Z. X. Liu, P. L. Li, L. Y. Yao, Bifurcation caused by delay in a fractional-order coupled Oregonator model in chemistry, MATCH-Commun. Math. Co., 88 (2022), 371–396. https://doi.org/10.46793/match.88-2.371X doi: 10.46793/match.88-2.371X
    [15] C. J. Xu, W. Zhang, C. Aouiti, Z. X. Liu, P. L. Li, L. Y. Yao, Bifurcation dynamics in a fractional-order Oregonator model including time delay, MATCH-Commun. Math. Co., 87 (2022), 397–414. https://doi.org/10.46793/match.87-2.397X doi: 10.46793/match.87-2.397X
    [16] Q. Y. Cui, C. J. Xu, W. Ou, Y. C. Pang, Z. X. Liu, P. L. Li, et al., Bifurcation behavior and hybrid controller design of a 2D Lotka-Volterra commensal symbiosis system accompanying delay, Mathematics, 11 (2023), 4808. https://doi.org/10.3390/math11234808 doi: 10.3390/math11234808
    [17] C. J. Xu, X. H. Cui, P. L. Li, J. L. Yan, L. Y. Yao, Exploration on dynamics in a discrete predator-prey competitive model involving feedback controls, J. Biol. Dynam., 17 (2023), 2220349. https://doi.org/10.1080/17513758.2023.2220349 doi: 10.1080/17513758.2023.2220349
    [18] D. Mu, C. J. Xu, Z. X. Liu, Y. C. Pang, Further insight into bifurcation and hybrid control tactics of a chlorine dioxide-iodine-malonic acid chemical reaction model incorporating delays, MATCH Commun. Math. Comput. Chem., 89 (2023), 529–566. https://doi.org/10.46793/match.89-3.529M doi: 10.46793/match.89-3.529M
    [19] P. L. Li, X. Q. Peng, C. J. Xu, L. Q. Han, S. R. Shi, Novel extended mixed controller design for bifurcation control of fractional-order Myc/E2F/miR-17-92 network model concerning delay, Math. Method. Appl. Sci., 46 (2023), 18878–18898. https://doi.org/10.1002/mma.9597 doi: 10.1002/mma.9597
    [20] P. L. Li, R. Gao, C. J. Xu, J. W. Shen, S. Ahmad, Y. Li, Exploring the impact of delay on Hopf bifurcation of a type of BAM neural network models concerning three nonidentical delays, Neural Process Lett., 55 (2023), 11595–11635. https://doi.org/10.1007/s11063-023-11392-0 doi: 10.1007/s11063-023-11392-0
    [21] S. Li, C. D. Huang, X. Y. Song, Detection of Hopf bifurcations induced by pregnancy and maturation delays in a spatial predator-prey model via crossing curves method, Chaos Soliton. Fract., 175 (2023), 114012. https://doi.org/10.1016/j.chaos.2023.114012 doi: 10.1016/j.chaos.2023.114012
    [22] X. Z. Feng, X. Liu, C. Sun, Y. L. Jiang, Stability and Hopf bifurcation of a modified Leslie-Gower predator-prey model with Smith growth rate and B-D functional response, Chaos Soliton. Fract., 174 (2023), 113794. https://doi.org/10.1016/j.chaos.2023.113794 doi: 10.1016/j.chaos.2023.113794
    [23] Z. Z. Zhang, H. Z. Yang, Hybrid control of Hopf bifurcation in a two prey one predator system with time delay, In: Proceeding of the 33rd Chinese Control Conference, IEEE, Nanjing, China, 2014, 6895–6900. https://doi.org/10.1109/chicc.2014.6896136
    [24] L. P. Zhang, H. N. Wang, M. Xu, Hybrid control of bifurcation in a predator-prey system with three delays, Acta Phys. Sin., 60 (2011), 010506. https://doi.org/10.7498/aps.60.010506 doi: 10.7498/aps.60.010506
    [25] Z. Liu, K. W. Chuang, Hybrid control of bifurcation in continuous nonlinear dynamical systems, Int. J. Bifurcat. Chaos, 15 (2005), 1895–3903. https://doi.org/10.1142/S0218127405014374 doi: 10.1142/S0218127405014374
    [26] J. Alidousti, Stability and bifurcation analysis for a fractional prey-predator scavenger model, Appl. Math. Model., 81 (2020), 342–355. https://doi.org/10.1016/j.apm.2019.11.025 doi: 10.1016/j.apm.2019.11.025
    [27] W. G. Zhou, C. D. Huang, M. Xiao, J. D. Cao, Hybrid tactics for bifurcation control in a fractional-order delayed predator-prey model, Physica A, 515 (2019), 183–191. https://doi.org/10.1016/j.physa.2018.09.185 doi: 10.1016/j.physa.2018.09.185
    [28] Y. Q. Zhang, P. L. Li, C. J. Xu, X. Q. Peng, R. Qiao, Investigating the effects of a fractional operator on the evolution of the ENSO model: Bifurcations, stability and numerical analysis, Fractal Fract., 7 (2023), 602. https://doi.org/10.3390/fractalfract7080602 doi: 10.3390/fractalfract7080602
    [29] P. L. Li, Y. J. Lu, C. J. Xu, J. Ren, Insight into Hopf bifurcation and control methods in fractional order BAM neural networks incorporating symmetric structure and delay, Cogn. Comput., 15 (2023), 1825–1867. https://doi.org/10.1007/s12559-023-10155-2 doi: 10.1007/s12559-023-10155-2
    [30] C. J. Xu, M. Farman, Dynamical transmission and mathematical analysis of Ebola virus using a constant proportional operator with a power law kernel, Fractals Fract., 7 (2023), 706. https://doi.org/10.3390/fractalfract7100706 doi: 10.3390/fractalfract7100706
    [31] C. J. Xu, Y. Y. Zhao, J. T. Lin, Y. C. Pang, Z. X. Liu, J. W. Shen, et al., Mathematical exploration on control of bifurcation for a plankton-oxygen dynamical model owning delay, J. Math. Chem., 2023, 1–31. https://doi.org/10.1007/s10910-023-01543-y doi: 10.1007/s10910-023-01543-y
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
